Overview

The Beijing and Vancouver labs of Microsoft Research Asia (MSRA) have jointly launched the 星跃计划 program, created to give outstanding students the opportunity to work with multiple MSRA research teams on real, cutting-edge problems. Since the program's launch in January 2021, it has drawn enthusiastic applications and attention from students in China and abroad. Participants do impactful research in an international, diverse, and inclusive environment under the guidance of top researchers.

Program Highlights

  • Conduct research under the joint guidance of top researchers from multiple Microsoft Research labs, and engage deeply with researchers from different backgrounds
  • Focus on real, cutting-edge problems from industry, aiming for results with impact on both academia and industry
  • Experience Microsoft's international, open research atmosphere and its diverse, inclusive culture through offline and online collaboration

Eligibility

  • Current master's or Ph.D. students (see the specific project requirements); students with deferred offers or on a gap year
  • Able to work full-time in China for 6-12 months
  • See the project descriptions below for detailed requirements

How to Apply

Submit your application materials online: https://jsj.top/f/LwjRie

Attach your Chinese and English resumes, merged into a single PDF file named in the format Name_Resume.

The program's open projects will be updated on an ongoing basis, so check back for the latest openings. Join 星跃计划, leap across the ocean with us, and explore more of what research can offer!

For any questions, please email: msrainterncomm@microsoft.com

Project Descriptions

At a Glance

The 星跃计划 program comprises 11 joint research projects, covering natural language processing, data intelligence, computer systems and networking, intelligent cloud, image resizing, computer vision, behavior detection, social computing, and related areas.

Each project closes to applications once it is filled. The projects currently recruiting are:

  • Large Language Models for Real-World Optimization
  • LLM-empowered knowledge production and consumption

Projects

Large Language Models for Real-World Optimization

【Introduction】

Join our pioneering research team to work on harnessing the power of Large Language Models (LLMs) to address complex real-world optimization problems requiring long-term planning and dynamic information gathering from environments. Traditional optimization techniques often struggle with the high dimensionality, dynamic nature, and intricate dependencies inherent in real-world settings.

Addressing these challenges, our research aims to push the boundaries of LLM capabilities to automate decision-making processes, improve reliability, and provide innovative solutions to both existing and classical optimization challenges. The successful candidate will have the opportunity to collaborate with world-class researchers and engineers from diverse backgrounds and areas of expertise, access state-of-the-art computational resources, and contribute to the advancement of LLM research and its impact on real-world optimization problems.
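
To make the setting concrete, below is a toy sketch (in Python) of the closed loop this research targets: a model proposes candidate solutions, the environment scores them, and feedback accumulates for the next proposal. The `llm_propose` function is a hypothetical stand-in for a real LLM call, not an actual project API.

```python
# Toy LLM-in-the-loop optimization skeleton. `llm_propose` is a
# hypothetical stand-in for an LLM call; the objective is a simple
# 1-D function so the loop runs end to end.

def llm_propose(history):
    """Hypothetical "LLM" step: propose a candidate given past feedback.
    A real LLM would reason over a textual rendering of the history."""
    if not history:
        return 0.0
    best_x, _ = max(history, key=lambda h: h[1])
    return best_x + 0.5  # naive local move standing in for LLM reasoning

def objective(x):
    return -(x - 3.0) ** 2  # unknown to the proposer; peak at x = 3

history = []  # dynamically gathered information from the environment
for _ in range(10):
    x = llm_propose(history)
    history.append((x, objective(x)))  # environment feedback

print(max(history, key=lambda h: h[1]))  # best (candidate, score) found
```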

【Responsibilities】

  • Conduct cutting-edge research on the application of LLMs to real-world optimization problems.
  • Develop and implement novel methodologies to improve the performance of LLMs in dynamic and complex environments.
  • Collaborate with cross-functional teams to integrate advanced AI models with traditional optimization techniques.
  • Design experiments and simulations to test new hypotheses and validate the effectiveness of LLM-driven solutions.
  • Publish research findings in top-tier conferences and journals, and present results to both technical and non-technical audiences.

【Required Qualifications】

  • Currently enrolled in a master's or Ph.D. program in CS, EE, ML, mathematics, or a related field.
  • Strong analytical and problem-solving skills.
  • Proficiency in Python, C/C++, and other programming languages.
  • Experience with Linux and development on Linux platforms.
  • Excellent communication and presentation skills.
  • Ability to work independently and collaboratively in a dynamic research environment.

【Preferred Qualifications】

  • Familiarity with optimization techniques and models.
  • Experience with machine learning frameworks (e.g., PyTorch, TensorFlow).
  • Knowledge of multi-agent systems and Active Learning.
  • Experience with LLMs and their applications in dynamic and complex environments.
  • Strong publication record in top-tier conferences and journals.
  • Active contribution to open-source projects on platforms like GitHub.

【How to Apply】

Interested candidates should submit their resume along with a cover letter detailing their relevant experience and research interests.

**Join us and contribute to groundbreaking research that integrates advanced AI models with optimization techniques, driving impactful decision-making across various domains.**

LLM-empowered knowledge production and consumption

【Introduction】

Knowledge is essential for identifying issues, accelerating remediation, and enhancing existing infrastructure in large-scale systems. However, a knowledge gap exists because the vast infrastructure data is not easily consumable: it is immense and dynamically evolving. Large-language and multi-modal models have created opportunities to better support knowledge production and consumption, from gleaning new insights to extracting entities and generating signatures from unstructured data at scale, as demonstrated in recent research. In this project, we aim to leverage these models to automate and accelerate raw data processing, build knowledge graphs, and connect them to gain a deeper understanding of system infrastructure.
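
As a rough illustration of this pipeline, the sketch below builds a small knowledge graph from raw log lines and then queries it. A regex stands in for the LLM-based entity and relation extraction described above, and the log format is invented for the example.

```python
# Minimal sketch: raw infrastructure data -> knowledge graph -> query.
# Regex extraction stands in for LLM-based extraction; the log format
# and entity names are invented for illustration.
import re
import networkx as nx

logs = [
    "vm-42 hosted_on node-7",
    "node-7 member_of cluster-east",
    "svc-web depends_on vm-42",
]

graph = nx.DiGraph()
for line in logs:
    m = re.match(r"(\S+)\s+(\S+)\s+(\S+)", line)
    if m:
        src, relation, dst = m.groups()
        graph.add_edge(src, dst, relation=relation)

# Connecting the knowledge: everything svc-web transitively depends on.
print(nx.descendants(graph, "svc-web"))  # {'vm-42', 'node-7', 'cluster-east'}
```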

We'll work with scientists at the forefront of systems and networking research, leveraging world-leading platforms to solve the challenging problems in this area. The current project team members, from both the MSRA Vancouver and MSR Redmond labs, have rich experience contributing to both industry and the academic community by transferring innovations into production systems and publishing at top conferences.

【Qualifications】

  • Major in computer science, electrical engineering, or equivalent field
  • Solid knowledge of data structure/algorithm
  • Familiarity with Python, C/C++, and other programming languages; familiar with Linux and development on Linux platforms
  • Good communication and presentation skills
  • Good English reading and writing ability; capable of implementing systems based on academic papers written in English and of writing English documents

【Preferred Qualifications】

  • Rich knowledge of machine learning and machine learning models
  • Basic security knowledge and participation in at least one security-related project
  • Familiarity with engineering processes is a strong plus
  • Active on GitHub; has used or contributed to well-known open-source projects

Small Language Model Alignment

【Introduction】

We are developing a suite of smaller language models (SLMs) that are similar to LLMs but use less computing power. This project focuses on studying advanced training techniques that can better align the capabilities of SLMs with various aspects of different product scenarios, including but not limited to instruction following and task planning.

【Research Areas】

Language Model, Machine Learning

【Qualifications】

  • Passionate about research in language models
  • Solid coding and communication skills
  • Prior experience in language model and/or machine learning research is preferred

Smarter, Faster, Cheaper: Improving Large Language Models by Compressing Prompts and Responses

【Introduction】

Large language models (LLMs) have revolutionized various fields with technologies like Retrieval Augmented Generation (RAG), In-Context Learning (ICL), Chain of Thought (CoT), and agent-based models. These advancements, while groundbreaking, often result in lengthy prompts that lead to increased computational and financial costs, higher latency, and added redundancy. Moreover, the intrinsic position bias of LLMs and the redundancy within prompts degrade their performance, leading to the "lost in the middle" issue.

Previous studies have introduced prompt compression methods such as LLMLingua and LongLLMLingua, which address these issues and show promising results in generic scenarios. This project aims to explore research questions around complex scenarios, such as agent-related prompts and the compression of LLM responses. Furthermore, it seeks to investigate the effects of such compression techniques on adversarial attacks, security, and other critical aspects.
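
For reference, a minimal usage sketch of the llmlingua package (linked under Research Areas below). The interface follows its public documentation, but argument names may differ across versions, and the default compressor downloads a sizable model.

```python
# Minimal prompt-compression sketch with the llmlingua package.
# Interface per its public README; verify argument names against the
# installed version. The documents and question are placeholders.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # downloads a default compression model
result = compressor.compress_prompt(
    ["<long retrieved document 1>", "<long retrieved document 2>"],
    instruction="Answer the question based on the provided context.",
    question="What caused the outage?",
    target_token=200,  # token budget for the compressed prompt
)
print(result["compressed_prompt"])  # shorter prompt to send to the LLM
```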

【Research Areas】

Large Language Models, Agent-based Models, Efficient Methods

https://www.microsoft.com/en-us/research/project/llmlingua/overview/

【Qualifications】

  • Passionate about research in LLMs and AGI
  • Solid coding and communication skills
  • Prior experience in LLM and/or Machine Learning research is preferred

Automated Evaluation - the Right Way

【Introduction】

While there have been significant efforts to leverage LLMs as evaluators, the technology is not quite there yet: it is only useful in English and in certain tasks, which severely limits its usability and trustworthiness across languages. Join a groundbreaking project at Microsoft Research Asia focused on answering fundamental questions around LLM-based evaluation, with direct production impact. This project aims to surpass the current capabilities of LLMs in certain tasks, emphasizing accuracy, reliability, robustness, and generalizability. The intern will be instrumental in creating a production-deployed system that adapts to the needs of hundreds of millions of users, and will help answer fundamental questions about the capabilities, limitations, and uses of LLMs and beyond.
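
To fix ideas, here is a toy LLM-as-evaluator sketch. The `call_llm` function is a hypothetical stand-in for any chat-completion API, and the one-digit rubric is purely illustrative, not this project's actual evaluation protocol.

```python
# Toy LLM-as-judge sketch. `call_llm` is a hypothetical stand-in for a
# real chat-completion client; the rubric and parsing are illustrative.

JUDGE_PROMPT = """You are a strict evaluator.
Rate the RESPONSE to the QUESTION on a 1-5 scale for accuracy,
judging in the language of the question. Reply with the digit only.

QUESTION: {question}
RESPONSE: {response}"""

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    return "4"

def judge(question: str, response: str) -> int:
    raw = call_llm(JUDGE_PROMPT.format(question=question, response=response))
    score = int(raw.strip()[0])   # naive parsing; production systems validate
    return min(max(score, 1), 5)  # clamp to the rubric's range

print(judge("¿Cuál es la capital de Francia?", "París"))
```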

【Research Areas】

Large Language Models, LLM-based Evaluation, LLMs for Low-Resource Languages

https://www.microsoft.com/en-us/research/group/natural-language-computing/

【Qualifications】

  • Passionate about research in LLMs and AGI
  • Solid coding and communication skills
  • Graduate student working towards a Ph.D.
  • Prior experience in LLM and/or Machine Learning research is preferred

Towards Efficient and Robust Retrieval Augmented Generation

【Introduction】

Retrieval-augmented generation (RAG) is a technique for enhancing the quality of responses generated by large language models (LLMs) by using external sources of knowledge to supplement the LLM's internal representation of information. RAG allows LLMs to access the most up-to-date and reliable facts from a knowledge base or internal information storage. It can be used for various natural language generation tasks, such as question answering, summarization, and chat. However, the documents retrieved may be redundant and noisy. This project aims to develop efficient and robust RAG methods that shorten the context by removing contradictions and redundancy, reduce hallucination, and remain robust across domains.
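
As a minimal sketch of the idea, the snippet below retrieves the top-ranked documents and drops near-duplicates before building the prompt. The bag-of-words "embedding" and the three documents are toy stand-ins for a real encoder and corpus.

```python
# Minimal RAG sketch with naive redundancy filtering. The bag-of-words
# "embedding" stands in for a real encoder; documents are invented.
import re
import numpy as np

docs = [
    "RAG supplements an LLM with retrieved external knowledge.",
    "RAG supplements an LLM with external knowledge it retrieved.",  # near-dup
    "The moon has no atmosphere to speak of.",
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

vocab = sorted({w for d in docs for w in tokens(d)})

def embed(text):
    v = np.array([tokens(text).count(w) for w in vocab], dtype=float)
    return v / (np.linalg.norm(v) + 1e-9)  # unit vector -> dot = cosine

query = embed("What knowledge does RAG retrieve for an LLM?")
ranked = sorted(docs, key=lambda d: float(embed(d) @ query), reverse=True)

context, kept = [], []
for d in ranked[:2]:
    e = embed(d)
    if all(float(e @ k) < 0.8 for k in kept):  # drop near-duplicates
        kept.append(e)
        context.append(d)

print("Context:\n" + "\n".join(context) + "\nQuestion: ...")
```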

【Research Areas】

Large Language Models, Retrieval-Augmented Generation

https://www.microsoft.com/en-us/research/group/natural-language-computing/

【Qualifications】

  • Passionate about research in LLMs and AGI
  • Solid coding and communication skills
  • Prior experience in LLM and/or Machine Learning research is preferred

Online Aesthetic-Aware Smart Image Resizing

【Introduction】

For the new Designer app and Designer in Edge, we need to resize templates to different sizes, since different social media platforms require different target dimensions for media, e.g., Facebook timeline posts for personal accounts and business pages (1200 x 628), LinkedIn timeline posts (1200 x 1200), Twitter timeline posts (1600 x 900), etc. Images are at the center of a template's design. We need an ML-powered technique to automatically resize an image (including aspect-ratio change, crop, and zoom in/out) and place it into a resized template (more specifically, a resized image placeholder) for the target platform, so that the image placement looks good (i.e., maintains its aesthetic value).
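
As a concrete illustration of the geometry involved, the sketch below computes a center crop that matches a target aspect ratio. An aesthetic-aware resizer would place the crop window using a learned saliency or aesthetics model rather than the image center.

```python
# Center-crop an image to a target aspect ratio (the geometric core of
# the problem). A real system would choose the window with an
# aesthetics/saliency model instead of centering it.

def center_crop_box(width, height, target_w, target_h):
    """Return (left, top, right, bottom) of the largest centered crop
    with the target aspect ratio."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:       # too wide: trim the sides
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = round(width / target_ratio)     # too tall: trim top/bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# A 1600 x 900 photo placed into a 1200 x 1200 LinkedIn template:
print(center_crop_box(1600, 900, 1200, 1200))  # (350, 0, 1250, 900)
```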

【Research Areas】

Computer Vision and Machine Learning

【Qualifications】

  • Ph.D. students majoring in computer science, applied mathematics, electrical engineering or related technical discipline
  • Relevant experience in the development and application of computer vision and/or machine learning algorithms to solve challenging image understanding problems
  • Strong scientific programming skills, including C/C++, MATLAB, Python
  • Independent analytical problem-solving skills
  • Experience collaborating within research teams to develop advanced research concepts, prototypes, and systems
  • Strong communication skills

UserBERT: Pretrain User Models for Recommendation

【Introduction】

Pretrained language models such as BERT and UniLM have achieved huge success in many natural language processing scenarios. In many recommendation scenarios, such as news recommendation, video recommendation, and ads CTR/CVR prediction, user models are very important for inferring user interest and intent from user behaviors. Previously, user models were trained in a supervised, task-specific way, which cannot achieve a global and universal understanding of users and may limit their capacity to serve personalized applications.

In this project, inspired by the success of pretrained language models, we plan to pretrain universal user models from large-scale unlabeled user behaviors using self-supervision tasks. The pretrained user models aim to better capture the characteristics, interests, and intent of users, and can empower different downstream recommendation tasks by finetuning on their labeled data. Our recent work can be found at:

https://scholar.google.co.jp/citations?hl=zh-CN&user=0SZVO0sAAAAJ&view_op=list_works&sortby=pubdate
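
For intuition, here is a minimal sketch of this kind of self-supervised pretraining: mask items in users' behavior sequences and train a small Transformer encoder to recover them. The model sizes and synthetic data are illustrative, not the project's actual setup.

```python
# Masked-behavior pretraining sketch: recover masked items in user
# behavior sequences. Sizes and the random "behaviors" are toy values.
import torch
import torch.nn as nn

VOCAB, MASK_ID, DIM = 1000, 0, 64  # item ids 1..999; 0 is the [MASK] id

embed = nn.Embedding(VOCAB, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(DIM, VOCAB)
params = list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

behaviors = torch.randint(1, VOCAB, (32, 20))  # 32 users x 20 clicked items
mask = torch.rand(behaviors.shape) < 0.15      # mask ~15% of positions
inputs = behaviors.masked_fill(mask, MASK_ID)

logits = head(encoder(embed(inputs)))          # (32, 20, VOCAB)
loss = nn.functional.cross_entropy(logits[mask], behaviors[mask])
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```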

【Research Areas】

Recommender Systems and Natural Language Processing

【Qualifications】

  • Ph.D. students majoring in computer science, electronic engineering, or related areas
  • Self-motivated and passionate about research
  • Solid coding skills
  • Experienced in Recommender Systems and Natural Language Processing

Visual representation learning by vision-language tasks and its applications

【Introduction】

Learning visual representations from vision-language pair data, pioneered by CLIP and DALL-E, has proven highly competitive with previous supervised and self-supervised approaches. Such vision-language learning approaches have also demonstrated strong performance on pure-vision and vision-language applications. The aim of this project is to continually push forward the boundary of this research direction.
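
For reference, a minimal sketch of the CLIP-style contrastive objective that underlies this line of work, with random features standing in for real image and text encoders.

```python
# CLIP-style symmetric contrastive loss: matched image/text pairs are
# pulled together, mismatched pairs pushed apart. Random vectors stand
# in for encoder outputs.
import torch
import torch.nn.functional as F

batch, dim, temperature = 8, 128, 0.07
img = F.normalize(torch.randn(batch, dim), dim=-1)  # image embeddings
txt = F.normalize(torch.randn(batch, dim), dim=-1)  # text embeddings

logits = img @ txt.t() / temperature                 # pairwise similarities
targets = torch.arange(batch)                        # i-th image <-> i-th text
loss = (F.cross_entropy(logits, targets)             # image -> text
        + F.cross_entropy(logits.t(), targets)) / 2  # text -> image
print(float(loss))
```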

【Research Areas】

Computer vision

https://www.microsoft.com/en-us/research/group/visual-computing/

https://www.microsoft.com/en-us/research/people/hanhu/

【Qualifications】

  • Ph.D. students currently enrolled at (or holding a promised or deferred offer from) an overseas university who are now staying in China
  • Major in computer vision, natural language processing, or machine learning

DNN-based Detection of Abnormal User Behaviors

【Introduction】

Are you excited to apply deep neural networks to solve practical problems? Would you like to help secure enterprise computer systems and users across the globe? Cyber-attacks on enterprises are proliferating and oftentimes causing damage to essential business operations. Adversaries may steal credentials of valid users and use their accounts to conduct malicious activities, which abruptly deviate from valid user behavior. We aim to prevent such attacks by detecting abrupt user behavior changes.

In this project, you will leverage deep neural networks to model the behaviors of a large number of users, detect abrupt behavior changes of individual users, and determine whether changed behaviors are malicious. You will be part of a joint initiative between Microsoft Research and the Microsoft Defender for Endpoint (MDE) team. During your internship, you will collaborate with some of the world's best researchers in security and machine learning.
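
To make the task concrete, here is a toy sketch of one common approach: train an autoencoder on a user's historical behavior features and flag inputs whose reconstruction error spikes. The features, data, and threshold are invented for illustration; this is not MDE's actual detection logic.

```python
# Autoencoder-based anomaly detection sketch: behavior unlike the
# training history reconstructs poorly. Features/threshold are toy.
import torch
import torch.nn as nn

FEATS = 16  # e.g., logon counts, processes launched, hosts accessed
model = nn.Sequential(nn.Linear(FEATS, 4), nn.ReLU(), nn.Linear(4, FEATS))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

normal = torch.rand(256, FEATS)  # a user's historical behavior vectors
for _ in range(200):
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

def is_abrupt_change(day_vector, threshold=0.3):
    err = nn.functional.mse_loss(model(day_vector), day_vector)
    return float(err) > threshold  # high error = unlike past behavior

print(is_abrupt_change(torch.rand(FEATS)))       # in-distribution: likely False
print(is_abrupt_change(torch.rand(FEATS) * 10))  # abrupt shift: likely True
```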

You would be expected to:

  • Closely work with researchers in China and Israel towards the research goals of the project.
  • Develop and implement research ideas and conduct experiments to validate them.
  • Report and present findings.

Microsoft is an equal opportunity employer.

【Research Areas】

Software Analytics, MSR Asia

https://www.microsoft.com/en-us/research/group/software-analytics/

Microsoft Defender for Endpoint (MDE)

This Microsoft engineering and research group develops Microsoft Defender for Endpoint, an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.

https://www.microsoft.com/en-us/security/business/threat-protection/endpoint-defender

【Qualifications】

  • Must have at least one year of experience applying machine learning/deep learning to real-world or research problems
  • Demonstrated hands-on experience with Python through previous projects
  • Familiarity with deep learning frameworks such as PyTorch, TensorFlow, etc.
  • Keen attention to detail and a strong analytical mindset
  • Excellent English reading skills and reasonably good English communication skills
  • Advisor’s permission

Those with the following conditions are preferred:

  • Prior experience in behavior modeling
  • Prior experience in anomaly detection
  • Security knowledge a plus

Reinforcing Pretrained Language Models for Generating Attractive Text Advertisements

【Introduction】

While PLMs have been widely used to generate high-quality texts in a supervised manner (by imitating texts written by humans), they lack a mechanism for generating texts that directly optimize a given reward, e.g., user feedback such as clicks, or a criterion that cannot be directly optimized with gradient descent. In real-world applications, we usually wish to achieve more than just imitating existing texts. For example, we may wish to generate more attractive texts that lead to increased user clicks, more diversified texts to improve user experience, and more personalized texts better tailored to user tastes. Combining RL with PLMs provides a unified solution for all these scenarios and is core to achieving human parity in text generation. Such a method has the potential to be applied in a wide range of products, e.g., Microsoft Advertising (text ad generation), Microsoft News (news headline generation), and Microsoft Stores and Xbox (optimizing descriptions for recommended items).

In this project, we aim to study how pretrained language models (PLMs) can be enhanced with deep reinforcement learning (RL) to generate attractive, high-quality text ads. While finetuned PLMs have been shown to generate high-quality texts, RL additionally provides a principled way to directly optimize user feedback (e.g., user clicks) for improving attractiveness. Our initial RL method, UMPG, is deployed in Dynamic Search Ads and was published at KDD 2021. We wish to extend the method so that it works with all pretrained language models (in addition to UniLM), and to study how the technique can benefit other important Microsoft Advertising products and international markets.
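
For intuition, below is a minimal REINFORCE-style sketch of reward-driven generation with a pretrained LM. This is a generic illustration, not the UMPG method itself; it assumes the HuggingFace transformers API, and the reward function is a hypothetical stand-in for a learned attractiveness/CTR model.

```python
# REINFORCE-style sketch: sample an ad text, score it with a reward
# model, and weight the sequence NLL by the reward. The reward here is
# a hypothetical placeholder for a learned CTR predictor.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
opt = torch.optim.Adam(model.parameters(), lr=1e-6)

def reward_fn(text):
    """Hypothetical stand-in for an attractiveness/CTR model."""
    words = text.lower().split()
    return len(set(words)) / max(len(words), 1)

prompt = tok("New laptop, now with", return_tensors="pt")
sample = model.generate(**prompt, do_sample=True, max_new_tokens=16,
                        pad_token_id=tok.eos_token_id)
text = tok.decode(sample[0], skip_special_tokens=True)

# REINFORCE without a baseline: high-reward samples become more likely.
# (For brevity, prompt tokens are included in the loss.)
nll = model(sample, labels=sample).loss
loss = reward_fn(text) * nll
opt.zero_grad(); loss.backward(); opt.step()
print(text, float(loss))
```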

【Research Areas】

Social Computing (SC), MSR Asia

https://www.microsoft.com/en-us/research/group/social-computing-beijing/

Microsoft Advertising, Microsoft Redmond

【Qualifications】

  • Ph.D. students majoring in computer science, electrical engineering, or equivalent areas
  • Experience with deep NLP and Transformers a strong plus
  • Background knowledge of language model pre-training and/or reinforcement learning
  • Capable of implementing systems based on academic papers written in English

Those with the following conditions are preferred:

  • Good English reading, writing, and communication skills; capable of writing English papers and documents
  • Active on GitHub; has used or contributed to well-known open-source projects

High-performance Distributed Deep Learning

【Introduction】

Parallel and distributed systems are the answer to the ever-increasing complexity of deep learning training. However, existing solutions still leave efficiency and scalability on the table by missing optimization opportunities across the varied environments found at industrial scale.

In this project, we'll work with scientists at the forefront of systems and networking research, leveraging world-leading platforms to solve systems and networking problems in parallel and distributed deep learning. The current project team members, from both the MSR Asia and MSR Redmond labs, have rich experience contributing to both industry and the academic community by transferring innovations into production systems and publishing at top conferences.

【Research Areas】

System and Networking, MSR Asia

https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/

Research in Software Engineering, MSR Redmond

https://www.microsoft.com/en-us/research/group/research-software-engineering-rise/

【Qualifications】

  • Major in computer science, electrical engineering, or equivalent field
  • Solid knowledge of data structure/algorithm
  • Familiarity with Python, C/C++, and other programming languages; familiar with Linux and development on Linux platforms
  • Good communication and presentation skills
  • Good English reading and writing ability; capable of implementing systems based on academic papers written in English and of writing English documents

Those with the following conditions are preferred:

  • Familiarity with deep learning systems (e.g., PyTorch, TensorFlow), GPU programming, and networking
  • Familiarity with communication libraries such as NCCL and MPI implementations such as OpenMPI and MVAPICH
  • Rich knowledge of machine learning and machine learning models
  • Familiarity with engineering processes is a strong plus
  • Active on GitHub; has used or contributed to well-known open-source projects

Intelligent Data Cleansing

【Introduction】

Tabular data, such as Excel spreadsheets and databases, is one of the most important assets in large enterprises today, yet it is often plagued with data quality issues. Intelligent data cleansing focuses on novel ways to detect and fix data quality issues in tabular data, assisting the large class of less-technical and non-technical users in enterprises.

We are interested in a variety of topics in this area, including data-driven and intelligent techniques to detect data quality issues and suggest possible fixes, leveraging inferred constraints and statistical properties based on existing data assets and software artifacts.
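
As a small, concrete example of the statistics-driven flavor of this work, the sketch below infers the dominant value "shape" of a column and flags cells that violate it. Real systems infer far richer constraints; the data here is invented.

```python
# Pattern-based error detection sketch: map digits to "9" to get each
# value's shape, then flag values whose shape deviates from the
# column's majority. The column values are invented.
import pandas as pd

col = pd.Series(["2021-03-01", "2021-03-02", "03/05/2021", "2021-03-07", "N.A."])

shapes = col.str.replace(r"\d", "9", regex=True)  # e.g., "9999-99-99"
dominant = shapes.mode()[0]                       # majority shape
suspects = col[shapes != dominant]
print(suspects)  # flags "03/05/2021" and "N.A." as likely quality issues
```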

【Research Areas】

Data, Knowledge, and Intelligence (DKI), MSR Asia

https://www.microsoft.com/en-us/research/group/data-knowledge-intelligence/

Data Management, Exploration and Mining (DMX), MSR Redmond

https://www.microsoft.com/en-us/research/group/data-management-exploration-and-mining-dmx

【Qualifications】

  • Graduate-level students in Computer Science or related STEM fields. PhD students are preferred
  • Students with research background in database, data mining, statistics, software engineering, and visualization are preferred

Intelligent Power-Aware Virtual Machine Allocation

【Introduction】

As one of the world's leading cloud service providers, Microsoft Azure manages tens of millions of virtual machines every day. Within such a large-scale cloud system, how to efficiently allocate virtual machines on servers is critical and has been a hot research topic for years. Previously, teams from MSR Asia and MSR Redmond have made significant contributions in this area, resulting in production impact and academic papers at top-tier conferences (e.g., IJCAI, AAAI, OSDI, NSDI). In this project, we intend to unite the strengths of MSR Asia and MSR Redmond for forward-looking, collaborative research on power management in datacenters, including power-aware virtual machine allocation. The project involves developing power prediction models that leverage state-of-the-art machine learning methods, as well as building efficient and reliable allocation systems in large-scale distributed environments.
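
To illustrate the allocation side, here is a toy best-fit, power-aware placement sketch. The linear power "model" and all numbers are invented stand-ins for a learned predictor and real telemetry.

```python
# Power-aware best-fit placement sketch: predict a VM's power draw and
# place it on the server with the least headroom that still fits.
# The linear "predictor" and numbers are illustrative only.

def predict_power(vm):
    # hypothetical learned model; here, a linear proxy on vCPUs/memory
    return 12.0 * vm["vcpus"] + 0.4 * vm["mem_gb"]  # watts

servers = [
    {"name": "s1", "cap": 800.0, "draw": 700.0},  # cap/draw in watts
    {"name": "s2", "cap": 800.0, "draw": 450.0},
    {"name": "s3", "cap": 800.0, "draw": 300.0},
]

def place(vm):
    need = predict_power(vm)
    fits = [s for s in servers if s["cap"] - s["draw"] >= need]
    if not fits:
        return None  # defer the VM or trigger power capping
    best = min(fits, key=lambda s: s["cap"] - s["draw"])  # best-fit
    best["draw"] += need
    return best["name"]

print(place({"vcpus": 8, "mem_gb": 32}))  # needs 108.8 W -> placed on s2
```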

【Research Areas】

Data, Knowledge, and Intelligence (DKI), MSR Asia

https://www.microsoft.com/en-us/research/group/data-knowledge-intelligence/

System, MSR Redmond

https://www.microsoft.com/en-us/research/group/systems-research-group-redmond/

【Qualifications】

  • Currently enrolled in a graduate program in computer science or equivalent field
  • Good research track record in related areas
  • Able to carry out research tasks with high quality
  • Good communication and presentation skills in written and oral English
  • Knowledge and experience in machine learning, data mining and data analytics are preferred
  • Familiarity with AIOps or AI for systems is a strong plus

Learning Bandwidth Estimation for Real-Time Video

【Introduction】

In today’s real-time video applications, a key component for optimizing the user’s quality of experience is bandwidth estimation and rate control. It estimates the network capacity based on congestion signals observed on the path and adapts the video bitrate accordingly through the codec. However, existing handcrafted bandwidth estimators have failed to accommodate a wide range of complex network conditions, calling for a data-driven approach.

Motivated by the recent success in applying reinforcement learning (RL) to video streaming and congestion control, we have made an initial attempt at designing an RL-based bandwidth estimator for one-on-one video calls. Going forward, we are working to optimize the performance of our current neural network model, as well as to extend the study of bandwidth estimation and rate control to multiparty videoconferencing.
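
For intuition, a toy sketch of the RL framing follows: the agent observes congestion signals (state), scales its bandwidth estimate (action), and is rewarded for throughput while penalized for delay and loss. The environment model, the fixed rule, and the coefficients are invented; a learned policy network would replace the rule.

```python
# Toy RL framing for bandwidth estimation. The "network" and reward
# coefficients are invented; a trained policy would replace the rule.
import random

estimate = 1.0  # Mbps
for step in range(20):
    # State: congestion signals a real system would measure on the path.
    capacity = 5.0 + random.uniform(-1, 1)
    loss_rate = max(0.0, (estimate - capacity) / max(estimate, 1e-6))
    queuing_delay_ms = max(0.0, estimate - capacity) * 10.0

    # Action: scale the estimate (a policy network would decide this).
    estimate *= 0.8 if loss_rate > 0.05 else 1.1

    # Reward: throughput minus delay and loss penalties.
    reward = min(estimate, capacity) - 0.05 * queuing_delay_ms - 5.0 * loss_rate
    print(f"step={step:2d} estimate={estimate:5.2f} Mbps reward={reward:5.2f}")
```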

【Research Areas】

System and Networking, MSR Asia

https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/

Mobility and Networking, MSR Redmond

https://www.microsoft.com/en-us/research/group/mobility-and-networking-research/

【Qualifications】

  • Major in computer science or a related field
  • Strong programming skills in Python or C++
  • Excellent English communication skills
  • Experience with deep reinforcement learning or related areas is preferred
  • Knowledge of computer networks is preferred
  • Background in AI for systems and networking is a strong plus
  • Track record of publications in top systems, networking, or AI conferences is strongly preferred

Neuro-Symbolic Semantic Parsing for Data Science

【Introduction】

Our cross-lab, interdisciplinary research team develops AI technology for interactive coding assistance for data science, data analytics, and business process automation. It allows the user to specify their data processing intent in the middle of their workflow using a combination of natural language, input-output examples, and multi-modal UX, and translates that intent into the desired source code. The underlying AI technology integrates our state-of-the-art research in program synthesis, semantic parsing, and structure-grounded natural language understanding. It has the potential to improve the productivity of millions of data scientists and software developers, as well as to establish new scientific milestones for deep learning over structured data, grounded language understanding, and neuro-symbolic AI.

The research project involves collecting and establishing a novel benchmark dataset for data science program generation, developing novel neuro-symbolic semantic parsing models to tackle this challenge, adapting large-scale pretrained language models to new domains and knowledge bases, as well as publishing in top-tier AI/NLP conferences. We expect the benchmark dataset and the new models to be used in academia as well as in Microsoft products.
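
As a toy illustration of the neuro-symbolic loop, the sketch below checks candidate pandas programs against an input-output example by executing them; the hard-coded candidate list stands in for samples from a real semantic parser.

```python
# Neural proposes, symbolic verifies: execute candidate programs and
# keep the one consistent with the input-output example. The candidate
# list stands in for top-k samples from a neural semantic parser.
import pandas as pd

nl_intent = "total sales per city"
example_in = pd.DataFrame({"city": ["a", "a", "b"], "sales": [1, 2, 5]})
example_out = pd.DataFrame({"city": ["a", "b"], "sales": [3, 5]})

candidates = [  # hypothetical parser outputs for nl_intent
    "df.groupby('city', as_index=False)['sales'].mean()",
    "df.groupby('city', as_index=False)['sales'].sum()",
    "df.sort_values('sales')",
]

for prog in candidates:
    try:
        result = eval(prog, {"df": example_in})  # symbolic execution check
        if result.reset_index(drop=True).equals(example_out):
            print("verified:", prog)
            break
    except Exception:
        pass  # discard candidates that fail to execute
```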

【Research Areas】

Natural Language Computing, MSR Asia

https://www.microsoft.com/en-us/research/group/natural-language-computing

Neuro-Symbolic Learning, MSR Redmond

【Qualifications】

  • Master's or Ph.D. students majoring in computer science or equivalent areas
  • Background in deep NLP, semantic parsing, sequence-to-sequence learning, and Transformers required
  • Experience with PyTorch and HuggingFace Transformers
  • Fluent English speaking, listening, and writing skills
  • Background in deep learning over structured data (graphs/trees/programs) and program synthesis preferred
  • Students with papers published at top-tier AI/NLP conferences are preferred

Next-Gen Large Pretrained Language Models

【Introduction】

The goal of this project is to develop game-changing techniques for next-gen large pre-trained language models, including:

(1) Beyond UniLM/InfoXLM: novel pre-training frameworks and self-supervised tasks for monolingual and multilingual pre-training to support language understanding, generation and translation tasks;

(2) Beyond Transformers: new model architectures and optimization algorithms for improving training effectiveness and efficiency of extremely large language models;

(3) Knowledge Fusion: new modeling frameworks to fuse massive pre-compiled knowledge into pre-trained models;

(4) Lifelong Self-supervised Learning: mechanisms and algorithms for lifelong (incremental) pre-training. This project extends our existing research and aims to advance the state of the art in NLP and AI in general.

【Research Areas】

Natural Language Computing, MSR Asia

https://www.microsoft.com/en-us/research/group/natural-language-computing

Deep Learning, MSR Redmond

https://www.microsoft.com/en-us/research/group/deep-learning-group

【Qualifications】

  • Major in computer science or equivalent areas
  • One or more years of research experience in deep learning for NLP, CV, or related areas
  • Experience with open-source tools such as PyTorch, Tensorflow, etc.
  • Background knowledge of language model pre-training is preferred
  • Track record of publications in related top conferences (e.g., ACL, EMNLP, NAACL, ICML, NeurIPS, ICLR) is preferred
  • Excellent communication and writing skills

Participant Stories and Testimonials


刘国栋
Intelligent Cloud Systems Group

My research at Microsoft Research Asia focused on accelerating the pretraining of large-scale deep learning models. I gained a great deal from the 星跃计划 program: both my research ability and my domain knowledge improved substantially. Beyond that, the program's unique cross-lab collaboration gave me guidance from two mentors and help from many other researchers in both labs, which helped me figure out what to do faster, avoid many detours, and sharpen my skills in international collaboration.

陈思蓓
Software Analytics Group

In the 星跃计划 program I worked on automatically recommending formulas in spreadsheets, co-advised by mentors from MSR Redmond and MSR Asia. I gained a great deal from the process. My mentors constantly guided my research with a steady stream of ideas, and I learned many ways of thinking about research from them. I really like this joint-mentorship model: the two mentors offer useful insights from different angles, which broadened my horizons and sharpened my thinking.

崔昊天
Natural Language Computing Group

My main research in the 星跃计划 program was generating explanations for code, a direction that was fairly new to me. The all-star lineup of mentors from MSR Redmond and MSR Asia helped me beyond what I could have imagined, offering both high-level perspective and hands-on, code-level experience. Guidance like this is rare, and it helped me quickly find my footing in the work. Beyond the project results, MSRA's free research atmosphere and efficient communication culture were eye-opening: discussing with top-level mentors and fellow students every week benefited me enormously, and I will carry this way of working into my future jobs. I also met many fun, active, and motivated peers at MSRA, which was another big reward.

黄婧璇
Software Analytics Group

Thanks to the 星跃计划 program, I had the chance to become part of the MSRA family. My research area was intelligent data cleansing. My mentors were very kind and always guided me patiently; whenever I got stuck, they came up with great ideas that kept the project moving forward smoothly. I also learned many new skills here and met peers from different schools and backgrounds, so the road of research no longer feels lonely!

李晔伟
Networking Research Group

I feel very fortunate to have joined the 星跃计划 program. It is superbly resourced, combining the strengths of two research labs, and it gave me a rare research internship and a chance to grow. My research applied reinforcement learning to decision problems in real-time multiparty videoconferencing. Here I met two senior and very kind mentors from MSR Redmond and MSR Asia, collaborators from academia, and outstanding peers. Through deep exchanges with all of them, I gained many research ideas and inspirations and came to understand both research paradigms and the practical considerations of shipping industrial products. Many thanks to the program, my mentors, and the other participants for providing a perfect research platform and the opportunity to explore interesting real-world problems.

李佳蔚
Software Analytics Group

In the 星跃计划 program I worked on optimizing datacenter power consumption, running experiments in a production-grade simulator on large-scale historical data from Microsoft's cloud. The project brought me up close to real, cutting-edge industrial problems; it was my first time doing research driven by an industrial application, and I directly experienced the value of research and the satisfaction of having the chance to make an impact. Because the problem is so new, the project was also highly exploratory and challenging: when I hit difficulties, we basically iterated with an experiment update and a discussion every day, and my mentors and collaborators gave concrete, timely help on everything from the big picture down to coding details. During my internship at MSRA I also met many excellent, interesting, distinctive, and like-minded peers, and I learned a lot from every one of them. I feel very lucky!

杨宗霖
Natural Language Computing Group

In the 星跃计划 program, interns get to interact with top researchers from both MSR Asia and MSR Redmond, and I benefited immensely from this. I am also grateful to my mentor and other senior researchers at Microsoft Research for supporting my research ideas, which gave me both the opportunity and the confidence to fully commit to exploring a new research direction. Exchanges with outstanding peers also broadened my thinking and horizons. I hope that after I formally start my Ph.D. I can build a bridge between my research group and MSRA, and that we will collaborate more in the future.