星跃计划 | New Projects Open! MSR Asia and Microsoft E+D Invite You to Apply for Their Joint Research Program!
The 星跃计划 joint research program, launched by Microsoft Research Asia together with Microsoft headquarters, is now open for applications! This round again adds new projects from the Microsoft E+D (Experiences + Devices) Applied Research team at Microsoft's global headquarters; we welcome your interest and applications! What are you waiting for? Join 星跃计划, cross the ocean with us, and explore what research can be!
The cross-lab joint research projects currently recruiting cover intelligent recommendation, computer vision, behavior detection, social computing, intelligent cloud, and more. The open projects are: Online Aesthetic-Aware Smart Image Resizing; UserBERT: Pretrain User Models for Recommendation; Visual Representation Learning by Vision-language Tasks and its Applications; DNN-based Detection of Abnormal User Behaviors; Reinforcing Pretrained Language Models for Generating Attractive Text Advertisements; and Intelligent Power-Aware Virtual Machine Allocation. The list of open projects will be updated continuously, so stay tuned for the latest news!
- Undergraduate, master's, and Ph.D. students, including deferred and gap-year students
Online Aesthetic-Aware Smart Image Resizing
For the new Designer app and Designer in Edge, we need to resize templates to different sizes, since different social media platforms require different target dimensions for media, e.g., Facebook Timeline Post for personal accounts and business pages (1200 x 628), LinkedIn timeline post (1200 x 1200), Twitter timeline post (1600 x 900), etc. The image is the centerpiece of a template design. We need an ML-powered technique to automatically resize an image (including aspect-ratio changes, cropping, and zooming in/out) and place it into a resized template (more specifically, a resized image placeholder) for the target platform, so that the image placement looks good (i.e., maintains its aesthetic value).
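To make the resizing task concrete, here is a minimal baseline sketch (not the project's actual method): given a source image and a target placeholder, compute the largest crop that matches the target aspect ratio. The function name and centered-crop strategy are illustrative assumptions; an aesthetic-aware model would instead shift or scale the crop toward the image's subject.

```python
def crop_box(src_w, src_h, dst_w, dst_h):
    """Largest centered crop of the source matching the target aspect ratio.

    Returns (left, top, right, bottom). A saliency- or aesthetics-aware
    model could shift this box toward the image's subject instead of
    centering it; this is only a naive baseline.
    """
    target_ratio = dst_w / dst_h
    if src_w / src_h > target_ratio:
        # Source is wider than the target: trim the sides.
        new_w = round(src_h * target_ratio)
        x0 = (src_w - new_w) // 2
        return (x0, 0, x0 + new_w, src_h)
    # Source is taller than (or equal to) the target: trim top and bottom.
    new_h = round(src_w / target_ratio)
    y0 = (src_h - new_h) // 2
    return (0, y0, src_w, y0 + new_h)

# Example: fit a 2000 x 1000 photo into a Facebook post placeholder (1200 x 628).
print(crop_box(2000, 1000, 1200, 628))
```

After cropping, the result is scaled to the placeholder's exact dimensions; the research question is how to choose the crop (and zoom) so the composition still looks good.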
Computer Vision and Machine Learning
- Ph.D. students majoring in computer science, applied mathematics, electrical engineering or related technical discipline
- Relevant experience in the development and application of computer vision and/or machine learning algorithms to solve challenging image understanding problems
- Strong scientific programming skills, including C/C++, MATLAB, Python
- Independent analytical problem-solving skills
- Experience collaborating within research teams to develop advanced research concepts, prototypes, and systems
- Strong communication skills
UserBERT: Pretrain User Models for Recommendation
Pretrained language models such as BERT and UniLM have achieved huge success in many natural language processing scenarios. In many recommendation scenarios such as news recommendation, video recommendation, and ads CTR/CVR prediction, user models are very important for inferring user interest and intent from user behaviors. Previously, user models were trained in a supervised, task-specific way, which cannot achieve a global and universal understanding of users and may limit their capacity to serve personalized applications.
In this project, inspired by the success of pretrained language models, we plan to pretrain universal user models from large-scale unlabeled user behaviors using self-supervision tasks. The pretrained user models aim to better understand the characteristics, interest and intent of users, and can empower different downstream recommendation tasks by finetuning on their labeled data. Our recent work can be found at https://scholar.google.co.jp/citations?hl=zh-CN&user=0SZVO0sAAAAJ&view_op=list_works&sortby=pubdate.
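The self-supervision idea above can be illustrated with a BERT-style masking objective applied to behavior sequences. The function below is a hypothetical sketch, not UserBERT's actual pretraining task: it masks random behaviors (e.g., clicked item IDs) so a user model can be pretrained to recover them without any task-specific labels.

```python
import random

MASK = "[MASK]"

def mask_behaviors(behaviors, mask_prob=0.15, seed=0):
    """BERT-style masking over a user behavior sequence (illustrative only).

    Randomly replaces behaviors with [MASK]; a user model would be
    pretrained to predict the masked behaviors from the remaining context.
    Returns (masked_sequence, {position: original_behavior}).
    """
    rng = random.Random(seed)
    masked, labels = [], {}
    for i, behavior in enumerate(behaviors):
        if rng.random() < mask_prob:
            labels[i] = behavior
            masked.append(MASK)
        else:
            masked.append(behavior)
    return masked, labels

# Toy click history; IDs are made up for illustration.
clicks = ["news_12", "news_7", "video_3", "news_41", "ad_9"]
masked, labels = mask_behaviors(clicks, mask_prob=0.4)
```

As with BERT, the pretrained encoder is then finetuned on labeled data for each downstream recommendation task.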
Recommender Systems and Natural Language Processing
- Ph.D. students majoring in computer science, electronic engineering, or related areas
- Self-motivated and passionate in research
- Solid coding skills
- Experienced in Recommender Systems and Natural Language Processing
Visual Representation Learning by Vision-language Tasks and its Applications
Learning visual representations from vision-language pair data, pioneered by CLIP and DALL-E, has proven highly competitive with previous supervised and self-supervised approaches. Such vision-language learning approaches have also demonstrated strong performance on some pure-vision and vision-language applications. The aim of this project is to continue pushing the boundary of this research direction.
- Ph.D. students enrolled at an overseas university (or holding a promised or deferred offer) who are currently staying in China
- Major in computer vision, natural language processing, or machine learning
DNN-based Detection of Abnormal User Behaviors
Are you excited to apply deep neural networks to solve practical problems? Would you like to help secure enterprise computer systems and users across the globe? Cyber-attacks on enterprises are proliferating and oftentimes causing damage to essential business operations. Adversaries may steal credentials of valid users and use their accounts to conduct malicious activities, which abruptly deviate from valid user behavior. We aim to prevent such attacks by detecting abrupt user behavior changes.
In this project, you will leverage deep neural networks to model the behaviors of a large number of users, detect abrupt behavior changes of individual users, and determine whether the changed behaviors are malicious. You will be part of a joint initiative between Microsoft Research and the Microsoft Defender for Endpoint (MDE) team. During your internship, you will get to collaborate with some of the world's best researchers in security and machine learning.
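The core detection idea can be sketched with a deliberately simple statistical stand-in: score how far today's activity deviates from a user's own historical baseline. The feature (distinct machines accessed per day), function names, and threshold below are illustrative assumptions; the project would learn such baselines with deep neural networks over much richer behavior sequences.

```python
from statistics import mean, stdev

def behavior_change_score(history, today):
    """Z-score of today's activity against the user's own history.

    `history` is a list of daily counts of some behavior feature (e.g.,
    distinct machines accessed); `today` is the current count. A DNN-based
    detector would replace this hand-crafted baseline with a learned model.
    """
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) / sigma if sigma > 0 else 0.0

def is_abrupt_change(history, today, threshold=3.0):
    """Flag deviations beyond `threshold` standard deviations."""
    return behavior_change_score(history, today) > threshold

# A user who normally touches ~3 machines a day suddenly accesses 40,
# as might happen when a stolen credential is used for lateral movement.
history = [2, 3, 4, 3, 2, 3, 4, 3]
print(is_abrupt_change(history, 40))
```

A flagged change is only a candidate; the second research question is deciding whether the new behavior is actually malicious.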
You would be expected to:
- Closely work with researchers in China and Israel towards the research goals of the project.
- Develop and implement research ideas and conduct experiments to validate them.
- Report and present findings.
Microsoft is an equal opportunity employer.
Software Analytics, MSR Asia
Microsoft Defender for Endpoint (MDE)
This is a Microsoft engineering and research group that develops Microsoft Defender for Endpoint, an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.
- Must have at least 1 year of experience applying machine learning/deep learning to real-world or research problems
- Demonstrated hands-on experience with Python through previous projects
- Familiarity with deep learning frameworks such as PyTorch, TensorFlow, etc.
- Keen attention to detail and a strong analytical mindset
- Excellent English reading skills and good English communication skills
- Advisor’s permission
Candidates with the following backgrounds are preferred:
- Prior experience in behavior modeling
- Prior experience in anomaly detection
- Security knowledge a plus
Reinforcing Pretrained Language Models for Generating Attractive Text Advertisements
While PLMs have been widely used to generate high-quality texts in a supervised manner (by imitating texts written by humans), they lack a mechanism for generating texts that directly optimize a given reward, e.g., user feedback such as clicks, or a criterion that cannot be directly optimized by gradient descent. In real-world applications, we usually wish to achieve more than just imitating existing texts. For example, we may wish to generate more attractive texts that lead to increased user clicks, more diversified texts to improve user experience, and more personalized texts that are better tailored to user tastes. Combining RL with PLMs provides a unified solution for all these scenarios, and is central to achieving human parity in text generation. Such a method has the potential to be applied in a wide range of products, e.g., Microsoft Advertising (text ad generation), Microsoft News (news headline generation), and Microsoft Stores and Xbox (optimizing the description for recommended items).
In this project, we aim to study how pretrained language models (PLMs) can be enhanced with deep reinforcement learning (RL) to generate attractive, high-quality text ads. While finetuned PLMs have been shown to generate high-quality texts, RL additionally provides a principled way to directly optimize user feedback (e.g., user clicks) for improving attractiveness. Our initial RL method, UMPG, is deployed in Dynamic Search Ads and was published at KDD 2021. We wish to extend the method so that it works with all pretrained language models (in addition to UniLM) and study how the technique can benefit other important Microsoft Advertising products and international markets.
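The general recipe behind such RL finetuning can be sketched as a REINFORCE-style objective: scale the generated sequence's log-probability by the observed reward, centered by a baseline. This is a generic illustration under stated assumptions, not UMPG itself; the reward values and baseline here are made up.

```python
def reinforce_loss(token_logprobs, reward, baseline=0.0):
    """REINFORCE objective for one generated ad (a generic sketch).

    `token_logprobs` are the PLM's log-probabilities of the tokens it
    generated; `reward` is observed user feedback (e.g., 1.0 for a click,
    0.0 otherwise); `baseline` (e.g., the average reward) reduces variance.
    Minimizing this loss raises the likelihood of above-baseline ads and
    lowers that of below-baseline ones.
    """
    return -(reward - baseline) * sum(token_logprobs)

# A clicked ad is reinforced; an ignored one with the same tokens is suppressed.
clicked = reinforce_loss([-0.2, -0.5, -0.1], reward=1.0, baseline=0.3)
ignored = reinforce_loss([-0.2, -0.5, -0.1], reward=0.0, baseline=0.3)
```

In practice the log-probabilities come from the PLM's decoder and the gradient flows back into its parameters; the supervised imitation loss is typically kept alongside the RL term to preserve fluency.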
Social Computing (SC), MSR Asia
Microsoft Advertising, Microsoft Redmond
- Ph.D. students majoring in computer science, electrical engineering, or related areas
- Experience with deep NLP and Transformers a strong plus
- Background knowledge of language model pre-training and/or reinforcement learning
- Capable of implementing systems based on academic papers written in English
Candidates with the following backgrounds are preferred:
- Good English reading and writing ability and communication skills, capable of writing English papers and documents
- Active on GitHub; experience using or contributing to well-known open-source projects
Intelligent Power-Aware Virtual Machine Allocation
As one of the world-leading cloud service providers, Microsoft Azure manages tens of millions of virtual machines every day. Within such a large-scale cloud system, how to efficiently allocate virtual machines on servers is critical and has been a hot research topic for years. Previously, teams from MSR-Asia and MSR-Redmond have made significant contributions in this area that resulted in production impact and academic papers at top-tier conferences (e.g., IJCAI, AAAI, OSDI, NSDI). In this project we intend to unify the strengths of MSR-Asia and MSR-Redmond to perform forward-looking, collaborative research on power management in datacenters, including power-aware virtual machine allocation. The project involves developing power prediction models that leverage state-of-the-art machine learning methods, as well as building efficient and reliable allocation systems in large-scale distributed environments.
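How a learned power model might plug into allocation can be sketched as a power-aware best-fit heuristic: among servers with enough free capacity, place the VM where the predicted power increase is smallest. The interface `predict_power_delta` and the toy per-server costs below are hypothetical, standing in for a trained prediction model.

```python
def allocate(vm_cores, servers, predict_power_delta):
    """Power-aware best-fit placement (an illustrative sketch).

    `servers` maps server id -> free cores; `predict_power_delta(sid, cores)`
    stands in for a learned power-prediction model. Returns the id of the
    feasible server with the lowest predicted power increase, or None.
    """
    feasible = [sid for sid, free in servers.items() if free >= vm_cores]
    if not feasible:
        return None
    return min(feasible, key=lambda sid: predict_power_delta(sid, vm_cores))

# Toy power model: marginal watts per core differ across servers.
servers = {"s1": 16, "s2": 4, "s3": 8}
power_delta = lambda sid, cores: cores * {"s1": 3.0, "s2": 1.2, "s3": 2.0}[sid]
print(allocate(8, servers, power_delta))
```

Real allocators must also respect many other constraints (memory, fault domains, future load), which is where reliable large-scale systems work comes in.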
Data, Knowledge, and Intelligence (DKI), MSR Asia
System, MSR Redmond
- Currently enrolled in a graduate program in computer science or equivalent field
- Good research track record in related areas
- Able to carry out research tasks with high quality
- Good communication and presentation skills in written and oral English
- Knowledge and experience in machine learning, data mining and data analytics are preferred
- Familiarity with AIOps or AI for systems is a strong plus