星跃计划 | New Openings! The MSR Asia and Microsoft Global Headquarters Joint Research Program Invites Your Application!

2024-02-24 | Author: Microsoft Research Asia

The 星跃计划 joint research program, launched by Microsoft Research Asia together with Microsoft headquarters, invites you to apply! New projects from the Applied Research team of Microsoft Experiences + Devices and the Microsoft Security AI Research team are now open, and we welcome your interest and applications. Join 星跃计划, cross the oceans with us, and explore more possibilities in research!

The program aims to give outstanding talent the opportunity to work with research teams at Microsoft's global headquarters on real, frontier problems. Under the joint guidance of two top mentors, one from Microsoft headquarters and one from Microsoft Research Asia, you will conduct impactful research in an international research environment with a diverse and inclusive atmosphere.

The joint research projects recruiting in this round are Smarter, Faster, Cheaper: Improving Large Language Models by Compressing Prompts and Responses; Automated Evaluation - the Right Way; and Towards Efficient and Robust Retrieval Augmented Generation. Open projects under 星跃计划 are updated continuously, so stay tuned for the latest news!

Program Highlights

Conduct research under the joint guidance of top researchers from both Microsoft Research Asia and Microsoft's global headquarters, and engage in in-depth exchanges with researchers from different backgrounds

Focus on real, frontier problems from industry, and strive to produce results with impact on both academia and industry

Experience Microsoft's international, open research atmosphere and its diverse and inclusive culture through offline and online collaboration

Eligibility

Current Ph.D. students (see individual project requirements); deferred or gap-year students

Able to work full-time in China for 6-12 months

See the project descriptions below for detailed requirements

Smarter, Faster, Cheaper: Improving Large Language Models by Compressing Prompts and Responses

Large language models (LLMs) have revolutionized various fields with technologies like Retrieval Augmented Generation (RAG), In-Context Learning (ICL), Chain of Thought (CoT), and agent-based models. While groundbreaking, these advancements often result in lengthy prompts, which increase computational and financial costs, raise latency, and add redundancy. Moreover, the intrinsic position bias of LLMs and the redundancy within prompts can degrade their performance, leading to the "lost in the middle" issue.

Previous studies have introduced prompt compression methods such as LLMLingua and LongLLMLingua, which address these issues and show promising results in generic scenarios. This project aims to explore research questions around complex scenarios, such as agent-related prompts and the compression of LLM responses. Furthermore, it seeks to investigate the effects of such compression techniques on adversarial attacks, security, and other critical aspects.
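As a concrete, heavily simplified illustration of the idea behind prompt compression, the sketch below ranks sentences by a crude information score and keeps the highest-scoring ones within a token budget. This is not the LLMLingua algorithm: real methods use a small language model's perplexity to decide which tokens are redundant, whereas the stopword-based score here is purely illustrative.

```python
# Toy prompt compression: keep the most informative sentences within a budget.
# The stopword-based score is a crude stand-in for LM-based surprisal.
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are", "in", "that"}

def info_score(sentence: str) -> float:
    """Fraction of non-stopword tokens in the sentence."""
    tokens = [t.strip(".,!?") for t in sentence.lower().split()]
    if not tokens:
        return 0.0
    return sum(t not in STOPWORDS for t in tokens) / len(tokens)

def compress_prompt(prompt: str, token_budget: int) -> str:
    """Greedily keep the highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", prompt) if s.strip()]
    ranked = sorted(range(len(sentences)),
                    key=lambda i: info_score(sentences[i]), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(sentences[i].split())  # whitespace tokens as a proxy for LM tokens
        if used + cost <= token_budget:
            kept.add(i)
            used += cost
    return " ".join(sentences[i] for i in sorted(kept))
```

The compressed prompt preserves sentence order, since position matters to the downstream LLM; only the selection step is greedy.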

Research Areas

Large Language Models, Agent-based Models, Efficient Methods

Qualifications

Passionate about research in LLMs and AGI
Solid coding and communication skills
Prior experience in LLM and/or Machine Learning research is preferred.

Automated Evaluation - the Right Way

While there have been significant efforts to leverage LLMs as evaluators, the approach is not quite there yet: it is useful only in English and for certain tasks, which severely limits its usability and trustworthiness across languages. Join a groundbreaking project at Microsoft Research Asia focused on answering fundamental questions around LLM-based evaluation, with direct production impact. This project aims to surpass the current capabilities of LLMs in certain tasks, emphasizing accuracy, reliability, robustness, and generalizability. The intern will be instrumental in creating a production-deployed system that adapts to the needs of hundreds of millions of users, and in answering fundamental questions about the capabilities, limitations, and uses of LLMs and beyond.
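To make "LLM-based evaluation" concrete, here is a minimal LLM-as-judge harness. The judge call is stubbed out, and every name (the rubric wording, the `Score: <n>` convention, the function names) is an illustrative assumption, not the project's actual design; in practice `judge` would query a real LLM.

```python
# Minimal LLM-as-judge harness: format a rubric prompt, parse a 1-5 score
# from the judge's reply, and average over the evaluation samples.
import re
from statistics import mean
from typing import Callable

RUBRIC = (
    "Rate the candidate answer against the reference on a 1-5 scale.\n"
    "Question: {question}\nReference: {reference}\nCandidate: {candidate}\n"
    "Reply with 'Score: <n>' and a one-line justification."
)

def parse_score(judge_output: str) -> int:
    """Extract the integer score; fall back to the lowest score if unparseable."""
    m = re.search(r"Score:\s*([1-5])", judge_output)
    return int(m.group(1)) if m else 1

def evaluate(samples: list[dict], judge: Callable[[str], str]) -> float:
    """Average judge score over (question, reference, candidate) samples."""
    return mean(parse_score(judge(RUBRIC.format(**s))) for s in samples)
```

Robust parsing with a conservative fallback matters here: judge models do not always follow the output format, and a crash-free default keeps a production evaluation pipeline running.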

Research Areas

Large Language Models, LLM-based Evaluation, LLMs for Low-Resource Languages

Qualifications

Passionate about research in LLMs and AGI
Solid coding and communication skills
Graduate student working towards a Ph.D.
Prior experience in LLM and/or Machine Learning research is preferred.

Towards Efficient and Robust Retrieval Augmented Generation

Retrieval-augmented generation (RAG) is a technique for enhancing the quality of responses generated by large language models (LLMs) by using external sources of knowledge to supplement the LLM's internal representation of information. RAG allows LLMs to access the most up-to-date and reliable facts from a knowledge base or internal information store. It can be used for various natural language generation tasks, such as question answering, summarization, and chat. However, the retrieved documents may be redundant and noisy. This project aims to develop efficient and robust RAG methods that shorten the context by removing contradictions and redundancy, reduce hallucination, and remain robust across domains.
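As a toy illustration of the retrieve-then-deduplicate step this project targets, the sketch below retrieves top-k documents and skips near-duplicates before building the prompt. Bag-of-words cosine similarity stands in for a real dense retriever; the threshold and function names are assumptions for illustration, not the project's actual method.

```python
# Toy RAG retrieval with redundancy filtering: rank documents by cosine
# similarity to the query, then skip any document too similar to one
# already selected, so the context stays short and non-redundant.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a stand-in for a dense retriever."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 3, dedup: float = 0.9) -> list[str]:
    """Top-k docs by similarity to the query, skipping near-duplicates."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    selected: list[str] = []
    for d in ranked:
        if len(selected) == k:
            break
        # Redundancy filter: drop docs too similar to one already selected.
        if all(cosine(embed(d), embed(s)) < dedup for s in selected):
            selected.append(d)
    return selected

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"
```

Filtering duplicates after ranking keeps the most relevant copy of each fact while freeing context budget for complementary documents, which is exactly the efficiency/robustness trade-off the project description highlights.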

Research Areas

Large Language Models, Retrieval-Augmented Generation

Qualifications

Passionate about research in LLMs and AGI
Solid coding and communication skills
Prior experience in LLM and/or Machine Learning research is preferred.

 

How to Apply

Eligible applicants, please fill out the application form below:

https://jinshuju.net/f/EckoQG

Or scan the QR code below to start your application!

[QR code: application form]
