Over the past few years, large-scale pretrained models with billions of parameters have improved the state of the art in nearly every natural language processing (NLP) task. These models are fundamentally changing the research and development of NLP and AI in general. Recently, researchers have been expanding such models beyond natural language text to include more modalities, such as structured knowledge bases, images, and videos. Against this background, the talks in this session introduce the latest advances in pretrained models and discuss the future of this research frontier. Hear from Lei Ji and Chenfei Wu from Microsoft Research Asia, in the second of three talks on recent advances and applications of language model pretraining.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit