
The Student Research Symposium (SRS) is an LTI-wide event where students present their recent or ongoing research and projects to the community. Student submissions are selected to be presented as talks or posters/demos throughout the day-long event. During the event, talks and posters/demos are judged by a panel of students and faculty, and cash prizes are given to the winners (one $500 prize for each track). This is a great chance to enjoy the camaraderie of students and faculty, and munch on good food together! In addition, if you have a speaking requirement for your degree program (e.g., MLT), participating in this event can fulfill that requirement!

Event Schedule

The SRS will take place on August 24, 2022, in-person at GHC 4307.

The full schedule for the 2022 SRS is now available. Please click the link below to view the schedule:
SRS 2022 Schedule

Invited Keynote Speaker: Chenyan Xiong

Talk: In search for more efficient language model pretraining
The scale of pretrained language models has grown exponentially in the past several years. While the benefits of larger scale are well observed, the return on investment (the scaling law of effectiveness versus computation cost) is less clear. In this talk, we discuss our recent efforts to find more efficient ways to pretrain language models, with the goal of achieving stronger downstream generalization ability with language models at more commodity scales. We will first present our recent progress in leveraging model-generated pretraining signals to form more efficient pretraining curricula. Then we will recap a series of techniques that improve optimization stability and efficiency when pretraining large language models. Lastly, I will share a few exploratory fronts that I believe are critical for more efficient language model pretraining.

Bio
Chenyan Xiong is a principal researcher at Microsoft Research, Redmond. His research lies at the intersection of information retrieval, natural language processing, and deep learning. His current focus is representation learning for information and language systems and efficient pretraining of large-scale deep learning models. Before joining Microsoft Research, Chenyan obtained his Ph.D. at the Language Technologies Institute, Carnegie Mellon University, in 2018. He publishes at IR, NLP, and ML conferences, has received best paper and best reviewer awards, and has developed techniques shipped to production systems at the scale of billions or trillions.

Important Dates

Submission Deadline: July 15, 2022
Notification of Acceptance: Aug 1, 2022
Notice of Final Presentation Schedule: Aug 16, 2022
Final Presentation Date: Aug 24, 2022 (in-person at GHC 4307)

Tracks

Submitted Work Track
You can submit previous work that has been published or accepted at other conferences. This is a good opportunity to present and publicize your work to the community, and also to gather insights for your future work.

Submission format: We accept full paper submissions in either 8-page long paper format or 4-page short paper format.

Novel Work Track
You can also submit any novel or ongoing work that you might not have already submitted to a conference. This will be a good opportunity for you to get feedback on your project and ideas.

Submission format: We accept extended abstracts or short papers (1-2 pages long).

Demo Track
You can submit tools/software that you have built to this track. This could be either a demonstration of your research project or a toolkit you created that is useful or interesting. Examples include a GitHub repository, a website, or software from which you can create a demo (e.g., ExplainaBoard, rebiber).

Submission format: Describe the functions and usage of your tool in up to 4 pages. 

All paper submissions should follow the official ACL style templates, which are available here (https://github.com/acl-org/acl-style-files).

Submission Guidelines

Any research conducted by current LTI students (at LTI or elsewhere) is appropriate for presentation at the Symposium.

Submissions should be sent via the EasyChair electronic submission system (https://easychair.org/conferences/?conf=ltisrs22) by July 15, 2022 (11:59 pm EST).

Presentation Format

Oral Presentation Format
Each oral presentation slot will be around 30 minutes long, with 20 minutes for the oral presentation itself and 10 minutes for questions. An oral presentation at the LTI SRS fulfills the yearly speaking requirement for all PhD students and second-year MLT students.

Poster Format
Posters can be any reasonable size, landscape or portrait, as for major conferences. More details are available on request from the organizers. If you have a particularly wide poster or want to include a demo, please consult with us so that appropriate arrangements can be made.

If you have a specific preference for an oral or poster presentation, please indicate it in the submission portal, and we will make an effort to accommodate it.

Topics of Interest

Acceptable topics include, but are not limited to:

  • Computational Social Science and Cultural Analytics

  • Dialogue and Interactive Systems

  • Discourse and Pragmatics

  • Efficient Methods for NLP

  • Ethics and NLP

  • Generation

  • Information Extraction

  • Information Retrieval and Text Mining

  • Interpretability and Analysis of Models for NLP

  • Knowledge Representation and Reasoning 

  • Language Grounding to Vision, Robotics, and Beyond

  • Linguistic Theories, Cognitive Modeling, and Psycholinguistics

  • Machine Learning for NLP

  • Machine Translation and Multilinguality

  • Multimodality

  • NLP Applications

  • Phonology, Morphology, and Word Segmentation

  • Question Answering

  • Resources and Evaluation

  • Semantics: Lexical

  • Semantics: Sentence-Level Semantics, Textual Inference, and Other Areas

  • Sentiment Analysis, Stylistic Analysis, and Argument Mining

  • Speech Processing, Speech Recognition, Speech Synthesis

  • Summarization

  • Syntax: Tagging, Chunking, and Parsing


Organizing Committee



Feel free to contact the SRS student committee members if you have any questions!