Welcome to the listing and information directory for all courses that have ever been offered by the LTI. Courses are listed in numerical order, followed by a summary of each individual course below. Selecting a course number will take you directly to the appropriate listing for further information. For a list of courses currently being offered, please visit the Schedule of Classes on the Enrollment Services website.
This list includes several courses from outside of the LTI that are especially relevant to LTI students. Further information about these courses is available on the web pages of the departments that offer them.
Depending on a student's interests, electives may be taken from the LTI, other departments within SCS, the Tepper School of Business, the Statistics department, or the University of Pittsburgh.
Note: Some courses that were cross-listed in the past are no longer cross-listed. A course that is not currently cross-listed does not count as an LTI course.
Course | Title | Units | Semester
11-344 | Machine Learning in Practice | 12 | Fall/Spring
11-390 | LTI Minor Project - Juniors | 12 | All
11-411 | Natural Language Processing | 12 | Fall/Spring
11-423 | ConLanging: Learning About Linguistics and Language Technologies Through Construction of Artificial Languages | 12 | Spring
11-441 | Machine Learning for Text Mining (Cross-listed with 11-741/11-641) | 9 | Fall/Spring
11-442 | Search Engines | 9 | Fall/Spring
11-443 | Machine Learning for Text Analysis (Renamed as 11-441 since Spring 2015) | 12 | All
11-465 | Special Topics: Digital Signal Processing | 12 | Intermittent
11-490 | LTI Minor Project - Seniors | 12 | All
11-492 | Speech Processing | 12 | Fall
Course | Title | Units | Semester
11-590 | LTI Minor Project - Advanced | 12 | All
11-601 | Coding Boot-Camp | 12 | Fall/Spring
11-611 | Natural Language Processing | 12 | Fall/Spring
11-623 | ConLanging: Learning About Linguistics and Language Technologies Through Construction of Artificial Languages | 12 | Spring
11-624 | Human Language for Artificial Intelligence | 12 | Fall
11-630 | MCDS Practicum - Internship | 0 | Summer All
11-631 | Data Science Seminar | 12 | Fall
11-632 | Data Science Capstone | 12 | Fall
11-633 | MCDS Independent Study | Variable | Fall/Spring/Summer
11-634 | Capstone Planning Seminar | 12 | Spring
11-635 | Data Science Capstone - Research | 12 | Fall
11-641 | Machine Learning for Text Mining (Cross-listed with 11-741/11-441) | 12 | Fall/Spring
11-642 | Search Engines | 12 | Fall/Spring
11-643 | Machine Learning for Text Analysis (Renamed as 11-641 since Spring 2015) | 12 | Fall/Spring
11-651 | New Technologies and Future Markets | 12 | Fall
11-654 | Enterprise Development | 12 | Fall
11-661 | Language and Statistics | 12 | Fall
11-663 | Applied Machine Learning | 12 | Fall/Spring
11-675 | Big Data Systems in Practice | 12 | Spring
11-676 | Big Data Analytics | 12 | Fall
11-683 | Biotechnology Outsourcing Growth | 6-Mini | Spring
11-688 | Computational Forensics and Investigative Intelligence | 12 | Spring
11-690 | MIIS Directed Study | 12 | Fall/Spring
11-691 | Mathematical Foundations for Data Science | 12 | Fall
11-692 | Speech Processing | 12 | Fall
11-693 | Software Method for Biotechnology | 6-Mini | Fall
11-695 | Competitive Engineering | 12 | Spring
11-696 | MIIS Capstone Planning Seminar | 6 | Spring
11-697 | MIIS Capstone Project | 36 | Fall
11-699 | MSBIC Program Capstone | 36 | Spring
11-700 | LTI Colloquium | 6 | Fall/Spring
11-711 | Algorithms for NLP | 12 | Fall
11-712 | Lab in NLP | 6 | Spring
11-713 | Advanced NLP Seminar | 6 | Intermittent
11-714 | Tools for NLP | 6 | Intermittent
11-716 | Graduate Seminar on Dialog Processing | 6 | Fall
11-717 | Language Technologies for Computer Assisted Language Learning | 12 | Intermittent
11-718 | Conversational Interfaces | 12 | Intermittent
11-719 | Computational Models of Discourse Analysis | 12 | Spring
11-721 | Grammars and Lexicons | 12 | Fall
11-722 | Grammar Formalisms | 12 | Intermittent
11-723 | Linguistics Lab | 6 | Fall/Spring
11-724 | Human Language for Artificial Intelligence | 12 | Fall
11-725 | Meaning in Language | 12 | Intermittent
11-726 | Meaning in Language Lab (Self-Paced) | 6 | Fall/Spring
11-727 | Computational Semantics for NLP | 12 | Spring
11-728 | Advanced Seminar in Semantics | 6 | Intermittent
11-731 | Machine Translation and Sequence-to-Sequence Models | 12 | Fall
11-732 | Self-Paced Lab: MT | 6 | Fall/Spring
11-733 | Multilingual Speech-to-Speech Translation Lab | 6 | Intermittent
11-734 | Advanced Machine Translation Seminar | 6 | Intermittent
11-736 | Graduate Seminar on Endangered Languages | 6 | Intermittent
11-741 | Machine Learning for Text Mining (Cross-listed with 11-641/11-441) | 12 | Fall/Spring
11-742 | Search Engines | |
11-743 | Self-Paced Lab: IR | 6 | Fall/Spring
11-744 | Experimental Information Retrieval | 12 | Intermittent
11-745 | Advanced Statistical Learning Seminar | 12 | Fall
11-751 | Speech Recognition and Understanding | 12 | Fall
11-752 | Speech II: Phonetics, Prosody, Perception and Synthesis | 12 | Intermittent
11-753 | Advanced Laboratory in Speech Recognition | 6 | Spring
11-754 | Project Course: Dialogue Systems | 6 | Spring
11-755 | Machine Learning for Signal Processing | 12 | Fall/Spring
11-756 | Design and Implementation of Speech Recognition Systems | 12 | Intermittent
11-757 | Advanced Topics: Statistical Modeling for Spoken Dialog Systems | 12 | Intermittent
11-761 | Language and Statistics | 12 | Fall
11-762 | Language and Statistics II | 12 | Intermittent
11-763 | Structured Prediction for Language and Other Discrete Data | 12 | Spring
11-765 | Active Learning Seminar | 6 | Intermittent
11-767 | On-Device Machine Learning | 12 | Fall
11-772 | Analysis of Social Media | 12 | Intermittent
11-775 | Large-Scale Multimedia Analysis | 12 | Spring
11-776 | Human Communication and Multimodal Machine Learning | 12 | Intermittent
11-777 | Multimodal Machine Learning | 12 | Fall
11-780 | Research Design and Writing | 12 | Intermittent
11-782 | Self-Paced Lab for Computational Biology | 6-12 | Fall/Spring
11-783 | Self-Paced Lab: Rich Interaction in Virtual World | 6 | Spring
11-785 | Introduction to Deep Learning | 12 | Fall/Spring
11-791 | Design and Engineering of Intelligent Information Systems | 12 | Fall/Spring
11-792 | Intelligent Information Systems Project | 12 | Fall/Spring
11-794 | Inventing Future Services | 12 | Intermittent
11-795 | Seminar: Algorithms for Privacy and Security | 6 | Intermittent
11-796 | Question Answering Lab | 6 | Spring
11-797 | Question Answering | 12 | Spring
11-805 | Machine Learning with Large Datasets | 12 | Intermittent
11-821 | Advanced Linguistics Seminar | 6 | Spring
| ConLanging: Learning About Linguistics and Language Technologies Through Construction of Artificial Languages | 12 | Spring
| Computational Ethics for NLP | 12 | Spring
11-899 | Summarization and Personal Information Management | 12 | Intermittent
11-910 | Directed Research | 1-48 | All
11-920 | Independent Study: Breadth | 1-48 | All
11-925 | Independent Study: Area | 1-48 | All
11-928 | Masters Thesis I | 5-36 | All
11-929 | Masters Thesis II | 5-36 | All
11-930 | Dissertation Research | 1-48 | All
11-935 | LTI Practicum | 1-36 | All
Course | Title
10-601 / 10-701 | Machine Learning (can only count under one focus per student) - LTI PhD students must register for 10-701 for it to count towards their required 8 courses. LTI Masters students should register for 10-601.
15-750 | Algorithms
15-780 | Artificial Intelligence
15-883 | Computational Models of Neural Systems
11-344 - Machine Learning in Practice
Description | Machine Learning is concerned with computer programs that enable the behavior of a computer to be learned from examples or experience rather than dictated through rules written by hand. It has practical value in many application areas of computer science, such as on-line communities and digital libraries. This class is meant to teach the practical side of machine learning for applications, such as mining newsgroup data or building adaptive user interfaces. The emphasis will be on learning the process of applying machine learning effectively to a variety of problems rather than on the theory behind what makes machine learning work. This course does not assume any prior exposure to machine learning theory or practice. In the first two-thirds of the course, we will cover a wide range of learning algorithms that can be applied to a variety of problems, including decision trees, rule-based classification, support vector machines, Bayesian networks, and clustering. In the final third of the class, we will go into more depth on one application area: the application of machine learning to problems involving text processing, such as information retrieval or text categorization.
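To make the "practical side" mentioned above concrete, here is a minimal sketch of newsgroup text categorization with an off-the-shelf toolkit. The use of scikit-learn, the two newsgroup categories, and the linear SVM are illustrative assumptions, not material from the course.

```python
# Illustrative sketch only (not course material); assumes scikit-learn is installed.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

cats = ["sci.space", "rec.autos"]                      # two arbitrary newsgroups
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# Bag-of-words (TF-IDF) features feeding a linear support vector machine.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train.data, train.target)

print("test accuracy:", accuracy_score(test.target, model.predict(test.data)))
```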
11-390 - LTI Minor Project - Juniors
11-411 - Natural Language Processing
Description | This course will introduce students to the highly interdisciplinary area of Artificial Intelligence known alternately as Natural Language Processing (NLP) and Computational Linguistics. The course aims to cover the techniques used today in software that does useful things with text in human languages like English and Chinese. Applications of NLP include automatic translation between languages, extraction and summarization of information in documents, question answering and dialog systems, and conversational agents. This course will focus on core representations and algorithms, with some time spent on real-world applications. Because modern NLP relies so heavily on Machine Learning, we'll cover the basics of discrete classification and probabilistic modeling as we go. Good computational linguists also know about Linguistics, so topics in linguistics (phonology, morphology, and syntax) will be covered where they fit. From a software engineering perspective, there will be an emphasis on rapid prototyping, a useful skill in many other areas of Computer Science. In particular, we will introduce some high-level languages (e.g., regular expressions and Dyna) and some scripting languages (e.g., Python and Perl) that can greatly simplify prototype implementation.
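As a small illustration of the rapid prototyping the description mentions, the sketch below uses a regular expression in Python to tokenize text. The token pattern is an invented example, not a specification from the course.

```python
# Illustrative sketch only: a tiny regular-expression tokenizer.
import re

TOKEN = re.compile(r"\w+(?:'\w+)?|[^\w\s]")   # words, contractions, punctuation marks

def tokenize(text):
    """Split raw text into word and punctuation tokens."""
    return TOKEN.findall(text)

print(tokenize("Don't panic: NLP prototypes can be short."))
# ["Don't", 'panic', ':', 'NLP', 'prototypes', 'can', 'be', 'short', '.']
```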
Pre-Requisites | 15-211 Fundamental Data Structures and Algorithms
Course Site |
11-441 - Machine Learning for Text Mining (Cross-listed with 11-741/11-641)
Description | Offered Fall/Spring. This is a full-semester lecture-oriented course for PhD-level, MS-level, and undergraduate students who meet the prerequisites. It offers a blend of core theory, algorithms, evaluation methodologies, and applications of scalable data analytic techniques. Note that 11-741 and 11-641 are 12-unit courses for graduate students, while 11-441 is a 9-unit course for undergraduate students. Although the lectures are the same for all students, the workload differs by course: the required course work in 11-441 is a subset of that in 11-641, and the work in 11-641 is a subset of that in 11-741. See the detailed distinctions in the Grading section. 11-741 is among the required courses for PhD candidates in the Language Technologies Institute, while 11-641 counts as a master's-level course. Graduate students can choose either 11-741 or 11-641, depending on their career goals and backgrounds. Undergraduate students should take 11-441; exceptions are possible if approved by the instructor.
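As a small, self-contained illustration of the kind of text-analytics building block this course family deals with, the sketch below ranks a toy document collection against a query using TF-IDF weighting and cosine similarity. The corpus, query, and weighting details are assumptions made for the example, not material taken from the course.

```python
# Illustrative sketch only: TF-IDF vectors and cosine-similarity ranking.
import math
from collections import Counter

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today"]
tokenized = [d.split() for d in docs]
df = Counter(t for toks in tokenized for t in set(toks))     # document frequencies

def tfidf(tokens, df, n_docs):
    tf = Counter(tokens)
    # term frequency times idf = log(N / df); terms unseen in the corpus are skipped
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf if t in df}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

vectors = [tfidf(toks, df, len(docs)) for toks in tokenized]
query = tfidf("cats and dogs".split(), df, len(docs))
best = max(range(len(docs)), key=lambda i: cosine(query, vectors[i]))
print("best match:", docs[best])        # -> "dogs and cats are pets"
```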
Pre-Requisites |
Course Site | http://www.cs.cmu.edu/~yiming/MLTM-f20-index.htm
11-442 - Search Engines
Description | This course studies the theory, design, and implementation of text-based search engines. The core components include statistical characteristics of text, representation of information needs and documents, several important retrieval models, and experimental evaluation. The course also covers common elements of commercial search engines, for example, integration of diverse search engines into a single search service ("federated search", "vertical search"), personalized search results, and diverse search results. The software architecture components include design and implementation of large-scale, distributed search engines. This is a full-semester lecture-oriented course worth 9 units.
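For a concrete sense of what a retrieval model looks like, here is a brief sketch of Okapi BM25 scoring, one widely used model of the kind such a course might cover. The toy corpus and the parameter values k1=1.2 and b=0.75 are conventional illustrative choices, not course-specified.

```python
# Illustrative sketch only: Okapi BM25 scoring of documents against a query.
import math
from collections import Counter

def bm25(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = doc_freq.get(term, 0)
        if df == 0:
            continue
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score

docs = [d.split() for d in ["deep learning for search",
                            "search engines index text",
                            "cats purr loudly"]]
doc_freq = Counter(t for d in docs for t in set(d))
avg_len = sum(len(d) for d in docs) / len(docs)
scores = [bm25("search engines".split(), d, doc_freq, len(docs), avg_len) for d in docs]
print("best document:", " ".join(docs[max(range(len(docs)), key=scores.__getitem__)]))
```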
Eligibility | This course is intended for undergraduates, although it is open to all students who meet the pre-requisites.
Pre-Requisites | This course requires good programming skills and an understanding of computer architectures and operating systems (e.g., memory vs. disk trade-offs). A basic understanding of probability, statistics, and linear algebra is helpful. Students should have preparation comparable to the corresponding CMU undergraduate courses.
Website | http://boston.lti.cs.cmu.edu/classes/11-442/
11-490 - LTI Minor Project - Seniors
11-590 - LTI Minor Project - Advanced
11-611 - Natural Language Processing
Description | Natural Language Processing is an introductory graduate-level course on the computational properties of natural languages and the fundamental algorithms for processing natural languages. The course will provide an in-depth presentation of the major algorithms used in NLP, including Lexical, Morphological, Syntactic and Semantic analysis, with the primary focus on parsing algorithms and their analysis.
Pre-Requisites | 15-211 Fundamental Data Structures and Algorithms
Course Site |
11-641 - Machine Learning for Text Mining (Cross-listed with 11-741/11-441)
Description | Offered Fall/Spring. This is a full-semester lecture-oriented course for PhD-level, MS-level, and undergraduate students who meet the prerequisites. It offers a blend of core theory, algorithms, evaluation methodologies, and applications of scalable data analytic techniques. Note that 11-741 and 11-641 are 12-unit courses for graduate students, while 11-441 is a 9-unit course for undergraduate students. Although the lectures are the same for all students, the workload differs by course: the required course work in 11-441 is a subset of that in 11-641, and the work in 11-641 is a subset of that in 11-741. See the detailed distinctions in the Grading section. 11-741 is among the required courses for PhD candidates in the Language Technologies Institute, while 11-641 counts as a master's-level course. Graduate students can choose either 11-741 or 11-641, depending on their career goals and backgrounds. Undergraduate students should take 11-441; exceptions are possible if approved by the instructor.
Pre-Requisites |
Course Site | http://www.cs.cmu.edu/~yiming/MLTM-f20-index.htm
11-642 - Search Engines
Description | This course studies the theory, design, and implementation of text-based search engines. The core components include statistical characteristics of text, representation of information needs and documents, several important retrieval models, and experimental evaluation. The course also covers common elements of commercial search engines, for example, integration of diverse search engines into a single search service ("federated search", "vertical search"), personalized search results, and diverse search results. The software architecture components include design and implementation of large-scale, distributed search engines. This is a full-semester lecture-oriented course worth 12 units.
Eligibility | This course is open to all students who meet the pre-requisites.
Pre-Requisites | This course requires good programming skills and an understanding of computer architectures and operating systems (e.g., memory vs. disk trade-offs). A basic understanding of probability, statistics, and linear algebra is helpful. Students should have preparation comparable to the corresponding CMU undergraduate courses.
Website | http://boston.lti.cs.cmu.edu/classes/11-642/
11-661 - Language and Statistics
Description | The goal of "Language and Statistics" is to ground the data-driven techniques used in language technologies in sound statistical methodology. We start by formulating various language technology problems in both an information theoretic framework (the source-channel paradigm) and a Bayesian framework (the Bayes classifier). We then discuss the statistical properties of words, sentences, documents and whole languages, and the various computational formalisms used to represent language. These discussions naturally lead to specific concepts in statistical estimation. Topics include: Zipf's distribution and type-token curves; point estimators, Maximum Likelihood estimation, bias and variance, sparseness, smoothing and clustering; interpolation, shrinkage, and backoff; entropy, cross entropy and mutual information; decision tree models applied to language; latent variable models and the EM algorithm; hidden Markov models; exponential models and the maximum entropy principle; semantic modeling and dimensionality reduction; probabilistic context-free grammars and syntactic language models.
Course Site |
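To ground a couple of the estimation ideas listed above, here is a minimal sketch of a maximum-likelihood bigram model with add-one (Laplace) smoothing, evaluated by cross-entropy in bits per token. The tiny corpus is invented purely for illustration.

```python
# Illustrative sketch only: add-one smoothed bigram model and its cross-entropy.
import math
from collections import Counter

train = "the cat sat on the mat </s> the dog sat </s>".split()
test = "the cat sat </s>".split()

bigrams = Counter(zip(train, train[1:]))
unigrams = Counter(train)
V = len(set(train))                       # vocabulary size for add-one smoothing

def p(w2, w1):
    """Add-one smoothed bigram probability P(w2 | w1)."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

# Cross-entropy: average negative log2 probability of the test bigrams.
logprob = sum(math.log2(p(w2, w1)) for w1, w2 in zip(test, test[1:]))
print("cross-entropy:", -logprob / (len(test) - 1), "bits per token")
```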
11-663 - Applied Machine Learning
Description | Machine Learning is concerned with computer programs that enable the behavior of a computer to be learned from examples or experience rather than dictated through rules written by hand. It has practical value in many application areas of computer science, such as on-line communities and digital libraries. This class is meant to teach the practical side of machine learning for applications, such as mining newsgroup data or building adaptive user interfaces. The emphasis will be on learning the process of applying machine learning effectively to a variety of problems rather than on the theory behind what makes machine learning work. This course does not assume any prior exposure to machine learning theory or practice. In the first two-thirds of the course, we will cover a wide range of learning algorithms that can be applied to a variety of problems, including decision trees, rule-based classification, support vector machines, Bayesian networks, and clustering. In the final third of the class, we will go into more depth on one application area: the application of machine learning to problems involving text processing, such as information retrieval or text categorization.
11-683 - Biotechnology Outsourcing Growth
Description | An especially dangerous time for new ventures is right after the initial product launch. At startup, many ventures run lean with a small headcount and minimal operational overhead. After some success, the startup is compelled to expand headcount, increase capital investment, and scale up operations. In many cases, what was a promising theoretical business model may fail due to inadequate growth management. Biotechnology companies in particular are increasingly outsourcing key functions to reduce cost and increase efficiency. The capital cost for laboratories and specialized lab technicians is often prohibitive for biotech startups with a clear and narrow focus. Biotech startups are therefore running much leaner but with a distributed organizational structure. Under these circumstances, managing outsourced functions becomes critical and is a focus of this course. This course will introduce students to issues in growth strategy and outsourcing management.
11-688 - Computational Forensics and Investigative Intelligence
Description | This course covers the use of computational methods in crime investigation (forensics) and prevention (intelligence). In almost all areas of forensics and intelligence, computational methods continue to aid, and sometimes entirely replace, human expertise in tracking crime. This is desirable since automation can address the problems associated with scale and global crime linkage through diverse data, and computational tools can potentially overcome and surpass human capabilities for crime investigation. This course is cross-disciplinary in nature. It amalgamates knowledge from criminology, forensic sciences, computer science, statistics, signal processing, machine learning, AI, psychology, medicine and many other fields. Students from all departments and schools are welcome to take this course.
Course Syllabus | http://mlsp.cs.cmu.edu/people/rsingh/docs/CFII_Syllabus.pdf
11-690 - MIIS Directed Study
Description | Students must pass 24 units (typically 12 units x 2 semesters) in directed study under the supervision of their advisor. It is a structured, task-oriented form of independent study that provides deep, hands-on experience in a particular technology area and an opportunity to work closely with a member of the faculty.
11-691 - Mathematical Foundations for Data Science
Description | There is a familiar picture regarding software development: it is often delivered late, over budget, and lacking important features. There is often a failure to capture the customer's actual way of accomplishing work and then to create a realistic project plan. This is especially important because software development in the life sciences involves creating applications that are relatively new to the industry. The course will introduce students to the "Balanced Framework" project management process, which assists biotechnology organizations in planning and managing software projects that support their product development. It provides for the identification, structuring, evaluation and ongoing management of the software projects that deliver the benefits expected from the organization's investments. It focuses on the delivery of the business value initiated by the project, helping an organization answer the basic question, "Are the things we are doing providing value to the business?" In this course, students will learn how to examine and explain customer processes and create requirements that reflect how work is actually done. Students will additionally create a software project plan that incorporates problem framing, customer workflow, planning, project tracking, monitoring, and measurement.
11-693 - Software Method for Biotechnology
Description | Moore's law describes how processing power continues to become faster, better, and cheaper. It not only powered the computer industry forward, but is also a key driver propelling biotechnology. It is hard to imagine the world of biotechnology without the world of software, and the future will further underscore software's importance for enabling biotechnology innovations. This course focuses on the relationship between biotechnology processes and information technology: students will be introduced to business process workflow modeling and how these concepts are applied in large organizations. Through this method, students will learn the key drivers behind information systems and how to identify organizational opportunities and leverage them to create disruptive models. Students will also learn to assess new technology sectors for unsolved problems and commercially viable solutions. By taking this course, students will become conversant with the software technologies that can be applied to commercial life science problems in the present and future.
11-695 - Competitive Engineering
Description | In the second core course, students will be tasked with building a software application prototype for a biotech/pharmaceutical firm. Students will be introduced to a particular firm (through one of the program advisors) and will learn how to conduct and develop requirements analysis and convert it into feature definitions. The customer requirements are often a moving target: they're influenced by the emergence of competitive alternatives (e.g., internal consultants, off-the-shelf software) and also by the teams' interaction with each other. Students will learn to create a product that captures the best balance of customer priorities and feasibility while distinguishing it from competitive alternatives. They will then use this learning to develop their respective prototypes. At the conclusion of the term, teams will compete with each other to determine which team's product is superior. In addition to having to apply various aspects of software development and computational learning, the course will help to provide students with some key insights into how biotech/pharmaceutical businesses operate. Beyond concepts regarding market demand, students will learn how to aggregate and synthesize information related to demand, pricing and competition. They will then apply this learning to define and prioritize market-driven requirements as they relate to a product. This information will then be used to build a product development plan. Students will utilize methods to enhance product quality and customer satisfaction: benchmarking; industry and customer analyses; project metrics; and a range of customer relationship management tools.
11-696 - MIIS Capstone Planning Seminar
Description | The MIIS Capstone Planning Seminar prepares students to complete the MIIS Capstone Project in the following semester. Students are organized into teams that will work together to complete the capstone project. They define project goals, requirements, success metrics, and deliverables; and they identify and acquire data, software, and other resources required for successful completion of the project. The planning seminar must be completed in the semester prior to taking the capstone project.
11-697 - MIIS Capstone Project
Description | The capstone project is a large, group-oriented demonstration of student skill in one or more areas covered by the degree. Typically the result of the capstone project is a major software application. The capstone project is supervised by a member of the faculty who meets with students on a weekly basis to monitor progress and provide guidance.
11-700 - LTI Colloquium
Description | The LTI colloquium is a series of talks related to language technologies. The topics include but are not limited to Computational Linguistics, Machine Translation, Speech Recognition and Synthesis, Information Retrieval, Computational Biology, Machine Learning, Text Mining, Knowledge Representation, Computer-Assisted Language Learning, Intelligent Language Tutoring, and Multimodal Computing and Interaction. To receive credit for the course, students must attend all lectures and successfully complete an on-line quiz made available at the close of each talk.
Course Site | TBA
11-711 - Algorithms for Natural Language Processing
Description | Algorithms for NLP is an introductory graduate-level course on the computational properties of natural languages and the fundamental algorithms for processing natural languages. The course will provide an in-depth presentation of the major algorithms used in NLP, including Lexical, Morphological, Syntactic and Semantic analysis, with the primary focus on parsing algorithms and their analysis.
Topics | Introduction to Formal Language Theory, Search Techniques, Morphological Processing and Lexical Analysis, Parsing Algorithms for Context-Free Languages, Unification-based Grammars and Parsers, Natural Language Generation, Introduction to Semantic Processing, Ambiguity Resolution Methods
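As one concrete instance of the parsing algorithms listed above, here is a compact sketch of CKY recognition for a context-free grammar in Chomsky normal form. The grammar and sentence are toy examples, not course-provided materials.

```python
# Illustrative sketch only: CKY recognition for a toy CNF grammar.
from collections import defaultdict

binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}   # A -> B C
lexical = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}        # A -> word

def cky_recognize(words):
    n = len(words)
    chart = defaultdict(set)                       # chart[i, j]: nonterminals over words[i:j]
    for i, w in enumerate(words):
        chart[i, i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):              # try every split point
                for b in chart[i, k]:
                    for c in chart[k, j]:
                        chart[i, j] |= binary.get((b, c), set())
    return "S" in chart[0, n]

print(cky_recognize("the dog saw the cat".split()))   # True
```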
Pre-Requisites | A college-level course on algorithms and programming skills; minimal exposure to the syntax and structure of natural language (English)
Co-Requisites | The self-paced Laboratory in NLP (11-712) is designed to complement this course with programming assignments on relevant topics. Students are encouraged to take the lab in parallel with the course or in the following semester.
Course Site | http://demo.clab.cs.cmu.edu/11711fa20/
11-712 - Lab in NLP | |||||||||||||||||||||||
Description | The Self-Paced Lab in NLP Algorithms is intended to complement the 11-711 lecture course by providing a chance for hands-on, in-depth exploration of various NLP paradigms. Students will study a set of on-line course materials and complete a set of programming assignments illustrating the concepts taught in the lecture course. Timing of individual assignments is left up to the student, although all assignments must be successfully completed and turned in before the end of the semester for the student to receive credit for the course. | ||||||||||||||||||||||
Co-Requisites | 11-711 - Algorithms for Natural Language Processing | ||||||||||||||||||||||
11-713 - Advanced NLP Seminar
Description | This course aims to improve participants' knowledge of current techniques, challenges, directions, and developments in all areas of NLP (i.e., across applications, symbolic formalisms, and approaches to the use of data and knowledge); to hone students' critical technical reading skills, oral presentation skills, and written communication skills; and to generate discussion among students across research groups to inspire new research. In a typical semester, a set of readings will be selected (with student input) primarily from the past 2-3 years' conference proceedings (ACL and regional variants, EMNLP, and COLING), journals (CL, JNLE), and relevant collections and advanced texts. Earlier papers may be assigned as background reading. In 2010, the readings will primarily be recent dissertations in NLP. The format of each meeting will include a forty-minute, informal, critical student presentation on the week's readings, with presentations rotating among participants, followed by general discussion. Apart from the presentation and classroom participation, each student will individually write a 3-4-page white paper outlining a research proposal for new work extending research discussed in class - this is similar to the Advanced IR Seminar.
Course Site |
11-714 - Tools for NLP | |||||||||||||||||||||||
Description | This course is designed as a hands-on lab to help students interested in NLP build their own compendium of the open-source tools and resources available online. Ideally taken in the first semester, the course focuses on one basic topic every two weeks, during which each student will download, install, and play with two or three packages, tools, or resources, and compare notes. The end-of-semester assignment will be to compose some of the tools into a system that does something interesting. We will cover a range, from the most basic tools for sentence splitting and punctuation removal through resources such as WordNet and the Penn Treebank to parsing and Information Extraction engines. | ||||||||||||||||||||||
11-716 - Graduate Seminar on Dialog Processing | |||||||||||||||||||||||
Description | Dialog systems and processes are becoming an increasingly vital area of interest both in research and in practical applications. The purpose of this course will be to examine, in a structured way, the literature in this area as well as learn about ongoing work. The course will cover traditional approaches to the problem, as exemplified by the work of Grosz and Sidner, as well as more recent work in dialog, discourse and evaluation, including statistical approaches to problems in the field. We will select several papers on a particular topic to read each week. While everyone will do all readings, a presenter will be assigned to overview the paper and lead the discussion. On occasion, a researcher may be invited to present their own work in detail and discuss it with the group. A student or researcher taking part in the seminar will come away with a solid knowledge of classic work on dialog, as well as familiarity with ongoing trends. | ||||||||||||||||||||||
11-717 - Language Technologies for Computer Assisted Language Learning
Description | This course studies the design and implementation of CALL systems that use Language Technologies such as Speech Synthesis and Recognition, Machine Translation, and Information Retrieval. After a short history of CALL/LT, students will learn where language technologies (LT) can be used to aid in language learning. From there, the course will explore the specifics of designing software that must interface with a language technology. For each LT, we will explore: what information the LT requires; what type of output the LT sends to the CALL interface; what limits of the LT the CALL designer must deal with; what the real-time constraints are; and what type of training the LT requires. The goal of the course is to familiarize the student with: existing systems that use LT; assessment of CALL/LT software; the limitations imposed by the LT; and designing CALL/LT software. Grading criteria: several short quizzes, and a term project consisting of the production of a small CALL/LT system, with a verbal presentation and written documentation of the software design.
11-718 - Conversational Interfaces | |||||||||||||||||||||||
Description | Conversational Interfaces is intended to bring together an interdisciplinary mix of students from the language technologies institute and the human computer interaction institute to explore the topic of conversational interfaces from a user centered, human impact perspective rather than a heavily technology centered one. In this course we will explore through readings and project work such questions as (1) What are the costs and benefits to using a speech/language interface? (2) When is it advantageous to use a speech/language interface over an alternative? (3) What are the factors involved in the design of effective speech/language interfaces, and what impact do they have on the user's experience with the system? (4) How do we evaluate the usability of a speech/language interface? (5) What have we learned from evaluations of speech/language interfaces that have already been built? To what extent does the data support the claims that are made about the special merits of conversational interfaces? | ||||||||||||||||||||||
11-719 - Computational Models of Discourse Analysis
Description | Discourse analysis is the area of linguistics that focuses on the structure of language above the clause level. It is interesting both for the complexity of structures that operate at that level and for the insights it offers about how personality, relationships, and community identification are revealed through patterns of language use. A resurgence of interest in topics related to modeling language at the discourse level is in evidence at recent language technologies conferences. This course is designed to help students get up to speed with foundational linguistic work in the area of discourse analysis, and to use these concepts to challenge the state of the art in language technologies for problems that have a strong connection with those concepts, such as dialogue act tagging, sentiment analysis, and bias detection. This is meant to be a hands-on and intensely interactive course with a heavy programming component. The course is structured around three-week units, all but the first of which have a substantial programming assignment structured as a competition (although grades will not be assigned based on ranking within the competition; rather, grades will be assigned based on demonstrated comprehension of course materials and methodology).
Course Site | http://www.cs.cmu.edu/%7Ecprose/discourse-course.html
11-721 - Grammars and Lexicons | |||||||||||||||||||||||
Description | Grammars and Lexicons is an introductory graduate course on linguistic data analysis and theory, focusing on methodologies that are suitable for computational implementations. The course covers major syntactic and morphological phenomena in a variety of languages. The emphasis will be on examining both the diversity of linguistic structures and the constraints on variation across languages. Students will be expected to develop and defend analyses of data, capturing linguistic generalizations and making correct predictions within and across languages. The goal is for students to become familiar with the range of phenomena that occur in human languages so that they can generalize the insights into the design of computational systems. The theoretical framework for syntactic and lexical analysis will be Lexical Functional Grammar. Grades will be based on problem sets and take-home exams. | ||||||||||||||||||||||
Pre-Requisites | Introductory linguistics course or permission of instructor | ||||||||||||||||||||||
11-722 - Grammar Formalisms
Description | The goal of this course is to familiarize students with grammar formalisms that are commonly used for research in computational linguistics, language technologies, and linguistics. We hope to have students from a variety of disciplines (linguistics, computer science, psychology, modern languages, philosophy) in order to cover a broad perspective in class discussions. Comparison of formalisms will lead to a deeper understanding of human language and natural language processing algorithms. The formalisms will include: Head-Driven Phrase Structure Grammar, Lexical Functional Grammar, Tree Adjoining Grammar, and Categorial Grammar. If time permits, we will cover the Penn Treebank, dependency grammar, and Construction Grammar. We will cover the treatment of basic syntactic and semantic phenomena in each formalism, and will also discuss algorithms for parsing and generating sentences for each formalism. If time permits, we may discuss formal language theory and generative capacity.
11-624 & 11-724 - Human Language for Artificial Intelligence
Description | An enduring aspect of the quest to build intelligent machines is the challenge of human language. This course introduces students with a background in computer science and a research interest in artificial intelligence fields to the structure of natural language, from sound to society. It covers phonetics (the physical aspects of speech), phonology (the sound-structure of language), morphology (the structure of words), morphosyntax (the use of word and phrase structure to encode meaning), syntactic formalisms (using finite sets of production rules to characterize infinite configurations of structure), discourse analysis and pragmatics (language in discourse and communicative context), and sociolinguistics (language in social context and social meaning). Evaluation is based on seven homework assignments, a midterm examination, and a final examination.
11-725 - Meaning in Language | |||||||||||||||||||||||
Description | This course provides a survey of the many different ways in which meaning is conveyed in spoken languages, and of the different types of meaning which are conveyed. We will introduce various theoretical frameworks for the description of these phenomena. Topics to be covered will include: word meaning (lexical semantics); structure and meaning (compositional semantics); information structure (foregrounding and backgrounding); verb argument structure and thematic roles; intonational meaning and focus; presupposition; context dependency; discourse markers and utterance modifiers; and the role of inference in interpretation. The topics to be addressed bring together a variety of fields: linguistics; philosophy of language; communication studies and rhetoric; and language technologies. The course may be taken as either a 9-unit (80-306) or 12-unit (80-606/11-725) course. The 12-unit course will include an additional component, which will relate the content of the course to issues in computational linguistics, with an emphasis on methods of implementation. (The computational component will be taught by faculty from the Language Technologies Institute.) | ||||||||||||||||||||||
11-726 - Meaning in Language Lab (Self-Paced)
Description | The self-paced Meaning in Language Lab is intended to follow up on the 11-725 lecture course (Meaning in Language) by providing a chance for hands-on, in-depth, computational exploration of various semantics and pragmatics research topics. The course is self-paced and there will be no scheduled lecture times; however, students are welcome to set up meetings with the instructor as desired, and students who prefer to have a weekly or bi-monthly regularly scheduled meeting with the instructor are welcome to arrange for that. If there is sufficient interest, an informal reading group may be formed to supplement the lab work. Students will design their own project, which they will discuss with the instructor for approval. Students are encouraged to select a topic from semantics, pragmatics, or discourse analysis, such as entailment, evidentiality, implicature, information status, or rhetorical structure, and a topic from language technologies, such as sentiment analysis or summarization, and explore how the linguistic topic applies to some aspect of the chosen language technology. Students are encouraged to contrast symbolic, formal, and knowledge-based approaches with empirical approaches. Each student will work independently. If multiple students work as a team on a particular topic, each should choose an approach that is different from the approaches used by the other students working on the same problem. Students will be responsible for setting up a web page, blog, or wiki to post progress reports and other supporting documents, data, and analyses. The web space will be checked by the instructor periodically, and thus should be kept updated in order to reflect on-going progress. The web space will also serve as a shared project space in the case that students are working in a team for the project.
11-727 - Computational Semantics for NLP | |||||||||||||||||||||||
Description | This course surveys semantics from a language processing perspective. It is divided into three main sections supplemented with a substantive semester-long computational project. The first section addresses traditional topics of computational semantics and semantic processing and representation systems. The second focuses on computational lexical semantics, including resources such as WordNet, Framenet, and some word-based ontologies, and their computational applications, such as word sense disambiguation, entailment, etc., and briefly the semantic web. The third section covers modern statistics-based distributional models of semantics. Each week focuses on one topic, covered by the lecturers, and will include one or two core introductory readings plus several optional more advanced readings. All students will read and discuss the introductory readings while each student will be expected to read advanced papers on at least two topics. | ||||||||||||||||||||||
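As a brief, concrete illustration of two resources and tasks named in this description, the sketch below looks up WordNet senses and runs NLTK's simplified Lesk word sense disambiguation. It assumes NLTK and its WordNet data are installed; the example sentence is invented.

```python
# Illustrative sketch only; requires NLTK plus the WordNet corpus (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# Inspect a few WordNet senses of an ambiguous word.
for syn in wn.synsets("bank")[:3]:
    print(syn.name(), "-", syn.definition())

# Simplified Lesk: choose the sense whose gloss best overlaps the context words.
context = "I deposited the check at the bank on Monday".split()
print(lesk(context, "bank"))
```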
11-731 - Machine Translation
Description | Machine Translation is an introductory graduate-level course surveying the primary approaches and methods for developing modern state-of-the-art automated language translation systems. The main objectives of the course are to obtain a basic understanding of modern MT systems and MT-related issues, and to learn about theory and approaches in Machine Translation while implementing the main components of statistical MT systems.
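For a flavor of one classic statistical MT component of the kind such a course asks students to implement, here is a minimal sketch of IBM Model 1 word-translation estimation with EM on a toy parallel corpus. The corpus and iteration count are arbitrary choices for illustration.

```python
# Illustrative sketch only: IBM Model 1 EM for word-translation probabilities.
from collections import defaultdict

parallel = [("la maison", "the house"),
            ("la fleur", "the flower"),
            ("maison bleue", "blue house")]
pairs = [(f.split(), e.split()) for f, e in parallel]

f_vocab = {w for f, _ in pairs for w in f}
e_vocab = {w for _, e in pairs for w in e}
t = {(f, e): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}   # uniform init

for _ in range(10):                          # EM iterations
    count, total = defaultdict(float), defaultdict(float)
    for f_sent, e_sent in pairs:             # E-step: expected alignment counts
        for f in f_sent:
            z = sum(t[f, e] for e in e_sent)
            for e in e_sent:
                count[f, e] += t[f, e] / z
                total[e] += t[f, e] / z
    for f, e in t:                           # M-step: renormalize per target word
        if total[e] > 0:
            t[f, e] = count[f, e] / total[e]

print(max(f_vocab, key=lambda f: t[f, "house"]))   # expected: "maison"
```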
Pre-Requisites | 11-711 - "Algorithms for NLP" or equivalent background is recommended. | ||||||||||||||||||||||
11-732 - Self-Paced Lab: MT | |||||||||||||||||||||||
Description | The Self-Paced Lab in MT is intended to complement the 11-731 lecture course by providing a chance for hands-on, in-depth exploration of various MT paradigms. MT faculty will present a set of possible topics to the students enrolled in the course. The students will indicate their first and second choices for lab projects, and will then be matched to a lab project advisor. At the end of the semester, the students will present the results of their projects in class, and submit a short paper describing them. | ||||||||||||||||||||||
Pre-Requisites | 11-731 - Machine Translation | ||||||||||||||||||||||
11-733 - Multilingual Speech-to-Speech Translation Lab
Description | Building speech-to-speech translation (S-2-S) systems is an extremely complex task, involving research in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), and Text-to-Speech (TTS), and doing this for many languages doesn't make it easier. Although substantial progress has been made in each of these areas over the last years, the integration of the individual ASR, MT, NLU, and TTS components to build a good S-2-S system is still a very challenging task. The seminar course on Multilingual Speech-to-Speech Translation will cover important recent work in the areas of ASR, MT, NLU, and TTS with a special focus on language-portable approaches and discuss solutions for rapidly building state-of-the-art speech-to-speech translation systems. In the beginning sessions the instructors and other invited lecturers will give a brief introduction to the broad field. We will select papers on particular topics to read each week. While everyone will do all readings and participate in the discussions, one person is assigned per session to present the basic ideas of the topic-specific papers and lead the concluding discussion.
11-734 - Advanced Machine Translation Seminar | |||||||||||||||||||||||
Description | The Advanced Machine Translation Seminar is a graduate-level seminar on current research topics in Machine Translation. The seminar will cover recent research on different approaches to Machine Translation (Statistical MT, Example-based MT, Interlingua and rule-based approaches, hybrid approaches, etc.). Related problems that are common to many of the various approaches will also be discussed, including the acquisition and construction of language resources for MT (translation lexicons, language models, etc.), methods for building large sentence-aligned bilingual corpora, automatic word alignment of sentence-parallel data, etc. The material covered will be mostly drawn from recent conference and journal publications on the topics of interest and will vary from year to year. The course will be run in a seminar format, where the students prepare presentations of selected research papers and lead in class discussion about the presented papers. | ||||||||||||||||||||||
Pre-Requisites | 11-731 - Machine Translation, or instructor approval. | ||||||||||||||||||||||
11-736 - Graduate Seminar on Endangered Languages
Description | The purpose of this seminar is to allow students to better understand the linguistic, social and political issues that arise when working with language technologies for endangered languages. Often in LTI we concentrate on issues of modeling with small amounts of data, or designing optimal strategies for collecting data, but ignore many of the wider practical issues that appear when working with endangered languages. This seminar will consist of reading books and papers, and having participants give presentations; a few invited talks (e.g., from field linguists and language advocates) will also be included. It will count for 6 units of LTI course credit. It may be possible for interested students to also carry out a related 6-unit project as a lab.
Course Site | http://www.cs.cmu.edu/%7Eref/sel/
11-741 - Machine Learning for Text Mining (Cross-listed with 11-641/11-441)
Description | Offered Fall/Spring. This is a full-semester lecture-oriented course for PhD-level, MS-level, and undergraduate students who meet the prerequisites. It offers a blend of core theory, algorithms, evaluation methodologies, and applications of scalable data analytic techniques. Note that 11-741 and 11-641 are 12-unit courses for graduate students, while 11-441 is a 9-unit course for undergraduate students. Although the lectures are the same for all students, the workload differs by course: the required course work in 11-441 is a subset of that in 11-641, and the work in 11-641 is a subset of that in 11-741. See the detailed distinctions in the Grading section. 11-741 is among the required courses for PhD candidates in the Language Technologies Institute, while 11-641 counts as a master's-level course. Graduate students can choose either 11-741 or 11-641, depending on their career goals and backgrounds. Undergraduate students should take 11-441; exceptions are possible if approved by the instructor.
Pre-Requisites |
Course Site |
11-743 - Self-Paced Lab: IR
Description | The Self-Paced Lab for Information Retrieval (IR Lab) is intended to complement 11-741/11-641/11-441 and 11-642/11-442 by providing a chance for hands-on, in-depth exploration of various IR research topics. Students will design their own projects (project examples) and discuss them with the instructor for approval. Each student will work independently. If multiple students work as a team on a particular topic, each should choose an approach that is different from the approaches used by the other students working on the same problem. Students will make a Web page for progress reports and communication; it will be checked by the instructor periodically and thus should be updated in a timely manner to reflect on-going progress and work organization. The Web pages will also serve a role in data/tool sharing among students.
Pre-Requisite |
Course Site | http://nyc.lti.cs.cmu.edu/classes/11-743
11-744 - Experimental Information Retrieval | |||||||||||||||||||||||
Description | This seminar studies the experimental evaluation of information retrieval systems in community-wide evaluation forums such as TREC, CLEF, NTCIR, INEX, TAC, and other annual research evaluations. The content will change from year to year, but the general format will be an in-depth introduction to the evaluation forum; its tracks or tasks, test collections, evaluation methodologies, and metrics; and several of the most competitive or interesting systems in each track or task. Class discussions will explore and develop new methods that might be expected to be competitive. The seminar includes a significant project component in which small teams develop systems intended to be competitive with the best recent systems. Students are not required to participate in actual TREC, CLEF, etc., evaluations, however some students may wish to do so. A specific goal of the seminar is to prepare students to compete effectively in such evaluations. The course meets twice a week during the first half of the semester. This part of the course is a combination of seminar-style presentations and brainstorming sessions about how to build competitive systems. The course meets once a week during the second half of the semester, when students are doing their projects. This part of the class is essentially weekly progress reports about student projects. | ||||||||||||||||||||||
Pre-Requisites | 11-642 - Search Engines | ||||||||||||||||||||||
Course Site | http://boston.lti.cs.cmu.edu/classes/11-744/ | ||||||||||||||||||||||
11-745 - Advanced Statistical Learning Seminar
Description | Fall 2016 Special Topics: Machine Learning Tutorials (6 Units). This seminar aims to deepen the participants' understanding of the theoretical foundation of statistical learning and applications, to broaden their knowledge of new techniques, directions and challenges in the field, and to inspire research ideas through classroom discussions. In past years, this seminar was structured as a 12-unit or 6-unit course in the form of group reading, presenting and discussing the books The Elements of Statistical Learning: Data Mining, Inference, and Prediction (by Trevor Hastie et al.) and Foundations of Machine Learning (by Mehryar Mohri et al.) and selected papers in the areas of large-scale structured learning, temporal dynamics and scalable optimization algorithms. In the fall of 2016, we will have a 6-unit seminar, going through well-known tutorial materials on Deep Learning, Time Series Forecasting, Graph Laplacian Matrices, Modern Bayesian Nonparametrics, and Advances in Structured Learning, NLP and Stochastic Gradient Methods. When a tutorial video is listed for the class of the week, all students are required to watch the video before the class. We will meet once a week, one main topic (or sub-topic) per week, with presentations rotating among participants. Each week, the assigned presenter starts with the questions from all the participants (collected by email) about the current topic/sub-topic, followed by a presentation on that topic, and leads the discussion. All students are required to read the selected reading materials (about 1.5 hours of reading per week) for the week before the class, while the presenter should read 2 or 3 additional related papers (chosen by the presenter or the instructor) as needed. Students should email their questions to the presenter and CC everybody. The slides of each presentation should be shared (by email) with all the class members after the presentation (on the same day). By the end of the semester, each student will individually write a short 3-4-page white paper, outlining a research proposal for new work extending one of the research areas covered in class, or analyzing more than one area with respect to the open challenges, the state-of-the-art solutions and the future research opportunities. There will be no exams or homework. The grading is based on class participation (25%), questions submitted for each class and classroom discussions (15%), quality of the seminar presentations (40%) and the final paper (20%). If a student cannot attend a class, a write-up of two pages on the topic is required to receive the credit (25% participation and 15% discussion) for that class.
Pre-Requisites | 10-701 (PhD-level Machine Learning) or an equivalent course is required. Other relevant courses, such as 10-702 (Statistical Machine Learning), 10-705/36-705 (Intermediate Statistics), and 10-725 (Convex Optimization), are helpful but not required. If you are not sure about the expectations, please discuss them with the instructor. | ||||||||||||||||||||||
Course Site | http://nyc.lti.cs.cmu.edu/classes/11-745/F15/ (accessible on campus, or through VPN for those who are off campus). | ||||||||||||||||||||||
11-751 - Speech Recognition and Understanding | |||||||||||||||||||||||
Description | This 12-unit LTI course provides an introduction to the theoretical background as well as the experimental practice that has made the field what it is today. We will cover theoretical foundations, essential algorithms, and experimental strategies needed to turn speech into text, and go beyond (i.e., do something really useful, and do it right). We will discuss examples of state-of-the-art research, and show links to related fields, such as machine learning, machine translation, dialog systems, robotics, and user interfaces. A term project will provide students with the opportunity to conduct hands-on research. | ||||||||||||||||||||||
Pre-Requisites | Sound mathematical background, knowledge of basic statistics, good computing skills. No prior experience with speech recognition is necessary. This course is primarily for graduate students in LTI, CS, Robotics, ECE, Psychology, or Computational Linguistics. Others by prior permission of instructor. | ||||||||||||||||||||||
Course Site | https://sites.google.com/a/is.cs.cmu.edu/lti-speech-classes/11-751-speech-recognition-and-under | ||||||||||||||||||||||
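One of the essential algorithms behind classical HMM-based recognizers of the kind this course covers is Viterbi decoding. The sketch below is a toy, generic implementation over a small discrete HMM; the state and observation probabilities are invented for illustration and are unrelated to real acoustic models.

    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely state sequence for an observation sequence under an HMM.
        pi: initial state probs (S,), A: transitions (S, S), B: emissions (S, V)."""
        S, T = len(pi), len(obs)
        logp = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        logp[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            for s in range(S):
                scores = logp[t - 1] + np.log(A[:, s]) + np.log(B[s, obs[t]])
                back[t, s] = np.argmax(scores)
                logp[t, s] = scores[back[t, s]]
        path = [int(np.argmax(logp[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Toy 2-state, 3-symbol HMM; all numbers are illustrative only.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 1, 2], pi, A, B))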
11-752 - Speech II: Phonetics, Prosody, Perception and Synthesis | |||||||||||||||||||||||
Description | The goal of the course is to give the student basic knowledge from several fields that is necessary in order to pursue research in automatic speech processing. The course will begin with a study of the acoustic content of the speech signal. The students will use the spectrographic display to examine the signal and discover its variable properties. Phones in increasingly larger contexts will be studied with the goal of understanding coarticulation. Phonological rules will be studied as a contextual aid in understanding the spectrographic display. The spectrogram will then serve as a first introduction to the basic elements of prosody. Other displays will then be used to study the three parts of prosody: amplitude, duration, and pitch. Building on these three elements, the student will then examine how the three interact in careful and spontaneous speech. Next, the students will explore perception. Topics covered will be: physical aspects of perception, psychological aspects of perception, testing perception processes, and practical applications of knowledge about perception. The second part of this course will cover all aspects of speech synthesis. Students need only have a basic knowledge of speech and language processing. Some degree of programming and statistical modelling experience will be beneficial, but is not required. Taught every other year. | ||||||||||||||||||||||
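Because the course leans heavily on spectrographic displays and on the amplitude, duration, and pitch elements of prosody, a minimal sketch of how such a display is computed may help. The framing parameters below (25 ms windows, 10 ms hop) are common defaults rather than values prescribed by the course, and the test signal is a synthetic tone.

    import numpy as np

    def spectrogram(signal, sample_rate, frame_ms=25, hop_ms=10):
        """Log-magnitude spectrogram and per-frame energy from a 1-D waveform."""
        frame_len = int(sample_rate * frame_ms / 1000)
        hop = int(sample_rate * hop_ms / 1000)
        window = np.hanning(frame_len)
        frames = [signal[i:i + frame_len] * window
                  for i in range(0, len(signal) - frame_len + 1, hop)]
        frames = np.stack(frames)
        spec = np.abs(np.fft.rfft(frames, axis=1))
        log_spec = 20 * np.log10(spec + 1e-10)   # dB-like scale for display
        energy = np.sum(frames ** 2, axis=1)     # amplitude/energy contour
        return log_spec, energy

    # Synthetic 440 Hz tone, just to exercise the function.
    sr = 16000
    t = np.arange(sr) / sr
    log_spec, energy = spectrogram(np.sin(2 * np.pi * 440 * t), sr)
    print(log_spec.shape, energy.shape)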
11-753 - Advanced Laboratory in Speech Recognition | |||||||||||||||||||||||
Description | The technology that allows humans to communicate by speech with machines, or by which machines can understand when humans communicate with each other, is rapidly maturing. While the 11-751 speech course focuses on an introduction to the theoretical foundations, essential algorithms, major approaches, and strategies for current state-of-the-art systems, the 11-753 speech lab complements that course by concentrating on the experimental practice of developing speech recognition and speech understanding systems, and on gaining hands-on experience with relevant research questions using state-of-the-art tools. Possible problem sets include both core speech recognition technology and the integration of speech-based components into multi-modal, semantic, learning, or otherwise complex systems and interfaces. | ||||||||||||||||||||||
Pre-Requisites | 11-751 or equivalent; this course can be combined with 11-783 for a 12-unit lab. | ||||||||||||||||||||||
Course Site | https://sites.google.com/a/is.cs.cmu.edu/lti-speech-classes/11-753-advanced-speech-lab | ||||||||||||||||||||||
11-754 - Project Course: Dialogue Systems | |||||||||||||||||||||||
Description | This course will teach participants how to implement a complete spoken language system while providing opportunities to explore research topics of interest in the context of a functioning system. The course will produce a complete implementation of a system to access and manipulate email through voice only, for example to allow users to interact with the mail system over a telephone while away from their computer. In doing so, the class will address the component activities of spoken language system building. These include, but are not limited to, task analysis and language design, application-specific acoustic and language modeling, grammar design, task design, dialog management, and language generation and synthesis. The course will place particular emphasis on issues in task design and dialog management and on issues in language generation and synthesis. For the fall semester, we will implement a simple telephone-based information access application. The domain is bus schedules (see http://www.speech.cs.cmu.edu/BusLine for a web-based interface to this domain), and the goal will be to create one or more usable applications that can provide a real service and can be deployed for actual use by the University community. Participants will choose individual components of the system to concentrate on and will collaborate to put together the entire system. It is perfectly acceptable for several individuals to concentrate on a single component, particularly if their work will exemplify alternative approaches to the same problem. | ||||||||||||||||||||||
Pre-Requisites | Speech Recognition or permission of the instructor. | ||||||||||||||||||||||
11-755 - Machine Learning for Signal Processing | |||||||||||||||||||||||
Description | Signal processing is the science that deals with the extraction of information from signals of various kinds. This has two distinct aspects: characterization and categorization. Traditionally, signal characterization has been performed with mathematically driven transforms, while categorization and classification are achieved using statistical tools. Machine learning aims to design algorithms that learn about the state of the world directly from data. An increasingly popular trend has been to develop and apply machine learning techniques to both aspects of signal processing, often blurring the distinction between the two. This course discusses the use of machine learning techniques to process signals. We cover a variety of topics, from data-driven approaches for the characterization of signals such as audio (including speech), images, and video, to machine learning methods for a variety of speech and image processing problems. | ||||||||||||||||||||||
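A minimal sketch of the two aspects named above, on synthetic data: a data-driven linear transform (PCA via the SVD) for characterization, followed by a simple statistical classifier (nearest class centroid) for categorization. The dimensions, class means, and number of components are arbitrary choices for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two synthetic "signal" classes in 20 dimensions (illustrative data only).
    X0 = rng.normal(0.0, 1.0, size=(100, 20))
    X1 = rng.normal(1.0, 1.0, size=(100, 20))
    X = np.vstack([X0, X1])
    y = np.array([0] * 100 + [1] * 100)

    # Characterization: a data-driven linear transform (PCA) learned from the data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:5].T                      # project onto the top 5 principal components

    # Categorization: a simple statistical classifier (nearest class centroid).
    centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    print("training accuracy:", (pred == y).mean())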
11-756 - Design and Implementation of Speech Recognition Systems | |||||||||||||||||||||||
Description | Voice recognition systems invoke concepts from a variety of fields, including speech production, algebra, probability and statistics, information theory, linguistics, and various aspects of computer science. Voice recognition has therefore largely been viewed as an advanced science, typically meant for students and researchers who possess the requisite background and motivation. In this course we take an alternative approach: we present voice recognition systems through the perspective of a novice. Beginning from the very simple problem of matching two strings, we present the algorithms and techniques as a series of intuitive and logical increments, until we arrive at a fully functional continuous speech recognition system. Following the philosophy that the best way to understand a topic is to work on it, the course will be project oriented, combining formal lectures with required hands-on work. Students will be required to work on a series of projects of increasing complexity. Each project will build on the previous project, so that the incremental complexity of each project is minimal and eminently doable. At the end of the course, merely by completing the series of projects, students will have built their own fully functional speech recognition systems. Grading will be based on project completion and presentation. | ||||||||||||||||||||||
Pre-Requisites | Mandatory: Linear Algebra. Basic Probability Theory. Recommended: Signal Processing. Coding Skills: This course will require significant programming from the students. Students must be able to program fluently in at least one language (C, C++, Java, Python, LISP, Matlab are all acceptable). | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/afs/cs/user/bhiksha/WWW/courses/11-756.asr/spring2011/ | ||||||||||||||||||||||
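The "very simple problem of matching two strings" mentioned above is usually solved with a dynamic program such as the string edit distance below; continuous recognizers generalize the same idea to matching feature sequences against word models. This is an illustrative sketch, not an assignment solution.

    def edit_distance(a, b):
        """Minimum number of insertions, deletions, and substitutions turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution or match
            prev = curr
        return prev[-1]

    print(edit_distance("speech", "search"))  # 3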
11-761 - Language and Statistics | |||||||||||||||||||||||
Description | The goal of "Language and Statistics" is to ground the data-driven techniques used in language technologies in sound statistical methodology. We start by formulating various language technology problems in both an information theoretic framework (the source-channel paradigm) and a Bayesian framework (the Bayes classifier). We then discuss the statistical properties of words, sentences, documents and whole languages, and the various computational formalisms used to represent language. These discussions naturally lead to specific concepts in statistical estimation. Topics include: Zipf's distribution and type-token curves; point estimators, Maximum Likelihood estimation, bias and variance, sparseness, smoothing and clustering; interpolation, shrinkage, and backoff; entropy, cross entropy and mutual information; decision tree models applied to language; latent variable models and the EM algorithm; hidden Markov models; exponential models and the maximum entropy principle; semantic modeling and dimensionality reduction; probabilistic context-free grammars and syntactic language models. | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/%7Eroni/11761 | ||||||||||||||||||||||
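As a tiny illustration of several of the listed topics at once (maximum likelihood estimation, smoothing, and cross entropy), the sketch below builds an add-one-smoothed unigram model and measures its cross entropy on held-out text. The toy corpus is invented; real experiments use far larger data and better smoothing methods.

    import math
    from collections import Counter

    def add_one_unigram(train_tokens, vocab):
        """MLE unigram model with add-one (Laplace) smoothing."""
        counts = Counter(train_tokens)
        total = len(train_tokens)
        V = len(vocab)
        return {w: (counts[w] + 1) / (total + V) for w in vocab}

    def cross_entropy(model, test_tokens):
        """Cross entropy in bits per token; perplexity is 2 ** cross_entropy."""
        return -sum(math.log2(model[w]) for w in test_tokens) / len(test_tokens)

    train = "the cat sat on the mat".split()
    test = "the mat sat".split()
    vocab = set(train) | set(test)
    model = add_one_unigram(train, vocab)
    print(round(cross_entropy(model, test), 3), "bits/token")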
11-762 - Language and Statistics II | |||||||||||||||||||||||
Description | This course will cover modern empirical methods in natural language processing. It is designed for language technologies students who want to understand statistical methodology in the language domain, and for machine learning students who want to know about current problems and solutions in text processing. Students will, upon completion, understand how statistical modeling and learning can be applied to text, be able to develop and apply new statistical models for problems in their own research, and be able to critically read papers from the major related conferences (EMNLP and ACL). A recurring theme will be the tradeoffs between computational cost, mathematical elegance, and applicability to real problems. The course will be organized around methods, with concrete tasks introduced throughout. The course is designed for SCS graduate students. This course is taught intermittently. Students interested in this topic may also wish to consider 11-763 - Structured Prediction for Language and Other Discrete Data, which covers similar material. | ||||||||||||||||||||||
Pre-Requisites | Mandatory: 11-761 - Language and Statistics, or permission of the instructor. Recommended: 11-711 - Algorithms for Natural Language Processing; 10-601 or 10-701 - Machine Learning; or 11-745 - Advanced Statistical Learning Seminar. | ||||||||||||||||||||||
Course Site | |||||||||||||||||||||||
11-763 - Structured Prediction for Language and Other Discrete Data | |||||||||||||||||||||||
Description | This course seeks to cover statistical modeling techniques for discrete, structured data such as text. It brings together content previously covered in Language and Statistics 2 (11-762) and Information Extraction (10-707 and 11-748), and aims to define a canonical set of models and techniques applicable to problems in natural language processing, information extraction, and other application areas. Upon completion, students will have a broad understanding of machine learning techniques for structured outputs, will be able to develop appropriate algorithms for use in new research, and will be able to critically read related literature. The course is organized around methods, with example tasks introduced throughout. | ||||||||||||||||||||||
Pre-Requisites | 10-601 or 10-701 - Machine Learning or instructors' permission. | ||||||||||||||||||||||
Course Site | |||||||||||||||||||||||
11-765 - Active Learning Seminar | |||||||||||||||||||||||
Description | Participants will read and present papers, including analyzing comparative strengths and weaknesses of various algorithms. Meetings will take place once a week for about two hours in the fall. | ||||||||||||||||||||||
Pre-Requisites | A graduate-level machine learning course. | ||||||||||||||||||||||
11-772 - Analysis of Social Media | |||||||||||||||||||||||
Description | The most actively growing part of the web is "social media" (wikis, blogs, bboards, and collaboratively developed community sites like Flickr and YouTube). This course will review selected papers from the recent research literature that address the problem of analyzing and understanding social media. Topics to be covered include: text analysis techniques for sentiment analysis, analysis of figurative language, authorship attribution, and inference of demographic information about authors (age or sex); community analysis techniques for detecting communities, predicting authority, assessing influence (e.g., in viral marketing), or detecting spam; visualization techniques for understanding the interactions within and between communities; and learning techniques for modeling and predicting trends in social media, or predicting other properties of media (e.g., user-provided content tags). | ||||||||||||||||||||||
Pre-Requisites | 10-601 or 10-701 - Machine Learning or instructors' permission. | ||||||||||||||||||||||
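A minimal sketch of the simplest of the text analysis techniques listed above, lexicon-based sentiment scoring. The two word lists are hypothetical stand-ins for a real sentiment lexicon, and the scoring rule is deliberately crude.

    # Hypothetical mini-lexicons; real systems use much larger resources.
    POSITIVE = {"great", "love", "awesome", "helpful"}
    NEGATIVE = {"terrible", "hate", "spam", "awful"}

    def sentiment_score(post):
        """Crude polarity score: (#positive - #negative) / #tokens."""
        tokens = post.lower().split()
        if not tokens:
            return 0.0
        pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
        neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
        return (pos - neg) / len(tokens)

    print(sentiment_score("Love this community, the tutorials are awesome!"))
    print(sentiment_score("This thread is full of spam, awful."))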
11-775 - Large-Scale Multimedia Analysis | |||||||||||||||||||||||
Description | Can a robot watch YouTube to learn about the world? What makes us laugh? How to bake a cake? Why is Kim Kardashian famous? This 12-unit class covers the fundamentals of computer vision, audio and speech processing, multimedia files and streaming, multi-modal signal processing, video retrieval, semantics, and text (possibly also speech and music) generation. Instructors will give an overview of relevant recent work and benchmarking efforts (TRECVID, MediaEval, etc.). Students will work on research projects to explore these ideas and learn to perform multi-modal retrieval, summarization, and inference on large amounts of YouTube-style data. The experimental environment for the practical part of the course will be provided to students in the form of virtual machines. | ||||||||||||||||||||||
Pre-Requisites | This is a graduate course primarily for students in LTI, HCII, CSD, Robotics, ECE; others, for example (undergraduate) students of CS or professional masters, by prior permission of the instructor(s). Strong implementation skills, and familiarity with some (not all) of the above fields (e.g. 11-611, 11-711, 11-751, 11-755, 11-792, 16-720, or equivalent), will be helpful. | ||||||||||||||||||||||
Course Site | https://sites.google.com/a/is.cs.cmu.edu/lti-speech-classes/11-775-large-scale-multimedia-analysis | ||||||||||||||||||||||
11-776 - Human Communication and Multimodal Machine Learning | |||||||||||||||||||||||
Description | Humans are highly social creatures and have evolved complex mechanisms for signaling information about their thoughts, feelings, and intentions (both deliberately and reflexively). In turn, humans have also evolved complex mechanisms for receiving these signals and inferring the thoughts, feelings, and intentions of others. Proper understanding of human behavior, in all its nuance, requires careful consideration and integration of verbal, vocal, and visual information. These communication dynamics have long been studied in psychology and other social sciences. More recently, the field of multimodal affective computing has sought to enhance these studies using techniques from computer science and artificial intelligence. Common topics of study in this field include affective states, cognitive states, personality, psychopathology, social processes, and communication. As such, multimodal affective computing has broad applicability in both scientific and applied settings ranging from medicine and education to robotics and marketing. The objectives of this course are: (1) to give an overview of the components of human behavior (verbal, vocal, and visual) and the computer science areas that measure them (NLP, speech processing, and computer vision); (2) to provide foundational knowledge of psychological constructs commonly studied in multimodal affective computing (e.g., emotion, personality, and psychopathology); (3) to provide practical instruction on using statistical tools to study research hypotheses; (4) to provide information about computational predictive models that integrate multimodal information from the verbal, vocal, and visual modalities; and (5) to give students practical experience in the computational study of human behavior and psychological constructs through an in-depth course project. | ||||||||||||||||||||||
11-777 - Multimodal Machine Learning | |||||||||||||||||||||||
Description | Multimodal machine learning (MMML) is a vibrant multi-disciplinary research field which addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic, and visual messages. With the initial research on audio-visual speech recognition and, more recently, language-vision projects such as image and video captioning, this research field brings some unique challenges for multimodal researchers, given the heterogeneity of the data and the contingency often found between modalities. The course will present the fundamental mathematical concepts in machine learning and deep learning relevant to the five main challenges in multimodal machine learning: (1) multimodal representation learning, (2) translation and mapping, (3) modality alignment, (4) multimodal fusion, and (5) co-learning. These include, but are not limited to, multimodal auto-encoders, deep canonical correlation analysis, multi-kernel learning, attention models, and multimodal recurrent neural networks. We will also review recent papers describing state-of-the-art probabilistic models and computational algorithms for MMML and discuss the current and upcoming challenges. The course will discuss many of the recent applications of MMML, including multimodal affect recognition, image and video captioning, and cross-modal multimedia retrieval. This is a graduate course designed primarily for PhD and research master's students at LTI, MLD, CSD, HCII, and RI; others, for example (undergraduate) students of CS or students from professional master's programs, are advised to seek prior permission of the instructor. It is strongly recommended that students have taken an introductory machine learning course such as 10-401, 10-601, 10-701, 11-663, 11-441, 11-641, or 11-741. Prior knowledge of deep learning is recommended but not required. | ||||||||||||||||||||||
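Of the five challenges listed above, multimodal fusion has the simplest possible baseline: late fusion, i.e., a weighted average of per-modality classifier outputs. The sketch below shows only that baseline; the class probabilities and weights are invented, and the course itself covers far richer fusion models.

    import numpy as np

    def late_fusion(modality_probs, weights=None):
        """Weighted average of per-modality class-probability vectors (late fusion)."""
        probs = np.stack(modality_probs)              # (n_modalities, n_classes)
        if weights is None:
            weights = np.ones(len(modality_probs))
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        return weights @ probs

    # Hypothetical per-modality posteriors for a 3-class recognition problem.
    language = np.array([0.2, 0.5, 0.3])
    acoustic = np.array([0.1, 0.7, 0.2])
    visual   = np.array([0.4, 0.4, 0.2])
    fused = late_fusion([language, acoustic, visual], weights=[0.5, 0.3, 0.2])
    print(fused, "->", int(np.argmax(fused)))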
11-780 - Research Design and Writing | |||||||||||||||||||||||
Description | In an increasingly competitive research community within a rapidly changing world, it is essential that our students formulate research agendas that are of enduring importance, with clean research designs that lead to generalizable knowledge, and with high likelihood of yielding results that will have impact in the world. However, even the best research, if not communicated well, will fail to earn the recognition that it deserves. Even more seriously, the most promising research agendas, if not argued in a convincing and clear manner, will fail to secure the funding that would give them the chance to produce those important results. Thus, in order to complement the strong content-focused curriculum in LTI, we are proposing a professional skills course that targets the research and writing methodology that our students will need to excel in the research community, both during their degree at LTI and in their career beyond. This course focuses specifically on general experimental design methodology and corresponding writing and reporting skills. Grades will be based on a series of substantial writing assignments in which students will apply principles from experimental design methodology, such as writing an IRB application, a research design, a literature review, and a conference paper with data analysis and interpretation. A final exam will test skills and concepts related to experimental design methodology, and will include short answer questions and a critique of a research paper. | ||||||||||||||||||||||
11-782 - Self-Paced Lab for Computational Biology | |||||||||||||||||||||||
Description | Students will choose from a set of projects designed by the instructor. Students will also have the option of designing their own projects, subject to instructor approval. Students who have already completed a project in the 10-810 course can either switch to another project or continue working on the previous project while aiming for significant further progress (subject to instructor approval). Each student will work independently. If more than one student works on a particular topic, each should choose an approach that is different from the approaches used by the other students working on the same problem. Students must begin with a project proposal that outlines the high-level ideas, tasks, and goals of the problem, along with a plan of experiments and/or analysis. The instructor will consult with you on your ideas, but the final responsibility to define and execute an interesting piece of work is yours. Your project will have two final deliverables. In addition, you must turn in a midway progress report (5 pages maximum in NIPS format, including references) describing the results of your first experiments, worth 20% of the project grade. Note that, as with any conference, the page limits are strict! Papers over the limit will not be considered. The grading of your project is based on overall scientific quality, novelty, writing, and clarity of presentation. We expect your final report to be of conference-paper quality, and you are also expected to deliver a software implementation of your algorithmic results. | ||||||||||||||||||||||
Pre-Requisites | 10-810 - Advanced Algorithms and Models for Computational Biology | ||||||||||||||||||||||
Co-Requisites | 10-810 - Advanced Algorithms and Models for Computational Biology | ||||||||||||||||||||||
11-783 - Self-Paced Lab: Rich Interaction in Virtual Worlds | |||||||||||||||||||||||
Description | Massively Multiplayer Online Role-Playing Games have evolved into Virtual Worlds (VWs), and are creating ever richer environments for experimentation on all aspects of human-to-human or human-to-machine communication, as well as for information discovery and access. So far, interaction has been constrained by the limited capabilities of keyboards, joysticks, or computer mice. This creates an exciting opportunity for exploratory research on speech input and output, speech-to-speech translation, or any aspect of language technology. Of particular interest will be a combination with other novel "real world" (RW) input or output devices, such as mobile phones or portable games consoles, because they can be used to control the VW, or make it accessible everywhere in the RW. Language technologies in particular profit from "context awareness", because domain adaptation can be performed. For scientific experimentation in that area, Virtual Worlds offer the opportunity to concentrate on algorithms, because context sensors can be written with a few lines of code, without the need for extra hardware sensors. Algorithms can also run "continuously", without the need for specific data collection times or places, because the VW is "always on". In this lab, we will enhance existing clients to virtual worlds so that they can connect to various speech and language related research systems developed at LTI and CMU's Silicon Valley campus. The lab will be held jointly at CMU's Pittsburgh and Silicon Valley campuses. We will "eat our own dog food", so the goal will be to hold the last session entirely in a virtual classroom, which will by that time include speech control of virtual equipment, speech-to-speech translation, and some devices that can be controlled using non-PC equipment, like mobile phones. | ||||||||||||||||||||||
Pre-Requisites | 11-751 or equivalent; this course can be combined with 11-753 for a 12-unit lab | ||||||||||||||||||||||
Course Site | https://sites.google.com/a/is.cs.cmu.edu/lti-speech-classes/11-783-rich-interaction-in-virtual-worlds | ||||||||||||||||||||||
11-785 - Introduction to Deep Learning | |||||||||||||||||||||||
Description | Neural networks have increasingly taken over various AI tasks, and currently produce the state of the art in many AI tasks ranging from computer vision and planning for self-driving cars to playing computer games. Basic knowledge of NNs, currently known in the popular literature as deep learning, familiarity with various formalisms, and knowledge of tools is now an essential requirement for any researcher or developer in most AI and NLP fields. This course is a broad introduction to the field of neural networks and their deep learning formalisms. The course traces some of the development of neural network theory and design through time, leading quickly to a discussion of various network formalisms, including simple feedforward, convolutional, recurrent, and probabilistic formalisms, the rationale behind their development, the challenges behind learning such networks, and various proposed solutions. We subsequently cover various extensions and models that enable their application to tasks such as computer vision, speech recognition, machine translation, and playing games. Instruction: Unlike prior editions of 11-785, instruction will be primarily through instructor lectures and the occasional guest lecture. Evaluation: Students will be evaluated based on weekly continuous-evaluation tests, and on their performance in assignments and a final course project. There will be six hands-on assignments, requiring both low-level coding and toolkit-based implementation of neural networks, covering basic MLP, convolutional, and recurrent formalisms, as well as one or more advanced tasks, in addition to the final project. | ||||||||||||||||||||||
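As a minimal illustration of the "low-level coding" style of assignment described above, the sketch below trains a one-hidden-layer MLP on XOR with hand-written backpropagation in NumPy. The architecture, learning rate, and task are arbitrary toy choices, not the actual course assignments.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny one-hidden-layer MLP trained on XOR with plain gradient descent.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(5000):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: gradients of the mean cross-entropy loss.
        dz2 = (p - y) / len(X)
        dW2 = h.T @ dz2; db2 = dz2.sum(0)
        dh = dz2 @ W2.T * (1 - h ** 2)
        dW1 = X.T @ dh; db1 = dh.sum(0)
        # Gradient-descent update (in place).
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 1.0 * grad

    print(np.round(p.ravel(), 2))  # should be close to [0, 1, 1, 0]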
11-791 - Software Engineering for Information Systems | |||||||||||||||||||||||
Description | The Software Engineering for IT sequence combines classroom material and assignments in the fundamentals of software engineering (11-791) with a self-paced, faculty-supervised directed project (11-792). The two courses cover all elements of project design, implementation, evaluation, and documentation. For students intending to complete both courses, it is recommended that the project design and proof-of-concept prototype be completed and approved by the faculty advisor before the start of 11-792, if possible. Students may elect to take only 11-791; however, if both parts are taken, they should be taken in proper sequence. | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/%7Eehn/seit.html | ||||||||||||||||||||||
11-792 - Intelligent Information Systems Project | |||||||||||||||||||||||
Description | The Software Engineering for IS sequence combines classroom material and assignments in the fundamentals of software engineering (11-791) with a self-paced, faculty-supervised directed project (11-792). The two courses cover all elements of project design, implementation, evaluation, and documentation. Students may elect to take only 11-791; however, if both parts are taken, they should be taken in proper sequence. Prerequisite: 11-791. The course is required for VLIS students. | ||||||||||||||||||||||
Pre-Requisites | 11-791 - Software Engineering for Information Systems (required for VLIS students). | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/%7Eehn/seit.html | ||||||||||||||||||||||
11-794 - Inventing Future Services | |||||||||||||||||||||||
Description | Inventing the Future of Services is a course that focuses on the development of innovative thinking in a business environment. CMU graduates should not be waiting for their employers to tell them what to do; they should be driving radical innovation in their businesses. Drawing on 17 years' experience directing applied research at Accenture Technology Labs, the instructor teaches students systematic approaches to technology-driven business innovation in services industries. | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/~anatoleg/Inventing%20the%20Future%20of%20Services%20Course%20descr%20Fall%202011.htm | ||||||||||||||||||||||
11-795 - Seminar: Algorithms for Privacy and Security | |||||||||||||||||||||||
Description | Alice wants an answer from Bob. But she does not want Bob to know the question! Charlie puts up pictures on the web. Bob downloads one of them from Flickr. How can he be sure the picture was Charlie's and not a counterfeit from Mallory? How can a secret be distributed among N people so that a minimum of T of them must pool their knowledge in order to learn anything about it? Answers to questions such as these lie in a variety of computational fields such as Cryptography, Secure Multi-Party Computation, Watermarking, and Secret Sharing. In this course we will cover a variety of topics related to privacy and security, including basic cryptography, secret sharing, privacy-preserving computation, data hiding and steganography, and the latest algorithms for data mining with privacy. This will be a participatory course. Students will be required to present 1-3 papers during the semester. Papers must be analysed and presented in detail. Discussion and questions will be encouraged. Grading will be based on participation and presentation. | ||||||||||||||||||||||
Pre-Requisites | Recommended: Abstract Algebra, Number Theory. | ||||||||||||||||||||||
Course Site | http://www.cs.cmu.edu/afs/cs/user/bhiksha/WWW/courses/11-795.privacy/ | ||||||||||||||||||||||
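The threshold puzzle in the description (a secret split among N people so that any T of them can reconstruct it) is exactly what Shamir's secret sharing solves. The sketch below is a toy implementation over a prime field; the prime, secret, and parameters are illustrative, and real applications would use a vetted cryptographic library rather than code like this.

    import random

    PRIME = 2 ** 127 - 1  # a Mersenne prime, large enough for the toy secret below

    def make_shares(secret, n, t):
        """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, n=5, t=3)
    print(reconstruct(shares[:3]))   # any 3 shares recover 123456789
    print(reconstruct(shares[1:4]))  # a different subset works too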
11-796 - Question Answering Lab | |||||||||||||||||||||||
Description | The Question Answering Lab course provides a chance for hands-on, in-depth exploration of core algorithmic approaches to question answering (QA). Students will work independently or in small teams to extend or adapt existing QA modules and systems to improve overall performance on known QA datasets (e.g. TREC, CLEF, NTCIR, Jeopardy!), using best practices associated with the Open Advancement of Question Answering initiative. Projects will utilize existing components and systems from LTI (JAVELIN, Ephyra) and other open source projects (UIMA-AS, OAQA) running on a 10-node distributed computing cluster. Each student project will evaluate one or more component algorithms on a given QA dataset and produce a conference-style paper describing the experimental setup and results. Format: The course will require weekly in-class progress meetings with the instructors, in addition to individual self-paced work outside the classroom. | ||||||||||||||||||||||
Pre-Requisites | Intermediate Java programming skills. | ||||||||||||||||||||||
11-797 - Question Answering | |||||||||||||||||||||||
Description | The Question Answering course provides a chance for hands-on, in-depth exploration of core algorithmic approaches to question answering (QA). Students will work independently or in small teams to extend or adapt existing QA modules and systems to improve overall performance on known QA datasets (e.g. TREC, CLEF, NTCIR), using best practices associated with the Open Advancement of Question Answering initiative. Each student project will evaluate one or more component algorithms on a given QA dataset and produce a conference-style paper describing the system design, experimental setup and results. | ||||||||||||||||||||||
Pre-Requisites | Intermediate Java programming skills. | ||||||||||||||||||||||
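When component algorithms are evaluated on a QA dataset, one common scoring rule is token-overlap F1 between a system answer and a gold answer. The sketch below shows that metric in its simplest form; actual datasets define their own normalization and scoring details, so treat this as illustrative only.

    from collections import Counter

    def token_f1(prediction, gold):
        """Token-overlap F1 between a predicted answer string and a gold answer string."""
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(token_f1("the 16th president of the US", "16th president"))  # partial credit, 0.5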
11-823 - ConLanging: Learning About Linguistics and Language Technologies Through Construction of Artificial Languages | |||||||||||||||||||||||
Description | Students will work individually to create artificial human(oid) languages for fictional human cultures or SciFi worlds. Students will implement at least one language technology for their languages. In the course of creating the languages, students will learn about the building blocks of human language such as phones, phonemes, morphemes, and morpho-syntactic constructions, including their semantics and pragmatics. Class instruction will focus specifically on variation among human languages so that the students can make conlangs that are not just naively English-like. We will also touch on issues in the philosophy of language and on real-world socio-political issues related to language policy. Students will be required to use at least one of the following technologies: language documentation tools that are used for field linguistics and corpus annotation, automatic speech recognition, speech synthesis, morphological analysis, parsing, or machine translation. | ||||||||||||||||||||||
Course Site | http://tts.speech.cs.cmu.edu/11-823/ | ||||||||||||||||||||||
11-830 - Computational Ethics for NLP | |||||||||||||||||||||||
Description | As language technologies have become increasingly prevalent, there is a growing awareness that decisions we make about our data, methods, and tools are often tied up with their impact on people and societies. This course introduces students to real-world applications of language technologies and the potential ethical implications associated with their design. The class will study advanced topics in Natural Language Processing, in or near the intersection of Machine Learning, Linguistics, and Computational Social Science. Centered around classical and state-of-the-art research, lectures will cover philosophical foundations of ethical research along with concrete case studies and ethical challenges in development of intelligent systems. Methodologically, we will discuss how to analyze large scale text generated by people to or about other people and how to reason about it through text-, network-, and people-centered models. From an engineering perspective, there will be an emphasis on practical design and implementation of useful and ethical AI systems, with annotation and coding assignments and a course project. Topics include: misrepresentation and bias, including algorithms to identify biases in models and data and adversarial approaches to debiasing; privacy algorithms for demographic inference, personality profiling, and anonymization of demographic and personality traits; techniques to monitor civility in communication, and detecting trolling, hate speech, abusive language, cyberbullying, and toxic comments; propaganda and manipulation in news, and methods to identify fake news and political framing; healthcare and biomedical text processing and applications; low-resource NLP and its applications for disaster response and monitoring diseases in developing regions. | ||||||||||||||||||||||
Course Site | http://demo.clab.cs.cmu.edu/ethical_nlp/ | ||||||||||||||||||||||
11-899 - Summarization and Personal Information Management | |||||||||||||||||||||||
Description | The problem of information overload in personal communication media such as email, instant messaging, and on-line forums is a well documented phenomenon. Much work addressing this problem has been conducted separately in the human-computer interaction (HCI) community, the information sciences community, and the computational linguistics community. However, in each case, while important advancements in scientific knowledge have been achieved, the work suffers from an "elephant complex", where each community focuses mainly on just the part of the problem most visible from their own perspective. The purpose of this course is to bring these threads together to examine the issue of managing personal communication data from an integrated perspective. | ||||||||||||||||||||||
11-910 - Directed Research | |||||||||||||||||||||||
Description | This course number documents the research being done by Masters and pre-proposal PhD students. Every LTI graduate student will register for at least 24 units of 11-910 each semester, unless they are ABD (i.e., they have had a thesis proposal accepted), in which case they should register for 48 units of 11-930. The student will be expected to write a report and give a presentation at the end of the semester, documenting the research done. The report will be filed by either the faculty member or the LTI graduate program administrator. | ||||||||||||||||||||||
Pre-Requisites | Consent of Instructor. | ||||||||||||||||||||||
11-920 - Independent Study: Breadth | |||||||||||||||||||||||
Description | This course number is intended for individual study with faculty other than a student's intended thesis advisor. | ||||||||||||||||||||||
Pre-Requisites | Consent of advisor. Special Permission is required to register. | ||||||||||||||||||||||
11-925 - Independent Study: Area | |||||||||||||||||||||||
Description | This course number is intended for individual study with the intended thesis advisor prior to acceptance of a student's thesis proposal. | ||||||||||||||||||||||
Pre-Requisites | Consent of advisor. Special Permission is required to register. | ||||||||||||||||||||||
11-928 - Masters Thesis I | |||||||||||||||||||||||
Description | This course number is intended for last semester MLT students who wish to do an optional Masters Thesis. Please see the description of the optional Masters Thesis for more details. | ||||||||||||||||||||||
Pre-Requisites | Consent of advisor. | ||||||||||||||||||||||
11-929 - Masters Thesis II | |||||||||||||||||||||||
Description | This course number is intended for last semester Masters students who wish to do an optional Masters Thesis. The student will normally have taken 11-925 - Independent Study: Area of Concentration for 12 units in the preceding semester, to produce an MS Thesis Proposal. | ||||||||||||||||||||||
Pre-Requisites | Consent of advisor. | ||||||||||||||||||||||
11-930 - Dissertation Research | |||||||||||||||||||||||
Description | This course number is intended for PhD dissertation research after acceptance of a student's PhD thesis proposal. | ||||||||||||||||||||||
Pre-Requisites | Consent of advisor. | ||||||||||||||||||||||
11-935 - LTI Practicum | |||||||||||||||||||||||
Description | This course is an internship course for students who are doing an elective internship as part of their graduate degree. |