AI News, Christopher Manning
His research goal is computers that can intelligently process, understand, and generate human language material.
Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding.
He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning
has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates.
He received his PhD from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.
He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.
I've become lazy, so for recent stuff, you're often more likely to find it on the NLP group's publications page.
The general area of my research is robust but linguistically sophisticated natural language processing. Particular current topics include deep learning for NLP, Universal Dependencies and dependency parsing, language learning through interaction, and reading comprehension.
I am interested in new students, at or accepted to Stanford, who want to work in the area of Natural Language Processing. A good way to learn about my current work is to look at my papers or my group's research page (this also applies to social science students who think it might be a good match).
We haven't found the time to revise it and teach a second version, but you
using large corpora, statistical models for acquisition, disambiguation,
previously taught it in Winter 2002 (née Ling 236) and Winter
LaTeX: When I used to have more time (i.e., when I was a grad student), I used to
Interactive Language Learning
Today, natural language interfaces (NLIs) on computers or phones are typically trained once and then deployed, and users must simply live with their limitations.
Language acquisition research provides considerable evidence that human children require interaction to learn language, as opposed to passively absorbing it, for example from watching TV (Kuhl et al., 2003; Sachs et al., 1981).
We think that interactivity is important, and that an interactive language learning setting will enable adaptive and customizable systems, especially for resource-poor languages and new domains where starting from close to scratch is unavoidable.
While the human can teach the computer any language (in our pilot, Mechanical Turk users tried English, Arabic, Polish, and a custom programming language), a good human player will choose utterances that the computer is likely to learn from quickly.
Event scheduling is a common yet unsolved task: while several available calendar programs allow limited natural language input, in our experience they all fail as soon as they are given something slightly complicated, such as ‘Move all the Tuesday afternoon appointments back an hour’.
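Under the hood, such a command decomposes into a filter over events plus a transformation of the matched ones. A minimal sketch of that decomposition (the event representation and helper here are hypothetical illustrations, not our system's actual internals):

```python
from datetime import datetime, timedelta

# Hypothetical in-memory calendar; a real system stores much richer events.
events = [
    {"title": "standup",   "start": datetime(2017, 5, 2, 9, 0)},   # Tuesday morning
    {"title": "design",    "start": datetime(2017, 5, 2, 14, 0)},  # Tuesday afternoon
    {"title": "interview", "start": datetime(2017, 5, 3, 15, 0)},  # Wednesday afternoon
]

def is_tuesday_afternoon(e):
    # weekday() == 1 is Tuesday; "afternoon" taken here as 12:00-18:00
    return e["start"].weekday() == 1 and 12 <= e["start"].hour < 18

# "Move all the Tuesday afternoon appointments back an hour"
# (reading "back" as "an hour later" -- itself an assumption)
for e in events:
    if is_tuesday_afternoon(e):
        e["start"] += timedelta(hours=1)

print([e["start"].hour for e in events])  # -> [9, 15, 15]
```

Note that even this toy version had to commit to interpretations ("afternoon" as 12:00-18:00, "back" as later), which is exactly the kind of ambiguity an interactive system can resolve from user feedback.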
Furthermore, aiming to expand our learning methodology from definition to demonstration, we chose this domain because most users are already familiar with the standard calendar GUI and have an intuition for manipulating it manually.
Additionally, as calendar NLIs are already deployed, particularly on mobile, we hoped users would naturally be inclined to use natural language phrasing rather than the more technical language we saw in the blocks world domain.
In our pilot, user feedback was provided by scrolling and selecting the proper action for a given utterance, a process that is both unnatural and unscalable for large action spaces.
We expanded our system to receive feedback through demonstration, as it is (1) natural for people, especially when using a calendar, allowing for easy data collection, and (2) informative for language learning, in a form that current machine learning methods can leverage.
For our calendar, we abandoned the individualized, user-specific language model in favor of a collective community model, in which a single set of grammar rules and parameters is learned across all users and interactions.
Out of 356 total utterances, in 196 cases the worker selected a state from the suggested ranked list as the desired calendar state, and in 68 cases the worker used the calendar GUI to manually modify the state and submit feedback by demonstration.
A categorized sample of commands collected in our experiment.

To assess learning performance, we measure the system's ability to correctly predict the calendar action given a natural language command.
For example, when a user rephrases “my meetings tomorrow morning” as “my meetings tomorrow after 7 am and before noon”, we can infer the meaning of “morning”.
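One simple way to exploit such rephrasings is to align the two utterances and treat the differing spans as candidate paraphrases of each other. A toy sketch of that alignment step (illustrative only; the actual grammar induction is more involved than prefix/suffix matching):

```python
def align_paraphrase(failed, rephrased):
    """Return the differing spans of two utterances that share a common frame."""
    f, r = failed.split(), rephrased.split()
    # Strip the longest common prefix of the two token sequences.
    i = 0
    while i < min(len(f), len(r)) and f[i] == r[i]:
        i += 1
    # Strip the longest common suffix (without crossing the prefix).
    j = 0
    while j < min(len(f), len(r)) - i and f[-1 - j] == r[-1 - j]:
        j += 1
    return " ".join(f[i:len(f) - j]), " ".join(r[i:len(r) - j])

pair = align_paraphrase(
    "my meetings tomorrow morning",
    "my meetings tomorrow after 7 am and before noon",
)
print(pair)  # -> ('morning', 'after 7 am and before noon')
```

Once "after 7 am and before noon" has a known interpretation as a time filter, the aligned span gives the system a hypothesis about what "morning" denotes.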
Stanford AI Lab's Outreach
SAIL won a number of best paper awards this year. SAIL is also delighted to announce that JD.com, China's largest retailer, has agreed to establish the SAIL JD AI Research Initiative, a sponsored research program at the Stanford Artificial Intelligence Lab.
The collaboration will fund research into a range of areas including natural language processing, computer vision, robotics, machine learning, deep learning, reinforcement learning, and forecasting.
The SAIL Affiliates Program is pleased to welcome Google, the largest internet-related technology company, providing advertising, search, cloud computing, software, and hardware technologies, and DiDi, a major ride-sharing company that provides transportation services for close to 400 million users across over 400 cities in China.
- On Monday, June 17, 2019
Lecture 1 | Natural Language Processing with Deep Learning
Lecture 1 introduces the concept of Natural Language Processing (NLP) and the problems NLP faces today. The concept of representing words as numeric ...
Lecture 3 | GloVe: Global Vectors for Word Representation
Lecture 3 introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by ...
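GloVe starts from a global word-word co-occurrence matrix and fits word and context vectors so that their dot products approximate log co-occurrence counts. A toy sketch of the counting step the model builds on (tiny corpus and window size chosen purely for illustration):

```python
from collections import Counter

# Tiny corpus; GloVe is trained on corpora of billions of tokens.
corpus = "the cat sat on the mat the cat ate".split()
window = 2  # symmetric context window

cooc = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[(w, corpus[j])] += 1  # count context word within the window

# GloVe then fits vectors w_i and context vectors w̃_j so that
# w_i · w̃_j + b_i + b̃_j ≈ log X_ij for each nonzero count X_ij.
print(cooc[("the", "cat")])  # -> 2
```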
Stanford Seminar: Google's Multilingual Neural Machine Translation System
EE380: Computer Systems Colloquium Seminar. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Speaker: Melvin ...
Lecture 13: Convolutional Neural Networks
Lecture 13 provides a mini tutorial on Azure and GPUs followed by research highlight "Character-Aware Neural Language Models." Also covered are CNN ...
11. Introduction to Machine Learning
MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. View the complete course: Instructor: Eric Grimson ...
Lecture 7: Introduction to TensorFlow
Lecture 7 covers TensorFlow. TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by ...
Lecture 18: Tackling the Limits of Deep Learning for NLP
Lecture 18 looks at tackling the limits of deep learning for NLP followed by a few presentations.
How does speech recognition software work? | Computer Science - The Royal Institution Lectures
Suitable for teaching 11 to 15s. Professor Sophie Scott explains how a computer decodes the sounds that we make and turns them into words. Subscribe for ...
Lesson 10: Deep Learning Part 2 2018 - NLP Classification and Translation
NB: Please go to to view this video since there is important updated information there. If you have questions, use the forums at ...
Chunking - Natural Language Processing With Python and NLTK p.5
Chunking in Natural Language Processing (NLP) is the process by which we group various words together by their part of speech tags. One of the most popular ...
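In NLTK this is typically done with a tag-pattern grammar such as "NP: {<DT>?<JJ>*<NN>}" passed to nltk.RegexpParser. A dependency-free sketch of the same idea over a pre-tagged sentence (the greedy matcher below is a simplification of what RegexpParser does):

```python
# POS-tagged sentence, with tags as NLTK's pos_tag would produce them.
tagged = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),
          ("dog", "NN"), ("barked", "VBD"), ("at", "IN"),
          ("the", "DT"), ("cat", "NN")]

def np_chunks(tagged):
    """Greedily group tokens into NP chunks matching DT? JJ* NN, left to right."""
    chunks, i = [], 0
    while i < len(tagged):
        j = i
        if j < len(tagged) and tagged[j][1] == "DT":   # optional determiner
            j += 1
        while j < len(tagged) and tagged[j][1] == "JJ":  # any number of adjectives
            j += 1
        if j < len(tagged) and tagged[j][1].startswith("NN"):  # required noun
            chunks.append([w for w, _ in tagged[i:j + 1]])
            i = j + 1
        else:
            i += 1  # no NP starting here; move on
    return chunks

print(np_chunks(tagged))  # -> [['the', 'little', 'yellow', 'dog'], ['the', 'cat']]
```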