AI News, Simons Institute for the Theory of Computing

Simons Institute for the Theory of Computing

The final component of the program is understanding heuristics: what works in practice, and why. The most popular algorithms for a variety of basic statistical tasks (clustering, embedding, and so on) behave in a manner that is not fully understood. Some, like principal component analysis, have strong properties, but are used in ways that cannot directly be justified by appealing to these properties. Others, like k-means, have obvious failure modes in a worst-case setting, and yet are quite successful on many types of data. The program will bring together theoreticians and practitioners who are interested in teasing apart these issues and in expanding the useful formal characterizations of such procedures.

Machine learning

Machine learning is a field of computer science that gives computer systems the ability to 'learn' (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.[1] The name machine learning was coined in 1959 by Arthur Samuel.[2] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[3] machine learning explores the study and construction of algorithms that can learn from and make predictions on data[4] – such algorithms avoid following strictly static program instructions by making data-driven predictions or decisions,[5]:2 through building a model from sample inputs.

Machine learning is sometimes conflated with data mining,[8] where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning.[5]:vii[9] Machine learning can also be unsupervised[10] and be used to learn and establish baseline behavioral profiles for various entities[11] and then used to find meaningful anomalies.

These analytical models allow researchers, data scientists, engineers, and analysts to 'produce reliable, repeatable decisions and results' and uncover 'hidden insights' through learning from historical relationships and trends in the data.[12] Effective machine learning is difficult because finding patterns is hard and often not enough training data are available; as a result, machine learning programs often fail to deliver.

Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: 'A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.'[15] This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms.

Machine learning tasks are typically classified into two broad categories, depending on whether a learning 'signal' or 'feedback' is available to the learning system. Another categorization of machine learning tasks arises when one considers the desired output of a machine-learned system.[5]:3 Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience.

Developmental learning, elaborated for robot learning, generates its own sequences (also called curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.

Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[19]:488 By 1980, expert systems had come to dominate AI, and statistics was out of favor.[20] Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.[19]:708–710

Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).

Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge.

According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[22] He also suggested the term data science as a placeholder to call the overall field.[22] Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model,[23] wherein 'algorithmic model' means more or less the machine learning algorithms like random forests.

Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into (high-dimensional) vectors.[29] Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features.

In machine learning, genetic algorithms found some uses in the 1980s and 1990s.[33][34] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[35] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate, or apply knowledge.

They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[37] Applications of machine learning are numerous. In 2006, the online movie company Netflix held the first 'Netflix Prize' competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%.

A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[43] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ('everything is a recommendation') and changed its recommendation engine accordingly.[44] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[45]

In 2012, co-founder of Sun Microsystems Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[46] In 2014, it was reported that a machine learning algorithm had been applied in art history to study fine art paintings, and that it may have revealed previously unrecognized influences between artists.[47] Classification machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally 2/3 training and 1/3 test) and evaluates the performance of the trained model on the test set.
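
As a hedged illustration of the holdout method described above, the following Python sketch (assuming scikit-learn is available, with synthetic data standing in for a real dataset) splits the data 2/3 for training and 1/3 for testing and scores the trained model on the held-out test set:

```python
# Minimal holdout-validation sketch; the dataset and model are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic classification data standing in for a real dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Conventional 2/3 training, 1/3 test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```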

Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[49] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[50][51] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning.

Basic Concepts in Machine Learning

I have found that the best way to discover and get a handle on the basic concepts in machine learning is to review the introductory chapters of machine learning textbooks and to watch the videos from the first module of online courses.

There are tens of thousands of machine learning algorithms and hundreds of new algorithms are developed every year.

Every machine learning algorithm has three components: representation, evaluation, and optimization. All machine learning algorithms are combinations of these three components.

There are four types of machine learning: supervised, unsupervised, semi-supervised, and reinforcement learning. Supervised learning is the most mature, the most studied, and the type of learning used by most machine learning algorithms.

Machine learning algorithms are only a very small part of using machine learning in practice as a data analyst or data scientist.

From the perspective of inductive learning, we are given input samples (x) and output samples (f(x)) and the problem is to estimate the function (f).

Specifically, the problem is to generalize from the samples and from the mapping so that the estimate is useful for predicting the output for new samples in the future.
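
A minimal sketch of this idea, using made-up noisy samples of an unknown function and a polynomial hypothesis space (the particular function and degree are illustrative assumptions, not anything prescribed by the post):

```python
# Inductive learning sketch: estimate an unknown f from (x, f(x)) samples,
# then use the estimate to predict outputs for new inputs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)  # noisy samples of an unknown f

# Hypothesis space: degree-5 polynomials; fit by least squares.
coeffs = np.polyfit(x, y, deg=5)
f_hat = np.poly1d(coeffs)

x_new = np.array([-2.0, 0.5, 2.5])        # previously unseen inputs
print("predicted f(x_new):", f_hat(x_new))
print("true f(x_new):     ", np.sin(x_new))
```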

Other topics include the terminology used in machine learning, the key issues in machine learning, three concerns when choosing a hypothesis space, and three properties by which you could choose an algorithm. In this post you discovered the basic concepts in machine learning.

In summary, these are the basic concepts that are covered in the introduction to most machine learning courses and in the opening chapters of any good textbook on the topic.

Although these concepts are targeted at academics, it is useful for a practitioner to have a firm footing in them in order to better understand how machine learning algorithms behave in the general sense.

Reinforcement learning

Reinforcement learning (RL) is an area of machine learning inspired by behaviourist psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.

The problem, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms.

In the operations research and control literature, reinforcement learning is called approximate dynamic programming. The approach has been studied in the theory of optimal control, though most studies there are concerned with the existence of optimal solutions and their characterization, and not with learning or approximation. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality. In machine learning, the environment is typically formulated as a Markov decision process (MDP), as many reinforcement learning algorithms for this context utilize dynamic programming techniques.[1] The main difference between the classical dynamic programming techniques and reinforcement learning algorithms is that the latter do not need knowledge of the MDP and they target large MDPs where exact methods become infeasible. Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor are sub-optimal actions explicitly corrected.

Instead the focus is on performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).[2] The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and in finite MDPs. Basic reinforcement learning is modeled as a Markov decision process consisting of: a set of environment and agent states S; a set of actions A of the agent; probabilities of transitioning between states under each action; rewards associated with transitions; and rules that describe what the agent observes. Rules are often stochastic.

The observation typically involves the scalar, immediate reward associated with the last transition.

In many works, the agent is assumed to observe the current environmental state (full observability).

Sometimes the set of actions available to the agent is restricted (a zero balance cannot be reduced).

A reinforcement learning agent interacts with its environment in discrete time steps. At each step the agent receives an observation (typically including the most recent reward) and chooses an action from the set of available actions, which is subsequently sent to the environment.

The goal of a reinforcement learning agent is to collect as much reward as possible.

The agent can (possibly randomly) choose any action as a function of the history.

When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of regret.

In order to act near optimally, the agent must reason about the long term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative.

Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off.

It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers,[3] and Go (AlphaGo).

Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments.

Thanks to these two key components, reinforcement learning can be used in large environments in the following situations: when a model of the environment is known but an analytic solution is not available; when only a simulation model of the environment is given (the subject of simulation-based optimization); and when the only way to collect information about the environment is to interact with it. The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered a genuine learning problem.

However, reinforcement learning converts both planning problems to machine learning problems.

Randomly selecting actions, without reference to an estimated probability distribution, shows poor performance.

The case of (small) finite Markov decision processes is relatively well understood.

However, due to the lack of algorithms that provably scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.

One such method is ε-greedy, in which the agent chooses the action that it believes has the best long-term effect with probability 1 − ε, and chooses an action uniformly at random otherwise. Here 0 < ε < 1 is a tuning parameter, which is sometimes changed, either according to a fixed schedule (making the agent explore progressively less), or adaptively based on heuristics.[5] Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains to use past experience to find out which actions are good.
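
A minimal sketch of ε-greedy action selection over a table of action-value estimates; the Q table, state, and values below are placeholders, not any particular library's API:

```python
# Epsilon-greedy action selection over tabulated action-value estimates Q[(state, action)].
import random

def epsilon_greedy(Q, state, n_actions, epsilon=0.1):
    """With probability epsilon explore uniformly; otherwise exploit argmax_a Q[(state, a)]."""
    if random.random() < epsilon:
        return random.randrange(n_actions)                       # explore
    values = [Q[(state, a)] for a in range(n_actions)]
    return max(range(n_actions), key=lambda a: values[a])        # exploit

Q = {(0, a): v for a, v in enumerate([0.1, 0.5, 0.2])}           # made-up estimates
print(epsilon_greedy(Q, state=0, n_actions=3, epsilon=0.1))
```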

The agent's action selection is modeled as a map called a policy, which gives the probability of taking action a when in state s.[6]:61 There are also non-probabilistic policies.

Hence, roughly speaking, the value function estimates 'how good' it is to be in a given state.[6]:60 Formally, the value of a state s under a policy π is V_π(s) = E[R | s_0 = s], where the random variable R denotes the return and is defined as the sum of future discounted rewards, R = Σ_{t≥0} γ^t r_t, where r_t is the reward at step t and γ (with 0 ≤ γ < 1) is the discount factor.
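
A small illustration of the discounted return defined above, with an invented reward sequence and discount factor:

```python
# Discounted return R = sum_t gamma^t * r_t for an example reward sequence.
def discounted_return(rewards, gamma=0.9):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 0.0, 5.0], gamma=0.9))  # 1 + 0.9**3 * 5
```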

From the theory of MDPs it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies.

A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the agent's observation history).

A deterministic stationary policy deterministically selects actions based on the current state.

Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.

The brute force approach entails two steps: first, for each possible policy, sample returns while following it; second, choose the policy with the largest expected return. One problem with this is that the number of policies can be large, or even infinite.

Another is that variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.

These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others.

The two main approaches for achieving this are value function estimation and direct policy search.

Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the 'current' [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition).

To define optimality in a formal manner, define the value of a policy π in a state s, V_π(s), as the expected return when starting in s and following π, as above, and let V*(s) denote the largest such value achievable by any policy. A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return over start states drawn from the initial distribution.

Although state-values suffice to define optimality, it is useful to define action-values.

For a state s and action a, the action-value Q_π(s, a) stands for the expected return associated with first taking action a in state s and following the policy π thereafter. If π* is an optimal policy, we act optimally (take the optimal action) by choosing, in each state, the action with the highest value under Q_{π*}. The action-value function of such an optimal policy is called the optimal action-value function and is commonly denoted by Q*.

In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the MDP, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration.

Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) MDPs.
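
To make the dynamic-programming view concrete for one of those small cases, here is a hedged value-iteration sketch on a tiny, made-up two-state MDP; the transition probabilities and rewards are purely illustrative:

```python
# Value-iteration sketch on a toy finite MDP.
# P[s][a] is a list of (probability, next_state, reward) triples; all numbers are invented.
import numpy as np

P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma, n_states, n_actions = 0.9, 2, 2
V = np.zeros(n_states)

for _ in range(200):  # repeat Bellman optimality backups until (approximate) convergence
    V_new = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in range(n_actions))
        for s in range(n_states)
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal state values:", V)
```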

In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.

Policy iteration consists of two steps: policy evaluation and policy improvement.

Assume (for simplicity) that the MDP is finite, that sufficient memory is available to accommodate the action-values, that the problem is episodic, and that after each episode a new one starts from some random initial state.

Given sufficient time, this procedure can thus construct a precise estimate Q of the action-value function Q_π.

In the policy improvement step, the next policy is obtained by computing a greedy policy with respect to Q: given a state s, this new policy returns an action that maximizes Q(s, ·).

In practice lazy evaluation can defer the computation of the maximizing actions to when they are needed.

Problems with this procedure include: it may spend too much time evaluating a suboptimal policy; it uses samples inefficiently, in that a long trajectory improves the estimate only for the state-action pair that started it; when the returns along trajectories have high variance, convergence is slow; it works in episodic problems only; and it works in small, finite MDPs only. The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle.

Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them.

This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's[7][8] temporal difference (TD) methods that are based on the recursive Bellman equation.

Note that the computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch).

Batch methods, such as the least-squares temporal difference method[9], may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity.

In order to address the fifth issue, function approximation methods are used.

With function approximation, the action-values are represented by a parameterized function (for example, a linear combination of features of the state-action pairs) with a vector of weights; the algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs.

Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.

Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants.[10]
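
A hedged sketch of the standard tabular Q-learning update rule (the generic update, not any specific paper's implementation; states, actions, and numbers below are placeholders):

```python
# Tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.9):
    best_next = max(Q[(s_next, a2)] for a2 in range(n_actions))
    td_target = r + gamma * best_next
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

Q = defaultdict(float)                       # action-values default to 0
q_learning_update(Q, s=0, a=1, r=1.0, s_next=2, n_actions=3)
print(dict(Q))
```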

The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy.

Using the so-called compatible function approximation method compromises generality and efficiency.

Most TD methods have a so-called λ parameter (0 ≤ λ ≤ 1) that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations.

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization.

Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ, let π_θ denote the policy associated with θ. Defining the performance function as the expected return under π_θ, under mild conditions this function will be differentiable as a function of the parameter vector θ.

Since an analytic expression for the gradient is not available, only a noisy estimate is available.

Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE[11] method (which is known as the likelihood ratio method in the simulation-based optimization literature).[12] Policy search methods have been used in the robotics context.[13] Many policy search methods may get stuck in local optima (as they are based on local search).
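
As a hedged illustration of the likelihood-ratio idea, the sketch below runs a REINFORCE-style update for a softmax policy on an invented two-armed bandit; the reward means, step size, and iteration count are assumptions chosen only for illustration:

```python
# REINFORCE-style likelihood-ratio gradient sketch for a softmax policy on a toy bandit.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                 # one preference parameter per action
true_means = np.array([0.2, 1.0])   # hypothetical expected rewards per action

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

alpha = 0.05
for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_means[a], 1.0)   # sampled (noisy) reward
    grad_log_pi = -probs                 # grad of log pi(a): one-hot(a) minus probabilities
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi     # noisy gradient ascent on expected reward

print("learned action probabilities:", softmax(theta))  # should favor the better arm
```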

A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search, and methods of evolutionary computation.

Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

They have demonstrated good performance in multiple domains. Policy search methods may converge slowly given noisy data.

For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large.

Value-function based methods that rely on temporal differences might help in this case.

In recent years, actor–critic methods have been proposed and performed well on various problems.[14] Both the asymptotic and finite-sample behavior of most algorithms is well understood.

Algorithms with provably good online performance (addressing the exploration issue) are known.

Efficient exploration of large MDPs is largely unexplored (except for the case of bandit problems). Although finite-time performance bounds have appeared for many algorithms, these bounds are expected to be rather loose, and thus more work is needed to better understand the relative advantages and limitations.

Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).

Multiagent or distributed reinforcement learning is a topic of ongoing research interest.

Applications are expanding.[15] Reinforcement learning algorithms such as TD learning are under investigation as a model for dopamine-based learning in the brain.

In this model, the dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error.

Reinforcement learning has been used as a part of the model for human skill learning, especially in relation to the interaction between implicit and explicit learning in skill acquisition (the first publication on this application was in 1995-1996).[16] Since the work on learning Atari TV games by Google DeepMind,[17] end-to-end reinforcement learning or deep reinforcement learning has been garnering attention.

This approach extends reinforcement learning to the entire process from sensors to motors by implementing it with an artificial neural network, especially a deep network, without designing the state space or action space explicitly.

It can minimize interference from human design, and this flexible, purposive learning over a huge number of degrees of freedom enables game strategy and other necessary functions to emerge inside the system.

In inverse reinforcement learning, no reward function is given; instead, the reward function is inferred from observed behavior of an expert.

The idea is to mimic the observed behavior, which is often optimal or close to optimal.[18] In apprenticeship learning, an expert demonstrates the ideal behavior.

The system tries to recover the policy directly using observations from the expert.

Most reinforcement learning papers are published at the major machine learning and AI conferences (ICML, NIPS, AAAI, IJCAI, UAI, AI and Statistics) and journals (JAIR, JMLR, Machine learning journal, IEEE T-CIAIG).

Control researchers publish their papers at the CDC and ACC conferences, or, e.g., in the journals IEEE Transactions on Automatic Control, or Automatica, although applied works tend to be published in more specialized journals.

Beyond these, papers are also published in the major conferences of the neural networks, fuzzy systems, and evolutionary computation communities.

The annual IEEE symposium titled Approximate Dynamic Programming and Reinforcement Learning (ADPRL) and the biannual European Workshop on Reinforcement Learning (EWRL) are two regularly held meetings where RL researchers meet.

41 Essential Machine Learning Interview Questions (with answers)

We’ve traditionally seen machine learning interview questions pop up in several categories.

A third category has to do with your general interest in machine learning: you'll be asked about what's going on in the industry and how you keep up with the latest machine learning trends.

Finally, there are company or industry-specific questions that test your ability to take your general machine learning knowledge and turn it into actionable points to drive the bottom line forward.

We’ve divided this guide to machine learning interview questions into the categories we mentioned above so that you can more easily get to the information you need when it comes to machine learning interview questions.

Too much bias (an overly simple model) can lead to the model underfitting your data, making it hard for it to have high predictive accuracy and for you to generalize your knowledge from the training set to the test set.

The bias-variance decomposition essentially decomposes the learning error from any algorithm into the bias, the variance, and a bit of irreducible error due to noise in the underlying dataset.
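
A small numerical sketch of this decomposition, assuming an intentionally underfitting linear model on a quadratic target; all data here are simulated, and the exact numbers are illustrative only:

```python
# For a fixed test point, the expected squared error of a too-simple fit splits into
# bias^2 + variance + irreducible noise.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2          # true function
sigma = 0.5                   # noise standard deviation
x_test = 1.5
preds = []

for _ in range(2000):                          # many resampled training sets
    x = rng.uniform(-2, 2, 30)
    y = f(x) + rng.normal(0, sigma, 30)
    w = np.polyfit(x, y, deg=1)                # underfitting linear model
    preds.append(np.polyval(w, x_test))

preds = np.array(preds)
bias_sq = (preds.mean() - f(x_test)) ** 2
variance = preds.var()
noisy_targets = f(x_test) + rng.normal(0, sigma, preds.size)
print("bias^2 + variance + noise ~", bias_sq + variance + sigma ** 2)
print("mean squared error        ~", np.mean((preds - noisy_targets) ** 2))
```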

For example, in order to do classification (a supervised learning task), you’ll need to first label the data you’ll use to train the model to classify data into your labeled groups.

K-means clustering requires only a set of unlabeled points and a number of clusters k: the algorithm takes the unlabeled points and gradually learns how to cluster them into k groups by repeatedly assigning each point to its nearest cluster centroid and recomputing each centroid as the mean of its assigned points.
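
A minimal k-means sketch, assuming scikit-learn and synthetic blob data, with the number of clusters chosen by the user:

```python
# k-means clustering on synthetic unlabeled points.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # unlabeled points
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("cluster centroids:\n", km.cluster_centers_)
print("first ten assignments:", km.labels_[:10])
```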

It’s often used as a proxy for the trade-off between the sensitivity of the model (true positives) vs the fall-out or the probability it will trigger a false alarm (false positives).

More reading: Precision and recall (Wikipedia). Recall is also known as the true positive rate: the number of positives your model correctly identifies compared to the actual number of positives in the data.

Precision is also known as the positive predictive value: the number of true positives your model identifies compared to the total number of positives it claims.

It can be easier to think of recall and precision in the context of a case where you’ve predicted that there were 10 apples and 5 oranges in a case of 10 apples.
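
One way to read that example (treating the 15 predictions as 10 correct apples and 5 oranges mistakenly called apples, against 10 actual apples) gives a precision of 2/3 and a recall of 1, as in this small sketch:

```python
# Worked precision/recall numbers under one interpretation of the apples/oranges example.
true_positives = 10      # apples correctly identified
false_positives = 5      # oranges mistakenly called apples
false_negatives = 0      # apples that were missed

precision = true_positives / (true_positives + false_positives)   # 10/15 ~ 0.667
recall = true_positives / (true_positives + false_negatives)      # 10/10 = 1.0
print(f"precision={precision:.3f}, recall={recall:.3f}")
```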

Mathematically, it is expressed as the true positive rate weighted by the prior prevalence of the condition, divided by the overall probability of a positive result: true positives from those with the condition plus false positives from the rest of the population.
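
In symbols, that verbal description corresponds to the standard form of Bayes' theorem, where A is the condition (for example, having the flu) and B is the observed evidence (for example, a positive test):

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}
```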

Say you had a 60% chance of actually having the flu after a flu test, but out of people who had the flu, the test will be false 50% of the time, and the overall population only has a 5% chance of having the flu.

(Quora) Despite its practical applications, especially in text mining, Naive Bayes is considered 'naive' because it makes an assumption that is virtually impossible to see in real-life data: it treats the features as conditionally independent, so the conditional probability is calculated as the pure product of the individual probabilities of the components.

A clever way to think about this is to think of Type I error as telling a man he is pregnant, while Type II error means you tell a pregnant woman she isn't carrying a baby.

More reading: Deep learning (Wikipedia) Deep learning is a subset of machine learning that is concerned with neural networks: how to use backpropagation and certain principles from neuroscience to more accurately model large sets of unlabelled or semi-structured data.

More reading: Using k-fold cross-validation for time-series model selection (CrossValidated). Instead of using standard k-fold cross-validation, you have to pay attention to the fact that a time series is not randomly distributed data: it is inherently ordered chronologically, so you should validate with forward chaining, training on past data and testing on later data.
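
A hedged forward-chaining sketch using scikit-learn's TimeSeriesSplit on a dummy ordered sequence; each fold trains only on the past and validates on the block that follows:

```python
# Forward-chaining splits for time-ordered data.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)   # time-ordered observations (dummy values)
y = np.arange(20)

for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(X):
    # Training indices always precede the test indices in time.
    print("train:", train_idx, "| test:", test_idx)
```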

More reading: Pruning (decision trees) Pruning is what happens in decision trees when branches that have weak predictive power are removed in order to reduce the complexity of the model and increase the predictive accuracy of a decision tree model.

For example, if you wanted to detect fraud in a massive dataset with a sample of millions, a model optimizing for accuracy would most likely predict no fraud at all if only a tiny minority of cases were fraud.

More reading: Regression vs Classification (Math StackExchange). Classification produces discrete values and maps a dataset to strict categories, while regression gives you continuous results that allow you to better distinguish differences between individual points.

You would use classification over regression if you wanted your results to reflect the belongingness of data points in your dataset to certain explicit categories (ex: If you wanted to know whether a name was male or female rather than just how correlated they were with male and female names.) Q21- Name an example where ensemble techniques might be useful.

They typically reduce overfitting in models and make the model more robust (unlikely to be influenced by small changes in the training data).  You could list some examples of ensemble methods, from bagging to boosting to a “bucket of models” method and demonstrate how they could increase predictive power.
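
For instance, here is a hedged sketch comparing a single decision tree with bagging and boosting ensembles on synthetic data; scikit-learn is assumed, and the dataset and scores are illustrative only:

```python
# Compare one tree against two common ensemble methods on the same train/test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("single tree", DecisionTreeClassifier(random_state=0)),
                  ("bagging", BaggingClassifier(n_estimators=100, random_state=0)),
                  ("boosting", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))
```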

(Quora) This is a simple restatement of a fundamental problem in machine learning: the possibility of overfitting training data and carrying the noise of that data through to the test set, thereby providing inaccurate generalizations.

There are three main methods to avoid overfitting: 1- Keep the model simpler: reduce variance by taking into account fewer variables and parameters, thereby removing some of the noise in the training data. 2- Use cross-validation techniques such as k-fold cross-validation. 3- Use regularization techniques (such as LASSO) that penalize model parameters likely to cause overfitting.

More reading: How to Evaluate Machine Learning Algorithms (Machine Learning Mastery) You would first split the dataset into training and test sets, or perhaps use cross-validation techniques to further segment the dataset into composite sets of training and test sets within the data.
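
A small k-fold cross-validation sketch along those lines, with a synthetic dataset and a placeholder model (scikit-learn assumed):

```python
# k-fold cross-validation as an alternative to a single train/test split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=400, n_features=15, random_state=1)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=1))
print("per-fold accuracy:", scores.round(3), "mean:", scores.mean().round(3))
```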

More reading: Kernel method (Wikipedia). The kernel trick involves kernel functions that make it possible to operate in a high-dimensional, implicit feature space without explicitly calculating the coordinates of points within that space: instead, kernel functions compute the inner products between the images of all pairs of data in the feature space.

This gives them the very useful property of working with higher-dimensional representations while being computationally cheaper than the explicit calculation of those coordinates. Many algorithms can be expressed in terms of inner products.
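
A quick numerical check of this idea, using the polynomial kernel (x·z + 1)² and an explicit degree-2 feature map for 2-D inputs; the vectors are arbitrary examples:

```python
# The kernel value equals the inner product of explicit feature maps,
# computed without ever forming those features.
import numpy as np

def poly_kernel(x, z):
    return (np.dot(x, z) + 1) ** 2

def explicit_features(v):
    # Feature map whose inner product reproduces (x.z + 1)^2 for 2-D inputs.
    x1, x2 = v
    return np.array([x1**2, x2**2, np.sqrt(2)*x1*x2, np.sqrt(2)*x1, np.sqrt(2)*x2, 1.0])

x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(poly_kernel(x, z))                                    # kernel value
print(np.dot(explicit_features(x), explicit_features(z)))   # same value via explicit map
```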

More reading: Writing pseudocode for parallel programming (Stack Overflow) This kind of question demonstrates your ability to think in parallelism and how you could handle concurrency in programming implementations dealing with big data.

For example, if you were interviewing for music-streaming startup Spotify, you could remark that your skills at developing a better recommendation model would increase user retention, which would then increase revenue in the long run.

The startup metrics Slideshare linked above will help you understand exactly what performance indicators are important for startups and tech companies as they think about revenue and growth.

Your interviewer is trying to gauge if you’d be a valuable member of their team and whether you grasp the nuances of why certain things are set the way they are in the company’s data process based on company- or industry-specific conditions.

This overview of deep learning in Nature by the scions of deep learning themselves (from Hinton to Bengio to LeCun) can be a good reference paper and an overview of what's happening in deep learning.

More reading: Mastering the game of Go with deep neural networks and tree search (Nature). AlphaGo beating Lee Sedol, the best human player at Go, in a best-of-five series was a truly seminal event in the history of machine learning and deep learning.

The Nature paper above describes how this was accomplished with “Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” Cover image credit: https://www.flickr.com/photos/iwannt/8596885627

Improving Machine Learning Beyond the Algorithm

User interaction data is at the heart of interactive machine learning systems (IMLSs), such as voice-activated digital assistants, e-commerce destinations, news content hubs, and movie streaming...

1. Introduction to Statistics

NOTE: This video was recorded in Fall 2017. The rest of the lectures were recorded in Fall 2016, but video of Lecture 1 was not available. MIT 18.650 Statistics for Applications, Fall 2016...

Mod-01 Lec-39 Genetic Algorithms contd...

Design and Optimization of Energy Systems by Prof. C. Balaji , Department of Mechanical Engineering, IIT Madras. For more details on NPTEL visit

Lecture 03 -The Linear Model I

The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms. Lecture 3 of 18 of Caltech's Machine Learning Course - CS 156 by Professor...

A Framework for Interactive Learning

A Google Algorithms TechTalk, 10/9/17, presented by David Kempe, USC Talks from visiting speakers on Algorithms, Theory, and Optimization.

Economics and Probabilistic Machine Learning

David Blei of Columbia University opens the Becker Friedman Institute's conference on machine learning in economics with an overview of how probabilistic machine learning techniques can be...

Ray Kurzweil: "How to Create a Mind" | Talks at Google

How to Create a Mind: The Secret of Human Thought Revealed About the book: In How to Create a Mind, The Secret of Human Thought Revealed, the bold futurist and author of The New York Times...

Introduction to the class and overview of topics

MIT 8.591J Systems Biology, Fall 2014 View the complete course: Instructor: Jeff Gore In this lecture, Prof. Jeff Gore introduces the topics of the course, which..

Learning from Untrusted Data

A Google Algorithms Seminar, 4/14/17, presented by Greg Valiant, Stanford University Talks from visiting speakers on Algorithms, Theory, and Optimization.

Developing Bug-Free Machine Learning Systems Using Formal Mathematics

Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation...