
Big data, small data, and the role of logic in machine learning

It would be easy to believe that all the interesting machine learning problems involve big data.

For instance, consider a spreadsheet in which the first column contains email addresses of the form jane.doe@example.com. Now suppose you want to learn how to populate a second column using a single training example of the form (input, output), such as (jane.doe@example.com, Jane Doe). It is clear that to populate the second column we copy everything in the first column up to the @ symbol, uppercase the initial letter and the letter after the period, and replace the period with a space.
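To make the task concrete, here is a minimal sketch of the transformation in SWI-Prolog. The predicate email_to_name and its helper capitalise are hypothetical names chosen for illustration; they are not from any of the systems discussed below.

    % email_to_name(+Email, -Name): copy the local part of the address,
    % split it on the period, and capitalise each part.
    email_to_name(Email, Name) :-
        sub_atom(Email, Before, _, _, '@'),     % position of the @ symbol
        sub_atom(Email, 0, Before, _, Local),   % local part, e.g. jane.doe
        atomic_list_concat(Parts, '.', Local),  % split on the period
        maplist(capitalise, Parts, Capitalised),
        atomic_list_concat(Capitalised, ' ', Name).

    % capitalise(+Word, -Capitalised): uppercase the first letter.
    capitalise(Word, Capitalised) :-
        atom_chars(Word, [First|Rest]),
        upcase_atom(First, Upper),
        atom_chars(Capitalised, [Upper|Rest]).

For example, the query email_to_name('jane.doe@example.com', Name) binds Name to 'Jane Doe'. The question is whether a machine can learn such a program from a single training example.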

However, most standard machine learning algorithms, such as artificial neural networks and support vector machines, cannot learn from so few (~1) training examples and typically require many (>10k).

This is where logic programming can help. The field of logic programming is too big to explain fully, but we can illustrate the paradigm with an example using Prolog, the most popular logic programming language, developed by Alain Colmerauer's group in Marseille, France, in the early 1970s.
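Consider a small Prolog program describing part of the Simpsons family tree. This is a minimal sketch, assuming parent/2 and female/2 facts:

    % Facts: parent(X, Y) means that X is a parent of Y.
    parent(mona, homer).
    parent(abe, homer).
    parent(homer, bart).
    parent(marge, bart).

    % Facts: who is female.
    female(mona).
    female(marge).

    % Rule: X is a grandmother of Y if X is female and X is a parent
    % of some Z who is in turn a parent of Y.
    grandmother(X, Y) :-
        female(X),
        parent(X, Z),
        parent(Z, Y).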

Given such a program, we can ask whether marge is the parent of bart, whether bart is the parent of marge, or who is the grandmother of bart, as in the queries below. Prolog, and logic programming in general, is based on logical deduction.
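Run against the sketch program above, the three queries and their answers look like this (output as in SWI-Prolog):

    ?- parent(marge, bart).   % is marge a parent of bart?
    true.

    ?- parent(bart, marge).   % is bart a parent of marge?
    false.

    ?- grandmother(X, bart).  % who is a grandmother of bart?
    X = mona.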

In a logical deduction we are given premises (above the line), from which we derive a conclusion (below the line). We can derive conclusions using a rule of inference named modus ponens, which is stated as follows:

    P    P → Q
    ──────────
        Q

That is, from P and P → Q we may conclude Q. In the last Simpsons example, logical deduction is used to infer that mona is the grandmother of bart (i.e. the conclusion) from the parent and female facts together with the grandmother rule (i.e. the premises).

This brings us to inductive logic programming (ILP), a form of machine learning based on logic programming. This basically means that the inputs and outputs of an ILP system are logic programs, in contrast to most other forms of machine learning, where the inputs and outputs are typically vectors of real numbers.

By using logic programming as a uniform representation, ILP has three main advantages over most textbook machine learning approaches: (1) expressibility, (2) the ability to include background knowledge in learning, and (3) human-readable theories.

Consider learning the grandparent kinship relation given four positive training examples, denoted as E+, such as those sketched below. To learn this concept, ILP systems can include additional information, known as background knowledge, denoted as B, in the learning task.
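Sticking with the Simpsons, a plausible instance of this learning task (the particular facts here are assumed for illustration) is:

    % E+: four positive examples of the target grandparent relation.
    grandparent(mona, bart).
    grandparent(mona, lisa).
    grandparent(abe, bart).
    grandparent(abe, lisa).

    % B: background knowledge in the form of parent/2 facts.
    parent(mona, homer).
    parent(abe, homer).
    parent(homer, bart).
    parent(homer, lisa).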

Given the training examples E formed of positive examples E+ and negative examples E- and background knowledge B, the goal of an ILP system is to find (induce) a hypothesis H (a logic program) that explains all of the positive examples and none of the negative examples.
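Stated more formally, using ⊨ for logical entailment, the goal is to find a hypothesis H such that:

    B ∧ H ⊨ e for every e ∈ E+
    B ∧ H ⊭ e for every e ∈ E-

This is the standard ILP setting: together with the background knowledge, the hypothesis must entail every positive example and no negative example.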

Returning to ILP and the Simpsons kinship example, given the examples and the background knowledge, we would expect an ILP learner to find a hypothesis similar to the one below, which says that X is the grandparent of Y if X is the parent of some Z and Z is the parent of Y.
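In Prolog, the hypothesis described by that sentence is a single clause:

    % X is a grandparent of Y if X is a parent of some Z
    % and Z is a parent of Y.
    grandparent(X, Y) :-
        parent(X, Z),
        parent(Z, Y).

Together with the parent/2 facts in B, this one clause entails all four positive examples.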

A key advantage of ILP is the ability to learn from very few examples. For instance, Lin et al. [1] demonstrated an ILP system which can learn string transformation functions from a single training example, similar to the problem given in the introduction.

This work was particularly interesting in that it performed a procedure named dependent learning, in which solutions to simple problems were reused to solve more difficult problems, forming a hierarchy of learned logic programs.

ILP systems can also learn recursive programs, and can even be biased towards efficient ones. For instance, our recent IJCAI paper [3] (IJCAI being the granddaddy of AI conferences) demonstrated an ILP system which was able to learn quicksort given only 5 training examples and, importantly, to prefer the quicksort hypothesis over the less efficient bubble sort hypothesis.
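To give a flavour of what such a hypothesis looks like, here is a textbook quicksort in Prolog. This is an illustration of the target concept, not the exact program learned by the system in [3]; the helper name split_pivot is chosen to avoid clashing with the library predicate partition/4.

    % quicksort(+List, -Sorted): sort a list of numbers.
    quicksort([], []).
    quicksort([Pivot|Rest], Sorted) :-
        split_pivot(Pivot, Rest, Smaller, Bigger),
        quicksort(Smaller, SortedSmaller),
        quicksort(Bigger, SortedBigger),
        append(SortedSmaller, [Pivot|SortedBigger], Sorted).

    % split_pivot(+Pivot, +List, -Smaller, -Bigger): partition List into
    % elements at most Pivot and elements greater than Pivot.
    split_pivot(_, [], [], []).
    split_pivot(Pivot, [X|Xs], [X|Smaller], Bigger) :-
        X =< Pivot,
        split_pivot(Pivot, Xs, Smaller, Bigger).
    split_pivot(Pivot, [X|Xs], Smaller, [X|Bigger]) :-
        X > Pivot,
        split_pivot(Pivot, Xs, Smaller, Bigger).

For example, the query quicksort([4,2,5,1,3], S) binds S to [1,2,3,4,5].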

The approach in [3] is based on a Prolog meta-interpreter. In logic programming terms, a vanilla Prolog meta-interpreter attempts to prove a goal by repeatedly fetching first-order clauses whose heads unify with the given goal. It basically takes a goal as input and returns true if and only if: (1) the goal is the atom true; (2) the goal is a conjunction and both conjuncts can be proved; or (3) there is a clause whose head unifies with the goal and whose body can in turn be proved.
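The vanilla meta-interpreter itself is only three clauses:

    % prove(+Goal): succeed iff Goal is provable from the clauses
    % in the program database.
    prove(true).                 % the empty goal is trivially proved
    prove((A, B)) :-             % a conjunction is proved by proving
        prove(A),                % both conjuncts
        prove(B).
    prove(Goal) :-               % otherwise, fetch a clause whose head
        clause(Goal, Body),      % unifies with the goal and prove its body
        prove(Body).

For instance, with the Simpsons program loaded, prove(grandmother(X, bart)) succeeds with X = mona, exactly as the plain query did. (Depending on your Prolog, the program being interpreted may need to be declared dynamic for clause/2 to access it; SWI-Prolog allows it by default.)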
