# AI News, Have You Tried Using a 'Nearest Neighbor Search'?

- On Wednesday, August 15, 2018

## Have You Tried Using a 'Nearest Neighbor Search'?

Roughly a year and a half ago, I had the privilege of taking a graduate 'Introduction to Machine Learning' course under the tutelage of the fantastic Professor Leslie Kaelbling.

While I learned a great deal over the course of the semester, there was one minor point that she made to the class which stuck with me more than I expected it to at the time: before using a really fancy or sophisticated or 'in-vogue' machine learning algorithm to solve your problem, try a simple Nearest Neighbor Search first.
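For the curious, here is a minimal sketch of what that baseline looks like: a one-nearest-neighbor classifier needs nothing more than a distance computation and an argmin. The NumPy code and toy data below are illustrative, not anything from the course.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, X_query):
    """Label each query point with the label of its closest training point."""
    predictions = []
    for q in X_query:
        # Euclidean distance from the query to every training point.
        distances = np.linalg.norm(X_train - q, axis=1)
        predictions.append(y_train[np.argmin(distances)])
    return np.array(predictions)

# Toy data: two well-separated blobs.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(nearest_neighbor_predict(X_train, y_train, np.array([[0.1, 0.2], [4.9, 5.2]])))
# -> [0 1]
```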

In addition, if you don't have very many points in your initial data set, the performance of this approach is questionable. (It's worth noting that having few data points in one's training set is already enough to give most machine learning researchers pause.)

Machine learning is, in many ways, a science of comparison. There are often theoretical reasons to prefer one technique over another, but sometimes effectiveness or popularity is reason enough on its own, as has become the case for neural networks.

However, there's certainly a more general lesson to be learned here: in the midst of an age characterized by algorithms of famously 'unreasonable effectiveness', it's important to remember that simpler techniques are still powerful enough to solve many problems.

My job was to detect a vehicle: not a type of vehicle, not a make or model of car, but a single, specific vehicle that we have in our garage.

My first instinct was to pull up the cutting-edge work in real-time object detection (which I did), get it to work on my own machine (which I also did), and train it with a massive number of images of our particular vehicle (which I was unable to do). Collecting thousands of images on one's own is difficult enough without having to vary backgrounds and lighting conditions to build an effective training set.

The takeaway here is that though the simpler algorithms may not perform quite as well as the state-of-the-art, the gain in both time and computational complexity often outweighs the difficulties associated with more sophisticated solutions.

- On Wednesday, August 15, 2018

## Cluster analysis

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).

It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.

Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions.

The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results.

Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek βότρυς 'grape') and typological analysis.

The subtle differences are often in the use of the results: while in data mining, the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest.

At different distances, different clusters will form, which can be represented using a dendrogram, which explains where the common name 'hierarchical clustering' comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances.

Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance) to use.

Popular choices are known as single-linkage clustering (the minimum of object distances), complete linkage clustering (the maximum of object distances) or UPGMA ('Unweighted Pair Group Method with Arithmetic Mean', also known as average linkage clustering).

Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).

They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as 'chaining phenomenon', in particular with single-linkage clustering).
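To make the linkage choices above concrete, here is a small sketch using SciPy's hierarchical clustering routines; the data and the cut into three clusters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Three illustrative groups of 2-D points.
X = np.vstack([rng.normal(loc, 0.3, size=(20, 2)) for loc in ([0, 0], [4, 0], [0, 4])])

# Agglomerative clustering under different linkage criteria.
for method in ("single", "complete", "average"):    # "average" corresponds to UPGMA
    Z = linkage(X, method=method)                    # the merge hierarchy (dendrogram data)
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut the hierarchy into 3 clusters
    print(method, np.bincount(labels)[1:])           # cluster sizes
```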

When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized.

Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means).
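A short sketch of this formulation with scikit-learn, using the k-means++ initialization and the best-of-several-runs behaviour mentioned above; the data are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in ([0, 0], [5, 0], [0, 5])])

# k-means++: smarter initial centers; n_init=10: keep the best of ten runs.
km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # the k cluster centers
print(km.inertia_)           # sum of squared distances to the nearest center
```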

Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to better fit the data set.

A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range.

Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run, therefore there is no need to run it multiple times.
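A sketch of DBSCAN with scikit-learn; the eps and min_samples values here are illustrative and would need tuning on real data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
X = np.vstack([
    rng.normal([0, 0], 0.2, size=(100, 2)),   # dense cluster
    rng.normal([3, 3], 0.2, size=(100, 2)),   # dense cluster
    rng.uniform(-2, 5, size=(10, 2)),         # sparse noise
])

db = DBSCAN(eps=0.3, min_samples=5).fit(X)
print(set(db.labels_))   # cluster ids; -1 marks points treated as noise
```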

On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously.

Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.[13]

With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing.

This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting 'clusters' are merely a rough pre-partitioning of the data set to then analyze the partitions with existing slower methods such as k-means clustering.

For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces.

This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated ('correlated') subspace clusters that can be modeled by giving a correlation of their attributes.[19]

Message passing algorithms, a recent development in computer science and statistical physics, have also led to the creation of new types of clustering algorithms.[30]

Popular approaches involve 'internal' evaluation, where the clustering is summarized to a single quality score, 'external' evaluation, where the clustering is compared to an existing 'ground truth' classification, 'manual' evaluation by a human expert, and 'indirect' evaluation by evaluating the utility of the clustering in its intended application.[32]

One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.[34]

Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this shall not imply that one algorithm produces more valid results than another.[4]

More than a dozen internal evaluation measures exist, usually based on the intuition that items in the same cluster should be more similar to each other than items in different clusters.[35]:115–121 Measures such as the silhouette coefficient and the Davies–Bouldin index, for example, can be used to assess the quality of a clustering based on an internal criterion.
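As a sketch of how such internal measures are used in practice (with scikit-learn and made-up data), one can score the same data under different numbers of clusters and compare:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.4, size=(60, 2)) for c in ([0, 0], [4, 4])])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Higher silhouette is better; lower Davies-Bouldin is better.
    print(k, silhouette_score(X, labels), davies_bouldin_score(X, labels))
```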

However, it has recently been discussed whether this is adequate for real data, or only on synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters or the classes may contain anomalies.[37]

In the special scenario of constrained clustering, where meta information (such as class labels) is used already in the clustering process, the hold-out of information for evaluation purposes is non-trivial.[38]

In place of counting the number of times a class was correctly assigned to a single data point (known as true positives), such pair counting metrics assess whether each pair of data points that is truly in the same cluster is predicted to be in the same cluster.[31]
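The Rand index and its adjusted variant are standard examples of such pair-counting metrics; a small sketch with scikit-learn, where the 'true' label vector is assumed to be known:

```python
from sklearn.metrics import rand_score, adjusted_rand_score

true_labels    = [0, 0, 0, 1, 1, 1]
cluster_labels = [1, 1, 0, 0, 0, 0]   # cluster ids need not match class ids

# Both scores only ask whether pairs of points are grouped together consistently.
print(rand_score(true_labels, cluster_labels))
print(adjusted_rand_score(true_labels, cluster_labels))
```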

- On Wednesday, August 15, 2018

## Support vector machine

In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis.

Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting).

An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible.

In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
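A brief sketch with scikit-learn's SVC, using an RBF kernel on made-up data with a circular (non-linear) decision boundary; the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)   # circular class boundary

# The RBF kernel implicitly maps points into a high-dimensional feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.score(X, y))                 # training accuracy
print(clf.support_vectors_.shape)      # only the support vectors define the model
```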

When data is unlabelled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data into groups and then map new data to these formed groups.

The support-vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications.

If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier.

More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks like outlier detection.[3]

Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier.[4]

Whereas the original problem may be stated in a finite dimensional space, it often happens that the sets to discriminate are not linearly separable in that space.

To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors can be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function k(x, y) selected to suit the problem.

The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant. The vectors defining the hyperplanes can be chosen as linear combinations, with parameters α_i, of images of the feature vectors x_i that occur in the data set; the points x in the feature space that are mapped into the hyperplane then satisfy

∑_i α_i k(x_i, x) = constant.

Note that if k(x, y) becomes small as y grows farther away from x, each term in the sum measures the degree of closeness of the test point x to the corresponding data point x_i.

In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated.

We are given a training data set of n points of the form (x_1, y_1), …, (x_n, y_n), where each y_i is either 1 or −1, indicating the class to which the point x_i belongs, and each x_i is a real vector. We want to find the maximum-margin hyperplane that divides the points with y_i = 1 from those with y_i = −1, i.e. the hyperplane whose distance to the nearest point from either group is as large as possible. Any hyperplane can be written as the set of points x satisfying

w · x − b = 0,

where w is the (not necessarily normalized) normal vector to the hyperplane and the parameter b/‖w‖ determines the offset of the hyperplane from the origin along w.

If the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible.

These hyperplanes can be described by the equations w · x − b = 1 and w · x − b = −1. Geometrically, the distance between them is 2/‖w‖, so maximizing the margin amounts to minimizing ‖w‖. To keep every data point on the correct side of the margin, we add the constraints w · x_i − b ≥ 1 for y_i = 1 and w · x_i − b ≤ −1 for y_i = −1, which can be written compactly as

y_i (w · x_i − b) ≥ 1   for all 1 ≤ i ≤ n.

To extend SVMs to data that are not linearly separable, the hinge loss max(0, 1 − y_i (w · x_i − b)) is used: it is zero when the constraint above is satisfied and grows with the distance from the margin otherwise. The soft-margin SVM then minimizes

(1/n) ∑_i max(0, 1 − y_i (w · x_i − b)) + λ‖w‖²,

where λ controls the trade-off between a large margin and correctly classified points.

For sufficiently small values of λ, the second term in the loss function will become negligible; hence it will behave similarly to the hard-margin SVM if the input data are linearly classifiable, but will still learn whether a classification rule is viable or not.

Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick (originally proposed by Aizerman et al.[14]) to maximum-margin hyperplanes.[12]

It is noteworthy that working in a higher-dimensional feature space increases the generalization error of support vector machines, although given enough samples the algorithm still performs well.[15]

The kernel is related to the transform φ by the equation

k(x_i, x_j) = φ(x_i) · φ(x_j).
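This identity can be checked numerically for a simple kernel; the sketch below uses the homogeneous degree-2 polynomial kernel, whose explicit feature map in two dimensions is known in closed form (the test vectors are made up).

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial kernel in 2-D."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def k(x, y):
    """Kernel computed in the original space: k(x, y) = (x . y)^2."""
    return np.dot(x, y) ** 2

x_i = np.array([1.0, 2.0])
x_j = np.array([3.0, -1.0])
print(k(x_i, x_j), np.dot(phi(x_i), phi(x_j)))   # both print 1.0
```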

The vector w also lives in the transformed space, with w = ∑_i α_i y_i φ(x_i), and dot products with w for classification can again be computed by the kernel trick:

w · φ(x) = ∑_i α_i y_i k(x_i, x).

For each i one can introduce a slack variable ζ_i = max(0, 1 − y_i (w · x_i − b)), the smallest nonnegative number satisfying y_i (w · x_i − b) ≥ 1 − ζ_i; it is zero exactly when x_i lies on the correct side of the margin. The offset b can be recovered by finding an x_i on the margin's boundary and solving y_i (w · x_i − b) = 1, i.e. b = w · x_i − y_i (note that 1/y_i = y_i since y_i = ±1).

Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points φ(x_i).

Moreover, we are given a kernel function k which satisfies k(x_i, x_j) = φ(x_i) · φ(x_j), so the classification vector w in the transformed space, and with it the decision rule sgn(w · φ(x) − b), can be evaluated entirely in terms of kernel values between a new point x and the training points x_i, without ever computing φ explicitly.

Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent.

Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high.

Sub-gradient descent algorithms for the SVM work directly with the regularized hinge-loss objective

f(w, b) = (1/n) ∑_i max(0, 1 − y_i (w · x_i − b)) + λ‖w‖²,

which is a convex (but not differentiable) function of w and b.

As such, traditional gradient descent (or SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's sub-gradient.
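A minimal sketch of such a sub-gradient scheme for the regularized hinge-loss objective above; this is a toy NumPy implementation with an assumed step-size schedule and made-up data, not a production solver.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100):
    """Sub-gradient descent on (1/n) sum max(0, 1 - y_i(w.x_i - b)) + lam*||w||^2."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(1, epochs + 1):
        eta = 1.0 / (lam * t)                      # decreasing step size
        margins = y * (X @ w - b)
        active = margins < 1                       # points contributing to the hinge loss
        # A valid sub-gradient of the objective at (w, b):
        grad_w = 2 * lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
        grad_b = y[active].sum() / n
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b

X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w - b))   # should recover the training labels
```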

The soft-margin support vector machine described above is an example of an empirical risk minimization (ERM) algorithm for the hinge loss.

Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss.

Under certain assumptions about the sequence of training examples (for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk as the number of samples grows.


In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is the hinge loss

The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,

The difference between the hinge loss and these other loss functions is best stated in terms of target functions: the function that minimizes expected risk for a given pair of random variables x and y. For the hinge loss, the target function is the Bayes-optimal classifier, predicting whichever label is more probable given x; for the logistic loss, it is the logit function f(x) = ln(p_x / (1 − p_x)), where p_x denotes the probability that y = 1 given x.

Thus, in a sufficiently rich hypothesis space, or equivalently, for an appropriately chosen kernel, the SVM classifier will converge to the simplest function (in terms of the regularization penalty) that correctly labels the data.

This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.[18]

The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[20]
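A sketch of that procedure with scikit-learn's GridSearchCV, which by default refits the best parameters on the whole training set; the parameter grid and synthetic data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)   # 5-fold cross-validation
search.fit(X_train, y_train)          # refits the best model on all of X_train

print(search.best_params_)
print(search.score(X_test, y_test))   # evaluate the final model on held-out data
```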

Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements.

Crammer and Singer proposed a multiclass SVM method which casts the multiclass classification problem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[25]
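One readily available implementation of the Crammer and Singer formulation is scikit-learn's LinearSVC with multi_class="crammer_singer"; a small sketch on a standard three-class data set:

```python
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)   # three classes

# 'crammer_singer' solves one joint optimization problem instead of
# decomposing the task into several binary (one-vs-rest) problems.
clf = LinearSVC(multi_class="crammer_singer", max_iter=10000).fit(X, y)
print(clf.score(X, y))
```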

Transductive support vector machines extend SVMs in that they can also treat partially labeled data in semi-supervised learning by following the principles of transduction.

Formally, a transductive SVM optimizes over both the weight vector w and the vector of labels y* assigned to the unlabeled test examples.

The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin.

Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction.

This extended view allows for the application of Bayesian techniques to SVMs, such as flexible feature modeling, automatic hyperparameter tuning, and predictive uncertainty quantification.

There exist several specialized algorithms for quickly solving the QP problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more-manageable chunks.

To avoid solving a linear system involving the large kernel matrix, a low rank approximation to the matrix is often used in the kernel trick.

Another common method is Platt's sequential minimal optimization (SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage.

The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin, logistic regression; this class of algorithms includes sub-gradient descent and coordinate descent.

Each convergence iteration takes time linear in the time taken to read the training data, and the iterations also have a Q-linear convergence property, making the algorithm extremely fast.

- On Wednesday, September 18, 2019

**What is machine learning and how to learn it?**

Machine learning means giving training data to a program so that it produces better results on complex problems. It is very close to data ..

**Disjoint Sets using union by rank and path compression Graph Algorithm**

Design a disjoint-set data structure that supports makeSet, union, and findSet operations, using union by rank and path compression for optimization.
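A compact sketch of such a structure (an illustrative Python implementation, not the video's own code):

```python
class DisjointSet:
    """Union-find with union by rank and path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])   # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:                    # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet()
for v in "abcd":
    ds.make_set(v)
ds.union("a", "b")
ds.union("c", "d")
print(ds.find_set("b") == ds.find_set("a"), ds.find_set("a") == ds.find_set("c"))  # True False
```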

**Sublinear-time Approximation Algorithms - Prof. Artur Czumaj**

Yandex School of Data Analysis Conference, Machine Learning: Prospects and Applications. We will survey some of ..

**11. Introduction to Machine Learning**

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. View the complete course: Instructor: Eric Grimson ..

**Deep Learning with Neural Networks and TensorFlow Introduction**

Welcome to a new section in our Machine Learning Tutorial series: Deep Learning with Neural Networks and TensorFlow. The artificial neural network is a ...

**Cross Validation**

Watch on Udacity: Check out the full Advanced Operating Systems course for free ..

**How to calculate linear regression using least square method**

An example of how to calculate a linear regression line using least squares. A step-by-step tutorial showing how to develop a linear regression equation. Use of ...
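The closed-form least-squares slope and intercept can be computed in a few lines; a sketch with made-up points:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.1, 5.9, 8.2, 9.8])

# Least-squares estimates: slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)   # roughly 1.95 and 0.17
```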

**Lecture 15 - Kernel Methods**

Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins. Lecture 15 of 18 of ...

**Lecture 01 - The Learning Problem**

The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem. Lecture 1 of 18 of Caltech's ...

**Pedro Domingos: "The Master Algorithm" | Talks at Google**

Machine learning is the automation of discovery, and it is responsible for making our smartphones work, helping Netflix suggest movies for us to watch, and ...