# AI News, Machine Learning FAQ

- On Thursday, June 7, 2018

## Machine Learning FAQ

However, there is a popular saying: “if all you have is a hammer, everything starts to look like a nail.” The math behind neural nets is probably a bit harder to understand, but I don’t think they are really black boxes.

However, I think people in the biosciences prefer “interpretable” results, e.g., decision trees where they can follow the “reasoning” step by step.

Of course, the primary goal is often to find a good agonist or antagonist (inhibitor or drug) in a million-compound database to solve a particular problem.

- On Thursday, June 7, 2018

## A Kaggle Master Explains Gradient Boosting

We want to predict a person’s age based on whether they play video games, enjoy gardening, and like wearing hats.

LikesHats is probably just random noise. We can do a quick and dirty inspection of the data to check these assumptions. Now let’s model the data with a regression tree.
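
A quick-and-dirty inspection like the one mentioned might look as follows. The toy table below is hypothetical (the post’s actual data is not reproduced here); the idea is just to compare mean ages across each boolean feature, where a large gap suggests an informative feature and a near-zero gap suggests noise.

```python
# Hypothetical toy data: (age, plays_video_games, likes_gardening, likes_hats)
people = [
    (13, True,  False, True),
    (14, True,  False, False),
    (15, True,  False, True),
    (25, True,  True,  False),
    (35, False, True,  True),
    (49, False, True,  False),
    (68, False, True,  True),
    (71, False, True,  False),
    (73, False, True,  True),
]

def mean_age(rows, feature_index, value):
    # Average age of the rows where the given feature equals `value`.
    ages = [r[0] for r in rows if r[feature_index] == value]
    return sum(ages) / len(ages)

# Large gap => informative feature; near-zero gap => likely noise.
for i, name in ((1, "PlaysVideoGames"), (2, "LikesGardening"), (3, "LikesHats")):
    gap = mean_age(people, i, True) - mean_age(people, i, False)
    print(name, round(gap, 1))
```

On this made-up table, PlaysVideoGames and LikesGardening show large age gaps while LikesHats shows almost none, matching the intuition that LikesHats is noise.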

The reason is that this regression tree is able to consider LikesHats and PlaysVideoGames with respect to all the training samples, in contrast to our overfit regression tree, which considered each feature only inside a small region of the input space, thus allowing random noise to select LikesHats as a splitting feature.

Specifically, F_1(x) = F_0(x) + h_1(x), where F_0 is an initial model fit to y. Since we initialize the procedure by fitting F_0(x), our task at each step is to find the next weak learner h_m(x).

So in theory, a well-coded gradient boosting module would allow you to “plug in” various classes of weak learners at your disposal.

In practice, however, h_m is almost always a tree-based learner, so for now it’s fine to interpret h_m as a regression tree like the one in our example.

Respective squared error reductions would be 43 and 19, while respective absolute error reductions would be 1 and 1.

So a regression tree, which by default minimizes squared error, will focus heavily on reducing the residual of the first training sample.

But if we want to minimize absolute error, moving each prediction one unit closer to the target produces an equal reduction in the cost function.
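
To see where the numbers above come from: moving a prediction one unit closer to its target reduces squared error by r² − (r − 1)² = 2r − 1 and absolute error by exactly 1, so residuals of 22 and 10 reproduce the quoted reductions of 43 and 19. The residual values here are inferred for illustration, not taken from the article’s table.

```python
def reductions(r):
    # Effect of moving a prediction one unit closer to its target (r > 1):
    # squared error falls by r**2 - (r - 1)**2 = 2*r - 1, absolute error by 1.
    return r**2 - (r - 1)**2, abs(r) - abs(r - 1)

print(reductions(22))  # -> (43, 1)
print(reductions(10))  # -> (19, 1)
```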

With this in mind, suppose that instead of training h_m on the residuals of F_m, we instead train h_m on the gradient of the loss function with respect to the prediction values produced by F_m.

Essentially, we’ll train on the cost reduction for each sample if the predicted value were to become one unit closer to the observed value.

In the case of absolute error, h_m will simply consider the sign of every residual (as opposed to squared error, which would consider the magnitude of every residual).

After the samples are grouped into the leaves of h_m, an average gradient can be calculated and then scaled by some factor, γ, so that F_m + γ·h_m minimizes the loss function for the samples in each leaf.
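
As a concrete sketch of this loop, here is a tiny from-scratch gradient booster using one-split regression stumps as the weak learners and squared error as the loss, so the negative gradient is just the residual. The functions `fit_stump` and `gradient_boost` and all parameter values are illustrative inventions, not the article’s code, and γ is folded into a fixed learning rate for simplicity.

```python
import numpy as np

def fit_stump(x, g):
    # Find the single-split regression stump on 1-D feature x that best
    # fits targets g in a least-squares sense.
    best = (np.inf, None, g.mean(), g.mean())
    for t in np.unique(x)[:-1]:
        left, right = g[x <= t], g[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def predict_stump(stump, x):
    t, left_value, right_value = stump
    if t is None:
        return np.full(len(x), left_value)
    return np.where(x <= t, left_value, right_value)

def gradient_boost(x, y, n_rounds=100, lr=0.5):
    F = np.full(len(y), y.mean())      # F_0: just fit the mean
    for _ in range(n_rounds):
        g = y - F                      # negative gradient of squared error = residual
        stump = fit_stump(x, g)        # h_m trained on the pseudo-residuals
        F = F + lr * predict_stump(stump, x)
    return F                           # final predictions on the training set

x = np.arange(10.0)
y = (x > 4).astype(float) * 10.0       # a step function to recover
F = gradient_boost(x, y)
```

Each round fits a stump to what is left over and adds a damped copy of it to the ensemble, which is exactly the F_m + γ·h_m update described above.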

Notice, you can interpret this function as calculating the squared error for two data points, 15 and 25, given two prediction values (but with a multiplier to make the math work out nicely).

Although we can minimize this function directly, gradient descent will let us minimize more complicated loss functions that we can’t minimize directly.
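
To make the gradient descent step concrete, here is a minimal hand-rolled sketch, assuming the function being minimized is the two-point squared error just described, with the prediction values named u and v (the names and the 1/2 multiplier convention are my assumptions, not the article’s notation):

```python
# Squared error for two observations, 15 and 25, given predictions u and v;
# the 1/2 multiplier makes the gradient come out as (u - 15, v - 25).
def loss(u, v):
    return ((u - 15) ** 2 + (v - 25) ** 2) / 2

u, v = 0.0, 0.0       # arbitrary starting predictions
lr = 0.1              # step size
for _ in range(200):
    u -= lr * (u - 15)   # step along the negative gradient
    v -= lr * (v - 25)
# u and v converge toward the observed values 15 and 25
```

Here the minimum can be read off directly, but the identical update loop works whenever we can evaluate the gradient, which is the point being made about more complicated losses.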

(This is the part that gets butchered by a lot of gradient boosting explanations.) Let’s clean up the ideas above and reformulate our gradient boosting model once again.

In case you want to check your understanding so far, our current gradient boosting applied to our sample problem for both squared error and absolute error objectives yields the following results.

As this slow convergence occurs, samples that get closer to their target end up being grouped together into larger and larger leaves (due to fixed tree size parameters), resulting in a natural regularization effect.

XGBoost employs a number of tricks that make it faster and more accurate than traditional gradient boosting (particularly its use of second-order gradient information), so I encourage you to try it out and read Tianqi Chen’s paper about the algorithm.

- On Thursday, June 7, 2018

## A Tour of Machine Learning Algorithms

In this post, we take a tour of the most popular machine learning algorithms.

There are different ways an algorithm can model a problem based on its interaction with experience, the environment, or whatever we want to call the input data.

There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.

This taxonomy, or way of organizing machine learning algorithms, is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the approach most appropriate for your problem in order to get the best result.

Let’s take a look at three different learning styles in machine learning algorithms. In supervised learning, the input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a time.

A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labeled examples.

The most popular regression algorithms are:

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction.
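
A minimal sketch of that database-plus-similarity idea is a 1-nearest-neighbour classifier; the function names and toy data here are made up for illustration.

```python
# The "database" is simply the stored training examples; prediction finds
# the most similar stored example and returns its label.
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(database, query):
    # database: list of (feature_vector, label) pairs
    _, label = min(database, key=lambda item: euclidean(item[0], query))
    return label

db = [((0.0, 0.0), "red"), ((1.0, 1.0), "red"), ((5.0, 5.0), "blue")]
print(predict_1nn(db, (4.0, 4.5)))  # closest stored example is "blue"
```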

The most popular instance-based algorithms are:

Regularization is an extension made to another method (typically regression methods) that penalizes models based on their complexity, favoring simpler models that are also better at generalizing.
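
As a small illustration of such a complexity penalty, here is a ridge (L2-penalized) regression sketch in NumPy; the toy data and the alpha values are made up, and this is one example of a regularizer rather than a survey of them.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # Least squares plus an L2 penalty alpha * ||w||^2, solved in closed
    # form: w = (X^T X + alpha I)^{-1} X^T y. Larger alpha shrinks w.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w_ols = ridge_fit(X, y, alpha=0.0)    # plain least squares: w = (1, 2)
w_reg = ridge_fit(X, y, alpha=10.0)   # same data, heavily shrunk coefficients
```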

The most popular regularization algorithms are:

Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Clustering methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality.

The most popular clustering algorithms are:

Association rule learning methods extract rules that best explain observed relationships between variables in data.

The most popular association rule learning algorithms are:

Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but they are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

The most popular artificial neural network algorithms are:

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks and, as commented on above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labeled data.

The most popular deep learning algorithms are:

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe the data using less information.
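
One common way to realize this is principal component analysis; here is a short NumPy sketch (the synthetic nearly one-dimensional data is invented for the demonstration).

```python
import numpy as np

def pca(X, k):
    # Project centered data onto the top-k right singular vectors,
    # summarizing it with k dimensions instead of the original ones.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=100)])  # nearly 1-D data
Z = pca(X, 1)  # a single component captures almost all of the variance
```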

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

- On Thursday, June 7, 2018

## What are the advantages of different classification algorithms?

The generally accepted high-performance methods are SVMs, Random Forests, and Boosted Trees.

An exception would be SVM-based Factorization Machines (see below). Many modern methods can be formulated as regularization problems: a convex optimization with some loss function.

SVMs can even be applied in situations where the labels are only partially or weakly known but we have additional information about the global statistics (see Convex Relaxations of Transductive Learning). This works particularly well for text classification when choosing an extended basis set, such as word2vec or GloVe embeddings.

FMs would be useful for picking up very weak correlations between sparse, discrete features. Regularization: most modern methods include some basic regularizers, like L1, L2, or some combination of the two (elastic net). Why would one choose, say, an L1 SVM over an L2 SVM?

An L2 SVM can capture models that require many tiny features. Numerical performance: SVMs, like many convex methods, have been optimized over the past 15 years and these days scale very well. I routinely use linear SVMs to solve classification problems with tens of millions of instances and, say, half a million features.
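
To illustrate why linear SVMs scale so well, here is a rough Pegasos-style sketch of stochastic subgradient descent on the L2-regularized hinge loss. It is a toy sketch with made-up data, not the optimized production solvers the author is referring to, and every parameter value is an assumption.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    # Stochastic subgradient descent on lam/2 * ||w||^2 + mean hinge loss.
    # Each update touches one example, which is what makes it scale.
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)             # decaying step size
            violated = y[i] * (X[i] @ w) < 1  # inside the margin?
            w *= 1 - eta * lam                # shrink (regularizer step)
            if violated:
                w += eta * y[i] * X[i]        # hinge subgradient step
    return w

X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = train_linear_svm(X, y)
```

Because each step costs O(number of non-zeros in one example), schemes like this extend naturally to the parallel and online settings mentioned below.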

I am a bit puzzled why others would claim these are hard to train (unless they are trying to use a non-linear/RBF kernel and have a poor implementation). In my experience using them in production for over 15 years, they are very easy to train at very large scale for most commercial applications.

For more than, say, 10M instances and 1M features, the training can be done in parallel using lock-free stochastic coordinate descent, or in online mode.
