A Complete Machine Learning Project Walk-Through in Python: Part One

Here we are using the seaborn visualization library and the PairGrid function to create a Pairs Plot with scatterplots on the upper triangle, histograms on the diagonal, and 2D kernel density plots and correlation coefficients on the lower triangle.
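A minimal sketch of such a plot, assuming a pandas DataFrame of numeric columns named plot_data (the variable name and the annotation placement are my own, not taken from the article):

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

def corr_func(x, y, **kwargs):
    """Annotate a panel with the Pearson correlation coefficient."""
    r = np.corrcoef(x, y)[0][1]
    ax = plt.gca()
    ax.annotate(f"r = {r:.2f}", xy=(0.2, 0.8), xycoords=ax.transAxes, size=14)

# plot_data: a DataFrame containing only numeric columns (assumed here)
grid = sns.PairGrid(data=plot_data)
grid.map_upper(plt.scatter, alpha=0.6)      # scatterplots on the upper triangle
grid.map_diag(plt.hist, edgecolor="black")  # histograms on the diagonal
grid.map_lower(sns.kdeplot)                 # 2D kernel density plots on the lower triangle
grid.map_lower(corr_func)                   # correlation coefficient annotations
plt.show()
```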

A machine learning model can only learn from the data we provide it, so ensuring that the data includes all the relevant information for our task is crucial.

Taking the square root, natural log, or various powers of features is common practice in data science and can be guided by domain knowledge or by what works best empirically.

The following code selects the numeric features, takes log transformations of these features, selects the two categorical features, one-hot encodes these features, and joins the two sets together.
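A hedged reconstruction of that step (the DataFrame data, the target column 'score', and the two categorical column names are placeholders of mine and may not match the article's exact code):

```python
import numpy as np
import pandas as pd

# data: the cleaned DataFrame; 'score' stands in for the target column.
numeric_subset = data.select_dtypes('number').copy()

# Add a natural-log version of each numeric feature, skipping the target
# (assumes strictly positive values; zeros would produce -inf).
for col in numeric_subset.columns:
    if col == 'score':
        continue
    numeric_subset['log_' + col] = np.log(numeric_subset[col])

# One-hot encode the two categorical features ('cat_one', 'cat_two' are placeholders).
categorical_subset = pd.get_dummies(data[['cat_one', 'cat_two']])

# Join the two sets back together, aligning on the row index.
features = pd.concat([numeric_subset, categorical_subset], axis=1)
```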

Features that are strongly correlated with each other are known as collinear, and removing one variable from such a pair can often help a machine learning model generalize and be more interpretable.

(I should point out that we are talking about correlations of features with other features, not correlations with the target, which help our model!) There are a number of methods to calculate collinearity between features, one of the most common being the variance inflation factor.
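As one concrete option, here is a sketch of computing the variance inflation factor with statsmodels (the helper name and the 5-10 rule of thumb are mine; the article may use a different criterion):

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def compute_vif(X: pd.DataFrame) -> pd.Series:
    """Return the VIF of each column of a numeric DataFrame with no NaNs."""
    vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    return pd.Series(vifs, index=X.columns).sort_values(ascending=False)

# Features with a VIF above roughly 5-10 are commonly treated as collinear
# candidates for removal.
```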

For regression problems, a reasonable naive baseline is to guess the median value of the target on the training set for all the examples in the test set.

Before calculating the baseline, we need to split our data into a training set and a testing set; we will use 70% of the data for training and 30% for testing. With that split in place, we can calculate the naive baseline performance: the naive estimate is off by about 25 points on the test set.
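A sketch of both steps, assuming the features and targets built earlier and using mean absolute error as the metric (the metric choice, variable names, and random seed are my assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# 70% of the data for training, 30% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    features, targets, test_size=0.3, random_state=42)

# Naive baseline: predict the training-set median for every test example.
baseline_guess = np.median(y_train)
baseline_mae = mean_absolute_error(y_test, np.full(len(y_test), baseline_guess))
print(f"Baseline guess: {baseline_guess:0.2f}")
print(f"Baseline MAE on the test set: {baseline_mae:0.2f}")
```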

The second post (available here) will show how to evaluate machine learning models using Scikit-Learn, select the best model, and perform hyperparameter tuning to optimize the model.

Interactions among explanatory variables in R

Learn more about statistical modeling at ... In thinking about effect size, keep in mind that there ...

Reducing High Dimensional Data with PCA and prcomp: ML with R

This has been re-designed as 'Reducing High Dimensional Data in R' on Udemy.com, $19 COUPON!

R Tutorial 13: Variable Selection and Best Subsets Selection (regsubsets)

This video is going to show how to perform variable selection and best subsets selection using regsubsets() in R. Measures include R-squared, Adjusted ...

Getting Started with Orange 06: Making Predictions

Making predictions with a classification tree and logistic regression. Train data set: ... Test data set: ...

Introduction to experiment design

Introduction to experiment design. Explanatory and response variables. Control and treatment groups.

Regression Features and Labels - Practical Machine Learning Tutorial with Python p.3

We'll be using the numpy module to convert data to numpy arrays, which is what Scikit-learn wants. We will talk more about preprocessing and cross_validation ...
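The gist of that conversion, with made-up column names (a sketch of the idea, not the tutorial's exact code):

```python
import numpy as np
import pandas as pd

# A toy DataFrame standing in for the tutorial's data.
df = pd.DataFrame({"feature_a": [1.0, 2.0, 3.0],
                   "feature_b": [0.1, 0.2, 0.3],
                   "label":     [1.1, 2.2, 3.3]})

# Scikit-learn estimators expect array-like inputs, so the features and
# labels are pulled out of the DataFrame as numpy arrays.
X = np.array(df.drop(columns=["label"]))
y = np.array(df["label"])
```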

4. Variable Selection and Creating Data Partitions

How to select which variables to include in models. How to partition data into training and validation partitions.

R Stats: Multiple Regression - Variable Selection

This video gives a quick overview of constructing a multiple regression model using R to estimate vehicles price based on their characteristics. The video ...

Basic data plotting in MATLAB

This screencast covers how to use the PLOT command to make plots of data. Basically it's the same procedure as using PLOT to make graphs of functions.

The DyND Library; SciPy 2013 Presentation

Author: Mark Wiebe, Continuum Analytics. Track: General. The DyND library is a component of Blaze, providing an in-memory data structure which is dynamic, ...