AI News, On Machine Learning
- On Sunday, June 3, 2018
On Machine Learning
This post outlines some of the things I have been thinking about regarding how to apply machine learning to a given problem, along with the process we adopted for the classification problem at CB Insights; writing it also gave me a good opportunity to step back and look at the overall approach.
My aim is not to focus on the algorithms, methods, or classifiers, but rather to offer a broader picture of how to approach a machine learning problem, and in the meantime give a couple of pieces of advice — and be warned that they may generalize better than your favorite classifier. (I will try not to overfit, but let me know in the comments if I do.) Most machine learning book chapters and articles focus on algorithms/classifiers and sometimes optimization methods.
From a theoretical perspective, they analyze the algorithms' bounds and sometimes the learning function itself, along with different types of optimization.
The datasets used in papers, though, sometimes happen to be trivial and do not necessarily reflect real-world, in-the-wild dataset characteristics.
There is a significant amount of knowledge and experience one has to gain (sometimes just by experimentation) to bridge the gap between these two separate (yet not independent) areas and build a pipeline.
If I make an analogy with software programming: we put in algorithms and data as input, and the output becomes the program we intended to write, yet without explicitly writing it.
If we have data and labels for that class, we can train a classifier on that data — building features and doing feature selection along the way — and then classify a new sample with that classifier.
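As a minimal sketch of that pipeline in plain Python — the tiny corpus and the "ma"/"hr" labels are invented for illustration, and the post does not prescribe any particular classifier — represent each document as a bag of words and score classes with Naive Bayes:

```python
# Minimal text classification sketch: bag-of-words features plus a
# Naive Bayes scorer with add-one smoothing. All data is invented.
from collections import Counter, defaultdict
import math

train = [
    ("acme corp acquires widget inc", "ma"),
    ("beta llc buys gamma ltd", "ma"),
    ("acme corp hires new chief executive", "hr"),
    ("widget inc appoints head of marketing", "hr"),
]

class_docs = defaultdict(list)
for text, label in train:
    class_docs[label].append(text.split())

vocab = {w for text, _ in train for w in text.split()}

def predict(text):
    """Return the label with the highest Naive Bayes log-score."""
    best_label, best_score = None, -math.inf
    for label, docs in class_docs.items():
        counts = Counter(w for doc in docs for w in doc)
        total = sum(counts.values())
        score = math.log(len(docs) / len(train))        # log prior
        for w in text.split():                          # add-one smoothing
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("delta co acquires epsilon co"))  # prints "ma"
```

Here "acquires" only ever appears in the "ma" training documents, so the smoothed likelihood pulls the new sample toward that class even though the other words are unseen.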
There will always be cases where you miss a couple of rules, or where some structures in text are hard to express as hard-coded rules (if one company joins another company, that article is most probably about a partnership rather than HR); it takes quite an amount of development effort and also requires deep domain expertise.
A learning-based approach can incorporate more data and benefit from it without extra effort, whereas with rules, every time you want to cover a new case you grow your code base further and further.
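The rule-based alternative discussed above can be sketched like this — the keywords and categories are invented examples; the point is that each edge case ("joins" can signal a partnership, not HR) means another hand-written rule:

```python
# A toy rule-based classifier: every new edge case grows the code base.
def classify_article(text):
    text = text.lower()
    if "acquires" in text or "merger" in text:
        return "m&a"
    if "joins forces" in text:          # exception: companies, not people
        return "partnership"
    if "hires" in text or "joins" in text:
        return "hr"
    return "other"

print(classify_article("Acme joins forces with Widget Inc"))  # partnership
print(classify_article("Jane Doe joins Acme as CTO"))         # hr
```

Note that the "joins forces" check must be ordered before the generic "joins" check; real rule systems accumulate many such ordering constraints, which is exactly the maintenance burden the text describes.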
In the computer vision domain, even the raw pixel values have been found not to be very good or discriminative, so computer vision researchers came up with higher-level representations for images.
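A toy illustration of that idea — the 3×4 "image" is invented — is to replace raw pixel values with a slightly higher-level representation such as horizontal gradients, which respond to edges rather than absolute intensity:

```python
# Raw pixels vs. a higher-level representation: horizontal gradients
# capture the vertical edge in this toy image, regardless of brightness.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
gradients = [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
             for row in image]
edge_strength = sum(sum(r) for r in gradients)
print(edge_strength)  # 765: one strong vertical edge, three rows
```

Classic descriptors like HOG or SIFT build on exactly this kind of gradient information rather than on the pixel values themselves.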
Rather than knowing each particular classifier's strengths and weaknesses, even knowing the broad categories of classifiers is useful for making a good decision about which one to choose.
For example, a search engine needs to take both precision and recall into account to evaluate its ranking, whereas a classifier in the medical domain may put more emphasis on Type I errors than Type II errors, or vice versa.
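To make the trade-off concrete: in a binary setting a Type I error is a false positive and a Type II error a false negative, and precision and recall penalize them differently. The label vectors below are invented for illustration:

```python
# Precision penalizes Type I errors (false positives); recall penalizes
# Type II errors (false negatives). Labels are made up for illustration.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(round(precision, 2), recall)  # 0.67 0.5
```

A medical screening test that must not miss disease cases would push recall up at the cost of precision; a spam filter that must not discard legitimate mail would do the opposite.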
(I will mention different cross-validation methods in the generalization section in a bit.) When we have some input (text, image, video; discrete, continuous, or categorical variables) from which we want to learn some structure or train a classifier, the first thing that needs to be done is to represent the input in a way that the classifier or the algorithm can use.
Beyond the raw representation (individual pixel values for images, words in text), you may also want to build your own features, which could be higher level, or at the same level but more useful for learning.
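One way to sketch that feature building in code — the feature names and the length-bucket heuristic are invented for illustration — is to start from raw tokens and layer on same-level features (bigrams) and a higher-level one (a document-length bucket):

```python
# Building features beyond the raw input: raw words, bigrams at the
# same level, and one coarser higher-level feature. Names are invented.
def featurize(text):
    words = text.lower().split()
    features = {f"word={w}": 1 for w in words}          # raw level
    for a, b in zip(words, words[1:]):
        features[f"bigram={a}_{b}"] = 1                 # same level
    features["length_bucket"] = "long" if len(words) > 10 else "short"
    return features

feats = featurize("Acme Corp acquires Widget Inc")
print(feats["bigram=corp_acquires"], feats["length_bucket"])  # 1 short
```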
Not only that: when you evaluate your classifiers, the ones that generalize well (perform well on the test dataset) often turn out to be the ones using a better document representation, rather than winning through differences between the methods themselves.
Suppose you tried a bunch of great classifiers on your training dataset, but the results are far from satisfying; on various performance measures, they may even be dismal.
If you have domain knowledge, then you are in better shape, as you can reason about what types of features would be more important and what needs to be done to improve classification accuracy.
If you do not know much about the domain, then you should probably spend some time on the misclassifications and try to figure out why the classifiers perform so poorly and what needs to be corrected in the representation.
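A simple way to spend that time on the misclassifications is to collect the examples the classifier got wrong, grouped by (true label, predicted label). The sample data and the stand-in `predict` below are invented for illustration:

```python
# Error analysis sketch: bucket misclassified examples by the
# (true, predicted) label pair to spot systematic representation gaps.
from collections import defaultdict

samples = [("acme hires cfo", "hr"), ("beta buys gamma", "ma"),
           ("delta joins epsilon", "partnership")]

def predict(text):                      # stand-in for a trained model
    return "ma" if "buys" in text else "hr"

errors = defaultdict(list)
for text, true_label in samples:
    pred = predict(text)
    if pred != true_label:
        errors[(true_label, pred)].append(text)

for (true_label, pred), texts in errors.items():
    print(f"{true_label} -> {pred}: {texts}")
```

Seeing a whole bucket like "partnership -> hr" makes it obvious the representation lacks the signal ("joins" between two companies) needed to separate those classes.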
We will deal with generalization in the evaluation by using cross-validation and by making sure we have a separate test set, rather than evaluating on the dataset we optimized the parameters for.
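The fold-splitting behind cross-validation can be sketched in a few lines — this hand-rolled `kfold_indices` helper is an illustration, not any particular library's API; after holding out a final test set, the remaining data rotates through train/validation folds so no parameter is tuned on data it is later evaluated on:

```python
# k-fold cross-validation sketch: each sample appears in exactly one
# validation fold, and the folds partition the data without overlap.
def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs over n samples for k folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, val
        start += size

folds = list(kfold_indices(10, 3))
print(len(folds), folds[0])  # 3 ([4, 5, 6, 7, 8, 9], [0, 1, 2, 3])
```

Averaging a model's score over the k validation folds gives a far less optimistic estimate than scoring on the data the parameters were fit to.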
I said noise-free training samples at the beginning of this section, but for some algorithms (especially the ones that tend to overfit), some amount of noise may actually improve classification accuracy, for the reasons I explained above.
- On Monday, June 17, 2019
Difference between Classification and Regression - Georgia Tech - Machine Learning
Watch on Udacity: Check out the full Advanced Operating Systems course for free ..
"Why Should I Trust you?" Explaining the Predictions of Any Classifier
Author: Marco Tulio Ribeiro, Department of Computer Science and Engineering, University of Washington Abstract: Despite widespread adoption, machine ...
How SVM (Support Vector Machine) algorithm works
In this video I explain how SVM (Support Vector Machine) algorithm works to classify a linearly separable binary data set. The original presentation is available ...
Extreme Learning Machine: Why Tuning Is Not Required in Learning?
Neural networks (NN) and support vector machines (SVM) play key roles in machine learning and data analysis. However, it is known that these popular ...
Linear Regression - Machine Learning Fun and Easy
Ajinkya More | Resampling techniques and other strategies
PyData SF 2016 Ajinkya More | Resampling techniques and other strategies for handling highly unbalanced datasets in classification Many real world machine ...
How to Make a Text Summarizer - Intro to Deep Learning #10
I'll show you how you can turn an article into a one-sentence summary in Python with the Keras machine learning library. We'll go over word embeddings, ...
Decision Tree 1: how it works
Full lecture: A Decision Tree recursively splits training data into subsets based on the value of a single attribute. Each split corresponds to a ..
Machine Learning with R Tutorial: Identifying Clustering Problems
Make sure to like & comment if you liked this video! Take Hank's course here: Many times in ..
Cliff Click on Big Data and Machine Learning
Interview with Cliff Click on his new open source software for doing machine learning on large data sets using a compressed in-memory representation with pure ...