
Abstract: fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches.

This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions.

We have used this library to successfully create a complete deep learning course, which we were able to write more quickly than with previous approaches, and with clearer code.

fastai is a modern deep learning library, available from GitHub as open source under the Apache 2 license, which can be installed directly using the conda or pip package managers.

For instance, fastai provides a single Learner class which brings together architecture, optimizer, and data, and automatically chooses an appropriate loss function where possible.

In addition, because the training set and validation set are integrated into a single class, fastai is able, by default, to always display metrics during training using the validation set.

The mid-level API provides the core deep learning and data-processing methods for each of these applications, and low-level APIs provide a library of optimized primitives and functional and object-oriented foundations, which allows the mid-level to be developed and customised.

Perhaps more tellingly, we have been able to implement recent deep learning research papers with just a couple of hours work, whilst matching the performance shown in the papers.

Although the 'import *' syntax is not generally recommended, REPL programmers generally prefer the symbols they need to be directly available to them, which is why fastai supports the 'import *' style.

The library is carefully designed to ensure that importing in this way only imports the symbols that are actually likely to be useful to the user and avoids cluttering the namespace or shadowing important symbols.

The second line downloads a standard dataset from the fast.ai datasets collection (if not previously downloaded) to a configurable location (~/.fastai/data by default), extracts it (if not previously extracted), and returns a pathlib.Path object with the extracted location.
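
The download-if-missing, extract-if-missing behaviour described here can be sketched in plain Python (untar_data_sketch is a hypothetical stand-in written for illustration, not fastai's actual implementation):

```python
import tarfile
import urllib.request
from pathlib import Path

def untar_data_sketch(url, dest=None):
    """Download an archive if not already downloaded, extract it if not
    already extracted, and return a pathlib.Path to the extracted location."""
    dest = Path(dest) if dest else Path.home() / '.fastai' / 'data'
    dest.mkdir(parents=True, exist_ok=True)
    name = url.split('/')[-1]            # e.g. 'mnist_sample.tgz'
    archive = dest / name
    if not archive.exists():             # download only once
        urllib.request.urlretrieve(url, archive)
    extracted = dest / name.split('.')[0]
    if not extracted.exists():           # extract only once
        with tarfile.open(archive) as tf:
            tf.extractall(dest)
    return extracted
```

Returning a pathlib.Path (rather than a string) lets the caller immediately compose further paths with the `/` operator.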

Many other labellers are provided, particularly focused on labelling based on different kinds of file and folder name patterns, which are very common across a wide range of datasets.

aug_transforms() selects a set of data augmentations that work well across a variety of vision datasets and problems and can be fully customized by providing parameters to the function.

However, by providing a single function which curates best practices and makes the most common types of customization available through a single function, users have fewer pieces to learn in order to get good results.

After defining a DataLoaders object the user can easily look at the data with a single line of code: This fourth line creates a Learner, which provides an abstraction combining an optimizer, a model, and the data to train it – this will be described in more detail in 4.1.

For instance, in this case it will download an ImageNet-pretrained model, if not already available, remove the classification head of the model, replace it with a head appropriate for this particular dataset, and set appropriate defaults for the optimizer, weight decay, learning rate, and so forth (except where overridden by the user).

It anneals both the learning rate and the momentum, prints metrics on the validation set, displays results in an HTML table (if run in a Jupyter Notebook, or a console table otherwise), records losses and metrics after every batch to allow plotting later, and so forth.

After training a model the user can view the results in various ways, including analysing the errors with show_results(): Here is another example of a vision application, this time for segmentation on the CamVid dataset (Brostow et al. 2008): The lines of code to create and train this model are almost identical to those for a classification model, except for those necessary to tell fastai about the differences in the processing of the input data.

The exact same line of code that was used for the image classification example can also be used to display the segmentation data: Furthermore, the user can also view the results of the model, which again are visualized automatically in a way suitable for this task: In modern natural language processing (NLP), perhaps the most important approach to building models is through fine-tuning pre-trained language models.

Fine-tuning this model for classification requires the same basic steps: The same API is also used to view the DataLoaders: The biggest challenge with creating text applications is often the processing of the input data.

Because the tokenisation is built on top of a layered architecture, users can replace the base tokeniser with their own choices and will automatically get support for the underlying parallel process model provided by fastai.
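
A sketch of this pluggability in plain Python (the names here are illustrative, not fastai's actual classes): any callable from text to a list of tokens can serve as the base tokeniser, while the surrounding function handles parallel execution:

```python
from concurrent.futures import ThreadPoolExecutor

class WordTokenizer:
    """A minimal base tokeniser: any callable from str -> list of tokens works."""
    def __call__(self, text):
        return text.split()

def tokenize_corpus(texts, tokenizer=None, n_workers=2):
    """Apply the (swappable) tokeniser across a corpus in parallel."""
    tok = tokenizer or WordTokenizer()
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        return list(ex.map(tok, texts))
```

Replacing WordTokenizer with a subword or character tokeniser requires no change to tokenize_corpus, which is the essence of the layered design described above.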

fastai also provides features for automatically creating appropriate DataLoaders with separated validation and training sets, using a variety of mechanisms, such as randomly splitting rows, or selecting rows based on some column.
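
These two splitting mechanisms can be sketched in a few lines of plain Python (hypothetical helper names, not fastai's API):

```python
import random

def random_splitter(n_items, valid_pct=0.2, seed=42):
    """Randomly split item indices into training and validation sets."""
    rng = random.Random(seed)
    idxs = list(range(n_items))
    rng.shuffle(idxs)
    cut = int(n_items * valid_pct)
    return idxs[cut:], idxs[:cut]          # (train, valid)

def col_splitter(rows, col='is_valid'):
    """Select validation rows based on a boolean column."""
    train = [i for i, r in enumerate(rows) if not r[col]]
    valid = [i for i, r in enumerate(rows) if r[col]]
    return train, valid
```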

Both are trained using the same steps that we’ve seen in the other applications, as in this example using the popular Movielens dataset (Harper and Konstan 2015): fastai is mostly focused on model training, but once this is done you can easily export the PyTorch model to serve it in production.

One such component is the visualisation API, which uses a small number of methods, the main ones being show_batch (for showing input data) and show_results (for showing model results).

The transfer learning capability shared across the applications relies on PyTorch’s parameter groups, and fastai’s mid-level API then leverages these groups, such as the generic optimizer (see 4.3).

The recommended way of training models uses a variant of the 1cycle policy (Smith 2018), which applies a warm-up and annealing to the learning rate while doing the opposite with the momentum parameter: The learning rate is the most important hyper-parameter to tune (and very often the only one, since the library sets proper defaults).
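
The shape of such a schedule can be sketched numerically (the parameter names and default values below are illustrative, not fastai's exact ones):

```python
import math

def cos_anneal(start, end, pct):
    """Cosine interpolation from start (pct=0) to end (pct=1)."""
    return end + (start - end) / 2 * (1 + math.cos(math.pi * pct))

def one_cycle(pct, lr_max=1e-2, div=25.0, mom_min=0.85, mom_max=0.95, warmup=0.25):
    """Return (lr, momentum) at training progress pct in [0, 1]:
    the learning rate warms up then anneals, momentum does the opposite."""
    if pct < warmup:
        p = pct / warmup
        return cos_anneal(lr_max / div, lr_max, p), cos_anneal(mom_max, mom_min, p)
    p = (pct - warmup) / (1 - warmup)
    return cos_anneal(lr_max, lr_max / 1e4, p), cos_anneal(mom_min, mom_max, p)
```

Note the mirror symmetry: where the learning rate rises, momentum falls, and vice versa.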

Other libraries often provide help for grid search or AutoML to guess the best value, but the fastai library implements the learning rate finder (Smith 2015), which finds a good value for this parameter much more quickly by running a short mock training.

It is the first attempt we are aware of to systematically define all of the steps necessary to prepare data for a deep learning model, and give users a mix and match recipe book for combining these pieces (which we refer to as data blocks).

The steps that are defined by the data block API are: Here is an example of how to use the data block API to get the MNIST dataset (LeCun, Cortes, and Burges 2010) ready for modelling: In fastai v1 and earlier we used a fluent API instead of a functional one for this (meaning the statements to execute those steps were chained one after the other).

Here is an example of using the data blocks API to complete the same segmentation seen earlier: Object detection can also be completed using the same functionality (here using the COCO dataset (Lin et al. 2014)):

The data for language modeling seen earlier can also be built using the data blocks API: We have heard from users that they find the data blocks API provides a good balance of conciseness and expressivity.

Then, the 4 lines of the training loop are replaced with this code: With no other changes, the user now has the benefit of all fastai’s callbacks, progress reporting, integrated schedulers such as 1cycle training, and so forth.

As the application examples have shown, the fastai library allows training a variety of kinds of application models, with a variety of kinds of datasets, using a very consistent API.

Customizing the behaviour of predefined applications can be challenging, which means that researchers often end up 'reinventing the wheel', or constraining themselves to the specific parts that their tooling allows them to customize.

Software engineering best practices involve building up decoupled components which can be tied together in flexible ways, and then creating increasingly less abstract and more customized layers on top of each part.

This has two problems: the first is that it becomes harder and harder to create additional high-level functionality, as the system becomes more sophisticated, because the low-level API becomes increasingly complicated and cluttered.

The second problem is that for users of the system who want to customize and adapt it, they often have to rewrite significant parts of the high-level API, and understand the large surface area of the low-level API in order to do so.

These issues are common across nearly all software development, and many software engineers have worked hard to find ways to deal with this complexity and develop layered architectures.

There was only one approach that consistently worked well across all datasets that we tried, which is to never freeze batch-normalization layers, and never turn off the updating of their moving average statistics.

However, fastai’s callback system is the first that we are aware of that supports the design principles necessary for complete two-way callbacks: This is the way callbacks are usually designed, but in addition, there is a key design principle: This is why we call these two-way callbacks, as the information not only flows from the training loop to the callbacks, but in the other direction as well.

For instance, here is the code for training a single batch b in fastai: This example clearly shows how every step of the process is associated with a callback (the calls to self.cb()) and shows how exceptions are used as a flexible control flow mechanism for them.
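
A stripped-down sketch of the idea (hypothetical names; fastai's real loop has many more events and richer state):

```python
class CancelBatchException(Exception):
    """Raised by a callback to tell the loop to skip the rest of this batch."""

class Callback:
    def before_batch(self): pass
    def after_batch(self): pass

class SkipEverySecondBatch(Callback):
    """Uses the exception mechanism to cancel every second batch."""
    def __init__(self):
        self.n = 0
    def before_batch(self):
        self.n += 1
        if self.n % 2 == 0:
            raise CancelBatchException()

class Loop:
    def __init__(self, cbs):
        self.cbs, self.processed = cbs, 0
    def cb(self, event):
        for c in self.cbs:
            getattr(c, event)()
    def one_batch(self):
        try:
            self.cb('before_batch')     # information flows loop -> callbacks...
            self.processed += 1         # (stand-in for the forward/backward pass)
        except CancelBatchException:
            pass                        # ...and callbacks -> loop, via exceptions
        finally:
            self.cb('after_batch')
```

Running one_batch() four times with a SkipEverySecondBatch callback processes only two batches, without the loop itself knowing anything about the skipping logic.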

On the other hand, with this new callback system we have not had to change the training loop at all, and have used callbacks to implement mixup augmentation, generative adversarial networks, optimized mixed precision training, PyTorch hooks, the learning rate finder, and many more.

To do so, it must complete the following tasks: It relies on a GANModule, which contains the generator and the critic and delegates the input to the proper model depending on the value of a flag gen_mode, and on a GANLoss, which also has generator and critic behaviours and handles the evaluation mentioned earlier.

fastai provides a new generic optimizer foundation that allows recent optimization techniques to be implemented in a handful of lines of code, by refactoring out the common functionality of modern optimizers into two basic pieces: This has allowed us to implement every optimizer that we have attempted in fastai, without needing to extend or change this foundation.

As an example of a development improvement, here are the entire changes needed to support decoupled weight decay (also known as AdamW (Loshchilov and Hutter 2017)): By contrast, the implementation in the PyTorch library required creating an entirely new class, with over 50 lines of code.
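
The flavour of that refactoring can be sketched with scalar parameters (plain Python for illustration; fastai's real optimizer works on tensors and parameter groups):

```python
def sgd_step(p, grad, lr, **kw):
    """Basic gradient step."""
    return p - lr * grad

def weight_decay(p, grad, lr, wd, **kw):
    """Decoupled weight decay: shrink the parameter directly instead of
    adding wd * p to the gradient (the L2-regularization formulation)."""
    return p * (1 - lr * wd)

class ComposedOptimizer:
    """Builds an optimizer out of a list of stepper functions."""
    def __init__(self, params, steppers, **hypers):
        self.params, self.steppers, self.hypers = list(params), steppers, hypers
    def step(self, grads):
        for i, (p, g) in enumerate(zip(self.params, grads)):
            for stepper in self.steppers:
                p = stepper(p, g, **self.hypers)
            self.params[i] = p
```

Swapping between decoupled weight decay and classic L2 regularization then means swapping one small stepper function, not writing a new optimizer class.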

The benefit for research comes about because it is easy to rapidly implement new papers as they come out, recognise similarities and differences across techniques, and try out variants and combinations of these underlying differences, many of which have not yet been published.

The only differences between the code and the figure are: In order to support modern optimizers such as LARS, fastai allows the user to choose whether to aggregate stats at model, layer, or per-activation level.

In order to provide a more flexible foundation to support metrics like this fastai provides a Metric abstract class which defines three methods: reset, accumulate, and value (which is a property).
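
That interface can be sketched in plain Python (a simplified illustration of the reset/accumulate/value pattern, not fastai's full Metric class):

```python
from abc import ABC, abstractmethod

class Metric(ABC):
    """Accumulate statistics batch by batch; read `value` at the end of an epoch."""
    @abstractmethod
    def reset(self): ...
    @abstractmethod
    def accumulate(self, preds, targs): ...
    @property
    @abstractmethod
    def value(self): ...

class Accuracy(Metric):
    def reset(self):
        self.correct, self.total = 0, 0
    def accumulate(self, preds, targs):
        self.correct += sum(p == t for p, t in zip(preds, targs))
        self.total += len(targs)
    @property
    def value(self):
        return self.correct / self.total
```

Accumulating counts rather than averaging per-batch values keeps the result exact even when the final batch is smaller than the others.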

Scikit-learn (Pedregosa et al. 2011) already provides a wide variety of useful metrics, so instead of reinventing them, fastai provides a simple wrapper function, skm_to_fastai, which allows them to be used in fastai, and can automatically add pre-processing steps such as sigmoid, argmax, and thresholding.

Because the documentation is written in interactive notebooks (as discussed in a later section) this also means that users can directly experiment with these datasets and models by simply running and modifying the documentation notebooks.

These customizable methods represent the 15 stages of data loading that we have identified, and which fit into three broad stages: sample creation, item creation, and batch creation.

Users who need to create a new kind of block for the data blocks API, or who need a level of customization that even the data blocks API does not support, can use the mid-level components that the data block API is built on.

This is important foundational functionality for deep learning, such as the ability to index into a collection of filenames and, on demand, read an image file and then apply any processing, such as data augmentation and normalization, necessary for a model.

Information about what subset an item belongs to is passed down to transforms, so that they can ensure that they do the appropriate processing – for instance, data augmentation processing would be generally skipped for a validation set, unless doing test time augmentation.
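
A toy illustration of such split-aware behaviour (hypothetical names; fastai passes this information through its transform pipeline rather than as an argument):

```python
import random

class AddNoise:
    """An 'augmentation' that is applied to training items but skipped
    for the validation set."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
    def __call__(self, x, split='train'):
        if split == 'valid':
            return x                      # no augmentation at validation time
        return x + self.rng.uniform(-0.1, 0.1)
```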

This also makes it harder to support automatic drawing graphs representing a model, printing a model summary, accurately reporting on model computation requirements by layer, and so forth.

Therefore, fastai attempts to provide the basic foundations to allow modern neural network architectures to be built by stacking a small number of predefined building blocks.

In this way, fastai provides primitives which allow representing modern network architecture out of predefined building blocks, without falling back to Python code in the forward function.

The low-level of the fastai stack provides a set of abstractions for: The rest of this section will explain how the transform pipeline system is built on top of the foundations provided by PyTorch, type dispatch, and semantic tensors, providing the flexible infrastructure needed for the rest of fastai.

fastai uses building blocks from all parts of the PyTorch library, including directly patching its tensor class, entirely replacing its library of optimizers, providing simplified mechanisms for using its hooks, and so forth.

For CPU image processing fastai uses and extends the Python imaging library (PIL) (Clark and Contributors, n.d.), for reading and processing tabular data it uses pandas, and for most of its metrics it uses Scikit-Learn (Pedregosa et al. 2011).

For example, normalization statistics could be based on a sample of data batches, a categorization transform could get its vocabulary directly from the dependent variable, or an NLP numericalization transform could get its vocabulary from the tokens used in the input corpus.
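
For instance, a categorisation transform with such a setup step might look like this sketch (illustrative names, not fastai's implementation):

```python
class Categorize:
    """A transform whose state (the vocabulary) is derived from the data."""
    def setup(self, items):
        self.vocab = sorted(set(items))
        self.o2i = {v: i for i, v in enumerate(self.vocab)}
    def encode(self, item):
        return self.o2i[item]
    def decode(self, idx):
        return self.vocab[idx]
```

Because setup() runs on the training data only, the same vocabulary is then applied consistently to the validation set.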

Here is an example of creating two different methods which dispatch based on parameter types: Here f_td_test has a generic implementation for x of numeric types and all ys, then a specialized implementation when x is an int and y is a float.
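
The mechanism can be sketched as follows (a toy dispatcher, not fastai's typedispatch implementation; here later registrations win when they match, so specific cases are registered last):

```python
class TypeDispatch:
    """Dispatch on the types of the first two arguments; registrations
    added later (the more specific ones) are checked first."""
    def __init__(self):
        self.funcs = []
    def register(self, tx, ty):
        def deco(f):
            self.funcs.append((tx, ty, f))
            return f
        return deco
    def __call__(self, x, y):
        for tx, ty, f in reversed(self.funcs):
            if isinstance(x, tx) and isinstance(y, ty):
                return f(x, y)
        raise TypeError(f"no implementation for ({type(x)}, {type(y)})")

f_td_test = TypeDispatch()

@f_td_test.register((int, float), object)
def _generic(x, y):
    return 'numeric x, any y'

@f_td_test.register(int, float)
def _specific(x, y):
    return 'int x, float y'
```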

Historically, the processing pipeline in computer vision has always been to open the images and apply data augmentation on the CPU, using a dedicated library such as PIL (Clark and Contributors, n.d.) or OpenCV (Bradski 2000), then batch the results before transferring them to the GPU and using them to train the model.

Most data augmentations are random affine transforms (rotation, zoom, translation, etc.), functions on a coordinate map (perspective warping), or simple functions applied to the pixels (contrast or brightness changes), all of which can easily be parallelized and applied to a batch of images.

The type-dispatch system helps apply appropriate transforms to images, segmentation masks, key-points or bounding boxes (and users can add support for other types by writing their own functions).

For instance, here is how fastai adds the read() method to the pathlib.Path class: Lastly, inspired by the NumPy (Oliphant, n.d.) library, fastai provides a collection type, called L, that supports fancy indexing and has a lot of methods that allow users to write simple expressive code.

For example, a single chained expression can take a list of pairs, select the second item of each pair, take its absolute value, filter for items greater than 4, and add them up.
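
A few lines of plain Python give the flavour of such a chain (a toy subset of L's interface, not the real class):

```python
class L(list):
    """A toy subset of fastai's L: a list with chainable methods."""
    def itemgot(self, i):
        return L(o[i] for o in self)
    def map(self, f):
        return L(f(o) for o in self)
    def filter(self, f):
        return L(o for o in self if f(o))
    def sum(self):
        return sum(self)

# second items -> absolute values -> keep those > 4 -> total
total = L([(1, -5), (2, 3), (3, -9)]).itemgot(1).map(abs).filter(lambda o: o > 4).sum()
```

Each method returns a new L, so the operations compose left to right in a single expressive line.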

For instance, the sorted method can take any of the following as a key: a callable (sorts based on the value of calling the key with the item), a string (used as an attribute name), or an int (used as an index).
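
That dispatch on the type of the key can be sketched like so (a simplified stand-in for the real method):

```python
from operator import attrgetter, itemgetter

def flexible_sorted(items, key=None, reverse=False):
    """Accept a callable, an attribute name (str), or an index (int) as key."""
    if isinstance(key, str):
        key = attrgetter(key)
    elif isinstance(key, int):
        key = itemgetter(key)
    return sorted(items, key=key, reverse=reverse)
```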

In order to assist in developing this library, we built a programming environment called nbdev, which allows users to create complete Python packages, including tests and a rich documentation system, all in Jupyter Notebooks (Kluyver et al. 2016).

But these systems are not as strong for the “programming” part, since they’re missing features provided by IDEs and editors like good documentation lookup, good syntax highlighting, integration with unit tests, and (most importantly) the ability to produce final, distributable source code files.

nbdev is built on top of Jupyter Notebook and adds the following critically important tools for software development: We plan to provide more information about the features, benefits, and history behind nbdev in a future paper.

Scikit-Learn (Pedregosa et al. 2011), Torchvision (Massa and Chintala, n.d.), and pandas (McKinney 2010) are examples of libraries which provide a function composition abstraction that (like fastai’s Pipeline) is designed to help users process their data into the format they need (Scikit-Learn also being able to perform learning and predictions on that processed data).

On the other hand, fastai does not make much use of PyTorch’s higher level APIs, such as torch.optim and its annealing schedulers, instead independently creating overlapping functionality based on the design approaches and goals described above.

We have used the fastai library to rewrite the entire fast.ai course “Practical Deep Learning for Coders”, which contains 14 hours of material, across seven modules, and covers all the applications described in this paper (and some more).

Many thanks also to Sebastian Raschka for commissioning this paper and acting as the editor for the special edition of Information that it appears in, to the Facebook PyTorch team for all their support throughout fastai's development, to the global fast.ai community who through forums.fast.ai have contributed many ideas and pull requests that have been invaluable to the development of fastai, to Chris Lattner and the Swift for TensorFlow team who helped develop the Swift courses at course.fast.ai and SwiftAI, to Andrew Shaw for contributing to early prototypes of showdoc in nbdev, to Stas Bekman for contributing to early prototypes of the git hooks in nbdev and to packaging and utilities, and to the developers of the Python programming language, which provides such a strong foundation for fastai's features.
