Scaling deep learning for science

Researchers are now eager to apply deep learning -- the computational technique behind recent advances in tasks such as identifying animals in photos and recognizing speech -- to some of science's most persistent mysteries.

But because scientific data often looks very different from the photos and audio used to train networks for those everyday tasks, designing the right artificial neural network can feel like an impossible guessing game for nonexperts.

Using the Titan supercomputer, a research team led by Robert Patton of the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) has developed an evolutionary algorithm capable of generating custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems.

Better yet, by leveraging the GPU computing power of the Cray XK7 Titan -- the leadership-class machine managed by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL -- these auto-generated networks can be produced quickly, in a matter of hours as opposed to the months needed using conventional methods.

Scaled across Titan's 18,688 GPUs, the algorithm -- MENNDL, short for Multinode Evolutionary Neural Networks for Deep Learning -- can test and train thousands of potential networks for a science problem simultaneously, eliminating poor performers and averaging high performers until an optimal network emerges.
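
The article does not spell out MENNDL's selection mechanics, so the following is only a minimal sketch of the general idea: a population of candidate network configurations is scored, pruned, and recombined over several generations. The names (random_candidate, train_and_score), the search space, and the parameters are hypothetical stand-ins, and the fitness here is random rather than a trained network's accuracy.

```python
import random

# Hypothetical search space: a few hyperparameters that describe a candidate network.
def random_candidate():
    return {
        "layers": random.randint(2, 10),
        "filters": random.choice([16, 32, 64, 128]),
        "kernel_size": random.choice([3, 5, 7]),
        "learning_rate": 10 ** random.uniform(-4, -1),
    }

def train_and_score(candidate):
    # Placeholder fitness: a real system would train the candidate network
    # on the science dataset and return its validation accuracy.
    return random.random()

def evolve(population_size=20, generations=5, keep_fraction=0.25):
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        # Score every candidate, keep the best, discard the rest.
        ranked = sorted(population, key=train_and_score, reverse=True)
        survivors = ranked[: max(2, int(population_size * keep_fraction))]
        # Refill the population by recombining survivors, with occasional mutation.
        children = []
        while len(survivors) + len(children) < population_size:
            a, b = random.choices(survivors, k=2)
            child = {key: random.choice([a[key], b[key]]) for key in a}
            if random.random() < 0.3:
                key = random.choice(list(child))
                child[key] = random_candidate()[key]
            children.append(child)
        population = survivors + children
    return max(population, key=train_and_score)

print(evolve())
```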

Today's neural networks can consist of thousands or millions of simple computational units -- the 'neurons' -- arranged in stacked layers, like the rows of figures spaced across a foosball table.
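
As a concrete, if tiny, illustration of those stacked layers, here is a minimal sketch in PyTorch (not software named in the article); the layer sizes are arbitrary.

```python
import torch.nn as nn

# Three stacked layers of simple units: each layer's outputs feed the next,
# like successive rows of figures on a foosball table.
model = nn.Sequential(
    nn.Linear(784, 256),  # input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 64),   # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # second hidden layer -> 10 output classes
)
print(model)
```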

During one common form of training, a network is assigned a task (e.g., to find photos with cats) and fed a set of labeled data (e.g., photos of cats and photos without cats).

As the network pushes the data through each successive layer, it makes correlations between visual patterns and predefined labels, assigning values to specific features (e.g., whiskers and paws).
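
A minimal sketch of that kind of labeled training, again in PyTorch, with random tensors standing in for real cat/not-cat photos; the shapes, learning rate, and step count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Toy labeled data: 64 random "images" with binary labels (cat / not cat).
images = torch.randn(64, 784)
labels = torch.randint(0, 2, (64,))

net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    logits = net(images)            # push the data through each layer
    loss = loss_fn(logits, labels)  # compare predictions with the labels
    loss.backward()                 # compute how each weight should change
    optimizer.step()                # strengthen the useful correlations
```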

As Titan works through individual networks, new data is fed to the system's nodes asynchronously, meaning once a node completes a task, it's quickly assigned a new task independent of the other nodes' status.
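
On Titan this scheduling happens across thousands of nodes, but the asynchronous idea can be sketched with Python's standard concurrent.futures module, where each worker stands in for a node and evaluate_network is a hypothetical placeholder for training one candidate network.

```python
import random
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def evaluate_network(candidate_id):
    # Hypothetical stand-in for training and scoring one candidate network.
    time.sleep(random.uniform(0.1, 0.5))
    return candidate_id, random.random()

candidates = list(range(100))
results = []

with ThreadPoolExecutor(max_workers=8) as pool:
    pending = {pool.submit(evaluate_network, candidates.pop()) for _ in range(8)}
    while pending:
        # Wait only for the first worker to finish, not for the whole batch.
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for future in done:
            results.append(future.result())
            if candidates:
                # Hand the idle worker new work immediately, regardless of
                # what the other workers are still doing.
                pending.add(pool.submit(evaluate_network, candidates.pop()))
```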

To demonstrate MENNDL's versatility, the team applied the algorithm to several datasets, training networks to identify sub-cellular structures for medical research, classify satellite images with clouds, and categorize high-energy physics data.

Neutrinos, ghost-like particles that pass through your body at a rate of trillions per second, could play a major role in explaining the formation of the early universe and the nature of matter -- if only scientists knew more about them.

One such neutrino-physics task, known as vertex reconstruction, required a network to analyze detector images and precisely identify the location where neutrinos interact with the detector -- a challenge for events that produce many particles.
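
The article does not describe the network MENNDL produced for this problem, but the flavor of the task can be sketched as image classification. Assume, purely for illustration, that each event is a single-channel detector image and the goal is to pick which of a handful of detector segments contains the vertex; the segment count and image shape below are made up.

```python
import torch
import torch.nn as nn

NUM_SEGMENTS = 11           # assumed number of detector segments
IMAGE_SHAPE = (1, 127, 94)  # assumed (channels, height, width) of a detector image

# A small convolutional network that scores each segment for "contains the vertex".
vertex_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, NUM_SEGMENTS),
)

detector_image = torch.randn(1, *IMAGE_SHAPE)  # one event
segment_scores = vertex_net(detector_image)
print(segment_scores.shape)  # torch.Size([1, 11])
```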

Furthermore, because deep learning requires less mathematical precision than other types of scientific computing, Summit could potentially deliver exascale-level performance for deep learning problems -- the equivalent of a billion billion calculations per second.
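
"Less mathematical precision" here means doing arithmetic in narrower number formats, for example 16-bit rather than 64-bit floating point, which GPUs execute far faster. A small NumPy comparison, with arbitrary matrix sizes, shows the trade-off:

```python
import numpy as np

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)

c_fp64 = a @ b                                        # 64-bit result
c_fp16 = a.astype(np.float16) @ b.astype(np.float16)  # 16-bit result

# The 16-bit result agrees with the 64-bit one to only a few digits,
# but deep learning training typically tolerates that loss, which is why
# GPU-heavy machines can trade precision for raw arithmetic throughput.
print(np.abs(c_fp64 - c_fp16.astype(np.float64)).max())
```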

In addition to preparing for new hardware, Patton's team continues to develop MENNDL and explore other types of experimental techniques, including neuromorphic computing, another biologically inspired computing concept.

Adapting Deep Learning to New Data Using ORNL's Titan Supercomputer

In this video from SC17, Travis Johnston from ORNL presents "Adapting Deep Learning to New Data Using ORNL's Titan Supercomputer."