AI News, MOOC Artificial Intelligence

Special Issue "Emerging Artificial Intelligence (AI) Technologies for Learning"

Dear Colleagues, The future of education lies in the ability to develop learning technologies which seamlessly integrate Artificial Intelligence components into the educational process, in order to deliver a personalized learning service dynamically tailored to the learner's characteristics, abilities, and needs (new educational goals, previous skills, re-training, special educational needs, etc.). This Special Issue's aim is to collect research contributions in the area of Artificial Intelligence models, technologies, and applications for supporting the learning process.

The potential of AI learning technologies is dramatically increasing their role in modern educational systems, from the academic level, where learning management system platforms are ubiquitous and widely support student–lecturer communications, down to the kindergarten level, where kids often use tailored systems, such as toy robots or storytelling software tools, to learn the basics of computational thinking. A large number of conventional knowledge transfer and learning systems already integrate AI components, e.g., for supporting learner profiling and learning analytics, while a great potential for AI technologies lies in the personalization and automation of the different phases of the learning process.

In a scenario which demands education to be quick, effective, and responsive to fast-changing topics, educational goals, and individualized learner needs, the role of AI models and technologies is crucial. The scope of the submitted contributions is expected to range from theoretical models and methods to architectures, system implementations, and reports of field experiences.

Decrappification, DeOldification, and Super Resolution

In this article we will introduce the idea of "decrappification", a deep learning method implemented in fastai on PyTorch that can do some pretty amazing things, like colorize classic black and white movies, even ones from back in the days of silent film. The same approach can make your old family photos look like they were taken on a modern camera, and even improve the clarity of microscopy images taken with state-of-the-art equipment at the Salk Institute, resulting in 300% more accurate cellular analysis.

In recent years generative models have advanced at an astonishing rate, largely due to deep learning, and particularly due to generative adversarial networks (GANs).

However, GANs are notoriously difficult to train, due to requiring a large amount of data, needing many GPUs and a lot of time to train, and being highly sensitive to minor hyperparameter changes.

fast.ai has been working in recent years towards making a range of models easier and faster to train, with a particular focus on using transfer learning.

Transfer learning refers to pre-training a model using readily available data and loss functions that are quick and easy to calculate, and then fine-tuning that model for a task that may have fewer labels, or be more expensive to compute.
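As a rough illustration of that pattern, here is a minimal PyTorch sketch (not the fast.ai pipeline itself; the backbone choice, class count, and train_loader are placeholder assumptions): load an ImageNet-pretrained backbone, freeze it, and fine-tune a new head on the smaller target task.

```python
# Minimal transfer-learning sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# 1. Start from a backbone pre-trained on readily available data (ImageNet).
backbone = models.resnet34(pretrained=True)

# 2. Freeze the pre-trained weights so early fine-tuning only touches the head.
for p in backbone.parameters():
    p.requires_grad = False

# 3. Replace the classification head for the new, smaller-label task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # e.g. 10 target classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# 4. Fine-tune on the target data (train_loader is assumed to exist).
# for x, y in train_loader:
#     loss = criterion(backbone(x), y)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```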

The pre-training task that fast.ai selected was this: start with an image dataset and "crappify" the images, for example by reducing the resolution, adding JPEG artifacts, and obscuring parts with random text.
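A minimal sketch of what such a crappifier might look like, assuming Pillow and RGB inputs (the exact degradations and parameters used by fast.ai differ):

```python
# Hedged sketch of a "crappifier": downscale, overlay random text, add JPEG artifacts.
import random
from io import BytesIO
from PIL import Image, ImageDraw

def crappify(img: Image.Image, scale: int = 4, quality: int = 10) -> Image.Image:
    """Degrade a high-quality RGB image to create a synthetic low-quality input."""
    w, h = img.size
    # Reduce the resolution.
    small = img.resize((max(1, w // scale), max(1, h // scale)), resample=Image.BILINEAR)
    # Obscure part of the image with random text.
    draw = ImageDraw.Draw(small)
    x = random.randint(0, max(1, small.width - 20))
    y = random.randint(0, max(1, small.height - 20))
    draw.text((x, y), str(random.randint(10, 99)), fill=(255, 255, 255))
    # Add JPEG artifacts by round-tripping through a low-quality JPEG.
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```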

Then, the loss function was replaced with a combination of other loss functions used in the generative modeling literature (more details in the f8 video), and the model was trained for another couple of hours.
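For illustration, one common combination from that literature mixes a pixel-wise L1 loss with a VGG-based feature ("perceptual") loss; this sketch is an assumption about the flavor of loss, not the exact mixture used in the post:

```python
# Illustrative combined loss: pixel L1 fidelity plus VGG feature similarity.
import torch.nn as nn
from torchvision import models

class CombinedLoss(nn.Module):
    def __init__(self, feat_weight: float = 1.0, pixel_weight: float = 1.0):
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False      # fixed feature extractor
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.feat_weight = feat_weight
        self.pixel_weight = pixel_weight

    def forward(self, pred, target):
        pixel = self.l1(pred, target)                     # low-level fidelity
        feat = self.l1(self.vgg(pred), self.vgg(target))  # perceptual similarity
        return self.pixel_weight * pixel + self.feat_weight * feat
```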

The ambition of Jason Antic, DeOldify's creator, was to be able to successfully colorize real world old images with the noise, contrast, and brightness problems caused by film degradation.

He discovered that just a tiny bit of GAN fine-tuning on top of the process developed with fast.ai could create colorized movies in just a couple of hours, at a quality beyond any automated process that had been built before.

Meanwhile, Uri Manor, Director of the Waitt Advanced Biophotonics Core (WABC) at the Salk Institute, was looking for ways to simultaneously improve the resolution, speed, and signal-to-noise of the images taken by the WABC’s state of the art ZEISS scanning electron and laser scanning confocal microscopes.

These three parameters are notably in tension with one another - a variant of the so-called “triangle of compromise”, the bane of existence for all photographers and imaging scientists alike.

The advanced microscopes at the WABC are heavily used by researchers at the Salk (as well as several neighboring institutions including Scripps and UCSD) to investigate the ultrastructural organization and dynamics of life, ranging anywhere from carbon capturing machines in plant tissues to synaptic connections in brain circuits to energy generating mitochondria in cancer cells and neurons.

The scanning electron microscope is distinguished by its ability to serially slice and image an entire block of tissue, resulting in a 3-dimensional volumetric dataset at nearly nanometer resolution.

The so-called “Airyscan” scanning confocal microscopes at the WABC boast a cutting-edge array of 32 hexagonally packed detectors that facilitate fluorescence imaging at nearly double the resolution of a normal microscope while also providing 8-fold sensitivity and speed.

Using carefully acquired high resolution images for training, the group validated “generalized” models for super-resolution processing of electron and fluorescence microscope images, enabling faster imaging with higher throughput, lower sample damage, and smaller file sizes than ever reported.

Since the models are able to restore images acquired on relatively low-cost microscopes, this approach also presents an opportunity to "democratize" high resolution imaging for those not working at elite institutions that can afford the latest cutting edge instrumentation.

Taking inspiration from this blog post about stabilizing neural style transfer in video, he was able to add a “stability” measure to the loss function being used for creating single image super-resolution.

This stability, when combined with information about the preceding and following frames of video, significantly reduces flicker and improves the quality of output when processing low resolution movies.

Additionally, high quality video colorization in DeOldify is made possible by advances in training that greatly increase stability and quality of rendering, largely brought about by employing NoGAN training.

As the Self-Attention GAN paper puts it, "the discriminator can check that highly detailed features in distant portions of the image are consistent with each other." This same self-attention approach was adapted to the critic and U-Net based generators used in DeOldify.
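For reference, here is a minimal sketch of a SAGAN-style self-attention layer over convolutional feature maps (an illustrative reimplementation, not the exact DeOldify code; it assumes the channel count is at least 8):

```python
# SAGAN-style self-attention over convolutional feature maps.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv1d(channels, channels // 8, 1)
        self.key = nn.Conv1d(channels, channels // 8, 1)
        self.value = nn.Conv1d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                   # treat pixels as a sequence
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, hw, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                  # residual connection
```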

We’ve also employed Gaussian noise augmentation in video model training in order to reduce model sensitivity to meaningless noise (grain) in film.
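A minimal sketch of that kind of augmentation (the noise range is an assumption, not the value used for DeOldify):

```python
# Random-strength Gaussian noise augmentation for image batches in [0, 1].
import torch

def add_gaussian_noise(batch: torch.Tensor, max_sigma: float = 0.05) -> torch.Tensor:
    sigma = torch.rand(1).item() * max_sigma         # random noise strength per batch
    noisy = batch + torch.randn_like(batch) * sigma  # grain-like perturbation
    return noisy.clamp(0.0, 1.0)
```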

Here is the NoGAN training process: (1) pretrain the generator in a conventional way with a feature loss alone; (2) pretrain the critic as a plain binary classifier on the generator's outputs versus real images; (3) train the generator and critic together in a (nearly) normal GAN setting, but only very briefly. In the case of DeOldify, training to this point requires iterating through only about 1% to 3% of ImageNet data (or roughly 2600 to 7800 iterations on a batch size of five).
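Schematically, the three phases might look like the following plain-PyTorch sketch, where generator, critic, feature_loss, and the data iterable are placeholders rather than the actual DeOldify objects:

```python
# Three-phase NoGAN schedule (illustrative sketch, not the DeOldify code).
import torch
import torch.nn as nn

def nogan_training(generator: nn.Module, critic: nn.Module,
                   feature_loss, data, gan_steps: int = 1500):
    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

    # Phase 1: pretrain the generator conventionally, with the feature loss only.
    for lr_img, hr_img in data:
        g_opt.zero_grad()
        feature_loss(generator(lr_img), hr_img).backward()
        g_opt.step()

    # Phase 2: pretrain the critic as a plain binary classifier
    # (generated vs. real), with the generator frozen.
    for lr_img, hr_img in data:
        c_opt.zero_grad()
        fake = generator(lr_img).detach()
        p_fake, p_real = critic(fake), critic(hr_img)
        (bce(p_fake, torch.zeros_like(p_fake)) +
         bce(p_real, torch.ones_like(p_real))).backward()
        c_opt.step()

    # Phase 3: brief adversarial fine-tuning, stopped after a small number
    # of iterations (before artifacts creep in past the inflection point).
    for step, (lr_img, hr_img) in enumerate(data):
        if step >= gan_steps:
            break
        # Critic step.
        c_opt.zero_grad()
        fake = generator(lr_img).detach()
        p_fake, p_real = critic(fake), critic(hr_img)
        (bce(p_fake, torch.zeros_like(p_fake)) +
         bce(p_real, torch.ones_like(p_real))).backward()
        c_opt.step()
        # Generator step: feature loss plus adversarial term.
        g_opt.zero_grad()
        fake = generator(lr_img)
        p_fake = critic(fake)
        (feature_loss(fake, hr_img) +
         bce(p_fake, torch.ones_like(p_fake))).backward()
        g_opt.step()
```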

However, there’s still the issue of artifacts and discoloration being introduced after the “inflection point”, and it’s suspected that this could be reduced or eliminated by mitigating batch size related issues.

In just fifteen minutes of direct GAN training (or 1500 iterations), the output of Fast.AI’s Part 1 Lesson 7 Feature Loss based super resolution training is noticeably sharpened (original lesson Jupyter notebook here).

If you took the original DeOldify model and merely colorized each frame just as you would any other image, the result was a flickering mess. The immediate solution that usually springs to mind is temporal modeling of some sort, whereby you enforce constraints on the model during training over a window of frames to keep colorization decisions more consistent.

For example, a larger resnet backbone (resnet101) makes a noticeable difference in how accurately and robustly features are detected, and therefore how consistently the frames are rendered as objects and scenes move.

In contrast, simpler loss functions such as MSE and L1 loss tend to produce dull colorizations as they encourage the networks to “play it safe” and bet on gray and brown by default.

Creating a model to produce high resolution microscopy videos from low resolution sequences involves a few high-level steps. At Salk we have been fortunate because we have produced decent results using only synthetic low resolution data.

This is important because it is time consuming and rare to have perfectly aligned pairs of high resolution and low resolution images - and that would be even harder or impossible with video (live cells).

The crappifier is a function that transforms a high resolution image into a low resolution image that approximates the real low resolution images we will be working with once our model is trained.

For example, we found that if our crappifier injected too much high frequency noise into the training data, the trained model would have a tendency to eliminate thin and fine structures like those of neurons.

This image shows an example from a training where we are using 5 sequential images (t-2, t-1, t0, t+1, t+2) to predict a single super-resolution output image (also at time t0). For the movies we used bundles of 3 images and predicted the high resolution image at the corresponding middle time.

We chose 3 images because that conveniently allowed us to easily use pre-existing super-resolution network architectures, data loaders and loss functions that were written for 3 channels of input.
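Concretely, the bundling step can be as simple as stacking the previous, current, and next frames into the three input channels; this helper is an illustrative assumption about the data preparation, not the actual Salk code:

```python
# Bundle three neighboring low-resolution frames into a 3-channel input.
import numpy as np

def bundle_frames(frames: np.ndarray, t: int) -> np.ndarray:
    """frames: (T, H, W) grayscale sequence; returns a (3, H, W) input for time t."""
    t_prev = max(t - 1, 0)                 # clamp at the sequence boundaries
    t_next = min(t + 1, len(frames) - 1)
    return np.stack([frames[t_prev], frames[t], frames[t_next]], axis=0)
```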

Given the low resolution image sequence X that we will use to predict the true high resolution image T, we create X1 and X2, which result from two separate applications of the random noise-generating crappifier function.

Passing X1 and X2 through the model gives two predictions, Y1 and Y2, each compared against the true image T to give losses L1 = loss(Y1, T) and L2 = loss(Y2, T). We used L1 loss for this comparison, but you could also use a feature loss or some other approach to measure the difference. The stability term is LossStable = loss(Y1, Y2), and our final training loss is therefore: loss = L1 + L2 + LossStable. Now that we have a trained model, generating high resolution output from low resolution input is simply a matter of running the model across a sliding window of, in this case, three low resolution input images at a time.
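A hedged sketch of the training loss described above (crappify and model are placeholder names for the degradation function and the network):

```python
# Stability-augmented training loss: two independently degraded versions of the
# same frame are both mapped to the target, and their outputs are also pulled
# toward each other.
import torch.nn.functional as F

def stable_loss(model, hr_target, crappify):
    x1 = crappify(hr_target)           # first random degradation
    x2 = crappify(hr_target)           # second, independent degradation
    y1, y2 = model(x1), model(x2)
    l1 = F.l1_loss(y1, hr_target)      # accuracy of first prediction
    l2 = F.l1_loss(y2, hr_target)      # accuracy of second prediction
    stable = F.l1_loss(y1, y2)         # predictions should agree with each other
    return l1 + l2 + stable
```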

Andrew Ng - The State of Artificial Intelligence

Professor Andrew Ng is the former chief scientist at Baidu, where he led the company's Artificial Intelligence Group. He is an adjunct professor at Stanford ...

Developmental Artificial Intelligence MOOC Teaser

Olivier Georgon starts this Massive Open Online Course on Developmental Artificial Intelligence in October 2014 (more information ..

Elements of AI

Prof. Teemu Roos, coordinator of the Education program at FCAI, talks about the motivation behind the highly popular Elements of AI MOOC, an open online course ...

11. Introduction to Machine Learning

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. View the complete course: Instructor: Eric Grimson ..

What is artificial intelligence?

In this 3 Minute Thinking video Dr Sanjay Modgil imagines a future full of new technology and asks fundamental questions about artificial intelligence.