
Watch An AI Invent Its Own Visual Language

At least, that’s one way to interpret the work of artist Tom White, a computational design professor at Victoria University School of Design.

He’s experimenting with flipping the creative process, putting AI in the artist’s place while he simply helps these so-called “Perception Engines” express themselves.

With grant funding from Google’s Artist and Machine Intelligence group, White used a Riso printer to turn each sketch into a print, which he sells online to fund the process.

One series of prints, cleverly titled The Treachery of ImageNet, includes captions that nod to Magritte’s iconic 1929 painting, The Treachery of Images (Ceci n’est pas une pipe).

A small irregularity in an image of a fan that a human would never notice, for instance, might confuse the system into thinking it’s looking at an avocado.

Strengthening neural networks against these adversarial examples may depend on helping them recognize higher-level, abstract concepts of the objects they see, rather than relying on granular, pixel-level recognition.

Perception Engines

Can neural networks create abstract objects from nothing other than collections of labelled example images?

Human perception is an often under-appreciated component of the creative process, so it is an interesting exercise to devise a computational creative process that puts perception front and center.

For example, here are the first few dozen training images (from over a thousand) in the electric fan category. Abstract representational prints are then constructed that are able to elicit strong classifier responses in neural networks.

From the point of view of trained neural network classifiers, images of these ink on paper prints strongly trigger the abstract concepts within the constraints of a given drawing system.

This process is called a perception engine because it uses the perception abilities of trained neural networks to guide its construction process.

Adversarial examples are a body of research that probes machine learning systems with small perturbations designed to make a classifier assign the wrong label.
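The idea can be sketched in a few lines of numpy. Everything below is an illustrative assumption, not any real network: a toy linear model stands in for the deep classifier. For a linear score the gradient with respect to the input is just the weight vector, so nudging every pixel a tiny step against the sign of that gradient collapses the score while barely changing the image.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # toy classifier weights (illustrative stand-in)
x = rng.normal(size=64)   # a benign 64-pixel "image"

def score(img):
    # Confidence that the true label applies; for this linear toy model,
    # the gradient of the score with respect to the input is simply w.
    return float(w @ img)

eps = 0.05                     # perturbation budget per pixel
x_adv = x - eps * np.sign(w)   # step every pixel against the gradient sign

# score(x_adv) = score(x) - eps * sum(|w|): a tiny per-pixel change
# produces a large, guaranteed drop in the classifier's response.
```

Per-pixel budget `eps`, this sign step produces the largest possible score drop for a linear model, which is why the gradient sign is the canonical one-step attack (Goodfellow et al.’s fast gradient sign method).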

As the architecture of these early systems settled, the operation could be cleanly divided into three submodules. The perception engine architecture uses the random search of the planning module to gradually achieve the objective through iterative refinement.
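The render/score/refine loop described here can be sketched with toy stand-ins. Nothing below is White's actual code: the "classifier" is a fixed random linear scorer and the "drawing system" places short horizontal strokes, but the control flow (render a plan, score it with the perception module, keep only random mutations that score higher) is the random-search refinement the text describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "perception" submodule: a fixed random linear scorer over a
# 32x32 canvas.  (A real perception engine queries trained ImageNet
# classifiers; everything here is an illustrative assumption.)
W = rng.normal(size=(32, 32))

def classifier_score(canvas):
    return float(np.sum(W * canvas))

# Stand-in "drawing" submodule: render a plan, a list of short
# horizontal strokes (row, col, length), onto a blank canvas.
def render(plan):
    canvas = np.zeros((32, 32))
    for r, c, length in plan:
        canvas[r, c:c + length] = 1.0
    return canvas

def random_stroke():
    return (int(rng.integers(32)), int(rng.integers(28)), 4)

# "Planning" submodule: random search with iterative refinement.
# A mutation is kept only when it raises the classifier's response.
plan = [random_stroke() for _ in range(8)]
start = best = classifier_score(render(plan))
for _ in range(300):
    candidate = list(plan)
    candidate[rng.integers(len(candidate))] = random_stroke()
    s = classifier_score(render(candidate))
    if s > best:
        plan, best = candidate, s
```

Because a mutation is accepted only on improvement, the score is monotonically non-decreasing; the physical-production concerns discussed next are exactly what this idealized digital loop ignores.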

Though this is effective when optimizing for digital outputs, additional work is necessary when planning physical objects, which are subject to production tolerances and a range of viewing conditions.

This meant all outputs were subject to a number of production constraints, such as a limited number of ink colors (I used about six) and unpredictable alignment between layers of different colors.

This is done by querying neural networks that were not involved in the original pipeline to see if they agree the objective has been met, an analogue of a train/test split across several networks with different architectures.
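That validation step might look like the following sketch. The three hypothetical held-out "networks" are modeled as unrelated random linear scorers (purely illustrative), and an output is accepted only when every one of them agrees the target response is strong enough.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for independently trained networks of
# different architectures: three unrelated random linear scorers
# over an 8x8 image (illustrative assumption, not real models).
validators = [rng.normal(size=(8, 8)) for _ in range(3)]

def thinks_target_present(net, image, threshold=0.0):
    # Does this one network's response to the image exceed a threshold?
    return float(np.sum(net * image)) > threshold

def validate(image):
    # Accept the image only if every held-out network agrees,
    # the train/test-style check across architectures.
    return all(thinks_target_present(net, image) for net in validators)
```

A blank canvas fails every validator (zero response exceeds no positive threshold), while an optimized plan must transfer across all three scorers to pass, which is the point of validating with networks outside the optimization loop.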

As the human artist, my main creative contribution is the design of a programmatic drawing system that allows the neural network to express itself effectively and with a distinct style.

The conceit was that many of these prints would strongly evoke their target concepts in neural networks in the same way people find Magritte’s painting evocative of an actual, non-representational pipe.

The name also emphasizes ImageNet’s role in establishing the somewhat arbitrary ontology of concepts used to train these networks (the canonical ILSVRC subset), which I also tried to highlight by choosing an eclectic set of labels across the series.


Could There Ever Be an AI Artist?

Mitchell scolds his predecessors for viewing the revolution of linear perspective through the lens of Scientism – the misguided notion that something like vision is an objective phenomenon that could be ‘discovered’ outside of our own experience.

Linear, three-point perspective was merely a toolset that confirmed our way of seeing, a match that arrived in a ‘world already clothed in our systems of representation.’ Vision itself is a product of the tools we make, Mitchell warns, not a discovery of ‘natural’ facts outside our senses: ‘What is natural is, evidently, what we can build a machine to do for us.’ Today, the debate over what we might call art’s ‘tool problem’ has been reanimated by the rise of artificial intelligence (AI).

Recent innovations in computer vision – the field that studies how computers process images – and deep learning allow machines to discriminate, judge and find patterns that humans might not know exist.

White then programmed the system to draw a series of marks that are continually optimized towards a ‘target concept’, effectively reverse-engineering computer vision to guide automated outputs towards a representation of a fan.

The prints that result are certainly fan-like, though they end up looking like Kandinsky knockoffs: short black strokes punctuate globular blue forms that radiate from the centre.

Instead of the artist’s hand, it is, in White’s words, ‘several neural networks [that] simultaneously nudge and push a drawing toward the objective.’ The machine built the image for him, but it’s not exactly clear how.

As White himself notes, ‘as the human artist, my main creative contribution is the design of a programming design system that allows the neural network to express itself effectively and with a distinct style.’

While computation can render objects by learning which measurable data correlate with things we presently refer to as art, its chief innovation is precisely the removal of the human from the critical role of the consenting subject.

For all its utopian prognostication, the pursuit of AI boils down to a very simple business case: AI holds the promise that it can approximate the labour of a human, and if successful, perform it faster, cheaper and beyond the reach of those pesky labour laws.

If machine intelligence can conquer this uniquely human realm, the march to artificial general intelligence must be nigh, and the profits unimaginable. It is no surprise that the idea of creative machines is coterminous with the ascent of platform capitalism.

To administer the aesthetic – to control the terms of what counts as an image, or what constitutes art – is to rule both the mind and the body, to influence the whole sensate world of human emotion and expression.
