AI News: Facebook's AI Chief at ISSCC talks about the future of deep learning

Facebook is Working on Its Own Custom AI Silicon

Facebook’s chief AI researcher, Yann LeCun, has stated that the company is working on its own custom AI silicon, with the goal of building far more efficient methods of processing neural networks in hardware and improving performance, the range of addressable problems, and energy efficiency.

The themes of his talk call for expanding the role of AI from language translation to content policing, creating smarter devices that can differentiate between, say, weeds and roses, and giving computers what we typically call “common sense.”

According to the report, LeCun is focused on creating chips that don’t have to break data sets into small batches for processing, but instead work with larger amounts of information without that step.
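To make the batching point concrete, here is a minimal sketch (plain Python, with hypothetical function names) of the step today's frameworks typically perform before processing, the splitting of a dataset into small fixed-size batches, which LeCun would like future hardware to make unnecessary:

```python
def make_batches(dataset, batch_size):
    """Split a dataset into consecutive mini-batches.

    Frameworks such as PyTorch do this (via a DataLoader) because
    current hardware is most efficient when it processes a fixed-size
    chunk of samples at a time.
    """
    return [dataset[i:i + batch_size]
            for i in range(0, len(dataset), batch_size)]

samples = list(range(10))           # stand-in for 10 training examples
batches = make_batches(samples, 4)  # [[0,1,2,3], [4,5,6,7], [8,9]]
```

The last, smaller batch illustrates one of the practical annoyances of batching that batch-free hardware would sidestep.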

If you want to mow an area (or vacuum a carpet), you don’t need to teach the device how to differentiate between what to mow or clean nearly as much as you’d have to teach it if you wanted it to specifically avoid non-weed plants.

The problem with specialty microprocessor architectures, historically speaking, is that even if you had an idea for a particularly clever way to execute a specific type of instruction, the speed of general-purpose computation was accelerating quickly enough to eat most of your market advantage before your product could be built.

If it took three years to bring your part to market, you’d be up against the 66MHz Pentium, a CPU more than 2x faster, by clock speed and instruction set improvements alike, than the chips available when you started.

This one-two punch of unbeatable economy of scale and rapid-fire compute improvements explains why general-purpose computation took over the market from specialty architectures and why it’s maintained its lock on the market ever since.

GPUs are the rare exception, because the nature of a graphics workload is so different from a general-purpose computational workload that you’d never build a GPU to handle the tasks of a serial CPU, or vice versa.

So long as Intel (or AMD, IBM, or any other general-purpose CPU vendor) could kick out double-digit performance improvements every 12-18 months, the effort of investing in a 3-5 year architectural research project was too uncertain to justify.

Edge AI Summit 2019 | Kisaco Research

As part of the investment team, Maria is focused on sourcing and evaluating new investment opportunities, working with portfolio companies to explore and evaluate business strategies, and developing investment themes for the firm.

Maria joined Madrona from Cloudflare, a San Francisco-based web performance and security company, where she was a member of the executive team, led business development, and helped build the company from 25 employees to unicorn status.

While at Cloudflare, Maria built a global ecosystem of over five thousand partners and launched strategic partnerships with Google, Rackspace and other leading cloud platforms, delivering significant revenue and distribution.

Facebook’s chief AI scientist: Deep learning may need a new programming language – VentureBeat

Deep learning may need a new programming language that’s more flexible and easier to work with than Python, Facebook AI Research director Yann LeCun said today.

Python is currently the most popular language among developers working on machine learning projects, according to GitHub’s recent Octoverse report, and it forms the basis for Facebook’s PyTorch and Google’s TensorFlow frameworks. “The question now is, is that a valid approach?”

This virtuous cycle, in which better hardware enables better algorithms, better algorithms deliver better performance, and better performance draws more people to build better hardware, is only a few years old, said LeCun, who worked at Bell Labs in the 1980s and built convolutional neural networks (ConvNets, or CNNs) to read zip codes on postal envelopes and bank checks.
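The ConvNets LeCun built at Bell Labs rest on one core operation: sliding a small learned filter across the input and taking a dot product at each position. A minimal 1-D version of that operation (an illustrative sketch, not LeCun's original code) looks like this:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution: slide the kernel across the signal and
    take a dot product at each position. This sliding-filter operation
    is the building block of a convolutional neural network."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A difference kernel responds wherever neighboring values change,
# acting as a tiny edge detector.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # → [0, 1, 0, -1]
```

In a real CNN the kernel weights are learned from data, and the same operation is applied in 2-D across image pixels, which is what made it suited to reading handwritten digits.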

In recent years, advances in hardware — like field programmable gate arrays (FPGA), tensor processing units (TPU) from Google, and graphics processing units (GPU) — have played a major role in the industry’s growth.

“It’s very humbling for computer scientists, because we like to think in the abstract that we’re not bound by the limitation of our hardware, but in fact we are,” LeCun said. He highlighted a number of AI trends hardware makers should consider in the years ahead and made recommendations about the kind of architecture needed in the near future, among them that designs account for the growing size of deep learning systems.

“If self-supervised learning eventually allows machines to learn vast amounts of background knowledge about how the world works through observation, one may hypothesize that some form of machine common sense could emerge,” LeCun wrote in the paper.

Facebook's AI Chief at ISSCC talks about the future of deep learning hardware | Packt Hub

Yesterday, at the ongoing IEEE International Solid-State Circuits Conference (ISSCC), Yann LeCun, Facebook AI Research director, presented a paper that touched upon the current trends and the future of deep learning hardware.

LeCun believes that in the coming decades, researchers should put their efforts into making machines learn the way humans do, through mere observation and occasional actions: in short, in a self-supervised manner.
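In self-supervised learning, the training signal comes from the data itself rather than from human labels: for instance, hide part of an input and train the model to predict it. A toy illustration (hypothetical, plain Python, not from LeCun's paper) is predicting each value in a sequence from the one before it and scoring predictions against the held-out truth:

```python
def next_step_pairs(sequence):
    """Turn a raw, unlabeled sequence into (input, target) pairs:
    each element's "label" is simply the element that follows it.
    No human annotation is needed; the data supervises itself."""
    return [(sequence[i], sequence[i + 1])
            for i in range(len(sequence) - 1)]

def evaluate(pairs, predict):
    """Mean squared error of a predictor on the self-generated pairs."""
    errors = [(predict(x) - y) ** 2 for x, y in pairs]
    return sum(errors) / len(errors)

data = [1, 2, 3, 4, 5]         # unlabeled observations
pairs = next_step_pairs(data)  # [(1, 2), (2, 3), (3, 4), (4, 5)]
print(evaluate(pairs, lambda x: x + 1))  # a perfect "add one" model → 0.0
```

Real self-supervised systems apply the same idea at scale, masking words, image patches, or video frames, which is how they can accumulate the background knowledge LeCun describes.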

LeCun, in his paper Deep Learning Hardware: Past, Present, and Future, wrote, 'If self-supervised learning eventually allows machines to learn vast amounts of background knowledge about how the world works through observation, one may hypothesize that some form of machine common sense could emerge.'

Such machines could help with crucial problems like detecting hate speech and inappropriate content on Facebook, allowing virtual assistants to infer context the way humans do, and more.

In an interview with VentureBeat, Yann LeCun noted, 'There are a couple of projects at Google, Facebook, and other places to kind of design this sort of compiled language that can be efficient for deep learning, but it's not clear at all that the community will follow, because people just want to use Python.'

'The kind of hardware that is available has a big influence on the kind of research that people do, and so the direction of AI in the next decade or so is going to be greatly influenced by what hardware becomes available.'

Yann LeCun - The Next Step Towards Artificial Intelligence

Recorded July 16th 2018 Yann LeCun is the Chief AI Scientist for Facebook AI Research (FAIR), joining Facebook in December 2013. He is also a Silver ...

ISSCC2018 - Semiconductor Innovation: Is the party over or just getting started?

Vince Roche, President & CEO, Analog Devices, Norwood, MA. The future pace of semiconductor innovation is by no means certain. A little more than a decade ...