AI News: Intel Starts R&D Effort in Probabilistic Computing for AI


Intel announced today that it is forming a strategic research alliance to take artificial intelligence to the next level.

Autonomous systems don’t have good enough ways to respond to the uncertainties of the real world, nor good enough ways to understand how the uncertainties of their sensors should factor into the decisions they need to make.

The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there.

So what we’d like to do in a general research thrust is figure out how to build probability into our reasoning systems and into our sensing systems.

So we’ve been doing a certain amount of internal work and with academia, and we’ve decided that there’s enough here that we’re going to kick off a research community.

The goal is to have people share what they know about it, collaborate on it, and figure out how you represent probability when you write software and how you construct computer hardware.

Fuzzy logic is actually closer to the concept we’re talking about here, where you’re deliberately keeping track of uncertainties as you process information.

There’s statistical computing too, which is really more of a software approach, where you’re keeping track of probabilities by building trees.
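
The idea of carrying probabilities alongside values can be sketched in a few lines of Python. This is an illustrative sketch, not Intel's approach: each value is modeled as a Gaussian (mean and variance), uncertainty propagates through arithmetic, and two sensor readings are fused by inverse-variance weighting. The sensor names and numbers are made up for the example.

```python
class Uncertain:
    """A value carried together with its variance, so uncertainty
    propagates through the computation instead of being discarded."""

    def __init__(self, mean, var):
        self.mean = mean
        self.var = var

    def __add__(self, other):
        # Variances of independent quantities add.
        return Uncertain(self.mean + other.mean, self.var + other.var)


def fuse(a, b):
    """Combine two independent estimates of the same quantity,
    weighting each by its inverse variance (a one-step Kalman update)."""
    mean = (a.mean * b.var + b.mean * a.var) / (a.var + b.var)
    var = (a.var * b.var) / (a.var + b.var)
    return Uncertain(mean, var)


# Two noisy range sensors measure the same distance.
lidar = Uncertain(10.2, 0.04)  # precise sensor
sonar = Uncertain(9.5, 1.0)    # noisy sensor
est = fuse(lidar, sonar)
print(round(est.mean, 3), round(est.var, 3))
```

Note that the fused estimate is both closer to the precise sensor and more certain than either input, which is exactly the behavior a logic system wants from its sensing layer.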

Our bias at Intel is to build hardware, but if we don’t really understand how the use model is going to evolve or how the algorithms are going to evolve, then we run the risk of committing to a path too early.

Mayberry: This is intended to be part of a larger system that incorporates our existing work…. You don’t want your logic system to assume that your sensing is 100 percent accurate, but you don’t want the sensor to necessarily have false information about confidence either.

As I said, we’d like to influence our roadmap in the next few years, but this is pre-roadmap, so we don’t have a specific product implementation date.

The quantum computing apocalypse is imminent

The elements of quantum computing have been around for decades, but it’s only in the past few years that a commercial computer that could be called “quantum” has been built by a company called D-Wave.

Announced in January, the D-Wave 2000Q can “solve larger problems than was previously possible, with faster performance, providing a big step toward production applications in optimization, cybersecurity, machine learning and sampling.” IBM recently announced that it had gone even further —

Taking advantage of the physical “spin” of quantum elements, a quantum computer will be able to process simultaneously the same data in different ways, enabling it to make projections and analyses much more quickly and efficiently than is now possible.
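
The "simultaneously in different ways" claim comes from superposition. A minimal state-vector simulation in Python (using NumPy) illustrates it: a Hadamard gate puts one qubit into an equal superposition of 0 and 1, and each added qubit doubles the number of amplitudes the machine effectively works with. This is a classical simulation for intuition only, not how a quantum chip computes.

```python
import numpy as np

# A qubit is a 2-component complex state vector; gates are unitary
# matrices applied by matrix multiplication.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: measurement probabilities

# Each added qubit doubles the state space: n qubits span 2**n amplitudes.
two_qubits = np.kron(state, state)
print(probs, len(two_qubits))
```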

“Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities,” said Dr. Michael Mayberry, corporate vice president and managing director of Intel Labs.

“Intel’s expertise in fabrication, control electronics and architecture sets us apart and will serve us well as we venture into new computing paradigms, from neuromorphic to quantum computing.” The difficulty in achieving a cold enough environment for a quantum computer to operate is the main reason they are still experimental, and can only process a few qubits at a time —

Last year, scientists at MIT and the University of Innsbruck were able to build a quantum computer with just five qubits, conceptually demonstrating the ability of future quantum computers to break the RSA encryption scheme.

The NSA’s “Commercial National Security Algorithm Suite and Quantum Computing FAQ” says that “many experts predict a quantum computer capable of effectively breaking public key cryptography” within “a few decades,” and that the time to come up with solutions is now.

The solution lies in the development of quantum-safe cryptography, consisting of information theoretically secure schemes, hash-based cryptography, code-based cryptography and exotic-sounding technologies like lattice-based cryptography, multivariate cryptography (like the “Unbalanced Oil and Vinegar scheme”), and even supersingular elliptic curve isogeny cryptography.

CES 2018: Intel's 49-Qubit Chip Shoots for Quantum Supremacy

Intel has passed a key milestone while running alongside Google and IBM in the marathon to build quantum computing systems.

The tech giant has unveiled a superconducting quantum test chip with 49 qubits: enough qubits to possibly enable quantum computing that begins to exceed the practical limits of modern classical computers.

Intel’s announcement about the design and fabrication of its new 49-qubit superconducting quantum chip, code-named Tangle Lake, came during a keynote speech by Intel CEO Brian Krzanich at CES 2018, an annual consumer electronics trade show in Las Vegas.

“Around 50 qubits is an interesting place from a scientific point of view, because you’ve reached the point where you can’t completely predict or simulate the quantum behavior of the chip,” he says.

It’s still going to be a long road before anyone will realize the commercial promise of quantum computing, which leverages the idea of quantum bits (qubits) that can represent more than one information state at the same time.

One important step involves implementing “surface code” error correction that can detect and correct disruptions in the fragile quantum states of individual qubits. Another step involves figuring out how to map software algorithms to the quantum computing hardware.

The 49-qubit Tangle Lake chip builds upon the tech giant’s earlier work with 17-qubit arrays that have the minimum number of qubits necessary to perform surface code error correction.
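
A real surface code is far beyond a short sketch, but the underlying idea, encoding one logical bit redundantly across many physical bits and voting out errors, can be shown with a classical repetition code in Python. This is an analogy only: quantum codes must detect errors without directly measuring the data qubits, which is what makes them so much harder.

```python
import random
from collections import Counter

def encode(bit, n=5):
    """Store one logical bit as n redundant physical bits."""
    return [bit] * n

def add_noise(codeword, p=0.2, rng=random.Random(42)):
    """Flip each physical bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: correct as long as fewer than half the bits flipped."""
    return Counter(codeword).most_common(1)[0][0]

received = add_noise(encode(1))
print(received, "->", decode(received))
```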

Still, he anticipates neuromorphic computing products potentially hitting the market within 2 to 4 years, if customers can run their applications on the Loihi chip without requiring additional hardware modifications.

CES 2018: Intel gives glimpse into mind-blowing future of computing

There’s a race brewing in Silicon Valley to create the best quantum computer — one that could change the world by making the impossible possible — and some big gains in that race were just announced at CES.

The semiconductor giant is competing against Google, IBM and other companies to reach “quantum supremacy,” the point at which universal quantum computers outpace traditional computers, an important Kitty Hawk moment for data processing.

“Just as we ushered in the era of our classical personal computing, we’re pushing the boundaries of the quantum computing revolution.” Quantum computers aren’t like the computers at your workplace, or even like supercomputers currently used to model weather or run academic simulations.

They are an entirely different flavor of computers, and while they’ll never be used to power your laptop or phone, their awe-inspiring potential comes from using quantum physics to solve problems that would be entirely impossible on even the most powerful traditional supercomputer.

Potential applications include simulating the side effects of pharmaceutical drugs, simulating catalysts that speed up chemical reactions (which can lead to more energy-efficient applications), and breaking cryptography, Mike Mayberry, corporate vice president and managing director of Intel Labs, told Fox News.

It’s important to note that D-Wave has already surpassed these companies and released a 2,000-qubit system, but since their computer is a quantum annealer that only solves a specific type of problem, they aren’t considered to be in direct competition with universal quantum computers by researchers and industry leaders.

Krzanich also discussed the development of a neuromorphic test chip, nicknamed “Loihi.” Neuromorphic chips are closer to traditional computer chips, but they are built to imitate neurons in a human brain, giving them the ability to learn as they run.
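
The "imitate neurons" idea can be sketched with the textbook leaky integrate-and-fire model in Python. This is a generic illustration of spiking dynamics, not Loihi's actual neuron model, and the parameter values are invented for the example.

```python
def simulate(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: charge accumulates and leaks
    away; the neuron emits a spike when it crosses the threshold."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x      # integrate input, leak stored charge
        if v >= threshold:    # membrane potential crosses threshold
            spikes.append(1)
            v = 0.0           # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak inputs must accumulate before the neuron fires;
# strong inputs make it fire sooner.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
```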

On the Moore's Law hot seat: Intel's Mike Mayberry (Q&A)

As the vice president who leads Intel's research team, he bears responsibility for making sure his employer can cram ever more electronic circuitry onto computer chips.

Forty-seven years ago, Intel co-founder Gordon Moore observed the pace at which microchips' transistor count doubled, and Mayberry is in charge of keeping that legacy intact.

A lot rests on Moore's Law; in a 1975 update to his original 1965 paper, Moore predicted that the number of transistors would double every two years.

That means a chip of a given size has been able to accomplish more and more computing chores -- or that you can perform the same computing tasks using a smaller, cheaper, less power-hungry chip.
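
The doubling claim is easy to turn into arithmetic. A quick Python sketch, using the widely cited 2,300-transistor Intel 4004 from 1971 as an illustrative starting point:

```python
def projected_transistors(start_count, start_year, year, period=2):
    """Moore's-law projection: the count doubles every `period` years."""
    doublings = (year - start_year) / period
    return start_count * 2 ** doublings

# 40 years = 20 doublings: from thousands to billions of transistors.
print(f"{projected_transistors(2300, 1971, 2011):,.0f}")
```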

It's proved remarkably long-lived, despite any number of crises that threatened to stall the steady march of progress.

Intel builds its top-end 'Ivy Bridge' family of Core chips with a 22 nanometer (22nm) manufacturing process.

That means the smallest elements of the chip measure just 22 billionths of a meter, which is to say about 7,500 of them could fit across a dot on the letter i.
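
That figure checks out with quick arithmetic, assuming the dot on a printed letter i is roughly 0.17 mm across (the dot size is an assumption; the article doesn't state one):

```python
feature_nm = 22   # Ivy Bridge process feature size, in nanometers
dot_mm = 0.17     # assumed diameter of the dot on a printed "i"

features_across = dot_mm * 1e6 / feature_nm  # mm -> nm, then divide
print(round(features_across))                # roughly 7,500-7,700
```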

Every two years, through what Intel calls its 'tick-tock' strategy, the chip giant moves to a smaller process that doubles the number of transistors it can pack into a given surface area.

He lives in the future, testing new processes to figure out how to scale manufacturing processes down to 5nm -- and what to do after that.

Shankland: I remember times when people predicted this or that manufacturing process would be the end of the line, like at 180 nanometers.

I probably get contacted by a reporter two or three times a year because they read something that predicts the end of Moore's Law.

As you make transistors smaller in their electrical length, they turn on more easily, which is good, but they turn off less easily, which is bad.

All through the end of the 1990s, the worry was leakage power: that you would have to stop scaling [making transistors smaller] because you couldn't turn devices off without overheating.

The next challenge was gate dielectric leakage, which we solved with high-k [a thin layer between the transistor gate and channel made of a material with a high dielectric constant].

The truth is we've been modifying the technology every five or seven years for 40 years, and there's no end in sight for being able to do that.

Looking at the three examples of Intel [which concentrates in logic], Samsung [a leader in memory], and TSMC [Taiwan Semiconductor Manufacturing Corp., the top foundry that makes microprocessors for other companies], the percentage of R&D burden is higher in the memory industry than it is in the other two.

That ends up meaning that on average you break even: in the high part of the cycle you make enough money to make up for the low part of the cycle.

For the case of logic and foundry, there are more transistors, therefore more wires, therefore more patterning depth, which creates more layers, so the cost of R&D does increase steadily.

In some cases, where companies have dropped out entirely and gone to a foundry, it has been because their market hasn't grown fast enough to spread that cost out.

Mayberry: There's a flippant answer: the old people would retire and the new people would get new jobs.

We have to worry about managing leakage [in which electrical current produces too much waste heat, limiting chip speeds].

As you make the wire smaller, the surface area compared to the bulk gets larger and larger.
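
For a cylindrical wire this is simple geometry: surface area scales with the radius while bulk scales with its square, so the ratio grows as the wire shrinks. A quick Python check (radii are illustrative, not actual interconnect dimensions):

```python
import math

def surface_to_bulk(radius_nm, length_nm=1000.0):
    """Ratio of a cylindrical wire's surface area to its bulk volume."""
    surface = 2 * math.pi * radius_nm * length_nm
    bulk = math.pi * radius_nm ** 2 * length_nm
    return surface / bulk  # simplifies to 2 / radius

# Halving the radius doubles the surface-to-bulk ratio.
for r in (100, 50, 25, 10):
    print(r, round(surface_to_bulk(r), 3))
```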

When we get down to the point where we do 1 or 2 atoms, we're certainly in the space of chemistry where you build individual molecules.

If you can't precisely make the same size device a trillion times on a wafer, then it doesn't make economic sense.

There are technologies based on magnetism to communicate information [spintronics], but a spin wave travels at a slower rate than an electron wave.

In the case of magnetism, those are potential nonvolatile technologies [memory systems that store data without needing a constant power supply].

One of the focus items is the basic architecture for the 2017 node, and there are people looking ahead to the 2019 node.

The fact that Intel is running on two-year cycles has as much to do with the need to sync the design side with the process side as with any particular reason to pick that date.

Shankland: In the last decade the chip industry hit a power wall, when chip clock frequencies couldn't be increased without drastically increasing power consumption and waste heat.

There was a power wall in the 1980s, a power wall in the late 1990s, a power wall in the middle of the 2000s, which is what you're referring to.

When you get into something like modeling weather, you have an element of vector computing, but there is an interconnectedness where each chunk of weather connects to its neighbors, so you need information passed to other computing elements.

If you're writing apps for cell phones and expect to be paid 99 cents, you're not going to devote as much energy to optimizing the last bit of performance, so you have to find developer tools to make that easier.

If you're writing something that runs in the cloud and executes many times a day, you throw resources into making it run more efficiently.

Shankland: One way to get more profit from a given manufacturing process is to increase the size of the silicon wafer on which chips are printed -- the cost of sending a wafer through the manufacturing process stays about the same but you can carve more chips from a larger wafer.
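
The economics can be sketched numerically. Assuming a 100 mm² die and a flat 10% edge loss (both illustrative numbers, not industry figures), moving from 300 mm to 450 mm wafers yields roughly (450/300)² = 2.25x the dies per wafer pass:

```python
import math

def dies_per_wafer(wafer_mm, die_mm2=100, edge_loss=0.1):
    """Rough die count: usable wafer area divided by die area.
    Die size and edge loss are illustrative assumptions."""
    area = math.pi * (wafer_mm / 2) ** 2
    return int(area * (1 - edge_loss) / die_mm2)

d300 = dies_per_wafer(300)
d450 = dies_per_wafer(450)
print(d300, d450, round(d450 / d300, 2))  # ratio approaches (450/300)**2
```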

It'll probably take longer than the moves from 200mm to 300mm, from 100mm to 200mm, and, back when I was a young engineer, from 3-inch to 4-inch.

If you break down the history of how much economic gain we get from scaling [decreasing the size of electronics elements on the chip] and how much we get from increasing wafer size, there is a non-negligible amount from wafer size.

You start with a small seed crystal, then grow it out like a cone to get to the desired diameter, and then you end up with a vertical cylinder.

Shankland: With smaller chips, we've seen processors move from supercomputers to minicomputers to personal computers to cell phones.

That's part of perceptual computing, responsive computing, adaptive computing, where you're not only performing the requested task from a computational point of view, but the delivery of the information is modified by the context around it.
