AI News, Science and the vampire/zombie apocalypse

Science and the vampire/zombie apocalypse

I have been fortunate to have been born at a time when I had the opportunity to witness the birth of several of the major innovations that shape our world today.  I have also managed to miss out on capitalizing on every single one of them.

Just months before impending unemployment, I managed to talk my way into becoming the first postdoc of Jim Collins, who had just started as a non-tenure-track research assistant professor at Boston University. Midway through my time with Jim, we had a meeting with Charles Cantor, then a professor at BU, about creating engineered organisms that could eat oil.

However, my idea seemed too complicated to implement biologically, so when I went to Switzerland to visit Wulfram Gerstner at the end of 1997, Tim and Jim, freed from my meddling influence, were able to create the genetic toggle switch, and the field of synthetic biology was born.

Still, I could have mined thousands of bitcoins on a laptop back then, which would be worth tens of millions today.  I do think blockchains are an important innovation, and my former post-bac fellow Wally Xie is even the CEO of the blockchain startup QChain.

I recruited Michael Buice to work on the path integral formulation for neural networks because I wanted to write down a neural network model that carried both rate and correlation information so I could implement a correlation-based learning rule.

Buzzword Convergence: Making Sense of Quantum Neural Blockchain AI

What happens if you take four of today’s most popular buzzwords and string them together?

But in any case, in the past 20 years or so there’s been all sorts of nice theoretical work on formulating the idea of quantum circuits and quantum computing.

And, yes, according to the theory, a big quantum computer should be able to factor a big integer fast enough to make today’s cryptography infrastructure implode.

In quantum mechanics, though, there’s supposed to be something much more intrinsic about the interference, leading to the phenomenon of entanglement, in which one basically can’t ever “see the wave that’s interfering”—only the effect.

Because (at least in modern times) we’re always trying to deal with discrete bits—while the typical phenomenon of interference (say in light) basically involves continuous numbers.

And this led to mathematical models of “neural networks”—which were proved to be equivalent in computational power to mathematical models of digital computers.

(Though for years they were quietly used for things like optical character recognition.) But then, starting in 2012, a lot of people suddenly got very excited, because it seemed like neural nets were finally able to do some very interesting things, at first especially in connection with images.

Well, a neural net basically corresponds to a big mathematical function, formed by connecting together lots of smaller functions, each involving a certain number of parameters (“weights”).

In practice one might show a bunch of images of elephants, and a bunch of images of teacups, and then do millions of little updates to the parameters to get the network to output “elephant” when it is shown an elephant, and “teacup” when it is shown a teacup.
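As a concrete (if drastically scaled-down) illustration, here is a minimal numpy sketch of that kind of training loop, with two synthetic 2-D clusters standing in for “elephant” and “teacup” images; the network size, data, and learning rate are all invented for the example.

```python
import numpy as np

# Toy stand-in for "elephant" vs "teacup": two clusters of 2-D feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),   # class 0 ("teacup")
               rng.normal(+1.0, 0.5, (50, 2))])  # class 1 ("elephant")
y = np.array([0] * 50 + [1] * 50)

# A tiny network: one hidden layer of smaller functions, each with weights.
W1 = rng.normal(0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):              # "millions of little updates", scaled down
    h = np.tanh(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()  # predicted probability of class 1
    # Gradient of the cross-entropy loss, backpropagated through both layers.
    d_out = (p - y)[:, None] / len(y)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h;   db1 = d_h.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad           # little update to every weight

print("training accuracy:", ((p > 0.5) == y).mean())
```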

But here’s the crucial idea: the neural net is somehow supposed to generalize from the specific examples it’s shown—and it’s supposed to say that anything that’s “like” the elephant examples it has seen should also count as an elephant.

But what’s become clear is that for lots of practical tasks (that turn out to overlap rather well with some of what our brains seem to do easily) it’s realistic with feasible amounts of GPU time to actually train neural networks with a few million elements to do useful things.

And, yes, in the Wolfram Language we’ve now got a rather sophisticated symbolic framework for training and using neural networks—with a lot of automation (that itself uses neural nets) for everything.

A cryptographic hash has the feature that while it’s easy to work out the hash for a particular piece of data, it’s very hard to find a piece of data that will generate a given hash.
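A small Python sketch of that asymmetry, using the standard hashlib library: computing the hash of some data is instantaneous, while finding data that produces even a short prefix of a given hash already requires brute-force search (the toy search below is kept deliberately tiny).

```python
import hashlib
import itertools
import string

data = b"a particular piece of data"
digest = hashlib.sha256(data).hexdigest()
print("easy direction:", digest)            # instant to compute

# Hard direction: find any input whose hash starts with a given prefix.
# Even a 6-hex-character prefix needs about 16**6 (~17 million) tries on
# average; matching a full 64-character digest is utterly out of reach.
target_prefix = digest[:4]                  # keep the demo tiny
for length in range(1, 6):
    for candidate in itertools.product(string.ascii_lowercase, repeat=length):
        guess = "".join(candidate).encode()
        if hashlib.sha256(guess).hexdigest().startswith(target_prefix):
            print("found a preimage for the 4-char prefix:", guess)
            break
    else:
        continue
    break
```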

In cryptocurrencies like Bitcoin the big idea is to be able to validate transactions, and, for example, be able to guarantee just by looking at the blockchain that nobody has spent the same bitcoin twice.
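What “nobody has spent the same bitcoin twice” means operationally can be sketched in a few lines: replay the ledger and reject any transaction whose input has already been consumed. The record structure below is invented for illustration; real Bitcoin tracks unspent transaction outputs (UTXOs) rather than named coins.

```python
# Each transaction spends one named "coin" and assigns it to a new owner.
ledger = [
    {"coin": "coin-1", "from": "alice", "to": "bob"},
    {"coin": "coin-2", "from": "alice", "to": "carol"},
    {"coin": "coin-1", "from": "alice", "to": "dave"},  # double-spend attempt
]

def validate(ledger):
    spent = set()
    for i, tx in enumerate(ledger):
        if tx["coin"] in spent:
            return False, f"transaction {i} re-spends {tx['coin']}"
        spent.add(tx["coin"])
    return True, "no double spends"

print(validate(ledger))   # (False, 'transaction 2 re-spends coin-1')
```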

Well, the point is that there’s a whole decentralized network of thousands of computers around the world that store the blockchain, and there are lots of people (well, actually not so many in practice these days) competing to be the one to add each new block (and include transactions people have submitted that they want in it).

(Yes, there’s an analogy with measurement in quantum mechanics here, which I’ll be talking about soon.) Traditionally, when people keep ledgers, say of transactions, they’ll have one central place where a master ledger is maintained.

Pretty much any nontrivial smart contract will eventually need to know about something in the world (“did it rain today?”, “did the package arrive?”, etc.), and that has to come from off the blockchain—from an “oracle”.

Back in the 1950s, people thought that pretty much anything human intelligence could do, it’d soon be possible to make artificial (machine) intelligence do better.

And in fact the whole concept of “creating artificial intelligence” pretty much fell into disrepute, with almost nobody wanting to market their systems as “doing AI”.

I’ve built a whole scientific and philosophical structure around something I call the Principle of Computational Equivalence, that basically says that the universe of possible computations—even done by simple systems—is full of computations that are as sophisticated as one can ever get, and certainly as our brains can do.

In doing engineering, and in building programs, though, there’s been a tremendous tendency to try to prevent anything too sophisticated from happening—and to set things up so that the systems we build just follow exactly steps we can foresee.

Most of what’s inside Wolfram|Alpha doesn’t work anything like brains probably do, not least because it’s leveraging the last few hundred years of formalism that our civilization has developed, that allow us to be much more systematic than brains naturally are.

And, yes, I’ve spent a large part of my life building the Wolfram Language, whose purpose is to provide a computational communication language in which humans can express what they want in a form suitable for computation.

The point here is that even though the rule (or program) is very simple, the behavior of the system just spontaneously generates complexity, and apparent randomness.

And what happens is complicated enough that it shows what I call “computational irreducibility”, so that you can’t reduce the computational work needed to see how it will behave: you essentially just have to follow each step to find out what will happen.
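The original post illustrates this with a simple one-dimensional cellular automaton (the figure is not reproduced here); the sketch below implements a rule-30-style elementary cellular automaton so the “just follow each step to find out” behavior can be seen directly.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton (Wolfram rule numbering)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((rule >> neighbourhood) & 1)
    return out

# Start from a single black cell and watch complexity appear row by row.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```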

In the case of something like Bitcoin, there’s another connection too: the protocol needs people to have to make some investment to be able to add blocks to the blockchain, and the way this is achieved is (bizarrely enough) by forcing them to do irreducible computations that effectively cost computer time.

It’s most obvious when one gets to recurrent neural nets, but it happens in the training process for any neural net: there’s a computational process that effectively generates complexity as a way to approximate things like the distinctions (“elephant” versus “teacup”) that the network is being trained to make.

It’s essentially an axiom of the traditional mathematical formalism of quantum mechanics that one can only compute probabilities, and that there’s no way to “see under the randomness”.

But even in the standard formalism of quantum mechanics, there’s a kind of complementary place where randomness and complexity generation is important, and it’s in the somewhat mysterious process of measurement.

This law says that if you start, for example, a bunch of gas molecules in a very orderly configuration (say all in one corner of a box), then with overwhelming probability they’ll soon randomize (and, e.g., spread more or less uniformly through the box).

But here’s the strange part: if we look at the laws for, say, the motion of individual gas molecules, they’re completely reversible—so just as they say that the molecules can randomize themselves, so also they say that they should be able to unrandomize themselves.

The notion is that some little quantum effect (“the electron ends up with spin up, rather than down”) needs to get amplified to the point where one can really be sure what happened.

In other words, one’s measuring device has to make sure that the little quantum effect associated with one electron cascades so that it’s spread across lots and lots of electrons and other things.

So even though pure quantum circuits as one imagines them for practical quantum computers typically have a sufficiently simple mathematical structure that they (presumably) don’t intrinsically generate complexity, the process of measuring what they do inevitably must generate complexity.

What this means is that if one takes a quantum system, and lets it evolve in time, then whatever comes out one will always, at least in principle, be able to take and run backwards, to precisely reproduce where one started from.

Well, the point is that while one part of a system is, say, systematically “deciding to say elephant”, the detailed information that would be needed to go back to the initial state is getting randomized, and turning into heat.

And in fact I don’t think anyone knows how one can actually set up a quantum system (say a quantum circuit) that behaves in this kind of way.

To explain how one goes from quantum mechanics in which everything is just an amplitude, to our experience of the world in which definite things seem to happen, people sometimes end up trying to appeal to mystical features of consciousness.

I suspect one could create one from a quantum version of a cellular automaton that shows phase transition behavior—actually not unlike the detailed mechanics of a real quantum magnetic material.

When people talk about “quantum computers”, they are usually talking about quantum circuits that operate on qubits (quantum analog of binary bits).
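A minimal numpy sketch of what “operating on qubits” amounts to, under the standard textbook picture: a qubit is a vector of two complex amplitudes, gates are unitary matrices, and measurement only yields probabilities.

```python
import numpy as np

# One qubit = a vector of two complex amplitudes; |0> = (1, 0).
state = np.array([1.0, 0.0], dtype=complex)

# Gates are unitary matrices; the Hadamard gate puts |0> into equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Measurement: we can only compute probabilities of the outcomes.
probs = np.abs(state) ** 2
print("P(0), P(1) =", probs)            # [0.5, 0.5]

# Reversibility: applying the inverse (here H again, since H is its own inverse)
# takes the state exactly back to |0>.
print("back to |0>:", np.round(H @ state, 10))
```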

But in the end, to find out, say, the configuration that does best in satisfying the matching condition everywhere, one may effectively have to essentially just try out all possible configurations, and see which one works best.

Then the problem of finding the best overall configuration is like the problem of finding the minimum energy configuration for the molecules, which physically should correspond to the most stable solid structure that can be formed from the molecules.
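A minimal sketch of “try out all possible configurations”: enumerate every spin assignment of a tiny Ising-like ring (an invented toy system) and keep the lowest-energy one. The point is the 2^N count, which is what makes anything beyond toy sizes hopeless.

```python
import itertools

# A tiny Ising-like system: 6 spins on a ring, where neighbours want to match
# (aligned neighbouring spins lower the energy).
N = 6
couplings = [(i, (i + 1) % N) for i in range(N)]

def energy(spins):
    return -sum(spins[i] * spins[j] for i, j in couplings)

best = min(itertools.product([-1, +1], repeat=N), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
print("configurations examined:", 2 ** N)   # grows exponentially with N
```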

But I don’t think this is actually true—and I think what instead will happen is that the material will turn mushy, not quite liquid and not quite solid, at least for a long time.

Still, there’s the idea that if one sets up this energy minimization problem quantum mechanically, then the physical system will be successful at finding the lowest energy state.

But here’s the confusing part: when one trains a neural net, one ends up having to effectively solve minimization problems like the one I’ve described (“which values of weights make the network minimize the error in its output relative to what one wants?”).

So people end up sometimes talking about “quantum neural nets”, meaning domino-like arrays which are set up to have energy minimization problems that are mathematically equivalent to the ones for neural nets.

(Yet another connection is that convolutional neural nets—of the kind used for example in image recognition—are structured very much like cellular automata, or like dynamic spin systems.

But in training neural nets to handle multiscale features in images, one seems to end up with scale invariance similar to what one sees at critical points in spin systems, or their quantum analogs, as analyzed by renormalization group methods.) OK, but let’s return to our whole buzzword string.

In the proof-of-work scheme (as used in Bitcoin and currently also Ethereum), to find out how to add a new block one searches for a “nonce”—a number to throw in to make a hash come out in a certain way.
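A minimal hashlib sketch of that nonce search, with invented block contents: keep trying nonces until the block hash starts with enough zeros, where the required number of zeros controls how much irreducible work the miner is forced to do.

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 4):
    """Search for a nonce making SHA-256(block_data + nonce) start with
    `difficulty` hex zeros: the 'irreducible computation' miners pay for."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"previous-hash|list-of-transactions", difficulty=4)
print("nonce:", nonce)
print("hash :", digest)   # easy for anyone to re-check, hard to have found
```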

But one could imagine a quantum version in which one is in effect searching in parallel for all possible nonces, and as a result producing many possible blockchains, each with a certain quantum amplitude.

And to fill out the concept, imagine that—for example in the case of Ethereum—all computations done on the blockchain were reversible quantum ones (achieved, say, with a quantum version of the Ethereum Virtual Machine).

At the outset, one might have thought that “quantum”, “neural” and “blockchain” (not to mention “AI”) didn’t have much in common (other than that they’re current buzzwords)—and that in fact they might in some sense be incompatible.

And to make that alignment, we essentially have to communicate with the AI at a level of abstraction that transcends the details of how it works: in effect, we have to have some symbolic language that we both understand, and that for example AI can translate into the details of how it operates.

A thought can become a robust part of the thought patterns of many interacting brains, a bit like the way data put on a blockchain becomes a robust part of “collective blockchain memory”.

And, yes, in a fittingly bizarre end to a somewhat bizarre journey,  it does seem to be the case that a string plucked from today’s buzzword universe has landed very close to home.

Joseph Lubin’s career has involved various posts in the fields of technology and finance and in their intersection.

Subsequent to graduating cum laude with a degree in Electrical Engineering and Computer Science from Princeton, he worked as research staff in the Robotics Lab at Princeton and then at Vision Applications, Inc., a private research firm, in the fields of autonomous mobile robotics, machine vision and artificial neural networks.

ConsenSys Enterprise, the professional services arm, works with various enterprises to help them formulate their blockchain strategy and develop business processes for them on private or consortium blockchains, as well as on the public Ethereum network.

How Blockchains could transform Artificial Intelligence

In recent years, AI (artificial intelligence) researchers have finally cracked problems that they’ve worked on for decades, from Go to human-level speech recognition.

A key piece was the ability to gather and learn on mountains of data, which pulled error rates past the success line.

Before we discuss applications, let’s first review what’s different about blockchains compared to traditional big-data distributed databases like MongoDB.

We can think of blockchains as “blue ocean” databases: they escape the “bloody red ocean” of sharks competing in an existing market, opting instead to be in a blue ocean of uncontested market space.

Famous blue ocean examples are the Wii for video game consoles (compromise on raw performance, but offer a new mode of interaction), or Yellow Tail for wines (ignore the pretentious specs for wine lovers; make wine fun and accessible to everyone else).

By traditional database standards, traditional blockchains like Bitcoin are terrible: low throughput, low capacity, high latency, poor query support, and so on.

But in blue-ocean thinking, that’s ok, because blockchains introduced three new characteristics: decentralized / shared control, immutable / audit trails, and native assets / exchanges.

People inspired by Bitcoin were happy to overlook the traditional database-centric shortcomings, because these new benefits had potential to impact industries and society at large in wholly new ways.

But most real-world AI works on large volumes of data, such as training on large datasets or high-throughput stream processing.

These blockchain benefits lead to the following opportunities for AI practitioners:
- decentralized / shared control, which encourages data sharing;
- immutability / audit trails;
- native assets / exchanges.
There’s one more opportunity: AI with blockchains unlocks the possibility of AI DAOs (Decentralized Autonomous Organizations).

In my experience, it was like this in many sub-fields of AI, including neural networks, fuzzy systems (remember those?), evolutionary computation, and even slightly less AI-ish techniques like nonlinear programming or convex optimization.

In my first published paper (1997), I proudly showed how my freshly-invented algorithm had the best results compared to state-of-the-art neural networks, genetic programming, and more — on a small fixed dataset.

Error rates were 25% for the old / boring / least fancy algorithms like Naive Bayes and Perceptrons, whereas fancy newer memory-based algorithms achieved 19% error.

But then Banko and Brill showed something remarkable: as you added more data — not just a bit more data but orders of magnitude more data — and kept the algorithms the same, then the error rates kept going down, by a lot.

For example, in 2007, Google researchers Halevy, Norvig and Pereira published a paper showing how data could be “unreasonably effective” across many AI domains.

Deep learning directly fits in this context: it’s the result of figuring out how, if given a massive enough dataset, to start to capture interactions and latent variables.

Interestingly, backprop neural networks from the ’80s are sometimes competitive with the latest techniques, if given the same massive datasets.

As I was attacking real-world problems, I learned how to swallow my pride, abandon the “cool” algorithms, build only as much as was needed to solve the problem at hand, and learned to love the data and the scale.

It happened in my second company, Solido (2004-present), as well, as we pivoted from fancier modeling approaches to super-simple but radically scalable ML algorithms like FFX; and once again it was un-boring, as our users pulled us from 100 variables to 100,000, and from 100 million Monte Carlo samples to 10 trillion (effective samples).

In short: decentralized / shared control encourages data sharing, which in turn leads to better models, which in turn leads to higher profit, lower cost, and so on.

The decentralized nature of blockchains encourages data sharing: it’s less friction to share if no single entity controls the infrastructure where the data is being stored.

If you’re a bank providing diamond insurance, you’d like to create a classifier that identifies whether a diamond is fraudulent.

If you only have access to the diamond data for one of these labs, then you’re blind to the other three houses, and your classifier could easily flag one of those other houses’ diamonds as fraud.

The classifier can detect actual fraud while avoiding false positives, thereby lowering the fraud rate, to the benefit of insurance providers and certification labs.
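A toy numpy sketch of this point, with entirely synthetic “diamond” data: a simple distance-to-centroid fraud detector trained on one lab’s diamonds flags the other labs’ legitimate diamonds, while the same detector trained on the pooled data from all four labs does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "diamond feature vectors", one cluster per certification lab.
lab_means = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
labs = [m + rng.normal(0, 0.5, (200, 2)) for m in lab_means]

def fraud_detector(train_sets, threshold=2.0):
    """Flag a diamond as fraud if it is far from every training centroid."""
    centroids = [t.mean(axis=0) for t in train_sets]
    def flag(x):
        return min(np.linalg.norm(x - c) for c in centroids) > threshold
    return flag

single = fraud_detector([labs[0]])      # trained on one lab's data only
pooled = fraud_detector(labs)           # trained on shared data from all four

test = np.vstack(labs)                  # all legitimate diamonds
print("false-positive rate, single lab :", np.mean([single(x) for x in test]))
print("false-positive rate, pooled data:", np.mean([pooled(x) for x in test]))
```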

An appropriate token-incentive scheme in a decentralized system could incentivize the labeling of datasets that previously could not be labeled, or could not be labeled cost-effectively.

Shared control also makes it easier to take collective action against a powerful central player, like the music labels working together against Apple iTunes.

(This was a key stumbling block a few years back when the music labels tried to work together for a common registry.) Another benefit is that it’s easier to turn the data & models into assets.

(I think the reason we didn’t see more work on this sooner is that semantic web work tried to go there, from the angle of upgrading a file system.

It’s more effective to say from the start that you’re building a database, and to design it as such.) “Global variable” gets interpreted a bit more literally. :) So, what does it look like when we have data sharing with a planet-scale shared database service like IPDB?

The first point of reference is that there’s already a recently emerged billion-dollar market for companies to curate and repackage public data, to make it more consumable.

Garbage may also come from non-malicious actors / crash faults, for example from a defective IoT sensor, a data feed going down, or environmental radiation causing a bit flip (sans good error correction).

At each step of the process of building models, and of running models in the field, the creator of that data or model can simply time-stamp it on the blockchain database, which includes digitally signing it as a claim that “I believe this data / model to be good at this point”.
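A small sketch of such a claim, with invented field names: hash the data or model bytes, record a timestamp, and sign the result. A real deployment would sign with a proper private key (e.g., ECDSA, as in Bitcoin); the HMAC below is only a stand-in so the example runs with the standard library.

```python
import hashlib
import hmac
import json
import time

def signed_claim(artifact_bytes: bytes, secret_key: bytes) -> dict:
    """'I believe this data / model to be good at this point', as a record
    one could then write to a blockchain database."""
    claim = {
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "timestamp": int(time.time()),
        "statement": "creator asserts this artifact is good at this time",
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # Stand-in signature: a real system would sign with a private key (ECDSA);
    # HMAC is used here only so the sketch stays within the standard library.
    claim["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return claim

model_bytes = b"...serialized model weights..."
print(signed_claim(model_bytes, secret_key=b"demo-key"))
```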

But blockchain technology makes it better. IP on the blockchain is near and dear to my heart, with my work on ascribe going back to 2013 to help digital artists get compensated.

In being decentralized, no single entity controls the data storage infrastructure or the ledger of who-owns-what, which makes it easier for organizations to work together or share data, as described earlier in this essay.

When you create data that can be used for model-building, and when you create models themselves, you can pre-specify licenses that restrict how others use them upstream.

In the blockchain database, you treat permissions as assets: for example, a read permission, or the right to view a particular slice of data or a model.

You as the rights holder can transfer these permissions-as-assets to others in the system, similar to how you transfer Bitcoin: create the transfer transaction and sign it with your private key.
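The transfer itself can be sketched as a small record that names the permission asset, its current holder, and the recipient; validation then just replays the history, much as Bitcoin nodes replay transactions. The structure below is invented for illustration and omits the signature check a real system would perform.

```python
# A permission (e.g. "read access to dataset slice X") treated as an asset
# that can be transferred like a coin.
transfers = [
    {"asset": "read:dataset-slice-X", "from": "issuer", "to": "alice"},
    {"asset": "read:dataset-slice-X", "from": "alice",  "to": "bob"},
]

def current_holder(transfers, asset, issuer="issuer"):
    """Replay the transfer history to find who holds the permission now.
    A real system would also verify the digital signature on each transfer."""
    holder = issuer
    for t in transfers:
        if t["asset"] == asset:
            if t["from"] != holder:
                raise ValueError("transfer from someone who doesn't hold the asset")
            holder = t["to"]
    return holder

print(current_holder(transfers, "read:dataset-slice-X"))   # bob
```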

For example, “you can remix this data but you can’t deep-learn it.” This is likely part of DeepMind’s strategy in their healthcare blockchain project.

But if users can instead truly own their medical data and control its upstream usage, then DeepMind can simply tell consumers and regulators “hey, the customer actually owns their own data, we just use it”.

He then extrapolated further: It’s entirely possible that the only way governments will allow private ownership (human or AGI) of data is with a shared data infrastructure with “network neutrality” rules, as with AT&T and the original long lines.

In that sense, increasingly autonomous AI requires blockchains and other shared data infrastructure to be acceptable to the government, and therefore to be sustainable in the long term.

That’s now possible via better APIs to the process such as smart contract languages, and decentralized stores of value such as public blockchains.

They capture interaction with the world (actuating and sensing) and adaptation (updating state based on an internal model and external sensors).

AI can win at poker: but as computers get smarter, who keeps tabs on their ethics?

Researchers have overcome one of the major stumbling blocks in artificial intelligence with a program that can learn one task after another using skills it acquires on the way.

But the work shows a way around a problem that had to be solved if researchers are ever to build so-called artificial general intelligence (AGI) machines that match human intelligence.

Most AIs are based on programs called neural networks that learn how to perform tasks, such as playing chess or poker, through countless rounds of trial and error.

To build the new AI, the researchers drew on studies from neuroscience which show that animals learn continually by preserving brain connections that are known to be important for skills learned in the past.
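One way to sketch “preserve the connections important for past skills” is as a quadratic penalty that anchors each weight to its old value, scaled by an importance estimate. The toy below is a sketch of that general idea, not DeepMind’s actual implementation, and the importance values are placeholders.

```python
import numpy as np

def continual_loss(weights, new_task_loss, old_weights, importance, lam=1.0):
    """New-task loss plus a quadratic penalty anchoring each weight to its old
    value, scaled by how important that weight was for the earlier task."""
    penalty = np.sum(importance * (weights - old_weights) ** 2)
    return new_task_loss(weights) + lam * penalty

# Toy usage: weights important for task A barely move while fitting task B.
old_w = np.array([1.0, -2.0, 0.5])
importance = np.array([10.0, 0.0, 10.0])     # placeholder importance estimates
task_b_target = np.array([0.0, 3.0, 0.0])
task_b_loss = lambda w: np.sum((w - task_b_target) ** 2)

w = old_w.copy()
for _ in range(200):                         # plain gradient descent
    grad = 2 * (w - task_b_target) + 2 * importance * (w - old_w)
    w -= 0.05 * grad

print("old weights:", old_w, "-> after task B:", np.round(w, 2))
print("combined loss:", continual_loss(w, task_b_loss, old_w, importance))
```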

The lessons learned in hiding from predators are crucial for survival, and mice would not last long if the know-how was erased by the skills needed to find food.

For instance, when it played Enduro, a car racing game that takes place through the daytime, at night, and in snowy conditions, the AI treated each as a different task.

“One key part of the puzzle is building systems that can learn to tackle new tasks and challenges while retaining the abilities that they have already learnt.

This research is an early step in that direction, and could in time help us build problem-solving systems that can learn more flexibly and efficiently.” Peter Dayan, director of the Gatsby Computational Neuroscience Unit at University College London, called the work “extremely nice”.

Alan Winfield, at the Bristol Robotics Lab at the University of the West of England said the work was “wonderful”, but added: “I don’t believe it brings us significantly closer to AGI, since this work does not, nor does it claim to, show us how to generalise from one learned capability to another.

Google DeepMind's Untrendy Play to Make the Blockchain Actually Useful

For Silicon Valley, the headline was sweet nectar: Google DeepMind, the world's hottest artificial intelligence lab, embraces the blockchain, the endlessly fascinating idea at the heart of the bitcoin digital currency.

To DeepMind's credit, its new project depends less on trendy ideas than an apparent desire to solve a real problem in the real world—one that involves the most private and personal information.

"We would like to develop—and we think we can develop—technical proofs that give the hospital a really clear indication of which data we've had access to, for how long, under which policy, and how it is moved around in our ecosystem."

Just as the blockchain works to track every event related to your personal stash of bitcoin, DeepMind's system will track every event related to hospital health data.
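A minimal sketch of “track every event”: an append-only log in which each entry commits to the hash of the previous one, so altering an earlier record breaks the chain. This mirrors the blockchain-style idea described here; it is not claimed to be DeepMind’s actual design, and the event fields are invented.

```python
import hashlib
import json

def append_event(log, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log) -> bool:
    """Recompute every hash; any tampering with an earlier entry shows up here."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"who": "research-app", "what": "read records", "policy": "study-42"})
append_event(log, {"who": "research-app", "what": "moved data", "policy": "study-42"})
print(verify(log))                            # True
log[0]["event"]["what"] = "deleted records"   # tamper with history
print(verify(log))                            # False
```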

What's more, DeepMind plans to share its system as open source software, allowing anyone to apply the same tech to any sensitive data, not just health care records.

In an age when it's hard to know how your data is being used and who you're talking to online, DeepMind's project offers a way to push back.

"The broader trend is that people are looking for ways to securely share data in a way that's verifiable so that they have greater trust in the integrity of the information,"

Tierion and others are building their systems using the actual blockchain, asking this worldwide network of independent machines paid in bitcoin to verify the integrity of data stored elsewhere, whether it's financial transactions or health data.

Shoucheng Zhang: "Quantum Computing, AI and Blockchain: The Future of IT" | Talks at Google

Prof. Shoucheng Zhang discusses three pillars of information technology: quantum computing, AI and blockchain. He presents the fundamentals of ...

Ever wonder how Bitcoin (and other cryptocurrencies) actually work?

Bitcoin explained from the viewpoint of inventing your own cryptocurrency.

The Third Industrial Revolution: A Radical New Sharing Economy

The global economy is in crisis. The exponential exhaustion of natural resources, declining productivity, slow growth, rising unemployment, and steep inequality, ...

Tim Draper: One Bitcoin Is Still One Bitcoin | Interview to Cointelegraph

Cointelegraph talked to Tim Draper during the Global Blockchain Forum in San Francisco. He covered blockchain adoption, regulations in US and China, and ...

Jeremy Rifkin on the Fall of Capitalism and the Internet of Things

Economic theorist and author Jeremy Rifkin explains his concept of The Internet of Things. Rifkin's latest book is The Zero Marginal Cost Society: The Internet of ...

Bitcoin: Made by an A. I.

Leave it to Gonz Shamira (Face Like the Sun channel) to have the most comprehensive post for this over-the-top momentous event. Here is a re-post of his ...

AI Can Now Self-Reproduce—Should Humans Be Worried? | Eric Weinstein

Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be ...

Jeremy Rifkin: "The Zero Marginal Cost Society" | Talks at Google

In The Zero Marginal Cost Society, New York Times bestselling author Jeremy Rifkin describes how the emerging Internet of Things is speeding us to an era of ...

GIVING YOU ALL THE SECRET ONCE AND FOR ALL AT COMPLEX CON 2017 | DAILYVEE 334

Hope you guys enjoy this video, which is just 64 minutes of straight fire that I gave at Complex Con 2017. I think that there is a ton of value in this keynote, which ...