Everything You Know About Artificial Intelligence is Wrong
It was hailed as the most significant test of machine intelligence since Deep Blue defeated Garry Kasparov in chess nearly 20 years ago.
That fateful day when machines finally become smarter than humans has never appeared closer—yet we seem no closer to grasping the implications of this epochal event.
NYU research psychologist Gary Marcus has said that “virtually everyone” who works in AI believes that machines will eventually overtake us: “The only real difference between enthusiasts and skeptics is a time frame.” Futurists like Ray Kurzweil think it could happen within a couple of decades, while others say it could take centuries.
“Or, to be more precise, we use the word consciousness to indicate several psychological and cognitive attributes, and these come bundled together in humans.” It’s possible to imagine a very intelligent machine that lacks one or more of these attributes.
[Note: For clarification, these viruses are not AI, but in future they could be imbued with intelligence, hence the concern.] Reality: AI researcher and founder of Surfing Samurai Robots, Richard Loosemore thinks that most AI doomsday scenarios are incoherent, arguing that these scenarios always involve an assumption that the AI is supposed to say “I know that destroying humanity is the result of a glitch in my design, but I am compelled to do it anyway.” Loosemore points out that if the AI behaves like this when it thinks about destroying us, it would have been committing such logical contradictions throughout its life, thus corrupting its knowledge base and rendering itself too stupid to be harmful.
He also asserts that people who say that “AIs can only do what they are programmed to do” are guilty of the same fallacy that plagued the early history of computers, when people used those words to argue that computers could never show any kind of flexibility.
“It will know exactly what we meant for it to do.” McIntyre and Armstrong believe an AI will only do what it’s programmed to, but if it becomes smart enough, it should figure out how this differs from the spirit of the law, or what humans intended.
Reality: Assuming we create greater-than-human AI, we will be confronted with a serious issue known as the “control problem.” Futurists and AI theorists are at a complete loss to explain how we’ll ever be able to house and constrain an ASI once it exists, or how to ensure it’ll be friendly towards humans.
But these solutions are either too simple—like trying to fit the entire complexity of human likes and dislikes into a single glib definition—or they cram all the complexity of human values into a simple word, phrase, or idea.
Take, for example, the tremendous difficulty of trying to settle on a coherent, actionable definition for “respect.” “That’s not to say that such simple tricks are useless—many of them suggest good avenues of investigation, and could contribute to solving the ultimate problem,” Armstrong said.
As AI theorist Eliezer Yudkowsky said, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” In his book Superintelligence: Paths, Dangers, Strategies, Oxford philosopher Nick Bostrom wrote that true artificial superintelligence, once realized, could pose a greater risk than any previous human invention.
“It therefore has a strong incentive to ensure that it isn’t interrupted or interfered with, including being turned off, or having its goals changed, as then those goals would not be achieved.” Unless the goals of an ASI exactly mirror our own, McIntyre said it would have good reason not to give us the option of stopping it.
“Intelligence has just given them the ability to be bad more intelligently, it hasn’t turned them good.” As McIntyre explained, an agent’s ability to achieve a goal is unrelated to whether it’s a smart goal to begin with.
“Relying on luck is not a great policy for something that could determine our future.” Reality: This is a particularly common mistake, one perpetuated by an uncritical media and Hollywood films like the Terminator movies.
“Imagine how boring a story would be,” Armstrong said, “where an AI with no consciousness, joy, or hate, ends up removing all humans without any resistance, to achieve a goal that is itself uninteresting.” Reality: The ability of AI to automate much of what we do, and its potential to destroy humanity, are two very different things.
“And the money these winners save will be spent on other goods and services which will generate new jobs for humans.” In all likelihood, artificial intelligence will produce new ways of creating wealth, while freeing humans to do other things.
The Limits of Modern AI: A Story
In the 18th Century the Enlightenment philosopher and proto-psychologist Étienne Bonnot de Condillac imagined a statue outwardly appearing like a man and also with what he called “the inward organization.” In an example of supreme armchair speculation, Condillac imagined pouring facts---bits of knowledge---into its head, wondering when intelligence would emerge.
His blueprint for his Analytic Engine was the first design for a general purpose computer, incorporating components that are now part of modern computers: an arithmetical logic unit, program control flow in the form of loops and branches, and integrated memory.
The design extended work on his earlier Difference Engine---a device for automating classes of mathematical calculations---and though the completion of decades of work was at hand, the British Association for the Advancement of Science refused additional funding for the project.
The paper begins: “I propose to consider the question, ‘Can machines think?’” He then proposed the Imitation Game, or what's now referred to as the Turing Test: pose questions to a machine and a human hidden in separate rooms, and if we can't tell the difference in their responses, then the machine can be said to “think,” as we do.
John McCarthy coined the phrase “Artificial Intelligence” in 1955, and by 1956 Artificial Intelligence, or AI, was officially launched as an independent research field at the now-famous Dartmouth Conference, a meeting of minds that included notable scientists such as Allen Newell and Herbert Simon (Carnegie Mellon), Marvin Minsky (MIT), and John McCarthy (MIT, and later Stanford).
“AI was harder than we thought,” John McCarthy would later concede. Herbert Simon declared in 1957 that AI had arrived, with machines that as he put it “can think.” And MIT computer scientist Marvin Minsky too, by the 1960s, thought that the problems would be ironed out “within a generation.” All of this was of course wildly off the mark.
Each phase highlighted a different research agenda, and each fizzled out after a decade of work following a pattern of registering early successes on easy, controlled problems, and then meeting failure attempting to scale the approaches to more realistic scenarios.
As AI scientist Drew McDermott points out, programs like the “General Problem Solver” were in fact specific algorithms for solving constrained problems, but their names tended to impart an air of generality and robustness that encouraged a misunderstanding about their actual modest, even uninteresting, capabilities.
In fact, the tendency of AI researchers to endlessly hype and overrate their programs has been part and parcel of the field on up to the present day (more on this later). Boasting and trickery is a tacit admission that, on the merits, AI is often not as impressive as it's billed.
(A separate, major approach was to model the brain with simple constructs resembling neurons, called “perceptrons.” We'll get to this approach later, as it has resurfaced as the major paradigm in AI today.) Minsky and Papert championed the development of methods for handling---processing, manipulating, “dealing with”---knowledge in isolated domains known as “micro-worlds.” Micro-worlds were supposed to provide the initial insights that would lead to more general programs that could scale up to real-world thinking.
The entire world could be described by as few as 50 English words: nouns such as “block” or “cone,” verbs such as “move to” or “place on,” and adjectives such as “blue” or “green.” Via a program Winograd devised called SHRDLU, an operator could ask the robot to “pick up the green cone and place it on the blue block,” for example.
Minsky himself was to experience a profound change of mood, admitting in 1982 to a reporter that “the AI problem is one of the hardest science has ever undertaken.” Yet in the late 1960s, the failure of the Blocks World simply suggested to him yet other, quite similar, strategies.
AI and language understanding was---must be---about getting the right knowledge into the system, structured in the right way, so that relatively simple programming strategies could access and render usable this knowledge in the performance of intelligent tasks.
Dreyfus would later call his scripts “predetermined, bounded, and game-like.” Schank defined them as follows: We define a script as a predetermined causal chain of conceptualizations that describe the normal sequence of things in a familiar situation.
Where micro-worlds were tractable but relatively uninteresting domains, frames were capable of capturing big pieces of real life---the typical events of attending a party, or walking into a living room, or eating out, and so on.
Systems using scripts or frames to understand stories---their original application---didn't need complex, scientific knowledge but rather simple, everyday knowledge even young children had acquired: “Barack Obama is President of the United States,” or “Barack Obama wears underwear,” or even “When Barack Obama is in Washington, his left foot is also in Washington.” But simple knowledge like this seemed endless;
By the end of the 1970s it was clear that our language, or rather the interpretation of language, lay at the root of the problem with scripts and frames (indeed, language understanding was emerging as the key problem for all of AI). Schank intended his scripts to be used by physical systems---robots---but his initial work was on programs run on mainframes or desktops that analyzed textual stories about social scenarios.
Bar-Hillel, once again, had been prescient here: “The number of facts we human beings know is, in a certain very pregnant sense, infinite.” Infinitude was not a promising concept for a supposedly practical, engineering-based field.
By the 1980s, the so-called Frame Problem---the problem of grasping what is relevant and ignoring what is not, in real-time thinking---had added a seemingly mysterious and intractable conundrum to the already puzzling issue of how to give computer knowledge in AI.
While GOFAI projects like former Stanford and Carnegie Mellon computer scientist Douglas Lenat's “Cyc” project (short for “encyclopedia”) continued hand-coding more and more computer-readable knowledge into large knowledge bases, intended to somehow solve issues with relevant knowledge, suddenly thousands and then millions of people were giving AI “big data” in the form of Web pages.
Empirical methods, as they came to be called, were computational approaches that exploited words and surface features of text, and such methods exploded in the 1990s and quickly replaced the deep, knowledge-based efforts.
(In the early days of the Web, a major concern was whether anyone could ever find relevant information: it was seemingly a needle-in-a-haystack problem.) Hand-crafted rules---the old efforts at engineering knowledge bases and rules to draw conclusions from them using human experts---clearly couldn't be scaled quickly enough for such an effort.
To understand why we are likely still headed for a “winter” (one that is not yet recognized), even today amid the success of Web behemoths like Google, Yahoo!, Facebook, Twitter, and others, we'll need to unpack the statistical or data-driven approach brought back to life by the modern Web.
By contrast, traditional AI---what philosopher and AI researcher John Haugeland called “Good Old Fashioned AI” (GOFAI)---assumed that a significant part of human knowledge is not derived from experience but is “fixed” in advance in the capabilities of the brain or mind.
Chomsky himself played a large role in dismissing early statistical approaches to machine translation with his 'poverty of stimulus' arguments against empiricist, learning-based approaches like that of the celebrated behaviorist B.F. Skinner.
He argued too, contra Shannon and the statistical tradition, that meaningless statements like “Colorless green ideas sleep furiously” are useless for statistical inference (predicting the next word given a context of prior words) but are nonetheless grammatical.
The GOFAI projects still surviving represented, in essence, “Hail Mary” attempts to vindicate GOFAI, as evidence mounted that AI researchers had “stumbled into a game of three dimensional chess, thinking it was tic-tac-toe,” as philosopher and cognitive scientist Jerry Fodor put it.
In his early work at Bell Labs, Shannon pioneered modern information theory, but he also made important contributions to the fledgling field of AI in the 1950s by showing that seemingly semantic problems in language understanding could sometimes be reduced to purely statistical analysis.
Information theory in fact helped explain this: a natural language like English is redundant, so predicting the next letter in a word (or next word in a sentence) can be modeled as a statistical inference conditioned on prior context.
If one viewed language as a simple “Markov” process, where the next element in a sequence can be predicted by considering only a local context of n prior elements, problems that seemed difficult could be reduced to simple mathematics.
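To make the idea concrete, here is a minimal sketch of such a Markov-style predictor in Python: a bigram model (n = 1 word of prior context) that predicts the next word purely by tallying what followed it in a toy corpus invented for the example.

```python
from collections import defaultdict, Counter

def train_bigram_model(tokens):
    """Count bigram transitions: for each word, tally the words that follow it."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Predict the most frequent successor of `word` seen in training."""
    followers = model.get(word)
    if not followers:
        return None  # word never seen with a successor
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
```

The "simple mathematics" here is just conditional frequency: no grammar, no meaning, only counts over local context.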
Statistical analysis of simple word-to-word mappings failed to produce quality translations, and attempts at incorporating syntactic evidence (including Chomsky's new “transformational” grammars) failed, too.
AI critic Hubert Dreyfus summed it up in his widely-read critique, What Computers Can't Do: it seemed that competing approaches to AI were all getting their “deserved chance to fail.” By the 1980s, GOFAI was faltering on problems concerning relevance, as we've seen.
For one thing, the increasing availability of data for training and testing empirical or “learning” methods was beginning to shift the scales, as erstwhile statistics-based failures were showing signs of success.
Channel noise was a concern for Bell Labs where Shannon worked, and so his early work focused on reducing or eliminating “crackles” in telephone lines that affected the quality and comprehensibility of spoken communication for its customers.
What's clear is that the availability of huge datasets breathed new life into work in AI, and given the manifest difficulties encountered by rationalist “GOFAI” efforts, Modern AI has been recast as an empirical discipline using “big data”---very large computer-readable datasets---to learn sophisticated models.
Big data/machine learning--inspired approaches have moved the ball on scores of practical AI tasks, like machine translation, voice recognition, credit card fraud detection, spam filtering, information extraction, and even sci-fi projects like self-driving cars.
Small wonder then that machine learning so captivates the modern mind, married as it is to data on the one hand, and to today's superfast computers on the other, affordable systems that surpass the multi-million dollar supercomputers of a decade ago.
Carnegie Mellon's Tom Mitchell, a professor of Computer Science and an expert in the subfield of machine learning, defines it as follows: Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
For a computer program, to “learn from experience E” is to learn from data: computer programs can't “experience” human language use directly, for instance, but written or transcribed communication can be digitized and datafied for them, as we've seen.
A task T is a problem such as ranking search results, or facial recognition (pick one), and a performance measure P is our assessment of how good a machine actually gets on a particular task.
Voice recognition (identifying morphemes from phonemes), handwritten character recognition, machine translation, collocation identification, syntactic parse generation and extraction, and many other problems in NLP were reduced to classification or regression problems.
For one thing, learning systems are plausible counter-examples to an age-old objection to the possibility of true AI: that computers can only “do what they're programmed to do.” The objection was first suggested in the 19th century by Ada Lovelace, a brilliant and largely self-taught mathematician (and daughter of Lord Byron) who worked with Charles Babbage on his world-famous Analytic Engine, the monstrous mechanical calculator mentioned earlier that never fully worked but in important ways still anticipated the age of modern computation arriving in the next century.
To explain exactly why, we turn to the most powerful machine learning approach known today, supervised machine learning---learning from data that has been explicitly labeled by humans.
Given a task such as Part Of Speech (POS) recognition or POS “tagging,” for instance, a training instance might be (1) a sentence, tokenized into its constituent words stored as elements in a vector, along with (2) each word's corresponding part of speech (POS) label, or tag: [The cat is on the mat: DT/NN/VB/PP/DT/NN] Here, the feature data is the sentence “The cat is on the mat,” and the label data are POS tags, using the well-known Brown Corpus abbreviations for parts of speech (DT is a “determiner”, and so on).
In a typical learning task, training data consisting of thousands or even millions of such example sentences along with corresponding POS tags is extracted from a corpus, converted into training instances, and input to a learning algorithm.
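As an illustration of that conversion step, a (sentence, tag-sequence) pair might be turned into per-word training instances as sketched below. The helper and the feature names are invented for the example, not any particular system's API; real feature sets are far richer.

```python
def to_instances(sentence, tags):
    """Turn one (sentence, tag-sequence) pair into per-word training instances.

    Each instance is a (feature-dict, label) pair; the features here
    (lowercased word plus its immediate neighbors) are illustrative only.
    """
    words = sentence.split()
    assert len(words) == len(tags), "one tag per word"
    instances = []
    for i, (word, tag) in enumerate(zip(words, tags)):
        features = {
            "word": word.lower(),
            "prev": words[i - 1].lower() if i > 0 else "<s>",    # sentence start
            "next": words[i + 1].lower() if i < len(words) - 1 else "</s>",
        }
        instances.append((features, tag))
    return instances

pairs = to_instances("The cat is on the mat", ["DT", "NN", "VB", "PP", "DT", "NN"])
# pairs[0] is ({"word": "the", "prev": "<s>", "next": "cat"}, "DT")
```

A corpus of millions of tagged sentences is processed this way before being handed to the learning algorithm.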
Many language understanding tasks like POS tagging, entity recognition, or phrase-mapping tasks like machine translation are based on (or can be cast as) classification problems, and hence use traditional classification methods such as Hidden Markov Models (HMMs), or discriminative methods like Maximum Entropy (MaxEnt) or Conditional Random Fields (CRFs).
CRFs are a class of powerful learning algorithms based on an undirected graphical model that solve known weaknesses in maximum entropy classification, like the “label bias problem,” but otherwise are equivalent to maximum entropy and other conditional models.
So-called large-margin classifiers, like Support Vector Machines (SVMs), are “true” classifiers that can be trained on sequential data (like the word sequences for POS tagging) by using “tricks” like a pre-processing step, where a “sliding window” or other algorithmic technique converts the sequence of input into separate, binary classification problems.
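A minimal sketch of that sliding-window trick follows; the padding symbols and window width are invented for the example. Each position in the sequence becomes an independent classification instance whose features are the token plus k tokens of context on either side.

```python
def sliding_window_instances(tokens, tags, k=1):
    """Convert a tagged sequence into independent classification instances.

    Each instance's features are the token plus k tokens of left/right
    context (padded at the edges), so a non-sequential classifier such
    as an SVM can be trained on per-token decisions.
    """
    padded = ["<pad>"] * k + tokens + ["<pad>"] * k
    instances = []
    for i, tag in enumerate(tags):
        window = padded[i : i + 2 * k + 1]  # k left neighbors, token, k right
        instances.append((tuple(window), tag))
    return instances

X = sliding_window_instances(["The", "cat", "sat"], ["DT", "NN", "VB"], k=1)
# first instance: (("<pad>", "The", "cat"), "DT")
```

The cost of the trick is that each decision is made in isolation; the classifier never sees the tags it assigned to neighboring positions, which is exactly what sequence models like HMMs and CRFs retain.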
If the learning problem is well-defined, the model's accuracy will increase with additional input until it flattens out (the accuracy curve is often asymptotic); at that point the model is saturated, and is released to be used in a production phase.
Accuracy is typically calculated using an F-Measure, the harmonic mean of system precision (the number of correct answers out of the number attempted) and recall (the number of correct answers out of the total possible).
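The computation can be sketched directly; the counts below are toy numbers invented for the example.

```python
def f_measure(correct, attempted, total_possible):
    """F1 score: the harmonic mean of precision and recall.

    precision = correct / attempted
    recall    = correct / total_possible
    """
    precision = correct / attempted
    recall = correct / total_possible
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 80 correct answers out of 90 attempted, with 100 answers possible:
print(round(f_measure(80, 90, 100), 3))  # 0.842
```

The harmonic mean punishes imbalance: a system that attempts everything (perfect recall) but is mostly wrong, or answers one question correctly (perfect precision) and skips the rest, scores poorly either way.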
As this 10 percent may somehow “fit” the training data well by chance (i.e., the resemblance is spurious), the training data can be repeatedly divided into “90--10” training/testing splits, where each new split selects another 10 percent of previously unseen data.
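Those repeated 90--10 splits amount to 10-fold cross-validation, sketched below under the simplifying assumption that the data size divides evenly into the folds.

```python
def ninety_ten_splits(data, folds=10):
    """Yield (train, test) splits; each fold's 10% test slice is disjoint
    from every other fold's, so all data is eventually used for testing."""
    fold_size = len(data) // folds
    for i in range(folds):
        test = data[i * fold_size : (i + 1) * fold_size]
        train = data[: i * fold_size] + data[(i + 1) * fold_size :]
        yield train, test

data = list(range(100))                      # stand-in for training instances
splits = list(ninety_ten_splits(data))       # 10 splits of 90 train / 10 test
```

Averaging accuracy over all ten held-out slices guards against the spurious fit that any single lucky 10 percent might produce.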
Facebook, too, classifies user posts according to topical and other information, and uses additional “label” information from the social graph itself---connections between users are a powerful, supervised “signal” that Facebook gets for “free,” so to speak.
As Nassim Nicholas Taleb argued in his best-selling critique of inductive methods in financial analysis, The Black Swan (Random House, 2007), learning methods that are based on tallying frequencies in past data can be blind to unlikely events departing from such normal past behavior.
Given a supervised learning approach to document classification, however, the frequencies of “crime” words can be expected to be quite high: words like “held up,” “gun,” “robber,” “victim,” and so on will no doubt appear in such a story.
Thus the classification learner has not only missed the intended (human) classification, but precisely because the story fits “Crime” so well given the Frequentist assumption, the intended classification has become less likely---it's been ignored because of the bias of the model.
Again, algorithms that have “thrown in their chips” for building powerful models of language using a frequency bias are very ill-suited to handling such cases, and importantly they become less and less---not more and more---capable, the more powerful they become.
“Zipf's Law” states that there exists some constant k such that f * r = k, where f is the frequency of a word and r is its position in a list ordered by frequency, known as its rank.
But given that word frequency follows the power law distribution that Zipf outlines (Benoît Mandelbrot later refined it, but the details aren't of interest here), there is always an effectively infinite “long tail” of rare, low-frequency words.
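Zipf's relation is easy to compute for any corpus; here is a small illustration with an invented toy token list (real corpora approximate the constant far better than seven tokens can).

```python
from collections import Counter

def zipf_table(tokens):
    """Return (word, rank, frequency, rank * frequency) rows, most frequent first.

    Under Zipf's Law the final column should stay roughly constant (= k).
    """
    counts = Counter(tokens).most_common()
    return [(word, rank, freq, rank * freq)
            for rank, (word, freq) in enumerate(counts, start=1)]

toy = "a a a a b b c".split()
table = zipf_table(toy)   # rank * frequency: 4, 4, 3 --- roughly constant even here
```

The practical upshot is the long tail: however large the corpus, most word types in it occur only a handful of times, and new ones keep arriving.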
A sentence fragment ending in “swallowed the green …” shows how the semantics of the verb “swallowed” clearly influences the selection of nouns following the adjective “green.” As Manning and Schütze note, nouns like “mountain” or “train” would not be preferred here, given the selectional preferences on the verb.
The IBM Laser Patent Text corpus has 1.5 million words, yet researchers report that almost a quarter (23%) of the trigrams found in held-out test splits of the corpus had never been seen in training.
There will always be word combinations that are unseen in training data but which end up in test data---this is just a re-statement of the open-endedness and extensibility of language, along with empirical observations captured (if incompletely) by Zipf's Law, stating that the vast majority of words in any corpus are actually rare.
A “count”-based method like maximum likelihood estimation (MLE) assigns a zero probability to words that don't appear in training data, making such a simple method unusable, in effect.
For this reason, MLE methods are typically supplanted by a technique known as “smoothing.” Data smoothing is a mathematical “trick” that distributes probability (called the probability “mass”) from training data to previously unseen words in test data, giving them some non-zero probability.
P(w1 … wn) = ( C(w1 … wn) + 1 ) / (N + B), where C(w1 … wn) is the n-gram's count in training, N is the total number of training instances, and B is the number of bins (possible n-gram types). Other, more advanced smoothing techniques are also used, such as Lidstone's Law and the Jeffreys-Perks Law (the former involves adding some positive quantity less than 1; the latter fixes that quantity at 1/2).
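The add-one scheme can be sketched in a few lines; the bigram counts and the bin count B below are toy values invented for the example.

```python
from collections import Counter

def laplace_prob(counts, ngram, N, B):
    """Add-one (Laplace) smoothed probability:

        P(ngram) = (C(ngram) + 1) / (N + B)

    N is the total number of n-grams observed in training and B the number
    of possible bins; unseen n-grams get the non-zero probability 1 / (N + B).
    """
    return (counts[ngram] + 1) / (N + B)

bigrams = Counter({("the", "cat"): 3, ("the", "dog"): 1})
N = sum(bigrams.values())   # 4 observed bigram tokens
B = 10                      # assume 10 possible bigram types (toy value)

print(laplace_prob(bigrams, ("the", "cat"), N, B))   # (3 + 1) / (4 + 10)
print(laplace_prob(bigrams, ("the", "fox"), N, B))   # unseen: (0 + 1) / 14
```

Note how probability mass is shaved off the seen bigrams and redistributed to every unseen one, which is exactly what rescues MLE from its zeros.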
Such methods typically perform well for tasks that exploit frequencies well, like part of speech tagging, and fail miserably for more complicated natural language tasks requiring knowledge and inference, like resolving anaphoric or other references.
Complex phenomena like earthquakes, or financial markets, are fertile ground for generating “over-fit” models, as the inherent complexity of such systems---varying pressures on vast areas under the earth's crust, or the feedback loops and human choices forming the modern economy---make discerning signal from noise very difficult.
Ironically, very complicated models from a mathematical standpoint (so-called 'non-parametric density approximation models') are often more vulnerable to over-fitting: signal plus noise is often more complex than signal alone.
The key point is: over-fit models don't generalize to unseen examples except by happy circumstance, as the real underlying distribution representing the signal (not the noise with it) was never learned.
Saturation can occur even when models “fit” the data well, showing good generalization performance on unseen data, but nonetheless can't learn further patterns due to the design constraints on the learning models themselves (choice of parameters, features, etc.).
Norvig here fingers the natural tendency of model performance to level off as more and more data is added, approaching a final accuracy (often asymptotically), as more and more data yields less and less by way of results.
Well-defined tasks that are relatively easy to learn often show relatively high performance before saturating (for instance, part of speech tagging), but other more knowledge-dependent tasks quickly vanquish learning models well before human-level performance can be reached.
Bar-Hillel, in addition to casting a general and notorious skepticism on automated language-processing efforts generally, also pointed out decades ago that simple sentences such as “The box is in the pen” can't be understood using statistical methods.
The problem, again, begins with ambiguity: words like “pen” are polysemous, or “many sensed.” A pen might mean a writing instrument, or a small enclosure for holding children or animals, depending on context.
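A toy sketch shows why pure frequency counting is blind to sentences like Bar-Hillel's; the sense inventory and counts below are invented for the example.

```python
def most_frequent_sense(word, sense_counts):
    """A purely frequency-based disambiguator: always pick the sense seen
    most often in training data, ignoring the sentence context entirely."""
    return max(sense_counts[word], key=sense_counts[word].get)

# Hypothetical corpus counts: the writing-instrument sense dominates.
sense_counts = {"pen": {"writing-instrument": 95, "enclosure": 5}}

# For "The box is in the pen" the correct sense is "enclosure", but the
# frequency-based pick is the same no matter what sentence surrounds it:
print(most_frequent_sense("pen", sense_counts))  # writing-instrument
```

Choosing correctly requires the world knowledge that boxes don't fit inside writing instruments, knowledge that no tally of past usage supplies.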
Such examples may be relatively uncommon in natural language texts, but that they occur at all spells trouble for machine translation---uncommon yet relevant disambiguations are precisely the issue that statistical methods seem ill-equipped to even address, let alone “solve.” A decade later, Haugeland posed similar questions about the necessity of somehow using relevant knowledge to tame holism---and drew similarly skeptical conclusions.
Decades-old debates may seem otiose today, until one realizes that a modern, world-class system like Google Translate gets Bar-Hillel's simple sentence wrong, too: “pen” is translated by Google as “a writing instrument.” (The reader can verify this for himself, of course.) The failure is particularly telling for Modern AI proponents, because Google's translation system is touted as state of the art precisely because it maps phrases to different languages using data: books and other pages on the Web that have been translated into other languages.
The illusion of understanding is quickly shattered, however, when one notes that Google translates (2) using the same phrase: “laid down their arms.” As the two situations are radically different, that Google assigns them the same idiomatic phrase perfectly illustrates the differences between data and knowledge.
Once one realizes the differences between grasping what's relevant---actual, usable, context-dependent knowledge---and induction that requires “counting up” previous examples and patterns, the real scope and limits of Modern AI become clear.
In a widely read and controversial paper titled “Cognitive Wheels: The Frame Problem of AI,” Dennett expanded the technical discussion about McCarthy's Frame Problem to include the philosophical question about how intelligent agents---any intelligent agents, whether human or machine---understand what's relevant when the world is constantly changing around them.
`Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.' They called their next model, the robot-deducer, R1D1.
It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon---when the bomb exploded.
`We must teach it the difference between relevant implications and irrelevant implications,' said the designers, `and teach it to ignore the irrelevant ones.' So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short.
When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it.
The Frame Problem emerged out of AI research, but it really exposed the deep mystery of intelligence generally: It appears at first to be at best an annoying technical embarrassment in robotics, or merely a curious puzzle for the bemusement of people working in Artificial Intelligence (AI).
Machine translation was hard, in other words, but true AI was vastly, infinitely, harder: In a typical logical system, subsequent steps are constrained by previous ones, but seldom uniquely determined.
The more data (evidence) Modern AI brings to bear on a particular problem today, like recommending articles or products based on a user's past choices, personalizing content, or major language engineering efforts like Google Translate, the more Modern AI is likely to miss unexpected outcomes that are the whole point: the hallmark of intelligent thinking.
While performance on many language-engineering tasks has indeed increased in recent years, the inevitable errors resulting from systems using inductive approaches will no doubt be from the very difficult or unexpected (or statistically rare, but nonetheless valid given circumstances) examples whose solution requires a solution to the Frame Problem.
For instance, in modern language-engineering, the disambiguation of Bar-Hillel's historic “the box is in the pen” example may succeed perhaps nine out of 10 times today (Google cites a 90 percent success rate on sense disambiguation using its trillion word corpus, for instance).
Yet the single example of the rarer meaning of “pen” as an enclosure in that sentence, in spite of scores of frequencies of it meaning “a writing instrument” in data, is exactly the rarer yet correct inference we need such modern systems to make, to show any real progress on the question of human thinking.
Such answers aren't forthcoming in today's systems, ironically because the value of Modern AI lies precisely in exploiting the “good enough” approach at the expense of outliers: getting the majority of answers correct for its users.
And given the unbounded, fluid nature of human conversations, where context constantly changes in real time in a feedback loop of meaning (“Oh, that's interesting, it reminds me of such-and-such .
The Dark Secret at the Heart of AI
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.
Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.
There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.
But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.
“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.
The resulting program, which the researchers at Mount Sinai Hospital in New York named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.
Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.
If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.
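The article does not say how such a rationale might be produced. One standard technique, sketched here on synthetic data with invented feature names, is to fit a simple linear surrogate model and read its largest weights as the risk factors driving a prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
features = ["age", "bilirubin", "glucose", "smoker", "bmi"]

# Synthetic patient records: risk here is driven mainly by bilirubin
# and age, by construction.
X = rng.normal(size=(500, 5))
true_w = np.array([1.5, 3.0, 0.1, 0.2, 0.1])
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Logistic regression fit by plain gradient descent.
w = np.zeros(5)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# The "rationale": features ranked by learned weight magnitude.
ranking = sorted(zip(features, w), key=lambda fw: -abs(fw[1]))
top_factors = [f for f, _ in ranking[:2]]
```

A linear surrogate is faithful only as far as a deep model behaves linearly, which is exactly the gap the interpretability debate is about.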
Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.
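That older transparency is easy to demonstrate. The loan-approval rules below are hypothetical, but the property they illustrate is real: in a rule-based system, every decision carries a human-readable trace, because the trace and the computation are the same thing.

```python
def approve_loan(applicant):
    """Rule-based decision: the returned trace *is* the explanation."""
    trace = []
    if applicant["income"] < 30_000:
        trace.append("income below 30,000 -> reject")
        return False, trace
    trace.append("income at or above 30,000 -> continue")
    if applicant["defaults"] > 0:
        trace.append("prior default on record -> reject")
        return False, trace
    trace.append("no prior defaults -> approve")
    return True, trace

approved, why = approve_loan({"income": 45_000, "defaults": 0})
# Anyone can audit `why` rule by rule, which is precisely what a
# trained neural network does not offer.
```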
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.
It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.
The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.
In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
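The intuition behind such fooling images can be shown with a toy linear stand-in for the network. (The actual 2015 study used deep networks and evolutionary search as well as gradient ascent; this sketch only captures the gradient-climbing idea.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "network": a positive score means the model reports "flamingo".
w = rng.normal(size=100)

def sees_flamingo(image):
    return float(image @ w) > 0

# Start from pure noise and repeatedly nudge each pixel in the direction
# that raises the flamingo score -- the input-space gradient, which for
# this linear model is simply w.
image = rng.normal(size=100)
while not sees_flamingo(image):
    image += 0.1 * np.sign(w)

# The model is now confident it sees a flamingo; a human still sees noise.
```

The optimization targets the low-level patterns the model keys on, not anything a person would recognize, which is why the results look abstract.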
It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.
“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.
The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.
She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
After finishing her cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.
Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.
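One simple form such an explanation can take is a saliency map: rank input pixels by how strongly they move the output. The sketch below uses a random linear scorer as a stand-in for a trained network, so its gradient with respect to the input is just the weight vector itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "mammogram" scorer: a linear model over 8x8 pixel intensities.
# Real systems are deep networks; these weights are random stand-ins.
W = rng.normal(size=64)

def cancer_score(image):
    return float(image.flatten() @ W)

# Saliency: for a linear model, the gradient of the score with respect
# to each pixel is just W, so |W| measures each pixel's influence.
saliency = np.abs(W).reshape(8, 8)
hotspots = np.argwhere(saliency > np.percentile(saliency, 90))
```

For a deep network the gradient depends on the particular image, so the map highlights the regions that drove that specific prediction, which is the kind of reasoning aid Barzilay's group is after.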
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.
A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.
But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.
A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.
But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s, no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
Will automation take away all our jobs? | David Autor
Here's a paradox you don't hear much about: despite a century of creating machines to do our work for us, the proportion of adults in the US with a job has ...
The AI Is The All Seeing Eye (AI) - Something WEIRD Happened Today
Something weird and strange happened earlier today. Here's my story, confirming, for me anyway, that I am on the right track, the AI is the All Seeing Eye.
Old vs New: Cinderella - Nostalgia Critic
The gloves are off...the long sparkly ones anyway. Which Disney Cinderella is the best one?
Job Automation: Are Writers, Artists, and Musicians Replaceable?
You're probably reading this from either a smartphone or a laptop. It's no small secret that the device you're looking at can create works of art... if you put your ...
Can a robot pass a university entrance exam? | Noriko Arai
Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo -- without actually ...
AI: What Is it, What Are The Benefits & Challenges And What You Can Do To Prepare For It
There have been many conversations around AI and automation in the past few years. Some people are excited to embrace the coming growth of AI, while ...
How AI can bring on a second Industrial Revolution | Kevin Kelly
"The actual path of a raindrop as it goes down the valley is unpredictable, but the general direction is inevitable," says digital visionary Kevin Kelly -- and ...
De-Occulting Elon Musk (Lecture Only)
How to get empowered, not overpowered, by AI | Max Tegmark
Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the ...