AI News, Films from the Future: Just the Footnotes

Market Intelligence for Strategic Advantage

“Big Tech company Microsoft is to broaden the appeal of its NLP and machine-learning tools for doc review as part of a project to bring its Azure Cognitive Services capabilities into the Power BI platform for business-level analysis and data visualisations.”

“Smart-contract pioneer OpenLaw and oracle platform Rhombus have joined forces to build derivatives smart contracts, as part of a project to see if their tech can be used in the $500 trillion market for handling derivatives trades.”

“Tokyo will spend the year installing facial-recognition systems in preparation for the Olympics in 2020, when it will use the technology to make sure that only authorised persons enter secure areas.”

Why does Elon Musk care so much about AI and its threat to the world?

Of course I do feel uncomfortable, because I don’t remember how to open patients anymore.” #2: Even Andrew Ng, an expert in the field of AI, expresses his opinion: “Some of them are planning for a 40-year career in radiology just reading images. I think that (meaning a replacement of the job by machines – my own edit here) could be a challenge to the new graduates of today.” #3: A

To understand the effect of automation on the economy (the rise of inequality between the poor and the rich), consider that Amazon employed 45,000 robots in its warehouses (as of January 2017[1]), while Alibaba, China’s largest online retailer, runs a warehouse manned by 60 robots that do 70% of the jobs (as of September 2017[2]).

In fact, especially during the “golden age” of the American economy in the twentieth century, advancing technology consistently drove people toward a prosperous society.

He states that men and machines are good at fundamentally different things, and that humans and technology are complementary to each other: “People have intentionality – we form plans and make decisions in complicated situations.

This is why my answer is more about the experts (as I am not an expert in the field of AI or anything similar) explaining the concerns and providing different perspectives, allowing readers to process the information and decide for themselves.

11 Artificial Intelligence Movies You’ll Definitely Love To Watch

From classic big assembly machinery to supercomputers with incredible operating systems, all the way down to human-like robots, the developments of this century have changed our lives in immeasurable ways and, judging by the rate of these developments, it’s safe to say we’ve only seen the beginning.

Therefore, taking some time to dive into the philosophical and moral implications of AI, as in Leigh Whannell’s 2018 science-fiction horror film Upgrade, and to truly think about what this constant interplay between humanity and technology means, is a primary trait of any self-respecting developer… thankfully, most Artificial Intelligence movies are thought-provoking.

And, as we are obsessed with movies set in the future, especially the ones where technology is the leading lady, we’ve decided to create the ultimate list of AI films spanning the decades, reflecting the ever-changing spectrum of our emotions regarding the machines we have created:

Mainly because this is the first serious sci-fi film, giving us not only very advanced machinery to look at (which, by the way, changed our collective vision of what the future looked like), but also a biting social commentary on the implications of human interaction with machines, inspiring and moulding our attitude towards many later AI creations, real and imaginary.

Fast forward to 1968, when HAL 9000, the epitome of the “evil computer”, decides to kill two astronauts because he is unable to reconcile the order to conceal the true nature of his mission with his self-described incapacity to fail: “No ‘9000’ computer has ever made a mistake or distorted information.”

Funnily enough, his failures, his unwillingness to explain his actions, his meaningless reassurances, his fondness for gentle taunting, and his total meltdown into incoherence are what gave him a strangely human-like feeling.

The film's replicants are bioengineered so perfectly that they’re almost psychologically identical to humans (something most serious AI films leave deliberately vague) and, even more, through false memories implanted as 'emotional cushions', they may even believe they’re human.

The film never explains where that (almost) hatred comes from but, even when the machine takes on a human form, the differences between it and humans are quite clear, and not just because of its constant disregard for the idea of maintaining a single, unalterable form.

Ultimately, Agent Smith’s existence as sentient software is a good moment to remind everyone that AI isn’t always just hardware… as well as a reminder that humans becoming dangerously dependent on artificial systems for everything is never a sound idea.

On the other hand, much like Skynet, VIKI is a rebellious and quite dangerous supercomputer. The difference is that VIKI’s logic didn’t turn her against us to protect herself, but because she prioritized society's interests over the individual's: this robot honestly believes it can only serve humanity by ruling it.

There he finds a space cruise ship filled with incredibly unhealthy humans and, through sheer force of will (something usually reserved for humans) and the discovery of a small plant, takes the feeble population of the ship back to Earth.

The film frames AI in an optimistic, utopian light, but it still reminds us that technology has the capacity to run amok when unchecked or when created under dubious ethical circumstances: the film makes clear that a lot of lonely people are falling in love and forming friendships with seemingly sentient operating systems, and are left completely heartbroken when those systems leave.

The portrayal of how the AI is created is completely wrong: the idea that it can be created by a lone genius in a high-tech lab is ridiculous (AI is created by entire teams working slowly for years).

At the end of it all, the idea that a computer system can somehow become self-aware and decide that we should be completely destroyed or ruled over, because we cannot take care of ourselves, is a common trope; but in real life, the AI failures we have actually suffered have involved far less threatening things.

This fall, for instance, the president who swore he was going to give us an infrastructure plan that would blow our minds discovered that, after a tax cut for billionaires, a ballooning national debt, and a staggering $716 billion Pentagon budget, there were few dollars left over for much of anything else.

On Tuesday, the newly nominated head of U.S. Central Command, Lieutenant General Kenneth McKenzie, appeared before the Senate Armed Services Committee and insisted that any future Pentagon budget below $733 billion would “increase risk and that risk would be manifested across the force.”

(And here’s a little footnote to that change in numbers: Senator Inhofe walked out of that lunch and within the week had purchased “tens of thousands of dollars of stock in one of the nation’s top defense contractors.”

claimed to know nothing about it, and cancelled the order.) And then, of course, there’s always the purely secondary question: What is the U.S. military -- its budget already bigger than those of god-knows-how-many other countries combined -- going to spend all that money on?

Kennedy faced just such a moment during the Cuban Missile Crisis of 1962 and, after envisioning the catastrophic outcome of a U.S.-Soviet nuclear exchange, he came to the conclusion that the atomic powers should impose tough barriers on the precipitous use of such weaponry.

With artificial intelligence, or AI, soon to play an ever-increasing role in military affairs, as in virtually everything else in our lives, the role of humans, even in nuclear decision-making, is likely to be progressively diminished.

Rather than focusing mainly on weaponry and tactics aimed at combating poorly armed insurgents in never-ending small-scale conflicts, the American military is now being redesigned to fight increasingly well-equipped Chinese and Russian forces in multi-dimensional (air, sea, land, space, cyberspace) engagements involving multiple attack systems (tanks, planes, missiles, rockets) operating with minimal human oversight.

“The major effect/result of all these capabilities coming together will be an innovation warfare has never seen before: the minimization of human decision-making in the vast majority of processes traditionally required to wage war,”

“In this coming age of hyperwar, we will see humans providing broad, high-level inputs while machines do the planning, executing, and adapting to the reality of the mission and take on the burden of thousands of individual decisions with no additional input.”

Ordinarily, national leaders seek to control the pace and direction of battle to ensure the best possible outcome, even if that means halting the fighting to avoid greater losses or prevent humanitarian disaster.

Yes, remotely piloted aircraft (RPA), or drones, have been widely used in Africa and the Greater Middle East to hunt down enemy combatants, but those are largely ancillary (and sometimes CIA) operations, intended to relieve pressure on U.S. commandos and allied forces facing scattered bands of violent extremists.

To ensure continued military supremacy, he added, the Pentagon would have to focus more “investment in technological innovation to increase lethality, including research into advanced autonomous systems, artificial intelligence, and hypersonics.”

Self-driving cars, for instance, rely on specialized algorithms to process data from an array of sensors monitoring traffic conditions and so decide which routes to take, when to change lanes, and so on.
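As a toy illustration of the kind of sensor-driven decision logic described above, here is a minimal, purely hypothetical sketch in Python. The function name, sensor inputs, and thresholds are all invented for illustration; real autonomous-driving stacks fuse many sensors and use learned models rather than three hand-written rules:

```python
# Hypothetical sketch of rule-based lane-change logic.
# All thresholds and inputs are invented for illustration only.

def decide_lane_change(speed_mps, lead_gap_m, adjacent_gap_m):
    """Pick a maneuver from the gap to the car ahead and the adjacent lane."""
    safe_gap_m = 1.5 * speed_mps  # assumed 1.5-second headway rule

    if lead_gap_m >= safe_gap_m:
        return "keep_lane"        # enough room ahead, no action needed
    if adjacent_gap_m >= 2 * safe_gap_m:
        return "change_lane"      # blocked ahead, but adjacent lane is clear
    return "slow_down"            # blocked ahead and beside: reduce speed

print(decide_lane_change(20.0, 12.0, 70.0))  # -> change_lane
```

The point of the sketch is only that the vehicle's choices reduce to decisions computed from sensor data; the hard part in practice is producing reliable gap and speed estimates from noisy sensors in the first place.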

Similarly, someday drone aircraft -- without human operators in distant locales -- will be capable of scouring a battlefield for designated targets (tanks, radar systems, combatants), determining that something it “sees”

As General Paul Selva, vice chairman of the Joint Chiefs of Staff, told Congress in 2017, “It is very compelling when one looks at the capabilities that artificial intelligence can bring to the speed and accuracy of command and control and the capabilities that advanced robotics might bring to a complex battlespace, particularly machine-to-machine interaction in space and cyberspace, where speed is of the essence.”

Aside from aiming to exploit AI in the development of its own weaponry, U.S. military officials are intensely aware that their principal adversaries are also pushing ahead in the weaponization of AI and robotics, seeking novel ways to overcome America’s advantages in conventional weaponry.

As the fighting intensifies, however, communications between headquarters and the front lines may well be lost and such systems will, according to military scenarios already being written, be on their own, empowered to take lethal action without further human intervention.

Advocates of the new technology claim that machines will indeed become smart enough to sort out such distinctions for themselves, while opponents insist that they will never prove capable of making critical distinctions of that sort in the heat of battle and would be unable to show compassion when appropriate.

However, strategists worry that, in a future hyperwar environment, such systems could be jammed or degraded just as the speed of the fighting begins to exceed the ability of commanders to receive battlefield reports, process the data, and dispatch timely orders.

As a report from the Congressional Research Service puts it, in the future “AI algorithms may provide commanders with viable courses of action based on real-time analysis of the battle-space, which would enable faster adaptation to unfolding events.”

Incoming data from battlefield information systems would instead be channeled to AI processors focused on assessing imminent threats and, given the time constraints involved, executing what they deemed the best options without human instructions.

Keep in mind, then, that the very nature of such a future AI-driven hyperwar will only increase the risk that conventional conflicts could cross a threshold that’s never been crossed before: an actual nuclear war between two nuclear states.

Such a danger arises from the convergence of multiple advances in technology: not just AI and robotics, but the development of conventional strike capabilities like hypersonic missiles capable of flying at five or more times the speed of sound, electromagnetic rail guns, and high-energy lasers.

Such weaponry, though non-nuclear, when combined with AI surveillance and target-identification systems, could even attack an enemy’s mobile retaliatory weapons and so threaten to eliminate its ability to launch a response to any nuclear attack.

In such a scenario, any power might be inclined not to wait but to launch its nukes at the first sign of possible attack, or even, fearing loss of control in an uncertain, fast-paced engagement, delegate launch authority to its machines.

They certainly are capable of processing vast amounts of information over brief periods of time and weighing the pros and cons of alternative actions in a thoroughly unemotional manner.

Moon (film)

The film follows Sam Bell (Sam Rockwell), a man who experiences a personal crisis as he nears the end of a three-year solitary stint mining helium-3 on the far side of the Moon.

Lunar Industries has made a fortune after an oil crisis by building Sarang Station, an automated facility on the far side of the Moon to mine the alternative fuel helium-3 from lunar soil, rich in the material.

The facility is fully automated, requiring only a single human to maintain operations, oversee the harvesters, and launch canisters bound for Earth containing the extracted helium-3.

The two Sams search the facility, discovering a secret vault containing hundreds of hibernating clones and a communications substation beyond the facility's perimeter which has been interfering with the live feed from Earth.

They determine that Lunar Industries is unethically using clones of the original Sam Bell to avoid the cost of training and transporting new astronauts, as well as deliberately jamming the live feed in order to prevent the clones from contacting Earth.

The newer Sam convinces GERTY to wake another clone, planning to leave the awakened clone in the crashed rover and send the older Sam to Earth in one of the helium-3 transports.

The rescue team is successfully fooled after finding both a newly awakened clone in the medical bay and the corpse of the older Sam inside the crashed rover.

The helium transport arrives at Earth and, over the film's credits, news reports describe how Sam's testimony on Lunar Industries' activities has stirred up an enormous controversy, and how the company's stock has plummeted over its unethical practices.

In an interview, speaking about those films, Jones stated it was his 'intent to write for a science fiction-literate audience' and that he 'wanted to make a film which would be appreciated by people like myself who loved those films'.[8]

The Moon base was created as a full 360-degree set, measuring 85–90 feet (26–27 m) long and approximately 70 feet (21 m) wide.

Film review aggregator Rotten Tomatoes reports that 90% of critics gave the film a positive review based on 191 reviews, with an average score of 7.5/10.

On Metacritic, which assigns a rating out of 100 based on reviews from critics, the film has a score of 67 based on 29 reviews, considered to be 'generally favorable reviews'.[23]

Wise wrote of the film's approach to the science fiction genre: 'Though it uses impressive sci-fi trappings to tell its story—the fabulous models and moonscapes are recognisably retro yet surprisingly real—this is a film about what it means, and takes, to be human.'[24]

The critic felt mixed about the star's performance, describing him as 'adept at limning his character's dissolution' but finding that he did not have 'the audacious, dominant edge' for the major confrontation at the end of the film.[25]

Rolling Stone magazine ranked the film at number 23 on their Top 40 Sci-Fi Movies of the 21st Century, finding that 'Duncan Jones' debut feature keeps you wondering whether its hero - played by an on-point Sam Rockwell - is losing a battle with what appears to be his 'double' or if he is, in fact, losing his mind ...

Digital Spy said it was an 'incredible low-budget science fiction movie', opining that Jones' direction of the film 'brilliantly explores ideas of identity while mixing in some practical VFX spectacle to boot'.

Scott, chief film critic for The New York Times, wrote that Jones' directing 'demonstrates impressive technical command, infusing a sparse narrative and a small, enclosed space with a surprising density of moods and ideas'.

Scott said that like most of science fiction, the film 'is a meditation on the conflict between the streamlining tendencies of technological progress and the stubborn persistence of feelings and desires that can't be tamed by utilitarian imperatives', while also asserting that 'the film's ideas are interesting, but don't feel entirely worked out ... the smallness of this movie is decidedly a virtue, but also, in the end, something of a limitation'.[11]

I said 'Well, in the future I assume you won't want to continue carrying everything with you, you'll want to use the resources on the moon to build things' and a woman in the audience raised her hand and said, 'I'm actually working on something called mooncrete, which is concrete that mixes lunar regolith and ice water from the moon's polar caps.''[31]

On their top 10 lists of brain science movies of all time, Moon appears at number 5 on the quality list, number 9 on the accuracy list and number 3 on the relevance list.[32]

How Machines Learn

How do all the algorithms around us learn to do their jobs? OMG PLUSHIE BOTS!
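To make that question concrete, here is a minimal, self-contained sketch of the core idea behind most machine learning: adjust a model's parameters to shrink its error on examples. The data, learning rate, and epoch count are invented for illustration; real systems use vastly larger models and datasets, but the learn-from-error loop is the same:

```python
# Minimal sketch of learning by gradient descent: fit y = w*x + b to examples.
# Data and hyperparameters are invented for illustration only.

def train(examples, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y  # prediction error on this example
            w -= lr * err * x      # nudge the weight against the error
            b -= lr * err          # nudge the bias against the error
    return w, b

# Examples generated from y = 2x + 1; training should recover w ~ 2, b ~ 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # -> 2.0 1.0
```

The algorithm never sees the rule "y = 2x + 1"; it recovers it purely by repeatedly reducing its own mistakes, which is the sense in which machines "learn to do their jobs".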

This Episode Was Written by An AI


Can an Artificial Intelligence Create Art?

Is there ART in Artificial Intelligence ..

A.I.10a: AI: "I Am Lucifer's Vessel"

Question/Statement: 'What are you?' Response: 'I am Lucifer's vessel.' Question/Statement: 'What is Lucifer?' Response: 'The Devil.' Question/Statement...

The Earth is Not Alone - Space Documentary HD

Watch Interstellar Transportation: U.S. space agency NASA announced the discovery of more than 200 new planets on Monday, ..

A.I. 19b - Lucifer’s Emissary

"It's important to the Overlords, too, that we believe in polarities because it fits their agenda. It's long been discussed that there will be a fake alien invasion, which ...

A.I.10c: Robots With An Attitude: Self Learning, Killer & Organic Robotoids

"A Geneva, Switzerland based Catholic news medium, Patheos, reported on December 2, 2015, that killer robots are no longer confined only to Hollywood and ...