
Will Artificial Intelligence Enhance or Hack Humanity?

This week, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence.

Yuval Noah Harari: Yeah, so I think what's happening now is that the philosophical framework of the modern world, which was established in the 17th and 18th centuries around ideas like human agency and individual free will, is being challenged like never before.

And that's scary, partly because unlike philosophers, who are extremely patient people who can discuss something for thousands of years without reaching any agreement and be fine with that, the engineers won't wait.

And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans.

And maybe I’ll explain what it means, the ability to hack humans: to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me.

And this is something that our philosophical baggage and all our belief in, you know, human agency and free will, and the customer is always right, and the voter knows best, it just falls apart once you have this kind of ability.

So our immediate fallback position is to fall back on the traditional humanist ideas: the customer is always right, the customers will choose the enhancement.

One of the things—I've been reading Yuval’s books for the past couple of years and talking to you—and I'm very envious of philosophers now because they can propose questions but they don't have to answer them.

When you said the AI crisis, I was sitting there thinking, this is a field I loved and feel passionate about and have researched for 20 years, and back then it was just the scientific curiosity of a young scientist entering a PhD in AI.

It's still a budding science compared to physics, chemistry, biology, but with the power of data, computing, and the kind of diverse impact AI is making, it is, like you said, touching human lives and business in broad and deep ways.

And in responding to those kinds of questions and crises facing humanity, I think one of the proposed solutions, one that Stanford is making an effort about, is: can we reframe the education, the research and the dialog of AI and technology in general in a human-centered way?

We're not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many other disciplines into the study and development of AI in the next chapter, in the next phase.

And Yuval has very clearly laid out what he thinks is the most important one, which is the combination of biology plus computing plus data leading to hacking.

So absolutely we need to be concerned and because of that, we need to expand the research, and the development of policies and the dialog of AI beyond just the codes and the products into these human rooms, into the societal issues.

YNH: That's the moment when you can really hack human beings, not by collecting data about our search words or our purchasing habits, or where do we go about town, but you can actually start peering inside, and collect data directly from our hearts and from our brains.

Something like the totalitarian regimes that we have seen in the 20th century, but augmented with biometric sensors and the ability to basically track each and every individual 24 hours a day.

When the extraterrestrial evil robots are about to conquer planet Earth, and nothing can resist them, resistance is futile, at the very last moment, humans win because the robots don’t understand love.

This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists, social scientists, to business leaders, to civil society, to governments, to come at the same table to have that multilateral and cooperative conversation.

But if somebody comes along and tells me, "Well, you need to maximize human flourishing, or you need to maximize universal love," I don't know what it means. So the engineers go back to the philosophers and ask them, "What do you actually mean?" Which, you know, a lot of philosophical theories collapse around that, because they can't really explain that, and we need this kind of collaboration.

If we can't explain and we can't code love, can artificial intelligence ever recreate it, or is it something intrinsic to humans that the machines will never emulate?

So machines don’t like to play Candy Crush, but they can still—

NT: So you think this device, in some future where it's infinitely more powerful than it is right now, it could make me fall in love with somebody in the audience?

YNH: That goes to the question of consciousness and mind, and I don't think that we have the understanding of what consciousness is to answer the question whether a non-organic consciousness is possible or is not possible, I think we just don't know.

If you accept that something like love is, in the end, a biological process in the body, and if you think that AI can provide us with wonderful healthcare by being able to monitor and predict something like the flu, or something like cancer, what's the essential difference between flu and love?

In the sense of: is this biological, or is this something else, so separated from the biological reality of the body that even if we have a machine capable of monitoring or predicting flu, it still lacks something essential in order to do the same thing with love?

One is that AI is so omnipotent that it has achieved a state beyond predicting anything physical; it's getting to the consciousness level, it's getting even to the ultimate level of capability, love.

The second, related assumption, which I feel our conversation is based on, is that we're talking about a world, or a state of the world, in which only that powerful AI exists, or in which only the small group of people who have produced the powerful AI and intend to hack humans exists.

I mean, humanity in its history has faced so much technology; if we had left it in the hands of a bad player alone, without any regulation, multinational collaboration, rules, laws, or moral codes, that technology could have, maybe not hacked humans, but destroyed humans or hurt humans in massive ways.

And that brings me to your topic that in addition to hacking humans at that level that you're talking about, there are some very immediate concerns already: diversity, privacy, labor, legal changes, you know, international geopolitics.

So from the three components of biological knowledge, computing power and data, I think data is the easiest, and it's also very difficult, but still the easiest kind to regulate, to protect.

We just need the AI to know us better than we know ourselves, which is not so difficult because most people don't know themselves very well and often make huge mistakes in critical decisions.

YNH: So imagine, this is not like a science fiction scenario of a century from now, this can happen today that you can write all kinds of algorithms that, you know, they're not perfect, but they are still better, say, than the average teenager.

Let's talk about what we can do today, as we think about the risks of AI, the benefits of AI, and tell us, you know, sort of your punch list of what you think the most important things we should be thinking about with AI are.

I was just thinking about your comment about AI's dependence on data and how the policy and governance of data should emerge in order to regulate and govern AI's impact.

We should be investing in the development of less data-dependent AI technology that will take into consideration intuition, knowledge, creativity, and other forms of human intelligence.

And I just feel very proud, within the short few months since the birth of this institute, there are more than 200 faculty involved on this campus in this kind of research, dialog, study, education, and that number is still growing.

There are so many scenarios where this technology can be potentially, positively useful, but only with that kind of explainable capability, so we've got to try, and I'm pretty confident that, with a lot of smart minds out there, this is a crackable thing.

Most humans, when they are asked to explain a decision, they tell a story in a narrative form, which may or may not reflect what is actually happening within them.

And the AI gives this extremely long statistical analysis based not on one or two salient features of my life, but on 2,517 different data points, which it took into account and gave different weights.

You applied for a loan on Monday, and not on Wednesday, and the AI discovered that for whatever reason, it's after the weekend, whatever, people who apply for loans on a Monday are 0.075 percent less likely to repay the loan.
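
To make the explainability problem concrete, here is a minimal, hypothetical sketch (plain scikit-learn on synthetic data, not any bank's actual system) of a loan-scoring model whose only honest "explanation" is thousands of weighted features:

```python
# A toy illustration of why an algorithmic loan decision resists a human-sized
# explanation: the "reason" is a weighted sum over thousands of features,
# not one or two salient facts. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 2517                                  # hypothetical data points per applicant
X = rng.normal(size=(3000, n_features))            # synthetic applicant histories
y = (X @ rng.normal(size=n_features) + rng.normal(size=3000) > 0).astype(int)  # synthetic repayment labels

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = rng.normal(size=(1, n_features))
score = model.predict_proba(applicant)[0, 1]
print(f"approval score: {score:.3f}")

# The full "explanation" is 2,517 coefficients; even the top contributors
# (feature index, weight * value) rarely map onto a story a person can tell.
contributions = model.coef_[0] * applicant[0]
top = np.argsort(np.abs(contributions))[::-1][:5]
for i in top:
    print(f"feature {i}: contribution {contributions[i]:+.4f}")
```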

The first point, I agree with you, if AI gives you 2,000 dimensions of potential features with probability, it's not understandable, but the entire history of science in human civilization is to be able to communicate the results of science in better and better ways.

I think science is getting worse and worse in explaining its theories and findings to the general public, which is the reason for things like doubting climate change, and so forth.

And the human mind wasn't adapted to understanding the dynamics of climate change, or the real reasons for refusing to give somebody a loan.

And it's true for, I mean, it's true for the individual customer who goes to the bank and the bank refused to give them a loan.

YNH: So what does it mean to live in a society where the people who are supposed to be running the business… And again, it's not the fault of a particular politician, it's just the financial system has become so complicated.

You have some of the wisest people in the world, going to the finance industry, and creating these enormously complex models and tools, which objectively you just can't explain to most people, unless first of all, they study economics and mathematics for 10 years or whatever.

That's part of what's happening, that we have these extremely intelligent tools that are able to make perhaps better decisions about our healthcare, about our financial system, but we can't understand what they are doing and why they're doing it.

But before we leave this topic, I want to move to a very closely related question, which I think is one of the most interesting, which is the question of bias in algorithms, which is something you've spoken eloquently about.

I mean, I'm not going to have the answers personally, but I think you touch on the really important question, which is, first of all, machine learning system bias is a real thing.

You know, like you said, it starts with data, it probably starts with the very moment we're collecting data and the type of data we’re collecting all the way through the whole pipeline, and then all the way to the application.

At Stanford, we have machine learning scientists studying the technical solutions of bias, like, you know, de-biasing data or normalizing certain decision making.
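
As one illustration of what "de-biasing data" can mean in practice, here is a minimal sketch of sample reweighting, where training examples are weighted so that a protected attribute and the outcome label become statistically independent; the column names and numbers are invented for the example:

```python
# A toy reweighting scheme: weight = expected joint probability under
# independence divided by the observed joint probability. The resulting
# weights can be passed to most scikit-learn estimators via sample_weight.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a"],   # hypothetical protected attribute
    "label": [  1,   0,   1,   0,   0,   1,   0,   1],   # hypothetical outcome
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

df["weight"] = df.apply(
    lambda r: (p_group[r["group"]] * p_label[r["label"]]) / p_joint[(r["group"], r["label"])],
    axis=1,
)
print(df)
```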

And I also want to point out that you've already used a very closely related example, a machine learning algorithm has a potential to actually expose bias.

You know, one of my favorite studies was a paper a couple of years ago analyzing Hollywood movies and using a machine learning face-recognition algorithm, which is a very controversial technology these days, to show that Hollywood systematically gives more screen time to male actors than female actors.

No human being can sit there and count all the frames of faces to determine whether there is gender bias, and this is a perfect example of using machine learning to expose bias.
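
A toy sketch of that measurement idea, with a placeholder standing in for the face-detection model (which is entirely hypothetical here), might look like this:

```python
# Tally on-screen seconds per detected category across video frames.
# `detect_faces` is a stand-in for a real detection/attribute model, and
# automated gender inference is itself a contested practice.
from collections import Counter

def detect_faces(frame):
    # Placeholder: in this toy sketch, a "frame" is already a list of labels
    # that a real detector might emit, one per detected face.
    return frame

def screen_time(frames, fps=24):
    seconds = Counter()
    for frame in frames:
        for label in detect_faces(frame):
            seconds[label] += 1 / fps
    return seconds

# Toy usage: three "frames" with per-face labels.
print(screen_time([["male"], ["male", "female"], ["male"]]))
```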

So in general there's a rich set of issues we should study, and again, bring the humanists, bring the ethicists, bring the legal scholars, bring the gender-studies experts.

I mean, it's a question we, again, we need this day-to-day collaboration between engineers and ethicists and psychologists and political scientists.

NT: But not biologists, right?

So we should teach ethics to coders as part of the curriculum; the people in the world today who most need a background in ethics are the people in the computer science departments.

And also in the big corporations which are designing these tools, people with backgrounds in things like ethics and politics should be embedded within the teams, so that they always think in terms of what biases we might inadvertently be building into our system.

It shouldn't be a kind of afterthought that you create this neat technical gadget, it goes into the world, something bad happens, and then you start thinking, "Oh, we didn't see this one coming."

You know, when we put on the seatbelt in our car before driving, that's a little bit of an efficiency loss, because I have to do the seat belt movement instead of just hopping in and driving.

So you're not talking about just any economic competition between the different textile industries or even between different oil industries, like one country deciding it doesn't care about the environment at all and will just go full gas ahead while the other countries are much more environmentally aware.

But this is part of, I think, of our job, like with the nuclear arms race, to make people in different countries realize that this is an arms race, that whoever wins, humanity loses.

So if that is the parallel, then what might happen here is we’ll try for global cooperation in 2019, 2020, and 2021, and then we’ll be off in an arms race.

You could say that the nuclear arms race actually saved democracy and the free market and, you know, rock and roll and Woodstock and then the hippies and they all owe a huge debt to nuclear weapons.

I do want to point out, it is very different because at the same time as you're talking about these scarier situations, this technology has a wide international scientific collaboration that is being used to make transportation better, to improve healthcare, to improve education.

And so it's a very interesting new time that we haven't seen before because while we have this kind of competition, we also have massive international scientific community collaboration on these benevolent uses and democratization of this technology.

So even in terms of, you know, without this scary war scenario, we might still find ourselves with a global exploitation regime, in which the benefits, most of the benefits, go to a small number of countries at the expense of everybody else.

Any paper that is a basic science research paper in AI today, or any technique produced, let's say this week at Stanford, is easily globally distributed through this thing called arXiv, or a GitHub repository, or—

YNH: The information is out there.

And if you look beyond Europe, you think about Central America, you think about most of Africa, the Middle East, much of Southeast Asia, it’s, yes, the basic scientific knowledge is out there, but this is just one of the components that go to creating something that can compete with Amazon or with Tencent, or with the abilities of governments like the US government or like the Chinese government.

NT: Let me ask you about that, because it's something three or four people have asked in the questions, which is, it seems like there could be a centralizing force of artificial intelligence that will make whoever has the data and the best computer more powerful and it could then accentuate income inequality, both within countries and within the world, right?

You can imagine the countries you've just mentioned, the United States, China, Europe lagging behind, Canada somewhere behind, way ahead of Central America, it could accentuate global income inequality.

We are talking about the potential collapse of entire economies and countries, countries that depend on cheap manual labor, and they just don't have the educational capital to compete in a world of AI.

I mean, if, say, you shift back most production from, say, Honduras or Bangladesh to the USA and to Germany, because the human salaries are no longer part of the equation and it's cheaper to produce the shirt in California than in Honduras, so what will the people there do?

One of the things we over and over noticed, even in this process of building the community of human-centered AI and also talking to people both internally and externally, is that there are opportunities for businesses around the world and governments around the world to think about their data and AI strategy.

There are still many opportunities outside of the big players, in terms of companies and countries, to really come to the realization that it's an important moment for their country, for their region, for their business, to transform into this digital age.

And I think when you talk about these potential dangers and lack of data in parts of the world that haven't really caught up with this digital transformation, the moment is now and we hope to, you know, raise that kind of awareness and encourage that kind of transformation.

I mean, what we are seeing at the moment is, on the one hand, what you could call a kind of data colonization: the same model that we saw in the 19th century, where you have the imperial hub with the advanced technology; they grow the cotton in India or Egypt, they send the raw materials to Britain, they produce the shirts, the high-tech industry of the 19th century, in Manchester, and they send the shirts back to sell them in India and outcompete the local producers.

And the next question is: you have the people here at Stanford who will help build these companies, who will either be furthering the process of data colonization or reversing it; the efforts to create a virtual wall, a world based on artificial intelligence, are being created, or at least funded, by a Stanford graduate.

And we did all this for the past 60 years for you guys, for the people who come through the door and who will graduate and become practitioners, leaders, and part of civil society, and that's really what the bottom line is about.

And it's also going to be written by those potential future policymakers who came out of Stanford’s humanities studies and Business School, who are versed in the details of the technology, who understand the implications of this technology, and who have the capability to communicate with the technologists.

YNH: On the individual level, I think it's important for every individual whether in Stanford, whether an engineer or not, to get to know yourself better, because you're now in a competition.

For engineers and students, I would say—I'll focus maybe on engineers—the two things that I would like to see coming out of the laboratories and the engineering departments are, first, tools that inherently work better in a decentralized system than in a centralized system.

But whatever it is, part of when you start designing the tool, part of the specification of what this tool should be like, I would say, this tool should work better in a decentralized system than in a centralized system.
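
The speakers don't name a specific technique, but federated learning is one concrete pattern that fits this specification. The sketch below is a toy illustration of federated averaging in plain NumPy (not the speakers' proposal): raw data never leaves each device, and only model updates are shared and averaged.

```python
# Toy federated averaging: five devices each hold private data for the same
# linear regression problem; only locally updated weights are pooled.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):                               # each device keeps its data locally
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # each round: local training, then averaging
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)

print("learned weights:", global_w)              # approaches [2, -1] without pooling raw data
```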

So, one project to work on is to create an AI sidekick, which I paid for, maybe a lot of money and it belongs to me, and it follows me and it monitors me and what I do in my interactions, but everything it learns, it learns in order to protect me from manipulation by other AIs, by other outside influencers.

FL: Not to get into technical terms, but I think you would feel confident to know that budding efforts in this kind of research are happening, you know: trustworthy AI, explainable AI, security-motivated or security-aware AI.

I don't think that's the question … I mean, it’s very interesting, very central, it has been central in Western civilization because of some kind of basically theological mistake made thousands of years ago.

Artificial Intelligence in Insurance – Three Trends That Matter

Readers should note that auto insurance is more than 40% of the insurance industry as a whole. (For readers with a strong interest in other financial applications of AI, please refer to our full article on machine learning applications in finance.)

Trends that business leaders should know about

In this article we look at three key ways that AI will drive savings for insurance carriers, brokers and policyholders, plugging into existing transformations within the insurance industry. Insurance as a global marketplace tends to be associated with public distrust (one Australian poll ranked sex workers as more trusted than the insurance industry), and this may present unique challenges to technology innovations.

We’ll begin with “behavioral pricing.” Here are three key ways that IoT data will enable personalized insurance pricing. Hypothesis: IoT disrupts insurance the same way that data science has been disrupting finance, moving analysis from proxy data to source data.

Today: IoT sensors allow insurance carriers to price coverage based on real events, in real time, using data linked to individuals rather than samples of data linked to groups.

Telematics sensors allow real-time tracking of an underlying asset (cars), allowing for the roll-out of a new product line in the related insurance market (auto insurance) by personalizing the risk of the event being insured (a car accident).
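
As an illustration of how such personalization could work, here is a minimal sketch of a usage-based pricing rule driven by telematics summaries; the feature names, weights, and discount bands are assumptions for the example, not any carrier's actual rating model:

```python
# Toy usage-based insurance pricing: convert per-trip telematics summaries
# into a 0-100 behavior score, then adjust a base premium around it.
def behavior_score(trips):
    """Score 0-100 from per-trip telematics summaries (higher = safer)."""
    score = 100.0
    for trip in trips:
        score -= 2.0 * trip["hard_brakes"]
        score -= 1.5 * trip["rapid_accels"]
        score -= 0.5 * trip["pct_night_miles"]        # percentage of miles driven at night
        score -= 3.0 * trip["pct_speeding_miles"]     # percentage of miles over the limit
    return max(0.0, min(100.0, score))

def personalized_premium(base_premium, trips, max_discount=0.30, max_surcharge=0.20):
    score = behavior_score(trips)
    if score >= 50:
        adjustment = -max_discount * (score - 50) / 50    # up to a 30% discount
    else:
        adjustment = max_surcharge * (50 - score) / 50    # up to a 20% surcharge
    return round(base_premium * (1 + adjustment), 2)

trips = [{"hard_brakes": 1, "rapid_accels": 0, "pct_night_miles": 10, "pct_speeding_miles": 2}]
print(personalized_premium(1200.00, trips))               # discounted annual premium
```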

A 2017 report from the National Association of Insurance Commissioners noted: “…UBI is an emerging area and thus there is still much uncertainty surrounding the selection and interpretation of driving data and how that data should be integrated into existing or new price structures to maintain profitability.”  But most customers who tried it seem to have loved it.

One consumer survey found: “…UBI participants provided more positive recommendations and more often indicated that these recommendations resulted in a friend, relative or colleague purchasing from their insurer compared with those customers who did not use a UBI program.” Some insurers offer discounts for participation in usage-based insurance programs to collect thousands of miles worth of monitored driving data.

21% of customers declined to participate in a UBI program when it was available and 81% of those respondents did so because they didn’t want their driving monitored, didn’t think they’d save money, or didn’t think their premiums would decrease.

That’s where platform marketplaces like the Next Generation Platform (NGP) by Octo Telematics come in, providing auto insurance carriers with an application programming interface (API) for driver behavior scores and crash and claim analysis, alongside specialized risk analytics for fleet managers and car rental companies.

The 2017 Excellence in Risk Management report found “…an apparent lack of awareness among many risk professionals on existing and emerging technologies including telematics, sensors, the Internet of Things (IoT), smart buildings and robotics, and their associated risks.” Markets could start moving fast as consumers trade IoT data for lower premiums.

As Joe Schneider, managing director of Corporate Finance for KPMG, wrote, detailing the shift in the auto insurance industry: “Once the massive market disruption begins and traditional insurance business models are flipped upside down, we expect significant turmoil.” It’s all about the sensor data.

Anyone trying to benchmark legacy players versus newcomers should answer this question: How well are a company’s business lines positioned to take advantage of sensor data originating from their policies’ underlying assets?

Here are the three key ways that AI will enhance the insurance buying experience. (Readers with an explicit interest in conversational interfaces may want to read our full article about 7 chatbot use cases that are working now.) You can now buy insurance with a selfie.

Speed and success in settling claims is a critical factor for insurance business efficiencies, as well as for customer satisfaction. Here are two key ways that AI will improve customer satisfaction after filing a claim. AI’s advantage seems to be most obvious in claims settlement.

An April 2017 Accenture survey found that 79% of insurance executives believe that: “…AI will revolutionize the way insurers gain information from and interact with their customers.” AI will likely bring faster claims settlement with decreased fraud.

That’s one reason why fraud detection is among the fastest areas of tech adoption in the insurance industry, with over 75 percent of the industry reporting having used an automated fraud detection technology in 2016.
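
A minimal sketch of automated fraud screening, using anomaly detection (scikit-learn's IsolationForest) over a few illustrative claim features, might look like this; real systems use far richer signals:

```python
# Flag unusual claims for human review. All claim data here is synthetic, and
# the three features (amount, days since policy start, prior claims) are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_claims = np.column_stack([
    rng.normal(3000, 800, 1000),      # claim amount
    rng.normal(400, 120, 1000),       # days between policy start and claim
    rng.poisson(1, 1000),             # prior claims count
])
suspicious = np.array([[25000, 12, 6], [18000, 5, 4]])   # large, early, frequent

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_claims)
flags = model.predict(suspicious)     # -1 = flagged as anomalous, 1 = looks normal
print(flags)                          # flagged claims would be routed to an adjuster
```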

The April 2017 Accenture survey found that this opinion is widespread: ”Insurance executives believe that artificial intelligence (AI) will significantly transform their industry in the next three years.” Whether telematics, autonomous vehicles, chatbots or customization platforms, the market will likely move towards firms that are able to best harness AI to improve the customer on-boarding and claim management process.

Building Stock Selection into an Artificial Intelligence Framework

Day-trading profits are being sucked out of the market by intelligent systems that trade in incredibly small time-frames.

Such a framework could put powerful Machine Learning algorithms into the user’s hand and potentially even allow the user to hand-craft strategies and automate their trading decisions.

Maybe you’re less interested in diving too deep and prefer to follow the mainstay metrics like P/E ratio, P/B ratio, D/E ratio, dividend yield, and earnings growth.

An AI-driven system that makes decisions based on your personal parameters for valuing a stock should be able to build your portfolio at t=0 and increase or decrease positions based on adherence to your parameters outlined at model inception.
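
As a sketch of what encoding "your personal parameters" could look like, here is a simple fundamentals screen with position sizing at t=0; the tickers, thresholds, and numbers are made up for the example, and a real system would refresh the screen whenever the metrics update:

```python
# Encode personal valuation rules as a screen over fundamentals, then size
# positions at inception. Everything here is illustrative placeholder data.
import pandas as pd

fundamentals = pd.DataFrame({
    "ticker":          ["AAA", "BBB", "CCC", "DDD"],
    "pe":              [14.0, 32.0, 9.0, 21.0],
    "pb":              [1.8, 6.0, 0.9, 3.2],
    "de":              [0.4, 1.5, 0.7, 0.2],
    "div_yield":       [0.025, 0.0, 0.04, 0.01],
    "earnings_growth": [0.08, 0.25, 0.02, 0.12],
}).set_index("ticker")

# Your parameters at model inception (hypothetical thresholds).
rules = (fundamentals["pe"] < 20) & (fundamentals["pb"] < 3) & (fundamentals["de"] < 1)

selected = fundamentals[rules]
weights = selected["div_yield"] + selected["earnings_growth"]   # simple scoring
weights = weights / weights.sum()                               # normalize to 100% of capital
print((weights * 100_000).round(2))                             # dollar allocation per position
```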

One can play with their network and add layers, add or remove neurons in the hidden layers, but ultimately the input and output layers are well determined.

We automate the system to fit these models to the 100 stocks each time the metrics are updated (so probably whenever the company reports earnings).

If we decide on a 75% threshold for justifying a buy decision, then maybe those stocks scoring above 75% receive a certain percentage of our allocated capital.
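
A hedged sketch of that rule, using a small scikit-learn neural network on synthetic data (placeholder features and labels, not a tested strategy), might look like this:

```python
# Refit a small network whenever fundamentals update, then allocate capital to
# stocks whose predicted "buy" probability clears the 75% threshold.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 100 stocks x 5 fundamental metrics (synthetic)
y = rng.integers(0, 3, size=100)              # 0 = sell, 1 = hold, 2 = buy (placeholder labels)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)

proba_buy = model.predict_proba(X)[:, list(model.classes_).index(2)]
buy_idx = np.where(proba_buy > 0.75)[0]       # the 75% threshold

capital = 100_000
if len(buy_idx):
    per_position = capital / len(buy_idx)     # equal-weight the qualifying stocks
    for i in buy_idx:
        print(f"stock {i}: allocate ${per_position:,.2f} (p_buy={proba_buy[i]:.2f})")
else:
    print("no stock clears the 75% threshold; hold cash")
```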

In this case, you’ve seen your profits shrink from about $50,000/yr (at .2% over 200 trading days for a starting portfolio of $100,000) to about $22,000/yr (at .02% over 200 trading days for a starting portfolio of $100,000).

Many firms are running into this issue where their strategy to stay competitive is to squeeze every possible bit of profits out of their positions without increasing risk.

Maybe you are exceptionally competent and have set up a system of alerts for your 20 favorite stocks and trading moving averages (and potentially other indicators as well) during key moments like crossovers.
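
For the crossover-alert setup described above, a minimal sketch (synthetic prices and pandas rolling means, with window lengths chosen only for illustration) could look like this:

```python
# Detect the bars where a fast moving average crosses a slow one and emit an
# alert; in practice prices would come from your data feed, per watched ticker.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum())   # one synthetic ticker

fast = prices.rolling(20).mean()
slow = prices.rolling(50).mean()

above = fast > slow
crossovers = above.ne(above.shift())                        # True where the relationship flips
crossovers &= slow.notna()                                  # ignore the warm-up period

signal = np.where(above[crossovers], "golden cross (fast above slow)",
                  "death cross (fast below slow)")

for day, label in zip(crossovers[crossovers].index, signal):
    print(f"day {day}: {label}")                            # would trigger an alert
```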

Much like an operator in a factory watching the human-machine interfaces controlling some process using automation, you are now the operator ensuring the machines operating your portfolio are not misbehaving.

But in a world where quant funds are outperforming all of the traditional investing firms, it’s hard to believe that the desire for profits will not drive more firms to mimic the behavior of quant funds.

I think it is realistic to imagine a future where markets are almost completely dominated by AI, algorithmic trading, and machine-driven market making and order matching.

I don’t think it’s too fantastical to claim that within the next few decades the vast majority of market participants will be artificial intelligences that represent the intentions of their makers.

I think this means AI will exist that can digest massively large amounts of market signals and data that a normal human PM could never dream of analyzing.

When AI dominate the market, any trader (human or otherwise) that looks to make a trade will send a signal that all the algorithms operating in the world will receive and digest accordingly.

Crafting models and simulating their performance is addictive, and I encourage anyone well-versed in machine learning, deep learning, etc., to take a stab at turning traditional value investing and technical analysis into broader machine-learning-based strategies.

Artificial Intelligence will never exist but it is much better than that | Charlie Vollmer | TEDxCSU

As a whole, society has never gotten worse off from technological disruption, yet we are scared out of our minds of Artificial Intelligence (AI). Most people don't ...

Artificial Intelligence: Mankind's Last Invention

Artificial Intelligence: Mankind's Last Invention - Technological Singularity Explained ...

How A.I. Traders Will Dominate Hedge Fund Industry | Marshall Chang | TEDxBeaconStreetSalon

We've seen fully automated bots beat us in Go, one-on-one poker, and Dota 2; now what's going to happen for trading financial markets? Listen to A.I. Capital ...

Artificial Intelligence, the History and Future - with Chris Bishop

Chris Bishop discusses the progress and opportunities of artificial intelligence research.

Artificial Intelligence for Investors: An Introduction

Meet your financial advisor powered by artificial intelligence: Artificial intelligence is a term used frequently but defined rarely

Robotics, Artificial Intelligence, Internet of Things, and Blockchain.

We look at the investment opportunity in blockchain with the partnership between VeChain ...

5 Ways The Real Estate Industry Won't Be The Same In The Future

The Digital Revolution is affecting many businesses, and real estate is no different. Let's talk about how the real estate industry & real estate investing will be ...

How Will Artificial Intelligence Affect Your Life | Jeff Dean | TEDxLA

In the last five years, significant advances were made in the fields of computer vision, speech recognition, and language understanding. In this talk, Jeff Dean ...

AI: What's Working, What's Not

It's the golden age of artificial intelligence (AI), a.k.a. machine learning, deep learning, and other distributed computing. But like every golden age, there's a gold ...

The incredible inventions of intuitive AI | Maurice Conti

What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come ...