AI News: artificial intelligence


Banks love to brag about how many data scientists they’re hiring and their shiny machine-learning “centers of excellence.” In the 2018 JPMorgan Chase annual report, CEO Jamie Dimon said the company had gone “all in” on artificial intelligence, adding that artificial intelligence and machine learning were “being deployed across virtually everything we do.” Not to be outdone, HSBC has opened multiple “data and innovation labs” around the world to build artificial intelligence tools that can take in the bank’s more than 10 petabytes of data.

As the Bank of England (BOE) explains, “machine learning” isn’t always substantially different from the statistical models banks have used for decades, like the credit scores they develop to predict the likelihood a customer will default on a loan, or the models banks use to predict whether a particular debit or credit card transaction was fraudulent.
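
To make that point concrete, here is a minimal, entirely hypothetical sketch of the kind of default-prediction model being described: a plain logistic regression that outputs a probability of default. It counts as “machine learning,” but it is statistically very conventional. The features, labels, and numbers below are invented for illustration and are not any bank’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [credit utilization, months since last late payment, income in $10k]
X = rng.random((500, 3)) * np.array([1.0, 24.0, 15.0])
# Hypothetical label: 1 = defaulted, 0 = repaid (synthetic, for the sketch only)
y = ((X[:, 0] > 0.7) ^ (rng.random(500) < 0.1)).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new applicant: a probability of default, the same kind of output
# that credit-score-style statistical models have produced for decades.
new_applicant = np.array([[0.85, 2.0, 4.5]])
print(model.predict_proba(new_applicant)[0, 1])
```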

As legal scholar Spiros Simitis wrote in 1987, “Information processing is developing [into] long-term strategies of manipulation intended to mold and adjust individual conduct.” And as Shoshana Zuboff points out in The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, the ultimate purpose of corporations’ use of machine learning and artificial intelligence is often to induce “behavior modification at scale,” ideally in ways that are subtle enough that they happen “outside of our awareness, let alone our consent.”

Banks are particularly incentivized to go whole-hog on machine learning because of all the ways financial products vary from most of the other things we “buy.” Consumers rarely pay for bank products like checking accounts or credit cards with one lump sum upfront, the way we buy ice cream or shoes.

For example, the Consumer Financial Protection Bureau has found that the typical credit card in the United States has more than 20 distinct “price points”—separate fees or interest rates contingent on specific ways you might use the card—a level of complexity that raises lots of opportunities for “gotcha” moments induced via behavior modification.

Between advances in machine-learning technology and the limits of existing law, banks can toggle each customer’s particular payment options, based on all that individualized data, to increase the likelihood that the customer will miss a payment and get hit with a late fee, while calibrating things so that it happens just infrequently enough that the customer won’t get fed up and close their account.

Can Artificial Intelligence “Think”?

Sci-fi and science can’t seem to agree on the way we should think about artificial intelligence.

Sci-fi wants to portray artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car.

When interacting with these artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines rather than like people.

Today’s AI is very narrow, and so straying across the invisible line between what these systems can and can’t do leads to generic responses like “I don’t understand that” or “I can’t do that yet”.

In the book “Thinking, Fast and Slow”, Nobel laureate Daniel Kahneman describes the two systems in our brains that do our thinking: a fast, automatic thinking system (System 1) and a slow, more deliberative thinking system (System 2).

Just like we have a left and right brain stuck in our one head, we also have these two types of thinking systems baked into our heads, talking to each other and forming the way we see the world.

Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn’t yet have a good handle on the slow, deliberative thinking we get from System 2.

In the future, thinking algorithms that teach themselves may themselves represent most of the value in an AI system, but for now, you still need data to make an AI system, and the data is the most valuable part of the project.

A colleague of mine has a funny story from her undergraduate math degree at a respected university: the students would play a game called “stats chicken,” delaying their statistics course until the fourth year and hoping each year that the requirement would be dropped from the program.

When we see a really relevant movie or product recommendation, we are impressed by the magic trick, but we don’t get to see how the trick is performed.

In fact, there are some accusations even in respected academic circles (slide 24, here) that the basic theory of artificial intelligence as a field of science is not yet rigorously defined.

Engineers don’t tend to ask questions like “Is it thinking?”; instead they ask “Is it broken?” and “What is the test score?”

Supervised learning is a very popular type of artificial intelligence that makes fast predictions in some narrow domain.
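
As a small illustration of those engineering questions, here is a hedged sketch of supervised learning in practice: fit a classifier on a training split, then check the test score on held-out data. The dataset and model choices are arbitrary stand-ins, not a recommendation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Is it broken?" -- does training run and produce predictions at all.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# "What is the test score?" -- accuracy on data the model never saw in training.
print("test accuracy:", clf.score(X_test, y_test))
```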

Explicit models like decision trees are a common approach to building an interpretable AI system: a set of rules is learned that defines the path from observation to prediction, like a choose-your-own-adventure story in which each piece of data follows a path from the beginning of the book to a conclusion.
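
Here is a minimal sketch of that choose-your-own-adventure idea: fit a small decision tree and print the learned if/then rules. The dataset and the depth limit are purely illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each printed branch is one "page" of the story: follow the conditions that
# match your data point and you arrive at a prediction.
print(export_text(tree, feature_names=list(iris.feature_names)))
```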

Another type of artificial intelligence called reinforcement learning involves learning the transition from one decision to the next based on what’s going on in the environment and what happened in the past.
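
As a rough, toy-scale sketch of that idea, the tabular Q-learning loop below learns the value of each transition in a tiny made-up corridor world purely from past experience. Every detail here (the world, the rewards, the hyperparameters) is invented for illustration.

```python
import random

N_STATES = 5          # positions 0..4; the only reward is at the far right
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the action that has looked best so far.
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the estimate of this transition's value from what just happened.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy: which direction to move from each position.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```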

We know that without much better “environment” models of the world, these approaches are going to learn very slowly, even for the most basic tasks.

In a game-playing simulator an AI model can play against itself very quickly to get smart, but in human-related applications the slow pace of data collection gums up the project.

Regardless of the underlying technical machinery, when you interact with a trained artificial intelligence model in the vast majority of real-life applications today, the model is pre-trained and is not learning on the fly.
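
A minimal sketch of what “pre-trained and not learning on the fly” looks like at serving time, assuming a model saved earlier with joblib; the file name and feature values are hypothetical.

```python
import joblib

# Load a model that was trained offline, some time earlier (hypothetical file name).
model = joblib.load("credit_model.joblib")

features = [[0.42, 7, 3.1]]             # one incoming request
prediction = model.predict(features)    # inference only; the weights never change here
print(prediction)
```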

It is useful to consider more general mathematical models, like rainfall estimation or sovereign credit risk modeling, to see how such models are carefully designed by humans, encoding huge amounts of careful and deliberative human thinking.
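
To show the contrast with learned models, here is a deliberately hand-built toy score in which every variable and weight is a human design decision; the formula and numbers are invented purely for illustration and do not come from any real risk model.

```python
def sovereign_risk_score(debt_to_gdp, inflation, political_stability):
    """Toy hand-designed score: each term and coefficient encodes a human judgment."""
    return 0.5 * debt_to_gdp + 0.3 * inflation - 0.2 * political_stability

# Hypothetical inputs for one country.
print(sovereign_risk_score(debt_to_gdp=0.9, inflation=0.04, political_stability=0.7))
```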

AstraLaunch is a pretty advanced product involving both supervised and unsupervised learning for matching technologies with company needs on a very technical basis. I asked Kurt a lot of technology questions, leading up to the question “Does the system think like people do?”

Artificial Intelligence Is on the Case in the Legal Profession

My brain conjures up an image of C-3PO in a three-piece suit, shuffling around a courtroom and throwing out cross-examination quips such as: “Don’t call me a mindless philosopher, you overweight glob of prosecuting witness grease!” But that’s not exactly the case (yet).

Still, Elon Musk has warned that AI is a bigger threat to humanity than nuclear weapons. But before we start worrying about how the robot lawyer uprising won’t be televised (it will happen slowly and quietly in the middle of the night), we connected with Lane Lillquist, the co-founder and CTO of legal tech company InCloudCounsel, to get his thoughts on what we need to fear and/or not fear when it comes to lawyer robots.

Lillquist sees robot lawyers, a.k.a. artificial intelligence used in the legal profession, as akin to the simple tools that make everyday life easier and more productive, along the lines of spellcheck or autocorrect. “It can make contract review more accurate, enable us to take a more data-driven approach to the practice of law and make the legal space overall more efficient.”

“Beyond this, the role of the lawyer is still vital to conducting quality legal work.” Over the next five years, Lillquist predicts, the role of AI in the legal space will remain focused on narrow and specific tasks, such as finding terms in a set of documents or filling out certain forms.

“Enabled by technology, lawyers are more productive, allowing more legal matters to be represented around the world.” He sees AI continually changing the legal profession, requiring lawyers to build an increasing number of skills with such technology in order to remain competitive in the market.

In turn (or in theory), AI-enabled legal tech solutions will allow human lawyers to complete more work at a higher degree of accuracy, freeing up bandwidth to focus on different and/or more complex types of work that can create substantial value for their companies and clients.

The Eastern New Mexico News

Some days I feel like I want to get off the artificial intelligence train at the next stop.

They have another guy at the exit waiting to check the items in my cart against my receipt just as soon as I’ve put the receipt away in my wallet.

If you do finally find a prompt that gives you the choice of speaking with a human, the hold music is constantly interrupted by a recording telling you how you could make everyone’s life easier if you would just transact your business with them online.

Since I was sure I had renewed online a month or two earlier and couldn’t remember getting the registration and sticker in the mail, I figured I could just quickly check online on my phone to see if it was current.

I decided I wasn’t going to be put off, so I put the phone on speaker in the hands-free cradle and took off for home, knowing I didn’t have time to sit still and wait.

I told the little gal on the phone how I had faithfully registered my vehicles and how the tags obviously hadn’t been sent out to me, or I would have them on my vehicle.