AI News: artificial intelligence

How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.

Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator.

Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring.

“And they found that it was so biased against any female applicant that if you even had the word ‘woman’ on your résumé, it went to the bottom of the pile.” That’s a classic example of what Crawford calls “dirty data.” Even though people think of algorithms as being fair and free of human bias, Whittaker explained, biased humans are the ones who create the data sets and the code that decides how that data should be evaluated.

It started by looking around internationally and realizing there wasn’t a single AI institute that was focused on the social, political, and ethical implications of these tools.

And so Meredith and I realized that we had to make our lives a lot harder and actually do it ourselves. So now we head the AI Now Institute at NYU, which is really the world’s first institute to center these concerns, and we created it essentially as an interdisciplinary institute.

I ran a research group there, and I think Kate and I came, through very different paths, to very similar conclusions that were fairly heterodox during my days in industry.

I was watching people take data that I knew was faulty or fallible or incomplete, and begin to pump it into AI systems and make claims about the world that I didn’t believe were actually credible or verified.

We met on a bus on the way to a conference, and suddenly there was someone who was speaking this language, helping me think through ideas that I had felt fairly alone in thinking about. We started talking, and we shared a similar set of concerns, right?

This is actually one of the big areas for our research at AI Now: really lifting up the hood on AI systems and looking at the sometimes quite weird and sticky and gooey training data that goes into the pipes.

So some of you may have seen the kinds of heat maps that basically isolate areas in cities where police predict that crime might occur, or in some cases it’s a person-based list that says, “This person looks like the sort of person who might commit a crime,” based on their social network.

To say, “Hey, check out this person,” or, “Check out this neighborhood.” So we ended up looking at 13 jurisdictions across the US that were specifically under legal orders because of biased or illegal or unconstitutional policing.

We found multiple cases — Chicago being one of the most obvious — where you could see that the data coming from what was essentially corrupt police practices was informing supposedly neutral and objective predictive policing platforms.

Kate Crawford: So if we have dirty data actually informing our predictive policing systems, you’re ingraining the sort of bias and discrimination that we’ve seen over decades into systems that, in many ways, are treated as beyond reproach.

Because people say, “Oh, well, it’s neutral, so it must be completely fine.” And so you see these vicious circles emerging because essentially the training data itself ...

You show a machine learning system 100 million pictures of cats, but you’ve only shown it cats that were colored white, right?

Meredith Whittaker: It only reflects what’s in the data, which is why this question is so important: is the data coming from biased policing practices, where a record of arrests is actually a record of corruption? Because once that data is filtered through one of these systems, people take it as the product of a smart computer, as infallible, as a sort of mathematical wizardry that is probably not to be contested.
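To make that vicious circle concrete, here is a minimal simulation sketch, our own illustration rather than any vendor’s actual system, of how arrest-driven patrol allocation locks in an initial bias; every number in it is invented.

import random

random.seed(0)

# Two districts with identical true crime rates, but a "dirty" arrest
# history: district 0 was historically over-policed.
TRUE_CRIME_RATE = [0.10, 0.10]
arrests = [500, 100]

for year in range(10):
    total = sum(arrests)
    # The "predictive" model allocates 1,000 patrols in proportion to
    # past arrests.
    patrols = [round(1000 * a / total) for a in arrests]
    # Recorded arrests scale with patrols present, not with actual crime.
    new_arrests = [
        sum(random.random() < TRUE_CRIME_RATE[d] for _ in range(patrols[d]))
        for d in (0, 1)
    ]
    arrests = [arrests[d] + new_arrests[d] for d in (0, 1)]
    print(f"year {year}: patrol share district 0 = {patrols[0] / 10:.0f}%")

Even though both districts have identical true crime rates, district 0’s patrol share stays near 83 percent indefinitely, because the “evidence” the model consumes is a record of where officers were sent, not of where crime actually happened.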

Well, the more we’ve been doing this research, the more concerns we have about this idea of a simplistic tech fix, because in the end you’re talking about cultures of data production, and if that data is historical, then you are importing the biases of the past into the tools of the future.

Black box, that’s what the … that is, proprietary companies like Google and the two or three leaders in AI right now would be …

Kate Crawford: Sometimes I say the big five and sometimes they say the big seven; the number keeps changing.

Kate Crawford: And the first female CEO that came up in these searches we were running at the time was Barbie CEO, and you’re like, “Okay, that’s a problem.” And it’s funny because it’s like a whack-a-mole problem right now.

So you’re scraping it from very particular types of photo sets. Getty, for example, really pushed for more diverse images of people in these classic stock photo sets, because what you could get had become really clichéd.

So long story short, search is really complicated and people are trying to fix it, but it’s much harder than you might imagine.

There was a Wired study that came out last year, and it said that around 12 percent of the papers that were submitted to the big machine learning AI conferences were submitted by women.

Meredith Whittaker: Again, one of the issues is that we aren’t seeing enough data on this, or enough emphasis on the urgency of this problem, but one piece of anecdotal evidence comes from Timnit Gebru, a preeminent machine vision researcher and a woman of color. When she first went to NeurIPS, which is the biggest machine learning conference, she said she was one of six black people out of 8,000 attendees.

She co-founded Black in AI, and she’s been doing a huge amount of work, spearheading this with a couple of colleagues, to make a lot more space for black people to participate in machine learning.

The people who bear the costs of discrimination, of exclusion, of racism within these companies are the same people who bear the costs of bias, of errors, and of, I would say, oppressive uses of AI outside of these companies.

It is very clear that the people who are benefiting from these systems match a specific demographic profile, and the people who are being harmed by these systems are those who have historically been marginalized.

Can you talk about those benefits and harms in a society where AI is making decisions? In some cases it does notice inefficiencies in things like crops or weather, and it’s hard to have bias in those.

This is when you’re talking about the architecture of … I had a really great podcast with Nicole Wong, who used to be chief legal counsel at Twitter and Google, and she was involved with helping build this architecture.

So people say, “Look, if we have AI in, say, the criminal justice system, won’t that be less biased?” We’ve got real problems in terms of our court systems, in terms of policing.

One of the things that keeps us up at night: the way we check that our current systems are fair in, say, criminal justice is that we have a system of appeals.

What you may not know is that in many cases, companies are using AI systems to scan those résumés to decide whether or not you’re worthy of an interview, and that’s fine until you start hearing about Amazon’s system, an automatic AI résumé scanner that took essentially two years to design.
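As a hedged illustration of the mechanism, not Amazon’s actual system, whose internals were never published: a toy text classifier trained to imitate historically skewed hiring decisions ends up assigning a negative weight to gendered tokens. The résumés and labels below are invented purely for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",          # hired
    "java developer, rugby team captain",             # hired
    "captain of women's chess club, java developer",  # rejected
    "women's coding society lead, java developer",    # rejected
]
hired = [1, 1, 0, 0]  # the historical decisions the model learns to imitate

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" comes out strongly negative:
# the model has encoded the old bias, not applicant quality.
for token, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:10s} {weight:+.2f}")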

And essentially, while you’re being interviewed, there’s a camera that’s recording you: all of your micro facial expressions, all of the gestures you’re using, the intonation of your voice. Then it pattern-matches whatever it can detect against the company’s highest performers.

I mean, that’s really what Meredith and I do, and what we stand for, is saying, “We will do the research to actually test these systems.” Which is why it’s so important that we can audit ...

Salesforce constituted something along the lines of an ethics board, right in the wake of a kind of crisis where a lot of their workers and a lot of other people were asking them not to sell tech to ICE, right?

Are you going to harm humanity and, specifically, historically marginalized populations, or are you going to sort of get your act together and make some significant structural changes to ensure that what you create is safe and not harmful?

What we don’t see are mechanisms of oversight that actually bring the people who are most at risk of harm into the room to help shape these decisions.

Because they’re still trying to figure out how to deal with social media, they’re still trying to figure out how to deal with privacy, sort of basic stuff.

You’re seeing multiple states move toward actually saying, “No, we need to regulate facial recognition.” For very good reasons, because this technology can be deeply troubling in the way that it’s being used.

We don’t want this in our backyard.” Rather than just being told, “Hey, you’ve walked into public space so, basically, you’ve already consented.” I mean, that presents a set of concerns.

There’s already been a debate about this, for many years now, which is like, “Do you try to give more strength to existing agencies, or do you create a new super agency for AI?” This is something we looked at, in detail, in our research last year.

I mean, if you’re the FAA and you’re focused on, “Okay, how do we think about safety and planes?” you’re the right agency with the right expertise to be thinking about how AI starts to impact your particular domain.

Same thing goes for many agencies where we want to say, “Hey, give them the power to look at these issues.” Maybe one day we’ll get a super agency, but we can’t wait that long.

In the health care domain, you would want doctors, you would want nurses unions, you would want people who understand the arcane workings of the US insurance system.

You would need them all at the table, on equal footing with AI experts, to actually create the practices to verify and ensure that these systems are safe and beneficial.

Whether you’re going in and out of stores, what you’re buying in stores: not just looking at what you’re doing on Facebook, but what you pick up and put down in stores.

If your score is low, it impacts your ability to do everything from buying a train ticket, to getting your kids into the school you want them to go to, to getting the job that you want.

I’m sure you read the news that, for example, in New York, insurers have been given full permission to look at your social media to decide how to modulate your insurance rates.

I think, sometimes, there’s this tendency to say, “Oh, China’s the bad guy, and that would never happen here.” Actually, we have to do a lot of work to make sure that these tools aren’t used in oppressive ways that threaten civil rights, because we’ve got some real issues if we don’t keep pushing back on some of these.

If we look specifically at the environment and climate change, there’s been some really interesting work that’s being done using AI systems to say, “Hey, we can actually modulate the use of the electricity grid to make sure we’re much more efficient.

We can look at how …” I mean, think about how much energy is wasted in cities and in giant server farms, on which we’ve done lots of research as well.
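As one hedged sketch of what that kind of modulation can look like, the toy scheduler below shifts a deferrable data-center batch job into the forecast demand trough; the hourly forecast numbers and the job length are assumptions made up for illustration.

# Forecast grid demand (GW) for each hour of the day, midnight through 11pm.
forecast_gw = [18, 16, 15, 14, 14, 15, 19, 24, 27, 28, 28, 27,
               26, 26, 25, 25, 27, 30, 31, 29, 26, 23, 21, 19]

JOB_HOURS = 4  # the batch job needs four contiguous hours

# Pick the four-hour window with the lowest total forecast demand.
best_start = min(
    range(len(forecast_gw) - JOB_HOURS + 1),
    key=lambda h: sum(forecast_gw[h:h + JOB_HOURS]),
)
print(f"run job from {best_start:02d}:00 to {best_start + JOB_HOURS:02d}:00")
# -> run job from 02:00 to 06:00, the overnight trough

The same greedy idea generalizes to carbon-intensity or price signals instead of raw demand.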

And there is a new facial recognition product being sold to different retail stores that offers to capture the image of shoplifters and then ban them from stores they’ve never been to, right?

This is happening sort of under the cover of proprietary private sector tech that is actually not disclosed to the people that it’s going to affect.

I mean, you may have seen the story this week: there’s a rent-controlled building in Brooklyn where they’re just installing facial recognition cameras.

We need to think more clearly about the implications of our technology on geopolitics, on our social well-being.” I think that is something that gives me hope that there’s actually the possibility of change.

I mean, one of the things that we think is super important is, “How do you protect people inside companies, who are going to be the whistleblowers, who are going to tell us things that we need to know, and who are actually going to do this sort of organizing work?” It’s important to start saying, “Hey, this is going to matter for journalism, for research, for history, that we understand how these systems work.”

Really, being able to create structures where workers can unionize, where they can disclose, where they can actually hold to account the companies they work for, I think this is going to be increasingly important.

I’ve come to the conclusion recently that — because we know most of these people, and I don’t find them to be particularly evil in terms of ...

I’d say this field has worshiped at the altar of the technical for the better part of 60 years, and at the expense of understanding the social and the ethical.

And right now, as we have these real issues of homogeneity in Silicon Valley, we need to open those doors up, but we also need to get people in the room who are the ones who are most likely to be seeing the downsides of the system.

It is so far from that.” We’re talking about basic 101 stuff: yeah, AI systems can tell the difference between a cat and a dog.

As you know, when I interviewed Elon Musk a couple of years ago — who has talked about these issues of dangers of AI — he said he thinks ...

I think the premise is faulty, but it is a great distraction from the very real harms of faulty, broken, imperfect, profitable systems that are being mundanely and obscurely threaded through our social and economic systems.

Kate Crawford: We’ve called this the apex predator problem, which is: if you’re already an apex predator and you have all the money and all the power in the world, what’s the next thing to worry about?

Super-intelligent machines, that’s the next threat to me.” But if you’re not an apex predator, if you’re one of us, we’ve got real problems with the systems that are already deployed, so maybe let’s focus on that.

Audience member: I was interested in your comment about GDPR in contrast to where the US is in terms of privacy or tech regulation generally and how it’s important who gets to make the decisions.

So one of the challenges that I think US regulators often have to grapple with is that, vis-à-vis a lot of these industries, we’ve often prided ourselves on the idea of permissionless innovation.

How do you think about the tension between the idea that, in some ways, we want this technology to be here in the United States because we think we have the best values and we’ll get to the right answers to these questions, versus, we have to put some rules of the road in place prophylactically because an ex post facto enforcement machine isn’t going to protect people?

People will be much more likely to want to trust these tools when we know that they’re not gonna discriminate against us or harm us or cause other forms of ongoing structural problems.

Otherwise, I think it is really interesting how innovation has become basically tethered to rising share prices for a couple of Silicon Valley companies, right?

You can scratch below the surface a little more and say, “Actually, what kind of AI are you building?” Oftentimes, these companies are, in fact, just sort of repackaging models as a service that are sold by the big tech players.

I would love to bring it back to the Elon Musk comments, and how this negative outlook on AI makes it quite easy for people who do not understand the opportunity to go down that rabbit hole of, “Oh, the negative ideas, the negative future.

I don’t wanna talk about it.” How do you suggest opening up the conversation to people who either don’t want to understand or don’t have the opportunity or the conversation in their daily lives?

I think part of the way you begin to get more people in the room is to focus not on the technical wizardry on the shiny cover of some Wired article, but ...

So it’s not the superintelligence or the next deep neural net that’s better than humans — a claim that Kate has examined and we’re looking at — it’s that AI is actually affecting and shaping all of our lives in different ways, right?

We have an example that is fairly chilling in Arkansas of an algorithmic system that was brought in to distribute Medicaid benefits, and this system was allocating the number of hours of home care treatment that very ill patients got.

A case worker shows up with this new system, enters her info, and it drops her hours from something like 12 a day to eight.

Now, thankfully, there was a lawyer who took that to court, contested the algorithm, found that actually there was a major implementation flaw.

Our experience matters just as much as a technical design doc or a Wired article about the superintelligence. I think part of the job of steering us toward a better future with these technologies is to re-center the conversation on the lived experience of having these technologies shape and direct our resources and opportunities.

I’m curious, looking back in history: is there anything, through different technology evolutions and disruptions, back through whenever, that’s even analogous? As you look at the way this will percolate through society, do you look back at other kinds of technology transitions in history for lessons learned?

In fact, one of the things we do is really focus on deep historical research, because I think we can learn a lot from exactly those moments in history when big general-purpose technologies were flooding into society and decisions had to be made about how to use them.

And we had the creation of things like the IAEA, the international inspections body, that could say, “Hey, we should be able to inspect how you’re creating this, what you’re working with, whether you have weapons facilities, whether you have energy facilities.” That was a big international effort, and that’s a very difficult thing to do right now; if you look at what the international governance conversation looks like today, it’s much bleaker.

We can look to these kind of key moments of technologies that really changed the way we lived, but we also have to look at, what were the governance structures?

And that’s one of the big questions hovering over AI right now: you can come up with local regulation, but you’re really talking about technologies that are planetary in scale.

You’re all … it’s not just biased data; there’s so much data pouring out of this room right now, for example. It’s insane what’s happening.

I think a lot of what we’ve talked about today has sort of come back to this issue of being able to ask the right questions.

For so many years in America, we’ve always focused on this idea of innovation, innovation, without really deciding what it is we want these technologies to be able to do, which has been one of the core issues.

Held for how long?” These are really good questions that we need to pursue a lot further, but what’s interesting is that that’s happening at a much slower rate than these technologies are being released into the world and essentially being live tested on populations all the time.

So what we have now is a kind of race: how do you actually have those conversations with sufficient knowledge about how these tools really work, when a lot of them are protected by trade secrecy? For a lot of these tools, you’re not gonna know how they’re working or what data is being collected.

Artificial Intelligence Can Predict When You're Going To Die.


The Advantages of Re-Humanizing HR with Artificial Intelligence

It’s easy to see that artificial intelligence (AI) is making a significant impact in every area of the enterprise, human resources and recruiting included.

While AI and machine learning have the potential to automate mundane processes, streamline operations and make intelligent decisions, the rapid proliferation of this technology has ignited fear around job elimination and human interaction being replaced by cold, calculated robots.

Despite these concerns, a recent study by Oracle found that 93% of people are ready to take orders from a robot and more than a third of employees believe that AI will enable better customer and employee experiences.

On average, companies lose 17% of new hires within the first three months and 15% of those who resigned said that a lack of effective onboarding played a part in their decision to leave.

HR teams are beginning to realize the role that a good onboarding experience plays in employee engagement, productivity, and retention; however, a surprising number of businesses still lack an intuitive, formalized process.

By embracing emerging technologies, Arizona Federal Credit Union was able to eliminate 60 hours of onboarding tasks per month and save 115 hours during the benefits open enrollment period alone.

AI tools can tackle this issue by providing teams with data-backed insights on what motivates, retains and entices employees so that managers can create a personalized work experience that meets both the employee and organization’s overall goals.

IBM artificial intelligence can predict with 95% accuracy which workers are about to quit their jobs

Traditional human resource departments, in which Rometty said companies typically 'underinvest,' have been divided between a self-service system, where employees are forced to be their own career managers, and a defensive system to deal with poor performers.

Poor performers, meanwhile, will not be a 'problem' that is dealt with only by managers, HR, legal and finance, but by solutions groups — IBM is using 'pop-up' solutions centers to assist managers in seeking better performance from their employees.

She said many companies have relied on centers of excellence — specialized groups or collaborative entities created to focus on areas where there is a knowledge or skills gap within an organization or community.

Scary Artificial Intelligence Breakthrough!!


Tesla's Elon Musk: We're 'Summoning the Demon' with Artificial Intelligence

Oct. 27 (Bloomberg) -- Betty Liu reports on Elon Musk's warning about artificial intelligence. (Source: Bloomberg)

Will Artificial Intelligence Replace Coders?


True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo

Artificial Intelligence Scientist. Scientific Director of the Swiss AI Lab, IDSIA, DTI, SUPSI; Prof. of AI, Faculty of Informatics, USI, Lugano; Co-founder & Chief ...

How artificial intelligence will change your world in 2019, for better or worse

From a science fiction dream to a critical part of our everyday lives, artificial intelligence is everywhere. You probably don't see AI at work, and that's by design.

Mark Cuban: If We Let China or Russia Win the Artificial Intelligence Race, we're 'SOL'


What Could Advanced Artificial Intelligence Mean for Humanity?


Does Artificial Intelligence already exist (The 8-bit Guy Reupload)

This video is no longer available on The 8-bit Guy's Channel. You can read about why the video was removed from his YouTube channel in the link below.

What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being.

How Artificial Intelligence Is Changing The Art World | NBC Nightly News

Artists are using artificial intelligence to make art, and now the Metropolitan Museum of Art, MIT and Microsoft are teaming up to predict what will appeal to art ...