AI News

HPE rolls out updates for HPE 3PAR

HPE announced new capabilities for its popular HPE 3PAR storage solution this morning, such as new automation tools and added support for DevOps.

HPE says earlier versions of InfoSight have already predicted and automatically fixed 85 per cent of more than 1,500 complex cases across the HPE 3PAR installed base.

“HPE 3PAR offers customers a flexible storage platform that easily adapts to any environment, which is critical as companies embrace new technologies and cloud-native applications.”

Inside Facebook, Twitter and Google's AI battle over your social lives

When you sign up for Facebook on your phone, the app isn't just giving you the latest updates and photos from your friends and family.

These are just some of the ways that Facebook is verifying that you're actually human and not one of the tens of millions of bots attempting to invade the social network each day.

That Facebook would go to such lengths underscores the escalation of the war between tech companies and bots that can cause chaos in politics and damage public trust.

'It is already pretty much a fundamental part of everyday life,' Michael Connor, the executive director of Open MIC, a technology policy nonprofit, said.

After all, no single person or human team could ever deal with the flood of data coming from billions of users.

If you're training a bot to find fake news, for example, you'd amass a ton of posts that you judge as fake news and tell your algorithm to look for posts similar to them.
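The similarity approach described above can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system: it compares a new post's bag-of-words vector against posts already judged as fake, using cosine similarity and an arbitrary threshold.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercased word counts for a post."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def flag_similar(post, labeled_fake_posts, threshold=0.5):
    """Flag a post if it looks like any post already judged as fake."""
    post_vec = bag_of_words(post)
    return any(cosine(post_vec, bag_of_words(fake)) >= threshold
               for fake in labeled_fake_posts)

fake_examples = ["miracle cure doctors hate this one trick",
                 "shocking secret the government is hiding"]
print(flag_similar("doctors hate this shocking miracle cure", fake_examples))  # True
print(flag_similar("lovely weather today", fake_examples))                     # False
```

Real systems learn a classifier over far richer features, but the core idea is the same: posts that resemble the labeled examples get flagged.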

Think of this machine learning like the process of teaching a newborn baby the difference between right and wrong, said Kevin Lee, Sift Science's trust and safety architect.

They employ bots that act like a hive when it comes to creating accounts on Facebook, using multiple tricks to fool the massive social network.

AI plays a big role in this. The social network has been relying on outside AI resources, as well as its own team, to help it close the floodgates on bots.

Eran Magril, the startup's vice president of product and operations, said Unbotify works by understanding behavioral data on devices, such as how fast your phone is moving when you sign up for an account.
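A heavily simplified sketch of the kind of behavioral signal Magril describes. The function names, sample values, and threshold here are illustrative assumptions, not Unbotify's method: a phone held by a human produces noisy motion readings during signup, while an emulator typically reports a flat signal.

```python
import statistics

def motion_score(accel_samples):
    """Standard deviation of accelerometer magnitudes captured during signup."""
    return statistics.pstdev(accel_samples)

def looks_automated(accel_samples, jitter_floor=0.01):
    # Hypothetical rule: essentially zero motion variance suggests an emulator.
    return motion_score(accel_samples) < jitter_floor

human = [9.79, 9.83, 9.76, 9.88, 9.81]  # noisy gravity readings from a handheld phone
bot   = [9.81, 9.81, 9.81, 9.81, 9.81]  # perfectly flat emulator output

print(looks_automated(human), looks_automated(bot))  # False True
```

Note that this uses only raw sensor behavior, consistent with the claim that no names or personal identifiers are needed.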

Still, the numbers are massive: In May, Facebook announced it had deleted 583 million fake accounts in the first three months of 2018.

Magril stressed that the company doesn't collect personal information, only behavioral data with no names or personal identifying information.

[Image caption: mouse movements on a desktop from a bot (left) and from a human (right).]
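One simple way to quantify the bot-versus-human difference in mouse movements (an illustrative heuristic, not any vendor's actual detector) is path straightness: a scripted cursor tends to travel in near-perfect lines, while a human's pointer wanders and overshoots.

```python
import math

def path_straightness(points):
    """Ratio of straight-line distance to total path length (1.0 = perfectly straight)."""
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / path_len if path_len else 1.0

bot_path   = [(0, 0), (50, 50), (100, 100)]            # ruler-straight segments
human_path = [(0, 0), (40, 70), (55, 45), (100, 100)]  # wandering curve

print(path_straightness(bot_path))    # 1.0
print(path_straightness(human_path))  # about 0.78
```

A score pinned at 1.0, trajectory after trajectory, is the kind of too-perfect pattern a behavioral model can learn to flag.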

But behavioral data isn't the only way that Facebook stops bots, according to Lee, a former team leader at the social network.

The company also relies on AI to automatically tell if an account is fake based on how many accounts are on one device, as well as its activities after it's created.

Facebook's AI will label an account as a bot if it sends more than 100 friend requests within a minute of signing up, he said.
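That rule is simple enough to sketch directly. The function below is an illustrative reconstruction of the described check, not Facebook's code; the threshold and window come straight from the quote.

```python
def is_bot_like(signup_time, request_times, limit=100, window=60.0):
    """Flag an account that fires more than `limit` friend requests
    within `window` seconds of signing up."""
    early = [t for t in request_times if t - signup_time <= window]
    return len(early) > limit

# A burst of 150 requests in the first 30 seconds trips the rule.
print(is_bot_like(0.0, [i * 0.2 for i in range(150)]))   # True
# 20 requests spread over ten minutes does not.
print(is_bot_like(0.0, [i * 30.0 for i in range(20)]))   # False
```

In practice such a hard threshold would be one feature among many, since bots can simply slow down to stay under any fixed limit.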

In March, the social network said it was expanding its fact-checking program to include images and videos after a flurry of propaganda started coming from memes instead of hoax articles.

Facebook relies on third-party fact-checkers to help it classify content as hoaxes and false information, Sara Su, a Facebook product specialist for the News Feed, said at a press event on Wednesday.

Twitter CEO Jack Dorsey wrestles with how to make his social network a less social place. While Twitter also uses AI to spot bot behavior, its attempt to preserve an open platform means that it can't completely rely on AI to handle trolls.

'Machine learning improvements are enabling us to be more proactive in finding those who are being disruptive, but user reports are still a highly valuable part of our work,' Nick Pickles, Twitter's senior strategist on public policy, told members of Congress at a House Judiciary Committee hearing on Tuesday.

It's easy to forget that YouTube, better known for its video content, is its own form of social network too, complete with the same trolls infecting the comments section.

The AI is supposed to automatically flag comments it determines would ruin conversations, and to let moderators choose whether to delete them.
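A toy version of this flag-then-review pipeline might look like the following. The keyword list and threshold are invented stand-ins; YouTube's real system uses trained models, not word lists.

```python
# Hypothetical toxic vocabulary; a production system would use a trained model instead.
TOXIC_WORDS = {"idiot", "stupid", "trash", "loser"}

def toxicity_score(comment):
    """Fraction of words that appear in the toxic vocabulary."""
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / len(words)

def flag_for_review(comments, threshold=0.2):
    """Return comments the AI flags; a human moderator decides whether to delete."""
    return [c for c in comments if toxicity_score(c) >= threshold]

queue = flag_for_review(["You are a stupid idiot!", "Great video, thanks!"])
print(queue)  # ['You are a stupid idiot!']
```

The key design point matches the article: the AI only builds the review queue, and deletion remains a human decision.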

Even if there's only a 1 percent chance of error, with 2 billion people on Facebook and 1 billion people on YouTube, that's still tens of millions of toxic posts or fake accounts slipping through.
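The arithmetic behind that claim is straightforward:

```python
facebook_users = 2_000_000_000
youtube_users = 1_000_000_000
error_rate = 0.01  # a 1 percent miss rate

print(f"{facebook_users * error_rate:,.0f}")  # 20,000,000 slipping through on Facebook
print(f"{youtube_users * error_rate:,.0f}")   # 10,000,000 on YouTube
```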

[Image caption: protesters set up 100 cardboard cutouts of Facebook founder and CEO Mark Zuckerberg outside the US Capitol in Washington, DC, to call attention to hundreds of millions of fake accounts still spreading disinformation on Facebook.]

'Even if they are getting 99 percent, that 1 percent is getting through to somebody, and the consequences are real-world attacks,' said Eric Feinberg, the lead researcher on the Digital Citizens Alliance report on terrorist content and social media.

Facebook is using billions of Instagram images to train artificial intelligence algorithms

Your Instagram photo of a perfectly composed plate of pancakes or an exquisitely framed sunset is helping Facebook train its artificial intelligence algorithms to better understand objects in images, the company announced today at its annual F8 developer conference.

“If a person hasn’t spent the time to label something specific in an image, even the most advanced computer vision systems won’t be able to identify it,” Mike Schroepfer, Facebook’s chief technology officer, said onstage at F8.

Because it owns and operates such a large platform encompassing billions of users across apps like Instagram, WhatsApp, and Messenger, Facebook has access to extremely valuable text and image data it can use to inform its AI models, so long as that text and those images are posted publicly.
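Schroepfer's point about labeling hints at why public Instagram posts are so valuable: users have in effect already labeled their own photos with hashtags. Below is a hypothetical sketch of hashtag-based weak labeling; the mapping and tag names are invented for illustration and are not Facebook's actual taxonomy.

```python
# Hypothetical mapping from a recognition target to hashtags that weakly label it.
LABEL_HASHTAGS = {
    "pancakes": {"#pancakes", "#pancakestack"},
    "sunset": {"#sunset", "#sunsetlover"},
}

def weak_labels(post_hashtags):
    """Turn the hashtags on a public post into training labels, with no human annotator."""
    tags = {t.lower() for t in post_hashtags}
    return [label for label, hashtags in LABEL_HASHTAGS.items() if tags & hashtags]

print(weak_labels(["#Sunset", "#beach"]))        # ['sunset']
print(weak_labels(["#pancakes", "#breakfast"]))  # ['pancakes']
```

Hashtags are noisy labels (people tag jokes, locations, and moods), so a real pipeline has to tolerate mislabeled examples, but at billions of images the sheer volume can outweigh the noise.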

In addition to 20,000 new human moderators for its platform, Facebook is increasingly looking to automation as it grapples with Russian election interference, the Cambridge Analytica data privacy scandal, and other hard questions about how to moderate content on its platform and keep bad actors from abusing its tools.

Tay (Bot)

Tay was an artificial-intelligence chatbot originally developed by the Microsoft Corporation and launched on March 23, 2016.

It subsequently caused a public controversy when the bot began posting suggestive and offensive tweets through its Twitter account, forcing Microsoft to shut the service down only 16 hours after its launch.

In March 2016, a second version of Tay was released, which also struggled with problems and was shut down even faster than the first version.[3]