
Deepfakes: When seeing isn’t believing

Deepfakes are rapidly becoming easier and quicker to create, and they’re opening the door to a new form of cybercrime.

Although the fake videos are still mostly seen as relatively harmless or even humorous, this craze could take a more sinister turn in the future, putting deepfakes at the heart of political scandals, cybercrime, or scenarios we can barely imagine – and not just ones targeting public figures.

A deepfake is a form of human-image synthesis based on artificial intelligence, used to create fake content, either from scratch or from existing video, that replicates the look and sound of a real human.

To be sure, an article from The Economist notes that making even a short deepfake clip convincing requires a serious amount of video footage and/or voice recordings.

Having said that, in the not-too-distant future it may be entirely possible to create, from just a few short Instagram stories, a deepfake that is believed by the majority of a person’s online followers, or by anyone else who knows them.

Finally, social media platforms need to realize the huge potential threat posed by deepfakes: mix a shocking video with social media and the result tends to spread very rapidly, potentially with a detrimental impact on society.

I hugely enjoy watching technology develop in front of my eyes; however, we must remain aware of how it can sometimes affect us detrimentally, especially when machine learning is maturing at a rate quicker than ever before.

Why 'deepfake' videos mean you can no longer believe what you see

Around the world, start-ups, academics and lawmakers are rushing to create tools to mitigate the risks posed by deepfakes.

Where written fake news was the hallmark of the most recent election cycle in the US and UK, images and videos are increasingly the new focus of propaganda, says Vidya Narayanan, a researcher at the Oxford Internet Institute.

“If you see an image, it is very immediate.” Software such as Photoshop was used to create a widely shared fake image of Emma González, a survivor of the Parkland shooting and a gun control activist, ripping up the US Constitution in 2018.

The altered video of Nancy Pelosi spread across conservative media as critics of the Speaker of the House of Representatives declared it evidence of her senility, alcoholism or a mental health problem.

“The detector says there’s an artefact [a distortion in the image], do it again.” Through hundreds of thousands of cycles of trial and error, the two systems – one generating, one detecting – can create immensely lifelike videos.
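In machine-learning terms, this generator-versus-detector feedback loop is a generative adversarial network (GAN). Below is a minimal, illustrative PyTorch sketch of that loop on toy two-dimensional data rather than video; the network sizes and data are stand-ins, but the alternating train-the-detector, fool-the-detector structure is the technique the quote describes.

import torch
import torch.nn as nn

# Toy generator and discriminator; real deepfake models are deep
# convolutional networks operating on images, not 2-D points.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(10_000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in for "real" footage
    fake = generator(torch.randn(64, 8))    # the generator's attempt

    # Detector's turn: learn to score real data high and fakes low.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: "there's an artefact, do it again" -- adjust
    # until the detector can no longer tell the output from real data.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()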


“It’s hard to predict where [deepfakes will] go in the next five years, given they’ve only been around for five years.”

“Your normal voice breaks that illusion that you’ve spent so much time crafting.”


As Henry Ajder walks through the nearly 600-year-old grounds of Queens’ College, Cambridge, he describes a daily routine that involves tracking the creation and spread of deepfake videos into the darkest corners of the internet.


Facebook founder Mark Zuckerberg was the victim of a deepfake video, in which he appeared to boast that he controlled “billions of people’s stolen data”.


“We see thousands of people contributing to small tweaks to the technology on GitHub, doing it as a hobby.” Farid, the Berkeley professor, is also working on detection, focusing primarily on public figures, including world leaders.

“If he was being funny he would smile and look up to the left . . . Everyone has a different cadence to how their expressions change.”

“It’s an arms race and, at the end of the day, we know we’re going to lose – but we’re going to take it out of the hands of the amateur and move it into the hands of fewer people.” Dr Wael Abd-Almageed, a senior scientist at the University of Southern California, represents yet another attempt at detection.

“If you think deepfakes are a problem now, they will be much harder in the next couple of years.” The second method of combating deepfakes focuses on improving trust in videos.

“When you tap on that shutter button, we’re capturing all of the geospatial data – GPS sensors, barometric pressures, the heading of the device – and securely transmitting that to Truepic’s verification server.” There, the company runs tests to check whether the image has been manipulated.

Amber, a San Francisco-based start-up, produces detection software as well as Amber Authenticate, a camera app that generates “hashes” – representations of the data – that are uploaded to a public blockchain as users shoot a video.
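To illustrate the idea shared by Truepic and Amber, here is a minimal Python sketch of hash-based verification. The local JSON log is an assumption standing in for Truepic’s servers or Amber’s blockchain: digests recorded at capture time are recomputed later, and any edit to the file breaks the match.

import hashlib
import json

def hash_chunks(path, chunk_size=1 << 20):
    # SHA-256 digest of each fixed-size chunk of a media file.
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def record(path, log="hashes.json"):
    # At capture time: store the digests somewhere tamper-evident
    # (the real products anchor them on a public blockchain).
    with open(log, "w") as f:
        json.dump({path: hash_chunks(path)}, f)

def verify(path, log="hashes.json"):
    # Later: any edit or re-encode changes at least one digest.
    with open(log) as f:
        return json.load(f).get(path) == hash_chunks(path)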

“Yes, this is good – YouTube and Twitter should be doing this too – but there’s a second part, the policy issue.” He points to the altered video of Nancy Pelosi uploaded to Facebook as a prime example of this dimension.

It counts websites as platforms rather than publishers, to promote free speech, but has come under increasing criticism for seeming to enable companies to avoid liability for the content they host.

While printed material is required by law to have imprints showing authorship, this does not apply to electronic content – a potentially dangerous loophole.


“We’re falling foul to how fast tech is moving.” While the rate of progress is astounding, experts remain unconvinced about a deepfake apocalypse in the political sphere.

“The concept of truth has never been as solid as we like to think.”

Deepfake

Deepfake technology is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network.[2]

The development of deepfakes has taken place to a large extent in two settings: research at academic institutions, and development by amateurs in online communities.

Academic research related to deepfakes lies predominantly within the field of computer vision, a subfield of computer science often grounded in artificial intelligence that focuses on computer processing of digital images and videos.

An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track.[7]

It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of their face.

The “Synthesizing Obama” program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track.[8]

The Face2Face program, published in 2016, modifies video footage of a person's face to depict them mimicking the facial expressions of another person in real time.[9]

The project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible for the technique to be performed using common consumer cameras.

In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography, and other websites have also banned the use of deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub.

Other online communities remain, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for 'safe for work deepfakes'), in which community members share deepfakes depicting celebrities, politicians and others in non-pornographic scenarios.[14]

However, she also stated that she would not attempt to remove any of her deepfakes, due to her belief that they do not affect her public image and that differing laws across countries and the nature of internet culture make any attempt to remove them 'a lost cause'; she believes that while celebrities like herself are protected by their fame, deepfakes pose a grave threat to women of lesser prominence, whose reputations could be damaged by depiction in involuntary deepfake pornography or revenge porn.[21]

In June 2019, a downloadable Windows and Linux application called DeepNude was released that used neural networks, specifically generative adversarial networks, to remove clothing from images of women.

However, the chairman of the House Intelligence Committee, Adam Schiff, clarified in an interview with CNN that the slowed-down video was not actually a deepfake, instead referring to it as a 'cheap fake' and describing it as 'very easy to make, very simple to make, real content just doctored'.[34]

To produce convincing detail, the program needs a lot of visual material of the person to be inserted: the deep learning algorithm uses the video sequences and images to learn which aspects of the image have to be exchanged.
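One widely documented architecture behind hobbyist face-swap tools is an autoencoder with a shared encoder and a separate decoder per identity. The PyTorch sketch below (layer sizes and the 64x64 frame size are illustrative assumptions) shows why footage of both people is required and how the swap itself is performed.

import torch
import torch.nn as nn

# Shared encoder: learns pose, lighting and expression across both faces.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
)

def make_decoder():
    # One decoder per identity: each learns to render one person's face.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training (omitted): reconstruct A's faces through decoder_a and B's
# faces through decoder_b, both via the shared encoder -- which is why
# the program needs plenty of visual material of each person.

# The swap: encode a frame of person A, decode it with B's decoder, so
# B's identity is rendered with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder 64x64 RGB frame
swapped_face = decoder_b(encoder(frame_of_a))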

In September 2018, Google added 'involuntary synthetic pornographic imagery' to its ban list, allowing anyone to request the blocking of results showing their fake nudes.[56]

In fiction, the science-fiction writer Jack Wodhams calls such fabricated videos 'picaper' or 'mimepic' – image animation based on 'the information from the presented image, and copied through choices from an infinite number of variables that a program might supply'.

These impressions can be broken down to their individual matrix composites, can be analyzed, rearranged, and can then be extrapolated through known standard human behavioral patterns, so that an image of you may be re-projected doing and saying things that you have not in fact done or said.[61]

In the 1992 techno-thriller A Philosophical Investigation by Philip Kerr, 'Wittgenstein', the main character and a serial killer, makes use of both deepfake-like software and a virtual reality suit to have sex with an avatar of Isadora 'Jake' Jakowicz, the female police lieutenant assigned to catch him.[62]

Facebook trained AI to fool facial recognition systems, and it works on live video

Facebook remains embroiled in a multibillion-dollar lawsuit over its facial recognition practices, but that hasn’t stopped its artificial intelligence research division from developing technology to combat the very misdeeds of which the company is accused.

The system works by altering key facial features of a video subject in real time using machine learning, tricking facial recognition systems into misidentifying the subject.

There’s also a whole category of facial-recognition-fooling imagery you can wear yourself, known as adversarial examples, which works by exploiting weaknesses in how computer vision software has been trained to identify certain characteristics.
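A classic recipe for constructing such adversarial examples is the fast gradient sign method (FGSM): perturb every input pixel a small step in the direction that increases the classifier’s loss. The sketch below uses an untrained stand-in classifier purely for illustration; it is not Facebook’s method.

import torch
import torch.nn as nn

# Untrained stand-in classifier over 32x32 RGB images, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input
label = torch.tensor([3])                             # its true class

# Gradient of the loss with respect to the input pixels...
loss = loss_fn(model(image), label)
loss.backward()

# ...then nudge every pixel a small step in the direction that increases
# the loss. The change is nearly invisible to a human, but the model's
# prediction can flip -- the weakness wearable adversarial patterns exploit.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()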

“Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods.” Facebook apparently does not intend to make use of this technology in any of its commercial products, VentureBeat reports.

The other issue FAIR’s (Facebook AI Research’s) work addresses is facial recognition itself, which is also unregulated and causing alarm among lawmakers, academics and activists, who fear it may violate human rights if it continues to be deployed without oversight by law enforcement, governments and corporations.

Can you believe your eyes? How deepfakes are coming for politics

Matteo Renzi, Italy’s former prime minister and founder of the new Italia Viva party, sits in an opulent-looking office, face to the camera.

That’s because the politician’s features have been algorithmically transplanted on to a comedian’s, as part of a skit for Striscia la notizia, a long-running Italian satire show.

And, as elections approach in the US, UK and elsewhere, deepfakes could raise the stakes once more in the electorate’s struggle to know the truth.

Hany Farid, a professor at the University of California, Berkeley, has spent decades studying digital manipulation: “In January 2019, deepfakes were . . . buggy and flickery.


This has been the year that saw deepfakes move beyond the hands of those with powerful computers, graphics cards and at least some technical expertise.

DeepNude, now shut down, produced realistic female nudes from clothed photographs, leading to understandable outrage.

Zao, a free Chinese app, allowed users to plaster their faces over the protagonists of a selection of movies simply by uploading a few seconds of video.

He is not alone in expressing shock at the rate of development from academic concept to easily accessible reality.

Ricky Wong, one of the co-founders of a start-up called Humen, explains that with three minutes of footage of movement and material from professionals, his company can make anyone “dance”.

Part way through our phone conversation, Pappas changes to a woman’s voice, and then to a co-worker’s: it comes across as a little stiff but still recognisably human.

In August, The Wall Street Journal reported on one of the first known cases of synthetic media becoming part of a classic identity fraud scheme: scammers are believed to have used commercially available voice-changing technology to pose as a chief executive in order to swindle funds.

The company also places a digital watermark on its audio to reduce the risk of a voice skin being recognised for the real thing.


Ajder’s job as head of communications and research analysis at start-up Deeptrace has led to him investigating everything from fake pornography to politics.

In a report Deeptrace released in September, the scale of the problem was laid bare: the start-up found nearly 15,000 deepfakes online over the past seven months.

“It just looked odd: the eyes didn’t move properly, the head didn’t move in a natural way – the immediate kind of response was that this is a deepfake,” says Ajder.

A week after the video was released, junior officers attempted a coup d’état, which was quickly crushed.

“It really drives home how powerful the mere doubt is . . . about any videos we already want to be fake.” Kanishk Karan, a researcher at the Digital Forensic Research Lab, part of the US think-tank Atlantic Council, points to another potential deepfake, this time in Malaysia: a video alleging to show economic affairs minister Azmin Ali in a tryst with another minister’s male aide.

Given Malaysia’s colonial-era laws and persistent discrimination against LGBT communities, the footage, released in June, naturally provoked controversy.

In both, there is heavy use of WhatsApp, a platform that lends itself to videos and images and whose closed nature also comes with a sense of security and trust.

Both countries have large populations without basic literacy, Narayanan of the Oxford Internet Institute points out, making it difficult to generate media literacy.

Deeptrace is one of the companies in that space, explains chief executive Giorgio Patrini, as he calls from the company’s Amsterdam headquarters to demonstrate its system.

The video on my screen looks rather like an earlier version of Windows Movie Maker but in navy corporate colours. When he hits play, a red box playing over her features, flashing percentages, reveals that it is a fake: a fan wearing a K-Pop singer’s face.

This video is harmless, but Patrini says that (female) K-Pop singers have become major targets of fake porn.

Patrini explains that Deeptrace’s technology is trained on the thousands of deepfakes that the company has pieced together from across the internet.
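Deeptrace’s actual models are proprietary, but the textbook shape of such a detector is a binary classifier trained on frames labelled real or fake. The sketch below assumes 64x64 frames and a toy architecture; it shows the training step and where a fake-probability score would come from.

import torch
import torch.nn as nn

# Binary classifier over 64x64 RGB frames: one logit per frame.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames, is_fake):
    # frames: (N, 3, 64, 64) tensor; is_fake: (N, 1) of 0.0/1.0 labels.
    loss = loss_fn(detector(frames), is_fake)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A score like the flashing percentages in the demo would be
# torch.sigmoid(detector(frame)), the model's fake probability.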


Truepic, a San Diego-based start-up, has been trying to fight manipulated videos and photos for four years, with experts such as Farid on its advisory board.

Jeffrey McGregor, Truepic’s chief executive, says the company launched in response to a spate of manipulated pictures online.

It remains unclear, however, what policies it might invoke that could stop users taking parody videos and reposting them as if they were real, as with the Renzi deepfake.


“As we move towards 2020, we may be subject to supposed video evidence and we need a way of identifying what may look real [but is not].” She says that there are fears that both China and Iran could turn to deepfakes as a tool to attack the US.

The DEEPFAKES Accountability Act, referred to the subcommittee on Crime, Terrorism and Homeland Security in June, would make deepfakes for purposes such as fake porn, disinformation or election interference illegal.

She also worries that watermarking would lead to false positives, or that canny developers could try to have real videos flagged as deepfakes.

“We may end up having to actually favour some type of ban or moratorium until we get further research in all the different ways [videos] could be falsified.”

David Doermann at the University at Buffalo says that in the US at least, where public awareness of the technology is growing, an extremely high-quality deepfake would be needed to change the course of electoral history in 2020.

Voters who see a video of a politician behaving in a way they expect them to might understand it is a fake, but believe it represents an underlying reality to their character.

Deepfake Videos Are Getting Real and That’s a Problem | Moving Upstream

Computer-generated videos are getting more realistic and even harder to detect thanks to deep learning and artificial intelligence. As WSJ's Jason Bellini finds ...

Why ‘deepfake’ videos are becoming more difficult to detect

Sophisticated and inaccurate altered videos known as “deepfakes” are causing alarm in the digital realm. The highly realistic manipulated videos are the subject ...

It’s Getting Harder to Spot a Deep Fake Video

Fake videos and audio keep getting better, faster and easier to make, increasing the mind-blowing technology's potential for harm if put in the wrong hands.

Deepfakes: Is This Video Even Real? | NYT Opinion

In the video Op-Ed above, Claire Wardle responds to growing alarm around “deepfakes” — seemingly realistic videos generated by artificial intelligence.

Deepfake Videos Are Getting Terrifyingly Real I NOVA I PBS

Artificially intelligent face swap videos, known as deepfakes, are more sophisticated and accessible than ever. PRODUCTION CREDITS Digital Producer Emily ...

Could deepfakes weaken democracy? | The Economist

Videos can now be faked to make people say things they never actually said. Could this weaken democracy...and can you spot ALL the deep fake interviews in ...

Fake Obama created using AI video tool - BBC News

Researchers at the University of Washington have produced a photorealistic former US President Barack Obama. Artificial intelligence was used to precisely ...

Mark Zuckerberg ‘deepfake’ will remain online

Facebook said they will not take down a video of Mark Zuckerberg created using artificial intelligence, called a "deepfake." The company recently faced backlash ...

Fake videos of real people -- and how to spot them | Supasorn Suwajanakorn

Do you think you're good at spotting fake videos, where famous people say things they've never said in real life? See how they're made in this astonishing talk ...

How Funny Face Swapping Videos Could Lead To More “Fake News”

Fake news has been a problem swamping the internet. Now there is a way to make fake videos with nothing more than your laptop. FakeApp can be used to ...