AI News: China Prohibits 'Deepfake' AI Face-Swapping Techniques

Deepfake

Deepfake (a portmanteau of 'deep learning' and 'fake'[1]) is a technique for human image synthesis based on artificial intelligence.

It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a 'generative adversarial network' (GAN).[2]
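The core of a GAN is an adversarial game: a generator tries to produce samples that a discriminator cannot tell apart from real data. The following is a minimal, hypothetical sketch in plain NumPy, a one-dimensional toy rather than an image face-swap model: a linear generator learns to imitate samples from a target Gaussian by fooling a logistic discriminator, with the gradients written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Toy "real data": samples from a Gaussian the generator must imitate.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator g(z) = wg*z + bg, discriminator d(x) = sigmoid(wd*x + bd).
wg, bg = 1.0, 0.0          # generator parameters
wd, bd = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # --- discriminator step: push d(real) toward 1, d(fake) toward 0 ---
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = wg * z + bg
    dr, df = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    gr, gf = -(1.0 - dr), df           # cross-entropy gradients w.r.t. logits
    wd -= lr * np.mean(gr * xr + gf * xf)
    bd -= lr * np.mean(gr + gf)

    # --- generator step: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = wg * z + bg
    df = sigmoid(wd * xf + bd)
    g = -(1.0 - df) * wd               # dLoss/dx for each fake sample
    wg -= lr * np.mean(g * z)
    bg -= lr * np.mean(g)

fake_mean = float(np.mean(wg * rng.normal(0.0, 1.0, 10000) + bg))
print(f"generated mean ~ {fake_mean:.2f} (target 3.0)")
```

Real deepfake pipelines apply the same adversarial idea to deep convolutional networks over face images, but the two-player training loop has this shape.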

Such fake videos can be created to, for example, show a person performing sexual acts they never took part in, or can be used to alter the words or gestures a politician uses to make it look like that person said something they never did.

Academic research related to deepfakes lies predominantly within the field of computer vision, a subfield of computer science often grounded in artificial intelligence that focuses on computer processing of digital images and videos.

An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track.[6]

It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video’s subject and the shape of their face.

A later milestone was the Face2Face program, published in 2016, which modifies video footage of a person's face to depict them mimicking the facial expressions of another person in real time. That project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible to perform the technique with common consumer cameras.

In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography, and other websites have also banned the use of deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub.[12]

Other online communities remain, however, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for 'safe for work deepfakes'), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios.[13]

One celebrity targeted by such videos stated that she wouldn't attempt to remove any of her deepfakes, due to her belief that they don't affect her public image, and that differing laws across countries and the nature of internet culture make any attempt to remove them 'a lost cause';

she believes that while celebrities like herself are protected by their fame, deepfakes pose a grave threat to women of lesser prominence who could have their reputations damaged by depiction in involuntary deepfake pornography or revenge porn.[20]

The app uses an artificial neural network, the power of the graphics processor, and three to four gigabytes of storage space to generate the fake video.

To produce detailed results, the program needs a large amount of visual material of the person to be inserted, so that its deep learning algorithm can learn, from the video sequences and images, which aspects of the face have to be exchanged.
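Consumer face-swap tools of this kind are commonly built around an autoencoder with one encoder shared between both people and a separate decoder per identity; swapping then means encoding a frame of person A and decoding it with person B's decoder. The toy NumPy sketch below shows that wiring under that assumption, with linear layers and random vectors standing in for aligned face crops (all names and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, LATENT, N = 16, 4, 200

# Stand-ins for aligned face crops of two people: each identity is a fixed
# pattern plus small per-frame variation (pose/expression).
ident_a, ident_b = rng.normal(0, 1, DIM), rng.normal(0, 1, DIM)
faces_a = ident_a + 0.1 * rng.normal(0, 1, (N, DIM))
faces_b = ident_b + 0.1 * rng.normal(0, 1, (N, DIM))

E  = 0.1 * rng.normal(0, 1, (DIM, LATENT))   # shared encoder
Da = 0.1 * rng.normal(0, 1, (LATENT, DIM))   # decoder for identity A
Db = 0.1 * rng.normal(0, 1, (LATENT, DIM))   # decoder for identity B

def recon_loss(X, Dec):
    return float(np.mean((X @ E @ Dec - X) ** 2))

init_loss = recon_loss(faces_a, Da)

lr = 0.01
for _ in range(2000):
    for X, Dec in ((faces_a, Da), (faces_b, Db)):
        Z = X @ E                            # encode with the *shared* encoder
        err = Z @ Dec - X                    # reconstruction error
        gDec = (Z.T @ err) / N               # gradient w.r.t. this decoder
        gE = (X.T @ (err @ Dec.T)) / N       # gradient w.r.t. the encoder
        Dec -= lr * gDec                     # in-place updates
        E -= lr * gE

final_loss = recon_loss(faces_a, Da)

# The swap: encode frames of A, decode with B's decoder.
fake_frames = faces_a @ E @ Db
print(f"reconstruction loss: {init_loss:.3f} -> {final_loss:.3f}")
```

Because the encoder is shared, it learns identity-independent structure, which is why the large amount of footage of both faces matters: each decoder must see enough frames to render its identity under many poses and expressions.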

AI, IoT, Mobile Security Archives - Mosaic Security Research

An IoT software flaw could leave millions of consumer devices, including baby monitors and webcams, open to remote discovery and hijacking.

Lawsuits come after a Motherboard investigation showed that AT&T, Sprint, and T-Mobile sold phone location data that ended up with bounty hunters, and The New York Times covered an instance of Verizon selling similar data.

The poor security of Internet of Things (IoT) devices, from web-connected lightbulbs to refrigerators, may be partly the result of penny-pinching by consumer and business shoppers, the U.S. Chamber of Commerce told a Senate panel focused on cybersecurity on Tuesday.

A proposed law would introduce clearer labeling and mandate improved built-in security.

Diabetics are hunting down obsolete insulin pumps with a security flaw (Naked Security).

“… is a legal way for us to improve the cyber resilience of autonomous vehicles by demonstrating a transmission of spoofed or manipulated GPS signals to allow for analysis of system responses,” said Victor Murray, head of SwRI’s Cyber Physical Systems Group in the Intelligent Systems Division.

Some operators expect to roll out 5G services in 2019, and 86 percent expect to be delivering 5G services by 2021, according to a Vertiv survey of more than 100 global telecom decision makers with visibility into 5G and edge strategies and plans.

Opposition parties are calling for a criminal inquiry after the UK defense secretary was sacked for allegedly leaking news of the government’s decision to allow Huawei to supply parts of its 5G network.

One organization offers Domain Name System (DNS) protection, which means it prevents people from connecting to malicious websites, such as phishing sites that look like a bank's website but are actually stealing log-in information.
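In practice, protective DNS comes down to checking every queried name, and each of its parent domains, against a feed of known-bad domains before resolving it. A minimal Python sketch of that check, with a hypothetical blocklist and function name:

```python
# Hypothetical feed of known-malicious domains; real services pull
# continuously updated threat-intelligence feeds.
BLOCKLIST = {"phishy-bank-login.example", "malware-c2.example"}

def protective_resolve(hostname, blocklist=BLOCKLIST):
    """Return a sinkhole answer for blocked names, else None (resolve normally).

    A name is blocked if it, or any parent domain, is on the blocklist, so a
    subdomain of a listed phishing domain is caught too.
    """
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blocklist:
            return "0.0.0.0"   # sinkhole: the client never reaches the site
    return None                # not blocked: forward to a normal resolver

print(protective_resolve("login.phishy-bank-login.example"))  # 0.0.0.0
print(protective_resolve("example.org"))                      # None
```

Because the check happens at name resolution, it works for any application on the device, before a single byte is exchanged with the malicious site.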

It’s Getting Harder to Spot a Deep Fake Video

Fake videos and audio keep getting better, faster, and easier to make, increasing the technology's potential for harm if it falls into the wrong hands.