AI News: Security cameras use artificial intelligence to detect crime

Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras

China’s public security market was valued at more than $80 billion last year but could be worth even more as the country builds its capabilities, said Shen Xinyang, a former Google data scientist who is now chief technology officer of Eyecool, a start-up.

“Artificial intelligence for public security is actually still a very insignificant portion of the whole market,” he said, pointing out that most equipment currently in use was “nonintelligent.” Many of these businesses are already providing data to the government.

At a building complex in Xiangyang, a facial-recognition system set up to let residents quickly through security gates adds to the police’s collection of photos of local residents, according to local Chinese Communist Party officials.

Artificial intelligence is going to supercharge surveillance

We usually think of surveillance cameras as digital eyes, watching over us or watching out for us, depending on your view.

Ella can recognize hundreds of thousands of natural language queries, letting users search footage to find clips showing specific animals, people wearing clothes of a certain color, or even individual car makes and models.

He typed in various searches — “a man wearing red,” “UPS vans,” “police cars” — all of which brought up relevant footage in a few seconds.
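
The article doesn't describe Ella's internals, but this style of natural-language clip search is often built by scoring stored footage against the query's terms. A toy sketch under that assumption (the clip metadata and word-overlap scoring here are invented for illustration; real systems use learned video embeddings):

```python
# Toy natural-language search over surveillance clips.
# Each clip is tagged with descriptive words; a query is scored
# by how many of its words overlap a clip's tags.

CLIPS = [
    {"id": 1, "tags": {"man", "wearing", "red", "jacket"}},
    {"id": 2, "tags": {"ups", "van", "delivery"}},
    {"id": 3, "tags": {"police", "car", "street"}},
]

def search(query: str) -> list[int]:
    """Return clip ids ranked by word overlap with the query."""
    words = set(query.lower().split())
    scored = [(len(words & c["tags"]), c["id"]) for c in CLIPS]
    return [cid for score, cid in sorted(scored, reverse=True) if score > 0]

print(search("a man wearing red"))  # clip 1 ranks first
```

Swapping the word-overlap score for cosine similarity between learned text and video embeddings gives the "hundreds of thousands of queries" behaviour the article describes, without changing this ranking structure.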

“[It] works well on a one-camera system — just [like] a nanny cam or dog cam — all the way up to enterprise, with a matrix of thousands of cameras,” says Sailor.

“Then they moved to video and someone [remotely] watching it.” Finally, they contacted Boulder, which built them a custom AI CCTV system to identify the types of fish going up the fish ladder. (A fish ladder is exactly what it sounds like: a stepped waterway that fish use to travel uphill.)

In the same way that machine learning has made swift gains in its ability to identify objects, the skill of analyzing scenes, activities, and movements is expected to rapidly improve.

YouTube’s dataset, for example, contains more than 450,000 hours of labeled video that it hopes will spur “innovation and advancement in video understanding.” The breadth of organizations involved in building such datasets gives some idea of the field’s importance.

All the system would need to do would be to look out for pupils clumping together and then alert a human, who could check the video feed to see what’s happening or head over in person to investigate.
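
The article doesn't specify how "clumping" would be detected, but a simple density check over detected positions is one plausible approach. A minimal sketch under that assumption (the radius and count thresholds are arbitrary placeholders):

```python
import math

def find_clumps(positions, radius=2.0, min_count=4):
    """Flag any detected person with at least `min_count` - 1
    neighbours within `radius` -- a dense clump worth a human look."""
    clumped = []
    for i, (x1, y1) in enumerate(positions):
        neighbours = sum(
            1 for j, (x2, y2) in enumerate(positions)
            if i != j and math.hypot(x1 - x2, y1 - y2) <= radius
        )
        if neighbours >= min_count - 1:
            clumped.append(i)
    return clumped

# Three people standing together, one far away:
people = [(0, 0), (1, 0), (0, 1), (50, 50)]
if find_clumps(people, radius=2.0, min_count=3):
    print("ALERT: possible crowd forming -- check the feed")
```

The key property matches the article's point: the system only raises a flag and leaves interpretation to a human checking the video feed.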

Alex Hauptmann, a professor at Carnegie Mellon who specializes in this sort of computer analysis, says that although AI has propelled the field forward hugely in recent years, there are still fundamental challenges in getting computers to understand video.

“If you’re right in front of a [camera] and playing a guitar, it can track you down to the individual fingers.” This is a big problem for CCTV, where the cameras are often grainy and the angles are often weird.

Similarly, while AI is great at identifying what’s going on in a video at a fairly high level (e.g., someone is brushing their teeth or looking at their phone or playing football), it can’t yet extract vital context.

It might be able to look at the footage and say “this person is running,” but it can’t tell you whether they’re running because they’re late for a bus or because they’ve just stolen someone’s phone.

(Facial recognition using low-quality CCTV footage is another thing.) Identifying things like cars and items of clothing is also pretty solid, and automatically tracking one person across multiple cameras can be done, but only if the conditions are right.

In Xinjiang, traditional methods of surveillance and civil control are combined with facial recognition, license plate scanners, iris scanners, and ubiquitous CCTV to create a “total surveillance state” where individuals are tracked constantly in public spaces.

In Moscow, a similar infrastructure is being assembled, with facial recognition software plugged into a centralized system of more than 100,000 high-resolution cameras which cover more than 90 percent of the city’s apartment entrances.

In these sorts of cases, there’s likely to be a virtuous cycle in play, with the systems collecting more data as the software gets better, which in turn helps the software get even better.

Studies have shown that machine learning systems soak up the racial and sexist prejudices of the society that programs them, from image recognition software that associates women with kitchens, to criminal justice systems that rate black defendants as more likely to re-offend.

“There’s a real danger with this that we are universalizing biased pictures of criminality and crime.” Even if we manage to fix the biases in these automated systems, that doesn’t make them benign, says ACLU senior policy analyst Jay Stanley.

“The concern is that people will begin to monitor themselves constantly, worrying that everything they do will be misinterpreted and bring down negative consequences on their life.” Stanley also says that false alarms from inaccurate AI surveillance could lead to more dangerous confrontations between law enforcement and members of the public.

“It’s troubling to me that a lot of these systems are being pumped into our core infrastructure without the democratic process that would allow us to ask questions about their effectiveness, or to inform the populations they’ll be deployed on,” says Whittaker.

“This is one more example in the drumbeat of algorithmic systems that are offering to classify and determine the typology of individuals based on pattern recognition drawn from data that embed cultural and historical biases.” When we asked IC Realtime about the potential for its AI surveillance to be abused, the company gave an answer that’s common in the tech industry: these technologies are value neutral, and it’s only how they’re implemented and by whom that makes them either good or bad.

A New Security Startup Wants to Stop School Shootings with Artificial Intelligence

In this age of frequent deadly shootings, the majority of teens—and their parents—worry about the possibility of an attack at their own school.

Athena Security, created by Fortune 40 Under 40 alum Lisa Falzone, is a camera system that uses AI to recognize violent or criminal behavior, such as a gun being drawn, and can report it to police.

Whether it’s implemented at a business, school, or around a city, the system can send an alert directly to the owner’s phone when it recognizes dangerous motions like a gun being pointed, a knife being pulled, or people fighting.

Athena’s most advanced systems, like the one at Archbishop Wood High School, are capable of coordinating with third-party systems to lock doors, halt elevators, or communicate directly with people on site.
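
Athena's integration details aren't public, but a common pattern for this kind of coordination is a rule table mapping detected events to response actions. A hypothetical sketch (the event names and action identifiers are invented; they mirror the behaviour the article describes: phone alerts, door locks, elevator halts):

```python
# Hypothetical event-to-response dispatch for a security system.
# Each detected event maps to an ordered list of response actions.

RESPONSES = {
    "gun_pointed":  ["notify_owner", "notify_police", "lock_doors"],
    "knife_pulled": ["notify_owner", "notify_police"],
    "fight":        ["notify_owner"],
}

def respond(event: str) -> list[str]:
    """Return the ordered list of actions for a detected event
    (empty if the event has no configured response)."""
    return RESPONSES.get(event, [])

print(respond("gun_pointed"))  # ['notify_owner', 'notify_police', 'lock_doors']
```

Keeping the mapping in data rather than code is what lets the same detection pipeline serve a business, a school, or a city with different escalation policies.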

“The feedback that we’ve gotten from law enforcement, especially for retailers, convenience stores, banks—criminals going after money—is that’ll be really helpful in deterring crime.”

While the security system is just getting started, Athena’s co-founders are no strangers to success: Falzone and Ciabarra are the founders of iPad point-of-sale company Revel Systems, for which they raised over $120 million and created 700 jobs.

China’s “Minority Report” Style Plans Will Use AI to Predict Who Will Commit Crimes

China is, in many ways, the ideal place to use this kind of technology.

The nation deploys facial recognition in schools to counter cheating, on streets to fight jaywalking, and even in bathrooms to limit toilet paper waste.

Although AI is certainly a potential surveillance tool, it can also be used to protect privacy, keep healthcare records private, secure financial transactions, and prevent hacking.

In other words, while you might object to certain applications, it’s hard to argue against AI technology on the whole if you’re concerned with the future of safety and privacy both online and off.

China: "the world's biggest camera surveillance network" - BBC News

China has been building what it calls "the world's biggest camera surveillance network". Across the country, 170 million CCTV cameras are already in place and ...

Real-time event detection for video surveillance applications

Events are detected in real-time in embedded platforms using optimized computer vision and machine learning algorithms. The detection process is based on ...
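
The snippet above is truncated, but real-time event detection on embedded platforms often starts with something as simple as frame differencing: flag an event when enough pixels change between consecutive frames. A stdlib-only sketch with toy grayscale "frames" (the thresholds are placeholders, and production systems would use a proper background-subtraction model):

```python
def frame_diff_event(prev, curr, pixel_thresh=30, count_thresh=3):
    """Return True if enough pixels changed between two grayscale
    frames (given as lists of rows) to count as a motion event."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > pixel_thresh
    )
    return changed >= count_thresh

# Toy 3x3 frames: a bright object appears in one corner.
frame_a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
frame_b = [[200, 200, 0], [200, 200, 0], [0, 0, 0]]
print(frame_diff_event(frame_a, frame_b))  # True: 4 pixels changed
```

Cheap tests like this act as a gate, so that the heavier machine-learning classifiers the abstract mentions only run on frames where something is actually happening.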

China Surveillance: Smart cameras help track almost every move

China has around 100 million cameras on its streets - the highest number in the world. The government says it's to improve efficiency and security, but it comes ...

How Artificial Intelligence is changing the face of Cyber Security

Interviewer: Manek Dubash, NetEvents. Interviewees: Stuart McClure (CEO, President & Founder, Cylance), Kathryn Hume (Fast Forward Labs), Paul Jackson, ...

AI POLICE STATE: China to use technology to predict crimes BEFORE they happen

China is hoping to use artificial intelligence (AI) to look into the future and ...

World's Largest Application of Artificial Intelligence -- In China

Do you know what's the most massive application of artificial intelligence (AI) technology in China? And is it for good or bad? This video tells you exactly that.

Chinese Street surveillance. Object / Face Recognition.

Part of the "sense" line of real time video analysis. Click Here To Subscribe!

AI facial recognition can detect your sexuality and politics

AI, FACIAL RECOGNITION And GOVERNMENT Merge Into Unholy Trinity of SURVEILLANCE!


Intelligent video surveillance | Tomorrow Today

Use of CCTV is on the rise in cities around the world. But cameras don't necessarily prevent crime. Researchers are now developing an automated alarm system ...