AI Drone Learns to Detect Brawls
Drones armed with computer vision software could enable new forms of automated skyborne surveillance to watch for violence below.
But their work demonstrates one possibility of combining deep learning’s pattern-recognition capabilities with relatively inexpensive commercial drones and the growing availability of cloud computing services.
A key part of this demonstration involved training deep learning algorithms to recognize violent actions by detecting various combinations of body and limb poses in video footage. To create a training dataset, the researchers enlisted 25 interns to gather in an open area and mimic violent actions in five categories: punching, kicking, strangling, stabbing, and shooting. The interns were filmed by a Parrot AR drone from heights ranging from 2 meters to 8 meters.
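The collection process described above can be sketched as a simple annotation schema. The five action labels and the 2-8 meter altitude range come from the article; the class, field, and file names are hypothetical:

```python
from dataclasses import dataclass

# The five violent-action categories named in the article.
ACTIONS = ["punching", "kicking", "strangling", "stabbing", "shooting"]

@dataclass
class TrainingClip:
    """One labeled aerial video clip (hypothetical schema)."""
    path: str          # video file recorded by the drone
    action: str        # one of ACTIONS
    altitude_m: float  # camera height; 2-8 m in the study

    def __post_init__(self):
        # Reject labels and altitudes outside what the study describes.
        assert self.action in ACTIONS, f"unknown action: {self.action}"
        assert 2.0 <= self.altitude_m <= 8.0, "altitude outside study range"

clip = TrainingClip("clips/punch_004.mp4", "punching", 4.0)
print(clip.action)  # punching
```

Validating labels at construction time like this keeps a hand-collected dataset internally consistent before any training starts.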
An unsupervised deep learning neural network automatically learns patterns over time by filtering data through its many layers of artificial neurons from end to end, a process that can yield good predictive accuracy if you have enough computing resources and training data. Singh's workaround came from his University of Cambridge research, which has focused on more streamlined and efficient forms of deep learning capable of running with fewer computing resources and less training data.
This move effectively replaced some of the deep learning process with human engineering input based on what Singh, the human designer, thought would work best for training the neural network to recognize different human body poses.
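The trade-off described above, hand-engineering features rather than learning them in early network layers, can be illustrated with a classic example: computing a joint angle directly from keypoint coordinates. This is a generic sketch, not the authors' actual feature set; all names and values are illustrative:

```python
import math

# Hypothetical hand-engineered pose feature: instead of letting early
# network layers learn it, compute the joint angle directly from
# three keypoint coordinates.

def angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c."""
    ab = (a[0] - b[0], a[1] - b[1])
    cb = (c[0] - b[0], c[1] - b[1])
    dot = ab[0] * cb[0] + ab[1] * cb[1]
    norm = math.hypot(*ab) * math.hypot(*cb)
    return math.degrees(math.acos(dot / norm))

# Shoulder-elbow-wrist keypoints for a fully extended arm:
shoulder, elbow, wrist = (0, 0), (1, 0), (2, 0)
print(round(angle(shoulder, elbow, wrist)))  # 180
```

Features like these require no training data at all, which is one way a designer's prior knowledge can substitute for computing resources.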
There is even the possibility that Singh and his colleagues will move away from trying to have the system recognize specific acts of violence, such as stabbing or kicking, and instead focus on recognizing possible violence in general.
If the drone surveillance system’s accuracy does improve to the point of being commercially viable, Singh still envisions humans being in the loop to check out any suspicious activity or possible outbreaks of violence that the drone highlights.
The automated surveillance system would help narrow down the range of places a human security guard should look, so that the human brain and eyes can take over and quickly exercise proper judgment about the situation.
'Eye in the Sky' Research Project Uses Drones to Spot Violence in Crowds
Researchers at the University of Cambridge, alongside India's National Institute of Technology and the Indian Institute of Science, have just published a research paper detailing the use of unmanned aerial vehicles for real-time surveillance and identification of violent individuals in crowds.
ScatterNet’s deep learning network takes it from there, and can effectively “estimate the pose for each detected human.” Once it does that, it can distinguish between potentially violent subjects in the frame, and those simply moving normally within the crowd.
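The detect-then-classify flow described above can be outlined as a short pipeline. This is a structural sketch only; every function here is a placeholder standing in for a real detector, pose estimator, and classifier:

```python
# Hypothetical end-to-end pipeline mirroring the article's description:
# detect people in a frame, estimate each one's pose, flag violent postures.

def detect_humans(frame):
    # Placeholder: a real system would run a person detector here
    # and return one bounding box per detected human.
    return [{"box": (10, 10, 60, 120)}, {"box": (200, 40, 255, 160)}]

def estimate_pose(frame, box):
    # Placeholder: a pose network would return limb/joint keypoints.
    return {"keypoints": [(0, 0)] * 14}

def is_violent(pose):
    # Placeholder: a classifier trained on the five violent postures.
    return False

def analyze(frame):
    alerts = []
    for person in detect_humans(frame):
        pose = estimate_pose(frame, person["box"])
        if is_violent(pose):
            alerts.append(person["box"])
    return alerts

print(analyze(frame=None))  # [] -> no violent poses flagged in this stub
```

The key design point is the per-person loop: pose estimation and classification run independently for each detected human, which is why (as reported later) accuracy degrades as more people appear in the frame.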
Drones taught to spot violent behavior in crowds using AI
Automated surveillance is going to become increasingly common as companies and researchers find new ways to use machine learning to analyze live video footage.
An algorithm trained using deep learning estimates the poses of humans in the video and matches them to postures the researchers have designated as “violent.” For the purposes of the project, just five poses are included: strangling, punching, kicking, shooting, and stabbing.
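Matching an estimated pose against designated "violent" postures can be sketched as nearest-neighbor matching over pose vectors. The five category names come from the article; the template values, threshold, and vector representation are invented for illustration:

```python
import numpy as np

# Hypothetical reference "violent" posture templates: each is a vector of
# joint angles (radians). The numbers are made up for illustration.
TEMPLATES = {
    "strangling": np.array([0.1, 1.2, 1.1, 0.3]),
    "punching":   np.array([1.5, 0.2, 0.1, 0.0]),
    "kicking":    np.array([0.0, 0.1, 1.4, 1.3]),
    "shooting":   np.array([1.6, 1.5, 0.0, 0.1]),
    "stabbing":   np.array([1.2, 0.8, 0.2, 0.1]),
}

def match_pose(angles, threshold=0.5):
    """Return the closest violent posture, or None if nothing is close."""
    best, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        dist = np.linalg.norm(angles - template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < threshold else None

print(match_pose(np.array([1.45, 0.25, 0.1, 0.05])))  # punching
print(match_pose(np.array([0.5, 0.5, 0.5, 0.5])))     # None
```

The threshold is what separates "violent" from "moving normally": a pose far from every template is left unflagged.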
Singh said attacks like this could be prevented in future if surveillance cameras can automatically spot suspicious behavior, like someone leaving a bag unattended for a long period of time.
Singh and his colleagues report that their system was 94 percent accurate at identifying “violent” poses, but they note that the more people appear in the frame, the lower this figure drops.
Singh acknowledged that this isn’t a perfect reflection of how the surveillance system might perform in the wild, but he said he has plans to test the drones during two upcoming festivals in India: Technozion and Spring Spree, both of which take place at NIT Warangal.
This last point is particularly telling, as Singh and his colleagues don’t produce any figures for the system’s false positive rate, i.e., the frequency with which it identifies nonviolent behavior as violent.
(Singh denied that the system would make this sort of mistake, but said he and his team had not yet produced stats to support this claim.) But even if this particular system has not yet proved itself in the wild, it’s a clear illustration of the direction contemporary research is going.
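The missing figure is straightforward to define from a confusion matrix; a minimal sketch, with made-up counts, showing why reporting accuracy alone is not enough:

```python
def false_positive_rate(tp, fp, tn, fn):
    """FPR = FP / (FP + TN): share of nonviolent behavior flagged violent."""
    return fp / (fp + tn)

def accuracy(tp, fp, tn, fn):
    """Fraction of all predictions (violent or not) that were correct."""
    return (tp + tn) / (tp + fp + tn + fn)

# Illustrative (made-up) counts: a system can report high accuracy while
# still flagging a nontrivial share of nonviolent poses as violent.
tp, fp, tn, fn = 90, 8, 92, 10
print(accuracy(tp, fp, tn, fn))             # 0.91
print(false_positive_rate(tp, fp, tn, fn))  # 0.08
```

Here a 91-percent-accurate classifier still wrongly flags 8 percent of nonviolent cases, which is exactly the statistic the paper does not report.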
Meredith Whittaker, a researcher who examines the social implications of AI, tweeted that Singh and his colleagues’ paper showed there was a “mounting ethical crisis” in the field, with scientists failing to examine potential misuses of the technology they’re helping build.
System for drone surveillance: How violence is boxed
DroneDJ summed up their approach, saying that they use an 'off-the-shelf consumer drone, load it with AI, and have it monitor a crowded area such as a sports stadium or a protest and look for acts of violence such as punching, kicking, strangling, shooting or stabbing.'
James Vincent in The Verge explained that an algorithm trained using deep learning estimates the poses of humans in the video and matches them to postures the researchers have designated as violent.
(It fell to 79 percent accuracy when looking at 10 individuals.) Their work reflects a research interest in exploring ways to use machine learning to analyze live video footage.
In the bigger picture, it is obvious by now that the word 'surveillance' in and of itself is a loaded term, and one thinks of repressive governments eager to silence protestors by putting them under lock and key for flimsy reasons.
- On Wednesday, September 18, 2019
Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification
This is the video demonstration of the paper titled 'Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ...
Aerial Suspicious Action Detection
This is the video demonstration of the algorithm presented in the following paper. Surya Penmetsa, Minhuj Fatima, Amarjot Singh, S.N. Omkar, "Autonomous ...