AI News
How AI Might Provide a Safety Net for Patients and Providers
Rather than view them as problems, he said, “I think everyone needs to take a step back and see incidental findings as an opportunity — a way to add value to your health system — to separate your service from that of your competition.” NLP technology is far from perfect, Wandtke said, as it may find only about 70 percent of radiologist-recommended follow-ups.
But used as part of a hybrid system that includes people, it can help providers boost patient care while avoiding medical-legal liability. “We don’t lose track of these people when we participate in the health care system in a coordinated fashion,” Wandtke told ITN after his presentation during the NLP session.
Wandtke reported at the SIIM meeting that the preliminary work was expanded across a multihospital network served by 75 radiologists conducting 800,000 diagnostic imaging exams annually; there, a tracking system built around this NLP technology reduced the risk of delayed diagnosis by 80 percent.
The research indicated that NLP-based clinical analytics can serve as a safety net, Wandtke said: “To create a safety net, you have to create a high-reliability system to identify human error early.” Tracking radiologist recommendations manually — without AI assistance — can be costly and labor-intensive, especially in large institutions, which is “why it has not been adopted,” Wandtke said. “And I don’t believe any large health system should try recommendation tracking without utilizing natural language processing.”
“If you start with a manual process, you will learn the areas that can be automated and you can begin to automate using low-level AI in the form of NLP — adding more automation and reducing the amount of manual labor to make it a cost-effective, efficient process,” he said.
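As an illustration of the "low-level AI in the form of NLP" step described above, a minimal sketch might flag reports containing follow-up language with simple pattern matching. The trigger phrases and report texts here are hypothetical, not the system Wandtke describes:

```python
import re

# Hypothetical trigger phrases for follow-up recommendations in report text.
FOLLOW_UP_PATTERNS = [
    r"recommend(?:ed)?\s+follow[- ]?up",
    r"follow[- ]?up\s+(?:ct|mri|ultrasound|imaging)",
    r"repeat\s+(?:ct|mri|ultrasound|imaging)",
    r"further\s+evaluation\s+is\s+(?:recommended|advised)",
]

def flag_follow_up(report_text: str) -> bool:
    """Return True if the report appears to contain a follow-up recommendation."""
    text = report_text.lower()
    return any(re.search(p, text) for p in FOLLOW_UP_PATTERNS)

reports = [
    "Incidental 6 mm pulmonary nodule. Recommend follow-up CT in 6 months.",
    "No acute cardiopulmonary abnormality.",
]
print([flag_follow_up(r) for r in reports])  # [True, False]
```

A real system would pair pattern matching like this with human review of flagged and unflagged reports — the hybrid approach the article describes — since keyword rules alone miss paraphrased recommendations.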
Safe, low-cost, modular, self-programming robots
When companies use robots to produce goods, they generally have to position their automatic helpers in safety cages to reduce the risk of injury to people working nearby.
The modules can be combined in almost any way desired, enabling companies to customize their robots for a wide range of tasks - or simply replace damaged components.
At the same time, the robot's control center uses input from cameras installed in the room to collect data on the movements of people working nearby.
IMPROV shortens cycle times

For their toolbox set, the scientists used standard industrial modules for some parts, complemented by the necessary chips and new components from the 3D printer.
In a user study, Althoff and his team showed that IMPROV not only makes working robots cheaper and safer - it also speeds them up: They take 36% less time to complete their tasks than previous solutions that require a permanent safety zone around a robot.
Hackers are turning our AI security systems against us — but they can be stopped
With the use of AI growing in almost all areas of business and industry, we have a new problem to worry about – the “hijacking” of artificial intelligence.
Steps organizations can take include paying more attention to basic security, shoring up their AI-based security systems to better detect the tactics hackers use, and educating personnel on the dangers of phishing tactics and other methods used by hackers to compromise systems.
Recognizing the patterns of attack, our AI systems, based on machine learning and advanced analytics, are able to alert administrators that they are being attacked, enabling them to take action to shut down the culprits before they go too far.
Machine learning – the heart of what we call artificial intelligence today – gets “smart” by observing patterns in data, and making assumptions about what it means, whether on an individual computer or a large neural network.
So if a specific action occurs in a computer's processors when particular processes are running, and that action recurs on the neural network and/or on the specific computer, the system learns that the action signals a cyber-attack and that an appropriate response is needed.
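The pattern-learning idea above can be sketched with a toy detector that counts (process, action) pairs seen during normal operation and flags pairs it rarely saw. The process and action names are invented for illustration; this is not how any particular security product works:

```python
from collections import Counter

class PatternDetector:
    """Toy detector: learns how often event patterns occur, flags rare ones."""

    def __init__(self, min_count: int = 2):
        self.counts = Counter()
        self.min_count = min_count

    def observe(self, process: str, action: str) -> None:
        # Training phase: record (process, action) pairs seen in normal operation.
        self.counts[(process, action)] += 1

    def is_suspicious(self, process: str, action: str) -> bool:
        # A pair seen fewer than min_count times during training is flagged.
        return self.counts[(process, action)] < self.min_count

detector = PatternDetector()
for _ in range(10):
    detector.observe("backup.exe", "read_files")
detector.observe("backup.exe", "upload_external")  # seen only once

print(detector.is_suspicious("backup.exe", "read_files"))       # False
print(detector.is_suspicious("backup.exe", "upload_external"))  # True
```

The same property is what the article's next paragraphs exploit: a detector that trusts observed frequencies can be "befriended" by an attacker who patiently feeds it malicious behavior until it looks routine.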
Instead of trying to outfox intelligent machine-learning security systems, hackers simply “make friends” with them – using their own capabilities against them, and helping themselves to whatever they want on a server.
In one famous experiment at Kyushu University in Japan, scientists were able to fool AI-based image recognition systems nearly three quarters of the time, “convincing” them that they were looking not at a cat, but a dog or even a stealth fighter.
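The underlying mechanism — a small, targeted perturbation flipping a model's decision — can be shown with a toy linear classifier. All numbers here are invented for illustration and have nothing to do with the Kyushu setup:

```python
# Toy linear classifier: score = w . x + b; positive score => class "cat".
w = [0.8, -0.5, 0.3]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [0.2, 0.1, 0.4]  # score(x) = 0.33 > 0, so classified as "cat"

# Adversarial step: nudge each feature a little *against* the weight's sign,
# the direction that lowers the score fastest (the idea behind gradient-based
# attacks on image classifiers).
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x) > 0)      # True  -> "cat"
print(score(x_adv) > 0)  # False -> no longer "cat"
```

For a linear model the flip is exact arithmetic; for deep image classifiers the same trick works through the gradient, which is why perturbations invisible to humans can relabel a cat as something else entirely.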
Stricter controls on how data is evaluated – for example, examining the timestamps on log files more closely to determine if they have been tampered with – could take from hackers a weapon that they are currently successfully using.
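The timestamp check mentioned above can be as simple as flagging entries in an append-only log whose timestamps run backwards. The log format here is a hypothetical ISO-timestamp-prefixed line, chosen only for the sketch:

```python
from datetime import datetime

def find_backdated_entries(lines):
    """Return indices of log lines whose timestamp precedes the previous line's.

    Assumes each line starts with an ISO timestamp, e.g. '2021-01-19T10:00:00 msg'.
    Out-of-order timestamps in an append-only log are one sign of tampering.
    """
    suspicious = []
    prev = None
    for i, line in enumerate(lines):
        ts = datetime.fromisoformat(line.split(" ", 1)[0])
        if prev is not None and ts < prev:
            suspicious.append(i)
        prev = ts
    return suspicious

log = [
    "2021-01-19T10:00:00 user login",
    "2021-01-19T10:05:00 file accessed",
    "2021-01-19T09:55:00 audit record rewritten",  # earlier than the line above
]
print(find_backdated_entries(log))  # [2]
```

A check like this only catches clumsy tampering — a careful attacker forges plausible timestamps — which is why it belongs alongside, not instead of, the other defenses the article lists.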
By shoring up their defenses against basic tactics, organizations can head off attacks of many kinds – including those using advanced AI – by keeping malware and exploits off their networks in the first place.
Educating employees on the dangers of responding to phishing pitches – including rewarding those who avoid them and/or penalizing those who don’t – along with stronger basic defenses like sandboxes and anti-malware systems, and more intelligent AI defense systems can go a long way to protect organizations.
Stanford AI Safety
The control of unmanned aircraft systems must be rigorously tested and verified to ensure their correct functioning and airworthiness.
This simulation-based approach provides a way to find the most likely path to a failure event, accelerating the search by formulating stress testing as a sequential decision process and then optimizing it using reinforcement learning.
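The search described above can be caricatured in a few lines: repeatedly simulate a toy system under random disturbance sequences and keep the most likely sequence that reaches a failure state. Plain random search stands in here for the reinforcement learning the text mentions, and the dynamics and likelihood penalty are invented for the sketch:

```python
import random

def simulate(disturbances):
    """Toy system: state drifts back toward 0; failure if |pos| >= 1."""
    pos = 0.0
    log_prob = 0.0
    for d in disturbances:
        pos = 0.9 * pos + d
        # Larger disturbances are less likely; penalize them so the search
        # prefers the *most likely* path to failure, not just any path.
        log_prob -= d * d
        if abs(pos) >= 1.0:
            return True, log_prob
    return False, log_prob

random.seed(0)
best = None
for _ in range(5000):
    seq = [random.uniform(-0.5, 0.5) for _ in range(20)]
    failed, lp = simulate(seq)
    if failed and (best is None or lp > best[0]):
        best = (lp, seq)

print(best is not None)  # a failure path was found
```

Replacing the random sampler with a learned policy that is rewarded for high-likelihood failures is what turns this brute-force loop into the sequential-decision formulation the paragraph describes.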
- On 19 January 2021
AI "Stop Button" Problem - Computerphile
How do you implement an on/off switch on a General Artificial Intelligence? Rob Miles explains the perils. Part 1: ...
Will automation take away all our jobs? | David Autor
Here's a paradox you don't hear much about: despite a century of creating machines to do our work for us, the proportion of adults in the US with a job has ...
Can artificial intelligence help predict and prevent traffic accidents? - BBC Click
Click visits the US to see how predictive analytics help in emergencies. Plus Honda's work on creating robots with emotions and some AR pilot training.
The big debate about the future of work, explained
Why economists and futurists disagree about the future of the labor market. Subscribe to our channel! Sources: ..
Why AI Will DESTROY Us All - How ARTIFICIAL INTELLIGENCE Will Summon The Demon (2019)
Welcome to Open Your Reality. In this video, I'm going to talk about why artificial intelligence will destroy us all. Why do I say that? Because AI danger to ...
Are Artificial Sweeteners REALLY Safe?
When artificial sweeteners were first released, scientists were finding links to various forms of life-threatening cancer! A lot has changed since then, but are they ...
How Artificial Intelligence (AI) Will Affect Ports and Terminals - TBA
Dr. Yvo Saanen, Commercial Director and Founder of TBA — an industry-leading consultancy, simulation and software specialist for ports, terminals and ...
Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1
We can expect AI systems to accidentally create serious negative side effects - how can we avoid that? The first of several videos about the paper "Concrete ...
Police Unlock AI's Potential to Monitor, Surveil and Solve Crimes | WSJ
Law enforcement agencies like the New Orleans Police Department are adopting artificial-intelligence based systems to analyze surveillance footage.
Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | Artificial Intelligence Podcast
Eric Weinstein is a mathematician, economist, physicist, and managing director of Thiel Capital. He formed the "intellectual dark web" which is a loosely ...