NIST's AI efforts fall into several categories. AI and machine learning are changing the way society addresses economic and national security challenges and opportunities, and are being used in genomics, image and video processing, materials science, natural language processing, robotics, wireless spectrum monitoring and more.
One of the key questions as these new technologies are created and deployed is: are they trustworthy? What makes an AI technology trustworthy may differ depending on whom you ask, but certain key characteristics support the concept of trustworthiness, including accuracy and accountability, explainability and interpretability, fairness, privacy, reliability, robustness, safety, security (resilience), transparency, and mitigation of harmful bias.
What is AI? Here's everything you need to know about artificial intelligence
Robots and driverless cars
The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI.
Facial recognition and surveillance
In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99% accuracy, provided the face is clear enough in the video.
While police forces in Western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior. They have also expanded the use of facial-recognition glasses by police.
These fears have been borne out by multiple examples of how a lack of variety in training data has negative real-world consequences. In 2018, a research paper from MIT and Microsoft found that facial-recognition systems sold by major tech companies had significantly higher error rates when identifying people with darker skin, an issue attributed to training datasets composed mainly of images of white men.
The issue of the vast amount of energy needed to train powerful machine-learning models was brought into focus recently by the release of the language prediction model GPT-3, a sprawling neural network with some 175 billion parameters. While the resources needed to train such models can be immense, and are largely available only to major corporations, once trained, the energy needed to run them is significantly less.
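To get a sense of that scale, a back-of-the-envelope calculation shows why merely storing such a model, before any training even begins, exceeds the memory of ordinary hardware. The 2-bytes-per-parameter figure below is an illustrative assumption (16-bit weights), not a claim about GPT-3's actual training configuration:

```python
# Rough memory footprint of a 175-billion-parameter model.
# Assumption: 2 bytes per parameter (fp16 weights). Training
# typically needs several times more, for gradients and
# optimizer state, spread across many accelerators.
params = 175e9
bytes_per_param = 2  # illustrative fp16 assumption

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB")  # prints "weights alone: 350 GB"
```

Even under this conservative assumption, the weights alone occupy hundreds of gigabytes, far beyond a single consumer GPU, which is one reason training at this scale remains the province of large, well-resourced labs.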
One argument is that the environmental impact of training and running larger models needs to be weighed against the potentially significant positive impact of machine learning, such as the more rapid advances in healthcare that look likely following the breakthrough made by Google DeepMind's AlphaFold 2.