Artificial intelligence - Axios
Penalties would include imprisonment, rising to as much as 10 years if the deepfake could incite violence or disrupt government or an election.
David Greene, civil liberties director at the Electronic Frontier Foundation, says making malicious deepfakes a federal crime may hamper protected speech — like the creation of parody videos.
Reality check: New laws would be a last line of defense against deepfakes, as legislation can’t easily prevent their spread.
Can Artificial Intelligence Be Biased?
Introduction

In pursuit of automation-driven efficiencies, rapidly evolving artificial intelligence (AI) tools and techniques (such as neural networks, machine learning, predictive analytics, speech recognition, natural-language processing and more) are now routinely used across nations' governments, industries, organizations and academia (NGIOA) for navigation, translation, behavior modeling, robotic control, risk management, security, decision making and many other applications.
Human versus Machine Decision-Making Processes

Across cyberspace, geospace and space (CGS), technology revolutions are driven not only by accidental discovery but also by societal needs. The question we all, individually and collectively, need to evaluate first and foremost is whether there really is a need for decision-making algorithms, and if so, where and why.
Artificial intelligence tools and techniques are increasingly expanding and enriching decision support: coordinating the timely and efficient delivery of diverse data sources, analyzing evolving data sources and trends, providing defined forecasts, maintaining data consistency, quantifying the uncertainty of data variables, anticipating the user's data needs (human or machine), presenting information in the most appropriate form, and suggesting possible courses of action based on the intelligence gathered.
Algorithmic Engineering Process and Penetration of Bias

While there are growing concerns about machine-learning decision-making models, AI is being woven into the very fabric of human society and into everything individuals and entities do across nations' governments, industries, organizations and academia (NGIOA) in cyberspace, geospace and space (CGS).
Since we are trying to redefine and redesign systems that bring us more trust and transparency, there is a clear need to promote equality, transparency and accountability in the design and development of decision-making algorithms, and to ensure that data transparency, training, review and remediation are considered throughout the entire algorithmic engineering process.
Let's not forget Google Photos labeling images of black people as "gorillas." Decision-making algorithms are not inherently biased; algorithmic outcomes depend on a number of variables, including how the software is designed, developed and deployed, and on the quality, integrity and representativeness of the underlying data sources. There is nonetheless a need for a new approach to defining and designing decision-making algorithms.
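How unrepresentative data propagates into decisions can be sketched in a few lines. The dataset, groups and scenario below are entirely hypothetical: a toy "model" that learns each group's historical approval rate will faithfully reproduce whatever skew the historical record contains.

```python
# A minimal sketch (hypothetical data) of bias penetrating an algorithm:
# the "model" learns each group's average historical approval rate and
# then reproduces that disparity in its own decisions.
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved)
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def train(records):
    """Learn each group's historical approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Approve whenever the learned group rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(rates)                # {'A': 0.75, 'B': 0.25}
print(predict(rates, "A"))  # True  -> historical skew is reproduced
print(predict(rates, "B"))  # False
```

Nothing in the training step is malicious; the disparity comes entirely from the representativeness of the data the algorithm was given.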
Evaluating the implications of bias penetrating decision-making algorithms leads to a further question: can data-protection safeguards be built into algorithms from the earliest stages of development to prevent bias from entering them?
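One form such a safeguard could take is an automated fairness audit run before deployment. The sketch below (hypothetical data and tolerance; real audits use richer metrics) checks decisions for demographic parity, i.e. whether positive-decision rates differ too much between groups.

```python
# A minimal sketch of one possible safeguard: auditing a model's decisions
# for demographic parity before deployment. Data, group labels and the
# tolerance value are hypothetical.
def demographic_parity_gap(decisions):
    """Largest difference in positive-decision rates between any two groups.

    decisions: list of (group, decision) pairs, decision in {0, 1}.
    """
    totals, positives = {}, {}
    for group, d in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + d
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical decisions produced by a candidate model
audit = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(audit)
print(round(gap, 3))  # 0.333
if gap > 0.2:         # hypothetical tolerance
    print("fail: disparity exceeds tolerance; review data and model")
```

Wiring a check like this into the engineering pipeline makes review and remediation a routine stage rather than an afterthought.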
However, when it comes to decision-making applications for systems at every level (global, national or local, whether in government agencies, banks, credit agencies, courts, prisons or educational institutions), there is a need for a global standard of best practices for defining and determining which algorithms can be used with equality, fairness and objectivity.