AI News, Standards and Oversight of Artificial Intelligence

Global Politics and the Governance of Artificial Intelligence

The Governance of Artificial Intelligence (AI) Program at the University of Oxford's Future of Humanity Institute focuses on the political challenges associated with the rapid development of artificial intelligence.

While we may be able to draw important lessons from past attempts at governing powerful technologies such as nuclear technology, biotechnology, and aviation, examples of unambiguously successful technology governance are historically rare.

However, one should keep in mind that many of these historical examples might also be of limited transferability in the context of AI, since the development of AI involves distinct sets of nations, private actors, and considerably broader stakes and interests.

How we manage these near-term challenges could have lasting effects on the shape and vitality of our governance landscape, and determine how well-equipped we are to take on later issues, including powerful military capabilities (such as in cyberspace), pervasive labor displacement, concentration of market power, and the safety and alignment of increasingly advanced AI systems.

What we would ideally put in place moving forward is an operationalization of the 'common good' principle: mechanisms by which actors developing and deploying advanced AI technologies are incentivized and rewarded for pursuing progress in AI safely and in ways that benefit humanity.

European Commission Issues ‘Ethics Guidelines for Trustworthy AI’

Following extensive consultations, the European Commission’s High-Level Expert Group on AI released ethics guidelines on the use of artificial intelligence.

Three broad principles emerged from those guidelines, suggesting that trustworthy AI should be lawful, ethical, and robust. To meet those criteria, the guidelines establish seven specific requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

While human agency and oversight of AI decisions are essential, they are only possible through proper governance, which in turn depends on technical robustness at the outset.

Ensuring technical robustness and safety

The guidelines recommend a four-pronged approach to technical robustness and safety: resilience to attack and security; a fallback plan and general safety; accuracy; and reliability and reproducibility. These technological requirements further highlight that although complex, AI should not be perceived as mysterious or unmanageable.

Maintaining privacy and data governance

The guidelines approach governance from the perspective of maintaining privacy and data protection, as well as the quality and integrity of data.

Promoting transparency and accountability through governance

Although the EU's guidelines on AI aren't compulsory, they are written in a style that emphasizes principles-based regulation, where those principles focus on process.

While this principle does not require full disclosure of intellectual property, it does require an assessment of algorithms as well as data and design processes, along with an evaluation by internal and external auditors of those elements.

This means that companies using AI systems should be prepared to explain to regulators (and data owners) in some detail how the AI works: what specific data is input, how those inputs are processed and analyzed by the software, and what checks and audits there are on the system. The guidelines suggest that as the technology continues to develop and becomes better understood by both users and regulators, they may well grow more detailed and prescriptive, and may even be codified.

Companies whose AI tools take in data from EU residents would be well served to focus on these guidelines now and move toward the transparency, human oversight, and ethical practices they encourage.
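To make the transparency and auditability expectations above concrete, here is a minimal, hypothetical sketch of the kind of decision audit trail a company might keep. The guidelines do not prescribe any particular implementation; the `AuditedModel` wrapper, its method names, and the toy credit rule are all illustrative assumptions.

```python
import json
from datetime import datetime, timezone

class AuditedModel:
    """Wraps a prediction function and records every decision for later review.

    Hypothetical sketch of the audit trail the transparency requirement points
    toward; nothing here is mandated by the EU guidelines themselves.
    """

    def __init__(self, predict_fn, model_name):
        self.predict_fn = predict_fn
        self.model_name = model_name
        self.audit_log = []  # in practice: append-only, access-controlled storage

    def predict(self, features):
        decision = self.predict_fn(features)
        # Record what data went in, what came out, and when -- the elements a
        # company should be able to explain to regulators and data owners.
        self.audit_log.append({
            "model": self.model_name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": features,
            "decision": decision,
        })
        return decision

    def export_log(self):
        """Serialize the trail for internal or external auditors."""
        return json.dumps(self.audit_log, indent=2)

# Example: a toy credit-scoring rule wrapped for auditability.
model = AuditedModel(lambda f: "approve" if f["income"] > 30000 else "review",
                     model_name="credit-v1")
model.predict({"income": 45000, "applicant_id": "A-17"})
```

A real system would also log model version, training-data provenance, and the outcome of periodic internal and external audits, since the guidelines call for assessment of algorithms, data, and design processes, not just individual decisions.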

How regulation today could avoid tomorrow’s A.I. disaster | Joanna Bryson


Elon Musk calls for regulation of artificial intelligence

Mercatus Center senior fellow Adam Thierer on Elon Musk's warning about artificial intelligence.

The future of AI: risks and challenges

The future of AI encompasses many risks and challenges. The public perception of the future of AI can be a challenge as many fear artificial intelligence might ...

Governments Passing Laws For Robots

Governments in Europe are starting to propose both rules and rights for A.I. and robots. Ana Kasparian, Ben Mankiewicz, Grace Baldridge, and Aida Rodriguez, ...

AI in finance

Sundeep Gantori of UBS AG doesn't fear regulatory oversight of the development and use of artificial intelligence in the banking sector. He sees regulation coming in ...

The Dangers of AI: Is Technology Running Us? | Neil Deshmukh | TEDxLehighRiverSalon

Neil understands the potential AI has to revolutionize the world; however, he also has a mission to teach people the truth about technology, informing about the ...

AuditXPRT: AI Solutions for Regulatory Compliance and Audit

A brief introduction to AuditXPRT's iXPRT Platform, a unique Artificial Intelligence solution to automate regulatory compliance and audit. Automate the Mundane.

Hospital Contract Modeling Artificial Intelligence

With millions on the line, learn why contract modeling artificial intelligence will transform traditional modeling techniques. Hospital Revenue

Jonathan Zittrain: Openness and Oversight of Artificial Intelligence

Jonathan Zittrain, faculty director of the Berkman Klein Center and Professor at Harvard Law School, considers the role that both regulators and oversight groups ...

De-Enchanting AI with the Law | Kenneth Anderson | TEDxBoston

We use narrative to help us understand all parts of our world, including our legal world. But what happens when those narratives are outdated or don't match the ...