
Selected Online Reading on Artificial Intelligence and Law

Abstract by the author: This study uses nearly one million Tweets from eight campaigns targeting seven countries to explore the relationship between social media and domestic legal change, specifically in the area of women’s rights.

Drawing on scholarship on the relation between new technologies and international law, I argue that new technology changes legal situations both directly, by creating new entities or enabling new behaviour, and indirectly, by shifting incentives or values.

I argue that many of the challenges raised by AI could in principle be accommodated in the international law system through legal development, and that while AI may aid in compliance enforcement, the prospects for legal displacement – a shift towards an 'automated international law' – appear more remote.

However, I also conclude that technical and political features of the technology will in practice render AI destructive to key areas of international law: the legal gaps it creates will be hard to patch, and the strategic capabilities it offers chip away at the rationales for powerful states to engage fully in, or comply with, international law regimes.

Unlike domestic law, international law has not been aggregated into a pandect, and it is still a daunting task to draw meaningful insights for further analysis, mainly owing to limited data (e.g., trial cases and precedents).

Abstract by the authors: The article argues that virtual personal assistants may reproduce negative gender stereotypes concerning the role of women and the type of work they perform, and considers whether this could be classed as indirect discrimination under international human rights law.

Governing AI safety through independent audits


Foundations of Law and Finance: Augmented Lawyering


A related debate is whether the legal profession’s adherence to the partnership form inhibits capital-raising necessary to invest in new technology.

This Article presents what is to our knowledge the most comprehensive empirical study yet conducted into the implementation of AI in legal services, encompassing interview-based case studies and survey data.

A central theme is that prior debate focusing on the “human vs technology” aspect of change overlooks the way in which technology is transforming the human dimensions of legal services.

We document how these new roles are clustered in multidisciplinary teams (“MDTs”) that mix legal expertise with a range of other disciplinary inputs to augment the operation of technical systems.

Administrative Law in the Automated State

Moreover, one of administrative law's primary tenets – that governmental processes should be transparent and susceptible to reason-giving – would seem to stand as a barrier to deploying the very machine-learning algorithms that are driving the emerging trends in automation. That is because machine-learning algorithms – sometimes referred to as “black-box” algorithms – have properties that can make them opaque and hard to explain.

Unlike traditional statistical models, in which variables are selected by humans and the resulting coefficients can be pointed to as explaining specified amounts of variation in a dependent variable, learning algorithms effectively discover their own patterns in the data and do not generate results that attribute explanatory power to specific variables.
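The contrast can be made concrete with a minimal sketch (pure Python, hypothetical data): an ordinary-least-squares fit produces per-variable coefficients that can be inspected and reported, whereas a simple learned predictor – here a nearest-neighbour model standing in for a black-box learner – produces predictions with no coefficients to point to. The data and function names are illustrative, not drawn from any of the studies discussed.

```python
def ols_2var(xs, ys):
    # Ordinary least squares for y ~ b1*x1 + b2*x2 (no intercept),
    # solved via the 2x2 normal equations. The coefficients b1, b2
    # can be "pointed to" as explaining the dependent variable.
    s11 = sum(x1 * x1 for x1, _ in xs)
    s12 = sum(x1 * x2 for x1, x2 in xs)
    s22 = sum(x2 * x2 for _, x2 in xs)
    t1 = sum(x1 * y for (x1, _), y in zip(xs, ys))
    t2 = sum(x2 * y for (_, x2), y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

def knn_predict(xs, ys, query, k=3):
    # A simple nearest-neighbour "learner": it picks up structure in
    # the data, but exposes no per-variable coefficients to inspect.
    order = sorted(range(len(xs)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(xs[i], query)))
    return sum(ys[i] for i in order[:k]) / k

# Hypothetical noise-free data generated by the rule y = 2*x1 + 3*x2.
xs = [(1, 1), (2, 1), (1, 3), (4, 2), (3, 5)]
ys = [2 * x1 + 3 * x2 for x1, x2 in xs]

b1, b2 = ols_2var(xs, ys)
print(b1, b2)                      # recovers the coefficients 2.0 and 3.0
print(knn_predict(xs, ys, (2, 2)))  # a prediction, but no explanation
```

The transparency concern in the surrounding text is precisely this asymmetry: the first model's output doubles as a reason ("x2 contributes 3 units per unit increase"), while the second offers only an answer.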

WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use

Artificial Intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to new WHO guidance published today.

For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.

They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for the millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment.

To limit the risks and maximize the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance: Protecting human autonomy: In the context of health care, this means that humans should remain in control of health-care systems and medical decisions;

Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.