AI News — Evan Selinger (evan.selinger@rit.edu), Rochester Institute of Technology

A.I. Ethics Boards Should Be Based on Human Rights

Whenever tech companies talk about ethics, critics worry that the talk is a strategy for avoiding stronger government regulation and winning goodwill: empty slogans followed by minimal legal compliance.

If tech companies want to create meaningful ethical practices, they need to invite the right people to their ethics boards and empower these folks to make publicly available recommendations that hold businesses accountable to transparent standards.

In military, law enforcement, banking, criminal justice, employment, and even product delivery contexts, algorithmic systems can threaten human rights by automating discrimination, chilling public speech and assembly, and limiting access to information.

“Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age.” Technology companies should embrace this standard by explicitly committing to a broadly inclusive and protective interpretation of human rights as the basis for corporate strategy regarding A.I.

If due process reveals that a board member says or does something that is substantively out of line with human rights, she should be removed no matter how high her profile or how significant her past contributions.

The penalty is strong but appropriate, and it disincentivizes “digital ethics shopping,” which is the corporate malpractice of appealing to malleable ethical benchmarks to justify status quo behavior.

Google's A.I. Principles were central to the debate because they include a corporate commitment to avoid creating or using "technologies whose purpose contravenes widely accepted principles of international law and human rights." James is known for anti-LGBTQ+ positions concerning trans individuals who don't fit her personal views on human sexuality, and she leads the Heritage Foundation, long a proponent of "traditional" marriage. How could she be expected to hold Google accountable to its stated ideals?

The suffering of those targeted by such views wouldn't be negated even if, somehow, James set aside her conflicting opinions in order to hold the company to its ideals during board meetings — ideals that, at least in part, clashed with her own convictions.

The Principles are too vaguely worded to count as a clear policy statement, and they contain caveats that may function as loopholes for cooperating with governments or businesses that aren't fully committed to human rights.

Simply culling outliers who can't agree on the innate worth and equality of every human shouldn't be considered a radical act of exclusion, any more than recognizing the need to restrict hate speech equates to opposing the fundamental right to free speech.

Properly functioning media limit advocacy-based arguments (like this industry spin) to the opinion section and have guidelines for avoiding false balance on issues like climate change and vaccine safety.

For tech companies, being responsible global actors means rejecting intolerance and the abuses of language that prop it up, including weaponized appeals to "ideological diversity" that really mean permission for ethics boards to reduce or redefine the fundamental human rights that every person deserves to have protected.

Tech companies shouldn't be in the business of lethal autonomous weapons, government scoring systems, or government facial recognition systems if they can't make a robust case for how these endeavors can coexist with human rights protections.