
New Toronto Declaration calls on algorithms to respect human rights

Today in Toronto, a coalition of human rights and technology groups released a new declaration on machine learning standards, calling on both governments and tech companies to ensure that algorithms respect basic principles of equality and non-discrimination.

While not legally binding, the declaration is meant to serve as a guiding light for governments and tech companies dealing with these issues, similar to the Necessary and Proportionate principles on surveillance.

“This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects,” the declaration suggests, “[and making decisions] subject to accessible and effective appeal and judicial review.” In practice, that will also mean significantly more visibility into how popular algorithms work.

IGF 2018 WS #302 The Toronto Declaration on machine learning: the way forward

From scoring systems used in criminal justice that discriminate against people of color, to facial recognition systems that are more accurate at identifying white men than women of color, to recruitment and education algorithms that have disadvantaged candidates with potential health issues or from lower-income backgrounds in hiring and university placement, the examples of discrimination are piling up.


In May 2018, Amnesty International, Access Now, and a handful of partner organizations launched the Toronto Declaration on protecting the rights to equality and non-discrimination in machine learning systems.

The Declaration is a landmark document that seeks to apply existing international human rights standards to the development and use of machine learning (ML) systems. Machine learning can be defined as "provid[ing] systems the ability to automatically learn and improve from experience without being explicitly programmed."

AI can increase efficiency and find new insights into diseases, but one of the most significant risks with machine learning is the danger of amplifying existing bias and discrimination against certain groups, often marginalized and vulnerable communities, who already struggle to be treated with dignity and respect. Deployed without safeguards, ML systems can reinforce and even augment existing discrimination, as choices made in the design of AI systems lead to biased outcomes, whether they are intended or not.

The Declaration also came amid the backlash that hit social media companies, when there was a movement to proactively address these issues.

We chose equality and non-discrimination in machine learning as a focus because it was already a pressing issue with an increasing number of real-life problems.

It is based on the human rights due diligence framework (originally defined in the UN Guiding Principles on Business and Human Rights) and calls for independent third-party audits where there is a significant risk of human rights abuses.

The Declaration calls on governments to ensure standards of due process for the use of machine learning in the public sector, to act cautiously on the use of machine learning systems in the justice system, and to outline clear lines of accountability.

The development and launch of the Declaration is just a first step towards making the human rights framework a foundational component of the fast-developing field of AI and data ethics.

The involvement of civil society is important for asserting the centrality of human rights in the field of AI and data ethics and for ensuring that existing human rights are not diluted by new technologies. Closer collaboration between human rights practitioners and engineers will help to embed human rights standards in the design and deployment of machine learning systems.