There is a narrative that machine learning systems are ‘ungovernable’. That such systems, often ‘black boxes’ that don’t show their working, are simply unwieldy and unknowable entities. That narrative is a myth.
It is a system’s human creators and operators who are responsible for the technological tools that they create and use – and for any resulting impact. It is crucial that they can be held to account should any harm or wrongdoing occur. This is particularly important in high-stakes situations, where the introduction of new technologies could have a significant impact on people’s rights and liberties.
One of the main aims of creating the Toronto Declaration was to show the importance and relevance of existing human rights laws, standards and principles to the development and use of machine learning systems.
Governance must start with existing laws and standards
The Declaration was drafted during a time when various states, companies and multi-stakeholder groups were drafting ethical principles to guide the development and use of artificial intelligence. Since we published the Toronto Declaration, ethical guidelines, principles and advisors have proliferated in the tech industry.
While an ethical approach is undoubtedly necessary to ensure that new technologies change our world for the better, ethical principles are not universal, measurable or enforceable. This is where human rights come in.
Holding human rights abusers to account
The international human rights framework comprises laws, standards and principles created over decades through consensus and collaboration. They are designed to protect our basic freedoms and dignity, and to ensure that each of us is treated with respect, no matter who we are or where we are in the world.
Human rights are not just abstract concepts: they are defined and protected by law. They include means for accountability – so when our human rights are not respected, those responsible for harm may be held to account.
Human rights and ethics are not mutually exclusive – they are complementary. But human rights are ethics codified in law, with clear definitions and processes to achieve justice.
Safeguarding equality and non-discrimination
While many human rights are impacted by machine learning systems, we focused on the right to equality and non-discrimination in the Toronto Declaration for a few reasons:
- The right to equality is a human right that underpins all other rights.
- To focus! Crafting a statement with many contributors required clear terms and definitions, so we homed in on machine learning systems and equality rights to set parameters for this work. That is not to say that other human rights are not affected by machine learning systems – they are.
- There is significant evidence to show that machine learning systems are already disproportionately impacting the right to equality.
- Machine learning systems are, in some respects, designed to discriminate. They are generally optimized to reinforce successful outputs: over time they may replicate, reinforce and amplify patterns, excluding data that is unfavorable or unpopular. In society, this can manifest as a way of excluding people with less well-represented traits – people who may already be marginalized for their ‘unpopular’ attributes.
- Machine learning and automated systems can impact many people at scale, and quickly, which arguably increases the risk they pose to human rights.