Who Is Liable if AI Violates Your Human Rights?


All around the world, tech companies are rushing to release AI chatbots and AI technology. However, rapid technological progress without legal accountability could lead to human rights violations.

by Alina Liebholz

May 29, 2023

Artificial intelligence (AI) has become part of our daily life and will become more important over the years. It can solve complex problems, reduce human errors and increase productivity. 

However, if legal frameworks and paths of accountability do not adapt and evolve alongside swift technological advancements, AI may give rise to human rights violations. This is particularly important since private companies often control AI.

AI technology can take various forms. There isn’t a universally accepted definition of what AI is, but in essence, it’s a technology that can perform tasks that require some degree of intelligence. 

Tech giant Google defined it as a science that builds “computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.”

Recently, there has been growing awareness of generative AI platforms like ChatGPT. Yet, AI technology is being used across a wide range of sectors, including home technology, healthcare, security, and finance.

Anna Bacciarelli, program manager in Human Rights Watch’s Tech and Human Rights division, stated in an interview regarding platforms such as ChatGPT: 

“If generative AI is the future, then this is a vision proposed and realized by a handful of powerful tech companies and individuals, each with their own commercial interests at stake. This raises important questions about corporate power and accountability.”


How can AI violate human rights?

One example of how AI could infringe, or be used to infringe, human rights is when users provide sensitive information to chatbots. If not protected properly, this could compromise the user's privacy rights. Another example is that AI can foster discrimination through its own algorithms, as has been observed in tests of facial recognition and medical algorithms.

Additionally, AI surveillance technology can harm one’s right to peaceful assembly and protest by giving governments information on the identity of protestors. Lastly, when AI is used to censor lawful content on social media sites, it may harm the right to free speech.

These are just some examples of how AI could infringe on human rights. Such violations affect the right to privacy, freedom of expression, peaceful assembly, and non-discrimination, which are all protected by the Universal Declaration of Human Rights (UDHR).

However, it’s worth noting that AI can also enhance human rights when used appropriately. For instance, by contributing to a reduction of medical errors or by giving access to education by translating content.


Furthermore, the United Nations, in partnership with the Danish Institute for Human Rights and a Danish social enterprise called Specialisterne, is using AI to help governments manage the vast amounts of human rights guidance.

Will AI companies be held accountable?

Many tech companies rely on self-regulation for human rights compliance. However, self-regulation cannot be trusted as a mechanism to ensure that compliance. After all, as Bacciarelli said: “AI is simply too powerful, and the consequences for rights are too severe, for companies to regulate themselves.”

As of May 2023, there is no standardised or internationally agreed approach to regulating AI with respect to human rights.

In fact, as Chatham House, The Royal Institute of International Affairs stated in a research paper on AI governance and human rights: “Many sets of AI governance principles produced by companies, governments, civil society and international organizations fail to mention human rights at all.”

Under international human rights law, companies have the responsibility to respect human rights. However, this soft law instrument, laid out in the UN Guiding Principles on Business and Human Rights (UNGPs), is not legally binding.

International human rights law was created in the late 1940s, and it did not anticipate the rapid technological advancements or the increasing influence of private companies that followed.

As long as international human rights law does not recognise the legal accountability of private actors, national legislation needs to cover questions of liability. International human rights law requires the state to “respect, protect and fulfil” human rights.

In April 2021, the EU proposed a bill on rules for artificial intelligence, and at the beginning of May, the European Parliament voted on the bill, which will be finalised in June 2023. The EU bill requires AI systems to be fully compliant with human rights. The law outlines how the existing European Convention on Human Rights applies to AI technology. 

Following the vote, Mher Hakobyan, Advocacy Advisor at Amnesty International, said: “Today the European Parliament sent a strong signal that human rights must be at the forefront of this landmark legislation, by voting to ban several AI-based practices which are incompatible with human rights.”

The EU is not alone in its efforts to bring forward legislation covering AI and human rights. The US, Canada, the UK, South Korea and many other countries have also produced bills on the issue.

Although national laws are being developed, they do not provide the equal protection globally that international human rights regulations would if applied to private businesses. It remains to be seen whether this expansion of liability will be made in light of technological advancements.

Meanwhile, it will be essential to educate users about human rights implications when they are using AI. For instance, a UNESCO and UNITAR youth working group created an educational course about the interaction of AI and human rights, free to access in 20 languages. 

Using educational tools can assist users in comprehending the effect of AI on human rights and guide them in using AI in a manner that respects their rights, particularly in the absence of enforceable international laws against private companies.


This article was originally published on IMPAKTER. Read the original article.
