Human Rights and Artificial Intelligence

By Peter Schaar

Today (10 December 2019) is International Human Rights Day. It recalls that the United Nations General Assembly adopted the Universal Declaration of Human Rights (UDHR) exactly 71 years ago. Its first article reads: 

“All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.

The idea of universal human rights developed more than 200 years ago, and the 1948 United Nations Declaration dates back to analogue times, when digitalisation, let alone “artificial intelligence”, was unthinkable. Nevertheless, these documents still formulate the valid standard for ethically responsible action, both by states and by private actors (German constitutional lawyers would speak here of a “third-party effect”).

Human rights and the underlying idea of human dignity are universal, and not only in the geographical sense. They apply regardless of religious and ethnic differences, and they claim validity independent of technical developments. Anyone who insists on compliance with human rights standards in digitisation is all too easily suspected of being hostile to technology. Yet data protection and privacy law are intended to guarantee the rights to self-determination and privacy in a world increasingly dominated by information technology. That is why data protection is recognised as a fundamental right not only in Europe.

The fact that data protection sometimes stands in tension with other fundamental rights, such as the freedom of information and expression, must not lead to its being increasingly undermined and suppressed, as we are currently seeing in various places.

The most serious threat to data protection and other fundamental and human rights continues to be the abuse of state power. Authoritarian states such as China and Iran use state-of-the-art information technology to monitor and control people comprehensively. Concentration and re-education camps are made no better by the fact that artificial intelligence decides who is imprisoned and detained. And surveillance interferes with our freedom even when it is no longer carried out solely by human guards but by digital systems. In recent decades, however, democratic constitutional states have also massively restricted fundamental rights and freedoms, often under the banner of supposed improvements in citizens’ security.

Some globally active economic players have also attained positions of power through their digital business models that exceed the capabilities of individual states. They do not operate prisons and have no executive power of their own. Their influence is predominantly indirect, but nevertheless enormous: they control the worldwide flow of information to a large extent, and they decide which information can be found and which expressions of opinion are permitted. They evaluate citizens and companies by assigning them “scores” and thus largely determine their opportunities. They are in a position to influence the formation of political opinion through non-transparent procedures, up to and including the manipulation of democratic elections and referendums.

In view of these threats, it is essential to defend human rights in the digital age. Governments and parliaments must not shirk this task. They must further develop the legal framework for the design and use of digital systems so that human dignity is guaranteed even in the face of innovation. The great challenge is that the democratic constitutional states – above all Europe – must prove that digitisation can succeed even when fundamental and human rights are respected.
