“Safeguarding Human Rights In The Era Of Artificial Intelligence” – (Council Of Europe)

Today I will explore yet another (hopefully) different angle on a topic that has grown very fascinating to me: how can human rights be safeguarded in the age of sentient machines? It is an interesting question, since I think it may also link back to technologies that pre-date AI.

Let’s begin.

https://www.coe.int/en/web/genderequality/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence

The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large.

Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal. The benefits of grounding decisions on mathematical calculations can be enormous in many sectors of life, but relying too heavily on AI which inherently involves determining patterns beyond these calculations can also turn against users, perpetrate injustices and restrict people’s rights.

The way I see it, AI in fact touches on many aspects of my mandate, as its use can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is no transparency, accountability or safeguards in how they are designed, how they work and how they may change over time.

One thing I would add to the author’s final statement is the lack of safeguards around what kind of data these various forms of AI are drawing their conclusions from. While not the only factor that could contribute to seemingly flawed results, I would argue that bad data inputs are one of the most important factors (if not THE most important).

I base this on observing the many high-profile cases of AI gone (seemingly) haywire. Whether or not it is emphasized in the media coverage, biased data inputs are almost always mentioned as a factor.

If newly minted AI software is the mental equivalent of a child, then this data is the equivalent of religion, racism, sexism or other indoctrinated biases. Thus my rule of thumb is this: if the data could indoctrinate a child, then it is unacceptable for a learning-stage algorithm.
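To make that rule of thumb concrete, here is a minimal sketch (entirely hypothetical data, with NumPy and scikit-learn assumed) of how a model trained on biased historical decisions simply learns to reproduce them. This is my own illustration, not anything from the article.

```python
# Minimal sketch: a classifier trained on biased historical hiring decisions
# reproduces the bias. Hypothetical data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)   # protected attribute (0 or 1)
score = rng.normal(0, 1, n)      # legitimate qualification score

# Biased historical labels: past decisions favoured gender == 1
# regardless of the qualification score.
hired = ((score + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, score]), hired)

# Two applicants with identical scores, differing only in the protected attribute:
probs = model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1]
print(f"P(hired | gender=0, score=0.5) = {probs[0]:.2f}")
print(f"P(hired | gender=1, score=0.5) = {probs[1]:.2f}")
# The gap between the two probabilities is bias the model inherited from its
# training data, not anything the algorithm "decided" on its own.
```

The fix, per the rule of thumb above, is to vet (or rebalance) the data before it ever reaches the learning stage.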

Encroaching on the right to privacy and the right to equality

The tension between advantages of AI technology and risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected – with or without our knowledge – and can be used to profile us, and produce predictions of our behaviours. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and how.

Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious) the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.

Excellent. This angle was not overlooked.

Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than female. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia and NGOs too, who recently adopted the Toronto Declaration, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.

Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human rights. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. In the justice system too, AI can be a driver for improvement or an evil force. From policing to the prediction of crimes and recidivism, criminal justice systems around the world are increasingly looking into the opportunities that AI provides to prevent crime. At the same time, many experts are raising concerns about the objectivity of such models. To address this issue, the European Commission for the efficiency of justice (CEPEJ) of the Council of Europe has put together a team of multidisciplinary experts who will “lead the drafting of guidelines for the ethical use of algorithms within justice systems, including predictive justice”.

Though this issue tends to be framed as a black-box problem (you can’t see what is going on inside the algorithms), I think it reflects more on the problem of proprietary systems running independently, as they please.

It reminds me of the situation with corporations, large-scale data miners and online security. The EU sets the standard in this area by levying huge fines for data breaches, particularly those that cause consumer harm (North America lags behind in this regard).
I think that a statute similar to the GDPR could handle this issue nicely on a global scale. Just as California was (and is) the leader in many forms of safety regulation due to its market size, the EU has now stepped into that role in terms of digital privacy. It can do the same for regulating biased AI (at least for the largest of entities).

It won’t stop your local police department or courthouse (or even your government!) from running flawed systems. For that, mandated transparency becomes a necessity of operation. Governing bodies (and international overseers) have to police the judicial systems of the world and take immediate action if necessary, for example by cutting AI funding to a police organization that either refuses to follow the transparency requirements or refuses to fix diagnosed issues in its AI system.
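As a rough sense of what mandated transparency could look like in practice, here is a minimal sketch of the kind of disparate-impact check an overseer might require an AI-running organization to publish. The field names, the numbers and the 80% threshold (a common regulatory rule of thumb, sometimes called the “four-fifths rule”) are my own assumptions for illustration, not any agency’s actual methodology.

```python
# Minimal sketch of a disparate-impact audit an oversight body might mandate.
# Hypothetical decisions and group labels, for illustration only.
from typing import Sequence

def disparate_impact_ratio(decisions: Sequence[int], groups: Sequence[str],
                           protected: str, reference: str) -> float:
    """Ratio of favourable-outcome rates: protected group vs reference group."""
    def rate(group: str) -> float:
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else 0.0

# Example: 1 = favourable decision (e.g. released on bail), 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
groups    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups, protected="a", reference="b")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Flag for review: the protected group's favourable-outcome rate "
          "is below 80% of the reference group's.")
```

An organization that refused to publish numbers like these, or refused to fix a system that consistently failed the check, is exactly the kind of case where funding could be cut.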

Stifling freedom of expression and freedom of assembly

Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”. Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around the content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. Similar concerns have been raised with regard to automatic filtering of user-generated content, at the point of upload, supposedly infringing intellectual property rights, which came to the forefront with the proposed Directive on Copyright of the EU. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and of privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms defining the display of content.

The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials for finding suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to permanently watch you and restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.

1.) I don’t like the idea of private entities running black-box proprietary algorithms with the aim of combating things like copyright infringement or extremism either. It’s hard to quantify, really, because in a way we sold away our right to complain when we decided to use the service. The largest online platforms may be today’s de facto public square and have become pillars of communication for millions, but that isn’t the platforms’ problem. This is what happens when governments keep their hands off emerging technologies.

My solution to this problem revolves around building an alternative. I knew this would not be easy or cheap, but it seemed that the only way to ensure truly free speech online was to ditch the primarily ad-supported infrastructure of the modern internet. This era of Patreon and crowdfunding has helped in this regard, but not without consequences of its own. In a nutshell, when you remove the need for everyday people to fact-check (or otherwise verify) new information that they may not quite understand, you end up with the intellectual dark web:
a bunch of debunked or unimportant academics, a pseudo-science-peddling ex-psychiatrist made famous by an infamous legal battle with no one (well, except for those he sued for exercising their free-speech rights), and a couple of dopey podcast hosts.

Either way, while I STILL advocate for one (or many) alternatives in the online ecosystem, it seems to me that, at least in the short term, regulation may need to come to the aid of the free speech and expression rights of everyday people. Yet it is a delicate balance, since we’re dealing with what are effectively sovereign entities in themselves.

The answers may seem obvious at a glance. For example, companies should NOT have been allowed to up and boot Alex Jones off their collective platforms just for the sake of public image (particularly after cashing in on the phenomenon for YEARS). Yet in allowing black-and-white actions such as that, I can’t help but wonder whether it could ever come back to bite us; for example, someone caught using copyrighted content improperly having their entire YouTube library deleted forever.

2.) I don’t think there is a whole lot one can do to avoid being tracked in the digital world, short of moving far from the cities (if not off the grid entirely). At this point, it has just become part of the background noise of life. Carrying around a GPS-enabled smartphone and using plastic cards is convenient, and it’s almost impossible not to generate some form of metadata in one’s day-to-day life. So I don’t really worry about it, beyond attempting to ensure that my search-engine-accessible breadcrumbs are as few as possible.

It’s all you really can do.

What can governments and the private sector do?

AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.

To get it right, we need stronger co-operation between state actors – governments, parliaments, the judiciary, law enforcement agencies – private companies, academia, NGOs, international organisations and also the public at large. The task is daunting, but not impossible.

A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies.

States should also make sure that the private sector, which bears the responsibility for AI design, programming and implementation, upholds human rights standards. The Council of Europe Recommendations on human rights and business and on the roles and responsibilities of internet intermediaries, the UN guiding principles on business and human rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways.

Nothing for me to add here. Looks like the EU (as usual) is well ahead of the curve in this area.

A third field of action should be to increase people’s “AI literacy”.

Indeed.

In an age where such revered individuals as Elon Musk are saying profoundly stupid things about AI, AI literacy is an absolute necessity.

States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.

1.) I don’t think one has to worry so much about the younger generations as about the existing ones. Young people have grown up in the internet age, so all of this will come naturally to them. Guidance on the proper use of this technology is all that should be necessary.

Older people are a harder sell. If resources were to be put anywhere, I think it should be into programs which attempt to make aging generations more comfortable with increasingly modern technology. If someone is afraid to operate a smartphone or a self-checkout, where do you even begin with explaining Alexa, Siri or Cortana?

2.) Organizations do need to be held accountable for their misbehaving AI software, particularly if it causes a life-altering problem, up to and including the affected person’s right to take legal action, if necessary.

 It is encouraging to see that the private sector is ready to cooperate with the Council of Europe on these issues. As Commissioner for Human Rights, I intend to focus on AI during my mandate, to bring the core issues to the forefront and help member states to tackle them while respecting human rights. Recently, during my visit to Estonia, I had a promising discussion on issues related to artificial intelligence and human rights with the Prime Minister.

Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. It therefore requires strict regulations to avoid morphing into a modern Frankenstein’s monster.

Dunja Mijatović, Commissioner for Human Rights

I don’t particularly like the dark tone of this part of the piece. But I like that someone of influence is starting to ask questions and getting the ball rolling.

It will be interesting to see where this all leads in the coming months, years and decades.
