Artificial intelligence under control

By Audrey Somnard, Misch Pautsch, Lex Kleren


Are machines neutral? Artificial intelligence feeds on... human data, with all the biases our society carries. Some explanations.

Algorithms have their own way of working. As we automate more and more processes and rely ever more heavily on artificial intelligence, we almost forget to make sure that the machines do not reproduce the prejudices and discriminations that pervade our societies. If a racialised person is rejected from a recruitment process by a racist recruiter, can the machine do better? In theory, yes, but that depends on the parameters and data it has been given beforehand. These are social issues that concern professionals in the sector as well as activists campaigning for more oversight. Oyidiya Oji is one of them. She was recently in Luxembourg to give a talk organised by Lëtz Rise Up entitled "Discrimination based on artificial intelligence: what should we do about it?"

Oji does not come from the tech world, but she quickly became interested in the topic: "In January 2020, I left my job and, at that time, I was listening to a podcast about technology that said we need more people of colour in this sector. I thought maybe I should do something different, because I have a business background." So she took a data science bootcamp to learn the basics of programming, and to find out what was going on in the sector. "I started reading that in the US, for example, driverless cars were more likely to crash into women, especially women of colour or people with certain disabilities, because the car can't see darker skin. Because the engineers are often men, they think: sure, it works. But it works for them. For some people. I also saw a video of a man in a hotel who was just trying to get soap from an automatic dispenser. It didn't work. So he took a piece of white paper, and then it worked."

Oyidiya Oji

Based on this premise, the young woman learned more and more about the phenomenon of minorities 'forgotten' by algorithms. As a black woman, Oyidiya Oji fell into a spiral of information, and the stories came one after another: "It's not just about racism. It has to do with racism, but it also starts with women, who are half the population. We don't produce technology ourselves, which is always a problem. For example, I remember when smartwatches first came out: there were all sorts of applications you could imagine, but none for tracking menstrual cycles. Nobody had thought of that."

So the young activist decided to take the fight to a professional level: "Today, I work with people whom, at the time, I was only listening to at conferences." She joined the European Network Against Racism (ENAR) last September as Policy and Advocacy Advisor for Digital Rights. What drew her to this topic in particular was the industry's assumption that machines are necessarily neutral. "I started because this sector promotes the idea that AI is neutral. It doesn't see colours. It doesn't see colours, but it can see where people live. And from where you live, you can already know more or less where people come from, what their social status is or what their average salary is. There is a lot of information that goes beyond colour at the end of the day. In Luxembourg, your country of birth, or your parents' country of birth, is in the database. So there are many systems like that that can trace your origins." Maryam Khabirpour, audit partner at Deloitte, agrees: "One might think that anything produced by a tool, a machine or a model is inherently neutral. But it is not neutral, because it is fed by human beings, and human beings are definitely not neutral. So we transfer our own prejudices to machines, much as education is passed on to children."

"I started because this industry promotes the idea that AI is neutral. It doesn't see colours. It doesn't see colours, but it can see where people live."

Oyidiya Oji, Policy Advisor for the European Network Against Racism
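To make the proxy problem Oji describes concrete, here is a minimal Python sketch on invented data: the protected attribute is never an input to the scorer, but a postcode that correlates with it lets a naive model reproduce the historical bias anyway. Every name and number below is a hypothetical chosen for illustration, not drawn from any real system.

```python
# A minimal sketch of how a "colour-blind" model can still discriminate
# through proxy variables. All names and numbers are invented.
import random

random.seed(0)

# Hypothetical historical decisions: the protected attribute is never
# stored, but postcode correlates strongly with it (e.g. segregated
# neighbourhoods), and past human decisions were biased.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])                   # protected attribute
    postcode = "L-1000" if group == "A" else "L-2000"   # near-perfect proxy
    # Biased past decisions: group B was approved far less often.
    approved = random.random() < (0.80 if group == "A" else 0.30)
    history.append((postcode, approved))

# "Train" a naive scorer that never sees the protected attribute:
# the approval rate per postcode.
rates = {}
for postcode, approved in history:
    n, k = rates.get(postcode, (0, 0))
    rates[postcode] = (n + 1, k + approved)

for postcode, (n, k) in sorted(rates.items()):
    print(f"{postcode}: learned approval score {k / n:.2f}")
# The score gap between postcodes reproduces the original group bias,
# even though 'group' was never an input feature.
```

The point of the sketch is not the particular scorer but the mechanism: as long as past decisions were biased and a correlated feature survives in the data, removing the protected attribute does not remove the discrimination.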

This is sensitive information in which sector employees often see no problem or potential source of discrimination: "Of course, the people who shape this data and test it are in many cases not aware of it, because all the colleagues and people who work on this subject have very specific training. And, particularly in the technology industry, there is often a bonus for employees who recommend someone. What kind of people do you recommend? People like you," continues Oyidiya Oji.

Factual data feeds into the algorithms, but it needs to be put into context. This is what Emilia Tantar, Chief Data and Artificial Intelligence Officer at Blackswan Luxembourg and head of the Luxembourg delegation for AI standardisation, explains: "Take your credit score, for example: it is usually subjectively assigned by a financial institution. This example also relates to Luxembourg because, if we take the past data we have, if I remember correctly women in the 1970s were not allowed to take out loans. So if we rely on that data, women appear less likely to repay. That, of course, is a type of bias in the financial field."
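Tantar's loan example can be played through in a few lines. The sketch below, on invented figures, shows how a group historically excluded from borrowing leaves almost no repayment records behind, so that a naively smoothed score quietly marks that group as riskier; the smoothing scheme and all numbers are assumptions made for illustration.

```python
# A minimal sketch of how historical exclusion becomes "evidence" in
# training data. All figures are invented.

# Hypothetical loan records from an era when women were largely
# refused loans outright: few records exist for them at all.
records = [("man", True)] * 900 + [("man", False)] * 100 \
        + [("woman", True)] * 5 + [("woman", False)] * 5

def repayment_score(records, gender, prior=0.5, strength=100):
    """Naive score: observed repayment rate, pulled toward a cautious
    prior when data is scarce -- a common smoothing choice that here
    quietly penalises the group that was historically excluded."""
    rows = [repaid for g, repaid in records if g == gender]
    k, n = sum(rows), len(rows)
    return (k + prior * strength) / (n + strength)

print("score(man)  =", round(repayment_score(records, "man"), 2))    # ~0.86
print("score(woman)=", round(repayment_score(records, "woman"), 2))  # ~0.50
# Men's plentiful records dominate; women's scarce records (a product
# of past exclusion, not behaviour) leave them stuck at the cautious
# prior -- the bias Tantar describes, laundered as statistics.
```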

What matters to Oyidiya Oji is that technical progress should not come at the expense of the women and minorities who are often employed to clean up the web, especially through content moderation: "We talk about innovation, but what kind of innovation, and for whom? For the people who have always innovated, yes, of course. They go very far; they will even go to the Moon and Mars. But what about the people just behind them, on very low salaries, who are trying to make sure that the information reaching certain people is not harmful? When the information on your newsfeed from any social network is not harmful, it's because someone else is consuming that harmful information so that you can avoid it. And those people are always located in the Global South."

"That's one of the biggest challenges you face with AI: you have to constantly be on your toes."

Bettina Werner, audit partner at Deloitte

But there is a bigger problem, pointed out by Bettina Werner, also an audit partner at Deloitte: keeping AI under control is a constant struggle. "One of the biggest risks is that it is not enough to fix a problem once and for all. In the audit business, you can't just do one audit and say everything is fine. Even if today the system is fine and neutral, by the end of next month it can be skewed again, and as society evolves it can be skewed in different ways. So one time you will have got it right, but the next time it may not be right. I think that's one of the biggest challenges you face with AI: you have to constantly be on your toes."
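One common way to operationalise the recurring check Werner describes is to compare a model's current output distribution against the distribution signed off at audit time. The Python sketch below uses the Population Stability Index, a standard monitoring metric in credit scoring; the bins, thresholds and scores are illustrative assumptions, not regulatory values.

```python
# A minimal sketch of a recurring drift check: compare current model
# scores against an audited baseline, so skew is caught after sign-off.
import math

def psi(baseline, current, bins=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Population Stability Index between two sets of model scores.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 act."""
    def shares(scores):
        counts = [0] * (len(bins) - 1)
        for s in scores:
            for i in range(len(bins) - 1):
                if bins[i] <= s <= bins[i + 1]:
                    counts[i] += 1
                    break
        total = len(scores)
        return [max(c / total, 1e-6) for c in counts]  # avoid log(0)

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Scores approved at audit time vs. scores a month later:
baseline = [0.1, 0.3, 0.5, 0.5, 0.7, 0.7, 0.9] * 100
current  = [0.1, 0.1, 0.2, 0.3, 0.3, 0.5, 0.7] * 100  # drifted low
drift = psi(baseline, current)
print(f"PSI = {drift:.2f} -> {'re-audit needed' if drift > 0.25 else 'stable'}")
```

A check like this does not say *why* the system skewed, only that yesterday's audit no longer vouches for today's behaviour, which is exactly Werner's point.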

Need for regulation

This is why, according to Bettina Werner, governance is so important. "Someone has to be in charge of the application in the company, someone who is responsible. You can't let the application run by itself. Someone has to be there, someone has to have some control over the system, and someone has to make sure that it is still in line with the company's objectives," she says. An opinion shared by specialist Emilia Tantar: "Artificial intelligence as it stands is a set of approximation techniques; it provides approximate solutions that may be far from what will work best in real life. So we need regulation. Fortunately, at the European level we have the AI Act, which was proposed in 2021 and should come into force in 2024; I think the discussions are currently underway in the European Parliament. It's a legislative framework, but how do we apply it in practice? The end user also needs to be reassured that these systems are safe and have been tested, as is the case with car seats for children."

Emilia Tantar

For the two auditors, European regulation is a step in the right direction, especially in giving companies a framework. "The EU AI Act was adopted by the European Parliament on 11 May 2023, with the final text expected by May 2024. It provides for significant sanctions for companies. And importantly, it applies not just to EU companies, but also to companies that want to sell or develop AI within the EU. One of the things that needs to be done is to make visible where AI is actually being used, because otherwise we are not even aware of it," Bettina Werner continues. "Take the example of the credit score. I might get a letter saying that unfortunately I can't get a loan, without necessarily knowing that in the background it was not a person who assessed it, but an algorithm. Applications impacting health, safety or other fundamental rights are classified under the EU AI Act as “high risk” and come with an enhanced level of supervision."

The European draft of the AI Act does not put all forms of artificial intelligence in the same basket: suggesting a film or an item of clothing will not be treated as seriously as scanning CVs or making health diagnoses. Emilia Tantar explains: "There are several risks. There are risks related to prejudice and exclusion, and some applications are banned outright. Those that could affect vulnerable categories are prohibited: systems consciously built to exclude certain categories on the basis of race, skin colour or other criteria. These applications are banned in Europe, whatever the outcome of a risk analysis."

The regulation of artificial intelligence

  • The AI Act is a proposed European law on artificial intelligence. It is the first AI regulation from a major regulatory authority anywhere in the world. The Act classifies AI applications into three risk categories. First, applications and systems that create unacceptable risk, such as the government-run social scoring systems used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Finally, applications that are not explicitly banned or listed as high-risk are largely unregulated.

    Like the EU's General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard, determining the extent to which AI has a positive rather than negative effect on everyday life, wherever one is. The EU AI Act is already making waves internationally. At the end of September 2021, the Brazilian Congress passed a bill creating a legal framework for artificial intelligence. This bill still needs to be passed by the Brazilian Senate.

    Following the Commission's proposal of April 2021, the Act was adopted by the European Parliament on 11 May 2023, with the final text expected by May 2024. During this period, standards will be prescribed and developed, and the governance structures put in place should become operational. The second half of 2024 is the earliest the regulation could become applicable to operators, with the standards ready and the first compliance assessments carried out.

This is rather reassuring, even if the specialist already sees the limits of such a framework: "These applications are banned, but unconscious biases are the most dangerous, and the applications authorised on the market are the high-risk ones. So we have to test, and we are currently working on the technical specification of the risk catalogue, a catalogue that can include unconscious bias. Take skin colour: there are systems that have been used in examinations, for example at universities. People in certain skin-colour categories are not easily detected and are therefore considered not to have passed the exam, or not to have been able to register for it. That is exclusion from an institution. This is education, and it can happen."
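The kind of test Tantar alludes to can be sketched simply: measure the system's detection rate per skin-tone group and flag large gaps. In the hypothetical Python example below, the group labels and counts are invented, and the four-fifths ratio used as a red line is a common fairness heuristic rather than anything prescribed by the AI Act.

```python
# A minimal subgroup audit: does a face-detection system find every
# group of candidates at comparable rates? All data is invented.

# (group, detected) pairs from a hypothetical evaluation set:
results = [("lighter", True)] * 485 + [("lighter", False)] * 15 \
        + [("darker", True)] * 310 + [("darker", False)] * 190

rates = {}
for group, detected in results:
    n, k = rates.get(group, (0, 0))
    rates[group] = (n + 1, k + detected)
rates = {g: k / n for g, (n, k) in rates.items()}

best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "FAIL" if ratio < 0.8 else "ok"   # four-fifths heuristic
    print(f"{group}: detected {rate:.0%} (ratio {ratio:.2f}) {flag}")
# A student the system cannot detect may be barred from the exam --
# the institutional exclusion described above.
```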

But she remains confident that the sector will agree on some harmonisation of standards. "In terms of the standards that exist internationally, the whole industry is coming to a consensus on international standards in artificial intelligence. As long as there is a consensus among the creators of these systems about levels of trust and so on, that is something that is reinforced, of course, by legislation. But as it becomes accepted internationally, it becomes best practice."

"Not close to human intelligence"

And even though AI has made huge strides in recent months, with astonishing results such as ChatGPT, Emilia Tantar urges caution: "We think AI can do everything, but it is not close to human intelligence. We still need grounded education, because if you try to build critical thinking while navigating education with tools that don't always provide the basic truth, that's a danger to society. On what basis do you build critical thinking?" Feeding on data gleaned from the web, AI works by occurrence: the more often a piece of information appears, the more strongly the AI absorbs and repeats it. This can ultimately distort research, whether by academics or journalists.
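That occurrence effect can be caricatured in a few lines of Python. The toy "model" below simply returns the claim it has seen most often in its training text, so a falsehood repeated forty times beats a truth stated three times; the corpus is obviously invented for illustration.

```python
# A toy illustration of learning "by occurrence": the most frequent
# claim in the training text wins, regardless of truth.
from collections import Counter

corpus = (
    ["the Moon is made of cheese"] * 40    # repeated falsehood
    + ["the Moon is made of rock"] * 3     # rarer correct statement
)

def answer(topic, corpus):
    """Return the claim about the topic seen most often in training."""
    claims = [c for c in corpus if topic in c]
    return Counter(claims).most_common(1)[0][0]

print(answer("Moon", corpus))  # -> "the Moon is made of cheese"
# A frequency-driven system absorbs whatever the web repeats most --
# the distortion risk for researchers and journalists noted above.
```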

"I think in Luxembourg we are really privileged because we have a strategy and we could see that with the adoption of AI in public services."

Emilia Tantar, Chief Data and Artificial Intelligence Officer at Blackswan Luxembourg

For activist Oyidiya Oji, regulation of the sector is a step in the right direction. "But let's also remember that public institutions are only the users. So perhaps, in addition to checking the transparency and explainability of these models once they are already deployed, say in the public service, they should also be looked at on the private-sector side. A legal framework needs to be created, along with a risk management system and an information system. But I don't think the United States will do even a fraction of what we are trying to do in Europe. For them, innovation and everything else has to be fast." She is also somewhat sceptical about how the industry giants will take the barriers that any form of regulation puts up: "Today, they rely 100% on AI. So if you say to them 'hey, you need to put the brakes on a little bit and make sure everyone is okay', that already goes against capitalism as it has worked for many years."

With technology constantly evolving, governments as well as industry have a role to play. For Emilia Tantar, the Grand Duchy is already on board: "I think in Luxembourg we are really privileged, because we have a strategy, and we could see that with the adoption of AI in public services. The national strategy supports the Digital Luxembourg initiative, which raises awareness with courses on the basics of AI. So you have a solid source of knowledge, somewhere to go to build awareness, and then you have the university and LIST, which show that technology transfer is being done responsibly. So yes, there is a lot of discussion."

Citizens' organisations are also keeping an eye on things. "With many other organisations in Brussels and elsewhere in Europe, we have joined a coalition called Protect Not Surveil. It started as a joint declaration signed by 200 organisations, and then we drew up a list of demands saying that certain types of systems are used against displaced people, and that this type of technology should be banned, or at least categorised as high-risk," says Oyidiya Oji.