EU and AI: Between innovation and regulation

Sponsored content

Since the launch of ChatGPT in November 2022, artificial intelligence has moved out of the labs and into every sector of our economy. That's the focus of the latest episode of the podcast Evergreens by Spuerkeess, now also available as an article.

Bryan Ferrari and his three guests dive into generative artificial intelligence, exploring a central question: How can we balance technological innovation, strategic autonomy, and system security in a world increasingly driven by AI?

Nicolas Griedlich (Deloitte Luxembourg), Francesco Ferrero (LIST), and Rachid M'haouach (Spuerkeess) share their insights and the concrete challenges businesses face when adopting AI. It's a rich exchange that helps shed light on the stakes of building AI that is sovereign, responsible, and high-performing.

Bryan Ferrari: We last talked about this in May 2023, and a lot has happened since then. To start, who wants to tell us what has changed in the field of artificial intelligence since the arrival of ChatGPT in November 2022?

Nicolas Griedlich: Since May 2023, so much has happened. For some people, things are moving so fast that it's hard to keep up. I always like to look at a development in its entirety. Let's remember that generative AI really began back in 2016, as research. Then in 2022, the results of that research were made publicly available. From there, tech players quickly seized this new technology and integrated it into their own tools. We saw every major player in the market gradually roll out their own LLMs—large language models—for both the public and businesses. Companies began observing this technology and asking themselves how they could use it. That raised a lot of questions around legal aspects, compliance, risks, and how to adopt this technology responsibly. Internal communication was also key—explaining to employees how the company planned to use it, since the very first use cases came from private individuals. We've seen developments that go well beyond just generating content—text, images, or videos. Generative AI has spread into so many other fields: drones, agriculture, research… Its scope goes far beyond content creation. It's impacting every sector of society—and of course, finance is no exception.

[Photo: Nicolas Griedlich]

Francesco Ferrero: In my view, generative AI has popularized artificial intelligence. Technicians like me have been using AI for a long time, but that wasn't the case for the general public. ChatGPT and its counterparts introduced a super-simple user interface—and that changed everything. But there's a catch: while the interface may be simple, the technology behind it is incredibly complex. It brings in entirely new aspects; it's a non-deterministic technology. That means you can ask the exact same question twice and get two different answers. That's a big shift. We're used to precision, but here there's a creative dimension. And this creativity is so strong that sometimes the models hallucinate—they invent things. That has already caused problems, for instance with companies using chatbots as customer-facing tools. Some very famous cases exist, like someone managing to "buy" a car for one dollar by convincing the chatbot. These models are trained on data created by the public, so they inevitably reflect societal trends. For all these reasons, we need to think about how to prepare society—but also professional users, who not only have to comply with rules, but also deliver quality services and ensure information reliability. We need to work together to prepare this second phase, where the technology is used more effectively.
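
To make the non-determinism Ferrero describes concrete, here is a minimal Python sketch (not from the interview): instead of always returning the single most likely word, a generative model samples the next token from a probability distribution, so the exact same prompt can produce different answers. The vocabulary and probabilities below are invented toy values, not taken from any real model.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is ..."
# (purely illustrative values, not from any real model)
next_token_probs = {
    "Paris": 0.55,
    "the": 0.25,
    "located": 0.15,
    "Lyon": 0.05,   # a low-probability wrong continuation: a "hallucination"
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Asking the exact same question twice can yield two different answers:
print(sample_next_token(next_token_probs, temperature=0.9))
print(sample_next_token(next_token_probs, temperature=0.9))
```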

Rachid M'haouach: AI is more than just a buzzword. It's a reality. Its use has shown real positive impacts, but the technology also comes with risks and threats. That's why the AI Act came into force in 2024—to set boundaries and ensure the safe use of this technology.

Ferrari: Once again, while others are inventing, we're regulating. We were the first to put an AI Act in place, weren't we?

Griedlich: Not exactly. The Chinese actually published a similar paper, with the same principles of control and regulation of AI, six months before us.

Ferrari: AI has been democratized—but its scale is much greater. Political interests are at play once again. The Americans are leading the charge. The Chinese, who we thought were lagging, launched the highly powerful DeepSeek in January. What does all this mean for the future?

Ferrero: The Americans are leading, the Chinese are close behind, and Europe doesn't yet have technical capacity. To me, this is a crucial point: only a handful of private companies are developing models that aren't open source—especially the big American players—and we're all users of them. In reality, we're relying on black boxes, which is a risk, because we're using a technology we don't control and whose inner workings are unknown—even to its inventors. With the geopolitical tensions we've seen, there's a real risk of facing restrictions one day. So, for a European bank or company that depends on this technology for its business, the question is: is this a reliable technology that I can count on with the continuity I need? That said, Europe is finally beginning to react. We're seeing the initiative to launch AI factories. It's a major project at both the European and national government levels. Thirteen AI factories have already been launched, including one in Luxembourg. That's a very significant step.

Ferrari: So, what exactly is an AI factory?

Ferrero: An AI factory is a service center designed to support businesses. A new one is being set up in Luxembourg—Meluxina AI. It will enable work on complex models, fine-tuning, and even training larger models. It's going to open up new possibilities for Luxembourg and for those across Europe who make use of it.

Griedlich: I think Europe does have some real strengths. The concept of AI factories was introduced only a few months ago, but the ecosystem is already here. We have data centers, we have supercomputers. Everyone has heard about the shortage of NVIDIA chips—well, we already have them in our data centers, so we're not affected. There's a political will that's clearly taking shape. In terms of skills, we have the talent and the research centers. Historically, innovation often comes from across the Atlantic and then we adopt it. But this time, with what's now being set in motion, I believe we have the chance to be front-runners, because the expertise is here. LIST alone has about a hundred data scientists, and AI research has been going on for decades—over 70 years. We have all the foundations we need to move forward. And it's not by chance that the AI Act is the only truly new development in Europe—and even that regulation isn't a barrier to innovation. It simply sets out guiding principles to make sure we're building this in a safe and well-defined way.

M'haouach: Yes, we're seeing that Europe is really intent on advancing research in this area. Huge investments have been made in recent months to catch up, whether in expertise, models, or infrastructure. The partnership between the Luxembourg government and Mistral, along with what's happening at Meluxina, will give an enormous boost. It will provide us with the flexibility to scale AI usage safely, because the data flow remains local.

Ferrari: Let's talk about data centers. The Americans control three-quarters of the world's computing power. Colossus, the largest data center in the world, was built by Elon Musk. Stargate, the project Donald Trump announced earlier this year, will be on the same scale… Over there, things are moving three times faster. We're managing complexity, as you said, but we're years behind—especially if we want to be sovereign when it comes to this technology.

Griedlich: I don't think the solution is as simple as saying we're just behind and we need to catch up. We already have a lot of the fundamentals in place, as I mentioned earlier, and the key is to make the most of them. It's about usage management. Building a new data center just for this would imply we've already maxed out the capacity of the ones we have in Europe—and that's not the case. Now, on the issue of sovereignty… When a state makes a decision, it does so for a reason. In certain use cases, it might decide to rely temporarily on computing power outside Europe. That, too, can be a sovereign choice. The problem is that if we don't have the capacity to do it ourselves, then we're no longer truly sovereign, because we have no other option but to depend on others. So, the goal isn't necessarily to match their capacity. We shouldn't get drawn into that kind of race. What matters is knowing exactly how much computing power we need to support the evolution of the next generations of LLMs.

[Photo: Francesco Ferrero]

Ferrero: For me, the real issue is strategic independence. I don't really like the word sovereignty. What you just said is absolutely true. But another problem is that the few data centers we've built in Europe are running on American chips. Soon there will be Chinese alternatives… But the real issue is when data centers themselves become critical infrastructure. And that's already happening, because AI is gradually becoming essential for almost everything—for defense, for example. So we have to ask: is AI an enabler of certain defense operations? At some point, we need to consider the possibility that the U.S. or China could decide to stop supplying us with the chips we depend on. And I'm not saying this randomly—Luxembourg is already affected. A decision by the Biden administration has placed Luxembourg on a list of countries without unlimited access to American chips. In my view, this makes it imperative to launch initiatives. Interesting companies like OpenChip are working on European alternatives. But this will require investments, time to mature the technology, and above all, a real push to secure independence.

Ferrari: Another issue is the availability of data. I like the Apple analogy here. Models feed on data. Google, Meta, and Amazon tap into it freely. Apple says, "No, we won't use our data to feed the models"—and as a result, they're nowhere in this race. Doesn't Europe, with the GDPR and its data privacy rules, face a disadvantage compared to the U.S.?

Ferrero: Large models like GPT-4 are trained on such massive volumes of data that no one can actually know what data was used. That's a problem, because the AI Act requires transparency on the training data, when in reality it's technically impossible to track. Another issue is that these models are getting too big. But do we really need that? That's a philosophical question. Today's AI paradigm is all about building enormous models trained on massive datasets to try to solve everything. Personally, I don't think that's the right approach. I believe we should be thinking about smaller, more specialized models—and then combining those specialized models together. That would also address a looming problem: the energy consumption of AI. Because if we're at the point where we need nuclear reactors to power data centers—and companies like Microsoft and Google have already admitted they won't meet their 2025 climate targets because of AI—then we clearly have a problem. At some point, there won't be enough energy, or even enough water, since cooling data centers also requires water. And that will force us into difficult trade-offs. That's why I believe we should focus on what's called frugal AI—minimizing the amount of data and resources needed to solve problems.
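
As a purely hypothetical illustration of the "smaller, specialized models combined together" idea Ferrero raises, a simple router could send each request to a narrow model instead of one giant one. The model names and keyword rules below are invented for the example, not part of any real system mentioned in the interview.

```python
def legal_model(text):
    return f"[legal model] {text}"

def finance_model(text):
    return f"[finance model] {text}"

def general_model(text):
    return f"[small general model] {text}"

# Keyword routing rules: purely illustrative
SPECIALISTS = {
    ("contract", "clause", "gdpr"): legal_model,
    ("loan", "interest", "portfolio"): finance_model,
}

def route(request):
    lowered = request.lower()
    for keywords, model in SPECIALISTS.items():
        if any(word in lowered for word in keywords):
            return model(request)
    return general_model(request)  # frugal default: the smallest model that can cope

print(route("Summarise the GDPR clause in this contract"))
print(route("What is the interest rate on this loan?"))
```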

"AI is more than just a buzzword. It's a reality."

Rachid M'haouach, Chief Data Officer at Spuerkeess

M'haouach: The question of private data is a very real one. Do we really need massive models to solve our problems? On the ground today, we see that in most cases, a small or medium-sized model is enough. Beyond the volume of data, what matters most is its quality. If a model is trained on poor-quality data, even if we train it locally, it won't mean much. The output may be largely wrong—or completely wrong. So, the key is the relevance of the data to the specific use case. Instead of using all available data for all use cases, we need to analyze the use case and the problem we're trying to solve, and then determine which data is the most relevant and of the highest quality. That's a crucial point.

Griedlich: I think that, beyond that, the goal should be to have the technology first and then make it frugal. Of course, I agree—it has to be refined to minimize what's needed. But if cars had been required to start out that way, we'd all still be riding buses. Another point we haven't discussed is the skill sets required to deploy these technologies in, say, a bank. Not everyone is a researcher. Not everyone has a PhD. And if we always need a PhD to make this work, we're in trouble. The majority of people don't have one—that's just reality. And we're going to need resources that simply won't be available on the market, especially in Luxembourg. That's inevitable. We'll need to find a way to develop these skills and create educational or deployment models that allow people to use these tools safely—without necessarily understanding every single detail of what's happening inside. There's also the issue of familiarizing people with these tools. Because once again, this is a consumer-facing technology—people already think they understand AI from their daily use. But that's not necessarily the kind of use we want to see in companies. The reality is, we need to pass on knowledge to the next generations so that it doesn't get lost. Take a concrete example: if tomorrow no one does KYC checks anymore because machines do them automatically, people still need to understand what is happening and why it's done. That's going to be a major challenge.

M'haouach: What you just said is crucial. Cultural adoption applies to companies at every level. The AI Act also pushes in that direction—making sure users are aware of what they're working with. That includes the board, whose role is to endorse and fund a strategy. At Spuerkeess, to help employees adopt AI, we proposed that each department appoint an AI Champion or Data Citizen. These people took a few days of training, including hands-on exercises with generative AI. Once they understood what AI is, we asked them how it could support their daily work—and within just a few weeks, we received around fifty use cases. This shows that cultural adoption and training truly drive uptake.

Ferrero: At LIST, to tackle this issue, we're building a fully open-source platform called Besser. It's designed to automatically develop software that integrates AI, without requiring users to be experts. For now, it's mainly a graphical interface, but the vision is to enable people to interact with a chatbot—describe what they need—and get the software tailored to that need.

Ferrari: So even if tools like this make things easier, we'll still need experts to oversee the technology. That's a danger, because people will tend to trust it blindly. What other problems do you see?

Ferrero: Personally, I'm especially focused on the issue of bias. We launched a project called AI Sandbox that measures bias in AI models—whether commercial or open source—across multiple languages. It's a huge topic, because if a model is racist or homophobic, those biases will show up in its responses, which is illegal under the AI Act. Cybersecurity is another big concern, particularly because of a trend among developers called vibe coding—building software and code with the help of AI models. The problem is, this often generates fragile code that isn't secure against cyberattacks. And if that kind of code is integrated into solutions, companies expose themselves to serious risks.
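
A hedged illustration of the kind of fragility "vibe coding" can produce: AI-assisted snippets often assemble SQL queries by pasting user input into a string, which is open to injection, whereas a parameterized query closes the hole. The table, column, and function names below are hypothetical and chosen only for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(username):
    # Pattern often produced by quick AI-generated code: user input pasted
    # straight into the SQL string, leaving the query open to injection.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # Parameterized query: the driver escapes the input, closing the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

print(find_user_safe("alice"))          # [('alice',)]
print(find_user_safe("' OR '1'='1"))    # [] - treated as a literal name
print(find_user_unsafe("' OR '1'='1"))  # returns every row: the injection works
```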

Griedlich: One of the dangers is deploying generative AI just for the sake of deploying generative AI.

Ferrari: That's what we'd call AI washing?

Griedlich: Exactly. At the end of the day, what matters most for a company is evaluating the real value it brings, because it's expensive. You need to buy infrastructure, you need to build it out. And progress isn't always fast. Most AI projects end up going over budget compared to their initial estimates. Why? Because an entire ecosystem is affected, and sometimes you only realize later what you didn't anticipate. Of course, it makes sense—it's about preparing for the future—but the first projects usually cost more than expected. That's why it's essential to properly evaluate the business case. And here's the key point: if adoption only happens across five isolated use cases, you won't have made the transformation necessary to truly embrace this technology. The ultimate goal isn't just to adopt generative AI—it's to make your organization more efficient, to do more with the same number of people. That's a complex process to implement. One of the real risks is not thinking carefully about the level of ambition you want to set with this technology. And that holds true for every new technology.

[Photo: Rachid M'haouach]

Ferrari: To wrap up, let's talk about opportunities and emerging trends. Some people mention automation, robots, self-driving cars… In your view, what's realistically achievable in the not-so-distant future?

M'haouach: We started with classical AI, then moved to generative AI, and now we're heading toward agentic AI—agents capable of carrying out a range of tasks completely autonomously. Personally, that's what I see coming next.

Ferrari: So basically Siri on steroids. Siri that can book your haircut appointment or let your boss know you're sick. Is that what you mean?

M'haouach: Exactly.

Griedlich: That's the direction we're heading in, yes—and it's going to affect every sector. Fundamentally, research and development doesn't look at just one technology; it's about continuously improving processes. Naturally, everyone will try to see how this new capability can enhance their own processes. That said, there's an important point to keep in mind. By definition, an agent executes a task. As you said, it's the one booking your appointment, planning your vacation, or summarizing your emails. Agentic AI means agents working together toward a shared goal. But all the limitations we've discussed—causality, bias—only get amplified when you put agents together. You may have seen examples on YouTube where agents talk to each other and debate. They never reach a conclusion, because they always have yet another opinion to offer. And that highlights a critical issue that isn't technological at all: it's about governance. How will you manage that? How do you make sure interference doesn't alter the behavior of another agent—especially once they start making decisions that really matter to you?
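
To make the governance point tangible, here is a deliberately simplified, hypothetical sketch: two placeholder "agents" pass messages toward a shared goal, and without an explicit stopping rule they could go back and forth indefinitely. A turn limit and a final human review stand in for the kind of governance Griedlich describes; in a real system each function would be a model call.

```python
MAX_TURNS = 6  # governance rule 1: bounded debate, no endless back-and-forth

def planner(message):
    # Placeholder agent: in reality this would be an LLM call
    return f"refined proposal based on: {message!r}"

def critic(message):
    # Placeholder agent: raises an objection to whatever it receives
    return f"objection to: {message!r}"

def run_agents(goal):
    message = goal
    for turn in range(MAX_TURNS):
        message = planner(message) if turn % 2 == 0 else critic(message)
        print(f"turn {turn}: {message}")
    # governance rule 2: the final proposal is handed back for human review,
    # never executed automatically
    return message

final_proposal = run_agents("book a meeting room for Friday")
print("for human approval:", final_proposal)
```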

M'haouach: Fortunately, the AI Act doesn't allow machines to make decisions entirely on their own, so there will always be collaboration between human and machine.