Interview with Pauline Baron (EDHEC Master 2020), AI Research Project Lead at AXA
In just two years, Pauline’s career at AXA perfectly illustrates the breadth of opportunity that AI now offers. Having joined the group in 2023 as an AI Governance Officer, she moved at the end of 2024 into the role of AI Research Project Lead, at the heart of AXA’s most innovative initiatives. In this interview, she discusses her responsibilities and her role in AI research, and shares her perspective on a sector shaped by rapid technological acceleration and deep organisational change.
What are your responsibilities at AXA today?
My role is a little hard to define because it evolves a great deal. After spending three years working on AI ethics and regulation, I am now focused on research projects. For the past year, I have been working as part of a partnership between AXA and Stanford University. This partnership brings together several projects centred on the future of AI, and I am involved in two of them.
One of these projects has a business and organisational focus. We are trying to understand how AI can augment certain professions, particularly that of underwriter, which is very specific to the insurance sector and currently faces generational challenges. We conduct interviews with underwriters across different AXA entities, observe how they interact with existing tools, and are preparing an experiment to see how they adopt an AI tool. The aim is to take the time to observe, using a scientific rather than a purely operational approach.
In this project, I contribute my knowledge of the business and organisational context, working in tandem with a researcher who brings methodological expertise. I also take on a degree of project management, coordination and stakeholder liaison.
How does an AI research team fit within a structure like AXA’s?
AXA is a global group operating in more than fifty countries, whose core business is insurance distribution. And within this vast organisation sits a small AI research team of around fifteen people. Despite its size, the team plays a genuinely important role.
I am part of AXA Group Operations, the entity responsible for operations, procurement, security, IT and innovation across the group. It was within this innovation hub that the research team was created in 2018, at a time when almost no one was talking about responsible AI. The team initially positioned itself on this topic, before also developing a strong thought leadership dimension, which is actually how I joined AXA. Today, the team is seen as highly valuable, notably because it contributes to AXA’s visibility and reputation. For instance, the Evident AI Index ranked AXA as the number one insurer in AI maturity in 2025, and research publications were an important criterion in that assessment. Very few insurers have a comparable setup.
Even if what we do is not always directly connected to the core insurance business, our role is clearly aligned with the group’s overall strategy, which places a strong emphasis on innovation and on the ability to understand, anticipate and frame the evolution of AI.
Can you tell us about your journey since EDHEC?
I joined the Grande École programme and then became part of the GETT programme from its very first cohort. It was exactly what I was looking for: an international dimension, strong exposure to technology, and a mindset deeply rooted in innovation. My final year at Berkeley was a turning point. We had incredibly rich courses on AI, blockchain and innovation more broadly, and we visited major tech companies. It was extremely stimulating.
Then COVID hit and I returned to France. I joined a consulting firm for my end-of-studies internship, without a very clear idea of what I wanted to do. I was assigned to a project at AXA, working with both the research team and the learning team to help roll out an AI training programme. That project immersed me in a fascinating world. When my assignment ended, they offered me the opportunity to come back to work on responsible AI and governance. I had really enjoyed working with the teams, and they had appreciated the way I worked.
How did AXA’s governance around responsible AI take shape?
For several years, progress was mainly driven by regular discussions and awareness-raising efforts. I facilitated a quarterly forum, the AXA Responsible AI Circle, which brought together a wide range of profiles: data experts, business representatives, legal specialists and colleagues from international entities. We met to understand AI risks, discuss developments at European level, and think about how AXA could anticipate these changes.
At the same time, we were building very concrete tools, such as an AI risk matrix and a glossary to help teams navigate these topics. Then, at the end of 2022, everything accelerated. The arrival of ChatGPT triggered a massive awareness across all levels of the group. In parallel, European regulation became more robust. AXA’s official AI strategy was defined, and governance structures were put in place to integrate these new challenges. Today, there is a clear vision, an internal policy, reference frameworks, and strong alignment across teams.
What do you see as the biggest challenges in this context of acceleration?
The first challenge is clearly speed. AI is evolving at a pace that sometimes exceeds our ability to step back and reflect. There can be a sense that everything has to move fast, that everything must be transformed, and that every use case should involve generative AI. I find it fascinating, but sometimes excessive. There is a huge focus on generative models, even though simpler, more robust forms of AI often exist and are better suited to real business needs.
Another challenge is finding the right balance between enthusiasm and clear-headedness. I firmly believe in the potential of AI, but I also think it is important to maintain discernment and not reinvent everything simply because a technology is fashionable.
How do you stay up to date with everything happening in this field?
I benefit from an extremely rich environment. I work with researchers, experts and people who have an impressive command of these subjects. Whenever I don’t understand something, I can ask questions, discuss and dig deeper.
I also read a great deal, particularly on regulation and institutional analysis: OECD resources, European Commission publications, specialist reports. Social media also plays a major role, especially LinkedIn. You see the latest advances, analyses and debates there. In the end, information almost comes to you on its own, because the topic is everywhere. Sometimes the hardest part is not getting overwhelmed by it all.
How does AXA bring internal teams together around this transformation?
It is a huge undertaking, but an essential one. The drive comes from the top, which is crucial because it provides reassurance and a clear direction. Then there is a strong focus on communication and education: explaining, reassuring, listening and addressing concerns. Each entity also offers training tailored to its specific needs, and at group level we regularly develop modules to build a shared AI culture.
Change management is central. We are even designing workshops specifically aimed at helping people overcome fears, understand what AI really is, and distinguish reality from projection. Fears often stem from distance or misunderstanding. Once you explain how AI works, its limits and its possible uses, people’s relationship to the subject changes completely.
Does the regulatory framework influence the way you move forward?
Yes, and it is absolutely fundamental. Regulation can seem restrictive; it may feel like it slows projects down or adds extra steps, but it is indispensable. As a citizen, I am very grateful that it exists, because it protects against potential abuses. Some technical profiles are so focused on innovation that they do not always see this aspect. Yet insurance is a heavily regulated industry. You cannot fully automate a critical process or make purely automated decisions without the possibility of human recourse. GDPR prohibits this, and rightly so. The legal framework does not block innovation; it forces it to be properly governed, and I believe that is essential.