Building Trust in Digital Identity: Ethics, Bias and the Future of Fraud Prevention

Emma Lindley | Identity & Fraud Expert and Founder of Women in Identity
Mateo Jarrin Cuvi | Global Manager for Partners & Media at The Association of Governance, Risk & Compliance

Ahead of the Fraud Leaders’ Summit in London, Emma Lindley, Identity and Fraud Expert and Founder of Women in Identity, sat down with Mateo Jarrin Cuvi, Global Manager for Partners & Media at The Association of Governance, Risk & Compliance, to discuss the changing face of fraud prevention and how to build trust in digital identities.

Meet the experts:

Emma Lindley is an award-winning senior executive and internationally recognised leader in fraud, payments, and digital identity. Awarded an MBE in 2022, she brings over 20 years’ experience across financial services, e-commerce, gambling, airlines, and government, with leadership roles at Visa, Cybersource, GBG, and Cifas, and as a co-founder of two successful businesses.

A trusted advisor to UK and US governments, Emma is a frequent global keynote speaker and has been named one of the UK’s 100 Most Influential Women in Tech (2022–2024).

Mateo Jarrin Cuvi has a 20-year career spanning multiple industries, with a focus on content creation, business writing, and financial services. He is Global Manager for Partners and Media at the Association of Governance, Risk & Compliance (AGRC), where he leads strategic partnerships and promotes the association’s work. Mateo holds degrees from the University of Virginia and the University of California, San Diego.

 

Mateo: You’ve spent more than two decades working globally across digital identity, fraud, and payments. How has your view of “trust” in digital systems evolved over that time?

When online systems first emerged, many people simply didn’t trust them. There were real concerns about electronic money - could it just disappear?

Over time, while there are still some people who don’t trust electronic systems - my 80-year-old aunt still prefers cash - most people are now very comfortable sending money electronically, doing bank-to-bank transfers, and using credit cards online.

So we reached a point where people understood these systems and trusted them. Where we are now, however, is a different phase of trust. People’s concerns today are less about whether the system works and more about whether their money is safe.

We’re seeing a huge rise in scams and social engineering. These tactics aren’t new - they’ve simply shifted from someone knocking on your door to someone targeting you digitally. As a result, people are asking: Am I being scammed? Am I sending my money to a real organisation?

More recently, with the rise of generative AI and AI agents, we’re entering another evolution. Technology itself is now being used by fraudsters. These attacks can be scaled in ways we’ve never seen before, which raises entirely new trust challenges.

 

Mateo: What gaps do you currently see in how digital identity solutions are designed and how can greater diversity in the industry lead to better outcomes for users?

If we look back at how digital identity and fraud systems were originally designed, they were largely point solutions. You might have one system for chargeback detection, another for fraud detection, and another for identity verification or KYC.

Fraudsters quickly learned how to work around these isolated systems, especially when organisations themselves weren’t sharing data internally or collaborating across departments.

What we’re seeing now is a shift towards platform-based approaches. Organisations are knitting together these point solutions into orchestrated platforms that provide visibility across the entire customer journey.

This has also driven greater collaboration internally. Fraud isn’t just a financial issue for the CFO anymore - it’s also a security issue for the CTO and CISO. Those roles now need to work much more closely together, and that’s a really important evolution.

Moving on to the second part of your question: we all have unconscious bias - myself included. No one sets out to build bad systems, but bias can still creep in without us realising.

About ten years ago, alongside my day job, I founded a non-profit called Women in Identity, where we research how unconscious bias shapes the systems we build and how diverse teams improve outcomes.

Homogeneous teams tend to share the same perspectives, which is a problem when building systems for everyone. For example, biometric systems tested only on similar-looking teams may appear to work perfectly, but fail for people with different skin tones or backgrounds.

The same is true for fraud systems: diverse teams bring broader insights into how fraud works and evolves. Ultimately, diversity helps teams spot issues earlier, build fairer systems, and deliver better outcomes for users.

 

Mateo: What are some of the other biggest ethical considerations organisations often overlook when they’re implementing identity verification technologies?

When we think about unconscious bias, we also need to think about the ethical implications of how these systems are designed and deployed.

If we look at something like food regulation in the UK, we have the Food Standards Agency. Restaurants are rated on cleanliness from one to five, ingredients are clearly labelled, and calorie content is displayed. As consumers, we understand those standards and make informed choices.

What I find interesting is that when we look at technology - particularly AI - we don’t yet have the same level of global standards or consistency. In Europe, steps are being taken through councils and regulatory frameworks, but in many other parts of the world those guardrails simply don’t exist.

The way I think about it is that we are consuming technology in much the same way we consume food. Even though we’re not eating it, these systems shape decisions, access, and outcomes in our daily lives. Because of that, we need to fundamentally reframe how we think about technology on a global scale and apply stronger ethical thinking to how it’s built and deployed.

 

Mateo: You have advised both the private sector and governments - what are some of the biggest opportunities and challenges for public-private collaboration when it comes to building identity systems people can trust?

A lot of this depends on the political and social structure of a country. In democracies, public-private partnerships tend to work better because governments often set a framework and invite private sector companies to participate.

In the UK and other similar markets, governments establish standards and certification processes, and private companies bid to be part of those frameworks. That collaboration between government and industry brings transparency and shared accountability.

The downside is that this approach can be slower and more complex. It takes time to get systems to market. But the benefit is trust - people can see how systems are built and governed.

In contrast, when governments build and mandate digital identity systems themselves, rollout can be much faster. We’ve seen this in countries like China, where digital identity has been mandated and widely adopted. However, in those cases, the public is largely asked to trust the government without transparency or choice.

There are clear pros and cons. Faster rollout versus transparency and public trust. Different countries will land in different places, and neither approach is without its challenges.

 

Mateo: Where do you see real innovation happening in the identity space today - and where is it most needed next?

If we look at identity innovation in waves, we started with what I’d call Identity 1.0, where people had to physically go into bank branches with documents.

Then we moved into Identity 2.0, where we began using digital identity databases, photographing documents, and applying biometrics to verify individuals. That was a huge step forward.

Now, with the rise of generative AI, we’re entering the next major wave. Generative AI presents enormous opportunities for identity and fraud organisations - improving accuracy, speed, scalability, and the ability to detect fraud patterns globally through advanced machine learning models.

But, as with any powerful technology, it can be both a tool and a weapon. Fraudsters are also using generative AI to try to break these systems.

The challenge - and the excitement - lies in staying ahead. How do we use these technologies for good, while constantly anticipating how they might be exploited? That’s where the next wave of innovation really sits.

 

Mateo: As a final question, what advice would you give to emerging leaders looking to make an impact in the identity and fraud space?

Alongside my professional role, a lot of my impact has come from voluntary work - particularly through founding Women in Identity, which ultimately led to me being awarded an MBE.

There are some incredible organisations people can get involved with alongside their day jobs, such as the Merchant Risk Council, ID Pro, and Women in Identity. These communities sit alongside the identity, fraud, and payments ecosystem and offer a much broader perspective.

Getting involved gives you access to incredible networks, opens up new job opportunities, and - most importantly - helps you better understand the real problem sets we’re trying to solve. It also creates space to discuss ethics, bias, and differing viewpoints you might not encounter in your day-to-day role.

From my own experience, that involvement has given me a much deeper understanding of the industry and a level of empathy I simply wouldn’t have had otherwise. I’d strongly encourage emerging leaders to engage with these communities and contribute where they can.

 

Looking Ahead

Both Emma and Mateo will be at the upcoming Fraud Leaders’ Summit in London. Expect deep discussions on critical areas such as fraud prevention, financial crime, AML, risk management, identity verification and payments.
