Luware Blog | Expert Insights, News from the Cloud

AI-powered Compliance: Key Insights from the RegTech Summit London

Written by Joshua Wood | 28.10.2025 06:00:00

The AI-powered compliance landscape is evolving at a breathtaking pace, and the recent RegTech Summit in London (RTS25) made one thing crystal clear: Artificial intelligence (AI) is transforming compliance from a cost center into a value driver. But, as with any transformative journey, the path ahead is not without its challenges. 

In this article, we'll explore the key takeaways from the summit regarding the opportunities and obstacles that come with embracing AI-powered compliance:

  • AI is transforming compliance from reactive oversight to proactive value creation. 

  • Explainability and governance are essential for building trust in AI‑driven decisions. 

  • Data silos remain the biggest blocker, limiting cross‑asset visibility. 

  • Private AI compliance tools are rising, but require high‑quality, structured data. 

  • Agentic AI compliance is emerging fast, automating investigations and risk detection. 

  • Evidence, transparency, and auditability are now baseline expectations from regulators. 

Read on for more details. 

What is the RegTech Summit London?
The RegTech Summit London is one of the leading industry gatherings focused on the future of regulatory technology. It brings together compliance leaders, regulators, and technology innovators to discuss how emerging tools can help financial services organizations meet complex regulatory requirements more efficiently. With AI now central to how firms detect risk, automate workflows, and meet regulatory demands, RTS25 has become a key forum for compliance leaders navigating the intersection of technology and regulation.

How AI-powered compliance is transforming regulatory oversight into business value 

Traditional compliance teams have long been focused on monitoring and reporting, but the game is changing. Today, these teams are being asked to do more: To drive commercial value while maintaining rigorous oversight. It's a tall order, but one that's being made possible by the emergence of "compliance copilots": AI-powered compliance assistants that monitor, flag issues, and even explain their reasoning.  

The vision is a future where compliance officers operate like pilots in a cockpit, overseeing multiple systems in real time rather than working reactively. Imagine a world where compliance is no longer a rearview mirror activity, but a proactive, forward-looking function that enables businesses to stay ahead of the curve.  

This is the promise of AI-powered compliance, and it's an exciting prospect for firms looking to unlock new levels of efficiency, insight, and competitive advantage. 

 

Real-time AI compliance monitoring: Balancing innovation and accountability 

That said, while real-time AI compliance monitoring is a tantalizing prospect, it's also a daunting one. Executives are raising concerns that live oversight could disrupt existing compliance cultures and introduce risk if not implemented carefully. The consensus is clear: Compliance automation must come with transparency, governance, and explainability. In turn, AI compliance tools will need clear audit trails, with every decision logged, recorded, and reproducible.  

This is a critical juncture for AI in RegTech, as firms looking to harness the potential of AI-powered compliance navigate the trade-offs between innovation and accountability. It's a delicate balance, but one that's essential for building trust in AI-powered compliance systems.

Why explainability and governance are critical for AI compliance tools

Explainability remains a sticking point, and it's an issue that's not unique to AI in RegTech. Full transparency into complex AI compliance models is not yet possible—and perhaps never will be. But even partial explainability can be enough to build trust in AI-powered systems. As one expert noted, "Even doctors can't fully explain the brain, but they still save lives." The goal is traceable AI-powered compliance, where every prompt, decision, and follow-up step can be tracked and reviewed. 
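To make the idea of a traceable audit trail concrete, here is a minimal sketch in Python. All names are hypothetical (this is not a description of any vendor's product): each log entry records the prompt, the decision, and the rationale, and is hash-chained to its predecessor so that any later tampering is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log; each entry is hash-chained to the previous one,
    so any later edit to an entry breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, prompt: str, decision: str, rationale: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        # Hash a canonical (key-sorted) serialization of the entry body.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns True only if the chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A real deployment would add model version, reviewer identity, and secure storage, but even this toy version shows the principle: a regulator (or internal audit) can replay the chain and detect any after-the-fact modification.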

Governance is also critical, as firms need to ensure that AI-powered compliance systems are aligned with regulatory requirements such as MiFID II and DORA, as well as with internal risk management frameworks. This means establishing clear guidelines and protocols for AI development, deployment, and monitoring, as well as ensuring that AI systems are subject to regular testing and validation. 

Data silos are holding back AI-powered RegTech—here's why it matters 

As was evident from a poll conducted at the summit, data integration remains one of the sector's toughest challenges, as many firms still operate with siloed communication, trading, and jurisdictional data. This makes it difficult to gain a unified view across assets or markets, and it's a problem that regulators are increasingly prioritizing. Firms that can achieve cross-asset visibility stand to lead the pack, but it's a complex task that requires significant investment in data infrastructure and integration. 

(Image: summit poll on the biggest challenge in regulatory reporting.)

Some institutions are building their own surveillance and data correlation systems to tackle this head-on, reflecting a broader trend towards custom AI compliance tools tuned for regulatory environments. This is a significant opportunity for firms, as it enables them to develop AI-powered compliance systems that are tailored to their specific needs and requirements. 

Private AI models in financial compliance: The data quality problem 

While public large language models (LLMs) grab headlines, financial institutions are increasingly opting for private AI compliance models to maintain control, privacy, and compliance. Yet, training these private systems with high-quality data remains a bottleneck, as many firms struggle to source, clean, and structure data effectively. 

The cost of doing so is significant, and there's confusion among financial institutions about which business unit should foot the bill for such complex data transformation tasks. The compliance team, as a cost center, typically doesn't have the budget allocated, and it's a challenge firms must address if they're to unlock the full potential of AI-powered compliance. 

Agentic AI: The future of automated risk detection 

Another major talking point at the conference was the emergence of agentic AI: Systems capable of independently following data trails and escalating potential risks. These AI compliance tools are being explored for use cases like market abuse detection, data leakage, and financial crime monitoring, and they have the potential to revolutionize compliance automation workflows. 

Agentic AI systems could assist in automating Know Your Customer (KYC) checks, sanctions screening, and due diligence, drastically reducing manual workload while improving accuracy. This is a significant opportunity for firms, as it enables them to unlock new levels of efficiency and productivity in compliance. 
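As an illustration of what "governance guardrails" might look like in code, the sketch below (names and thresholds are hypothetical, not drawn from any product mentioned at the summit) lets an agent auto-close only clearly benign alerts; anything high-risk or ambiguous is routed to a human reviewer rather than acted on autonomously.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    account: str
    risk_score: float              # 0.0-1.0, from an upstream detection model
    evidence: list = field(default_factory=list)

def triage(alert: Alert,
           escalate_threshold: float = 0.8,
           auto_close_threshold: float = 0.2) -> str:
    """Guardrailed triage: the agent may only close clearly benign alerts;
    high-risk or ambiguous cases always go to a human."""
    if alert.risk_score >= escalate_threshold:
        return "escalate_to_human"
    if alert.risk_score <= auto_close_threshold:
        return "auto_close_with_log"
    return "queue_for_review"
```

The design choice worth noting is the middle band: rather than forcing a binary decision, anything the model is unsure about defaults to human review, which is the conservative posture regulators expect from autonomous systems.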

The impact of AI in compliance: Practical use cases  

Now that we’ve explored why AI-powered compliance is a game-changer, let’s look at where it’s making the biggest impact. The use cases below show how financial institutions are applying AI compliance tools to solve real challenges while staying aligned with strict oversight requirements. 

Use case | What it solves | AI advantage
Market abuse surveillance | Detects spoofing, layering, insider trading | Real-time pattern analysis, fewer false positives
KYC & AML automation | Speeds onboarding, reduces compliance costs | Automates checks, risk scoring, adverse media
Cross-asset surveillance | Finds manipulation across asset classes | Correlates trades across markets and jurisdictions
Transaction monitoring | Prevents fraud and money laundering | Instant alerts, adaptive models
Regulatory change management | Keeps policies aligned with new rules | AI scans updates, maps to workflows
Workflow automation | Speeds investigations and reporting | Automates triage, SAR (Suspicious Activity Report) drafting
Private AI models | Protects sensitive data | Controlled environments, RAG (Retrieval-Augmented Generation) via secure knowledge bases
Agentic AI compliance for risk | Proactive risk detection and escalation | Autonomous agents under governance guardrails
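The table mentions RAG (Retrieval-Augmented Generation) via secure knowledge bases: before the model answers, relevant passages are retrieved from a private, curated document store and supplied as context. As a rough illustration, the sketch below uses simple keyword overlap as a stand-in for the vector search a production pipeline would use; the document IDs and scoring are hypothetical.

```python
def retrieve(query: str, documents: dict, top_k: int = 2) -> list:
    """Score each document by keyword overlap with the query and return
    the best-matching document IDs. A production RAG pipeline would use
    embeddings and vector similarity instead of raw word overlap."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(text.lower().split())), doc_id)
        for doc_id, text in documents.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [doc_id for score, doc_id in scored[:top_k] if score > 0]
```

The point of keeping the knowledge base private is that the sensitive policy documents never leave the firm's controlled environment; only the retrieved snippets are passed to the model as context.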

 

Building trust in compliance tools: Evidence, explainability, and testing 

To sum up, pre-testing, live-testing, and post-testing of AI compliance models are becoming standard in regulated environments, and this testing discipline is a critical aspect of building trust in AI-powered compliance systems. The three keywords repeated throughout the summit were: 

  • Evidence: AI compliance tools need to be backed by robust evidence, demonstrating their effectiveness and reliability in compliance workflows. 
  • Explainability: AI-powered compliance models need to be transparent and explainable, enabling compliance officers to understand the reasoning behind their decisions. 
  • Trust: AI compliance models need to be trustworthy, with clear audit trails and reproducible results that enable compliance officers to build confidence in their decisions. 

Without these, even the most advanced AI-powered compliance systems will face resistance from regulators and internal risk committees alike. It's a challenge that firms need to address, and it's one that requires a deep understanding of AI in RegTech, the compliance landscape, and the regulatory requirements that govern both.

The future of AI-powered RegTech: What compliance leaders should do next

The RegTech sector is entering a pivotal phase, one where AI, data integrity, and compliance oversight must evolve together. Firms that embrace transparent, traceable, and tested AI compliance tools won't just stay compliant—they'll unlock new levels of efficiency, insight, and competitive advantage. 

The message from the RegTech Summit London was clear: The compliance function is no longer a mere back-office obligation. It's becoming a strategic differentiator. Organizations that invest in the right AI-powered compliance infrastructure today will be better positioned to respond to regulatory change, reduce operational risk, and build lasting trust with regulators and stakeholders alike. The question is no longer whether to adopt AI in compliance—it's how fast, and how well.

Here’s where to start:

  • Audit your data infrastructure. Cross-asset visibility starts with breaking down silos. Identify where your communication, trading, and jurisdictional data is fragmented and prioritize integration.

  • Demand explainability from your AI vendors. If your current tools can't produce a traceable audit trail for every decision, that's a gap regulators will find before you do.

  • Establish an AI governance framework. Clear protocols for development, deployment, and monitoring of AI compliance tools aren't optional. They're becoming baseline regulatory expectations.

  • Start testing now. Pre-testing, live-testing, and post-testing of AI models are standard practice in leading institutions. If you're not doing all three, you're behind.

  • Explore agentic AI—carefully. The potential for autonomous risk detection is significant, but governance guardrails must come first. Pilot in low-risk workflows before scaling.

  • Review your compliance recording setup. Monitoring is only as good as the data it captures. Ensure your compliance recording solution is built for the demands of AI financial compliance.