Mahesh Paolini-Subramanya, Chief Technology Officer, BKN301
Across the Middle East, artificial intelligence (AI) in banking and financial services has moved well beyond experimentation. It is now embedded in live banking operations, shaping how institutions detect fraud, assess credit risk, interact with customers and make day-to-day decisions.
A PwC study reflects this shift. More than a third of Middle East CEOs report embedding AI directly into products and services, almost double the global average. Around 70 per cent report they already have defined AI roadmaps, signalling that the region is ahead in terms of intent and adoption.
In my conversations with bank leaders, however, I have noticed a recurring theme. Once AI moves out of pilots and into production, operating it at scale becomes far more complex. Outcomes can be uneven, automated decisions become harder to explain, and outputs can vary across systems that were never designed to work together in real time.
In most cases, these challenges trace back to the same issue: the condition of the data that feeds AI.
AI is a business decision
At the point of scaling, AI stops being a technology initiative and becomes a business decision. AI now influences pricing, credit decisions, fraud detection, customer experience and financial reporting. It shapes cost structures, governance and organisational speed. Risk and compliance teams are also becoming more involved as AI reaches further into decision-making.
The fundamental question leaders need to ask is: Can the organisation trust the data feeding into those systems?
That uncertainty carries a real operational cost because AI operates at both speed and scale, but it is only as good as the data it relies on. When data definitions vary between systems, data lineage is unclear, or ownership is fragmented, outputs will still be generated, but the decisions based on those outputs become harder to defend. This may lead teams to question AI decisions and hesitate to act on them. As a result, they may introduce manual checks or additional validation steps to compensate for gaps in confidence.
This slows decision cycles in risk assessment, especially when results must be reconciled across systems. Over time, the promised speed of AI gives way to friction and the organisation struggles to scale it cleanly.
Legacy data environments limit real-time intelligence
Financial institutions’ core data systems were built for transaction processing and periodic reporting. Over time, however, digital channels, analytics tools and regulatory solutions have been layered onto these platforms, and data is copied, transformed and redefined repeatedly along the way. The availability of data increases, but so does complexity. Data lineage becomes harder to trace and definitions weaken, because these environments were not designed to support continuous, real-time intelligence across the organisation.
As decisions become real time, accountability demands follow
As AI begins to influence decisions in real time, expectations around accountability also rise. Regulators are paying closer attention to how automated outcomes are generated and governed. In the UAE, authorities including the Central Bank, the Dubai Financial Services Authority and the Financial Services Regulatory Authority have made clear that AI in financial services must be used responsibly, with strong data governance and model risk controls in place.
Institutions need confidence in where data comes from, how it is governed and why outcomes change over time. Without that visibility, governance becomes repetitive and expensive, and the organisation may miss the opportunities that agile, precise action would otherwise create.
AI returns depend on data discipline
So how do you ensure your AI investment delivers returns? AI investments are justified on the basis of efficiency gains, better risk management and improved customer outcomes. But when data foundations are weak, those returns become harder to realise.
The erosion of trust in AI may result in manual interventions, while lengthened audit and reporting processes increase operating costs, making the financial case for AI harder to defend even when the models themselves are capable.
This also influences investment horizons. Leaders often approach data modernisation as a long-term programme. However, meaningful benefits from improved data discipline can actually emerge sooner. Investment patterns may need to shift, but in a positive direction, with expectations of quicker returns. This places urgency on investing now, proactively rather than reactively.
So what does data readiness mean at scale?
It begins with clarity. Organisations need to identify which data truly matters to their most critical decisions and ensure it is standardised and consistently governed across the business. Lineage must be clear, so that leaders understand where information originates and how it changes along the way. Definitions need to be shared across functions so that risk, finance, compliance and technology teams work from the same understanding.
Equally important is separation. Operational systems should focus on their core functions, while data consumption for analytics, compliance and AI is handled separately, reducing duplication and rework. This is when AI begins to deliver visible and sustainable returns.
What separates leaders is data discipline
You might be thinking: If all financial institutions do this, what will give my organisation an edge?
Two banks can deploy comparable models and platforms, but where one extracts sustained value while the other remains constrained, the difference is rarely the technology. It is the discipline of the data environment, and the behaviours built around it.
Data discipline should be embedded both in technology and in how teams operate. When data systems are reliable and consistently governed, behaviour across the organisation changes in subtle but important ways. Team conversations shift from reconciling numbers to acting on shared points of reference. Decision-making speeds up while risk and compliance teams start speaking the same language as product and operations.
Governance reviews become more focused on judgement and oversight rather than rechecking basic assumptions, and governance becomes ingrained in how decisions are made, embedded early rather than imposed reactively.
This is where data discipline, coupled with organisational human expertise, translates into a long-term strategic edge that is harder for competitors to replicate.
Data foundations will define the Middle East’s AI leadership
The Middle East has a distinct advantage, with regional strategies shaped by a long-term outlook. This provides a strong foundation for sustainable AI adoption, particularly as priorities are measured in decades rather than quarters. That perspective aligns closely with the foundational work required to scale AI responsibly and with confidence.
As banks in the region continue to expand, business models will become more diverse and regulatory expectations more rigorous. This only increases the importance of strong data foundations. While models will continue to evolve and capabilities expand, AI delivers real returns when data readiness is treated as a leadership mandate, underpinned by a long-term commitment to consistency, trust, and execution at scale.
