Market Commentary

Banks' Growing AI Reliance on Big Tech: A Double-Edged Sword

Turra Rasheed
11 Jun 2024 · 2 minutes read

The rapid expansion of Artificial Intelligence (AI) in the financial sector is reshaping the landscape of banking, yet it brings a new set of risks that must be carefully managed. As banks increasingly turn to AI for tasks like fraud detection and customer service, their reliance on major tech firms—often referred to as Big Tech—grows. This dependence is not without significant concerns, particularly regarding the concentration of power and potential vulnerabilities within the financial system.

At a recent conference in Amsterdam, leading banking executives expressed unease about the implications of this trend. Bahadir Yilmaz, Chief Analytics Officer at ING Groep NV (ING:US), highlighted the increasing necessity for banks to depend on the infrastructure and computing power provided by a few dominant tech companies. "You will always need them because sometimes the machine power that is needed for these technologies is huge. It's also not really feasible for a bank to build this tech," Yilmaz stated.

This sentiment was echoed by Joanne Hannaford, who leads technology strategy at Deutsche Bank’s (DB:US) corporate bank. Hannaford underscored the regulatory complexities of moving data into the cloud, a step made necessary by AI's computational demands. The British government has proposed regulations to address financial firms' reliance on external technology providers, warning that an outage at a single cloud provider could disrupt services across multiple financial institutions.

Amid these discussions, another layer of complexity emerges: the influence of U.S. lawmakers and politicians who hold significant financial stakes in both the banking and tech sectors. There is growing concern that the rules and regulations governing these industries may be swayed by those interests. Lawmakers carry the dual responsibility of fostering innovation while ensuring the security and stability of the financial system, which requires an approach to regulation that is free from conflicts of interest.

An open letter from current and former employees of AI companies, including Microsoft’s (MSFT:US) partner OpenAI and Google’s (GOOGL:US) DeepMind, has brought attention to the broader risks of unregulated AI. They warn that the financial incentives of AI firms may impede effective oversight, leading to the spread of misinformation, deepening inequalities, and other potential harms. The letter emphasizes the need for transparency and accountability in AI development and deployment, urging companies to allow employees to voice concerns without fear of retribution.

Moreover, the recent actions by OpenAI to disrupt covert influence operations using AI models highlight the real-world implications of these technologies. While AI holds immense potential for positive transformation, it also poses risks that must be mitigated through robust governance and regulation.

As AI continues to evolve, it is crucial for regulators to balance innovation with safety. Policymakers must ensure that the financial system remains secure and resilient, even as it integrates advanced technologies. This includes fostering competition among tech providers to prevent over-reliance on a few dominant players and ensuring that the benefits of AI are broadly shared while minimizing its risks.