The future of AI in finance doesn't look like ChatGPT
The AI revolution is reshaping financial services with the force of a category-five hurricane, and generative AI stands at the forefront of this transformation.
A recent survey of 56 U.S. financial services executives found that 42% plan to increase their AI investment by at least 50%. Meanwhile, annual growth for AI in financial services across the Middle East is forecast at 20-34%. Those are just two data points, but they illustrate the momentum the technology has across the industry right now.
AI is clearly here to stay, but what will the advancements look like for fintech in the immediate future?
"The real game-changer in fintech is not merely the blend of AI into financial products—it's the sheer power behind it."
AI's unstoppable march on financial services
In the U.S., Public's Alpha chatbot, powered by OpenAI's GPT-4, exemplifies the AI wave. Offering real-time and historical market data through a conversational interface, Alpha is a beacon of the future, signalling the end of traditional market analysis tools as we know them. Across the globe, financial giants are scrambling to integrate AI into their operations, with Wells Fargo, Cleo, and Emirates NBD among the financial brands building AI and chatbots into their products.
But here's where I draw the line in the sand: the real game-changer in fintech is not merely the blend of AI into financial products—it's the sheer power behind it.
This conversation pivots us towards the crucial distinction between Large Language Models (LLMs) and their less complex counterparts, Small Language Models (SLMs). LLMs, with their staggering billions, even trillions, of parameters, have ascended to celebrity status within the AI community. These are the powerhouses behind many of fintech's most sophisticated chatbots, with Alpha standing out as a prime example. Take, for instance, GPT-4, widely reported (though never confirmed by OpenAI) to have more than a trillion parameters.
Think of parameters as the building blocks of a model's ability to understand and generate language. Each parameter contributes to its capacity to grasp nuance, process information, and produce responses that are increasingly indistinguishable from those a human might write. In essence, the leap to LLMs like GPT-4 signifies a monumental advance in sophistication, enabling these models to tackle a broader array of tasks with greater accuracy than ever before.
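To make that concrete, here's a minimal sketch (my own illustration, not from any production system) that builds a tiny two-layer transformer in PyTorch and counts its parameters. The architecture and sizes are arbitrary; the same counting logic is what produces the billion- and trillion-parameter figures quoted for frontier models.

```python
# Minimal sketch: what "parameters" actually are, i.e. the learned weights of a model.
# The configuration below is arbitrary and purely illustrative.
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, dim_feedforward=1024, batch_first=True)
tiny_model = nn.TransformerEncoder(layer, num_layers=2)

# Every weight and bias tensor in the network counts towards the total.
num_params = sum(p.numel() for p in tiny_model.parameters())
print(f"Tiny model: {num_params:,} parameters")  # roughly 1.6 million

# For scale: an SLM such as Phi-2 has about 2.7 billion parameters,
# while frontier LLMs are reported to run into the hundreds of billions or more.
```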
But as we peel back the layers, it becomes glaringly apparent that more is not always synonymous with better.
The LLMs' Achilles Heel
Yes, LLMs like GPT-4 are impressive, but integrating them into fintech comes with significant challenges:
- Data requirements and privacy concerns: LLMs thrive on vast datasets, which can be a mismatch for the niche, sensitive nature of financial data. Additionally, outsourcing AI solutions raises significant data privacy and security concerns, given the sensitive information handled by financial institutions, potentially handcuffing banks to tech giants in an uncomfortable embrace.
- Cost and dependency: The sheer scale of LLMs translates to substantial computational power and maintenance costs. Dependency on third-party LLM providers also introduces potential risks and costs, particularly as these models grow more complex.
SLMs: The Unsung Heroes
In the finance sector, where agility and precision are paramount, SLMs are emerging as the smarter alternative to the industry's growing reliance on larger, more cumbersome models. These streamlined, efficient models, such as Microsoft's Phi-2 (around 2.7 billion parameters), are redefining effectiveness by proving that size isn't the sole determinant of capability. With relatively modest parameter counts, ranging from a few million to a couple of billion, SLMs show remarkable proficiency at tasks that require in-depth, domain-specific knowledge, all while avoiding the bulk and complexity of their larger counterparts.
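As a rough illustration of how lightweight these models are to work with, the sketch below loads Phi-2 locally with the Hugging Face Transformers library and runs a finance-flavoured prompt. The prompt, generation settings, and hardware assumptions are mine, not a recommendation or a production setup.

```python
# Minimal sketch: running an SLM (Microsoft's Phi-2, ~2.7B parameters) on local hardware.
# Assumes: pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the memory footprint modest
    device_map="auto",          # GPU if available, otherwise CPU
)

# Illustrative prompt only; a real deployment would evaluate the model on the task first.
prompt = (
    "Instruct: Classify the sentiment of this analyst note as positive, negative or neutral: "
    "'Margins compressed again this quarter despite revenue growth.'\nOutput:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```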
The real value of SLMs, however, extends beyond their efficiency and domain-specific prowess. Their small size and short development times make building these models in-house a realistic prospect. By training them on internal proprietary data, organisations not only ensure that these tools are precisely aligned with specific financial tasks, such as market trend analysis, sentiment analysis, and regulatory compliance, but also significantly mitigate costs and reduce reliance on third-party providers. Moreover, this in-house approach plays a crucial role in bolstering security: by keeping the development and deployment of SLMs within the confines of the organisation, firms maintain tighter control over their data, reducing the risk of breaches and ensuring that sensitive financial information remains secure.
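To give a sense of what that in-house training could look like, here is a hedged sketch of adapting an SLM to a financial sentiment task with parameter-efficient fine-tuning (LoRA via the peft library). The dataset path, field names, and hyperparameters are assumptions for illustration, not anyone's actual pipeline; the point is simply that the proprietary data never has to leave the organisation's own infrastructure.

```python
# Sketch: in-house fine-tuning of an SLM with LoRA adapters (transformers + peft + datasets).
# The dataset and its fields are hypothetical; only the small adapter weights are trained and saved.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Freeze the base weights; train only low-rank adapters on the attention projections.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical internal dataset: one JSON object per line, e.g. {"text": "...", "label": "negative"}.
data = load_dataset("json", data_files="internal/analyst_notes.jsonl", split="train")

def to_features(example):
    # Frame sentiment analysis as text generation: the note followed by its label.
    return tokenizer(f"Note: {example['text']}\nSentiment: {example['label']}",
                     truncation=True, max_length=512)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finance-sentiment",
                           per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=data.map(to_features, remove_columns=data.column_names),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # builds causal-LM labels
)
trainer.train()
model.save_pretrained("slm-finance-sentiment/adapters")  # adapters only, a few megabytes
```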
For financial institutions, this balance of efficiency, customisation, domain-specific expertise, and security often outweighs the benefits of sheer computational power. It represents a move towards a more deliberate and considered application of AI, focused on creating models that are not only effective and efficient but also secure and tailored to the unique needs of the finance sector.
"Financial services will be dominated not by sheer computing power, but by the agility, efficiency, and specialised expertise that only SLMs can offer.”
Moving In-house within Financial Services
Leading the transformative wave of in-house language model development is none other than JPMorgan Chase. Their deep dive into AI research is not just an ambitious venture; it's a potent declaration of their determination not only to compete but to lead. Their increase in AI research output, from 30% in 2018 to 45% in 2023, is a testament to serious investment and the substantial resources dedicated to AI development.
With a robust team of over 200 AI researchers, Chase is setting the stage not just to compete in the big leagues but to redefine them entirely. Their strategic decision to trademark IndexGPT for wealth management clients is a bold move, signalling a significant shift towards developing their own models. Although the parameter count can't be confirmed, it's likely to be an SLM trained on the bank's own proprietary data.
This initiative reflects a profound understanding that the future of financial services will be dominated not by sheer computing power, but by the agility, efficiency, and specialised expertise that only SLMs can offer.
My Unfiltered Opinion
Make no mistake about it: 2024 heralds a pivotal moment in the financial technology landscape, underscoring a deliberate transition from the broad strokes of generalised LLMs like ChatGPT to the nuanced, strategic embrace of specialised SLMs developed in-house.
This shift isn't merely about refining tools; it's a fundamental rethinking of AI's role in finance, prioritising bespoke solutions that offer enhanced security, streamlined computational demands, and pinpoint accuracy tailored to the unique needs and proprietary data of financial institutions.
The future of AI in finance is specialised, and it's unfolding now through a strategic sprint that will chart the course of digital financial services for years to come.