
AI in financial services: The digital-first generation

22/07/2025


Artificial intelligence is transforming the trading desk, whether that desk is on Wall Street or in your home office. Algorithms threaten to usurp quants, while AI tools promise to put institutional-grade expertise into the hands of ordinary investors.

When it comes to wielding the power of these new technologies, we’re not all on a level playing field. That’s the claim, persuasively argued, of Goldman Sachs Chief Information Officer Marco Argenti.

Considering the effects of ‘agentic AI’ – “artificial intelligence systems that can perform tasks on behalf of humans and make independent decisions without direct oversight” – on the workplace, Argenti asks us to consider the analogy of someone who learned to play the piano in adulthood. “You might be enthusiastic and dedicated, but the odds of becoming a prodigy are slim.” Similarly, when people learn to operate a computer later in life, they lack the fluency of users who grew up with the technology.

We can see this dynamic as AI tools develop. “A generational divide is emerging—not because more seasoned professionals lack intelligence or drive, but because they didn’t grow up with these tools.”

Critically, Argenti suggests, while older team members lack the intuitive knowledge of AI tools that their younger peers possess, they are the bearers of “most of the institutional knowledge and experience”.

The older generation thus has a responsibility both to usher in a new generation of talent and to ensure a “path to seniority” for junior AI adepts.

Computer literacy

These are salient points. It is not sufficient for financial services firms to consider how AI will affect operations in the short term. The industry needs to take seriously the reality that a new generation of employees, clients and partners will transform trading and investment. Firms must also learn to leverage the next generation’s skills effectively.

But we should also think critically about what it means to grow up in a particular technological paradigm.  

Sociologist Kieran Healy pointed out that the rapid rate of digital transformation actually encompasses two revolutions, “tending to pull in opposite directions.”

“On one side, the mobile, cloud-centered, touchscreen, phone-or-tablet model has brought powerful computing to more people than ever before. This revolution is the one everyone is talking about, because it is happening on a huge scale and is where all the money is. In practice it puts single-purpose applications in the foreground and hides from the user both the workings of the operating system and (especially) the structure of the file system where items are stored and moved around.”

The second revolution means that “open-source tools for plain-text coding, data analysis, and writing are also better and more accessible than they have ever been”.

This second revolution has transformed Healy’s field, with the implication that many young students enter social science wanting to work with data but “have little or no prior experience with text-based, command-line, file-system-dependent tools. In many cases, they do not have much experience multi-tasking in a windowing environment, either, at least in the sense of making applications work together in the service of a single goal”.

The tension is that these young people are fully adept at wielding the most advanced computing power, yet lack the specific skills needed for a different kind of technological application.

Means and ends

Healy’s text, published in 2016, was not referring to agentic AI. But the broader point stands – indeed, it may be even more relevant in the case of AI, whose internal reasoning is often opaque even to experts.

In a recent editorial, Finalto Group Head of Regulatory Reporting Eric Odotei considered the challenges that opaque AI systems pose. Odotei argues that we cannot afford to evaluate AI solely on its ability to deliver outcomes: “there are real risks in relying on black box systems that produce results without offering any transparency into how those results were reached.”

Firstly, AI can be wrong. If we don’t understand the system’s inputs and reasoning, we can’t effectively assess its output.

Then there is the question of bias. A model’s reasoning depends on its training data, so there is always a risk of biased and harmful decision-making, replicated at scale.

And in financial services, accountability is a particular concern. Regulators and lawmakers demand transparency. You need to be able to explain how decisions were reached; you cannot simply point to your AI model.

For older users of emerging technologies, scepticism about AI – especially relatively autonomous agentic AI – may come naturally. Indeed, older financial services professionals may need to overcome a degree of anxiety about new technologies in order to wield them effectively.

By contrast, for younger employees, who have grown up with AI-enhanced workflows, autonomous digital systems may feel completely normal. The risk is that proficiency and fluency can shade into credulity: that as we come to depend on hybrid workforce models – AI and humans working together – we fail to critically interrogate our digital colleagues.


All opinions, news, research, analysis, prices or other information is provided as general market commentary and not as investment advice and all potential results discussed are not guaranteed to be achieved. The information may have been derived from publicly available sources, company reports, personal research, or surveys. Past performance is not indicative of future performance. Trading carries risk of capital loss. Service available to professional clients only.
