Oftentimes, practical applications get lost in discussions of a new technology. Artificial intelligence (AI) is no different, despite capturing the imaginations (and portfolios) of analysts, institutional investors, and retail traders alike.
For many Americans, AI as the “next big thing” remains an abstraction, with individual use cases still hard to grasp. To learn more, TheStreet interviewed PwC’s U.S. Chief AI Officer, Dan Priest, who offers some ‘on the ground’ insights into how banks and financial services firms are thinking about AI.
Over the last few weeks, fears around AI have caused selloffs across several industries. No sector has been immune: software, financial services, commercial real estate, even insurance companies seem to be affected. I’m curious to what extent these risks are substantiated from your vantage point.
“The market volatility reflects a fair amount of confusion about how and when AI will disrupt these impacted sectors. And because there’s greater uncertainty about AI’s long-term impact on earnings, investors are increasing the discount rates used in their discounted cash flow models, which has a large impact on P/E ratios. There are cases where investors believe some legacy companies are vulnerable to disruption, and they want to see more investment in clearly articulated business strategies that contemplate AI. In short, the volatility is a signal that investors believe AI’s disruptive potential is very real, and they want greater clarity on how management teams across businesses and sectors will respond to reduce the risk to future earnings.”
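The mechanism Priest describes — a higher discount rate compressing valuations — can be illustrated with a quick sketch. All of the figures below (cash flows, growth rates, the two-point rate increase) are hypothetical assumptions chosen for illustration, not data from the interview:

```python
# Illustrative only: how raising the discount rate in a DCF model
# compresses a company's present value (and thus its implied P/E multiple).
# Every number here is a hypothetical assumption.

def dcf_value(cash_flow, growth, discount_rate, years=10, terminal_growth=0.02):
    """Present value of `years` of growing cash flows plus a
    Gordon-growth terminal value, discounted back to today."""
    pv = 0.0
    cf = cash_flow
    for t in range(1, years + 1):
        cf *= 1 + growth
        pv += cf / (1 + discount_rate) ** t
    # Terminal value captures all cash flows beyond the explicit horizon.
    terminal = cf * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** years
    return pv

earnings = 100.0  # hypothetical annual cash flow
base = dcf_value(earnings, growth=0.05, discount_rate=0.08)
stressed = dcf_value(earnings, growth=0.05, discount_rate=0.10)  # +2 pts for AI uncertainty

print(f"Implied multiple at an 8% discount rate:  {base / earnings:.1f}x")
print(f"Implied multiple at a 10% discount rate: {stressed / earnings:.1f}x")
```

Nothing about the business has changed between the two runs; only the rate investors demand for bearing uncertainty has, which is why valuation multiples can fall sharply even when near-term earnings hold up.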
Let’s dig in and do a little reality check on AI past, present, and future: How is artificial intelligence being used in the banking and capital markets sector? How is that changing now? How might that transform in the future?
AI is already being used in banking and capital markets to automate routine processes, enhance customer experiences, and strengthen risk and compliance functions. Key uses include fraud detection, customer service, robo-advisors, and improvements in back-office and bank operations. Risk and compliance teams are also benefiting from AI, using it to identify anomalies, improve monitoring, and streamline regulatory reporting. Across these areas, firms are applying AI both to gain efficiencies and to improve performance.
At the same time, banks have been cautious about adopting AI more broadly because of regulatory expectations. For example, supervisory guidance such as SR 11-7 requires banks to manage model risk and make sure that quantitative decision-making methods are accurate, well-governed, and properly validated. As a result, AI models increasingly need to be explainable and auditable to meet regulatory standards.
What’s changing now is the scale and sophistication of applications: firms are moving beyond rules-based automation to more advanced machine learning and generative models that can interpret unstructured data, support more nuanced decision-making, and personalize client interactions at scale. Looking ahead, regulatory complexity, system and data challenges, as well as organizational barriers will get addressed as competitive pressures mount. The future of banking promises to be more intelligent, more efficient, and more competitive.
I’m wondering if we can talk about LLMs, since when people talk about AI, they’re mostly talking about LLMs (as opposed to machine learning, which has become instrumental in fraud prevention). Where do you see LLMs fitting into the businesses you’re working with at PwC?
Machine learning has been embedded in areas like fraud detection, credit modeling, and risk management for many years, and that foundation isn’t going away. Where LLMs differ is in their ability to work with both structured and unstructured data, and their inference capabilities allow AI agents to operate not only at the data and application layers, but also as an extension of the operating model. While humans will remain accountable, AI agents will execute large scopes of tasks that increase productivity and improve performance across the banking and capital markets value chain.
In practice, we’re seeing LLMs show up as internal knowledge assistants, document review and summarization tools, coding copilots, customer service support, and as AI agents that can orchestrate multiple steps in a workflow instead of just answering a question.
Importantly, LLMs are generally being layered onto existing systems, not replacing core risk or fraud models. Think of them as the interface and connective tissue: they help employees access the right data and model outputs faster, interpret what’s going on, and act accordingly. Traditional machine learning remains better suited for highly structured, predictive tasks. LLMs expand capabilities; they don’t substitute the underlying analytics infrastructure.
A lot of the new anxiety surrounding AI in the markets seems to be based on the rollout of various enterprise tools (Anthropic) and plugins (OpenAI), which investors are appraising as possible disruptors. Conversely, there is a strong argument that many of these AI tools will end up implemented in these industries. To what extent are AI businesses disruptors, as opposed to resources that turbocharge incumbents?
AI is certainly a disruptive force. Timing will determine whether AI turbocharges businesses or empowers disruptors to disintermediate legacy businesses. Leaders aren’t waiting — they’re disrupting themselves to avoid being disrupted by others.
In regulated industries like banking, large institutions tend to adopt and integrate new technologies rather than be rapidly displaced by them. Established firms already have customer relationships, data, and regulatory frameworks in place, giving them a strong foundation to move quickly as AI capabilities evolve. AI tools — whether developed internally or sourced from third parties — can enhance productivity, personalization and operational efficiency within those structures.
In that sense, many AI companies function as technology enablers. The competitive advantage often comes from how effectively incumbents implement AI within their existing operating models, rather than from AI alone.
Is the advent of some of these new large language models (LLMs) and AI tools better for industry incumbents (like big banks or asset managers), smaller players (like regional banks or small asset managers), or fintechs (companies that build on top of traditional rails but are largely in the software business)? Who comes out ahead if AI in its current form proves to be useful?
AI has already proven useful in many ways, and that’s sparked both excitement and angst about what will come next. All of the players in the banking ecosystem are managing the risk of disruption. Fintechs should be cautious about whether code generators bite into their business model. Smaller players are going to create synthetic scale with AI agents and try to acquire share from the majors. The big banks and asset managers are ratcheting up their investments and putting their balance sheets to work to build next-gen capabilities that broaden and deepen their moats. They all have threats and opportunities they need to manage; time will tell who wins.
Lost in this recent conversation around artificial intelligence is a consideration of the costs. Given the current level of investment, it’s hard to believe AI will be much cheaper in the future, as there are shortages (energy, compute, etc.). Sure, a lot of businesses are running on enterprise plans and have some control over costs and implementation, but at what point does an LLM strategy become untenable? Do you talk to clients worried about that at this stage?
Cost is very much part of the discussion. While token consumption is increasing, models are becoming more efficient, meaning the cost per token is coming down. However, it isn’t the most pressing obstacle at this time. More often, companies are focused on whether their AI investments are keeping pace with peers and delivering the ROI they expect. Our benchmarks suggest that only a minority of companies have fully figured this out.
We do see clients being thoughtful about model choice, usage controls, and architecture decisions that can help manage expenses. The experimentation phase is giving way to a “disciplined march to value,” where every dollar of AI spend is linked to a business case—productivity, revenue, customer experience, risk management—and tracked over time, which is how you keep an LLM and agent strategy financially sustainable.
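The tension Priest notes — token consumption rising while per-token prices fall — comes down to simple arithmetic: total spend can still climb if volume grows faster than prices drop. A back-of-the-envelope sketch makes the point; every number below is a hypothetical assumption, not a real vendor price or PwC benchmark:

```python
# Back-of-the-envelope LLM spend model. All inputs are hypothetical
# assumptions for illustration: volumes, token counts, and prices.

def monthly_llm_cost(requests_per_day, tokens_per_request, price_per_million_tokens):
    """Estimated monthly spend given usage volume and a blended per-token price."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# Suppose usage triples while the per-token price falls by half:
today = monthly_llm_cost(10_000, 2_000, price_per_million_tokens=5.00)
next_year = monthly_llm_cost(30_000, 2_000, price_per_million_tokens=2.50)

print(f"Today:     ${today:,.0f}/month")      # $3,000
print(f"Next year: ${next_year:,.0f}/month")  # $4,500 -- volume outpaced price declines
```

In this toy scenario, spend rises 50% despite the per-token price being cut in half, which is why the “disciplined march to value” framing ties every dollar of usage growth to a tracked business case rather than assuming cheaper tokens will keep budgets flat.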
Finally, I think it’s a big question whether AI will replace employees or serve as a resource for them. Especially given the tepid pace of hiring in white-collar fields, I’m wondering: Do you think AI will replace jobs, create new jobs, or supplement existing functions?
We have no doubt that, like every other disruptive technology in the past, AI will disrupt jobs, roles, and ways of working. In our experience, humans are the number one success factor in getting value from AI. So yes, new jobs will be created, some jobs will be replaced, and all functions will be supplemented with AI. Leaders will do this proactively and in a way that taps the human value proposition, even if it looks very different in the future.