Perspectives

Can Wall Street influence how AI develops?

Setting standards that determine how AI can be used requires a scale that even the biggest banks can’t muster.

Artificial intelligence, particularly generative AI, continues to promise vast productivity gains for many industries, including banking and insurance.

AI also poses many challenges. One is its tendency to hallucinate. Another is the potential for abuse. This can stem from unconscious biases in training data sets, which result in discriminatory outcomes for people of color. It can also reflect how genAI systems are programmed, as evidenced by the recent kerfuffle over “woke” images of popes and other historical figures who appear as anything but white males.

In the most extreme cases, asset managers could turn to AI for research or even to trade portfolios. Hallucinations could ruin a firm, as could having to explain to a regulator why a bot caused a flash crash.

AI is unlikely to be unleashed in such dramatic fashion, but it can be put to work in more subtle ways. In fact, it already is.

Banks, insurers and fintechs already use AI tools to score credit or underwrite policies. The industry risks being unable to explain to a disgruntled customer why, for example, they were denied a loan.

The more mundane issue is deciding where AI may be applied. For example, software can parse someone’s social-media output to judge their mental state, and that judgment could be used to price a financial product. This raises lots of questions.

Should firms be allowed to consider such data? If not, what substitutes will they explore to get a view of a potential customer? What constitutes privacy, and how is it enforced?

Regulate, please

The natural answer to such questions is to bring in the regulators. It’s best to develop a neutral set of rules to restrain a firm’s worst impulses. It’s also easier to let the regulators do the heavy lifting – and keep the freedom to shrug if they don’t.

Regulation is required, but is it enough? Maybe, but only if the finance industry is content to leave innovation to Big Tech and the new breed of AI startups.

When it comes to AI, the reality is that regulators will never be able to keep pace. That’s not a bad thing: we expect innovation to come from the private sector. But the nature of AI makes regulation difficult.

First, few people working at regulatory agencies have deep expertise in machine learning and other AI tools, let alone genAI.

Second, keeping up in this world requires commanding huge arrays of GPUs – graphics processing units, the backbone chips that power AI applications – and the data-center hardware that comprises the cloud.

The AI industry includes startups like OpenAI, Big Tech players such as Microsoft and Meta, chip specialists like Nvidia, and cloud providers like AWS. These companies command uniquely vast resources, which they use to hoover up the best talent – and to buy the computing power that runs AI systems.

Neither regulators nor enterprises can set the agenda so long as this remains the case.

Purchasing power

Regulatory bodies can try to set rules – and they should, because they can shape basic norms – but they will struggle with the nuances of preventing banks and others from abusing AI systems.

There are alternatives, though. One is to look back at how governments helped support their innovation economies in their early days. For example, Silicon Valley owes much of its success to the massive purchasing programs of NASA and the US military in the 1950s and 1960s.



Similarly, only governments can wade into the market for AI infrastructure and purchase GPUs for research programs of their own at a scale that matches Big Tech’s. This is one way to set standards: through participation and leadership, rather than endlessly trying to keep up by writing more rules.

What of financial services? So far there is no sign that governments are prepared to play this role, which leaves industries such as finance at the mercy of Big Tech.

The lesson is similar: Wall Street needs to become such an important customer of Big Tech that it can set standards for how AI is developed and used.

The problem is size. Not even a J.P. Morgan has the heft to match a Microsoft in this arena. It could never justify the cost.

Open source AI

But what about the industry as a group? Is there a way for Big Finance – in league with the leading fintechs around the world – to pool resources and become a strategic customer?

Banks are not used to playing together. Such an approach would be completely alien.

On the other hand, banks are slowly warming to open source as a way to develop software. They recognize that sharing code for many non-core functions – being community players instead of proprietary owners – can produce higher-quality, more resilient software.

Does open source work for genAI?

The answer is unclear. Some Big Techs in this space have been open with their development, such as Meta, which lets AI startups download and adapt some of its models.

Industry standards for open source require that all use cases be allowed, but few genAI startups actually meet that criterion. Most, including the absurdly named OpenAI, operate a closed shop.

That’s because genAI is not like other categories of software. The source code is only one component. Just as important is the training data and how that data is categorized. Today there is no consensus within the AI industry as to what “open source” even means.

Here’s the opening for financial institutions. Banks, exchanges, and data vendors collectively own a critical mass of data, much of it specific to capital markets and financial services. In theory, if there were a mechanism to aggregate this information, it could form the basis for co-developing code and the standards that go with it.

Vendors would resist any move that threatens their business; banks and insurers aren’t keen to collaborate on anything that might be deemed core. On the other hand, there could be areas within financial services that, for most players, are not core, and in which an industry solution would be desirable. Digital identity, compliance, reporting, and aspects of risk management all come to mind.

DigFin knows this is a very speculative notion, one that may never justify the enormous effort that would be required to make it happen. On the other hand, how important is it for the financial industry to shape its future instead of passively waiting for Silicon Valley to do so in its place?

This is perhaps where we return to the idea of government as a big customer of AI. For government to act in this capacity, it needs programs of its own. Regulating financial services in the age of AI seems like a good place to begin.
