The US Securities and Exchange Commission (SEC) has raised concerns about the impact of generative AI on financial markets. In a speech to the National Press Club, SEC Chair Gary Gensler warned that recent advances in generative AI could lead institutions to rely on the same narrow set of information when making decisions. Because many firms would depend on a handful of tech platforms for data and computing power, they could all ingest the same flawed data and make the same bad decisions, echoing the dynamics that fed the 2008 financial crisis.
Gensler emphasized the potential risks of generative AI and other deep-learning models, arguing that these technologies could deepen the interconnectedness of the global financial system. He warned that AI may encourage herding behavior, in which individual actors make similar decisions because they are reacting to the same signals from a base model or data aggregator. That concentration of identical bets could create financial fragility and raise the odds of a crisis on the scale of the 2008 meltdown.
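To make the herding concern concrete, here is a small, purely illustrative simulation; it is not drawn from Gensler's speech or any SEC analysis, and every trader count, noise level, and threshold in it is a hypothetical choice. It compares traders who all act on one shared model's signal with traders who each form their own independent estimate, and counts how often a large majority lands on the same side of the market.

```python
# Toy sketch of the herding argument (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n_traders, n_days = 500, 250
true_value = rng.normal(0, 1, n_days)  # hypothetical daily fundamental

# Scenario A: every trader acts on the SAME model's noisy estimate.
shared_signal = true_value + rng.normal(0, 1, n_days)
decisions_shared = np.sign(np.tile(shared_signal, (n_traders, 1)))

# Scenario B: each trader forms an independent estimate (independent noise).
independent_signals = true_value + rng.normal(0, 1, (n_traders, n_days))
decisions_independent = np.sign(independent_signals)

def herding_rate(decisions):
    """Fraction of days on which at least 90% of traders take the same side."""
    lopsidedness = np.abs(decisions.mean(axis=0))  # 0 = evenly split, 1 = unanimous
    return (lopsidedness >= 0.8).mean()            # 0.8 corresponds to a 90/10 split

print("herding rate, shared model:      ", herding_rate(decisions_shared))
print("herding rate, independent models:", herding_rate(decisions_independent))
```

In the shared-signal scenario every trader moves in lockstep, so nearly every day is a one-sided day; with independent estimates the errors diversify and lopsided days are far rarer. That lockstep concentration of bets is the fragility Gensler is pointing to.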
It is worth noting that the financial sector has already been utilizing AI systems for various purposes. Insurance companies and creditors, for example, use algorithms and natural language processing to analyze financial data and make decisions about loan amounts. Trading firms rely on AI to detect fraud and identify market signals rapidly. However, Gensler specifically focused on large language models (LLMs) and referred to generative AI as the “most transformative technology of our time.” He acknowledged that generative AI is not yet widely used in finance but expressed concerns about its future impact.
This is not the first time Gensler has highlighted the risks associated with AI in finance. While still at MIT, Gensler co-authored a paper that examined the challenges of regulating AI in the financial industry. He recognized that current regulatory structures may not adequately address the issues arising from the use of AI in finance.
The SEC has been actively engaged with AI regulation for some time. In 2018 it established FinHub, a resource center dedicated to questions about AI, cryptocurrency, and other fintech issues. The agency has also pursued legal cases against companies in emerging technology sectors, such as its recent lawsuit against Coinbase in the crypto space.
Interestingly, the SEC itself uses machine learning to support market surveillance and enforcement. Still, Gensler believes that existing risk management guidelines need to be updated to account for new and powerful technologies, and he suggests that a comprehensive reevaluation of how AI is used in the financial industry may be necessary.
In summary, the SEC’s concerns about generative AI center on the risk of financial institutions relying on a limited number of tech platforms for data and decision-making. Gensler warns that generative AI and other deep-learning models could amplify the interconnectedness of the global financial system. While the financial sector already uses AI, his remarks suggest that regulatory frameworks and risk management guidelines need updating to address the challenges posed by emerging technologies. The SEC’s active role in AI regulation, coupled with its own use of machine learning, underscores the importance of ensuring the responsible and ethical use of AI in the financial industry.