Summary and key findings from ESMA’s TRV on AI in EU Securities Markets
Last week, ESMA published a TRV article on the impact of Artificial Intelligence in EU Securities Markets. The report provides a useful overview of the use of AI across key sectors and the related challenges for AI implementation across business lines.
AI can be used for a broad set of applications across both front-office and back-office operations; however, its adoption across the markets is limited by technological constraints (e.g. data availability) and novel AI risks (e.g. explainability, bias, concentration risk).
ESMA will keep monitoring AI developments across the industry and analyse related material risks to ensure these are well understood and taken into account for appropriate AI governance and regulation of EU securities markets.
Asset management
Portfolio managers use AI to enhance fundamental analysis, while quantitative funds use it to support systematic investment strategies
AI, via techniques such as natural language processing (NLP), enables the inclusion of alternative and unstructured datasets in the identification of investment opportunities
AI supports back-end processes such as risk management and compliance
The adoption of AI is limited by technological barriers such as data quality and data availability, but also by the perceived risk of AI models being "black boxes" that cannot be explained to clients
Robo-advisors (fully automated portfolio managers) are not guaranteed to improve on outcomes based on classical portfolio theory unless they are fed large amounts of personal data
Personalisation stands at odds with service costs and scalability, and consumers' trust in such products is hindered by explainability challenges
Trading
Pre-trade analysis
AI can reduce the market impact of trades and support securities pricing algorithms, for example by optimising hedging and quoting decisions and automating client responses.
AI is used in securities lending to set optimal prices and predict "hard-to-borrow" securities; some securities lenders use random forest and polynomial regression models for pricing and supervised clustering algorithms for predicting "hard-to-borrow" status
Some lenders are exploring the use of NLP for automating the negotiation process
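To give a flavour of the pricing side, the sketch below fits a polynomial regression of lending fees against utilisation of the lendable supply. The data, functional form, and feature choice are invented for illustration; they are not taken from the report or from any lender's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy historical data: the lending fee (in basis points) rises sharply
# as utilisation of the lendable supply approaches 100%.
# All numbers here are synthetic, purely for illustration.
utilisation = rng.uniform(0.0, 1.0, 200)
fee_bps = 20 + 300 * utilisation**3 + rng.normal(0, 5, 200)

# Fit a cubic polynomial regression of fee on utilisation.
coeffs = np.polyfit(utilisation, fee_bps, deg=3)
model = np.poly1d(coeffs)

# Quote a fee for a security whose supply is 90% utilised.
quote = model(0.9)
```

A real pricing engine would of course use many more features (borrow demand, recall history, corporate-action calendar) and, per the report, models such as random forests alongside polynomial regression.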
Trade execution
ML models are used by brokers and large buy-side investors to split and execute metaorders optimally across different venues and times to minimise market impact and transaction costs
Reinforcement learning is used to determine optimal size and execution time of child orders
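Why splitting a metaorder helps can be sketched with a simple power-law temporary-impact model. The cost function and constants below are assumptions of this example, not a model from the report: brokers' actual impact models are proprietary.

```python
# Stylised temporary-impact model: executing q shares in one slice
# costs k * q**1.5 (a common power-law assumption in the academic
# market-impact literature). The constant k is arbitrary here.
def impact_cost(child_sizes, k=1e-4):
    return sum(k * q**1.5 for q in child_sizes)

metaorder = 1_000_000  # parent order of 1m shares

# Execute in one go vs. split evenly into 10 child orders.
single = impact_cost([metaorder])
split = impact_cost([metaorder / 10] * 10)
```

Under this model, splitting into n equal child orders cuts the impact cost by a factor of sqrt(n); the optimisation problem the RL agent faces is how to size and time those children when impact, volatility, and venue liquidity vary through the day.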
Efforts to pool execution data across firms are ongoing, but remain subject to privacy concerns
Some asset managers use techniques like principal component analysis or synthetic data to transform the data before sharing, although such transformations may reduce explainability
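A minimal sketch of the PCA idea follows: the raw fields are replaced by their projections onto the top principal components, which preserves most of the statistical structure without exposing the original values. The features and numbers are synthetic stand-ins, invented for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy order-flow features (e.g. size, duration, spread) for 500 trades;
# purely synthetic stand-ins for data a firm would hesitate to share.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.3, 0.0],
                                          [0.0, 1.0, 0.5],
                                          [0.0, 0.0, 0.2]])

# PCA: centre the data, take eigenvectors of the covariance matrix,
# and keep the top 2 components.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
top2 = eigvecs[:, ::-1][:, :2]   # eigh returns ascending order; reverse
Z = Xc @ top2                    # transformed representation to share

# Share of total variance retained by the two kept components.
explained = eigvals[::-1][:2].sum() / eigvals.sum()
```

The explainability trade-off noted above is visible here: the shared columns of Z are linear mixtures of the original fields, so downstream model behaviour can no longer be attributed to interpretable inputs like trade size or spread.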
Post-trade processing
ML is used in post-trade processing (reporting, clearing, and settlement) to predict the probability of a trade not being settled, so as to optimally allocate resources and increase settlement efficiency
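A minimal sketch of such a settlement-fail predictor follows, using logistic regression trained by gradient descent. The features, coefficients, and data are all assumptions of this example; the report does not specify which models firms use for this task.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trade features: [late-matching flag, counterparty fail rate].
n = 1000
X = np.column_stack([rng.integers(0, 2, n), rng.uniform(0, 0.3, n)])
# In this invented data, fails are more likely for late-matched trades
# with fail-prone counterparties.
logit = -2.0 + 1.5 * X[:, 0] + 8.0 * X[:, 1]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by plain gradient descent.
Xb = np.column_stack([np.ones(n), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

# Predicted fail probability for a late-matched trade with a counterparty
# that historically fails 20% of the time; high scores would be routed
# to the settlement desk for pre-emptive attention.
p_fail = 1 / (1 + np.exp(-(w[0] + w[1] * 1 + w[2] * 0.2)))
```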
Adoption of AI in post-trade processing is still limited; it is not yet widely used by central securities depositories and central clearing counterparties
Most central securities depositories are still operating on legacy technology but plan to expand their use of AI in the near future.
Data reporting service providers and trade repositories have either deployed or started to develop AI solutions (based on ML models or NLP) for anomaly detection, data verification, data quality checks, and automated data extraction from unstructured documents.
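The simplest flavour of such an anomaly-detection check can be sketched as a z-score screen on reported values. Real providers use far richer ML and NLP pipelines; the figures below are invented.

```python
from statistics import mean, stdev

# Toy reported trade notionals with one fat-finger outlier (9.9m among
# trades clustered around 1.0m). All values are synthetic.
notionals = [1.02e6, 0.98e6, 1.05e6, 0.99e6, 1.01e6, 9.9e6, 1.00e6, 1.03e6]

# Flag reports more than two standard deviations from the mean
# for manual review or automated rejection.
mu, sigma = mean(notionals), stdev(notionals)
anomalies = [x for x in notionals if abs(x - mu) / sigma > 2]
```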
Credit rating agencies
Credit rating agencies (CRAs) use a variety of tools such as NLP, clustering techniques, Bayesian statistics, natural language generation, deep learning, text extraction tools, and boosting algorithms
CRAs are not currently using AI to automate the full credit rating assessment process, which relies on a combination of quantitative tools and expert judgment
CRAs expect the role of AI in the credit rating industry to grow in the next few years due to efficiency and precision gains
However, there are challenges to rolling out AI extensively, such as regulatory uncertainty and the need for large investments in infrastructure and expertise.
Proxy advisory firms
Some proxy advisory firms use AI for information gathering, synthesising, and processing
The demand for ESG-related analysis is driving the development of AI tools (e.g. AI is used for web-scraping publicly available documents and NLP to generate ESG assessments)
AI does not directly or autonomously provide voting recommendations to clients
NLP-based tools are being developed to facilitate institutional investors' voting decisions
The development of such tools may benefit the process of shareholder activism by encouraging informed participation and reducing “robo-voting”
Key AI risks:
Issues associated with AI are to a certain extent similar to those of traditional finance models. However, given the speed and scale at which AI systems operate, most regulators are still in the early stages of designing AI-specific governance principles and guidance for financial firms that use AI. The key novel risks that ESMA observes fall into the following areas:
Concentration risk
The report predicts that the cost of developing AI systems will act as a barrier to entry: only a small group of large asset managers will have the resources to invest in technology, data, infrastructure, and talent, pushing other firms to outsource. The dominance of a few providers also creates concentration and interconnectedness risks, as seen in the broader digital financial services sector, and overreliance on third-party service providers can result in commercial capture and dependency risk.
The concentration of AI tools among a few significant providers has the potential to cause systemic risk, particularly in the context of algorithmic trading. It could lead to herding behaviour, convergence of investment strategies, and uncontrolled chain reactions such as "flash crashes", exacerbating market volatility during shocks.
Bias risk
The use of AI in financial decision-making can result in algorithmic bias: an algorithm producing unfair outcomes that differ from its intended purpose, caused either by the design of the algorithm or by the collection and use of data. The risk of algorithmic bias is lower in AI models used in asset management and securities markets than in those used in banking and insurance. However, certain forms of bias can still affect the results of asset allocation models, leading to suboptimal outcomes or potentially threatening market integrity. For instance, an AI algorithm may favour stocks of companies with certain characteristics, such as the ethnicity or gender of the CEO, even if such characteristics are no longer positively correlated with performance.
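A toy illustration of how such a bias can arise from historical data: in the synthetic setting below, returns are driven only by an unobserved "quality" factor, but a CEO-characteristic proxy happened to correlate with quality in the sample, so a naive model trained on the observable proxy learns to favour it. All names and numbers are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: 500 firm-level observations. Returns depend only
# on an unobserved quality factor; the observable CEO-trait proxy is
# correlated with quality in-sample but has no causal effect on returns.
n = 500
quality = rng.normal(size=n)
ceo_trait = (quality + rng.normal(scale=1.0, size=n) > 0).astype(float)
returns = 0.05 * quality + rng.normal(scale=0.02, size=n)

# A naive model regresses returns on the observable trait alone and
# "learns" a positive weight on it (classic omitted-variable bias).
X = np.column_stack([np.ones(n), ceo_trait])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
trait_weight = beta[1]
```

Deployed out of sample, once the incidental correlation breaks down, such a model keeps tilting allocations on an irrelevant characteristic, which is exactly the suboptimal and integrity-threatening behaviour described above.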
Operational risk
The quality of the data used to train AI and ML applications is a major concern, as poor-quality data can lead to unreliable results. More generally, systematic adoption of AI amplifies existing operational risks stemming from poor internal control processes or incidents related to external events, such as cybersecurity attacks.
Conclusion:
European Union regulators are keeping a close eye on the advancement of AI in the financial services sector. As companies await further guidance on standards for AI usage, such as the EU AI Act, they can take a proactive approach and build consumer confidence by investing in ethical AI practices and risk management.