An article by CMS RegZone (available online here) states that the European Securities and Markets Authority (“ESMA”) has recently released a public statement on the use of artificial intelligence (“AI”) in the provision of retail investment services, an area that also covers consumer lending and leasing. The statement provides initial guidance to entities using or looking to use AI so they can ensure compliance with their obligations under MiFID II (the Markets in Financial Instruments Directive II), the EU regulatory framework that succeeded the original MiFID, which had been in force since 2007. The statement also sets out some actual use cases, as well as some challenges that ESMA has already identified as firms attempt to implement AI.
Does this statement signal ESMA’s approach to guidance?
This isn’t ESMA’s first statement, but significantly, it is its first formal guidance on AI. It’s an initial step focused on consumer protection, transparency and fairness. If a similar approach arrives here, ASIC would likely look at issuing comparable guidance.
ESMA’s statement concentrates on the use of AI under the existing MiFID II framework. Like the Financial Conduct Authority in the UK (the equivalent of ASIC here), ESMA will approach AI using its existing regulatory toolkit, and it expects firms to do the same. ESMA also appears to apply its approach proportionately, concentrating on processes and client services.
Importantly, the statement also covers situations where staff use third-party AI technology, with or without senior management’s knowledge or approval. ESMA expects companies to control this by having appropriate measures in place. With staff working from home, that may present some serious challenges for management.
Application of MiFID II to AI systems
ESMA believes those using AI must pay increased attention to the AI-specific processes and procedures required under key areas of the MiFID II framework. In practice, this means:
- Client best interests rule and information requirements – companies must be “transparent about the role of AI in decision-making processes. Any use of AI for client interactions (whether chatbots or other AI-related automated systems) should be disclosed.” All consumer communications about where AI is used must be “presented in a clear, fair and non-misleading manner.”
- Additional organisational requirements – companies will need:
- a “meticulous approach” to sourcing data, ensuring that algorithms are trained on accurate and sufficiently broad data sets, and that there is “rigorous oversight” over the use of data;
- increased awareness that covers not only “the operational aspects of AI, but also its potential risks, ethical considerations, and regulatory implications”; and
- measures to control the accuracy of the information supplied to and used by AI systems, as well as what those systems deliver to consumers.
- Improved conduct – companies using or intending to use AI will require a higher level of diligence to ensure responsible lending and product suitability. Companies will need “rigorous quality assurance processes,” including algorithm testing and periodic stress testing.
- Increased recordkeeping – companies will need far more detailed record keeping on AI use, covering “any related complaints”, and must “detail AI deployment, including the decision-making processes, data sources used, algorithms implemented, and any modifications made over time.”
What the companies delivering products or services into the EU will need to do
The article concludes by stating that companies will need to balance “harnessing the potential of AI and safeguarding public confidence”. It states that “[c]ompanies will need to focus on delivering transparency, implementing strong risk management practices, and complying with legal requirements.”
Will something similar be introduced here?
Clearly, there will be a number of significant challenges to overcome. Will it be introduced here? With the global harmonisation occurring, I would expect so, most certainly. When it does arrive, I expect additional safeguards and requirements will be added. The ESMA statement is a first step, and ASIC will no doubt learn from how it develops.
This guidance will likely impose significant costs to bring front-end systems into compliance. For large organisations this may not be an issue, but smaller ones may find some requirements difficult to meet. There will also be additional disclosure requirements, which may cause some headaches. If you’re using AI now, or looking to use it in the future, please take note of this guidance. It may save you quite a lot of money and development effort.