ACER issues REMIT 2 Guidance for PPAETs and Non-EU Firms
ACER has issued new REMIT 2 guidance, clarifying obligations for non-EU market participants and PPAETs, focusing on registration and reporting rules.
The US Commodity Futures Trading Commission (CFTC) has issued (click here) a Request for Comment (RFC) seeking industry input on the definition and applications of AI in financial markets. The RFC covers a broad spectrum of AI use, including trading, risk management, compliance, cybersecurity, recordkeeping, data processing, analytics, and customer interactions.
The request also seeks comment on the risks of AI, including risks related to market manipulation and fraud, governance, explainability, data quality, concentration, bias, privacy and confidentiality, and customer protection.
Several AI themes relevant to Compliance within the RFC include:
Many energy and commodity firms already use AI for certain aspects of their business, including trading, risk management, record keeping, and surveillance. Whether AI is built in-house (for example, an algorithm built to trade a specific asset class) or depends on a third party (for example, machine-learning techniques within a third-party communication surveillance system used to identify risk behaviours in written communications), Compliance has a role in ensuring that AI is appropriately governed and can be evidenced to regulators.
AI Governance Frameworks. In the accompanying statement to the RFC (click here), CFTC Commissioner Kristin Johnson references leading AI governance frameworks developed by the private sector, such as Salesforce (click here), that provide guidelines for the responsible development of generative AI.
Reliance on Third-Party Development of AI. Commissioner Johnson noted in her statement that firms should have the skills, expertise, and experience to develop, test, deploy, monitor, and oversee controls over the AI and ML they utilise. Specifically, she quotes a recent IOSCO report on the use of AI and ML (click here) as follows:
“Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.”
“Regulators should require firms to understand their reliance and manage their relationship with third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance.”
When defining an AI policy, there are several aspects Compliance teams may wish to consider. ISACA, a global IT governance association, recently published an article entitled ‘Key Considerations for Developing Organizational Generative AI Policies’ (click here). Specific steps they recommend following include:
Several leading AI frameworks and position papers Compliance teams may wish to review further when defining their own AI governance framework include:
More broadly, AI regulation remains nascent and asymmetrical across the USA, Europe, and Asia. Regulatory developments in each region include:
While AI continues to be an emerging topic, regulators globally expect firms to govern and document how they oversee the development and deployment of AI into their organisations. Firms currently using AI are strongly urged to review their current governance frameworks and, where appropriate, benchmark against the above frameworks alongside meeting region-specific regulatory requirements.
We provide a summary of the key questions included in the CFTC RFC below. Compliance teams are invited to review these questions and, where appropriate, benchmark the underlying concepts against their current governance frameworks. Those firms operating in the USA under CFTC jurisdiction may also wish to participate and respond to the RFC.
Below is an extract summary of CFTC questions from the AI RFC covering the following themes:
While some questions are drafted to provoke industry conversation on emerging AI that is still at an exploratory stage, many are targeted at AI that is already implemented and live.
Firms should reasonably expect that, if a regulator requests this type of information through an RFC, these topics represent areas of heightened interest. These areas should be considered and addressed within an appropriate policy framework where relevant; that is, firms should consider taking a proactive policy framework approach to this topic.
Question 2. General Uses.
a. Trading.
c. Compliance.
Compliance is broadly interpreted here and includes, but is not limited to, know-your-customer (KYC) validation, anti-money laundering, anti-fraud, trade documentation, and regulatory reporting.
d. Books and records.
CFTC-regulated entities are required to maintain a variety of records, including trade histories, audio recordings, and digital communications, in a readily producible fashion.
e. Systems development.
AI-based tools are being increasingly used by software developers to enhance productivity, particularly for manual and repetitive tasks.
f. Cybersecurity and resilience.
Question 6. AI and third-party service providers.
Question 7. Governance of AI Uses.
Question 9. Governance.
Given the unique challenges associated with identifying and managing AI risks, concerns have been raised regarding firms’ ability to manage such challenges through existing governance processes.
Question 13. Market manipulation and fraud.
Question 18. Third-party service providers.