ThinkAdvisor


Must-Do Steps to Prepare for AI Compliance


What You Need to Know

  • Financial services regulators are approaching artificial intelligence as though it will have a far-reaching impact in 2024.
  • Firms would be wise to categorize AI systems into risk levels based upon their potential significance.
  • Firms should also implement policies and procedures around responsible AI use.

Federal and state governments in the United States and abroad have started laying the groundwork for the regulation of artificial intelligence. Based on the disruptive nature of the technology and its mass proliferation, individual industries are determining their own guidelines as well.

Financial services is no exception. In fact, financial services regulators are already approaching AI as though it will have a far-reaching impact in 2024 and beyond. This article examines those initial efforts and how financial services firms can best prepare for these general and industry-specific changes.

Overarching AI Regulation

The AI Safety Summit in November 2023 was a global gathering to establish AI guidelines. Twenty-eight countries attended, resulting in the Bletchley Declaration: a commitment to identify AI safety risks and build risk-based policies to mitigate “frontier” AI risks.

Nearly concurrent with that summit, the Biden administration issued an executive order on AI. Besides establishing an overall tone, the order directs various cabinet members to research and establish guidelines associated with AI.

This ranges from ordering the Treasury to establish best practices to manage AI-specific cybersecurity risks to directing Homeland Security to establish an AI safety and security board.

While agencies have the ability to tailor guidelines, much of the executive order relies on various cabinet members conducting their own investigations into deploying plans for AI risk mitigation.

Notably, this lack of specificity gives agencies latitude that could translate into roadblocks for businesses, including financial services firms, looking to leverage AI for operational efficiency and other benefits.

Outside of the United States, the European Union has passed the EU AI Act, which will have a similarly wide-ranging impact on organizations. Since the final text of the regulation hasn’t been released yet, many affected firms are in a wait-and-see mode. There likely will be a long lead time before the rule goes into effect, so firms may have sufficient time to adapt to the rule.

Based on the existing text, however, firms would be wise to prepare for a risk-based approach, categorizing AI systems into risk levels based on their potential impact. Generative AI, for example, would generally fall into a limited-risk tier, while AI systems in charge of critical infrastructure would be considered high-risk.
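As a thought experiment, a firm's internal AI inventory could encode this kind of risk-based categorization directly. The tier names below loosely mirror the EU AI Act's draft approach, but the specific systems, mappings and the conservative default are illustrative assumptions, not the regulation's definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely following the EU AI Act's draft
    risk-based approach; names are assumptions, not final legal categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical inventory mapping a firm's AI systems to tiers.
AI_SYSTEM_TIERS = {
    "marketing_copy_generator": RiskTier.LIMITED,  # generative AI
    "credit_scoring_model": RiskTier.HIGH,         # affects access to credit
    "grid_load_balancer": RiskTier.HIGH,           # critical infrastructure
    "spam_filter": RiskTier.MINIMAL,
}

def requires_enhanced_controls(system_name: str) -> bool:
    """Flag systems whose tier warrants documented, heightened oversight.
    Unknown systems default to HIGH so nothing slips through unreviewed."""
    tier = AI_SYSTEM_TIERS.get(system_name, RiskTier.HIGH)
    return tier in (RiskTier.HIGH, RiskTier.UNACCEPTABLE)
```

Defaulting unknown systems to the high-risk tier reflects the cautious posture regulators are signaling: a system must be classified before it can be treated as low-risk.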

Finserv Oversight

Financial services regulators have been proactive, having already provided guidance, and will continue to sharpen their positions on the matter.

FINRA, which is often at the forefront of technology, requested comment in 2018 on how firms were thinking about implementing AI and issued guidance as early as June 2020.

While thorough, the guidance was written before recent AI developments and thus lacks highly specific guidelines for the current environment. The regulatory organization is keeping a close eye on AI, however, as evidenced by a dedicated AI section in its 2024 Annual Regulatory Oversight Report.

Additionally, the Securities and Exchange Commission proposed a rule on predictive data analytics in 2023. Many firms provided comments on that rule, pointing out that it will be extremely difficult to comply with and doesn’t take into account current practical uses of technology.

Meanwhile, the SEC announced a sweep of how private fund advisors are using AI, likely informing rulemaking and/or enforcement activity in the near future. The securities markets aren’t alone.

The Office of the Comptroller of the Currency also identified AI as a major risk in the coming year, and the Consumer Financial Protection Bureau continues to keep a close eye on AI developments while issuing regular updates.

Industry Impact

The industry is still in the initial stages of regulatory developments, but it’s not too early for firms to start evaluating their approach to incorporating AI. Doing so in a responsible and measured manner will prepare firms for the coming deluge of AI-related regulatory developments.

Some critical steps include:

  • Establishing an internal committee responsible for the oversight and implementation of AI systems. Ensure that the committee has representatives from executive leadership, legal, business development, operations and IT, as well as compliance.
  • Drafting policies to address the responsible use of AI and crafting procedures reasonably designed to ensure compliance with those policies.
  • Training employees on the dos and don’ts of using AI, similar to what is done for cybersecurity. This training should cover both an industry-level perspective and how the organization is specifically addressing the use of AI.
  • Having the committee meet regularly and update policies and procedures as the regulatory, technology and operational landscapes around AI evolve. The environment will change frequently and drastically, so building in the expectation of change will be critical.
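The regular-review step above lends itself to simple tooling. The sketch below shows one way a compliance team might track policy review cadence; the `Policy` record and the 90-day cycle are illustrative assumptions, not a regulatory requirement.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Policy:
    """Hypothetical record tracking when an AI policy was last reviewed."""
    name: str
    last_reviewed: date
    review_cycle_days: int = 90  # assumed quarterly committee cadence

    def is_overdue(self, today: date) -> bool:
        # Overdue if more than one review cycle has passed since last review.
        return today - self.last_reviewed > timedelta(days=self.review_cycle_days)

policies = [
    Policy("Responsible AI Use", date(2024, 1, 15)),
    Policy("AI Vendor Due Diligence", date(2023, 9, 1)),
]

# Names of policies the committee should revisit at its next meeting.
overdue = [p.name for p in policies if p.is_overdue(date(2024, 3, 1))]
```

Even a lightweight record like this creates an audit trail, which matters when regulators ask how a firm's AI governance actually operates in practice.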

Some firms may be developing their own AI systems, while others may be leveraging external vendors. In either case, as AI will be a heavily scrutinized technology, financial services firms need a well-established program in place to effectively leverage it, while keeping within the boundaries of applicable regulations.

Otherwise, given regulators’ aggressive enforcement history and the potential for future actions, this could be a problematic time for early adopters with insufficient controls.



Bill Simpson is Hearsay Systems’ director of compliance.


NOT FOR REPRINT

© 2024 ALM Global, LLC, All Rights Reserved. Request academic re-use from www.copyright.com. All other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.