
As a regulatory specialist, I love learning about agentic AI capabilities, risks and controls from ACA's AI technology leaders. A natural starting point is generative AI, which is becoming a powerful tool for investment advisors.

Give generative AI the right data and a clear prompt, and it can identify patterns, compare results to historical trends and explain complex ideas in plain language. It can process, analyze and summarize large volumes of data in seconds, making compliance reviews faster, strengthening risk monitoring and supporting research.

But generative AI has limits. Someone must decide when to use it, how to frame questions and how to interpret the answers that come back. It doesn't monitor systems on its own or keep working once it delivers an answer. If no one asks the next question, the process stops.

In that sense, generative AI is reactive: it's capable, but it waits for direction.

A newer class of systems, AI agents, is changing that dynamic.

Agents can be programmed to generate their own questions, decide when analysis is needed, select appropriate data and iterate on findings. In some cases, they can also recommend actions or carry out predetermined steps. Humans still define the objectives and set the boundaries, but agents can keep working within those guardrails for longer without constant input.
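To make that pattern concrete, here is a minimal Python sketch. Every name in it (Agent, next_question and so on) is invented for illustration, not drawn from any particular product: the human sets the objective, the data boundaries and a step limit, and the agent iterates within them.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent loop under human-defined guardrails.
@dataclass
class Agent:
    objective: str
    allowed_sources: set          # boundary set by humans
    max_steps: int = 5            # guardrail: no unbounded iteration
    findings: list = field(default_factory=list)

    def next_question(self):
        # Stand-in for the model deciding what to analyze next, within scope.
        remaining = self.allowed_sources - {src for src, _ in self.findings}
        return next(iter(remaining), None)

    def run(self):
        for _ in range(self.max_steps):
            source = self.next_question()
            if source is None:    # nothing left within scope; stop on its own
                break
            self.findings.append((source, f"summary of {source}"))
        return self.findings      # a human still interprets the output

agent = Agent("flag unusual trading patterns", {"trade_blotter", "exception_reports"})
print(agent.run())
```

The point of the sketch is the shape, not the details: the agent chooses its own next step, but only inside boundaries someone else wrote down.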

That's a meaningful shift. AI isn't just producing insights; it's starting to play a role in execution.

Used well, AI agents can improve efficiency across portfolio management, trading, operations, client service and compliance. But because agents can initiate analysis, and sometimes act, without human instruction, they also create risks that require additional governance.

Why Governance of AI Agents Is Different

Governance for generative AI focuses on models and data: model risk, bias, explainability, data quality, privacy and cybersecurity. All of that still matters.

But it's not enough once systems start operating continuously and across workflows.

AI agents can persist over time, interact with multiple systems and influence outcomes without someone initiating each step. Governance, then, must expand beyond technical controls to include how systems are used, who is responsible and how they're supervised.

For investment advisors, this ties directly to fiduciary duty and regulatory expectations. Regulators will ask:

— Who approved this system?

— What authority does it have?

— How is it monitored?

— Who is accountable if something goes wrong?

Answering those questions requires more than good technology; it requires clear governance.

Treat AI Agents Like Digital Employees

The easiest way to think about AI agents is not as advanced intelligence but as digital employees.

Like junior employees, agents can work quickly and follow instructions, but they lack judgment and experience. They need clear direction, supervision and limits.

Firms already know how to manage this kind of risk. They define roles, limit access, supervise work and document activity.

The same approach works for AI agents. They operate under delegated authority, follow rules they didn't create and cannot be held accountable for outcomes. When they encounter something outside their scope, they should escalate issues back to their human users.

The employee framing is also useful because it maps to controls that are already familiar to investment advisors: job descriptions, permission structures, supervision and audit trails. Firms can apply what they already know rather than build an entirely new governance model.

How to Make It Work

An effective AI governance framework, one that meets regulatory, investor and board expectations, rests on five key elements:

— Authorized use policy

— Governance and oversight

— Model testing and validation

— Cybersecurity and privacy

— Vendor oversight

Here are five recommendations focused specifically on governing agentic AI; each aligns with that broader framework.

1. Purpose and Scope: Give agents a job, not freedom.

Every agent should have a clearly defined role and responsibilities. What data is it allowed to analyze? What options can it recommend? What actions, if any, can it take?

If the mandate for an AI agent is too open-ended, the agent may drift beyond its intended use. Clear boundaries keep the agent's work focused and safe.
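One rough way to picture a mandate is as configuration. The schema below is hypothetical, not a standard, but it captures the idea: the agent's purpose, the data it may read, what it may recommend and what, if anything, it may execute.

```python
# Hypothetical "job description" for an agent, written as configuration.
# Field names and values are illustrative, not a standard schema.
AGENT_MANDATE = {
    "name": "trade-surveillance-agent",
    "purpose": "Flag trades that deviate from historical client patterns",
    "data_allowed": ["trade_blotter", "client_profiles"],  # what it may read
    "may_recommend": ["open_review_case"],                 # options it can suggest
    "may_execute": [],                                     # no autonomous actions
    "escalate_to": "compliance-desk",                      # where out-of-scope items go
}
```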

2. Identity and Permissions: There's no authority without accountability.

Agents should have defined identities and tightly controlled access. Permissions should follow a "least privilege" approach: only what's necessary to do the job.

Too much access increases risk. And at their core, permissions represent decisions about how much authority the firm is willing to delegate.
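Here is a minimal sketch of least privilege in practice, assuming a hypothetical permission registry: access is denied by default, and the agent gets only what its job requires.

```python
# Illustrative least-privilege check for an agent identity.
# Registry contents and permission strings are hypothetical.
PERMISSIONS = {
    "trade-surveillance-agent": {"read:trade_blotter", "read:client_profiles"},
}

def authorize(agent_id, action):
    """Deny by default; grant only what is explicitly listed."""
    return action in PERMISSIONS.get(agent_id, set())

assert authorize("trade-surveillance-agent", "read:trade_blotter")
assert not authorize("trade-surveillance-agent", "write:orders")  # never granted
```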

3. Human in the Loop: Accountability stays with people.

AI agents can help but can't take on fiduciary or regulatory responsibility. That stays with humans.

Firms must define where human review is required, how approvals work and when escalation is triggered.
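One simple way to picture such a gate, with illustrative names only: the agent can propose actions, but designated actions refuse to run without a named human approver.

```python
# Sketch of a human-in-the-loop approval gate. Action names are placeholders.
REQUIRES_APPROVAL = {"open_review_case", "send_client_notice"}

def execute(action, approved_by=None):
    # Designated actions cannot run without a named human approver.
    if action in REQUIRES_APPROVAL and approved_by is None:
        raise PermissionError(f"'{action}' requires a named human approver")
    print(f"executing {action} (approved by {approved_by or 'standing policy'})")

execute("open_review_case", approved_by="j.smith")  # proceeds
# execute("send_client_notice")                     # would raise PermissionError
```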

4. Transparency and Auditability: If you can't explain it, you can't defend it.

Firms must be able to trace what action an agent took, why it acted and what data it used.

Firms should stand ready to produce and review activity logs, explanations and "data lineage," a record of data sources, their movement through systems, transformations and uses. This information should be built into governance frameworks from the start.
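A sketch of what one such audit record might look like, with hypothetical field names: each agent step captures what it did, why it acted and the lineage of the data behind it.

```python
import datetime
import json

def log_agent_step(agent_id, action, rationale, lineage):
    # One record per agent action: what, why and on which data.
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "rationale": rationale,    # why the agent acted
        "data_lineage": lineage,   # sources, transformations, uses
    }
    print(json.dumps(record))      # in practice, write to tamper-evident storage

log_agent_step(
    agent_id="trade-surveillance-agent",
    action="open_review_case",
    rationale="trade size 6x the client's 90-day average",
    lineage=[{"source": "trade_blotter", "transform": "90-day rolling average"}],
)
```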

5. Continuous Oversight: Agents are never "finished."

Agent behavior can change as data, markets and business processes evolve. So, governance should not stop after deployment.

Firms should monitor for drift and periodically reassess whether an agent is still fit for purpose. Controls like pause or shutdown mechanisms are also critical.
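A toy example of that kind of control, with placeholder metrics and thresholds: a supervisor that compares the agent's recent behavior against its validated baseline and pauses it when the two diverge.

```python
class AgentSupervisor:
    """Sketch of continuous oversight: a drift check plus a pause switch."""

    def __init__(self, baseline_rate, tolerance=2.0):
        self.baseline_rate = baseline_rate  # e.g., escalation rate at validation
        self.tolerance = tolerance          # how much drift is acceptable
        self.paused = False

    def check_drift(self, recent_rate):
        # Pause the agent if behavior drifts too far from its baseline.
        if recent_rate > self.baseline_rate * self.tolerance:
            self.paused = True              # hard stop until humans reassess

supervisor = AgentSupervisor(baseline_rate=0.05)
supervisor.check_drift(recent_rate=0.12)    # agent suddenly escalating far more
print("paused:", supervisor.paused)         # -> paused: True
```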

Cost of Getting Governance Wrong

Without strong governance, firms risk "agent sprawl," too many systems running without clear ownership, and "shadow automation," tools operating outside formal controls.

These issues can affect client outcomes, portfolios and compliance. And in a regulated industry, governance gaps can lead to regulatory findings.

Fixing problems after the fact is far more expensive than getting governance right up front.

Governance as a Strategic Advantage

Good governance isn't just about managing risk; it also facilitates AI adoption.

When controls are clear, leadership can deploy AI agents with confidence. Governance builds the trust needed to move from experimentation into production.

Investment advisors are in a strong position: They already understand supervision, risk management and documentation. Applying those principles to AI allows firms to scale safely.

The payoff is significant: less operational friction, better use of data, and more time for advisors to focus on judgment, relationships and client outcomes.

An Agentic Future Is Already Here

AI agents are already starting to show up in investment management, especially in compliance, risk and operations.

The firms that succeed in using them will be the ones that take governance seriously early on, applying familiar supervisory principles to new technology.

At this point, the question isn't whether AI can act.

It's under whose authority AI agents act and with what safeguards in place.

Carlo di Florio is president of ACA Group, a provider of compliance, risk and technology solutions for financial services firms.
