ThinkAdvisor


i-Ethics: Are “Automated Ethics” coming soon to a firm near you?


You're idly clicking the TV remote when, at some point, you land on the Sci-Fi channel. A minor hit, I, Robot, is playing: the story of a programmed machine that develops a mind of its own, which defies all sorts of logic, because wires and metal and plastic can't think, nor can they reason.

Or can they?

Fast forward to a recent Nature.com article that discusses how scientists are tackling the unthinkable: how to develop a robot that can learn from its behavior and make ethical decisions accordingly. Healthcare robots and military drones tie directly to this topic because "researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust," the article stated. A host of brain-trust-type folks are attempting to determine "what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine," according to the article. And yes, it's OK to chuckle at the notion of measuring how much intelligence is needed to be ethical.

This isn't just some futuristic endeavor occurring in movies or a secret laboratory. Aren't advisors facing some of the same issues? Results of a 2015 Cost of Compliance survey indicate that "regulatory fatigue" keeps compounding, and, as we know, the flip side is increased pressure to fully implement automated compliance surveillance systems. That helps streamline the business, minimize omissions and (hopefully) alleviate the ever-growing fatigue; however, it also cedes some human control and oversight. Is anyone worried that increased automation may mist over the ethical considerations that used to accompany each and every step of an advisor's actions? If we take for granted that the machines are now thinking for us, at what point are we abdicating some measure of responsibility?

We have not yet entered a phase where automation incorporates learned-behavior ethics, though that day is probably not too far away. Every week there seems to be a bit more news about self-driving cars and the question of who bears responsibility when they are involved in a crash. As the article asks: "When is it right to hand our decisions over to machines? And when is automated ethics a step too far?" What is fascinating is that this is framed as a "when," not an "if."

But is an element of automated ethics creeping up on advisors as well? There is no doubt that machines improve our lives, but concerns that we are losing some of our core behaviors in the process are valid. You comply, you automate, but advisors must retain an ethical thread throughout. Building the perfect system may offer the best of all worlds and improve the firm's productivity and output. But watch for the challenge the robotics experts are salivating to conquer next: "Going forward, we will have to try to program things that come more naturally to humans, but not to machines," the same article stated.

Something like…ethics? 



© 2024 ALM Global, LLC, All Rights Reserved. Request academic re-use from www.copyright.com. All other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.