You're idly clicking the TV remote, and at some point you end up on the Sci-Fi channel. A minor hit, I, Robot, is playing; it centers on a programmed machine that develops a mind of its own, which defies all sorts of logic because wires and metal and plastic can't think, nor can they reason.
Or can they?
Fast forward to a recent Nature.com article discussing how scientists are tackling the unthinkable: how to develop a robot that can learn from its behavior and make ethical decisions accordingly. Healthcare robots and military drones tie directly to this topic because, as the article stated, "researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust." A host of brain-trust-type folks are attempting to determine "what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine," according to the article. And yes, it's OK to chuckle at the reference to how much intelligence is needed to be ethical.
This isn't just some futuristic endeavor occurring in movies or a secret laboratory. Aren't advisors facing some of the same issues? Results of a 2015 Cost of Compliance survey indicate that "regulatory fatigue" is only compounding, and as we know, the flip side is increased pressure to fully implement automated compliance surveillance systems. These systems help streamline the business, minimize omissions and (hopefully) alleviate that ever-growing fatigue; however, the approach also cedes some human control and oversight. Is anyone worried that increased automation may start to mist over the ethical considerations that used to accompany each and every step of an advisor's actions? Do we take for granted that the machines are now thinking for us, and at some point, are we abdicating some kind of responsibility?