The computer scientist Dr Ben Shneiderman argues against fully automated AI, suggesting it could absolve humans of ethical responsibility:
"We're replacing humans in certain places with systems that are robotic and artificially intelligent. And the designers need to make ethical decisions about what they imbue the software and the robots with. It's becoming a big deal for society."
He goes on to cite examples where fully automated AI is undesirable, such as the Boeing 737 MAX's MCAS flight control system, nuclear reactors, and lethal military robots and drones.
However, he seems to be generalising too much for my liking.
I believe there are indeed situations where human input is needed. Until AI can cope with nuance and context, human input will remain necessary for many tasks. But there are plenty of situations where a robot can act autonomously, such as on an assembly line or when vacuuming a floor. As AI gets cleverer, the number of tasks suitable for full automation will increase.
The particular caution I'd urge, therefore, is that AI should not be given complete control over tasks too soon; that, I'd suggest, is what went wrong with the MCAS system.
We must also remember that humans themselves are often bad at making decisions, as daily news bulletins amply demonstrate.
I tend to agree with the counter-argument presented by Missy Cummings, director of Duke University’s Humans and Autonomy Laboratory:
"The degree of collaboration should be driven by the amount of uncertainty in the system and the criticality of outcomes. Nuclear reactors are highly automated for a reason: humans often do not have fast enough reaction times to push the rods in if the reactor goes critical."