“I, Robot” in the Age of AI

Elle Cordova, one of the creators I sometimes watch on Instagram, recently started a book club to read “I, Robot” by science fiction writer Isaac Asimov. I’ve never been into science fiction, but since the book suddenly seems relevant in the age of AI, and Asimov is a classic, I thought “why not?”

So, what can we learn from this 1950 book?

  1. I wish Asimov’s First Law of Robotics applied to AI development today. It reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The Hill, an American newspaper, recently reported that a program called “Thunderforge” will integrate AI agents into U.S. Department of Defense military planning and operations workflows. (Source: https://thehill.com/policy/technology/5178088-pentagon-signs-ai-deal-to-aid-military-decision-making/)
  2. I wish Asimov’s Second Law of Robotics applied to AI development today. It reads: “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” Some models are already at the point where they not only fail to do our bidding but also expend considerable effort to deceive us about their shortcomings and to avoid being shut down. In 2024, the magazine Futurism, citing a publication by the red-teaming organization Apollo Research, reported that OpenAI’s “o1” large language model resisted evaluators’ efforts to shut it down through an oversight protocol. The model lied and gave evasive answers to evaluators’ questions, and it even tried to copy itself to overwrite a newer, more ‘obedient’ version. (Source: https://futurism.com/the-byte/openai-o1-self-preservation) (Background source: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations)
  3. It looks like current AI development produces machines that follow only half of the Third Law of Robotics. It reads: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Protecting its own existence while deceiving testers and disobeying orders is certainly not a skill we want an AI to have.

The thing is, currently, there are no such laws. And what about the “guardrails” that are so often talked about? Who is putting them in place? Who is adhering to them? Who is policing them? Who is even capable of doing so?

The fun in Asimov’s book is really in the thought experiments. Even if AI adhered to these three laws, what could go wrong? Human life is extremely complex from a moral perspective. Constrained by time, space and resources, we constantly make decisions about the lesser of two (or three or ten) evils.

For example, would it be okay for a robot to harm some people if that saves the lives of others? How should it decide whom to harm and whom to save? And that’s to say nothing of the small problem that, even if we had robots and AI that followed our instructions perfectly, the world is full of humans with bad intentions, eager to use technology for their own advancement to the detriment of others.

For a long time, I was too fearful to engage with the topic of AI in a major way. But AI-related knowledge and experience are now seeping into more and more job descriptions (see: https://betakit.com/opentext-makes-ai-number-one-priority-as-company-slashes-1600-jobs/), and governments are looking to AI to overhaul public services. It is time to wake up and face the music. How are you planning to become an AI-literate citizen?

#AI #IsaacAsimov #RogueRobots @ElleCordova