I, Creative
In December of 1950, Isaac Asimov published “I, Robot,” a short story collection that was also one of the first in-depth fictional considerations of artificial intelligence. It wasn’t the first piece of popular fiction to feature “robots” as we think of them, at least in rough outline; that honor probably belongs to L. Frank Baum’s character Tik-Tok (lol), who first appeared in his 1907 book “Ozma of Oz.” And the term “robot” itself wasn’t coined until 1920, when Czech playwright Karel Čapek used it in his play “R.U.R.,” or “Rossum’s Universal Robots.” (Extended reading: The story of how Čapek came up with the word is cool in and of itself.)
Asimov didn’t create the word, and he didn’t invent the concept of an automated mechanical being. But he did write a series of interconnected short stories as a fictional near-future history of the development and advancement of robotics. The main theme of that set of stories is not the automated metal beings themselves, or their astonishing capabilities, but rather what it meant for them to border on sentience, and to contend with rules that challenged their role as semi-sentient beings capable of learning, evolving, and feeling. One of these stories is “Runaround,” which marks the first appearance of Asimov’s famed Three Laws of Robotics, though you could more accurately think of them as the Three Laws of Artificial Intelligence. These laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Pretty simple, right? Straightforward. Robots are deferential and subordinate beings, and this hierarchy dispenses with any need for deep consideration of moral philosophy, because robots will only do what we tell them to do! Theoretically!
However, this is not how the robots are used in Asimov’s stories. If they were only executing strict, special-purpose machine code, the Laws of Robotics wouldn’t be necessary. It would also be unrealistic, because it is not how humans would use artificially intelligent robots if given the chance. Much more true to life, the robots in Asimov’s fictional universe are expected to learn, adapt, intuit, make decisions independently, and improvise. They are not built only to serve; they are built to help mankind chase dreams.
Each story in “I, Robot” is about humans and robots working together. More specifically, the conflict in each story derives from robots trying very hard to follow the Three Laws, and from humans discovering that strict, black-and-white application of those Laws is impossible in real-world practice. The result is trouble for the robots: they develop anxiety, operational glitches and tics, and in some cases, in service of the Three Laws, even special capabilities, like reading minds or the ability to lie.
All of those aberrant behaviors are born of earnest attempts to follow the rules, but because human life (real life) is messy and complicated, the rules often don’t work when non-sentient beings try to follow them while participating in human life.
Take “Runaround,” the story in which the Laws are introduced, for example. In it, human workers on the planet Mercury in the year 2015 discover they need selenium to replenish their life support systems. They send Speedy, their most advanced robot, to go get some from a nearby deposit.
Speedy goes out to retrieve the selenium and doesn’t return for five hours. The human workers mount less sophisticated robots and head out onto the planet’s surface looking for Speedy, and they find him running in circles around the pool of selenium. They recover Speedy (and the selenium), and after some research and deep thinking, the human workers figure out the reason for Speedy’s “malfunction”: he was trying to follow the Three Laws.
Because Speedy is more complex and capable (and as such, much more expensive) than his robotic counterparts on Mercury, his weighting of the Third Law (“A robot must protect its own existence…”) was strengthened by his manufacturer, U.S. Robots and Mechanical Men. When he discovers a hazard around the selenium pool severe enough to damage him, his heightened need to keep himself from harm collides with his need to follow orders, and the conflict breaks Speedy’s decision logic: the pull of his orders and the push of self-preservation balance each other at a fixed distance from the pool. As a result, he puts himself in an infinite indecision loop, as well as an infinite physical loop around the selenium pool.
And So…
The reason I bring this up is to frame a discussion of AI and how it will affect work in general, and creative work specifically. You have no doubt seen many stories over the last several months about all kinds of AI applications hitting the mainstream: AI-generated images, deepfake audio of US presidents playing video games together, and AI-powered chatbots trying to break up the marriages of tech reporters.
None of these applications actually represents artificial intelligence. These are large learning models trained on huge data sets and then given the capability to perform certain tasks, like drawing images or writing emails or creating some other kind of output. Sometimes this stuff is startling and impressive. Sometimes it is silly. And sometimes it reveals the true limitation of imitative, as opposed to intuitive, generation.
Midjourney is not a real artist, but rather a powerful computerized generative engine that produces composite, derivative graphic output by copying and incorporating (in the truest sense of the word) the work of human artists. Some of these referential, deferential images are impressive and cool-looking, but they all lack the spark of human ingenuity and sentient invention.
Likewise, ChatGPT can produce boilerplate assimilative reproductions of fit-to-purpose text, but it cannot generate something truly original.
It can reference and recreate the electric sheep it has been shown, the ones it has seen and indexed, but it cannot dream of them.
So why does this matter? People all over the world are (successfully!) using learning models to generate graphics, write business emails, steal money, and cheat on their homework. I have received more than a few cold-call emails pitching me on replacing my creative staff with a learning model service, messages (no doubt written by a chatbot) that claim in breathless terms that this or that AI tool can do everything a human can do.
Now, to be sure, these robots can do many impressive things. Some of these abilities are even terrifying, especially when it comes to things like disinformation and cyberwar. In the latter case, a well-built learning model can assimilate and synthesize vast amounts of information and then carry that information into a cyberattack, adapting in real time to circumvent security and damage systems.
It can do that one thing very well because we have given it the tools and parameters to do it. Consider a smart missile: it is sophisticated, it can target extremely precise coordinates, and, upon detonation, it can set a very intentional, hot-burning fire in a very specific place.
But that fire cannot cook a meal. The application of skills, power, and capabilities rarely, if ever, occurs in a vacuum, and specific applications are easy compared to intuitive invention and decision-making.
Adapting within an environment populated only with received information is not the same as deciding; high functionality is not the same as sentience.
In an ethics-free environment, deferential, referential generation of text or images may save some time here and there, but it is not creation, and it bears no stamp of instinct, experience, nuance, or subtlety. And if we do consider the ethics of large learning models essentially stealing and repurposing the work of human artists, the whole enterprise as it is currently being discussed and pitched sours pretty quickly. If we’re being honest, even a basic consideration of these ethical issues reveals more about how capital and corporate leaders regard creatives and their work than it does about technology in general or the learning models themselves.
These robots may yet prove to be valuable (or even indispensable) tools along the path of human enterprise, but these particular robots, the large learning models, are just that: tools. A hammer is useful for driving nails, and is in fact more effective and efficient at it than perhaps any other tool, but it does not replace the carpenter, let alone the architect.
When things are new, we love them. It is perhaps one of the most defining characteristics of the Modern Business Human to embrace a New Thing and hype it up as the solution to many more problems than the one it actually solves. I offer this not as an exhortation to never use any of these learning model tools, or to say they have no place. Rather, I write this to encourage the perspective that learning models have their place, and that place is not — and will never be, nor should it reasonably or ethically be considered to be — to serve as a replacement for human intuition, experience, sentience, or creativity.
All images (except the illustration of Tik-Tok) created using the Adobe Firefly Beta.