We are all used to the idea that humanity shapes technology. After all, we humans are the designers, right? Wait. Maybe we are being a bit arrogant here. The French philosopher Émile-Auguste Chartier, known as Alain, wrote this about fishing boats in Brittany:
Every boat is copied from another boat. … Let’s reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. … One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others.
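Alain's argument is, at bottom, a selection algorithm: copy with small errors, and let the environment cull the failures. As a purely illustrative sketch (the "quality" parameter and survival model here are invented for this example, not drawn from the book), it can be simulated in a few lines of Python:

```python
import random

random.seed(1)

def voyage_survives(quality):
    # A badly made boat ends up at the bottom: survival chance
    # rises with design quality (a toy model, for illustration only).
    return random.random() < quality

# Start with mediocre boats. Each generation, builders copy only the
# boats that came back, introducing small random variations.
boats = [0.5] * 20
for generation in range(50):
    survivors = [q for q in boats if voyage_survives(q)]
    if not survivors:
        break  # the whole fleet was lost
    boats = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
             for _ in range(20)]

# No builder ever decided to "make better boats"; the sea did the
# choosing, yet average quality climbs well above the starting 0.5.
print(round(sum(boats) / len(boats), 2))
```

The builders in this sketch are only a source of variation; the selection pressure lives entirely in `voyage_survives`, which is Alain's point about the sea fashioning the boats.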
In this view, boat designers are more agents of mutation than designers, and sometimes their mutations result in a “badly made boat.” Could it be that Facebook has been fashioned more by teenagers than by software engineers?
My book takes the position that digital technology coevolves with humans. Facebook changes its users and its designers who then change Facebook. The thinking of software engineers is shaped by the tools they use, themselves earlier outcomes of software engineering. And the success of each mutation depends less on its technical excellence than on its ability to “go viral.” The techno-cultural context has more effect on the outcome than all of the deliberate decisions of the software engineers. And this context evolves.
All of this implies that we humans are less in control of the trajectory of technology than we tend to think. My book tries to help us understand this trajectory as a Darwinian coevolution. To do that, I had to take a deep dive into how evolution works, how humans are different from computers, and how technology today resembles the emergence of a new life form on our planet.
This latter idea, to view digital technology as a new life form, is likely to be the most controversial idea in the book. Computers are made of silicon and wires, not meat and leaves. Sure, the mechanisms and the chemistry are different, but what we need to focus on is not how they are made but how they work.
Life is a process, not a thing. In the words of Daniel Dennett, “It ain’t the meat, it’s the motion.” The digital processes that surround us, like living creatures, respond to stimuli from their environment. They grow. Think about how Wikipedia started on one server in 2001 and has grown to run on hundreds of servers scattered around the planet. The machines, and most especially the software, even reproduce (mostly with our help, for now). They also inherit traits from their forebears (“Every boat is copied from another boat.”).
Don’t get me wrong. To consider the machines to be “living” is not to assign them rights or agency. It is just understanding that they have a certain autonomy and an ability to sustain their own processes. Some are capable of behaviors that we can call “intelligent,” but most are not.
Even if we view them as “living,” in some sense, we have to recognize that they are not biological beings, and they differ from us in important ways. Digital machines, defined by software, can be copied perfectly and “travel” at the speed of light. No biological being can do that. Also, no AI software has a body like ours. To the extent that our own cognitive selves depend on our embodiment, the AIs will never be like us. But the machines are acquiring bodies. Consider a self-driving car. Will it ever reach the point that we must hold it accountable for its actions?
I am an engineer. I design and build things that never before existed (mostly software, these days). For most of my life, I thought these things were my personal creations, the progeny of my brain. I have since come to realize that this illusion is a bit like being proud of the bag of groceries I just brought home from the grocery store, as if that were my own personal accomplishment. As the coronavirus calamity has so dramatically underscored, my role in that accomplishment is minuscule. The intricate (and fragile) supply chain that delivers an amazing variety of goods from global agriculture, food processing, paper product manufacturing, and chemical processing; the transportation system built on centuries of development of ships, railroads, roadways, and trucks; and the socio-economic system that organizes the thousands of workers down to the kind soul that bags my groceries, all make my meager accomplishment pale.
The same is true of the software artifacts I “create.” The laptop computer that I work with; the programming languages, integrated development environments, and compilers that I use; the wealth of help that is readily available to me through Wikipedia, Google search, and Stack Overflow, all make those few lines of code that I write a paltry permutation in a massive interconnected ecosystem. Even so, those few lines of code must be the result of my own cognitive creative processes, right?
In my book, I use the term “digital creationism” for the idea that technology is the outcome of a top-down intelligent design process, where every aspect comes from a conscious, deliberate decision made by a human creator. I offer an alternative view, where humans are the sources of mutation in a Darwinian evolution, where the technologies that procreate most effectively are the ones that survive, and where the technologies that survive shape the humans who further mutate the technology. Those who fear that we will lose control of artificial intelligence will not be reassured by the possibility that we are coevolving and therefore never really had control.
The question of whether machines can—or even should—be considered as living beings unleashes a torrent of other difficult questions. Are digital artifacts capable of living and reproducing on their own, without the help of humans? What are their mechanisms for reproduction, heredity, and mutation? Will they match or exceed human intelligence? Are they capable of self-awareness or even free will? Are they capable of ethical action? These are all hard questions. Most of them can equally well be asked about humans, as philosophers have been doing for millennia.
My book does not offer easy answers. I hope, however, that readers will come away with a better understanding of the questions. Perhaps too, wrestling with these questions will lead us to a better understanding of our human tangle with technology.
Many books you buy these days tell a story that could have been presented in three pages, but since nobody buys a three-page book, it had to be expanded to 200. Not this one. There are many angles, and I expect different readers will resonate with some and not with others.
If you are worried about how technology affects humans, and about how, in the coronavirus era, we are each becoming a digital persona, you may want to start with chapter 13 (Pathologies). Science fiction dystopias routinely portray humans who have succumbed to a War of the Worlds, a takeover by machines. I present a different view, one that is no less scary, of a more gradual coevolution, where the humans change along with the machines. In this view, undesirable outcomes need to be treated as illnesses, not invasions. The coronavirus is not an invasion, and our struggle against it is not a war. It is a scientific, medical, and cultural challenge. Our evolution at the hands of technology is similarly transformative.
If you are hoping for “the singularity” to enable you to upload your soul to a computer and become immortal, then please skip chapters 8 (Am I Digital?) and 9 (Intelligences). These chapters will pop your balloon.
If you are the sort of person who loves an argument, and you want to disagree vehemently with my arguments, then please read chapters 2 and 7. They disagree with each other, so you’re sure to find plenty of ammunition here. Chapter 2 (The Meaning of “Life”) finds ways in which digital technologies resemble living things. Chapter 7 says that they will never resemble us because they are made of the wrong stuff. The former borrows heavily from biology, while the latter borrows from psychology.
If you like a serious intellectual challenge, try chapters 11 (Causes) and 12 (Interaction). These two chapters take a deep dive (too deep, probably, for this sort of book) into the fundamental question of what it means to be a first-person self. My goal is to try to understand whether digital machines can ever achieve that individual reflective identity that we humans all have. These chapters offer some weighty arguments that if the machines ever do achieve this, we can never know for sure that they have done so. Even if the machines fall short of that goal, however, their increasing interactions with their physical environment (as opposed to just an information environment) will lead to enormously enhanced capabilities.
Last but not least, chapter 14 (Coevolution) gathers the forces of the (sometimes conflicting) prior interpretations into a forceful argument that humans and technology are coevolving. I point out that recent developments in the theory of biological evolution show that the sources of biological mutation are much more complex than Darwin envisioned. The sources of mutation in technology look more like these newer theories than the random accidents that Darwin posited. Most important, I argue that human culture and technology are evolving symbiotically and may be nearing a point of obligate symbiosis, where one cannot live without the other.
Today, the fear and hype around AI taking over the world and social media taking down democracy have fueled a clamor for more regulation. But if I am right about coevolution, we may be going about the project of regulating technology all wrong. Why have privacy laws, with all their good intentions, done little to protect our privacy and only overwhelmed us with small-print legalese?
Under the principle of digital creationism, bad outcomes are the result of unethical actions by individuals, for example by blindly following the profit motive with no concern for societal effects. Under the principle of coevolution, bad outcomes are the result of the procreative prowess of the technology itself. Technologies that succeed are those that more effectively propagate. The individuals we credit with (or blame for) creating those technologies certainly play a role, but so do the users of the technologies and their whole cultural context.
Under digital creationism, the purpose of regulation is to constrain the individuals who develop and market technology. In contrast, under coevolution, constraints can be about the use of technology, not just its design. The purpose of regulation becomes to nudge the process of both technology and cultural evolution through incentives and penalties. Nudging is probably the best we can hope for. Evolutionary processes do not yield easily to control.
Perhaps privacy laws have been ineffective because they are based on digital creationism as a principle. These laws assume that changing the behavior of corporations and engineers will be sufficient to achieve privacy goals (whatever those are). A coevolutionary perspective understands that users of technology will choose to give up privacy even if they are explicitly told that their information will be abused. We are repeatedly told exactly that in the fine print of all those privacy policies we don’t read. And, nevertheless, our kids get sucked into a media milieu where their identity gets defined by a distinctly non-private online persona.
I believe that, as a society, we can do better than we are currently doing. The risk of an Orwellian state (or perhaps worse, a corporate Big Brother) is very real. It has happened already in China. We will not do better, however, until we abandon digital creationism as a principle. Outlawing specific technology developments will not be effective. For example, we may try to outlaw autonomous decision-making in weapons systems and banking. But as we see from election distortions, machines are very effective at influencing human decision-making, so putting a human in the loop does not necessarily solve the problem. How can a human who is, effectively, controlled by a machine mitigate the evil of autonomous weapons?
A few people are promoting the term “digital humanism” for a more human-centric approach to technology. This point of view makes it imperative for all disciplines to step up and take seriously humanity’s dance with technology. Our ineffective efforts so far underscore our weak understanding of the problem. We need humanists with a deeper understanding of technology, technologists with a deeper understanding of the humanities, and policy makers drawn from both camps. We are quite far from that goal today.
Edward Ashford Lee has been working on software systems for 40 years and has recently turned to philosophical and societal implications of technology. After education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His research focuses on cyber-physical systems, which integrate computing with the physical world. He is author of several textbooks and two general-audience books, Plato and the Nerd: The Creative Partnership of Humans and Technology (2017) and The Coevolution: The Entwined Futures of Humans and Machines (2020).