Edward Ashford Lee


On his book The Coevolution: The Entwined Futures of Humans and Machines

Cover Interview of May 20, 2020


Today, the fear and hype around AI taking over the world and social media taking down democracy have fueled a clamor for more regulation. But if I am right about coevolution, we may be going about the project of regulating technology all wrong. Why have privacy laws, with all their good intentions, done so little to protect our privacy, only overwhelming us with small-print legalese?

Under the principle of digital creationism, bad outcomes are the result of unethical actions by individuals, for example by blindly following the profit motive with no concern for societal effects. Under the principle of coevolution, bad outcomes are the result of the procreative prowess of the technology itself. Technologies that succeed are those that more effectively propagate. The individuals we credit with (or blame for) creating those technologies certainly play a role, but so do the users of the technologies and their whole cultural context.

Under digital creationism, the purpose of regulation is to constrain the individuals who develop and market technology. In contrast, under coevolution, constraints can be about the use of technology, not just its design. The purpose of regulation becomes to nudge the processes of both technological and cultural evolution through incentives and penalties. Nudging is probably the best we can hope for. Evolutionary processes do not yield easily to control.

Perhaps privacy laws have been ineffective because they are based on digital creationism as a principle. These laws assume that changing the behavior of corporations and engineers will be sufficient to achieve privacy goals (whatever those are). A coevolutionary perspective understands that users of technology will choose to give up privacy even if they are explicitly told that their information will be abused. We are repeatedly told exactly that in the fine print of all those privacy policies we don’t read. And, nevertheless, our kids get sucked into a media milieu where their identity gets defined by a distinctly non-private online persona.

I believe that, as a society, we can do better than we are currently doing. The risk of an Orwellian state (or perhaps worse, a corporate Big Brother) is very real. It has happened already in China. We will not do better, however, until we abandon digital creationism as a principle. Outlawing specific technology developments will not be effective. For example, we may try to outlaw autonomous decision-making in weapons systems and banking. But as we see from election distortions, machines are very effective at influencing human decision-making, so putting a human in the loop does not necessarily solve the problem. How can a human who is, effectively, controlled by a machine mitigate the evil of autonomous weapons?

A few people are promoting the term “digital humanism” for a more human-centric approach to technology. This point of view makes it imperative for all disciplines to step up and take seriously humanity’s dance with technology. Our ineffective efforts so far underscore our weak understanding of the problem. We need humanists with a deeper understanding of technology, technologists with a deeper understanding of the humanities, and policy makers drawn from both camps. We are quite far from that goal today.