📚 Finished listening to The Coming Wave by Mustafa Suleyman.

Suleyman is a co-founder of the AI lab DeepMind, so he can certainly claim the expertise to know what’s going on at the cutting edge of AI.

In this book he provides a stark warning about both AI and another potentially transformative technology - synthetic biology.

He regards these both as general-purpose technologies with many potential uses. Enough that either or both could transform society, for better or worse. Horrific dystopia, the end of life as we know it - or unbridled prosperity and happiness? That is the choice humanity is going to have to make. After we figure out how exactly we can even make those choices.

Why are AI and synthetic biology so different from the inventions of the past? Suleyman identifies four features that make their “coming wave” hard to control.

Asymmetry: It may be far easier to use them for offence than defence. In the future, it might not be hard for an individual to gain access to an engineered pathogen that could kill millions of people. Defending against such threats is likely to remain incredibly challenging.

Hyper-evolution: The pace of development of these technologies is far faster than the rate at which societies and regulators can adapt to them. The sheer speed of progress means dangerous future developments might appear before we have the structures in place to control them.

Omni-use: These technologies are very general, very versatile. We’ll want to use these technologies for beneficial uses - but how then can we prevent them from being used for harm? The same algorithm that can discover new medicines may also be able to discover new deadly poisons.

Autonomy: These technologies can in theory operate without direct human control. AI systems can make decisions. Biological entities will reproduce. How can we make sure that what they do, or “choose” to do, is in line with the values of humanity?

The central problem might be that of containment. Given these technologies exist and are seeing rapid development, how can we ensure they don’t break out of their confines and run amok in our world? How can we keep abreast of their “modes of failure” and regulate to mitigate any negative impact they might bring?

We don’t have a great history of successfully containing technology. The big success story to me, if there is one, might perhaps be nuclear weapon containment. But anyone who has read a newspaper recently will have seen how fragile even that system is.

But with technology as potentially all-encompassing and powerful as AI and synthetic biology, Suleyman argues, we simply must get it right - if it is possible, which is in itself uncertain.

He provides 10 suggestions. Roughly:

  1. Safety: invest in an “Apollo program” for technical safety. Require funding to go to safety. Be sure we can shut down AIs if we need to.
  2. Audit: regularly audit and share knowledge about AI systems. Don’t rely on institutions that don’t have enough information to act optimally. Transparency and accountability are key.
  3. Choke points: buy time to build a defence. Consider import/export restrictions, constraints on use of certain software and hardware, limit chip sales.
  4. Critics as makers: ensure the views and expertise of AI critics are used when developing and building these systems. Responsible developers must build safety controls into the technology from day 1.
  5. Profit should not be the only incentive for developing these technologies. We need to use business models that promote safety as well as profitability.
  6. Help strengthen governments; support them in adapting to these technologies so that they can regulate effectively. Consider a licensing system, and education initiatives. We may need to change the tax system.
  7. Create global treaties and alliances to negotiate universal standards and regulations.
  8. Promote an open culture - encourage experiments, share learnings, fail openly, learn from what went wrong.
  9. Promote public awareness of what’s going on. Incorporate grassroots public movements into technological development processes.
  10. Produce a coherent whole. None of the above steps will work in isolation. These technologies are extremely complicated; focus on careful rather than rapid changes. The above 9 steps should be considered as a virtuous circle, not competing programs.

Importantly, note that containment isn’t a project with an end date. One of the most terrifying aspects of this whole thesis is that, if these technologies are as powerful as the author thinks, then, however well we have done so far, it might only take a single bad decision in the future to push humanity to the brink of catastrophe.
