🎙️ Listening to The Ezra Klein Show podcast episode “Why A.I. Might Not Take Your Job or Supercharge the Economy”.

I think he’s spot on regarding the real AI alignment problem.

Sure, it’s possible that some super-powerful computer brain will eventually decide to remove the human race so it can get on with manufacturing paperclips. If anything remotely like that is a plausible scenario then of course we need to work on the knotty problem of “traditional” AI alignment before it’s too late.

But not at the expense of addressing our present-day AI alignment problem: the basic motivations of the organisations that develop, own and often hold some kind of monopoly over the most powerful AIs are usually not aligned with the public good in general, and almost never with protecting or promoting the interests of the worst-off in society.

Right now, AI development is being driven principally by one question: can Microsoft beat Google to market? And what does Meta think about all that?

The interests of companies, governments and other institutions are not usually aligned with the interests of humanity as a whole. I’d generalise even further: the broader social and economic structures most of us live under today are not aligned with the interests of the majority of the people enmeshed in them, whether or not those structures served us well in some ways in the past.

Admittedly, the truth of these sorts of claims always depends on what you feel society should be optimised for; it’s subjective rather than objective. But it doesn’t seem likely to me that a world where a primary motivation for the most cutting-edge AI development is something like “How can Big Tech company X make a more impressive-seeming and addictive chatbot than company Y, such that its shareholders get richer faster?” will produce the best possible result in terms of maximising the benefits for almost anyone.

Google’s main revenue stream remains advertising. Assuming that persists, I doubt society as a whole is going to benefit much from Google’s brightest minds figuring out how to use these amazing new technologies to build better adverts - to make us click on more things we’re not really interested in and buy more things we don’t really want.

Sure, the natural focus of these companies might help those who - to borrow a famous phrase - own the means of production become ever richer, ever more monopolistic, ever more exploitative. But beyond that, why should we imagine that a product designed to make shareholders rich would be especially aligned with any other outcome?

I do not want the shape and pace of AI development to be decided by the competitive pressures between what are functionally three firms. I think the idea that we’re going to leave this to Google, Meta and Microsoft is a kind of societal lunacy.

Generating extra money for rich people may not be the primary personal driver for developers in these fields. In fact I’m sure it’s not. Whatever our job, few of us turn up each day exclusively motivated by the idea that by the end of the workday we may have increased the wealth of some other firm’s investment managers.

For a start, a certain type of person, me included, finds these technologies intrinsically interesting - witness the open source alternatives to Big Tech’s AI tools that volunteers have rapidly developed, albeit volunteers who usually don’t have access to anything like the massive resources the Big Tech firms can bring to bear.

And if modern-day AI tools are as revolutionary as some believe, then there’s a huge amount of potential social good within our grasp, especially if we believe that most people are fundamentally “good” in important ways. But people operate inside structures that are often far more powerful than any individual at promoting their own rules, regulations, incentives and goals.

I think you have to actually ask, as a society: what are you trying to achieve? What do you want from this technology?

If the only question here is what Microsoft wants from the technology - or what Google wants - that’s stupid. That is us abdicating what we actually need to do.

Of course this misalignment isn’t unique to AI. It’s a constant battle in the world of technology and beyond.