Artificial intelligence has huge potential social benefits, such as devising new lifesaving drugs or finding new ways to teach children.
But it also has even larger potential social costs. If we’re not careful, AI could be a Frankenstein monster: It might eliminate nearly all jobs. It could lead to autonomous warfare.
Even such a mundane goal as making as many paper clips as possible could push an all-powerful AI to end all life on Earth in pursuit of more clips.
So, how would you build an enterprise designed to gain as many of the benefits of AI as possible while avoiding these Frankenstein monster horrors?
You might start with a nonprofit board stacked with ethicists and specialists in the potential downsides of AI.
That nonprofit would need vast amounts of expensive computing power to test its models, so its board would need to oversee a for-profit commercial arm that could attract investors.
How to prevent investors from taking over the enterprise?
You’d have to limit how much profit could flow to the investors (through a so-called “capped-profit” structure), and you wouldn’t put investors on the board.
But how would you prevent greed from corrupting the enterprise, as board members and employees are lured by the prospect of making billions?
Well, you can’t. Which is the flaw in the whole idea of private enterprise developing AI.
The enterprise I just described is the governing structure OpenAI began with in 2015, when it was formed as a research-oriented nonprofit to build safe AI technology.
But ever since OpenAI’s ChatGPT looked to be on its way to achieving the holy grail of tech — an at-scale consumer platform that would generate billions of dollars in profits — its nonprofit safety mission has been endangered by big money.
Now, big money is on the way to devouring safety.
In 2019, OpenAI shifted to a capped-profit structure so it could attract investors to pay for computing power and AI talent.
OpenAI’s biggest outside investor is Microsoft, which obviously wants to make as much money as possible for its executives and shareholders, regardless of safety. Since 2019, Microsoft has invested $13 billion in OpenAI, with the expectation of making a huge return on that investment.
But OpenAI’s capped-profit structure and nonprofit board limited how much Microsoft could make. What to do?
Sam Altman, OpenAI’s CEO, apparently tried to have it both ways — giving Microsoft some of what it wanted without abandoning the humanitarian goals and safeguards of the nonprofit.
It didn’t work. Last week, OpenAI’s nonprofit board pushed Altman out, presumably over fears that he was bending too far toward Microsoft’s goal of making money while giving inadequate attention to the threats posed by AI.
Where did Altman go after being fired? To Microsoft, of course.
And what of OpenAI’s more than 700 employees — its precious talent pool?
Even if we assume they’re concerned about safety, they own stock in the company and will make a boatload of money if OpenAI prioritizes growth over safety. OpenAI is estimated to be worth between $80 billion and $90 billion in a tender offer, which would make it one of the most valuable tech start-ups of all time.
So it came as no surprise that almost all of OpenAI’s employees signed a letter earlier this week telling the board they would follow Altman to Microsoft if it didn’t reinstate him as CEO.
Everyone involved — including Altman, OpenAI’s employees, and even Microsoft — will make much more money if OpenAI survives and they can sell their shares in the tender offer.
Presto! On Tuesday, OpenAI’s board reinstated Altman as chief executive and agreed to overhaul itself, jettisoning members who had opposed him and adding two who seem happy to do Microsoft’s bidding: Bret Taylor, an early Facebook officer and former co-chief executive of Salesforce, and Larry Summers, the former Treasury secretary.
Satya Nadella, Microsoft’s chief executive, said, “We are encouraged by the changes to the OpenAI board,” calling them a “first essential step on a path to more stable, well-informed, and effective governance.”
Effective governance? For making gobs of money.
The business press — for which “success” is automatically defined as making as much money as possible — is delighted.
It had repeatedly described the nonprofit board as a “convoluted” governance structure that prevented Altman from moving “even faster,” and it predicted that if OpenAI fell apart over the contest between growth and safety, “people will blame the board for … destroying billions of dollars in shareholder value.”
Which all goes to show that the real Frankenstein monster of AI is human greed.
Private enterprise, motivated by the lure of ever-greater profits, cannot be relied on to police itself against the horrors of an unfettered AI.
This past week’s frantic battle over OpenAI shows that not even a nonprofit board with a capped-profit structure for investors can match the power of Big Tech and Wall Street.
Money triumphs in the end.
The question for the future is whether the government — also susceptible to the corruption of big money — can do a better job weighing the potential benefits of AI against its potential horrors, and regulate the monster.