AI could usher in an amazing new age for humanity.
There may also be downsides. As one company put it:
“Jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.”
If you’re surprised to learn that this risk assessment comes from OpenAI, the company at the forefront of the AI push, you shouldn’t be: Concern about the technology’s use and effects was core to OpenAI’s founding in 2015, when Sam Altman and others created a nonprofit entity specifically to help manage those risks.
OpenAI has had a weird and convoluted history since then, and now has a very weird for-profit/nonprofit structure. One constant is that Sam Altman still runs it, and that lots of people who know Sam Altman don’t trust him.
That’s the takeaway from a lengthy new Altman profile in the New Yorker, written by Ronan Farrow and Andrew Marantz. In the piece, Altman is frequently described by past and present associates and coworkers as… slippery, at best. “He’s unconstrained by truth,” a former OpenAI board member says.
This is not a new charge: In the fall of 2023, OpenAI’s board fired Altman and said he was “not consistently candid in his communications.” Altman then essentially engineered a reverse coup, where he was reinstated as CEO and most of his detractors left the company.
The fact that many people who worked with Altman no longer want to do so is not news, either. Several companies have been founded by former Altman colleagues turned adversaries, most prominently Anthropic, run by former OpenAI executive Dario Amodei, whose distaste for Altman has been extensively documented. (Remember this photo op from February, where Amodei and Altman wouldn’t hold each other’s hands?)
And if you are looking for more evidence that working with Sam Altman can be a difficult prospect, consult The Information, which reports that he doesn’t seem to be getting along with CFO Sarah Friar, a problem for a company prepping for a massive IPO this year. Among the apparent issues: Friar reportedly “told some colleagues earlier this year that she didn’t believe the company would be ready to go public in 2026.”
I’ve asked OpenAI for comment about both stories.
All of this brings us back to OpenAI’s new whitepaper, titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First”, which says that AI will be awesome, but there will be some downsides, so we should “start a conversation about governing advanced AI.”
Among the proposals: Changing the tax code to extract more money from big companies and rich people, to make up for the drop in payroll taxes that could come if lots of people are AI’d out of work. Another idea: Extra taxes for companies that AI people out of work. The assumption underlying all of this: If AI delivers on its promise, it could produce wrenching, destabilizing change.
All of which means we’re in a position where we need to put an enormous amount of faith and trust in the people who run AI companies. Stories like the New Yorker’s will make it harder to trust Altman.

