OpenAI launched in 2015 with the stated goal of building humanity’s first artificial general intelligence (AGI). The response to that launch was, as Sam Altman tells Lex Fridman in a recent podcast interview, mostly derision and laughter. People rolled their eyes — there go those crazy techbros again!
But with the 2020 release of GPT-3, the laughter turned to curiosity. Then the December 2022 launch of ChatGPT, powered by an even more powerful version of the language model, turned the curiosity into a kind of rolling wave of euphoria with undertones of rising panic.
The recent rollout of OpenAI’s GPT-4 model had a number of peculiar qualities to it, and having stared at this odd fact pattern for a while now, I’ve come to the conclusion that Altman is carefully, deliberately trying to engineer what X-risk nerds call a “slow take-off” scenario — AI’s capabilities increase gradually and more or less linearly, so that humanity can progressively absorb the novelty and reconfigure itself to fit the unfolding, science-fiction reality.
Here’s my thesis: The performance numbers published in the GPT-4 technical report aren’t really like normal benchmarks of a new, leading-edge technical product, where a company builds the highest-performing version it can and then releases benchmarks as an indicator of success and market dominance. Rather, these numbers were selected in advance by the OpenAI team as numbers the public could handle, and that wouldn’t be too disruptive for society. They said, in essence, “for GPT-4, we will release a model with these specific scores and no higher. That way, everyone can get used to this level of performance before we dial it up another notch with the next version.”
I’ve said a few times publicly that I think OpenAI and its main backer, Microsoft, would love to have a regulatory moat around AI in order to substitute for a possible technical moat.
But to be fair, I don’t think this is the whole picture – not by a long shot. Everyone who’s working in machine learning, with zero exceptions I can think of, considers this technology to be socially disruptive and to have a reasonable potential for some amount of near-term chaos as we all adjust to what’s happening.
I’m not talking about AGI-powered existential risk scenarios, though there are plenty of worries about those. I mean something more along the lines of the social changes we saw with the smartphone, the internet, or even the printing press, but compressed into such a short span of time that the effects are greatly magnified.
“The current worries that I have,” Altman told Fridman, “are that there are going to be disinformation problems, or economic shocks, or something else at a level far beyond anything we’re prepared for. And that doesn’t require superintelligence, that doesn’t require a super deep alignment problem and the machine waking up and trying to deceive us, and I don’t think that gets enough attention.”
So I think Altman is quite sincere in his repeated public calls for democratic oversight of AI development. The level of concern he’s expressing fits squarely within what I know of even the more accelerationist types in the AI scene – myself very much included (though I am not personally pro-government intervention).
There are a few pieces of evidence that have come together to convince me that OpenAI is deliberately not pushing the performance envelope, but is instead carefully managing the take-off scenario so that it unfolds on a slower timetable.
There’s a section in the GPT-4 technical report where the team describes the remarkable predictability of performance scaling (as measured by their benchmarks) that they get from increasing the model size.
“A large focus of the GPT-4 project was building a deep learning stack that scales predictably. The primary reason is that for very large training runs like GPT-4, it is not feasible to do extensive model-specific tuning. To address this, we developed infrastructure and optimization methods that have very predictable behavior across multiple scales. These improvements allowed us to reliably predict some aspects of the performance of GPT-4 from smaller models trained using 1,000× – 10,000× less compute.”
What this says is that they can build a small model with a limited number of parameters, making it cheap and easy to train. Then they benchmark it, and based on those benchmarks, they can predict the performance of a version of that model with a larger number of parameters by just applying a scaling formula.
To just make up some numbers for the sake of example: If OpenAI trains a toy version of GPT-4 that has 100 million parameters, and it scores a 21 on the ACT, then they can plug that data point into their parameters-vs.-ACT-score formula to see that a 1-billion parameter version will score a 25, a 10-billion parameter version will score a 29, a 100-billion parameter version will score a 32, and so on.
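To make that concrete, here’s a minimal sketch of the kind of extrapolation being described, using the made-up parameter counts and ACT scores above. It’s purely illustrative: the technical report fits power laws to quantities like final training loss and coding-test pass rates rather than exam scores, and nothing below reflects OpenAI’s actual formula.

```python
import numpy as np

# Hypothetical data points from the example above: (parameter count, ACT score).
# None of these are real GPT-4 figures.
params = np.array([1e8, 1e9, 1e10, 1e11])
scores = np.array([21, 25, 29, 32])

# Toy fit: treat the ACT score as a linear function of log10(parameter count).
# (The real report fits power laws to things like final loss and coding-test
# pass rates, not exam scores; this just shows the shape of the workflow.)
slope, intercept = np.polyfit(np.log10(params), scores, deg=1)

def predicted_score(n_params: float) -> float:
    """Extrapolate the toy fit, capping at the ACT maximum of 36."""
    return float(min(36.0, slope * np.log10(n_params) + intercept))

for n in (3e11, 1e12):
    print(f"{n:.0e} parameters -> predicted ACT score ~ {predicted_score(n):.1f}")
```

The fitted quantities and functional forms in the real thing are more careful than this, but the workflow is the same: fit the curve on cheap small runs, then read the bigger model’s expected score off the curve before you ever train it.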
So based on this paragraph alone, we should look at GPT-4’s benchmark performance as a pre-selected outcome. They looked at a point on their parameters vs. performance curves and said, “Let’s turn the scaling dial so that GPT-4 lands… there! That’s about what we estimate society will be ready for when we launch this in a few months.”
An important indicator that they’re operating the way I’m describing is in the first sentence of the paragraph I quoted above: “A large focus of the GPT-4 project was building a deep learning stack that scales predictably.” They put so much effort into optimizing for predictable scalability because they want to be able to manage the pace of take-off.
So how fast could take-off be? That’s unclear, but it does seem like OpenAI has considerably more performance on tap than it has actually poured with GPT-4. Specifically, it seems likely that GPT-4’s parameter count is lower than GPT-3.5’s.
Some indicators that this may be the case:
If GPT-4 is indeed smaller than GPT-3.5, then OpenAI is leaving performance on the table by restricting the parameter count — and the company knows exactly how much, at least on its existing benchmarks.
I take some of Altman’s comments to Fridman — the exchanges where he’s saying he doesn’t think LLMs will get us all the way to AGI — to indicate that even maxing out the benchmarks they’re using for performance would not qualify as “AGI” to him.
In this clip, Fridman asks Altman, “Do you think it’s possible that large language models really [are] the way we build AGI?”
“I think it’s part of the way,” Altman responds. “I think we need other super important things.”
All this is of a piece with the open-sourcing of OpenAI’s Evals framework. The company wants help creating new benchmarks they can use in conjunction with their scaling formulas to more fully map out the capability space of large language models in much higher scaling regimes than the ones they’re currently operating in. That way, they have a better idea of where really cranking up the scaling will take them, so they don’t train an AGI by accident.
(Accidentally training an AGI would be very bad, because one model file leak, as happened with Meta’s LLaMA, and the genie is out of the bottle. They’d want time to prepare for the training run they think is going to get them across the finish line.)
So it seems unlikely that just YOLOing the parameter scaling to as high as they can possibly get it would get them into AGI territory, but they don’t know that for sure, and they also don’t know what parts would be missing.
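For what it’s worth, contributing one of those benchmarks is a fairly lightweight exercise. Here’s a rough sketch of an eval sample file in the general shape the Evals repo documents, JSONL records with a chat-style input and an ideal answer; treat the exact schema as my assumption rather than gospel, and note that the questions and file name are invented for illustration.

```python
import json

# Hypothetical eval samples in roughly the JSONL shape the OpenAI Evals repo
# describes for its basic "match"-style evals: a chat-formatted prompt plus an
# ideal answer. The questions and file name are made up for illustration, and
# the exact schema should be checked against the repo itself.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single number."},
            {"role": "user", "content": "What is 17 * 23?"},
        ],
        "ideal": "391",
    },
    {
        "input": [
            {"role": "system", "content": "Answer with a single number."},
            {"role": "user", "content": "How many prime numbers are less than 20?"},
        ],
        "ideal": "8",
    },
]

# One JSON record per line is the format this kind of harness typically expects.
with open("toy_arithmetic_samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

Every eval of this kind gives OpenAI one more axis along which to fit and extrapolate its scaling curves.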
I think we should count ourselves fortunate if indeed OpenAI is deliberately holding back its capability scaling to a pace that’s less disruptive than it could otherwise be, but we shouldn’t count on this one company to protect us from a fast take-off scenario or even a sudden capability jump that sows chaos. There are many other runners in this race, with many different agendas, and we’re living in a world where at any moment a new model can drop that changes everything.
-Jon Stokes