
The 3rd step in autocomplete

Let’s talk about GPT-3, the 3rd step in autocomplete: the massive language model that’s here to let the sun shine down on our brainstorms.

The proverbial Magic 8-ball, or, as HuggingFace calls it, “…what calculators are to calculus.”

But don’t get your hopes up just yet; as Sam Altman humbly reminds us, there are still a lot of kinks to work out. GPT-3 was trained on a vast swath of text scraped from the internet: the good, the bad and the ugly; it’s all there, fueling the beast.

Or as Julian Togelius coyly states, “GPT-3 often performs like a clever student who hasn't done their reading trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.”

So what can it do and how does it work?

While it appears to create magic, GPT-3 is simply a trained language model that generates text: a pattern-recognition machine that draws on the weighted connections between the nodes of its neural network. It isn’t modeled directly on the human brain; instead, it relies on 175 billion trained parameters, mined from its training data for statistical regularities.
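To make the “pattern recognition machine” idea concrete, here is a minimal sketch of text generation with a neural language model. GPT-3 itself is only reachable through OpenAI’s API, so this sketch uses Hugging Face’s transformers library with GPT-2, a smaller, freely available predecessor, as a stand-in; the prompt and settings are illustrative assumptions, not anything specific to GPT-3.

```python
# Minimal sketch: text generation with a pretrained language model.
# GPT-2 stands in for GPT-3 here; prompt and length are arbitrary choices.
from transformers import pipeline

# Load a text-generation pipeline backed by a pretrained language model.
generator = pipeline("text-generation", model="gpt2")

# The model predicts statistically likely next tokens, one after another,
# based on the patterns captured in its trained parameters.
prompt = "GPT-3 is a language model that"
outputs = generator(prompt, max_length=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Whatever continuation it prints is nothing more than the statistically plausible next words given the prompt, which is both the whole trick and the reason the output can read like Togelius’s well-spoken bluffer.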

The big question is: can we scale this model up, or do we need to build something entirely new to reach AGI? Are we missing key components, like an understanding of cause and effect, needed to create artificial minds, or are most problems in AI solved by throwing more data and processing power at them?

While GPT-3 is not here to finish your novel or create a full-blown experience, it can do some amazingly clever things. Kaj Sotala compiled a very amusing thread highlighting people’s experiences with GPT-3.