OpenAI’s latest language-generating AI is now out, and it is mind-boggling
by Pisana Ferrari – cApStAn Ambassador to the Global Village
Over the past few days the internet has been abuzz with talk about GPT-3, OpenAI’s newest AI language model. GPT-3’s predecessor, GPT-2, released in 2019, could already produce convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 has taken language generation to an entirely new level. It has 175 billion parameters – the values that a neural network tries to optimize during training – versus GPT-2’s already huge 1.5 billion. OpenAI has released the software to selected people who requested access to a private beta. For now, it wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year.
Examples of GPT-3’s language capabilities
Samples of GPT-3 texts have been circulating widely on social media. Arram Sabeti, a San Francisco-based developer and artist, tweeted last week that he was “blown away” by the language capabilities of GPT-3, which, he added, “is eerily good at writing almost anything”. Sabeti used the AI to generate short stories, songs, press releases, technical manuals, and more, and has published the results on his blog. In the example that follows, the prompt was: “Below is a screenplay for a film-noir hard boiled detective story by Raymond Chandler about the boy wizard Harry Potter”; the rest was written by GPT-3:
“Harry Potter, Private Eye.
Scene: A small dingy office, early morning, furniture of the Salvation Army store variety. Sordid atmosphere. Harry Potter, in ratty tweed suit, unpressed shirt, and unshined shoes, sits behind the desk looking haggard, rumpled, and embittered. In one corner of the office a torn shade rattles at the wind gusts outside. Harry has one long index finger on the butt of a .38 and he broods over a photograph. The photograph shows a gorgeous blonde wearing an evening gown. She is holding a cocktail in one hand and looking pensively at a cigarette in the other. A tear adorns her cheek. Harry looks at her for a long moment, sighs, then puts his gun down and stands. He straightens his clothes, scratches his head, then with more resolution turns around, picks up his pistol, and leaves the office.” (excerpt)
Writer and independent researcher Gwern Branwen has published a detailed analysis of GPT-3 and says that the texts it produces “are not just close to human level: they are creative, witty, deep, meta, and often beautiful”. He has tried his hand at different things, including poetry and fairy tales. Based on his prompt “I am feeling bored today. Grandma, tell me a story about the time the Cat stole the sun”, here is what GPT-3 wrote:
“Once upon a time, the Cat went to visit the Sun. He hadn’t seen the Sun for quite some time. He told her to come out for a walk. The Sun told him that she didn’t want to. The Sun was shy and didn’t want to come out. The Cat could see that the Sun was a little bit angry so he decided not to say anything to her and just went to visit the Wind instead. But the Wind was angry too. The Wind wanted to blow the Cat away. The Cat ran back home, looked for something nice to eat and went to bed. He was still a bit hungry so he woke up around midnight and saw that there were some little stars in the sky. The stars came down to visit the Cat and they were very hungry as well. The Cat ate a star and they all went to bed and fell asleep. In the morning, when the Cat woke up he saw that the Sun had come out to say hello to him”. (excerpt)
GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who is considered a pioneer in the use of machine learning in the arts, shared a short story on Twitter called “The importance of being on Twitter,” written in the style of Jerome K. Jerome. Klingemann says all he gave the AI was the title, the author’s name and the initial “It.”
“It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage. I spoke to the sexton, whose cottage, like all sexton’s cottages, is full of antiquities and interesting relics of former centuries. I said to him, “My dear sexton, what does all this twittering mean?” And he replied, “Why, sir, of course it means Twitter.” “Ah!” I said, “I know about that. But what is Twitter?” “It is a system of short and pithy sentences strung together in groups, for the purpose of conveying useful information to the initiated, and entertainment and the exercise of wits to the rest of us.” (excerpt)
Nature and limitations of GPT-3
Rob Toews, writing in Forbes, says that there is widespread misunderstanding about the nature and limits of GPT-3’s abilities, and argues that it is important for the technology community to have a clear-eyed view of what it can and cannot do. Having trained on a dataset of half a trillion words, he explains, GPT-3 is able to identify and brilliantly use the linguistic patterns contained therein. But it does not “know” what these words actually mean (“GPT-3 is shockingly good – and completely mindless”, as an article in MIT Technology Review puts it), and it is therefore susceptible to basic errors that an average human would never commit.
At its core, says Toews, GPT-3 is a text predictor, albeit a very sophisticated one. A human gives it a chunk of text as input, and the model generates its best guess as to what the next chunk of text should be. It can then repeat this process, taking the original input together with the newly generated chunk, treating that as a new input, and generating a subsequent chunk, until it reaches a length limit. One consequence is that it can struggle to maintain a coherent narrative or deliver a meaningful message over more than a few paragraphs. This does not mean that GPT-3 is not a useful tool or that it will not underpin many valuable applications, he concludes. But OpenAI CEO Sam Altman himself wrote on Twitter that “the GPT-3 hype is way too much … AI is going to change the world, but GPT-3 is just a very early glimpse.”
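To make the loop Toews describes concrete, here is a minimal sketch in Python. The predict_next_token function is a hypothetical stand-in for the model (GPT-3’s real guesses come from its 175 billion trained parameters, not random choice); only the feed-the-output-back-in loop is the point.

```python
import random

def predict_next_token(context):
    # Hypothetical stand-in for the model: a real system would return the
    # most likely continuation learned from training data, not a random word.
    toy_vocabulary = ["the", "cat", "sat", "on", "a", "mat", "."]
    return random.choice(toy_vocabulary)

def generate(prompt_tokens, max_length=20):
    tokens = list(prompt_tokens)
    # The loop Toews describes: feed the original input plus everything
    # generated so far back in as the new input, until the length limit.
    while len(tokens) < max_length:
        tokens.append(predict_next_token(tokens))
    return " ".join(tokens)

print(generate(["once", "upon", "a", "time"]))
```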
About OpenAI
OpenAI is a non-profit AI company founded in 2015 in San Francisco, California. Its mission is to ensure that artificial general intelligence benefits all of humanity: https://openai.com/
About GPT models
The name GPT stands for “Generative Pretrained Transformer”, referencing a 2017 Google innovation called the Transformer, which could figure out the likelihood that a particular word would appear with surrounding words.
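As a toy illustration of that idea (estimating how likely a word is given its surrounding words), the sketch below simply counts word pairs in a tiny invented sample text; a real Transformer learns such likelihoods with attention layers over enormous corpora, and the corpus and function names here are made up for illustration.

```python
from collections import Counter

# Tiny invented corpus; a real model trains on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each one-word context.
pair_counts = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def likelihood(context, word):
    """Estimate P(word | context) from raw counts."""
    if context_counts[context] == 0:
        return 0.0
    return pair_counts[(context, word)] / context_counts[context]

print(likelihood("the", "cat"))  # 0.5: "the" is followed by "cat" in 2 of 4 cases
```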
See also
Our article about other predictive text models and GPT-3’s predecessor GPT-2, titled “Predictive text AI: a ‘co-writer’ that can save users 2 billion keystrokes a week”
Sources
“GPT-3 Is Amazing – And Overhyped”, Rob Toews, Forbes, July 19, 2020
“OpenAI’s New Language Generator GPT-3 Is Shockingly Good – and Completely Mindless”, posted by msmash on Slashdot, July 20, 2020
“OpenAI’s new language generator GPT-3 is shockingly good – and completely mindless”, Will Douglas Heaven, MIT Technology Review, July 20, 2020