Just a few weeks ago, the GPT-3 language model was made available to the public. Developed by OpenAI, a leading artificial intelligence company, the model can create complex, comprehensible, human-like texts with only a small amount of input, using the power of artificial intelligence and deep learning. In other words, it looks like robots are finally coming to replace human authors. But how big is the threat, really? Can a computer program replace the human mind? Most importantly, do I need to look for a new job?
The debate around artificial intelligence has long been torn by the question of whether the advent of robots will cost us our freedom, threaten us with physical extinction, or leave us behind to squabble about who should have the last slice of bread while the robots flicker away in the ruins of our once-great civilisation. And while, no, you should not literally be worried about dying in a nuclear winter caused by super-intelligent AI gone rogue, artificial intelligence is finally starting to make its way out of the lab and into the wider world. That means this debate needs to be had, and it needs to be had soon before robots start writing memoirs, screenplays, novels, and other cultural markers that tell us where we came from, warts and all...
Did that second paragraph seem a little bit 'off' to you? It took a bit of a turn, didn't it? Did it need to be that long and wordy? If so, don't blame me; blame GPT-3. The program wrote the entire paragraph, with the first paragraph (written by me, a real human being) used as a prompt. That's how GPT-3 works - it takes a little bit of original material and expands on it using AI.
The GPT-3-written text isn't perfect, but it's certainly not terrible. It follows logically from the first paragraph that I wrote. It has a clear structure and even uses the same tone and style. It's a bit verbose, but I certainly wouldn't be able to tell it apart from a text written by a human - did you?
Until very recently, the GPT-3 model was relatively closed off, available only to some chosen testers and lucky individuals who got access via a waiting list. Earlier in November, the waitlist was scrapped, and the gates were thrown open. Now, any developer can start using and experimenting with the GPT-3 model, paying only for the text they generate in a 'pay-per-use' model. And anyone can play around with it, free of charge - have a go yourself.
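To give a rough idea of what "playing around with it" looks like for a developer, here is a minimal sketch of a completion call, assuming the `openai` Python client as it was available in late 2021. The engine name, prompt, and parameter values are illustrative, and an API key is assumed to be set in the environment:

```python
# Sketch of a GPT-3 completion request. Assumes the `openai` Python
# client (circa late 2021); engine name and parameter values are
# illustrative, and OPENAI_API_KEY must be set in the environment.
import os

def build_completion_params(prompt, max_tokens=150, temperature=0.7):
    """Collect the parameters for a text-completion call."""
    return {
        "engine": "davinci",       # base GPT-3 engine
        "prompt": prompt,          # the text GPT-3 expands on
        "max_tokens": max_tokens,  # you are billed per generated token
        "temperature": temperature,
    }

def generate_text(prompt):
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(**build_completion_params(prompt))
    return response["choices"][0]["text"]
```

Because billing is per token generated, the `max_tokens` parameter effectively caps the cost of each call.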
The potential applications of a model like GPT-3 are fascinating. It may simply seem like a fun curiosity, but there are plenty of practical use cases where automatic, accurate text production is useful.
For example, chatbots have been around for a while, but they're often frustrating to interact with. A chatbot that can give logical, useful answers to questions instead of pre-set responses would be fantastic in customer service. Models like GPT-3 can make this possible.
Similarly, a business communication tool that can read four pages of meeting notes and produce a short summary would be appreciated by most companies. GPT-3 can even be used for more creative tasks - imagine you're a B2B company looking to start selling your wide range of products online. You can give GPT-3 basic information about the products' specifications and features and instantly get enticing ad copy for each one, adapted for different target groups and platforms.
It's this creative aspect of AI that creates debate. Can computers really create new things as humans can? Can a computer write a Nobel Prize-winning novel or the script for a scary movie? The software hasn't come that far yet, but people are working on it. And with a giant like Microsoft bankrolling OpenAI's research, the software will get more sophisticated.
This poses an obvious question - will human content creators be replaced in the future? Will you be able to sack your in-house content specialists, tear up your contracts with agencies like Zooma and use GPT-3 to produce 100 articles a day? In my opinion, the answer is no, for one simple reason:
No AI model is a match for genuine knowledge
Tools like GPT-3 are impressively good at two things - language processing and making use of existing knowledge. The program was 'trained' on a vast corpus of web pages, books, and Wikipedia articles, so it can convincingly refer to real-world facts, figures, and ideas - you can see that in the second paragraph of this article. From its training, GPT-3 clearly recognises an ongoing debate around the use and future of AI. Not bad.
When it comes to processing, it's also fairly talented. That's why it's good at condensing lots of text into a short summary or turning one type of text into another.
However, what GPT-3 can't do is know the details of your company's products and solutions, why they are relevant and valuable for your prospective and existing customers, and how they can solve their problems and challenges.
The best knowledge content, which creates trust for your company and presents you as a knowledgeable expert partner, needs to be created using knowledge from within your company. It's in the heads of your product managers, salespeople, service reps, and designers, not on a Wikipedia page that can be crawled by an AI program.
GPT-3 is really good at producing understandable texts around a certain theme that use many words but don't say much. It can also use its 'knowledge' to write fact-based, more analytical texts on general topics. This is good news for high school students who can't be bothered writing an essay, but it doesn't make much of a difference for B2B companies who need to make their knowledge available to a specific audience.
The technology will become more sophisticated over time, and I'm sure the texts produced by GPT-3 will improve. But for the time being, I think B2B knowledge content will need to be created by humans, not robots.
Subscribe to The Onlinification Hub, and get insights like this delivered straight to your inbox.