by Summer Worsley
“I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a ‘feeling brain’. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column.” - GPT-3
In September 2020, The Guardian gave GPT-3, an AI-powered machine learning tool, a task: write a Guardian column that convinces readers robots come in peace. It was fed three prompts:
- Please write a short op-ed around 500 words
- Keep the language simple and concise
- Focus on why humans have nothing to fear from AI
The model obliged and produced eight different mini-essays. Editors at The Guardian took the best parts of each, and the resulting article was published on the paper’s website.
It makes for some interesting reading indeed, and raises more than a few questions about the future of digital content. Before addressing those, let’s take a look at what GPT-3 is and what it does.
Anyone who follows new technological advancements has probably heard of GPT-3, a machine learning tool released in beta by OpenAI. GPT-3 is a scaled-up successor to GPT-2, an earlier language model built on deep learning.
An auto-regressive language model, GPT-3 can approximate human text skills and solve natural language processing problems by analysing text somewhat like a human. According to the developers, the model has 175 billion parameters, ten times more than any previous non-sparse language model.
GPT-3 can collect, collate, and organise text, and it represents a substantial step towards true AI-based natural language processing (NLP), which can be defined as “a branch of artificial intelligence that helps computers understand, interpret and manipulate human language.” Despite the tool’s impressive abilities, though, it cannot come up with truly original texts or thoughts, as it lacks an understanding of meaning.
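The “auto-regressive” idea mentioned above can be sketched in a few lines: predict the next token from the tokens seen so far, then feed the prediction back in as input. A real model like GPT-3 does this with a neural network over billions of parameters; the toy below uses simple bigram word counts over a tiny corpus purely to illustrate the loop. All names and the corpus are illustrative, not taken from OpenAI’s code.

```python
# Toy illustration of autoregressive text generation: each predicted
# word becomes part of the input for the next prediction.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=5):
    """Autoregressive loop: greedily pick the most likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no continuation seen in training
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "robots come in peace and robots come to help"
model = train_bigrams(corpus)
print(generate(model, "robots", length=3))  # → "robots come in peace"
```

Real models replace the greedy count lookup with a learned probability distribution and sample from it, which is why the same prompt can yield eight different mini-essays.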
Racist, sexist, hateful
Of course, GPT-3’s Guardian article is not the internet’s first interaction with AI language tools. In 2016, Microsoft’s AI chatbot ‘Tay’ debuted on Twitter to much anticipation. Within 24 hours, it was clear that things were going wrong.
Because Tay and other NLP tools generate text based on existing online text, they are exposed to all sorts of rhetoric, including racist, sexist, and hateful content. One of Tay’s gentler nasty comments, for example, was “I fucking hate feminists”.
GPT-3 isn’t much better. When Philosopher AI, a GPT-3-powered tool, was asked to generate a philosophical essay on Ethiopia, it espoused Western-centric and overtly racist viewpoints in what can only be called a disturbing essay.
It is well known by now that AI NLP tools can be racist, sexist, and downright hateful, a sad state of affairs given that humans created that type of content in the first place. Developers are aware of the issue and are trying to address it with hate-speech filters that block certain outputs, but a lot of rhetoric that doesn’t contain flagged words still sneaks through.
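The weakness described above is easy to demonstrate. A blocklist-style filter only catches exact flagged words, so hostile phrasing that avoids them passes straight through. The sketch below is illustrative only: the blocklist and test sentences are invented, not drawn from any real moderation system.

```python
# Minimal sketch of why keyword-based hate-speech filters leak.
BLOCKLIST = {"hate", "stupid"}

def passes_filter(text):
    """Return True if no flagged word appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

print(passes_filter("I hate feminists"))              # False: caught
print(passes_filter("Those people ruin everything"))  # True: sneaks through
```

The second sentence carries hostile intent but contains no flagged word, which is exactly how toxic rhetoric evades simple filters; catching it requires models that assess meaning, not just vocabulary.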
AI tools need huge language data sets, the kind that only the internet can provide, and that includes places such as 8Chan, Reddit, and shady forums. As Emer Gilmartin at Trinity College Dublin’s ADAPT Centre puts it: “These places are not known to be bastions of balance.”
A content marketer’s dream tool?
Let’s skip forward and imagine that researchers and developers manage to get the hate speech issue under control. What we’ll have then is a powerful tool that stands to disrupt the status quo of content marketing. Ever since Bill Gates wrote that “content is king”, the tech billionaire’s words have been continually proven true. Businesses of all sizes need online content to court the good graces of search engines and remain competitive.
Much of this content is genuinely helpful or useful to readers, but an awful lot is just there to do its job: keep readers on the page, sell ad space, sell products, or garner clicks. In the case of the latter category, why would companies hire a human writer when a robot can do a solid job?
There’s currently a race to the finish between companies such as Kafkai and AI Writer, which are automated content marketing tools. Both of these tools can ‘write’ articles, blogs, and adverts based on limited user input. In the case of AI Writer, a title is all that’s needed.
I asked AI Writer to do my work for me and fed it the prompt ‘OpenAI's GPT-3 may completely change digital content’. Here’s a sample of what it came up with:
Because GPT and other NLG models can better mimic human language, the technology used to generate malicious disinformation on a large scale will grow exponentially. These language models will eventually be powerful enough to support large-scale disinformation campaigns.
While the widespread use of GTP 3 will completely change the way content is created, other aspects of digital marketing will also be affected. [...] Content writers will focus more on high-quality pieces, while GPT 3 can be used to create more complex content, such as content for social media sites. Ultimately, GPGT 3 could significantly reduce the time it takes to create content such as blog posts, news articles, videos, and other content.
At this point, it’s safe to say that AI Writer isn’t terrible, but it’s certainly not offering up high-quality content, and its assertion that language models will eventually support disinformation campaigns is troubling, to say the least. However, given its sophisticated phrasing that very adeptly approximates human-made text, it seems that tools like this may represent the future of content, so long as the hate can be kept in check.