Much Ado About GPT-3

Set aside infrastructure deficits, costs and gaps in digital skills, and fear of technology ranks among the top reasons potential users dismiss new tools or hesitate to adopt them. Perhaps the media is to blame: films like The Matrix or, more recently, Netflix’s The Social Dilemma have us questioning what we really know about the world of algorithms and about real or virtual realities. Or perhaps it is the unsavoury conduct of powerful technology companies in the recent past, the massive data harvesting and racketeering. This fear takes varying forms, but with the advancement of technologies like artificial intelligence it has become something more substantive, touching the economic sustenance of workers who may or may not be displaced by them.

 

Not too long ago, The Guardian published an article written by a robot. In it, OpenAI’s GPT-3 (Generative Pre-trained Transformer 3) language model tries to allay readers’ fears about the future of a world in which technologies like it exist. Tongue in cheek, GPT-3 plays on pervasive fears about whether human intelligence will still be needed in a world of robots, and on the familiar narrative of a world overrun by destructive machines that become more powerful than the humans who created them.

 

Unsurprisingly, the article caused quite the stir. Readers were by turns fascinated and unimpressed: some thought the piece read like juvenile writing, while others were apprehensive because the article had not been published as produced but had been stitched together by the publication’s editorial staff from eight different outputs.

 

GPT-3 was released in June 2020 and is currently the world’s largest natural language processing model, with 175 billion machine-learning parameters (or weights, the values the model learns during its training). Its immediate predecessor, GPT-2, had 1.5 billion parameters. Since OpenAI released the GPT-3 API for researchers to try out various projects, the model has written newsletters, song lyrics, fiction, poetry, news articles, gaming dialogue, op-eds and even lines of code. It could potentially do more.

However, many are asking: how much more? And what will that mean for the economic sustenance of workers in the many lines of employment it could potentially disrupt? Will the effects on female workers in those lines of employment be significantly different?

 

“The chances are high,” says Blaise Aboh, a tech architect exploring the impact of artificial intelligence on data analytics and journalism.

 

Language-prediction and text-generating software and bots are not new, even to the media. What makes GPT-3 stand out is the sheer volume and breadth of information it has been trained on: whatever prompt it is given, whether to write a line of code or a poem, it can pull from billions of data points to produce comparable output, only a whole lot faster.
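
For readers curious about what that prompt-and-output exchange looks like in practice, below is a minimal, illustrative sketch of a single completion request using OpenAI’s Python client as it worked during the private beta. The prompt text, parameter values and placeholder API key are hypothetical examples, not OpenAI’s recommended settings, and access still requires an approved key.

    # Illustrative sketch only: one GPT-3 completion request via OpenAI's
    # private-beta Python client (circa 2020). The prompt and parameter
    # values here are hypothetical examples.
    import openai

    openai.api_key = "YOUR_API_KEY"  # issued only to approved beta users

    response = openai.Completion.create(
        engine="davinci",      # the largest GPT-3 engine offered in the beta
        prompt="Write a two-line poem about newsrooms and machines.",
        max_tokens=60,         # cap the length of the generated text
        temperature=0.7,       # higher values produce more varied output
    )

    print(response.choices[0].text.strip())

The point of the sketch is simply that the human supplies a short prompt and a few settings, and the model returns a continuation; the speed and breadth come from its training data, not from anything the user writes.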

 

However, there are caveats. As impressive as it is, the model is still a human-made creation and has its limitations, especially when set against notions of human intelligence, or against the fact that scientists are still researching much of the complexity of how our brains work.

 

Does GPT-3 grasp the nuances and contexts of lived reality? Is GPT-3 conscious in the way that we understand human consciousness? Does it understand the sentences it strings together, the poetry it writes? Can it reason? And because poetry and news articles always need context and nuance, it is perhaps far-fetched to assume that one day our newsrooms will consist of high-speed gadgets and robots churning out articles at great speed when prompted.

 

According to Aboh, this iteration of the language model shows that a great deal more is possible, and there is no telling what direction the technology could take in the coming years.

 

“I think that GPT-3 has the ability to understand causality,” he says. “We like to say that machines do not have emotions, they don’t have a conscience, but I think GPT-3 is getting there.”

 

Based on projects published by researchers with access to the tool, it is apparent that the model is far from perfect: it still makes silly errors or lapses into incoherence, even with the volume of information it has to pull from.

 

Still, whether it will take the livelihoods of people who work with language in any form or format depends on three factors: access, pricing and use cases.

 

Unlike GPT-3’s predecessors, the model has not been open-sourced. OpenAI instead released an API to allow researchers to try out the tool on a diverse range of test cases, with the possibility of opening up wider access to more users later.

 

In addition to commercialization, the researchers at OpenAI cited the need to “more easily respond to the misuse of the technology” as one of the reasons why open-sourcing the model was not the path to take.

 

But the waitlist is long. Two months after applying, Aboh says, he has yet to get access to the API. This, he adds, is where gender could add a layer of complexity to the conversation around job preservation.

 

“Because it is a priority software and you have to be on a waitlist, it might be important to ask how is the access [staggered] between male and female researchers?” he said. 

 

Everyone who has access to GPT-3 is using it to build what is important to them, Aboh explains. If enough female researchers and technology journalists do not have access to the model to carry out test projects specific to their lived experiences, then there is a problem. GPT-3 has already been fed millions of data points from online material that is biased, and that bias reinforces stereotypical portrayals of women in some of the material the model has produced or will produce.

In addition to the question of access, there is the cost implication. Reports regarding the price of GPT-3 describe a monthly subscription whose tier depends on the volume of work the buyer intends to produce. According to a Reddit user, the $100-a-month Create plan corresponds to roughly 3,000 pages of text, while the $400-a-month Build plan corresponds to about 15,000 pages of text per month.
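
Taken at face value, those reported figures imply a rough price per page. The short calculation below is illustrative only, assuming the Reddit-reported tiers are accurate and treating a “page” simply as the unit quoted in those reports.

    # Illustrative arithmetic only, assuming the Reddit-reported pricing tiers.
    plans = {
        "Create": {"monthly_fee_usd": 100, "pages_per_month": 3_000},
        "Build": {"monthly_fee_usd": 400, "pages_per_month": 15_000},
    }

    for name, plan in plans.items():
        per_page = plan["monthly_fee_usd"] / plan["pages_per_month"]
        print(f"{name}: about ${per_page:.3f} per page "
              f"({plan['pages_per_month']:,} pages for ${plan['monthly_fee_usd']} a month)")

On those assumptions, the Create plan works out to roughly three US cents per page and the Build plan slightly less, which underlines why only buyers producing text at real volume are likely to get value from a subscription.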

 

GPT-3’s current customers include Casetext, a legal research company looking to streamline research hours using the model, and Reddit, which is looking to better moderate interactions on its platform. Those who eventually pay for the tool will have to produce a substantial amount of text regularly to justify their subscription.

 

For now, it is still in private beta testing and OpenAI says it is treading carefully with the tool. 

 

“I think it is important for ethical bodies to know when to say no, when to ask, when do we stop?” says Aboh. With the likelihood of disinformation, and of inaccuracies being passed off as legitimate information, these are all valid concerns.

 

For journalists asking whether GPT-3 will be the end of their careers, Aboh says, what makes sense is to keep abreast of what researchers are creating with the tool in various fields, and to begin considering which areas of the job its introduction could affect positively.

“Studies show that we cease to exist without human interaction,” writes GPT-3 in The Guardian, and this is what we must remember as we look towards the future.

