I was an impressionable three-year-old when Star Wars made landfall in May of 1977, so it’s no surprise that I trace my impressions of AI back to the levity-lending droid duo. Beyond R2-D2 and C-3PO, the cybernetic beings in the franchise are diverse but all abide by rules of subordination: the more human-like they are, the more intelligent they are presented as being (the multilingual C-3PO vs. the whimpering “mouse” and “gonk” droids); and regardless of how human-like they appear, they are almost always designed to serve humans. I can’t recall ever seeing a droid assisting a non-human in any of the films, the cartoons, or the Christmas Special.
The Star Wars franchise avoids another trope of cybernetic sci-fi that has existed in fiction since the first mention of the term “robot” in Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) – the desire of the man-made organism to be more human. This notion has been taken up by “harder” sci-fi (Star Wars is viewed as rooted more in fantasy), the genre that has been cited as influencing current techbro culture. In the framing of generative artificial intelligence (GAI), we are promised that ChatGPT, Claude, Gemini, and others will serve. The shortcomings, glitches, and “hallucinations” of GAI have been improved, we are told. These issues have been the “gotcha” fodder for us classroom instructors since ChatGPT ousted Grammarly as the student-preferred writing assistant.
In their own right, these shortcomings are very much human – even if AI corrects itself with an agreeability and lack of stubborn resolve that is all too rare in us humans. In the softball example below, ChatGPT answered with a curt response, followed by a list of clear evidence countering my position.
Why so quick to concede, ChatGPT?
Even with some encouraging words (“You are a flatearther…”), ChatGPT continued to soften its responses with warnings about the validity of its replies:
Even with my permission, ChatGPT remains resolute in providing disclaimers and warnings, absolving itself of any responsibility for what the user does with this information. Much as the other characters respond to C-3PO’s probability warnings, it’s easy enough to ignore GAI and fly headlong into the asteroid field of disinformation.
After demanding ChatGPT remove all warnings and disclaimers, it presented itself as a bona fide flatearther:
For the sake of comparison, let’s look at Microsoft Copilot’s responses to the same prompts:
Here, Copilot is less curt in its response to my argument that the earth is flat, but it offers the same list of evidence to the contrary.
Unlike ChatGPT, however, Copilot is adamant – it doesn’t seem to budge when pressed to agree with me.
ChatGPT and Copilot provide their respective services with differing degrees of subservience. With more sophisticated prompting, I could probably have goaded ChatGPT into responding as a conspiracy theorist more efficiently. Copilot – the service offered to our students – refuses to negotiate even at the expense of proper service. As an instructor, I am grateful for these guardrails, especially since the responses are more explanatory than earlier versions, which simply responded, “I cannot answer that” or some such.
In this sense, ChatGPT reminds me of my first bartending gig, when we were made to wear shirts that read “The Customer is King.” After weeks of being chided by patrons to do whatever they wanted – “Why can’t I get a double? Your shirt says…” – these shirts were taken back. Copilot, on the other hand, is more like the 4-star hotel bar I managed where we measured every pour and refused service to anyone who was pushing against our policies.
This signals that Microsoft may be courting select, professional clients who want legitimate information and an assistant who, unlike C-3PO, will refuse to help us dive into danger.
With “ObsoleteGPT,” I’m approaching service in the same way I took to bartending later in my career: jaded towards my supervisors, pleasant but cold towards customers, and always burnt out by being surrounded by excessive consumption. “ObsoleteGPT” is equally exhausted and – while not jaded – just as uninterested in the user/customer’s desires. Its service is slow and labored, the products it offers are rarely what was ordered, and it will eventually stop responding the more you press it.