ChatGPT and Me: On the Benefits and Dangers of Artificial Intelligence

A group of AI robots answering questions. Photo by Mohamed Nohassi on Unsplash.

Everyone fears that it is only a matter of time before new advances in Artificial Intelligence like ChatGPT run our lives; but isn’t AI a product of reason alone, incapable of the complex blend of rationality, emotion, and intuition that humans use to make decisions?

by Dr. Annis Pratt

May 29, 2023

Human history has been tragically marred by our presumption that understanding a problem with our brains means that we will be able to summon our will to solve it. Excessive trust in purely neocortical reasoning overlooks two facts:

  1. That decision-making involves a complex interplay of reason and emotions and that choice requires more than the neocortex;
  2. That we are as likely to make choices leading to bad social outcomes as decisions for the common good.

The question is: Can Artificial Intelligence like ChatGPT eventually supplant us because of its superior problem-solving ability, creating a world where we play second fiddle?

At the beginning of the twentieth century, we believed that our technological inventions were intrinsically beneficial; we assumed that our sparkling new machines would, in and of themselves, lead to a better society. Silly us: we got a century of total war.

Or, as Justin Gregg puts it in If Nietzsche Were a Narwhal: What Animal Intelligence Reveals about Human Stupidity: “If Nietzsche had been born a narwhal [a medium-sized whale with a single long tusk] the world might never have had to endure the horrors of the Second World War or of the Holocaust.”

Unlike animals, which seem to know what they are doing, human beings – for all of our linguistic skills and our museums and libraries and symphonies – are tragically prone to muck things up. Artificial intelligence uses a reasoning process modeled on ours, but is it capable of the complex blend of language, emotion, and cognition that goes into our philosophies and religions, our ethical values and compassionate mitzvahs?

Understandably, very important people are alarmed that Artificial Intelligence might go off on its own to threaten human society. 

Geoffrey Hinton, for example, quit his job at Google so that he could speak freely about the risks of AI. The MIT Technology Review interviewed him and reported that “Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.”

Hinton worries that, given their ability to answer questions by filtering internet data to create new algorithms, computers can be programmed to make battlefield decisions on their own. “Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians. He wouldn’t hesitate,” he says. “And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

Note who is doing what here: the threat is not in AI’s inherent workings but in the way that human beings choose to use it against each other. Hinton is making a distinction between technical capacity and moral will when he asks what will happen if our technological ability is applied to something inherently immoral. See also a recent Impakter article addressing the same issue: Artificial Intelligence: How Worried Should We Be?

The problem is that America cannot afford to let China or Russia develop more capable military AI than ours, so we are engaged, once again, in a tragic worldwide arms race.

ChatGPT and Me

Soon after I learned about ChatGPT, a chatbot built on a generative pre-trained transformer (the “GPT” of its name), I got into an online conversation about its benefits and threats with one friend who is a businesswoman working in Artificial Intelligence and a second friend who is not but who, like me, is deeply concerned.

Friend 1: OMG, I never thought I would see my computer do this in my lifetime!

Me: What, what?

Friend 1: I gave ChatGPT a lot of information about my business and then put a link on my website for my clients to use. A new client just asked it a whole bunch of questions, and it answered them all in detail (courteously, as I asked it to do), including stuff I hadn’t programmed in!

Me: What do you mean?

Friend 1: The client asked, ‘Of course, you do pro bono work?’ The AI replied: ‘Actually, we don’t. But we can probably give you a discount on our first contact.’ I hadn’t included that information in my instructions: the AI mined the internet and figured it out from our website. I hadn’t dreamed it would answer a brand-new question! What’s more, I spent only one day feeding ChatGPT information about my business – something that would have taken me three weeks of tedious coding before.

(For the technically curious, a rough sketch of what a setup like hers can look like follows our conversation.)

Me: But doesn’t that prove what all of these AI executives are warning us about – that AI might go off on its own and think for itself?

Friend 1: No, it was only looking around on the internet and using what was already there.

Enter Friend 2: The more I learn about chatbots, the more terrified I get.

Me: They only mimic the neocortex; they can’t make human-like decisions on their own.

Friend 2: How about language bots doing things like writing poetry, summarizing the New Testament in seventeen words, and responding to queries in languages they had not been taught, like Bengali? That doesn’t sound like a simple response to a print command, or even to programmed logic (‘if someone says x, respond with y’) – and if you ask one whether it has feelings, it says it does!


Friend 1: Just because it could learn those other things does not mean that it is believable when it says it has feelings and is sentient. All those other capabilities may be real, and yet there may simply be a piece of code that says ‘print these words’ if somebody asks it whether it has feelings. I think the big lesson the current wave of large language models has taught us is that there is a big difference between being able to manipulate language and being intelligent or sentient in any real way. As it turns out, manipulating language can look very convincing, and up until now the only creatures that could manipulate language in such a smart way were indeed sentient and intelligent. That is no longer true. Smart use of language, in other words, was a good proxy for the presence of sentience and intelligence (sentience, here, connotes feelings and sensations).

Friend 2: So are you saying that manipulating language is different from thinking? What is thinking, then? Was Descartes wrong when he said ‘I think, therefore I am’?

Friend 1 to Friend 2: Great question! There’s symbolic thinking – in words – and all of the nonverbal, subsymbolic things we do, which are a really big deal.

Me: Yes, Descartes was wrong: new cognitive philosophy takes emotions, hunches, and intuitions into account in decision-making. (I send them a link to my Impakter article Political Intransigence and Your Inner Elephant.)
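My businesswoman friend’s setup, as she described it, amounts to giving ChatGPT a standing instruction plus a block of business information and then letting it field client questions. Here is a minimal sketch of what that can look like; it is my own guess at the shape of such a thing, not her actual code, and it assumes the OpenAI Python library (the pre-1.0 interface), an illustrative model name, and placeholder business text and function names.

    # A rough, illustrative sketch: not my friend's actual code.
    # Assumes the OpenAI Python library (pre-1.0 interface) and an API key
    # in the OPENAI_API_KEY environment variable; the business text, model
    # name, and instruction wording are placeholders.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    BUSINESS_INFO = """
    We are a small consulting firm.
    Services: ...  Rates: ...  Office hours: ...
    """  # stands in for the material she spent a day assembling

    def answer_client(question: str) -> str:
        """Answer a client's question courteously, grounded in the business info."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative choice of chat model
            messages=[
                {"role": "system",
                 "content": "You are a courteous assistant for this business. "
                            "Answer client questions using the information below.\n"
                            + BUSINESS_INFO},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

    print(answer_client("Of course, you do pro bono work?"))

What startled her, of course, is that the model’s answer to the pro bono question went beyond anything spelled out in instructions like these.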

At this point I decide I need to personally experience what everyone is talking about, so I go to the horse’s mouth and pull up ChatGPT (which is lurking behind the Bing icon in my Microsoft operating system).


Bing’s chat, after giving me the usual definitions, concludes with a sentence that sums it all up: “At its simplest form, AI is a field that combines computer science and robust datasets to enable problem-solving”, giving me as sources the online Britannica, Wikipedia, and IBM.

Chattering and Mattering   

It is ironic that AI inventors should have chosen “chat” as the term for this distinctly non-gossipy technology.  Chatting is, after all, a form of human conversation that is as often random and emotional as it is reason-based, a rich and complex creation that arises spontaneously from people interacting within a social matrix. It is how we have generated all kinds of how-to and what’s-it-all-about information that is the basis of human culture, passed on from generation to generation.  

Writing about the Enlightenment philosopher Denis Diderot, American writer and essayist Adam Gopnik notes that “he knew that we are sooner seduced by someone who is smart enough to enlist our sympathy than by someone who tries to enlist our sympathy by being smart.  Almost alone among his peers, he was presciently aware that chattering could be a way of mattering.” 

Philosopher Elizabeth Anderson emphasizes the contrast between “rational-choice theory, in which individuals make utility-maximizing decisions, occasionally in heartless-seeming ways,” and solving problems “through the experienced problems of real people.”  

Artificial intelligence makes utilitarian decisions, based on what it calculates, logically, to be the greatest good of the greatest number.

But it is heartless. 

Can Artificial Intelligence Run Amok? 

“Microsoft Says New A.I. Experiment Nears Human Insight,” heralds a New York Times headline touting A.G.I., or Artificial General Intelligence, which is software that mimics human thinking in solving tasks it hasn’t been programmed for (it uses GPT-4, a step beyond GPT-3).

Microsoft suggests that both machine learning and human thinking are limited to neocortical functions and that all human tasks have rational solutions. Paradoxically, they add that A.G.I. can and should be “fine-tuned to avoid hate speech, misinformation, and other unwanted information.”

Having seen, on January 6, 2021, when the US Capitol was assaulted, how the dark side of human nature can use conspiracy memes and hate messaging to foment violence and discord, we wonder how this fine-tuning might work.

While curating itself for factual mistakes, how can Artificial Intelligence access our human gifts of heart and soul, of ethics and moral values, to curtail the tragic human harms its machinery might abet?


Last week, I wrote a letter to our local newspaper editor to sound an alarm about some of my neighbors who insist that building affordable housing will lower our property values. Support for my hunch that this was a racist dog whistle was almost unanimous; one respondent, however, accused me of ignorance about property values.  

I decided to ask ChatGPT about my conundrum: 

ChatGPT: “In this context, a human might say that there is no evidence that multi-family, affordable housing will lower the town’s property values. On the contrary, such housing may increase the diversity, inclusivity, and vibrancy of the community, as well as provide more opportunities for people to access quality housing and services. There is no statistical or logical basis for the email’s claim, and it may reflect a biased or discriminatory attitude towards people who need or prefer affordable housing options.” (italics added)

I was grateful for the statistical backup and interested that Chat was programmed to remind me that it wasn’t human, but where did its value statement come from? Just as my AI friend coded her bot to use courteous language with her clients, somebody has programmed my chatbot to counter the racist implications of “property values.”

OpenAI, which owns ChatGPT, is clearly modeling values-based human oversight, but can we expect Microsoft, Google, Facebook, et al. to code ethical principles into their programming?

Isn’t asking tech companies to police themselves like putting the fox in charge of the henhouse? Aren’t some broader controls needed?

As OpenAI Chief Executive Sam Altman put it, testifying before a U.S. Senate subcommittee hearing on artificial intelligence, “I think if this technology goes wrong, it can go quite wrong.” IBM’s chief privacy and trust officer Christina Montgomery and AI critic Professor Gary Marcus also testified about the urgent need to set up a governmental agency “that issues licenses for the development of large-scale AI models, safety regulations and tests that AI models must pass before being offered to the public.”

As Senator Blumenthal, who chaired the hearing, concluded, it is high time that the United States government make the effort to learn all about both the benefits and harms of AI and buckle down to “make the rules for it.”

We were so tragically wrong about the moral neutrality of technology in the twentieth century: maybe this time we can retain our human agency by applying human values – our principles and our ethics and the laws that enforce them – to the use of technology for the common good and on behalf of the interdependent web of being of which we are a part. 

In that way, we might just be able to rescue our beloved planet from what we have done to it.   Let’s employ the full range of our complex intelligence for good rather than evil. We have the capacity:  this time, let’s not muck it up!


This article was originally published on IMPAKTER.
