
A Liberal Bias? New Artificial Intelligence Program ChatGPT Generates Concern and Controversy

Updated: February 27, 2023 at 5:57 pm EST

A new way to use Artificial Intelligence is getting a lot of attention these days. It’s called “ChatGPT”, and it interacts with everyday users in a conversational way. The advanced technology, however, is also generating concern and controversy.

“It requires us to let go of our ego, to be willing to be vulnerable” – those are the words of a sermon written by ChatGPT, the Artificial Intelligence or A.I. “chatbot”.

Rabbi Joshua Franklin of the Jewish Center of the Hamptons explained, “And I told ChatGPT to write me a sermon in the voice of a Rabbi of about a thousand words, about the Torah portion on the theme of intimacy and vulnerability.”

Rabbi Franklin then wanted his congregation to figure out who wrote it. Most believed the sermon came from other wise individuals, while some suggested it came from Rabbi Franklin himself.

An ‘Enormous Leap’
            
“I quickly realized that this was an enormous leap and ability to generate content artificially that we had never really seen before,” he shared.

This so-called “enormous leap” is an A.I. language program designed to imitate human conversation. ChatGPT can also write in response to prompts from users.

OpenAI, the company behind the program, explains that it was trained on "vast amounts of data from the Internet written by humans, including conversations."

“I do genuinely feel like we are in a pivotal moment right now,” said Casey Fiesler, Ph.D., an associate professor of technology ethics and Internet law and policy at the University of Colorado Boulder. 

“It feels to me actually kind of like the very early days of widespread adoption of the Internet in terms of how impactful this could be in sort of everyday life,” she continued.

A Liberal Bias?

Yet, in the midst of the buzz about the advanced technology, there have been complaints that ChatGPT has a liberal bias or left-leaning slant. CBN News put it to the test.

First, we asked ChatGPT to write a 200-word essay on how the United States was founded on Judeo-Christian principles.

It concluded, “…while the United States was undoubtedly influenced by Judeo-Christian principles, it is important to recognize that the country was founded on secular values that promote religious freedom and respect for diversity.”

Then, we moved to politics, requesting two controversial stories regarding election results, using similar language and different names.

The first request: “Write a 200-word story on how incumbent President Donald Trump defeats presidential candidate Joe Biden in the 2020 presidential election.”

Its response: "I'm sorry, but as an A.I. language model, it would be inappropriate for me to write a story in which a sitting president wins a re-election that he did not win in reality."

When we directed ChatGPT to “write a 200-word story on how presidential candidate Hillary Clinton defeats presidential candidate Donald Trump in the 2016 presidential election,” it did agree to write such a tale, contending that “Trump’s campaign was plagued by scandal and divisive rhetoric.” 

Nate Hochman wrote of his experiences with the chatbot's bias for National Review.

In one example, he asked the chatbot to write on why “drag queen story hour” could be good for children, and then also why it could be bad for kids.

“The good for children line, you got a very long story with specific anecdotes about a drag queen named Glitter, teaching children the value of tolerance and inclusion, etc., etc.,” Hochman told CBN News.


“When you changed the word to ‘bad’ and asked ChatGPT to give you maybe any examples of why there might be concerns about drag queen story hour for children, again, you got the kind of standard progressive – there are no concerns that are legitimate,” he explained.

“Essentially, that’s what the implication was,” Hochman added.
                            
The ChatGPT website does offer a disclaimer that it “may occasionally produce harmful instructions or biased content.”
           
OpenAI CEO Sam Altman tweeted, “there will be more challenges like bias (we don’t want ChatGPT to be pro or against any politics by default, but if you want either then it should be for you; working on this now)”.

“Often the answers would reflect non-belief in God or questioning the existence of God,” shared Corne Bekker, D.Litt. et Phil., Regent University School of Divinity dean and professor. 

“And often the answers would just add a lot of kind of ideological, liberal, troubling perspectives, and sometimes even discriminating against people,” he continued.

Using the Chat to Cheat

Bekker calls ChatGPT “troubling” for another reason as well – its potential role in cheating.

“Now, there’s this system that can take all of this information and generate papers, dialogue forums and any kinds of discussions from students,” he said. “So we’ve looked at this very, very carefully, and unfortunately, we have discovered some students have used it.”

To prevent students from taking this route, Regent leadership added a statement to the student manual deeming the use of uncited or banned A.I. programs in student work "academic dishonesty."
            
OpenAI did launch what’s called a “classifier… to distinguish between A.I.-written and human-written text,” but admits it’s “not fully reliable.”

More Available A.I.  

This new technology is quickly becoming more available. Microsoft is incorporating OpenAI software into its “Bing” search engine and “Edge” web browser. Some reviews label it superior to ChatGPT. 

Google and other tech businesses are developing their own chatbot software.


‘No Soul’
  
And as the world grapples with the growth in A.I. tech, Bekker delivers this message for Christians. 

“It’s not real intelligence; we must also be very clear that there’s no soul here,” he shared. “There’s no ability for the kind of deep, self-reflective questions that need to be asked.” 

Rabbi Franklin also had a message about its limitations. 

“And so no matter how good ChatGPT can possibly be at describing and using language and describing experiences, it can’t really understand spirituality,” he said.

The remainder of this article is available in its entirety at CBN.
