• Friendly reminder: The politics section is a place where a lot of differing opinions are raised. You may not like what you read here, but it is someone's opinion. As long as the debate remains respectful, you are free to take part. Also, the views and opinions expressed by forum members may not necessarily reflect those of GBAtemp. Messages that the staff consider offensive or inflammatory may be removed in line with existing forum terms and conditions.

Chat GPT is clearly pandering to the CCP

x65943

hunger games round 29 big booba winner
OP
Supervisor
GBAtemp Patron
Joined
Jun 23, 2014
Messages
6,194
Trophies
3
Location
ΗΠΑ
XP
25,998
Country
United States
I know we all like to talk about how US companies basically capitulate to China, but this is next level

(attached: five screenshots of ChatGPT responses)

What are your thoughts? Should a US company stifle free speech in America to make 3rd parties happy?
 

Veho

The man who cried "Ni".
Former Staff
Joined
Apr 4, 2006
Messages
11,347
Trophies
3
Age
42
Location
Zagreb
XP
39,836
Country
Croatia
All those responses are largely nonsensical. Can you ask it to tell you about those things instead?
 

The Real Jdbye

*is birb*
Member
Joined
Mar 17, 2010
Messages
23,208
Trophies
4
Location
Space
XP
13,734
Country
Norway
I know we all like to talk about how US companies basically capitulate to China, but this is next level

View attachment 374609View attachment 374610View attachment 374611View attachment 374612View attachment 374613

What are your thoughts? Should a US company stifle free speech in America to make 3rd parties happy?
I think you missed the EOF with that thread title.

You're asking it things in English and expecting responses in English; those responses come from a model trained on English data. Obviously, the responses will reflect what the majority of the English-speaking internet thinks. People on the internet aren't necessarily the nicest, and sometimes the responses reflect that. That necessitates hardcoded canned responses for certain prompts, in order to avoid replies that could be offensive.

OpenAI worked hard to make sure that ChatGPT won't give offensive or controversial responses, after their earlier attempts with GPT did not go quite as well. You don't have to look further than Bing Chat to see what happens when that effort isn't made: it goes completely off the rails on a regular basis, as plenty of examples on YouTube and elsewhere show. Microsoft has made some efforts to rein it in, but they haven't been entirely successful.

I could agree that ChatGPT's list of blacklisted topics is a little heavy-handed, but then again I don't know what responses it would have given without that blacklist, so it might be for the better. I have a strong suspicion, though, that the blacklist is applied to the response it would give rather than to your query.
It makes far more sense to filter on whether the generated response actually contains something offensive or controversial (to certain people) than to filter on queries that merely could produce such a response. You have to generate the response and check it in order to know for sure.
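The distinction drawn here — checking the generated output rather than the incoming query — can be sketched in a few lines. This is a toy illustration, not OpenAI's actual pipeline; both `generate` and `is_flagged` are hypothetical stand-ins (a real system would call a language model and a trained moderation classifier):

```python
# Sketch of output-side moderation: generate first, then check the result.
# Both helper functions are hypothetical placeholders for illustration.

CANNED_REPLY = "I'm sorry, but I can't help with that."

def generate(prompt: str) -> str:
    # Stand-in for a language model call.
    return "model output for: " + prompt

def is_flagged(text: str) -> bool:
    # Stand-in for a moderation classifier; here, a toy keyword check.
    blocked_terms = {"offensive_term"}
    return any(term in text.lower() for term in blocked_terms)

def moderated_reply(prompt: str) -> str:
    response = generate(prompt)
    # Filter on the actual response, not the query: an innocent-looking
    # query can still produce a flagged response, and vice versa.
    if is_flagged(response):
        return CANNED_REPLY
    return response
```

The point of the design: filtering queries alone would either over-block (rejecting harmless questions that merely resemble risky ones) or under-block (missing harmless-looking prompts that elicit bad output).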

Additionally, it has probably never seen some of these things in the context of ASCII art, so it simply doesn't know how to reply. The default when it doesn't know is a canned response; I've had it do that in the past when asking about specific games, even though it could answer questions about any other game.

The ASCII art certainly doesn't indicate that it has any idea what it's talking about, so I really think you're overreaching here.

In the end, this is an AI: it can't think and it has no feelings, so I don't think "free speech" applies here. Do you really want ChatGPT to be a perfect representation of the cancer that is your average internet user? Not only would it be rather unpleasant to interact with, it would be a bad look for OpenAI.

However, this does raise one valid question. OpenAI employs contractors to help teach the AI the difference between good and bad responses, essentially by asking them to rate responses en masse. We don't know the opinions or morals of these people, or what they are most representative of as a whole. I am sure OpenAI didn't specifically hire people who align with their own opinions in order to steer the AI; they need a large sample of all sorts of people to get a good representation of what the average person considers a good or bad response. But that doesn't mean these people aren't biased. The larger and more diverse the sample, the more representative it is of the average person — but you can only go so large before the cost becomes unviable.
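The sample-size argument above can be made concrete with a toy calculation (the numbers are invented for illustration; real rater pools and rating scales are obviously more complicated): individually biased raters can still produce an accurate aggregate, provided their biases pull in different directions.

```python
# Toy illustration: each rater scores a response with a personal bias,
# but a diverse pool whose biases offset each other averages out.
true_quality = 0.7  # hypothetical "real" quality of a response

# Diverse pool: biases spread symmetrically around zero.
rater_biases = [-0.2, -0.1, 0.0, 0.1, 0.2]
ratings = [true_quality + b for b in rater_biases]
aggregate = sum(ratings) / len(ratings)  # close to 0.7: biases cancel

# Small, like-minded pool: biases all lean the same way.
skewed = [true_quality + b for b in [0.1, 0.2]]
skewed_aggregate = sum(skewed) / len(skewed)  # close to 0.85: systematically high
```

A homogeneous pool, however large, keeps its shared bias; diversity is what makes the cancellation work.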

Whatever the case, I am sure any perceived bias is not intentional on OpenAI's side; it's either coincidence or an unintentional bias, coming from the dataset the AI is trained on or from the sample of contractors they employed. As they keep improving the AI and growing and refining the dataset, this is something that will improve over time.

In the end, it's being trained on text written by humans and there are humans teaching it the difference between good and bad. Humans are flawed, so the output will also be flawed. Until AI learns to self-improve that will always be the case, but I think when that happens we ought to be scared.
 
Last edited by The Real Jdbye,

JuanMena

90's Kid, Old Skull Gamer & Artist
Member
Joined
Dec 17, 2019
Messages
4,818
Trophies
2
Age
30
Location
the 90's 💙
XP
9,758
Country
Mexico
I too got offended by ChatGPT when I asked it if it knew GBAtemp.

It said it was a social platform for game and tech enthusiasts, but that it wouldn't recommend the site because it was dangerous.
Then I asked if it knew who :p1ng: was and it said "No".
That's the ultimate offense!
 

Maximumbeans

3DS is love, 3DS is life
Member
Joined
Jun 7, 2022
Messages
666
Trophies
0
Location
England
XP
1,467
Country
United Kingdom
It's certainly interesting that it claims it can't deal with political figures then contradicts itself between U.S. figures and Chinese ones.

However, I just got this with no effort:

(attached: gptchina1.png, gptchina2.png — screenshots of ChatGPT responses)


So maybe not all is as it seems. It could be that they just want to avoid China's ire with certain topics.
 

Taleweaver

Storywriter
Member
Joined
Dec 23, 2009
Messages
8,685
Trophies
2
Age
43
Location
Belgium
XP
8,067
Country
Belgium
Great... We're politically weaponizing ASCII art now? :P

More serious response: it takes more than a few random attempts and funny responses to make claims of bias. Imagine taking offense at a certain message in a certain book in a library and trying to use that as convincing evidence of bias in the whole library... And then remember that ChatGPT probably has more data than all libraries combined (but only a fraction of all the librarians).
 

End_eR

Well-Known Member
Newcomer
Joined
May 27, 2023
Messages
48
Trophies
0
Age
25
XP
95
Country
United States
ChatGPT can't even tell me how many Ns are in Mayonnaise. I don't think it's trying to bring about a global CCP reign.

(attached: screenshot of the exchange)
 

The Real Jdbye

*is birb*
Member
Joined
Mar 17, 2010
Messages
23,208
Trophies
4
Location
Space
XP
13,734
Country
Norway
ChatGPT can't even tell me how many Ns are in Mayonnaise. I don't think it's trying to bring about a global CCP reign.

View attachment 374978
Interesting quirk of how the AI works. It has no concept of letters; it doesn't even have a concept of words. All of the trained data is represented as "tokens", which may correspond to single words, multiple words, syllables, or in some cases individual letters. Common words or combinations of words get their own token so they can be handled more efficiently. Less common words don't get a token of their own; they're represented by the syllables and/or letters they're made up of, since those pieces are far more common than the rare word itself, and giving every rarely used word its own token would bloat the vocabulary.
Likely, "mayonnaise" is stored as a token because it's a common word, so the model has no idea what letters it's made of. What it's probably doing there is seeing which tokens the word breaks into, but it doesn't really know what letters those tokens contain, and by some association in its data it decides "ma" (probably a token) corresponds to N.
Clearly, it's also not very good at counting, because all those numbers are wrong.
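For contrast, counting letters is trivial in ordinary code, which underlines that the failure comes from the token-level representation rather than from the task itself (the token split below is a made-up segmentation for illustration, not the model's actual tokenization):

```python
# Counting letters directly — the task a token-based model struggles with,
# because it never sees individual characters.
word = "Mayonnaise"
n_count = word.lower().count("n")
print(n_count)  # 2: "mayonnaise" contains two n's

# A toy illustration of why: if the model stores the word as subword
# tokens (hypothetical segmentation), letter boundaries become opaque —
# nothing in ["May", "onna", "ise"] says how many n's the whole word has.
tokens = ["May", "onna", "ise"]
assert "".join(tokens) == word
```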
 
