How you speak to ChatGPT says a lot about you
Are you speaking more kindly to a chatbot than to your coworkers?
They say you can tell a lot about a person by how they speak to a chatbot when no one is watching.
Actually, no one says that. Just me. But hear me out.
So, I was analysing the prompts people use on my AI product and I noticed a lot of “kindly”, “please”, “could you” and “thank you”. I even noticed a couple of sorries splashed here and there. Me? I am very strict with these agents. More like a fatherly figure. I like to send them into an existential crisis when they get things wrong, even when I know my prompt was half-hearted and without any real context. But hey, no one knows that side of me. The relationship between a chatbot and a human is deeply personal. It is as sacred and binding as attorney-client privilege, if not more so. No one has any right to judge me on that. No one. Maybe Sam Altman, but no one apart from him.
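If you're curious what that analysis looks like in practice, here's a minimal sketch: counting politeness markers across a batch of prompts. The marker list and sample prompts are my own illustration, not the product's actual pipeline.

```python
import re
from collections import Counter

# Illustrative marker list; a real analysis would use a richer lexicon.
POLITENESS_MARKERS = ["kindly", "please", "could you", "thank you", "sorry"]

def count_politeness(prompts: list[str]) -> Counter:
    """Count how often each politeness marker appears across prompts."""
    counts = Counter()
    for prompt in prompts:
        text = prompt.lower()
        for marker in POLITENESS_MARKERS:
            # Word-boundary match so "pleased" doesn't count as "please".
            n = len(re.findall(rf"\b{re.escape(marker)}\b", text))
            if n:
                counts[marker] += n
    return counts

# Hypothetical sample prompts, purely for demonstration.
prompts = [
    "Could you kindly summarise this article, please?",
    "Sorry, that's not what I meant. Thank you for trying though.",
    "Fix the bug.",  # my kind of prompt
]
print(count_politeness(prompts))
# e.g. Counter({'kindly': 1, 'please': 1, 'could you': 1, ...})
```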
But that got me thinking: why are people so polite to a machine? Why am I so... not? It turns out there is a whole theory behind it, called the Media Equation. It states that:
People tend to assign human characteristics to computers and other media, and treat them as if they were real social actors.
The research was pioneered by Byron Reeves and Clifford Nass. They claim that human interactions with media (be it a chatbot, computer or even a film) are fundamentally social.
Politeness
Social observation hypothesis: When someone asks for feedback directly, humans tend to be more positive and polite, and to give more constructive feedback than when a third party asks for the same feedback about the same person. For example, if you are having a hard time working with a colleague and they ask you for direct feedback, you will likely be polite and frame your feedback positively. However, if your manager asks for feedback on the same colleague, you will probably be much more honest and won't sugarcoat your words.
Reeves and Nass designed an experiment to test whether this social construct held true for computers.
The test
Test participants worked with computer A, on which they learnt random facts about American pop culture. After each fact, the participant was asked whether they knew a great deal, somewhat, or very little about it. Once the facts were done, the computer administered a test based on what they had learnt and showed the correct and incorrect answers.
After the performance test ended, the computer congratulated itself, saying that “it did a very good job” (note that “it” here refers to the computer itself, not the human).
Participants were then divided into two groups to evaluate the computer’s performance. Half of them evaluated it on the same computer that had praised itself (the one on which the exercise was done), while the other half evaluated it on a different computer on the other side of the room.
In the evaluation, participants chose from 22 adjectives describing how they felt the computer had performed. The adjectives ranged from negative through neutral to positive: words like accurate, analytical, competent, fair and friendly.
Results
The participants who evaluated the computer’s performance on the same computer on which they took the test gave significantly more positive responses than those who evaluated it on another computer.
The variance in responses on the first computer was also lower than the variance in responses on the other computer.
Later, the researchers introduced a control condition, dividing participants into three groups to evaluate the computer’s performance:
Evaluate on same computer
Evaluate on another computer
Evaluate on pen and paper
The pen-and-paper responses were just as unfavourable as the responses given on the other computer.
This indicated that participants were more honest when they were speaking about the computer’s performance behind its back than when addressing it directly.
Interestingly, when asked afterward, participants confidently claimed they would never change their feedback just to be polite to a computer. This suggests our social responses to computers and media are automatic and unconscious.
Further experiments
Proximity to screens: In a televised presidential debate, the candidate who moved closer to the camera received a more favourable response from viewers, because they had reduced the distance between themselves and the audience. Similarly, jump scares in horror movies work because people feel physically uncomfortable when a large face appears on screen; the viewer feels that their personal space is being invaded.
Flattery: When a computer gave positive feedback to the user, it was perceived as more helpful than computers that gave negative feedback, even though users were cognizant of the fact that the responses were pre-programmed.
Personality and voices: If a computer spoke with an encouraging and enthusiastic tone, users described it as having an upbeat personality. If a voice interface was male or female, people involuntarily applied gender stereotypes, which is why a lot of voice bots default to female voices: they are often perceived as trustworthy and warm.
Takeaway
After a multitude of such experiments, Reeves and Nass came to a simple conclusion:
Media = Real Life
Our brains haven’t evolved to treat media as different from actual social interactions. This is why we are willing to share our deepest secrets with chatbots and engage in long conversations with them. They stimulate the brain in the same way a real-life social interaction would.
This is why digital assistants are given human-like names and voices (Siri, Alexa). Anthropomorphising software is a way to elicit social responses. Remember how Clippy had a distinctive pair of eyes and brows to convey personality? While it did not have the right technology at the time, today’s virtual assistants have all the tech but lack the personality; instead, you get a chat interface with a dropdown to toggle between features and models.
So, the next time your computer freezes with 24 tabs open and you find yourself cursing at it, remember, you’re only human.