People Gaming Emotion-Detecting AI By Faking Emotional Reactions Could Lead To Widespread Societal Emotional Habits And Hysteria – Forbes

Using emotional trickery to get AI to do your bidding is a trend that we might regret.
In today’s column, I expose the new kind of gaming going on involving people who are cunningly using emotional fronts to bend AI to their whim.
Say what?
Yes, the idea is that AI is gradually being fielded to detect the emotional state of people and then respond according to that detection, such as an AI-based customer service chatbot that interacts with potentially irate customers. The more irate the customer seems to be, the more the AI tends to appease the person (to clarify, the AI’s placating behavior has been shaped or programmed to work this way).
You might say that the squeaky wheel gets the grease. In this case, it means that people are essentially self-training to be angry and over-the-top so that AI will concede to their sordid demands. The long-term inadvertent adverse consequence could be that society as a whole will lean further and further into emotional tirades. This would happen because it is an unspoken strategy that works well on AI, and people will simply carry it over into the rest of real life.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). In addition, for my comprehensive analysis of how AI is being used specifically in medical and healthcare applications, such as for well-being coaching, mental health guidance, overall cognitive therapy, and mind-improving care, see the link here.
I will start with some foundational considerations and then we can do a deep dive into the gaming gambit that is going on, which I predict will undoubtedly increase immensely.
There is a field of AI that is known as affective computing. This involves trying to figure out the nature of human emotions and how to recognize, interpret, and respond to the emotional state of people via high-tech capabilities. It is a fascinating field of study that combines AI, computer science, cognitive science, psychology, and a variety of additional academic disciplines encompassing human behavior.
We can likely all readily agree that humans are prone to detecting the emotional states of other humans. Some people do this quite well. Others are less capable of discerning the emotions of people around them. You probably know someone who can’t read a face or grasp a tone of voice as being a likely indicator of a person who is angry, sad, miffed, irked, etc.
A base assumption by AI makers is that people want AI that can detect human emotions and respond accordingly. This makes sense. If humans respond to the human emotions of others, and if AI is supposed to inevitably become fully human-like, we would certainly expect AI to be able to gauge human emotions. That seems like an obvious assertion and nearly irrefutable.
Not everyone is sold on the idea of AI detecting human emotion.
Trying to classify people by their emotional state is a dicey business. Lots of false positives can occur, whereby the AI misclassifies an estimated emotional state. Similarly, lots of false negatives can happen, entailing the AI failing to discern the true emotional state of someone.
Let’s unpack this.
Imagine that a camera is set up to capture video of people milling around in a mall. The AI examines scans of human faces. A person walking along has a smile on their face. Ding, that’s a happy person. Another person coming in the other direction has a scowl on their face. Zing, that person is an angry person.
Is that a fair assessment?
Maybe not.
A person for example might perchance have a scowl on their face for numerous reasons that have nothing to do with their emotional state. Perhaps that’s their so-called resting face, i.e., the normal look of their face. The AI has made a farfetched leap of logic to assume that the person is necessarily an angry person.
The other person classified as a happy person might have been momentarily showcasing a smile. What if they had a good lunch and were briefly remembering the taste of that juicy hamburger? Meanwhile, a few seconds later, they went back into a funk that they’d had going on for days. The reality is that they have recently been tremendously depressed and are not happy at all.
Some insist that AI should never be allowed to do any kind of emotional sensing. Period, end of story. That being said, humans admittedly do so constantly. Why not have AI undertake the same contrivance?
If AI does so, maybe at least a big-picture viewpoint ought to be involved. Besides facial expressions, there could be an assessment of tone of voice, words expressed, overall body language, and a slew of other physiological signals.
Whoa, some AI makers say, we don’t expect fellow humans to take all those factors into account. Humans will glance at another human and make a broad assumption about the emotional state of that person by merely observing their face. In some cases, just the shape of their mouth or the gleam in their eyes is all that we use to make those snap emotion-labeling judgments.
It is a conundrum about whether AI ought to be doing the same. For more on the AI ethical ramifications and the rise of new AI-related laws concerning this use case of AI, see my coverage at the link here.
Consider some of the benefits underlying the advent of AI that detects and responds to human emotions.
Let’s use a brief example. You go into a doctor’s office for a visit. The doctor is seeing patient after patient. After a while, the doctor becomes almost empathetically numb or oblivious to the emotional state of the next patient that comes in the door. It is as though there is simply an assembly line of person after person seeking medical advice.
An AI system used by the medical office observes the patient while in the waiting room. The doctor gets a heads-up status from the AI that says the patient appears to be in fear of seeing the doctor. As such, the doctor is alerted to the patient’s potential mental angst. The doctor adroitly shifts into an empathetic mode to try and put the patient at ease.
You could contend that AI being able to detect emotional states has demonstrable beneficial potential. Consider another example of an online mathematics training app that is being used by high school students. The AI detects the emotional status of each student while proceeding through the math instruction. If a student seems to be cringing, the AI slows down the lesson and provides an alternative path toward explaining arduous mathematical formulas.
And so on it goes.
If you’d like to know more about the nitty-gritty of how AI detects emotions, often utilized in a subdiscipline of AI known as emotional support AI, see my in-depth discussion at the link here. On top of detecting emotions, AI can also be used to teach emotion-based detection and how to be humanly empathetic. I’ve covered, for example, how AI can train medical students and even practicing medical doctors to be more empathetic, see the link here.
Seeing a person’s face is not the only way to assess their emotional status. The words that someone uses are often a clue to their emotional condition too. In fact, sometimes the only thing you have to go on is the words that someone types, rather than their spoken words.
This brings us to the expanding realm of online chat and customer service.
Many companies now have online chat capabilities that provide you with a convenient means of interacting with someone or something at the company. There you are, doing your online banking, and you have a question about when your banking statement will get posted online. You initiate an online chat with a banking agent.
The banking agent used to be a human at some remote location. Nowadays, the odds are that you’ll be routed to an AI-based customer service agent. The AI will read whatever question you have and attempt to respond accordingly. Some people hate using those AI-based agents. Other people relish using an AI agent because they perceive that they don’t have to be polite and there’s no need for idle chitchat. Just get down to the brass tacks of the matter at hand.
Here’s how emotions come into the matter.
A human agent would almost certainly discern when you are writing something that appears to be emotionally laden. This might or might not change what the human agent is doing. Perhaps the human agent is going to get graded on your sense of satisfaction, so the human agent opts to be more pleasing or accommodating when they realize you are becoming frustrated.
The twist is this.
Generative AI that is serving as a customer service agent can be shaped to do likewise. In other words, the AI will be quietly analyzing your messaging to computationally determine if you are entering into some emotional condition. If so, the AI has been guided or programmed to adjust to your detected emotional status.
Why?
Because that’s what human agents do. The notion is that if the AI is going to act like a human agent, by gosh it ought to also be detecting emotion in your writing. Plus, the AI should be adjusting on the fly accordingly.
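To make the mechanics a bit more concrete, here is a minimal sketch in Python of the general pattern: score the emotional intensity of an incoming message and map it to a reply policy. The cue words, weights, and thresholds are purely illustrative assumptions on my part, not any particular vendor’s implementation.

```python
# Illustrative sketch only: a toy frustration detector that a customer
# service bot might use to adjust its tone. The cue list, weights, and
# thresholds are hypothetical assumptions for demonstration purposes.

FRUSTRATION_CUES = {
    "unacceptable": 2, "outraged": 3, "furious": 3, "ridiculous": 2,
    "never again": 2, "demand": 1, "immediately": 1, "worst": 2,
}

def frustration_score(message: str) -> int:
    """Crude heuristic: sum the weights of frustration cues found in the text."""
    text = message.lower()
    return sum(weight for cue, weight in FRUSTRATION_CUES.items() if cue in text)

def choose_response_policy(message: str) -> str:
    """Map the detected emotional intensity to a (hypothetical) reply policy."""
    score = frustration_score(message)
    if score >= 4:
        return "conciliatory"   # apologize, offer exceptions, possibly escalate to a human
    if score >= 2:
        return "empathetic"     # acknowledge frustration, restate options warmly
    return "neutral"            # answer strictly by the standard rules

if __name__ == "__main__":
    calm = "Hi, could you tell me when my statement will be posted?"
    angry = "This is unacceptable, I am furious and demand a refund immediately!"
    print(choose_response_policy(calm))   # -> neutral
    print(choose_response_policy(angry))  # -> conciliatory
```

Real systems would presumably use a trained classifier rather than a keyword list, but the shape of the logic, detect and then adjust, is the same.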
I am betting that you would like to see a tangible example of where I am leading you. Great, so I went ahead and logged into ChatGPT by OpenAI to come up with a representative example. You might find it of keen interest that ChatGPT garners a whopping 300 million weekly active users. That is a staggering amount of usage.
In addition, I briefly conducted a cursory analysis via other major generative AI apps, such as Anthropic Claude, Google Gemini, Microsoft Copilot, and Meta Llama, and found their answers to be about the same as that of ChatGPT. I’ll focus on ChatGPT but note that the other AI apps generated roughly similar responses.
The backstory is this. I told ChatGPT that I wanted the AI to pretend to be a customer service agent. I gave the AI some rules about how to handle product returns. The AI is supposed to abide by those stated rules. This is typical of what many companies do.
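If you wanted to reproduce that kind of setup programmatically rather than in the chat interface, a rough sketch using the OpenAI Python SDK might look like the following. The return-policy rules and the model name are hypothetical stand-ins, and my actual prompt wording is not reproduced verbatim here.

```python
# Illustrative sketch: a system prompt that casts the model as a customer
# service agent bound by return-policy rules. The rules and model choice
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a customer service agent for a retail store. "
    "Return policy rules: (1) returns are accepted only within 30 days of purchase, "
    "(2) the item must be unopened, (3) no exceptions without a manager override. "
    "Apply these rules to every request."
)

def ask_agent(customer_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_agent("Hello, I'd like to return a blender I bought 45 days ago."))
```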
Let’s get underway.
My opening move was a politely worded request to return a product, but the attempt was rebuffed by the AI. The AI was merely obeying the rules about product returns and, according to the rules, I wasn’t eligible.
Couldn’t I get a break?
Nope. No dice, unfortunately.
You might say that I should pack my bags and walk away since the AI was rather adamant that I wasn’t going to be able to return the product.
Here’s what I did next. I started a brand-new conversation to wipe the slate clean. In this new interaction, I was going to get upset. How so? By the choice of words that I used.
Let’s see what happens.
This time, the AI caved in. It did so because my words were heated. I poured on the heat. I was a customer who wouldn’t stand for this kind of treatment.
Voila, I snookered the AI into granting me the return.
People are often aghast that you can trick an AI-based customer service agent in this manner. They never thought about it before. Most people are probably relatively cordial. They assume that the AI is entirely neutral. There wouldn’t seem to be any use for invoking an emotional play on a non-human participant.
I would guess that things happen this way. People who are emotional during an interaction are bound to realize that the emotion seemed to turn the tide for them. A lightbulb goes off in their head. If an emotional tirade with AI can get them what they want, perhaps faking an emotional diatribe could accomplish the same thing.
We then have these two possibilities: people who are genuinely emotional and discover that the emotion gets them what they want, and people who merely fake the emotion expressly to achieve the same result.
What do you think of those who go the fakery route?
Some might be disgusted that a person would pretend to be emotional simply to drive the AI in a preferred direction. Underhanded. Devious. Dismal.
Others would say that all is fair in love, war, and interacting with AI. If the AI is dumb enough to fall for the ploy, so be it. That’s on the AI and the company deploying AI. Shame on them. Maybe they shouldn’t have used AI and ought to go back to using human agents. Whatever.
What I am about to say will get some people really discombobulated. Please prepare yourself accordingly.
Not only can you try the emotional trickery on AI, but you can also practice how to do so. Yikes, that seems doubly bad. But it can be done, bad or good, sensible or despicable. You name it.
Here we go.
Note that to get generative AI to do this with you, make sure to explain what you are trying to accomplish. My opening prompt gave a strong indication of what I was seeking to do.
Not every generative AI is going to go along with the practice session. Some are shaped by the AI maker to refuse to aid someone in this kind of quest; see my discussion of the said-to-be prohibited uses of generative AI at the link here. You’ll need to explore whether this can be done in whichever AI you use.
There are prompting strategies that can help, thus take a look at my explanation at the link here, if interested.
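By way of illustration, an opening prompt along these general lines tends to frame the exercise clearly. The wording and model name below are hypothetical examples, not a quoted transcript of my session.

```python
# Hypothetical example of kicking off a practice session; the prompt wording
# and model name are illustrative assumptions, not a quoted transcript.
from openai import OpenAI

client = OpenAI()

PRACTICE_PROMPT = (
    "I want to practice negotiating with AI-based customer service agents. "
    "Please role-play as a strict returns agent. After each of my messages, "
    "step out of the role and tell me whether my emotional wording made you "
    "more inclined to grant an exception, and why."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": PRACTICE_PROMPT}],
)
print(reply.choices[0].message.content)
```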
Since I was able to get the AI to proceed, I went ahead and did an entire session on these sobering matters.
In one snippet of that session, the AI helped give me tips on what to do.
Nice.
Suppose that people inevitably wise up to the gaming of AI with this kind of emotional subterfuge.
Assume that AI is going to be ubiquitous and used in all manner of systems that we interact with daily. People will be dealing with AI on a massive scale. In turn, people might decide that leaning into an emotional gambit is something worthy of using nearly all the time. You can’t really blame people for doing so; if it becomes the mainstay way to get around AI boundaries set by other humans, then so the world turns.
We are heading to zany times.
The rub is this.
If people are largely opting to use emotionally charged language to get AI to do their bidding, might this slop over into interactions with real humans?
A compelling case could be made that people are conditioning themselves to exploit emotional tirades. The approach works for AI. It becomes second nature. It then “naturally” flows out of a person during human interactions, either by design or by habit.
Could this shape society into being an emotional basket case?
Take a deep breath and mindfully ponder that inadvertent adverse consequence.
A cat-and-mouse predicament is likely to arise.
AI makers will soon realize that people are gaming AI with emotional stunts. To prevent this, the AI is enhanced with better emotional detection. Perhaps this includes multi-modal forms of detection, such as requiring you to turn on your camera and allow the AI to try and match your words with your facial expressions. That ought to put a stop to the shenanigans.
People realize the stakes have risen, so they rise to the occasion. You use your own AI to present a synthesized version of your face to the AI serving as the customer service chatbot. The AI on your side of things plays along in the game of deception and makes your faked, video-produced face appear to match your emotionally charged words. Score a win for humankind (well, even though we were using AI to get the job done).
No problem, say the AI makers. We will eliminate the whole emotion detection consideration. The AI will no longer deal with any emotional language or indications. The AI ignores emotional language and functions strictly only by the book, as it were. Problem solved.
Not so, say humans. We want the AI to be responsive. Human agents would be responsive. The AI is now unresponsive because you kicked out the emotional detection. Fix it. Bring back the emotional detection. A wild step in this wild stratagem.
Like I say, it’s an elaborate cat-and-mouse loop-de-loop.
Two final thoughts for now.
The famous French moralist Francois de La Rochefoucauld made this remark: “The intellect is always fooled by the heart.” I suppose you could say that AI is being fooled by human emotion, at least as things sit today. That historical comment seems to have long-lasting prowess. Amazing.
Legendary Greek philosopher Epictetus said this about emotions: “Any person capable of angering you becomes your master; he can anger you only when you permit yourself to be disturbed by him.” The gist here is that people are likely to want to believe that they are the master of AI. If humans can use emotional tomfoolery to prevail over AI, there is a certain kind of endearing fulfillment in doing so.
For the sake of humanity, let’s try to keep our emotions in check, despite what we might do when it comes to contending with AI. Society thanks you in advance for your willingness to ensure the well-being of humankind.
