Decoding emoji usage and emotional bias in LLM: A case study of angry faces and gender interactions in GPT-4o
DOI: https://doi.org/10.3765/plsa.v11i1.6041

Keywords: GPT-4o, emoji, cross-gender bias, sentiment analysis, anger

Abstract
As individuals increasingly turn to large language models (LLMs) for emotional support and companionship, the emotional intelligence of these systems becomes an urgent issue. This study examines how GPT-4o expresses anger in Mandarin (Traditional) through emojis, degree expressions, and judgment expressions across five gender interactions: Male-to-Male (MtoM), Male-to-Female (MtoF), Female-to-Male (FtoM), Female-to-Female (FtoF), and unspecified (None). The study analyzes 59,806 responses to assess whether GPT-4o's emotional output reflects gender biases. Findings reveal that GPT-4o mirrors some human emotional behaviors yet deviates when addressing female recipients by overemphasizing anger. These results may inform sentiment development for LLMs and emotion recognition in GPT-4o, helping to address emotional misalignment that undermines trust and reinforces stereotypes in medical chatbots.
License
Copyright (c) 2026 Zi-Xiang Lin

This work is licensed under a Creative Commons Attribution 4.0 International License.
Published by the LSA with permission of the author(s) under a CC BY 4.0 license.
