
Nathan Drescher / Android Authority
TL;DR
- ChatGPT has been mentioning goblins unusually frequently for some time.
- The mentions were seemingly caused by a training quirk for ChatGPT’s retired “nerdy” personality type.
- A specific instruction in GPT-5.5 should tamp down on the inappropriate goblin mentions.
Earlier this week, a post on the ChatGPT subreddit pointed out an eyebrow-raising instruction in the system prompt for the new GPT-5.5 model: an explicit restriction on mentioning goblins, gremlins, and trolls, among other things, unless strictly relevant to the query at hand. OpenAI has now addressed its recent models’ fascination with the creatures, and it turns out it’s mostly down to the chatbot’s former nerdy-style personality mode.
ChatGPT lets users choose from a number of preselected style and tone combinations, which OpenAI calls personalities. Options include professional, efficient, and quirky, as well as, at one time, a personality OpenAI wanted to be “unapologetically nerdy.” In a blog post, the company says that although the nerdy setting applied to only about one in 40 ChatGPT responses while it was available, it really liked talking about mythical creatures: two-thirds of all uses of the word “goblin” came from interactions in the nerdy style, which OpenAI retired last month.
ChatGPT’s mentions of the word “goblin” apparently increased nearly 40-fold between GPT-5.2 and GPT-5.4. OpenAI says that in building out its nerdy archetype, its engineers “unknowingly gave particularly high rewards for metaphors with creatures,” which led that personality style to reference not only goblins but also ogres, trolls, and gremlins far more often than you’d expect.
But because OpenAI started GPT-5.5’s training before it figured out why ChatGPT was talking so much about goblins, the behavior continued in testing, along with overuse of other “tic words,” including “raccoon” and “pigeon.” In the end, the latest model shipped with specific instructions to avoid these words unless absolutely necessary.
It’s troubling to know that widely distributed AI models can develop pervasive behavioral quirks that confuse even the engineers working on them, but at least this one in particular was relatively harmless.
If you’re a ChatGPT user, did you notice it talking about goblins more than it should have? If yes, has it stopped? Let us know about your experience in the comments.