IN 1970, Masahiro Mori, a Japanese roboticist, coined the term ‘Uncanny Valley’ to describe the discomfort we feel when we see something inanimate that closely resembles a human being but isn’t quite right, something… off.
Think of a human-faced robot or a CGI character that looks almost convincing until – oh wait, does that creature have six fingers?
The Uninvited Guest at the Uncanny Valley Party
Fast-forward to the present day, and we are standing face-to-face with a new guest at the Uncanny Valley party – Artificial Intelligence. AI, with its seemingly limitless potential, has made remarkable strides in recent years.
Among these advancements is generative AI, which can create novel content of its own.

However, despite its impressive strides, generative AI is not immune to the pitfalls of the ‘Uncanny Valley.’ For example, it can construct a seemingly normal scene, but then some glaring mishap can occur – a character in the scene may end up with six or seven fingers or may resemble a creature from a Hieronymus Bosch painting.
In some cases, these anomalies are glaringly obvious. Consider, for instance, Midjourney’s representations of people eating pizza or spaghetti – bold steps in AI-generated artwork, yet somehow eerily disconcerting. Other times, the flaws are more subtle – a vague “iffy feeling” that something’s slightly awry.
Whether glaring or subtle, these irregularities often provoke a negative emotional response and an unconscious urge to avoid such images.
Overlooking the Forest: Generative AI’s Overshadowed Problem
This ‘Uncanny Valley’ effect is a widely studied phenomenon in fields ranging from visual effects in movies to humanoid robot development. However, despite its widespread recognition in these areas, little attention appears to have been directed towards the ‘Uncanny Valley’ problem in generative AI.

What we’re seeing is a classic case of missing the forest for the trees – we’re so enamored of AI’s capability to generate paintings that look as though Van Gogh himself produced them that we overlook the unsettling incongruities.
What if we could change that? What if technology could identify these unsettling responses and course-correct?
Fixing AI’s Uncanny Affliction – A Neuroscientific Approach
Well, there is a path forward if we look to the field of neuroscience for answers. Neuroscience methods have been used in the past to detect similar unwanted responses across various lines of research.
By studying the human brain and its reaction to stimuli, we can potentially attribute certain responses to specific features of an image or scene. Understanding these triggers could help improve the accuracy of generative AI results, making them more appealing and less disconcerting.
Amidst our discussions of potential remedies lies an untapped wealth of information in the realm of human-robot interaction. One of our own publications, funded by DARPA, provides compelling insights into the neural bases of Uncanny Valley responses.

By studying humans’ neurophysiological responses to interactions with robots, we identified changes in brain activity indicative of an uncanny response. When presented with robots that were human-like to an extent but exhibited incongruences that evoked an uncanny feeling, subjects’ brains showed significant changes in specific neurophysiological signatures.
Basically, we developed an “Uncanny Valley Meter” from brain responses.
This research not only presents evidence of the brain’s response to Uncanny Valley scenarios in human-robot interaction; it also equips us with neuropsychological markers that could be leveraged to fine-tune our generative AI solutions. The method sits at the intersection of innovation, cognitive science, and neuroscience, providing a beacon for our journey toward addressing the Uncanny Valley conundrum.
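To make the idea of an “Uncanny Valley Meter” concrete, here is a minimal sketch of how a meter could be scored from an event-related brain potential: take the baseline-corrected mean amplitude of a stimulus-locked epoch in a late time window, then flag stimuli whose scores fall far below the distribution seen for ordinary, congruent stimuli. Everything here – the function names, the time windows, and the z-score cutoff – is illustrative, not the pipeline from the actual publication.

```python
import numpy as np

def uncanny_score(epoch_uv, sfreq, baseline=(0.0, 0.1), window=(0.35, 0.50)):
    """Baseline-corrected mean amplitude (microvolts) of one
    stimulus-locked epoch in a late ERP window. The window and
    baseline bounds (in seconds) are illustrative choices."""
    def mean_in(t0, t1):
        i0, i1 = int(t0 * sfreq), int(t1 * sfreq)
        return float(epoch_uv[i0:i1].mean())
    return mean_in(*window) - mean_in(*baseline)

def is_uncanny(congruent_scores, test_scores, z_cutoff=-3.0):
    """Flag a stimulus whose mean score falls far below the
    distribution observed for congruent, non-uncanny stimuli."""
    mu, sigma = np.mean(congruent_scores), np.std(congruent_scores)
    z = (np.mean(test_scores) - mu) / sigma
    return bool(z < z_cutoff)
```

In practice the scoring would run over many subjects, channels, and trials, but the shape of the computation – a per-stimulus neural score compared against a congruent baseline – is the essence of a meter like this.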
In essence, neuroscience is messaging us loud and clear – it’s time to attentively listen to the whispers of our neural pathways and their eerie tales of uncanny interactions.
In doing so, we might just sculpt a friendlier, more captivating future for our generative AI capabilities.
Striking a Balance: Creativity, Perfection, and the Uncanny Valley
Imagine a system in which, upon generating an image, the subtle responses it triggers in the human brain are measured. An individual’s reaction to specific features of an image could be recorded and used to inform future AI outputs.
For instance, if a generated image of a person with seven fingers triggers a negative response, the system could use that information to ensure that future anthropomorphic creations stick to the anatomically correct finger count.
Feasibly, this could drive iterative refinement. When a negative response is detected, the system flags the offending feature, and the AI learns, grows, and adapts to avoid repeating the same misstep. The value of such a system would be two-fold: not only would we get a generative AI that produces more desirable results, but we would also uncover more about how our human brains perceive art and images.
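The loop described above can be sketched as a toy closed-loop system. Here a simulated `neural_response` stands in for a real neurophysiological measurement, a single hypothetical feature (`finger_count`) stands in for the rich feature space of real images, and the penalty increment is an arbitrary number – all of these are illustrative assumptions, not a description of any deployed system.

```python
import random

def generate(penalties, rng):
    """Sample image features, biased away from previously flagged values."""
    candidates = [4, 5, 6, 7]
    weights = [1.0 / (1.0 + penalties.get(("finger_count", c), 0.0))
               for c in candidates]
    return {"finger_count": rng.choices(candidates, weights=weights)[0]}

def neural_response(image):
    """Simulated viewer response: negative when anatomy looks off."""
    return -1.0 if image["finger_count"] != 5 else 1.0

def refine(n_rounds=500, seed=0):
    """Generate, measure, and accumulate penalties for flagged features."""
    rng = random.Random(seed)
    penalties = {}
    flagged_late = 0  # flags in the second half, to see the loop improving
    for i in range(n_rounds):
        img = generate(penalties, rng)
        if neural_response(img) < 0:
            key = ("finger_count", img["finger_count"])
            penalties[key] = penalties.get(key, 0.0) + 5.0
            if i >= n_rounds // 2:
                flagged_late += 1
    return penalties, flagged_late
```

In a real system, the simulated response would be replaced by a measured neurophysiological signal, and the penalty table by a learned reward model steering the generator; the point of the sketch is only the feedback shape: generate, measure, flag, adapt.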
Does such a system already exist? Indeed, it does, and such predictive modeling is already gaining traction and making an impact around the world.

This, however, is not to suggest that generative AI should strive for a perfect rendition of real-world scenarios and characters. Perfection would strip away the creative charm and unpredictability that AI art can introduce. Instead, the proposed approach aims to strike a balance, allowing AI to generate creative content without tipping too far into the abyss.
In conclusion, the time is now for generative AI to address its Uncanny Valley dilemma. As this disruptive technology continues to evolve, grow and challenge our concepts of creativity and reality, we must ensure that it doesn’t lose sight of human perspective in the process. Incorporating neuroscience methods in the journey ahead can provide critical insights that will aid in overcoming the Uncanny Valley effect seen in AI-generated content, making it more likable, relatable, and ultimately, more human.
So, my call to action is to researchers, innovators, and anyone tinkering in the image-generation landscape: let us bring neuroscience into the fold of generative artificial intelligence. For just as the Uncanny Valley reminds us of our innate human instinct to preserve the likeness of humanity in our creations, this cross-disciplinary approach may well prove the antidote to our AI’s uncanny problem.