Why AI-Generated Beauty Leaves Us Cold

Generative AI can now produce images, messages, and moods designed to feel deeply human. But increasingly, audiences report a strange detachment—even unease. This isn’t about bad design. It’s about what happens to human psychology when emotional signals no longer come from real people.


We are entering a time when emotions can be designed. With the rise of generative AI tools, it is now possible to fabricate beauty, nostalgia, even awe—on demand. Cities are adopting these tools to create public campaigns, marketing aesthetics, and digital narratives that feel familiar, comforting, and emotionally persuasive. But something strange is happening.

The more these simulated emotions proliferate, the more people report a growing sense of emotional detachment. These images and experiences look perfect, sound perfect, even feel perfect—and yet, we walk away from them feeling nothing. Or worse, feeling suspicious.

What if the very effort to automate emotional connection is backfiring?

This is not just a technological question—it is a psychological, ethical, and even political one. Because when governments and institutions begin to generate emotion without humans in the loop, we must ask:

Who is really speaking? Who is feeling? And what does it mean to be moved by something that was never felt in the first place?

The Psychology of Emotional Automation

When emotion is engineered, our response goes numb.


In the past, emotional reactions were earned. A painting, a song, a speech—they took time, effort, and human presence. They carried the residue of intention, risk, and vulnerability. That was partly why they moved us. But in the age of generative AI, emotion is no longer something to be cultivated. It is something to be manufactured.

With AI tools capable of generating “aesthetic emotion”—the kind that mimics warmth, nostalgia, serenity, or awe—governments and institutions are increasingly tempted to use these tools to enhance public appeal. Whether through digital campaigns, urban rebranding, or cultural messaging, the emotional tone can now be automated with remarkable precision.

Yet this abundance of emotionally charged content has a paradoxical effect: it makes us feel less.

Psychologists call this aesthetic fatigue—a form of desensitization that occurs when the brain is exposed to highly pleasing but low-effort stimuli too frequently. Just as overexposure to sugar dulls the palate, repeated encounters with hyper-polished, algorithmically tuned images can dull emotional response. The brain begins to register these experiences not as authentic signals, but as noise.

Even more troubling is the emotional disconnect many people report in response to AI-generated “beauty.” It’s not that the images aren’t impressive—it’s that they feel hollow. The absence of a person, a story, a risk behind the emotion triggers a kind of cognitive dissonance. We are being asked to feel something, but we sense no one is truly feeling it with us.

This disconnect is subtle, but powerful. And when it comes from a public institution—a city, a government, a museum—it risks more than just audience boredom. It begins to erode trust.

The Authenticity Instinct — Why We Need Flaws

Perfection is impressive. But imperfection is believable.


Human beings have a finely tuned instinct for authenticity. We may not always be able to articulate it, but we often know when something feels off—too smooth, too calculated, too perfect. In a world increasingly populated by artificially generated emotions, this instinct acts like a psychological immune system.

That’s because we are wired not just to receive emotion, but to detect the source of it. We don’t just react to beauty—we react to how beauty is made, and by whom. When we see brushstrokes, shaky camera work, or moments of emotional hesitation, our brains register them as signals of effort, vulnerability, and presence. These imperfections are not distractions—they are clues that someone was there, and that what we’re experiencing is real.

This is what psychologists call emotional credibility. Just as we are more likely to trust someone who stumbles over their words than someone who sounds too rehearsed, we are more emotionally responsive to art, speech, and imagery that shows its seams.

AI-generated visuals, by contrast, often erase the human messiness we subconsciously associate with sincerity. They optimize mood, style, and tone—but without the struggle or risk that typically gives emotion its weight. The result is often something technically perfect, yet emotionally sterile.

This doesn’t mean we reject technology. But it does mean that for something to feel meaningful, it must carry the marks of someone having cared. As philosopher Byung-Chul Han puts it, “The smooth is the signature of the late-modern.” But smoothness does not touch us. Texture does.

That texture—emotional, visual, or narrative—is what we seek when we try to feel connected. It’s what makes a story matter, what makes a place memorable, what makes a moment ours. And increasingly, we are noticing when it’s missing.

The Uncanny Civic — When No One Is Behind the Voice

We’re hearing more from our cities—but are they still speaking to us?


There’s a particular kind of unease that arises when we encounter something that looks emotionally expressive—but we can’t locate the person behind it. This dissonance is often described in robotics and animation as the uncanny valley: the moment when something is almost human, but not quite, and therefore disturbing.

But in the realm of public communication, a new kind of uncanny is emerging—not visual, but emotional. When a city releases a video, an image, or a slogan that’s warm and comforting, but clearly machine-generated or heavily engineered, the result is often subtle discomfort.

It’s not just that we doubt the message. We begin to doubt the messenger.

Emotional messaging in the public sphere—whether it’s civic pride, empathy, or hope—has always relied on the implied presence of a human speaker. A mayor addressing a tragedy, a mural painted by students, even a government infographic—these are not just content; they are expressions of intent. When those expressions are replaced by algorithmically generated tone and style, we’re left asking: Who is this coming from? And do they actually mean it?

When emotion is separated from authorship, we experience what might be called civic alienation. The voice is there, but the person is gone. And in a democracy, where trust depends not just on outcomes but on perceived transparency and intent, this absence is not cosmetic—it’s corrosive.

More than ever, people want to feel that their institutions are not just functioning—they are feeling. But to feel with someone, you must believe someone is there.

The Crisis of Public Trust in an AI Era

When institutions outsource emotion, citizens start to withdraw.


In the digital age, we’ve grown accustomed to institutions speaking in cleaner, more coordinated tones. But now, with the rise of AI-generated communication, many public entities—governments, cities, museums, universities—are beginning to hand over even emotional expression to automated systems.

What may seem like an efficiency gain, or a design upgrade, quietly reshapes something much more fragile: public trust.

Emotional tone is not a cosmetic layer in civic life. It is one of the primary channels through which institutions establish legitimacy and relationship. A city's voice—whether in posters, videos, or public art—signals its values, priorities, and humanity. When that voice becomes synthetic, citizens instinctively begin to question whether they’re still being spoken to—or spoken at.

This creates a profound shift: from dialogue to simulation.

In a democratic society, trust is not just about competence. It’s about intention. We trust our institutions when we believe there is someone—flawed, visible, accountable—on the other end. When those institutions begin to automate their emotional language, they risk severing that relational thread.

It may not happen all at once. But over time, people begin to disengage—not because they are hostile, but because they no longer believe the feeling is mutual.

The more institutions use AI to sound emotionally intelligent, the more they may start to feel emotionally absent. And in a time when polarization, fatigue, and loneliness already fracture civic life, this kind of absence becomes a dangerous silence.

How Local Governments Are Using AI Without Clear Rules

The technology is here. The rules are not.


The emotional stakes of AI-generated public messaging would be easier to manage—if there were clear rules for how it should be used. But in most parts of the world, governments and municipalities are adopting generative AI tools faster than they are regulating them.

In South Korea, several city districts have experimented with AI-generated imagery in tourism and branding campaigns, yet few have articulated formal guidelines on the ethical use of emotional tone, aesthetic ownership, or narrative control. Most public institutions treat AI tools like a harmless creative assistant—convenient, scalable, and efficient. The ethical and psychological implications are often overlooked.

This policy vacuum is not unique to Korea. A 2024 handbook by the University of Michigan’s Science, Technology, and Public Policy program outlined the urgent need for local governance frameworks to manage AI’s civic deployment. Likewise, New York State has passed legislation requiring government agencies to disclose and assess their use of automated decision-making systems. But these are exceptions, not the rule.

What we are witnessing is a growing divide between what AI can do and what it should be allowed to do in public life. Without clear guidance, cities risk using AI not just to simplify communication—but to simulate connection, without accountability.

If governments are to use tools that speak, persuade, or even emote on their behalf, they must answer a harder question first:

What are the emotional rights and expectations of the public? And who decides when a feeling is “true enough” to be released in the name of a community?

The Need for Emotional Governance

What happens when the state begins to automate emotion—and why we must decide what it can feel on our behalf.


The traditional language of AI regulation—privacy, fairness, transparency—doesn’t yet account for one of the most subtle forms of harm: the manipulation or simulation of public emotion without consent. But as governments begin to use AI tools not just to inform, but to feel, a new frontier of civic ethics is emerging.

Emotional messaging by public institutions shapes more than sentiment—it shapes identity, belonging, and trust. When that messaging becomes automated, it raises new questions of accountability: Who chooses the tone? Who defines what emotions are appropriate? And what if the feeling generated no longer reflects the people it is meant to represent?

What we need is not just technological oversight, but a framework for what might be called emotional governance—a set of norms and practices that guide how institutions express emotion, particularly when those expressions are generated by machines.

Some early models exist. The Centralina Regional AI Working Group in the U.S. has drafted policy guidance on generative AI for local governments, urging officials to weigh risks around manipulation, misrepresentation, and community perception. Similar efforts are underway in parts of Europe. But globally, these are still isolated experiments.

In reality, we have not yet had a democratic conversation about the emotional power we are giving to algorithms—especially when they speak in the name of the public. Until we do, governments risk crossing an invisible line: from representation to simulation, from communication to aesthetic control.

Emotions are not neutral. When deployed by public institutions, they become a kind of soft power—one that should be wielded with transparency, consent, and care.

Feeling Again in the Age of Simulation

What if the most human thing left is simply to feel, and to be felt?


We live in a time when cities can smile without faces, speak without voices, and evoke emotion without ever having felt any. That is the strange achievement of generative AI in public life: the ability to create emotional texture without emotional labor.

And yet, we feel the difference.

We may not always articulate it, but we sense when an image asks us to feel something that no one felt while making it. We know when a story was generated, not lived. That quiet dissonance is becoming louder. It’s in the viral videos that leave no trace. It’s in the beautiful posters that no one remembers. It’s in the voices of our cities that no longer seem to speak to us, but merely at us.

The problem is not AI. The problem is forgetting what emotion is for.

Emotion is not just a design element—it is a signal of presence, effort, and shared meaning. When governments automate it, they risk severing their most vital connection to the people they serve. What begins as a tool for beauty may end as a barrier to belief.

To feel again—to trust again—we will need more than better algorithms.
We will need institutions that are willing to be imperfect, present, and human.

Because in the end, what moves us is not how good something looks—but how deeply we believe that someone meant it.