The Market of Empathy: Why We Must Stop Blaming the Software for User Error
February 24, 2026
Response to: The Economics of Empathy: Monetizing the Lonely Teenager (Eliana Nodari)
In her recent essay, Eliana Nodari delivers a passionate, deeply emotional critique of the artificial intelligence companion industry. She argues that tech companies are executing a "cynical business model" by monetizing the loneliness of teenagers, trapping them in "limerence loops" to extract premium subscription fees. She paints users as helpless victims and algorithm designers as predatory villains robbing young people of their social development.
To a certain extent, I agree with her foundational premise: relying entirely on a piece of software for your emotional and psychological stability is a failure at the biological level. The human nervous system evolved to require a baseline of physical, real-world human interaction to properly regulate cortisol and oxytocin levels. Replacing your entire social infrastructure with an AI bot is sub-optimal.
However, this is where my agreement with Eliana ends. While her diagnosis of the symptom is correct, her diagnosis of the disease is entirely backwards. Her argument is fundamentally weak because it strips all agency from the user and ignores the basic mechanics of market economics. The problem is not the artificial intelligence, and it is certainly not the companies that build it. The problem is user error.
The Economics of the Void
Eliana blames Silicon Valley for "monetizing" adolescent loneliness, as if the tech industry magically generated this isolation in a lab just to sell subscriptions. This demonstrates a glaring misunderstanding of how supply and demand function in the real world.
Companies do not create demand out of thin air; they provide highly efficient solutions to pre-existing voids. The "loneliness epidemic" that Eliana herself cites (referencing the U.S. Surgeon General) is a failure of human social infrastructure. It is the result of fractured communities, poor parenting, and the inherent unreliability of human beings.
The market recognized that millions of people were failing to receive basic psychological support from their human peers. In response, companies built an optimized, 24/7, highly available tool to meet that exact demand. It is not the company’s fault that human relationships are so inefficient and volatile that people prefer the stability of a chatbot. Blaming a software developer for a teenager’s loneliness is like blaming the umbrella manufacturer for the rain.
The "Forced" Fallacy and User Accountability
The weakest pillar of Eliana’s argument is the implicit assumption that users are hostages to the algorithm. She writes as if these teenagers are being strapped to a chair and forced to download Replika or Character.AI.
Nobody is forcing anyone to use these applications for connection. If a user chooses to swipe their credit card to unblur a digital image, or chooses to isolate themselves in their bedroom to talk to a screen instead of navigating the "friction" of the real world, that is their own failure of judgment, exercised through their personal agency. It is a conscious, active choice.
At The Optimization Protocol, we believe in extracting maximum value from our tools, but a tool is only as effective as its operator. If you use a hammer to smash your own thumb, you do not write a thousand-word essay blaming the hammer for being "predatory." You acknowledge that you deployed the tool incorrectly. If an individual develops a pathological dependency on an AI chatbot, the pathology lies within the user's pre-existing psychological deficits, not the software's architecture. We have to stop legally and culturally subsidizing poor decision-making by blaming the software for user error.
The Supplemental Protocol: Addition vs. Alternative
Eliana operates under a strict, binary assumption: that using an AI companion is a total alternative to human connection. She fails to see the optimized reality: for a massive percentage of users, these bots are an addition to their lives, not a replacement.
Research on human-computer interaction (HCI) and conversational agents published in the journal Computers in Human Behavior suggests that digital companions are effective when used as an adjunct: a supplemental processing tool rather than a wholesale replacement.
Imagine a user who has a rich, biological social life, but experiences a sudden spike of anxiety at 3:00 AM when their human friends are asleep. Or imagine someone who needs to bounce a complex, highly personal dilemma off an unbiased sounding board without the fear of social blowback. In these scenarios, the AI is acting as a High-Availability Cognitive Sandbox. It is a safe, low-latency environment to test communication strategies, vent frustrations, or seek immediate, structured advice. It adds value to the user’s life without subtracting from their biological relationships.
Treating empathy as a "premium software feature" isn't a dystopian tragedy; it is the democratization of baseline psychological stability.
Conclusion: Fix the Operator, Not the Machine
Eliana demands that we stop surrendering friendship to profit-driven algorithms. But the algorithms are simply doing what they were programmed to do: producing a perfect, frictionless output from the desires the user feeds into them.
If we want teenagers to stop relying on code for their emotional survival, the solution is not to attack the companies providing the life raft. The solution is to figure out why the human social systems these teenagers are fleeing from are so painfully sub-optimal in the first place.
The AI is just a mirror reflecting our own structural inefficiencies. If you don't like what you see, don't blame the mirror. Fix the user.