The Aquarium is Dirty

Artwork from Dreamtime

The first draft of this essay was written in València, Spain in 2025 and was completed in New York with insights from Whitney Engelmann and Ryan Davis.

Ok so — my Aussie husband Ryan Davis and I were chatting about the most convenient fairy tale in Silicon Valley, which is that social media became psychologically invasive by accident, as though a machine built to rob attention and maximize retention took a wrong turn on its innocent journey to “connect the world.” However, the public record has been destroying that alibi for years. What it shows is not a stupid cartoon in which Meta assembled a coven of brain-hackers and twirled its disgusting mustache over adolescent misery. The fact is that Zuck has run the largest psychological surveillance experiment in human history (my Instagram bio), and then spent years studying how people react when trapped inside a lab-rat environment, testing what keeps them engaged, and refining systems that learn what makes them look, compare, question everything, spiral, and come back for another lap.

Facebook’s 2014 emotional contagion study remains one of the most credible pieces of public evidence because it showed (in clinical terms) that the platform was not merely observing emotion but testing its ability to manipulate it. Researchers (sorta like me) adjusted the emotional composition of user feeds and measured the resulting changes in users’ own emotional expression. This matters because it punctures the fantasy that these systems are mirrors of human nature, as a mirror does not rearrange the room and then take notes on your face. An environment does, and that is exactly what these platforms are.

I keep coming back to a very Aquarius metaphor I cooked up while thinking this through on the plane from New York to Spain…

Instagram is an aquarium. We are the fish. The content is the kelp, the coral, the flakes of food, the decorative junk drifting past without a sound. Outside the glass are the viewers, the audience, drawing conclusions about whichever fish happens to be darting around at any given moment. The humongous problem is that most journalism, public debate, and even some legal framing still fixate on the fish and the fish food. People obsess over whether the content is toxic, whether teenagers are posting inappropriate selfies, and whether users are too vain, too meek, too needy, or too suggestive. That is totally relevant, but the deeper issue is the aquarium — the glass, the lighting, the filtration system, the feeding schedule, the water quality, et cetera. In other words, the UX, which is my job. I design digital habitats for a living, so I can see inside, where our behavior is being monetized.

That is why so much commentary on social media can feel pretty shallow even when it is not exactly wrong. It keeps treating the fish as the whole story, but if the fish are anxious, depleted, overstimulated, and compulsively chasing the same shiny bits of bait, eventually an adult in the room has to stop psychoanalyzing the fish and inspect the tank. The more honest frame is not content or addiction alone, but habitat design: which kinds of response the habitat rewards, and which behaviors it makes reflexive and hard to exit. That is where the true indictment lives.

Once you reject Meta’s public spin and dig into the whistleblower records and internal documents, the aquarium starts to look less like a marketing showcase and more like a deceptive cattle enclosure assembled by a vaguely psychopathic CEO instead of a pure soul like Temple Grandin. Frances Haugen told Congress that Facebook “chooses profit over safety every day,” and that landed because it named the conflict without the usual fog of slimy PR. Around the same period, the Wall Street Journal’s Facebook Files showed that Meta’s own internal research had identified harms associated with Instagram, including evidence that social comparison was dangerous for teen girls and that body-image distress could intensify with daily use. Meta pushed back, of course, saying the research was more complicated than the headlines made it sound. Fine. Even then, the broad picture was super ugly. This was not a company discovering harm with honest surprise. It was a sickening stage show put on by tech bro liars who already knew their own invention could amplify insecurity and compulsivity at any age while making their hemp jogger pockets deeper and deeper.

That concept is worth sitting with, because it shifts the burden away from individual confusion and emotional strain and back onto the structure itself. The issue is not that some users are vain, lonely, impulsive, or bad at self-regulation. It is that the business model depends on exploiting how people respond to intermittent reward, social feedback, ranking, hourly re-entry, and algorithmic stimulation, then converting that understanding into commercial advantage. That is in no way a glitch; it is operant conditioning in digital form, a Skinner box with a share button. Eventually, the whole thing starts to look less like a social platform and more like a casino programmed to project the illusion of friendship (with users following everyone they’ve ever made eye contact with), and definitely not a glam casino like the Golden Nugget in Vegas, where my bestest and oldest friend and I lost every game we attempted back in the summer of 2023 during a road trip from Sun Valley to Los Angeles. It is essentially a slot machine that offers no wins.

In other industries, we do not allow companies to build systems for mass use, declare them safe on their own authority, and then act offended when independent observers ask whether the design was ever properly scrutinized. Cars are crash-tested to death, pharmaceuticals go through staged clinical trials, and medical devices undergo formal review, precisely because the interaction between design and human behavior can injure people when it is manipulative, negligent, or poorly understood. Social media, by contrast, became one of the most widely used products on earth without anything remotely equivalent to independent premarket validation of its cognitive, developmental, or emotional effects. The comparison is not perfect, but that is kind of the point. A tool used by billions, across nearly every age group and social class, has been allowed to function like a colossal live behavioral experiment with nowhere near the safety precautions we expect from other products.

Meta can point to parental controls, teen settings, safety centers, and its own integrity teams. But the aquarium will always insist the fish are thriving, and the maker of the tank is not the party best positioned to certify whether the water is poisoned. The people who profit from their own devilish decisions should not be the final authority on whether the product is humane. The case for independent scientific validation is not radical; it is baseline adult supervision.

The “good” news is that legal records have been inching toward that same realization. Mississippi’s amended complaint alleges that Meta dedicated vast resources to understanding user behavior and psychology so it could better exploit vulnerabilities, increase time spent, and make disengagement harder through features such as infinite scroll, autoplay, and excessive notifications. Allegation is still the right word, but these are not random vibes from people who got spooked after a shitty week online. These claims are being delivered by state authorities after investigation and review of evidence. They matter because they add credibility to something millions have felt in blurrier terms for years, which is that these apps are hard to leave, weirdly good at provoking comparison, and very efficient at keeping people in low-grade cycles of checking, wanting, and depletion.

Even the Los Angeles bellwether trial in the spring of 2026 shows both how far the conversation has come and how easy it still is to miss the point. A jury found Meta and Google negligent in a social-media-harm case, concluding that platform design and inadequate warnings were substantial factors in harm to young users. That verdict moves the issue out of the realm of pathetic hand-wringing and into design-based accountability. The jury did not say that some bad content floated by and upset somebody. They looked at the aquarium, the warnings, the design choices, and the attention-capture machinery. And yet, even now, much of the public conversation still slumps back into the more familiar debate over posts, trends, beauty standards, user weakness, and the spectacle drifting through the tank. The trial moved closer to the aquarium, but most of our culture still stares at the fish food.

Public health authorities have been circling the same conclusion from another direction. The Surgeon General has said there is not enough evidence to conclude that social media is sufficiently safe for children and adolescents, especially given how widespread its use has become and how developmentally sensitive those years are. Researchers and writers including Jonathan Haidt have helped translate part of this problem for a mainstream audience by arguing that a phone-based, feedback-saturated life has reorganized development in ways adults were too slow to recognize. He is onto something people already feel in their subconscious: once social life gets routed through constant visibility, ranking, reward, and comparison, the technology stops behaving like a tool and takes on the form of a climate, and climates condition life on earth.

So — the strongest version of this argument is also the least hysterical and the hardest to dismiss. We do not need to invent a melodrama in which Meta secretly discovered how to farm the human mind (though I believe that to be true) and cackled over a blueprint for social collapse. The existing record is already more than enough: platform-scale experimentation, internal awareness of harm, investigative reporting, state complaints, federal involvement, public-health warnings, and a jury willing to say that design itself can be part of the injury. The scandal is the aquarium itself — the UX, the circulation system, the incentives, the engineering of a habitat in which every fish is expected to swim. Until that becomes the center of the discussion, we will keep arguing over the kelp while pretending not to notice who built the tank, and the tank (naturally) will keep assuring us that everyone inside looks delighted.

Sources

Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” Proceedings of the National Academy of Sciences 111, no. 24 (2014): 8788–8790.

Haugen, Frances. “Written Testimony.” U.S. Senate Committee on Commerce, Science, and Transportation, October 4, 2021.

The Wall Street Journal. “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show.” September 14, 2021.

The Wall Street Journal. “Facebook’s Documents About Instagram and Teens.” September 29, 2021.

State of Mississippi v. Meta Platforms, Inc. Amended Complaint, filed January 3, 2024.

Federal Trade Commission. Examining the Data Practices of Social Media and Video Streaming Services. FTC Staff Report, September 2024.

American Psychological Association. “Health Advisory on Social Media Use in Adolescence.” 2023.

Reuters. “Meta, Google Lose U.S. Case Over Social Media Harm to Kids.” March 25, 2026.
