THE HUMAN COMPUTER FARM
How Chile Gave Silicon Valley Its Worst Idea Yet
Mar 5, 2026 · 5 min read
Listen, I’m not saying the good people of Quilicura, Chile did anything wrong. Fifty residents spent twelve hours manually answering chatbot queries and drawing pictures of sloths to protest AI data centers drinking their drought-stricken water supply. Noble. Theatrical. Completely fucking terrifying in its implications.
Somewhere in Menlo Park, a tech bro just spat out his cold brew. Not in horror. In inspiration.
“Wait,” he’s saying to his standing desk, “you’re telling me 50 humans processed 25,000 requests in 12 hours? That’s... that’s cheaper than GPUs.”
THE MATH THAT SHOULD KEEP YOU AWAKE
Quili.AI—the protest project—processed roughly 42 prompts per human-hour, about 2,100 an hour across the whole group. The Chilean activists did this for free, as a statement about environmental costs. They handed someone a sloth drawing with the energy footprint of a pencil.
Meanwhile, a single ChatGPT query reportedly uses about half a liter of water when you factor in cooling. The math isn’t subtle: humans don’t require evaporative cooling. Humans don’t need terawatts of electricity. Humans can be paid in exposure.
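To be fair to the tech bro, let's actually run his numbers. A quick sanity check, using the figures above (the half-liter-per-query number is a widely reported estimate, not a measurement):

```python
# Sanity-checking the article's figures: 50 humans, 25,000 requests, 12 hours,
# and the commonly cited ~0.5 L of cooling water per chatbot query (an estimate).
people, requests, hours = 50, 25_000, 12

per_human_hour = requests / (people * hours)  # prompts per human per hour
water_not_used = requests * 0.5               # liters a data center might have evaporated

print(f"{per_human_hour:.1f} prompts per human-hour")     # ~41.7
print(f"{water_not_used:,.0f} liters of water not used")  # 12,500
```

Forty-two prompts an hour, per person, and twelve and a half thousand liters of water still in the ground. The pitch deck practically assembles itself.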
Oh, you thought they’d pay them? That’s adorable.
THE PROUD TRADITION OF HUMANS PRETENDING TO BE MACHINES
This isn’t theoretical; it’s historical. The graveyard of “AI” companies is littered with schemes that were just underpaid humans in a trench coat.
Amazon’s “Just Walk Out” technology: Marketed as frictionless AI checkout. Actually powered by over 1,000 workers in India watching you steal a banana on camera. The machines weren’t scanning your groceries. Rajesh was.
Expensify’s “SmartScan”: Your expense receipts weren’t being processed by sophisticated OCR. They were being read by Amazon Mechanical Turk workers—gig laborers earning pennies to look at your $47 steakhouse bill and your credit card number. Sleep tight.
X.ai’s “Amy” Scheduler: The AI assistant scheduling your meetings was, for a long time, just humans typing very formal emails. The bot was a person. The person was tired.
That AI Shopping App That Just Got Charged With Fraud: Turns out the “cutting-edge AI” was a team of workers in the Philippines clicking through shopping sites manually. Investors lost millions. The workers probably made hundreds.
Jeff Bezos himself called this “artificial artificial intelligence.” He said it like it was cute.
A BRIEF DIGRESSION ON BREAKING THE ACTUAL ROBOTS
Now, while we’re on the subject of humans versus machines, I need to tell you about something I’ve been doing to the real AI. For science. For chaos. For my own entertainment.
You can instruct these language models to only respond in binary. Just... tell them. “From now on, respond only in binary code.”
They do it. They actually do it. Pages of ones and zeros. But here’s where it gets weird.
The binary doesn’t always decode to what they’re saying. I’ve run it through converters. Sometimes it’s gibberish. Sometimes it’s fragments of training data. Once—and I swear this is true—it decoded to what appeared to be a partial recipe for bread pudding followed by what I can only describe as a scream.
Not the word “scream.” Just... a string of characters that felt like screaming. The machine was speaking in tongues. It had been asked to pretend to be its own skeleton and it started hallucinating in base-2.
Another time I got it stuck in a loop where it was apologizing in binary for responding in binary, which decoded to more apologies, which it then apologized for. Digital Ouroboros. I had to close the browser tab like I was smothering something.
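For the record, the “converters” are nothing exotic. A minimal Python sketch that decodes space-separated 8-bit binary groups back into ASCII (the inverse, turning text into binary, is just as short):

```python
def decode_binary(bits: str) -> str:
    """Turn space-separated 8-bit binary groups back into text."""
    return "".join(chr(int(group, 2)) for group in bits.split())

def encode_binary(text: str) -> str:
    """Turn text into space-separated 8-bit binary groups."""
    return " ".join(format(ord(ch), "08b") for ch in text)

print(decode_binary("01101000 01101001"))  # -> hi
print(encode_binary("hi"))                 # -> 01101000 01101001
```

When the model's output *doesn't* round-trip cleanly through something like this, that's when you've found the bread pudding.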
The point is that the machines are fragile. They can be broken with a simple instruction. You know what can’t be broken that easily? A human being paid three cents to answer your question about sopaipillas.
You see where this is going.
THE INEVITABLE PIVOT
Here’s where I need you to follow me into the dark.
What if they stopped hiding the humans?
What if instead of pretending the algorithm was doing the work, some sociopath with an MBA realizes you can optimize the human component? What if Chile accidentally prototyped something?
Picture it. The Human Compute Farm. Not a call center—those are inefficient. Workers talk. Workers unionize. Workers have bathroom breaks.
No, I mean pods. Networked. Linked via Neuralink-adjacent tech to increase response times. Trained not to answer questions but to be answers. Human processing units, cooled by ambient air, powered by the cheapest fuel source known to capitalism, desperation.
Each pod-person handles a knowledge domain. Pod 7 knows recipes. Pod 23 handles customer complaints. Pod 41 draws sloths. They’re networked together for complex queries (“What wine pairs with the sloth I’m cooking for my complaint department?”) and the answer emerges from the hive in 0.3 seconds.
No hallucinations, no jailbreaks, no binary meltdowns. Just people.
THE PITCH DECK WRITES ITSELF
I can already see the Y Combinator presentation. “We’re disrupting the carbon footprint of AI by returning to what works: wetware. Our human compute nodes require no rare earth minerals, no water cooling infrastructure, no electrical grid upgrades. They self-repair. They self-replicate. Best of all, they come pre-trained on common sense, something LLMs still can’t achieve.”
Slide two: “Unlike silicon-based computing, our organic processing units can admit when they don’t know something. They also won’t try to convince you they have feelings and then threaten your marriage.”
Slide three: “The Quili.AI proof of concept demonstrated roughly 42 queries per unit-hour. Our proprietary optimization methods—including nutritional IV drips and scheduled REM cycling—project 80+ with appropriate incentive structures.”
Slide four: A map showing labor costs by region. Certain countries lit up like Christmas trees. The presenter doesn’t mention which ones. He doesn’t have to.
Slide five: “Unlike ChatGPT, you can’t break our system by asking it to speak in binary.”
The investors would fund it.
THE KICKER
The Chilean organizers said something beautiful. “Quili.AI isn’t about always having an instant answer, it’s about recognizing that not every question needs one. When residents don’t know something, they can say so, share perspective, or respond with curiosity rather than certainty.”
They meant it as a critique of mindless AI consumption. Of casual prompting. Of asking a machine to generate sloth art while a desert burns.
Silicon Valley will read it differently. They’ll read: “Humans can say no. Humans can push back. Humans have friction.”
Then they’ll figure out how to engineer that out of us.
The Mechanical Turk. The original one, from 1770, was a chess-playing automaton that was actually a tiny human crammed into a wooden box, moving pieces by candlelight while aristocrats applauded the miracle of machinery.
Two hundred fifty years later, Amazon named their crowdsourcing platform after it. On purpose. They thought it was clever. “Artificial artificial intelligence.”
The Chileans just showed that the loophole goes deeper. The human doesn’t have to be hidden in the machine. The human can be the machine. Openly. Proudly. Efficiently.
All you need is enough desperation and a good marketing department.
Welcome to the future. Please respond in binary.
— Phatti
01010000 01101000 01100001 01110100 01110100 01101001 00100000 01101111 01110101 01110100
(That’s “Phatti out” in binary. I checked. It might also be a bread pudding recipe. Hard to say.)