The 80% Friend


I keep thinking of AI like a patient friend who shows up for everything practical but skips the messy parts of life.

It writes a decent email. It drafts a plan. It summarizes a book in five crisp bullets. And then, at the exact moment I want a shrug, vulnerability, or a messy confession, it hands me a tidy answer. That's the 80% feeling: brilliant service, poor company.

Calling it a bug is tempting. But it's also a feature of how these systems are built and of how we're wired to expect humanity from machines. Evan Armstrong points out that we keep measuring future tech against human truths — and that creates the wrong checklist for progress (Every.to).

Here’s why AI reaches 80% and stalls.

  • It optimizes for patterns, not meaning. Models learn correlations across trillions of tokens. They are spectacular at mapping input to output. But meaning—why something matters to a person in a particular context—lives in our bodies, scars, and histories. No dataset can fully contain that.
  • It doesn’t get tired, scared, or embarrassed. That’s an advantage for producing prose, until you need nuance. As one Every.to writer put it, AI never gets tired and always knows what it wants — which makes it reliably useful and awkwardly inhuman at the same time (Every.to).
  • We keep thinking skeuomorphically. We demand that new software look and feel like old human jobs. Alex Duffy argues that most people aren’t using advanced AI yet, so our sample of "AI experiences" is small and biased toward early adopters who test the limits, not the social subtleties (Every.to).
  • The media amplifies completeness myths. Coverage often frames AI as either omnipotent or imminently catastrophic. That binary misses the slow, partial, human-sized changes that actually matter (Every.to).

The gap is partly philosophical. Alan Turing framed machine intelligence as a game of imitation back in 1950; we've been measuring AI by "does it act like a human?" ever since (Turing). That question folds together aptitude and interiority. But aptitude is not interiority.

I've learned a small habit that helps: treat AI as an amplifier, not an empath. Use it to draft, to iterate, to poke holes in your thinking. Then add the human layer — context, contradictions, the weird story that gives work its texture. One Every.to essay about reality gaps invites exactly that: notice where AI surprises you, and notice where it fails to surprise you in human ways (Every.to).

We’ll keep pushing the 80% upward. New models will handle more nuance. But there’s a good chance the last 20%—the part that remembers who you were when your father called you a fool at nineteen, or that hesitates because of moral doubt—will stay stubbornly, beautifully human.

Takeaway: Don’t expect AI to be your whole friend. Expect it to be an excellent assistant. Bring the rest yourself.