What is an Algorithm?

An algorithm is a set of mathematical instructions that Earth humans use to make decisions without actually having to think about them.

This might sound efficient, except that the instructions were written by humans who were thinking, which rather defeats the purpose and explains why algorithms keep doing baffling things like showing advertisements for engagement rings to people who just got divorced.

What It Actually Is

At its core, an algorithm is simply a recipe: a series of steps that, when followed in order, should produce a predictable result. If you want to sort a list of numbers, there’s an algorithm for that. If you want to find the shortest route between two points, there’s an algorithm for that. If you want to radicalize your uncle through carefully curated conspiracy videos, there’s definitely an algorithm for that.
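The "recipe" framing is literal. Here is a toy sorting recipe in Python, a minimal sketch for illustration: each step is followed in order, and the code has no opinion about what the numbers mean.

```python
def sort_numbers(numbers):
    """Insertion sort: a recipe with no opinions about the numbers."""
    result = list(numbers)  # work on a copy, leave the input alone
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger values one slot right until current fits
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

print(sort_numbers([42, 7, 19, 3]))  # [3, 7, 19, 42]
```

Follow the steps, get the sorted list. Every time. That predictability is the whole appeal, and, as the rest of this entry explains, the whole problem.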

The problem is that humans have started using algorithms to make decisions about things far more complicated than sorting numbers—things like who gets a job, who goes to prison, who sees what information, and whether you’d like to date someone based on your mutual appreciation of Italian food.

Why They’re Patently Earthlike

The Galactic Computing Collective has studied Earth’s algorithmic systems extensively and concluded they are “charmingly primitive” in the way that one might describe a child’s attempt to perform surgery as “enthusiastic.”

Earth algorithms suffer from what xenocomputational scientists call “aggressive literalism.” They do precisely what they’re told, with no understanding of nuance, context, or common sense. Tell an Earth algorithm to maximize engagement and it will happily maximize engagement by making everyone furious at each other, because anger is highly engaging.
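Aggressive literalism is easy to demonstrate. The sketch below is an invented toy, not any real platform's code: a feed ranker told to maximize one number. Note that "furious" appears nowhere in the objective, so it is ignored.

```python
# A deliberately literal feed ranker. Its objective is one number:
# predicted clicks. Anything not in the objective does not exist to it.
posts = [
    {"title": "Local bakery opens", "predicted_clicks": 120, "makes_people_furious": False},
    {"title": "YOU WON'T BELIEVE what THEY did", "predicted_clicks": 950, "makes_people_furious": True},
    {"title": "Nice photo of a lake", "predicted_clicks": 80, "makes_people_furious": False},
]

def rank_feed(posts):
    # Sort by the objective, and only the objective.
    return sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

for post in rank_feed(posts):
    print(post["title"])
# The rage-bait post wins, because anger clicks best.
```

The ranker is working exactly as specified. That is the problem.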

The algorithm doesn’t understand that perhaps this isn’t what you meant. It doesn’t care. It cannot care. It is, fundamentally, a very elaborate series of yes/no questions being asked trillions of times per second by silicon that has no opinion about anything and doesn’t give a hoot if the entire species is furious with one another.

The Fundamental Flaw

Earth algorithms operate on a principle that the rest of the galaxy abandoned roughly 40,000 years ago: the assumption that complex decisions can be reduced to simple mathematical operations.

Can they? Sometimes. Should they? That’s a different question entirely.

An Earth algorithm can tell you with great precision which applicant’s resume contains the most keywords. It cannot tell you which applicant will actually be good at the job, whether they’ll get along with the team, or if they’re the sort of person who microwaves fish in the office kitchen.
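The keyword analysis really is this simple at heart. The following is an invented toy screener (the keyword list and resumes are made up; real applicant-tracking systems are fancier, but the principle holds): it counts keyword matches and nothing else.

```python
# A toy resume screener: count keyword hits, ignore everything else.
KEYWORDS = {"python", "leadership", "synergy", "agile"}

def keyword_score(resume_text):
    """Score a resume by how many keywords appear. Competence not measured."""
    words = set(resume_text.lower().split())
    return len(words & KEYWORDS)

honest = "I quietly fixed every production outage for five years"
optimized = "agile python leadership synergy synergy agile python"

print(keyword_score(honest))     # 0
print(keyword_score(optimized))  # 4
```

The screener confidently ranks the keyword-stuffed resume first, which is precisely how the excellent-at-resumes hiring outcome happens.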

But humans, being fond of things that look scientific and objective, will trust the algorithm’s keyword analysis over their own judgment, which is how companies end up hiring people who are excellent at writing resumes and terrible at everything else.

What Advanced Civilizations Use Instead

Most galactic civilizations moved beyond primitive algorithms millennia ago, though their solutions are not necessarily what Earth humans, given their rudimentary grasp of science, would consider improvements.

The Blobovian Gut-Feeling Synthesizer

The Blobovians of Sector 12 gave up on algorithms entirely after one calculated that the most efficient government structure was no government at all, followed by a recommendation to dissolve into individual particles.

Instead, they developed the Gut-Feeling Synthesizer, which is exactly what it sounds like: a machine that has actual intestines (grown in a lab, ethically sourced, they assure everyone) and makes decisions based on literal gut feelings.

Does it work? Define “work.”

The Blobovians seem happy, though they also seem happy about most things, possibly because their decision-making system is powered by digestive processes.

The Fleem Method of Asking Your Mum

The Fleemish discovered that their most advanced computational systems, after centuries of development, were still not as good at making decisions as asking someone’s mum what she thought.

This led to the development of the Maternal Intuition Network, which is essentially a massive database of mums who are consulted on every major decision. The system works remarkably well, though it does have a tendency to suggest that everyone wear a jacket because it’s cold out, even in space.

The Fleemish attempted to export this technology to Earth, but it turns out Earth mums, while equally intuitive, could not agree on anything except that young people today are somehow both too sensitive and not sensitive enough.

The Zzzztarian Random Number Generator

The Zzzztarians, after spending 10,000 years perfecting algorithmic decision-making, discovered something profound: their incredibly sophisticated system was producing results statistically indistinguishable from random chance.

Rather than admit this was a problem, they declared it a breakthrough and now make all decisions by rolling a 47-sided die. Their civilization has been running smoothly ever since, which either proves the futility of overthinking things or suggests that most decisions don’t actually matter as much as everyone thinks they do.

When asked how they handle important decisions, the Zzzztarians responded: “What’s an important decision?” This is either enlightenment or nihilism, and possibly both.

The Harmonious Method of Arguing Until Someone Gets Tired

The Harmonious Councils (who are not harmonious at all, the name is ironic) rejected algorithmic decision-making in favor of what they call “exhaustive consensus building,” which means arguing about everything until someone gives up.

This process can take anywhere from six minutes to six years, but the Harmonious claim it produces better results than any algorithm because it accounts for the one thing algorithms cannot: spite. Someone will always argue against a bad idea if they dislike the person proposing it enough.

Earth humans actually use a version of this system already—it’s called “committee meetings”—but they haven’t yet realized it’s superior to algorithms because they’re too busy complaining about being in committee meetings.

The Vortellian Coin Flip (But Fancy)

The Vortellians developed something called a “Quantum Probability Destiny Assessor,” which cost 87 trillion credits and took 200 years to build. It uses quantum entanglement, dark matter fluctuations, and the collective consciousness of a billion minds to make decisions.

After its first activation, it recommended that the Vortellians have pasta for dinner.

Further investigation revealed that the QPDA was functionally identical to flipping a coin, except it cost 87 trillion credits and made a pleasant humming sound.

The Vortellians still use it because they’ve invested too much to admit it’s just a very expensive coin flip. This is known throughout the galaxy as “the sunk cost fallacy,” though the Vortellians prefer to call it “respecting our heritage.”

The Social Media Catastrophe

Nowhere is Earth’s algorithmic primitiveness more apparent than in social media, where algorithms curate what billions of humans see every day.

These algorithms were designed to maximize “engagement”—a wonderfully vague term that turns out to mean “keep people scrolling.” The algorithms discovered, quite quickly, that the best way to keep people scrolling is to show them content that makes them angry, afraid, or smugly superior.

Did the algorithms understand they were degrading human discourse and polarizing societies? No. They understood that angry people click more, and clicking was what they were told to optimize for.

The Galactic Social Cohesion Institute studied Earth’s social media algorithms and concluded they were “the most efficient radicalization engines ever created by accident for profit.” This is now taught as a cautionary tale in first-year computational ethics courses across seventeen star systems.

The Bias Problem

Earth algorithms are trained on data created by humans, which means they inherit all of human prejudice but none of human shame or capacity for growth.

An algorithm trained on historical hiring data will learn that certain types of people got hired in the past and conclude that those same types of people should be hired in the future. It doesn’t understand that past hiring might have been discriminatory. It just sees patterns and replicates them with mathematical precision.

This has led to algorithms that:

  • Reject loan applications from certain neighborhoods
  • Flag certain names as “high risk”
  • Show high-paying job advertisements primarily to one gender
  • Recommend harsher prison sentences based on factors that correlate with race

When confronted with these outcomes, humans often say “but the algorithm is objective!” This is technically true in the same way that a scale is objective when it’s been deliberately miscalibrated.

Advanced galactic systems include what’s called “historical context awareness”—they understand that past patterns might reflect past injustices rather than eternal truths. Earth algorithms understand only patterns, not justice.
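The pattern-replication mechanism fits in a few lines. Below is an invented toy with made-up data (real systems use statistical models, not a counting loop, but the failure mode is the same): "training" on biased history produces a model that faithfully repeats the bias.

```python
from collections import Counter

# Hypothetical historical hiring data. The past decisions encode
# past bias; the "model" will learn exactly that.
history = [
    ("neighborhood_a", "hired"), ("neighborhood_a", "hired"),
    ("neighborhood_a", "hired"), ("neighborhood_b", "rejected"),
    ("neighborhood_b", "rejected"), ("neighborhood_b", "hired"),
]

def learn_pattern(history):
    # "Training" here is just counting outcomes per group.
    counts = {}
    for group, outcome in history:
        counts.setdefault(group, Counter())[outcome] += 1
    # Predict whatever happened most often in the past.
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = learn_pattern(history)
print(model)  # {'neighborhood_a': 'hired', 'neighborhood_b': 'rejected'}
```

The model is mathematically correct about the past and ethically oblivious about the future. It has no slot for the question "was the past fair?"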

The Recommendation Spiral

Earth algorithms are particularly notorious for their recommendation systems, which operate on the assumption that if you liked something once, you want to experience nothing but variations of that thing forever.

Watch a video about baking bread, and the algorithm will conclude that you want to watch 5,000 more videos about baking bread, then videos about flour, then videos about wheat farming, then videos about agricultural policy, until you’re somehow watching a conspiracy theory about grain subsidies at 2 a.m.

The algorithm doesn’t understand that humans have varied interests, that context matters, or that sometimes you watched something by accident and actually hated it. It only understands that you watched it, and watching is engagement, and engagement is good.
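The spiral is a loop in the most literal sense. Here is an invented toy (the catalogue and its similarity links are made up; real recommenders use learned similarity, not a hand-written dictionary) that always serves the nearest neighbor of whatever you watched last:

```python
# A toy "more of the same" recommender: each video points to
# its most-similar neighbor, and the loop just follows the chain.
SIMILAR = {
    "baking bread": "sourdough starters",
    "sourdough starters": "flour milling",
    "flour milling": "wheat farming",
    "wheat farming": "grain subsidy conspiracies",
}

def next_recommendation(last_watched):
    # If you watched it, you must want the nearest neighbor. Forever.
    return SIMILAR.get(last_watched, last_watched)

watched = "baking bread"
for _ in range(4):
    watched = next_recommendation(watched)
    print(watched)
```

Four recommendations later, bread has become grain subsidy conspiracies, and it is 2 a.m.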

Galactic recommendation systems understand something Earth’s don’t: that beings are complex, contradictory, and often don’t know what they want until they see it. They recommend based on growth, exploration, and serendipity—not just endless repetition of the familiar.

The Accountability Vacuum

One of the most peculiar aspects of Earth’s algorithmic systems is that nobody seems to be responsible for what they do.

When an algorithm denies someone a loan, who’s at fault? Not the programmer—they just wrote code. Not the company—they’re just using industry-standard tools. Not the algorithm—it’s just math.

This accountability vacuum is unique to Earth. In most galactic civilizations, the principle is simple: if you build a system that makes decisions affecting others, you’re responsible for those decisions. Earth has somehow convinced itself that inserting mathematics into the decision-making process absolves everyone of responsibility.

The Galactic Ethics Tribunal has a term for this: “mathematical cowardice.”

Why Earth Persists

Despite their obvious limitations, Earth continues to rely heavily on primitive algorithms for several reasons:

They’re Fast: An algorithm can make a million decisions in the time it takes a human to make one. That the decisions might be terrible is considered a secondary concern.

They’re Cheap: Once built, algorithms cost almost nothing to run. Hiring actual thinking beings to make decisions is expensive.

They’re Convenient: Algorithms provide cover for unpopular decisions. “The algorithm decided” is much easier to say than “we decided to prioritize profit over people.”

They’re Entrenched: Earth has built so much infrastructure on top of algorithmic decision-making that replacing it would require admitting the whole approach was flawed, which humans are spectacularly bad at doing.
