Conspiracy Bot Shows That Computers Can Be as Gullible as Humans
Computers believe in conspiracy theories now. The New Inquiry's Francis Tseng trained a bot to recognize patterns in photos and draw links between similar pictures, forming the kind of conspiracy-theory diagram seen in the last act of a TV-drama episode or on the front page of Reddit. It's a cute trick that reminds us that humans are gullible (hey, maybe those photos match!), and that the machines we train to think for us could end up just as gullible.
Humans are exceptionally good at pattern recognition. That's great for learning and dealing with diverse environments, but it can get us in trouble: Some studies link pattern recognition to belief in conspiracy theories. (Some don't, but that's what They want you to think.)
Until recently, computers haven't been especially good at pattern matching. The rise of neural networks is changing that.
This isn't as easy as replicating a human brain, because we don't know how to do that. Instead, programmers simulate brain-like behavior by letting a neural network search for patterns on its own. As technologist David Weinberger writes, these neural networks, free of the baggage of human thought, build their own logic and find surprising and inscrutable patterns. For example, Google's AlphaGo can beat a Go master, but its strategy can't be easily explained in plain language.
But these machines don't actually know what's real, so they can just as easily find patterns that don't exist or don't matter. This also results in surprising "mistakes," like the funny paint colors (stummy beige, stanky bean) generated by scientist Janelle Shane, or the horrifying mess of dog faces produced by Google's DeepDream.
These mistakes can be far more serious. Weinberger highlights software that racially profiled accused criminals, and a CIA system that falsely identified an Al-Jazeera journalist as a terrorist threat.
The New Inquiry's bot similarly overextends its analysis by finding fake patterns. "If two faces or objects appear sufficiently similar, the bot links them," says Tseng, the bot's creator. "These perceptual missteps are presented not as errors, but as significant discoveries, encouraging humans to read layers of meaning from randomness."
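The core mechanic Tseng describes, linking any two images whose features look "sufficiently similar," can be sketched in a few lines. This is not Tseng's actual code; it's a minimal illustration that assumes each photo has already been reduced to a feature vector (the threshold value and the toy vectors below are made up):

```python
# Hypothetical sketch of similarity-threshold linking, not the bot's real code.
# Assumes images are already embedded as numeric feature vectors.
import math
from itertools import combinations

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def link_similar(features, threshold=0.9):
    """Link every pair of images whose similarity clears the threshold."""
    return [
        (i, j)
        for (i, va), (j, vb) in combinations(features.items(), 2)
        if cosine_similarity(va, vb) >= threshold
    ]

# Toy "embeddings": two near-duplicate photos and one unrelated photo.
features = {
    "photo_a": [0.9, 0.1, 0.0],
    "photo_b": [0.85, 0.15, 0.05],
    "photo_c": [0.0, 0.2, 0.98],
}
print(link_similar(features))  # links photo_a and photo_b only
```

The key point is that the threshold is arbitrary: set it low enough and the program will happily connect anything to anything, which is exactly the "significant discovery" effect Tseng is parodying.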
It's tempting to think the bot is onto something. But chances are, you're really just looking at dog faces and made-up paint colors. The more computer programs behave like humans, the less you should trust them before learning how they were made and trained. Hell, never trust a computer that behaves like a human, period. There's your conspiracy theory.