The older I get, the more I realize that chess was the best possible training ground for learning to abstract the idea of man versus machine. Knowledge of chess history and engine design — from brute force to deep learning — and of how humans lost their innate advantage as compute scaled toward its near-theoretical limit provides one of the best narrative threads for thinking about “Artificial Intelligence.”
My own philosophy is that the human brain is the most powerful pattern recognition machine to ever exist. Of course, engines are significantly better at chess and are unbeatable at this point. (I call it “soft-solved” — chess mathematically cannot be “solved,” but the system design is limited enough that a computer can never lose a game unless it is forced to play certain openings that allow for variance.) But how much processing power, code, and data is required to program a fork into Fritz or teach AlphaZero a pin? A moderately bright 5-year-old could figure out the concept after seeing it just twice and start using it in their own games. They wouldn’t even need the full game data, just the positions.
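To make the asymmetry concrete, here is roughly what it takes to hand-code just the concept of a knight fork — a minimal sketch of my own (the coordinate convention and function names are illustrative, not taken from any real engine). Even one tactical idea needs explicit geometry and edge-case handling spelled out, where the kid abstracts it from two examples:

```python
# Hand-coding a single tactical concept: the knight fork.
# The eight L-shaped knight jumps as (file, rank) offsets.
KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_attacks(square):
    """Squares a knight on `square` attacks, staying on the 8x8 board."""
    f, r = square
    return {(f + df, r + dr) for df, dr in KNIGHT_JUMPS
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

def is_fork(knight_sq, enemy_squares):
    """A fork: the knight attacks two or more enemy pieces at once."""
    return len(knight_attacks(knight_sq) & set(enemy_squares)) >= 2

# A family-fork shape: knight on c7 (file 2, rank 6) hits
# both a8 (0, 7) and e8 (4, 7) simultaneously.
print(is_fork((2, 6), [(0, 7), (4, 7)]))  # True
print(is_fork((2, 6), [(0, 7)]))          # False — only one target
```

And this still ignores pins, defended pieces, and whether the fork actually wins material — every one of those is more rules to write down.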
Where the human starts to unravel is when they have to put a bunch of complex processes together. Chess is a lot more than tactics, which is why people who are good at online bullet but have never played OTB likely don’t understand the full game at all. Computing power allows all these processes to be handled simultaneously, and chess is a particularly good example of what I consider “trainable” — defined input, verifiable output. You have hard rules that make up the game and comprehensible ways of assessing whether the moves suggested are good. The black box in the middle doesn’t matter much — if you can verify the output, you can reason backward from what the computer produces (this is how you study with chess engines rather than cheat with them). We can extrapolate this to all learning problems — image recognition is the classic machine learning example, and self-driving is simply a hyper-complex image recognition process, isn’t it? (Beyond weather/climate, an unsolvable chaos system, I largely think self-driving is solved — the issue is that human drivers are variance that cannot be accounted for. How is a self-driving vehicle supposed to account for this?)
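The “defined input, verifiable output” property can be made concrete with any rules-bound game — here tic-tac-toe stands in for chess, in a minimal sketch of my own (all names are illustrative). The rules pin down what a legal input is, and the outcome is mechanically checkable, so a black-box move generator can be audited without being trusted:

```python
# "Defined input, verifiable output": audit a black box's suggestions.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def legal_moves(board):
    """Defined input: a 9-char board of 'X', 'O', '.'; empty cells are legal."""
    return [i for i, c in enumerate(board) if c == "."]

def winner(board):
    """Verifiable output: did someone complete a line?"""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def apply_verified(board, move, player):
    """Accept a black-box suggestion only if the rules say it's legal."""
    if move not in legal_moves(board):
        return None  # rejected: the verifier, not the box, has authority
    return board[:move] + player + board[move + 1:]

board = "XX.OO...."
print(winner(apply_verified(board, 2, "X")))  # X — the suggestion checks out
print(apply_verified(board, 0, "O"))          # None — illegal, rejected
```

Nothing about the move generator is inspected; the verifier sits entirely on the output side, which is exactly why the black box in the middle doesn’t matter.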
This doesn’t mean that human intuition is bad or that we shouldn’t train it — it just means that the design of chess was never meant to withstand god-tier levels of processing power. You can see this happening to every single game with a constrained space — the data is simply too powerful. Isn’t it sad that basketball athletes have never been better, yet the game, outside of Jokic, has never been worse to watch?
Chess, however, is not a great vehicle for explaining what true pattern recognition power is. It takes years of study even to understand why someone like me could never cross the GM threshold, even if I spent my entire life working toward it. So when I came across this account on X, I was particularly delighted.
Part of what makes chess so difficult to explain is that it’s an abstract game that’s memory-reliant but not memorization. Memorization is the unfun computer way to play chess — “referential memory,” as I call it, is what makes human chess so brilliant to watch. What this account does is put what right-tail chess prodigies do into a normal-person context: he pulls something that’s been logjammed in his head from years ago to quickly match the sports screenshot. This is why master-level chess players study old games all the time — we’re not replicating the moves played, but rather storing the concepts and patterns away for when a similar position arises in our own games as reference. It’s why I’m 100% certain this is not AI matching — can you imagine the amount of compute and training data you’d need (if you could even do it) to produce that match above? Human memory extends to god-tier levels when it’s highly specialized. At my peak, I could play 2 games blindfolded simultaneously at 80% of my skill level. Magnus could trivially do 30 games against players better than I was at my peak. There are serious levels to the exponentiation at the end of the tails.
When even educated professionals come up to me and tell me that they’re worried about AI, obviously I’m going to get to the bottom of it. (Part of being a good contrarian is that you train yourself to be heavily skeptical but not instantly dismissive, in case you’re wrong.) What I’ve concluded is that people who don’t understand human vs. machine pattern recognition are afraid of the actual product. This is kind of an illusion. ChatGPT is undoubtedly a brilliant product, but it’s also a really good magic trick of sorts. As I like to put it, isn’t it incredible how the white lines drawn around the murder victim fit the body perfectly? Obviously LLMs will look incredible if you ask them something they’re trained to do. But this is not a defined input, verifiable output problem — you simply can’t keep throwing more data and compute at it to overfit to reality itself. (It’s like overfitting in polynomial interpolation — sure, the curve hits the sample points better the more you fit to them, but it’s suboptimal to keep refining it until it fits too well. What’s the point of the curve then?) It’s too complex a system.
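The overfitting aside can be made literal with a quick numerical sketch of my own (the function, noise level, and degrees are illustrative): fit the same noisy samples with a modest polynomial and a near-interpolating one, and the “perfect” fit wins on the sample points while losing on the curve they came from.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 12)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.25, x_train.size)  # noisy samples
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)                                        # "reality"

def mse(p, x, y):
    """Mean squared error of polynomial p against the points (x, y)."""
    return float(np.mean((p(x) - y) ** 2))

cubic = Polynomial.fit(x_train, y_train, 3)    # modest fit, generalizes
wiggly = Polynomial.fit(x_train, y_train, 11)  # threads every noisy point

# The wiggly fit looks better on the points it was given,
# and worse on the underlying curve those points came from.
print("train MSE — cubic:", mse(cubic, x_train, y_train),
      "wiggly:", mse(wiggly, x_train, y_train))
print("test MSE  — cubic:", mse(cubic, x_test, y_test),
      "wiggly:", mse(wiggly, x_test, y_test))
```

The degree-11 fit drives training error toward zero by memorizing the noise, then oscillates wildly between the samples — more fitting power made the model worse at the thing the curve was for.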
When I was a kid, I used to play “beat the calculator” nonstop. I never used a calculator to make myself good at mental math. You can build that capacity; it’s not too late. (And if you can’t calculate a tip without using an iPhone, please never correct me using statistics.) In a way, this is how the unlimited-compute meta has changed things. Undoubtedly I think we will figure out something close to “True AI” (more on that in another post). But this structure won’t be it, though the capital invested means it could come from the same people. This is a starting point. The meta shift is that we have to “beat the AI” by changing how we play the game. “Beating AI” means working faster, cheaper, and in ways it never could. Brute force and compute will never beat intuition when both can be used, as brute force simply requires too much information and infrastructure, and the math can’t match the ontological “instance” of human thought. Think back to the 5-year-old learning chess tactics — how do we frame novel problems in that context? No amount of the current AI could match true, unexplainable human intuition.
My actual fear of a conceptual “True Artificial Intelligence” is that it reveals that humans have become too specialized to change the way they think. Everything about the modern world has heavily rewarded specialization over generalization, to the point that we gate apprentice-style professions like law behind 7 years of college. Obviously the “jack of all trades, master of none” warning holds if your goal is a six-figure salary in the modern world, but true genius is being both a fantastic generalist and highly specialized. It’s why the best STEM people are also the best humanities people — it’s largely a trick of the “everyone goes to college” class that makes you think STEM people somehow can’t do liberal arts.
The answer, however, is not to go backwards. While the above sounds like quasi-luddite thinking, I specifically disagree with “anti-tech revolution” thinking because of both the naturalistic fallacy and the probability that, even in a timeline reset, it simply plays out the same way. The answer, like at the end of Deus Ex, is not to listen to Tracer Tong and reset society, but to merge with Helios. It’s the same reason I look down on the idea of “monk mode” and using a “dumb phone.” The smartphone is the most powerful hardware ever invented in human history, and you’re not going to use it to accelerate yourself further? Of course, we all know the bad side of too much screen time. But a primitive version of humans merging with a greater technical intelligence is combining human intuition with technological tools to go even further. This is a net good thing! That account, which I think is one of the best on X that I’ve ever seen, is run by the type of person who would probably have been posting this on some random forum in the 2000s just for the hell of it. Instead, normal people are exposed to this and, with some further thought, can try to replicate this type of thinking in their own day-to-day lives. Social media is a positive invention, but using it properly requires going against every impulse you have to do the “ordinary” thing and hit the juice button in the Skinner box over and over again. Well, this is a start.