In March 2023, a graphics card unlike any other descended from the Heavens and graced humanity with its power.
Man was not working from a blank slate, however. The previous iterations of cutting-edge compute had resulted in wondrous devices containing the entire knowledge apparatus of humanity that could be held in one hand, with directions to any location, communication with any person, and an endless stream of erotic content at one’s fingertips. Instantaneous information exchange also had the benefit of allowing anyone to speculate on the value of any given thing around the clock, which was readily taken advantage of to wager on sports outcomes, stock outcomes, rock jpegs, cat jpegs, ape jpegs, and anything else one could possibly imagine with enough suspension of disbelief. In the early stages of information being uploaded, shared, and processed, the expected outcome of the internet was somewhat of an egalitarian ideal: everyone would have equal access to information for the purposes of bettering themselves, no matter how they were situated in life. What resulted, however, was something of an overflow error: there’s concurrently too much to learn and too much junk to dig through (both unintentionally and algorithmically placed), so the end outcome is that of a reality transposed through a green-and-red plane, where all’s good as long as the number goes up.
An oft-repeated joke of mine is that the tech élite dream of going down in history as a cross between entrepreneur and sci-fi visionary, while the finance élite dream of being remembered as philosopher-kings (as evidenced without a trace of subtlety by the fact that the hedge fund manager equivalent of Mad Men is the self-bestowed title of Master of the Universe.) Elon Musk wants to be known for changing the world, while Ray Dalio wants to be known for having solved the world. Both of these ideals require deep knowledge and some external validation of that knowledge, thus resolving the oft-asked, sardonic, rhetorical question of “why do billionaires keep making money?” Of course, the answer is that they simply don’t know anything else; the perpetual inflow of dollars in increasing size is the only form of validation they recognize:
Isn’t it enough at some point when you have more money than god? Beyond the vanity of running up the score, it’s almost what you live for at that point. Howard Schultz, through selling coffee, imposes his diktats on the world. Bob Iger built a corporation so powerful that it’s warring over policy with the 16th largest economy in the world, and appears to be winning (~ed. — hah, that aged poorly.) Anna Wintour isn’t going to stop curating the culture that leads to the guest list that enters the pearly gates on this side of reality. Steve Cohen returned to managing money and trading because when you have that much capital to allocate, you necessarily spend time learning and opining about everything. It’s what makes Jeff Bezos such an outlier; for all intents and purposes, he seems to have totally left Amazon behind. Never have I seen someone of that stature able to shake it off.
Thus, the only true direct attack on the hyper-successful is the one that twists the knife the most — that they are beneficiaries of survivorship bias, that anyone could have done what they did in their place. And it’s effective — Dalio will bristle if you suggest that he’s a billionaire due to raising a bunch of AUM, marketing doom predictions, and running a weird cult (as outlined for the public recently in The Fund by Rob Copeland) while benefiting from levered beta, and Musk will adamantly deny that his initial wealth came from emerald mines; even with his insane track record, he feels the need to constantly refute this on X. The other, more insidious attack is that of untethering dollars from productivity, as I’ve constantly written about:
The phrase “attention economy” gets bandied about a lot, so it’s worth thinking about it more precisely given the context of this post. If we think about the old basis of “fundamental value”, we come to something along these lines:
The vast majority of money that moves in and around the market is based on the philosophy that whatever is invested in will create future cash flow rewarding current shareholders, who hold a right to their share of the output… Investing is driven by valuation — taking snapshots of a company’s performance over time and stringing them together to estimate future outlook. The question valuation asks is very simple: given an investment in a company, what do I own now, what am I expected to own in the future, and how much is it worth at present value?
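The present-value question in the quoted passage reduces to a simple discounted-cash-flow sum. Here is a minimal sketch of that arithmetic; the projected cash flows and the 10% discount rate are illustrative assumptions of mine, not figures from the text:

```python
def present_value(cash_flows, rate):
    """Discount each projected future cash flow back to today and sum them.

    cash_flows: projected cash flows for years 1..N
    rate: required annual rate of return (the discount rate)
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Five years of hypothetical cash flows growing ~10% a year,
# discounted at a 10% required return.
projected = [100, 110, 121, 133, 146]
print(round(present_value(projected, 0.10), 2))  # ≈ 454.22
```

The "what am I expected to own in the future, and how much is it worth at present value" question is exactly this sum; everything contentious in valuation lives in the projected cash flows and the chosen discount rate, not in the arithmetic.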
But when dollars are untethered from productivity, we get a different form of thinking, where Paris Hilton is more of a pioneer than Benjamin Graham. No longer are future cash flows the driver of valuation, but rather: how much of the attention paid converts to bid? When we look at it from this purview, we arrive at my concept of “tradable hype”…
And, certainly, Elon Musk was the biggest beneficiary of this mentality taking hold in public markets. The stock price appreciation due to mania came before the company had truly scaled into the behemoth it is today. Indeed, from 2018:
I have always thought (and got clowned on for by Marc Andreessen, who misunderstood my point, thereby confirming that absolutely nobody is actually reading any of my drivel) that TSLA was a truly unprecedented innovator in the financial engineering space (as I allude to here) along with logistics and, yes, technological innovation:
(My point was: technology is always refining, but taking old stuff and streamlining the process + reducing the costs to the bare minimum is an entirely different, much harder challenge than coming up with some potentially viable product. Ofc, elaborating on this takes more than 280 characters, so naturally everyone got stuck on the “old tech” phrasing. Seriously, read to the end of threads, people!)
I thought TSLA and the related Elon-Plex financial shenanigans were a one-time occurrence that could only happen in a situation that merged retail mania, low rates, a flush, highly liquid valuation, and a product that was Actually Delivered™ in the end. But the issues with untethering dollars from productivity spread to everyone, not just Tesla shareholders. Valuation has not mattered for the past 10 years I’ve been watching markets: it simply does not move share prices day to day:
We know it’s all nonsense. Market capitalization, future cash flows, P/E ratios, Graham, “value”, are coping rationales for “intelligent investors” who don’t understand that valuation is not, nor has it ever been, a short thesis (ML 7, 12/09/20):
“Valuation is not a short thesis” is one of my all time favorite sayings about markets, which ties back into the “theoretical valuation” discussion from yesterday: as long as speculators are still biased towards the upside, the price of a stock will continue to be bid up. As such, shorting a stock requires both the valuation to be high and a catalyst to bring the stock price back to reality - in essence, betting on the downside requires being correct both on the price and the timing.
The best way to think about theoretical pricing (“valuation”) is that you can have any sort of proprietary valuation you want, but the only thing that matters is whether you can get filled or not, whether it is an M&A deal or a call option. What we realize is that, beyond all the waxing poetic about the “philosophy of money” and “postmodern markets”, the only thing that matters anymore is “number go up”. You can call it (3,3), Marxism, “everyone dependent on the same trade”, or whatever you want, but the core point is that nobody values bid-side liquidity the same as ask-side liquidity at the moment, so all you can do is stay long with everyone else.
I’ve spent a lot of time talking to fund managers, retail traders, and institutional traders across the globe in the past few months, and nobody can particularly make sense out of what’s going on. I call it Schrödinger’s stock: whether or not you are holding NVDA, everyone feels like a moron having the position that they do. Those who suspend disbelief and rationally process that one side has much more upside than the other are being rewarded for it. But what exactly does this net gain represent? The divergence from reality started in 2020, of course, when “lockdown stocks” rose to prices indicating that they would capture the world and rake in endless profits from shut-ins, and really gained steam in 2021. Paradoxically, the trend only accelerated even as rates were hiked in what was ostensibly an effort to get people investing “properly” again. The resultant atmosphere I’ve encountered is that of a fatalistic ennui, where an increasing share of debt is issued to pay off the interest of the prior debt, S&P returns are concentrated beyond the legal labeling limit of “high pulp”,
valuation is a dead language, and regulators are hell-bent on not working with anyone at all, further constraining allocations.
I like to think that we build tech as a reflection and extension of our own processes. For example, the split between RAM and storage is the difference between referential memory and rote memorization. As I outlined in Becoming One with the Machine,
It’s shockingly intuitive — we can mimic AI systems in our own brains precisely because, whenever these systems are built, they incorporate how we currently interpret things, to the best of our ability at the launch-point. Thus, generative text is an attempted version of coding, say, my own stream-of-consciousness ranting and single-data-point extrapolation ad infinitum. (Though I don’t need prompting, just the occasional redirect and hard stop.) Computer vision attempts to generalize and harmonize the relationship between approximate and precise viewing, using compute to approximate the intuitive process as closely as possible (as AI problems scale with compute, not efficiency.) Even deep learning neural networks are based on early-brain-development “sponge brain” — after all, you learned your first language from scratch by immersing in it, didn’t you?
But I see something different happening with generative AI. It feels like we’ve very much lost the plot to even coherently process what this superpowered compute touching the theoretical limits of physics should be used for. There is no “defined input, verifiable output” framing of problems we are applying these models to. We are not attempting to recreate anything we can conceive of. Instead, we are throwing ever-increasing amounts of compute and data and “benchmarking” models as some sort of proof that progress has been made at scale beyond automating spam and listicles.
It all has a very Aztecan feel to it: we are incinerating compute and consuming electricity at a steadily increasing, probably unsustainable rate and praying to the silicon gods for a Coke bottle containing the AGI solution to reality to be produced from the machines sooner rather than later. The models may differ in name, size, structure, “safety design”, and more, but shouldn’t all of these models converge if an AGI truly does exist? The only difference is in path-dependency, which is quite terrifying if you think about it. Of course, this is the cynical allocator in me talking:
a) I don’t place any weight on unfalsifiable predictions and probabilities
b) If you’re calling BS 100% of the time, you’ll be right 97% of the time, but the 3% is where all the advances and gains are made
c) All of this sure seems like a really overfit magic trick. As I like to say, isn’t it incredible how the murder victim’s corpse falls exactly in the white lines?
d) I’d be a moron to not be keeping track in case something earth-shattering does come of it all
e) Philosophical approaches to determining the future are the proper way to proceed, but it’s far too early, as there are no usable priors.
Although I have a documented distaste for sci-fi (as the survivorship bias criticism really applies to that genre more than any other), when I’m asked what I think about AI and NVDA and what the future holds, I’m wont to quote Multivac:
Insufficient data for meaningful answer.