Will work on a response to this. My thesis is that, as demand for nerds decreases, humanity will be optimized for war fighting. Robots are energetically efficient, and the limitations on battery size and weight cannot be overcome by just building larger data centers.
If you do write an article I will try to address it.
DMed u
Thank you for taking the time to cover this topic. It is always a joy to see political commentators of a certain leaning taking steps into AI discourse.
With that being said: likening RLVR to self-play is simply wrong. RLVR is infinite rollout data, not infinite evaluators. There is no meaningful (public) scheme for pure self-play bootstrapping of text intelligence à la AlphaZero.
Like AlphaGo, modern LLMs need pretraining (human text supervision) and handmade environments (human labor *per task type*). There is no working notion of an "environment for everything" that permits unbounded ECI growth; the potential for AI capabilities growth in 2025 is bounded by the definitions of its evals (admittedly still a very wide bound, but not one that fully circumscribes human capabilities), and those evals are still primarily developed by human labor. You can't LLM-as-a-judge your way to corporate secrets or visuomotor policies.
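To make the per-task-type point concrete, here's a toy sketch (hypothetical task names and verifiers, nothing from any real training stack). The rollouts are effectively free; the evaluators are not.

```python
# Toy illustration (hypothetical verifiers, not any lab's real pipeline):
# RLVR gives you unlimited *rollouts*, but every task type still needs a
# human-authored verifier. Contrast AlphaZero-style self-play, where the
# game rules score every rollout for free.

def check_math(prompt: str, completion: str) -> float:
    """Human-written verifier for one narrow task type: exact-answer math."""
    known_answers = {"What is 7 * 8?": "56"}  # labeled by a person
    answer = known_answers.get(prompt)
    return 1.0 if answer is not None and completion.strip().endswith(answer) else 0.0

def check_code(prompt: str, completion: str) -> float:
    """Human-written verifier for another task type: run a hand-made unit test."""
    namespace = {}
    try:
        exec(completion, namespace)  # execute the model's generated code
        return 1.0 if namespace["square"](5) == 25 else 0.0
    except Exception:
        return 0.0

# The "environment" is just a registry of human-made verifiers, one per task
# type. Adding a new task type means a human writes a new checker; there is no
# generic entry for corporate strategy, wet-lab work, or visuomotor control.
VERIFIERS = {"math": check_math, "code": check_code}

def rlvr_reward(task_type: str, prompt: str, completion: str) -> float:
    return VERIFIERS[task_type](prompt, completion)

if __name__ == "__main__":
    print(rlvr_reward("math", "What is 7 * 8?", "The answer is 56"))                    # 1.0
    print(rlvr_reward("code", "Write square(x).", "def square(x):\n    return x * x"))  # 1.0
```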
Of course, I fully expect the aforementioned bottleneck to be solved soon. But this is the most important miss of the article, and I hope as many readers as possible note it.
Other notes below.
---
ECI and other "AI IQ" indices are also naturally tainted by the same [what's measured] -> [what's evaluable] -> [what's RLVR'd] issue. They don't give the same assurance of generalizability that human IQ does -- for an AI, you *can* actually train on every exam, and there is no meaningful bottleneck on how much compute a single model can throw at a problem.
That doesn't mean ECI or other benchmarks aren't meaningful, of course. A lot is getting automated.
The issue with AI corporate financials isn't the rate of revenue vs. market-cap growth, it's the base ratio. OpenAI's P/E ratio is absurd no matter how you measure it; the "bubble" is its current multiple rather than the rate of change.
Importantly, *if the current trendlines continued for 3 years*, OpenAI would have a P/S of ~25 at a ~$22.5T market cap (and a still-negative P/E by their own projections!). A downturn in investor sentiment in the AI sector in 2028 would necessarily lead to a massive market-cap contraction. Even with amazing technology, OpenAI needs a far larger revenue multiplier to justify its position.
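Back-of-envelope on those figures (my own rough numbers, not OpenAI's guidance), just to show what the multiple implies:

```python
# Quick sanity check on the "P/S of ~25 at ~$22.5T" figure above
# (hypothetical extrapolated values, not projections from OpenAI).
market_cap = 22.5e12      # ~$22.5T if current trendlines ran ~3 more years
price_to_sales = 25       # the P/S multiple quoted above
implied_revenue = market_cap / price_to_sales
print(f"Implied annual revenue: ${implied_revenue / 1e9:,.0f}B")  # -> ~$900B
```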
The comparisons of ChatGPT growth to other historical technologies are always absurd. Especially in the case of the internet -- the internet is ChatGPT's distributor; it's hardly a surprise the former took far longer to bootstrap its communications than the latter.
Comparing startup revenue in isolation is ugly: there's no normalization for differences in investment, when VCs naturally pump AI startups with far more resources in the current context. Although I still support Collison's interpretation, it's in poor taste to use the comparison as evidence-of rather than as something to-be-explained.
Strongly agree that, despite everything I've mentioned above, AI policy is the most important and impactful political issue of our era.
This makes a much stronger case for AI far exceeding human capabilities than my post did. Following on from this, I wonder whether the other premises in my 'AI safety' argument are also right:
https://substack.com/@evoreal/note/c-191473388?r=3aaohd&utm_source=notes-share-action&utm_medium=web
And cheers for the mention!
There's a lot to say about AI alignment. I intended to include it in this post but decided to focus on where AI is and on the current trends, given the length.
That said, the AI evolution argument results in very similar outcomes to the "gradual disempowerment" timelines. https://gradual-disempowerment.ai/
These sorts of timelines occur when there are many model providers and longer timelines to ASI. This is probably solved by the population caring enough about AI, but the surveys aren't optimistic on that front. In my opinion, this misalignment variant is the most likely.
Very interesting. Any thoughts on what careers would be best after AI becomes significantly better?
Working as a wage-slave laborer constructing data centers, with an AI boss impatiently directing your actions like a meat puppet through augmented-reality goggles.
So cringe that you actually believe that.
George Hotz elegantly explained why that's not going to happen in his debate with Yudkowsky.
I salute your skepticism.