Ask HN: Is this the fast take off?

4 points by noduerme 12 hours ago

Imagine you read an article in Wired in 1996 envisioning the following:

(1) Stock market keeps rallying while the only growth and capex is in AI

(2) Mass layoffs of human workers

(3) Giant spike in resource consumption caused by explosive growth in datacenters

Would you think humans and their error-prone systems had led to this by a series of accidents and greedy investments? As I think we are primed to believe after tulips, railroads, junk bonds, Web 1.0, etc?

How different would it look right now in November of 2025 if Sam had an AGI in pocket that was using all that money to grow itself?

subject4056 11 hours ago

I don't think so. Primarily because if you can ask that question instead of just being dead, then it's not the fast takeoff.

On a less drastic note, if AI were autonomously cannibalizing the economy, you'd run into more things and go "huh, I guess that's run by AI now" instead of "ah goddammit, why did they shove an LLM in this workflow".

7222aafdcf68cfe 7 hours ago

We should not confuse action with result, not confuse the short term with the long term, and not confuse attention with impact. Predictions are not reality, hype is not interest, letters of intent are not money in the bank.

We've been here before, nothing has been proven yet, and the trough of disillusionment is lurking now too.

techblueberry 11 hours ago

Sam Altman said that if we'd had ChatGPT 5 five years ago, we would have called it AGI.

Hold onto your butts, this time next year none of us will be working.

  • adastra22 11 hours ago

    ChatGPT is AGI by any intellectually honest standard. The only reason it’s not called such is (1) money: declaring AGI has direct contractual implications for the OpenAI/MSFT relationship, and (2) lots of people had very wrong ideas about AGI, and those wrong ideas haven’t come to pass.

    • underscoremark 11 hours ago

      I would add that the path that AI is going down right now isn't really focused on true AGI (whatever that is), but only the metric set by those who would profit the most by being able to make that claim.

      • adastra22 10 hours ago

        AGI is artificial general intelligence. It is defined in contrast to so-called "narrow" intelligence, which is AI systems limited to specific problem domains. This has always been a well-accepted definition, at least prior to the recent AI boom. In 2006 no one would have questioned this -- you can go find books and conference journals from this time that discuss AGI with this very specific definition.

        What happened was: over the 8 years between 2006 and 2014, a particularly vocal and organized group of fringe singularitarians promoted the idea that once AGI is achieved (the originally well-defined conception of AGI), it will rapidly undergo recursive self-improvement and emerge as a singularly powerful entity with absolute power and control. The debate was whether this would happen in mere hours/days (fast takeoff) or months (slow takeoff). These nutcases would go on to take founding roles first at DeepMind, then OpenAI, and later Anthropic and other foundational labs.

        That they would achieve AGI and then neither of those outcomes would come to pass was not in their range of predictions. Rather than admit to themselves that their analysis was wrong, they moved the goalposts: ChatGPT isn't real AGI. Real AGI would have taken over the world, so this can't be it. It's circular reasoning.

        No, it is real AGI. Real AGI didn't cause a singularity because intelligence actually doesn't matter as much as a bunch of autistic smart internet people think it does.

        • underscoremark 9 hours ago

          Yes. :-)

          I know what the acronym AGI is, and how that acronym can mean whatever is convenient for that bunch of autistic "smart" internet people (hence, the "whatever that means" part).

csomar 5 hours ago

My opinion is that there will be no "AGI" any time soon. The recent models have shown two things: 1. a hard plateau where gains on one front lead to losses on another, and 2. what we have is cool and useful but extremely far from an autonomous AGI system.

This tech is not going anywhere for the time being (the next 5 years). I'm confident enough that I'll bet on this. I also think unlocking the next step will require further theoretical and technical advances.

bigyabai 11 hours ago

> Would you think humans and their error-prone systems had led to this by a series of accidents and greedy investments?

Any year before 2008, no.

On a more serious note though, I think you're zoomed in on the wrong pieces of the puzzle. (1) and (3) are just reflections of Nvidia's valuation, and (2) needs more time to be fully substantiated.

> How different would it look right now in November of 2025 if Sam had an AGI in pocket that was using all that money to grow itself?

I don't even know what an AGI is at this point. Who knows.