The one issue is the accuracy of these AI models: you can't really trust them to do a task fully, which makes it hard to fully automate things with them. But the other is cost. Anyone using these models to do something at scale is paying maybe 100X over what it would cost in compute to run deterministic code to do the same thing. So in cases where you can write deterministic code to do something, or build a UI for a user to do it themselves, that still seems to be the best way. Once AI gets to the point where you can fully trust some model, then we've probably already hit AGI, and at that point we're probably all in pods with cables in our brainstems, so who cares...
The thing is that I don't use AI to replace things I can do deterministically with code. I use it to replace things I cannot do deterministically with code - often something I would have a person do. People are also fallible and can't be completely trusted to do the thing exactly right. I think it works very well for things that have a human in the loop, like coding agents where someone needs to review changes. For instance, I put an agent in a tool for generating AWS access policies from English descriptions, or answering questions about current access (where the agent has access to tools to see current users, bucket policies, etc.). I don't trust the agent to do it exactly right, so it just proposes the policies and I have to accept or modify them before they are applied, but it's still better than writing them myself. And it's better than having a web interface do it, because that lacks context.
I think it's a good example of the kind of internal tools the article is talking about. I would not have spent the time to build this without Claude making it much faster to build stand-alone projects, and I could not have built the English -> policy agent without LLMs.
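Roughly, the shape of it is a propose-then-approve loop. A minimal sketch (simplified, not the actual tool; boto3 is assumed for the apply step, and draft_policy_with_llm is a hypothetical stand-in for the agent call):

    import json
    import boto3

    def draft_policy_with_llm(description: str) -> dict:
        """Hypothetical placeholder: call your LLM/agent here and parse its JSON output."""
        raise NotImplementedError

    def apply_policy(user_name: str, policy_name: str, description: str) -> None:
        iam = boto3.client("iam")
        proposal = draft_policy_with_llm(description)

        # Human in the loop: show the proposed document and require explicit approval.
        print(json.dumps(proposal, indent=2))
        if input("Apply this policy? [y/N] ").strip().lower() != "y":
            print("Aborted; refine the description or edit the document and retry.")
            return

        iam.put_user_policy(
            UserName=user_name,
            PolicyName=policy_name,
            PolicyDocument=json.dumps(proposal),
        )

The LLM never applies anything directly; a person sees the exact JSON that will be attached before it goes anywhere.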
>> The thing is that I don't use AI to replace things I can do deterministically with code. I use it to replace things I cannot do deterministically with code - often something I would have a person do.
Nailed it. And the thing is, you can (and should) still have deterministic guard rails around AI! Things like normalization, data mapping, validations etc. protect against hallucinations and help ensure AI’s output follows your business rules.
> Things like normalization, data mapping, validations etc. protect against hallucinations
And further downstream: audit trails, human sign-offs, and operations that are reversible or have a compensating workflow to fix things up.
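For the validation piece, a minimal sketch of a deterministic guard rail that checks an LLM-proposed IAM-style policy against hard rules before a human even reviews it (the allowlist and rules here are illustrative, not a real ruleset):

    ALLOWED_ACTIONS = {"s3:GetObject", "s3:ListBucket", "s3:PutObject"}

    def validate_policy(policy: dict) -> list[str]:
        """Return a list of rule violations; an empty list means the proposal passes."""
        errors = []
        for stmt in policy.get("Statement", []):
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            for action in actions:
                if "*" in action:
                    errors.append(f"wildcard action not allowed: {action}")
                elif action not in ALLOWED_ACTIONS:
                    errors.append(f"action outside allowlist: {action}")
            resources = stmt.get("Resource", [])
            resources = [resources] if isinstance(resources, str) else resources
            if "*" in resources:
                errors.append("policy must not target all resources")
        return errors

Anything the model hallucinates outside the allowlist gets rejected deterministically, no matter how plausible it looks.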
The article kind of addresses that by identifying the types of problems AI is best suited to solve.
I like the engineering part at the top, but projecting AI perspectives blindly through the lens of LLMs is effectively "looking backwards".
So this is nice
> productionizing their proof-of-concept code and turning it into something people could actually use.
because it's so easy to glamorize research, while ignoring what actually makes ideas products.
This is also the problem. It's a looking-back perspective, and it's easy to miss the forest for the trees when you're down in the weeds. I'm speaking from experience, and it's a feeling I get when reading the post.
In the grand scheme of things our current "AI" will probably look like a weird detour.
Note that a lot of these perspectives are presented (and thought) without a timeline in mind. We're actually witnessing timelines getting compressed. It's easy to see the effects of one track while missing the general trend.
This take looks at (arguably over-indexes on) the LLM timeline, while missing everything else that is happening.
> Creating AI models is hard, but working with them is simple
I'm not disagreeing with the overall post, but from closely observing end users of LLM-backed products for a while now, I think this needs nuance.
The average Joe, be it a developer, a random business type, a school teacher, or your mum, is very bad at telling an LLM what it should do.
- In general, people are bad at expressing their thoughts and desires clearly. Frontier LLMs are still mostly sycophantic, so in the absence of clear instructions they will make things up. People are prone to treating the LLM as a mind reader, without critically assessing whether their prompts are self-contained and sufficiently detailed.
- People are pretty bad at estimating what kind of data an LLM understands well. In general, data literacy and basic data manipulation skills are beneficial when the use case requires operating on data beyond natural language prompts. This is not a given across user bases.
- Very few people have a sensible working model of what goes on in an autoregressive black box, so they have no intuition for managing context.
User education still has a long way to go, and IMO it is a big determining factor in whether people get any use at all from the shiny new AI stuff that gets slathered onto every single software product these days.
The higher your executive function, the more use you will get out of LLMs. This is really the skill you should be testing for in interviews now. Not letting a candidate use AI for their interview is no longer a useful evaluation. I want to see how you use it: do you prompt well, and how much do you trust and verify what it outputs?
Is "executive function" the thing you mean here? You need a lot of self control to write good prompts?
Prompt engineering is a transitory phase. Embedding it into existing tools, so that the 80-90% of regular prompt patterns can be worked into the UI (or a contextual UI designed around how a user uses the product), is the next step.
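As a sketch of what that could look like (template names and wording here are hypothetical), the app maps a button or small form to a vetted prompt template instead of exposing free-form chat:

    PROMPT_TEMPLATES = {
        "summarize_ticket": (
            "Summarize the following support ticket in three bullet points, "
            "then suggest a severity (low/medium/high).\n\nTicket:\n{ticket_text}"
        ),
        "draft_reply": (
            "Draft a polite reply to this customer email. Keep it under 120 words "
            "and do not promise refunds.\n\nEmail:\n{email_text}"
        ),
    }

    def build_prompt(action: str, **fields: str) -> str:
        # Map a UI action (e.g. a "Summarize" button) to a fully specified prompt.
        return PROMPT_TEMPLATES[action].format(**fields)

    # e.g. the Summarize button would call:
    # prompt = build_prompt("summarize_ticket", ticket_text=selected_ticket_body)

The user never sees or writes the prompt; the product team iterates on the templates.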
Yeah if you mean that LLMs are used with UIs on top that allow "too much magic", I agree.
Free form chat is pretty terrible. People just want the thing to (smartly) take actions. One or two buttons that do the thing, no prompting involved, is much less complicated.
The whole point is the prompt (plus a static set of system prompts). If your whole function as a human is clicking one of a set of buttons to trigger an AI action, then you are automatable in a few lines of code (and the AI is supposedly better than you at deciding which button to click anyway).
There are like thousands of wrappers around LLMs masquerading as AI apps for specialized use cases, but the real performance of these apps is bottlenecked only by the LLM's performance, and their UIs generally just get in the way of the direct LLM access/feedback loop.
To work with LLMs effectively you need to understand how to craft good prompts, and how to read/debug the responses.
Is this a “you’re holding it wrong, you idiot” post?
No no, definitely not meant to be condescending; our UI paradigms don't align with users who just want to get stuff done. See what dayvid writes, too.
And everyone's being sold on this tech being super magic, but for some questions there is an irreducible complexity you have to deal with, and that still takes effort.
> What I do see is a boom in internal tools.
It’s easy now to get something good enough for use by you, friends, colleagues etc.
As it’s always been, developing an actual product is at least one order of magnitude more work. Maybe two.
But both internal tools and full products are made one order of magnitude easier by AI. Whole products can be made by tiny teams. And that’s amazing for the world.
> Everything got one order of magnitude easier thanks to AI.
No. Not at all. Many things maybe got easier but a lot of things got magnitudes harder. Maintaining bug bounty programs for example, or checking the authenticity and validity of written content on blogs.
Calling LLMs a huge win for humanity is incredibly naive given we don't know the long-term effects these tools are having on creativity in online spaces, the authenticity of user bases, etc.
A thing doesn’t have to be universally good to be good.
Obviously, but I'd refrain from calling something good without knowing the extent of its damage.
Agreed. We don't know if AI is another asbestos or not, yet.
This article aligns very well with my frustration about the current view of AI in media and discussion.
> AI tools like KNNs are very limited but still valuable today.
I've seen discussions calling even feed-forward CNNs, Monte Carlo chains, or GANs "antiquated" because transformers and diffusion have surpassed their performance in many domains. There is a hyper-fixation on large transformers and a sentiment that they somehow replace everything that came before, in every domain.
It's a tool that unlocks things we could not do before. But it doesn't do everything better. It does plenty of things worse (at least once power and compute are taken into account). Even if it can do algebra now (as is so proudly proclaimed in the benchmarks), Wolfram Alpha remains, and will continue to remain, far better suited to the task. Even if it can write code, it does NOT replace programming languages, as I've seen people claim in very recent posts here on HN.
> AI as a product isn’t viable: It’s either a tool or a feature
This correlates with the natural world. Intelligence isn’t a direct means of survival for anything. It isn’t a requirement for physical health.
It is an indirect means, I.e. a tool.
"There’s a reason we’re not seeing a “Startup Boom” AI skeptics ask, “If AI is so good, why don’t we see a lot of new startups?” Ask any founder. Coding isn’t even close to the most challenging part of creating a startup."
-- uhhh... am I the only one seeing a startup boom??? There are a bajillion kids working on AI startups these days.
Nope, you're spot on.
AI deals continued to dominate venture funding during the third quarter. AI companies raised $19 billion in Q3, according to Crunchbase data. That figure represents 28% of all venture funding.
The fourth quarter of 2024 has been no less busy for these outsized rounds. Elon Musk’s xAI raised a behemoth $6 billion round, one of seven AI funding rounds over $1 billion in 2024, in November. That’s just months after OpenAI raised its $6.6 billion round.
https://techcrunch.com/2024/12/20/heres-the-full-list-of-49-...
The summary of this post: you can't rely on them directly for business value, but you can rely on them indirectly.
>> What I do see is a boom in internal tools.
This has been my main use case for AI. I have lots of ideas for little tools to take some of the drudgery out of regular work tasks. I'm a developer and could build them, but I don't have the time. However, they're simple enough that I can throw them together in basic script form really quickly with Cursor. Recently I built a tool to analyse some files, pull out data, and give it to me in the format I needed - a relatively simple Python script. Then I used Cursor to put it together with a simple file-input UI in an Electron app so I could easily share it with colleagues. Like I say, I've been a developer for a long time, but I've never written Python or packaged an Electron app, and this made it so easy. The whole thing took less than 20 minutes, and it was quick enough that I could do it as part of the task I was doing anyway rather than as additional work I needed to find time for.
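For a sense of what these throwaway tools look like, here's a sketch in the same spirit (not the actual tool; the file layout and column names are invented for illustration): walk a folder of CSV exports, pull out a couple of fields, and write them out as one combined file.

    import csv
    from pathlib import Path

    def extract(input_dir: str, output_path: str) -> None:
        rows = []
        for csv_file in sorted(Path(input_dir).glob("*.csv")):
            with csv_file.open(newline="", encoding="utf-8") as f:
                for record in csv.DictReader(f):
                    rows.append({
                        "source": csv_file.name,
                        "id": record.get("order_id", ""),
                        "total": record.get("amount", ""),
                    })
        with open(output_path, "w", newline="", encoding="utf-8") as out:
            writer = csv.DictWriter(out, fieldnames=["source", "id", "total"])
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        extract("exports", "combined.csv")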
Many good points in the article, but I'd caveat that if you judge performance only by ELO score, you are not applying the best criteria.
SWEBench performance moved from 25% to 70% solved since the beginning of 2025, and even with the narrowest possible lens, from 65% to 70% since May. ARC-AGI-2 keeps rapidly climbing. We have experimental models able to (maybe) hold their ground at IMO gold, as well as performing at IPhO gold level.
And that leaves out the point that LMArena is a popularity contest. Not "was this correct", but "which answer did you like better". The thing that brought us glazing. A leveling ELO (or a very slowly climbing one) is kind of expected, and is really saying nothing about progress in the field.
Still doesn't mean "THE MACHINE GODS ARE COMING", but I would expect to see continued if slowing improvement. (I mean, how much better can you get at math if you already can be a useful assistant to Terence Tao and win IMO gold? And wouldn't we expect that progress to slow?)
But more than "how hard of a problem can you solve", I expect we'll see a shift to instead looking at missing capabilities. E.g. memory - currently, you can tell Claude 500 times to use uv, not pip, and it will cheerfully not know that again the 501st time. That's much more important than "oh, it now solves slightly harder problems most of us don't have". And if you look at arxiv papers, a lot are peeking in that direction.
I'd also expect work on efficiency. "Can we not make it cost about the amount of India's budget to move the industry forward, every year" is kind of a nice idea. No, I'm not making the number up, or taking OpenAI's fantasy numbers - 2025 AI industry capex is expected to be $375B. We'll likely need that efficiency if we want to get significantly better at difficulty level or task length, too.
nit: space bar doesn't scroll
This list isn't learnings.
It doesn't. It's a jump scare, and a stupid one at that.
I'm not sure what you mean. This is the vanilla Substack layout.
EDIT: Removed the video because a bug in Substack causes the space bar to play the video instead of scrolling down. Sorry for the unintentional jumpscare.
It means scrolling with the space bar is janky and broken, instead of smooth and working properly.
Removed the video because a bug in Substack causes the space bar to play the video instead of scrolling down. Sorry for the unintentional jumpscare.
Oh I didn't even see my video. I honestly thought it was a sick joke, lol. My speakers were turned way up.
Seems to make noise (I closed the page ASAP so not sure what it actually did.)
holy shit lol. I should've trusted you but I hit spacebar and was NOT expecting that hahaha.
Oh, I wanted to see what you guys are talking about, but I just get a scroll. I wonder if they changed it already.