wolframhempel 20 hours ago

I believe there are two kinds of skill: standalone and foundational.

Over the centuries we’ve lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding, sword fighting or my inability to navigate by the stars.

My logic, reasoning and oratory abilities, on the other hand, as well as my understanding of fundamental mechanics and engineering principles, would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th-century France.

I believe AI is fine to use for standalone skills in programming. Writing isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you’re not deeply familiar with, is a great help and timesaver.
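
As a toy illustration (my own sketch, in TypeScript, of the sort of isolated snippet I mean):

    // Returns a random CSS color string like "#3fa2c8".
    function getRandomHexColor(): string {
      // Pick a random 24-bit integer and format it as six hex digits.
      const n = Math.floor(Math.random() * 0x1000000);
      return "#" + n.toString(16).padStart(6, "0");
    }

Nothing about it depends on the rest of the codebase, which is exactly why delegating it feels low-risk.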

On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem solving and software design abilities.

Fortunately, AI is quite good at the former, but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.

  • flowerthoughts 17 hours ago

    I'd classify this as theoretical skills vs tool skills.

    Even your engineering principles are probably superior to those of the ancient Greeks, since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."

    My point being that theory (and thus what is considered foundational) has progressed as well.

  • politelemon 20 hours ago

    > horse-riding, sword fighting or my inability to navigate by the stars.

    Better, more suitable examples would be warranted here; none of these were as widespread or common as you'd assume, so little to no metaphorical scoffing would happen for those. Now, sewing and darning, and subsistence skills, while mundane, are uncommon for many of us.

    • sshine 15 hours ago

      For some strange reason, I’m better at sewing than both my wife and mother-in-law. I learned it in public school, when both genders learned both woodworking and sewing, and maintained an interest so that I could wear “grunge” in the 1990s. The teachers I had could still remember, from earlier in their careers, when those classes were gendered.

  • codebra 16 hours ago

    > still far from being able to do the latter

    These models have been in wide use for under three years, AI IDEs for barely a year. Gemini 2.5 Pro is shockingly good at architecture if you make it into a conversation rather than expecting a one-shot exercise. I share your native skepticism, but the pace of improvement has taken me aback and made me reluctant to stake much on what LLMs can’t do. Give it 6 months.

  • sceptic123 17 hours ago

    Taking your SQL example: if you don't properly understand the SQL dialect, how can you know that what the AI gives you is correct?

    • LiKao 16 hours ago

      I'd say because, psychologically (and also based on CS theory), creating something and verifying it draw on similar but ultimately separate skills.

      It's like NP: finding a solution to an NP-complete problem can be very hard, but verifying that a proposed solution is correct is easy.
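
      As a toy sketch of that asymmetry (a made-up subset-sum example in TypeScript, nothing to do with SQL specifically):

          // Checking a proposed answer is a one-liner: does the subset sum to the target?
          function verifySubsetSum(subset: number[], target: number): boolean {
            return subset.reduce((acc, x) => acc + x, 0) === target;
          }

          // Producing an answer is the hard part: brute force tries every subset.
          function findSubsetSum(nums: number[], target: number): number[] | null {
            for (let mask = 0; mask < (1 << nums.length); mask++) {
              const subset = nums.filter((_, i) => (mask >> i) & 1);
              if (verifySubsetSum(subset, target)) return subset;
            }
            return null;
          }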

      You might not know the statements required, but once the AI reminds you which statements are available, you can check that the logic built from them makes sense.

      Yes, there is a pitfall of being lazy and forgetting to verify the output. That's where a lot of vibe coding problems come from in my opinion.

      • sceptic123 14 hours ago

        The biggest problem with LLMs is that they are very good at presenting something that looks like a correct solution to someone who doesn't have the knowledge required to confirm whether it actually is.

        So my concern is more "do you know how to verify" rather than "did you forget to verify".

  • globular-toast 20 hours ago

    This is a great comment and says what I've been thinking but hadn't put into words yet.

    Too many people think what I do is "write code". That is incorrect. What I do is listen, read, watch and think. If code needs writing then it already basically writes itself because at that point I've already done the thinking. The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow.

    AI tools make the easy stuff easier. They don't help much with hard stuff. The most useful thing I've found them for is getting an initial orientation in a completely unfamiliar area. But after that, when I need hard details, it's books, manuals, blogs etc just like before. I find juniors are already lacking in their ability to find and assimilate knowledge and I feel like having AI isn't going to help here.

    • namaria 19 hours ago

      Abstracting away the software paraphernalia makes this more clear in my view: our job is to understand and specify abstract symbolic systems. Making them work with the current computer architectures is incidental.

      This is why I don't see LLM assisted coding as revolutionary. At best I think it's a marginal improvement on indexing, search and code completion as they have existed for at least a decade now.

      NLP is a poor medium for specifying abstract symbolic systems. And LLMs work by finding patterns in latent space, I think. But the latent space doesn't represent reality, it represents language as recorded in the training data. It's easy to underestimate just how much training data was used for the current state-of-the-art foundational models. And it's easy to see how well these tools weave language and, by induction, attribute reasoning abilities to them.

      The intuition I have about these LLM-driven tools is that we're adding degrees of freedom to the levers we use. When you're near an attractor congruent with your goals, it feels like magic. But I think this is overfitting: the things we do now are closely mirrored by the data used to train these models. But as we move forward in terms of tooling, domains, technology, culture etc, the data available will become increasingly obsolete, and relevant data increasingly scarce.

      Besides, there's the problem of unknown unknowns: lots of people using these tools assume that the attractor they see pulling on their outcome is adequate, because they can only see some arbitrary surface of it. And since they don't know what geometries lie beneath, they end up creating and exposing systems with several unknown issues that might have implications in security, legality, morality, etc. And since there's a time delay between their feeling of accomplishment and the surfacing of issues, and they will likely use the same approach again, we might be heading for one hell of a bullwhip effect across dimensions we can't anticipate at all.

satvikpendem 20 hours ago

I do the same now: I don't use Cursor or similar edit-level AI tools anymore. I just use inline text completions and chat to talk through a problem, and then I'll copy-paste anything needed (or rather type it in manually, just to have more control).

I literally felt myself getting AI brain rot, as one Ask HN put it recently: it felt like I started losing brain cells, depended too much on the AI over my own thinking, and felt my skills atrophy. In the future, I sense there will be a much wider gap between those who truly know how to code and those who, well, don't, due to such over-reliance on AI.

  • greyman 20 hours ago

    I also stopped using Cline, as well as Claude Desktop + MCPs. Gemini, for example, is rushing forward, and Google is surely putting huge resources into developing it; if in a matter of months AI will be able to implement an additional feature by itself zero-shot, why bother with an IDE?

    • satvikpendem 20 hours ago

      And what will you do when that zero shot doesn't work, and continues not to work? It will always be necessary to dig in and manually change things, hence an editor or IDE will continue to be needed.

      • greyman 19 hours ago

        Yes, this happens. Then I use a "dumb" IDE like GoLand... or rather, I never stopped using it. My point is that I currently don't invest my time into learning an "agentic IDE" like Cursor, since I'm not sure it will be useful in the future.

mentalgear 20 hours ago

I also do most of my coding artisanal, but I use LLMs for semantic search, to enrich the research part.

Definitely never trust an LLM to write entire files for you, at least not if you don't want to spend more time on code review than on writing, or if you expect to maintain the result.

Also, a good quote regarding the AI tools market:

> A lot of companies are creating FOMO as a sales tactic to get more customers, to show traction to their investors, to get another round of funding, to generate the next model that will definitely revolutionize everything.

  • alfiedotwtf 5 hours ago

    > I also do most of my coding artisanal

    Off-topic, but I just wanted to say I love this as a statement!

nesk_ 20 hours ago

I've recently disabled code completions; it's too much mental workload to read all those suggestions for so little quality.

I still use the chat whenever I need it.

specproc 20 hours ago

Nicholas Carr has a nice book on the dynamic the author is describing [0], i.e. that our skills atrophy the more we rely on automation.

Like a lot of others in the thread, I've also turned off Copilot and have been using chat a lot less during coding sessions.

There are two reasons for this decision, actually. Firstly, as noted above, in the original post and throughout this thread, it's making my already fair-to-middling skills worse.

The more important thing is that coding feels less fun. I think there are two reasons for this:

- Firstly, I'm not doing so much of the thinking for myself, and you know what? I really like thinking.

- Secondly, as a corollary to the skill loss, I really enjoy improving. I got back into coding again later in life, and it's been a really fun journey. It's so satisfying feeling an incremental improvement with each project.

Writing code "on my own" again has been a little slower (line by line), but it's been a much more pleasant experience.

[0]: https://www.nicholascarr.com/?page_id=18

acron0 21 hours ago

This feels similar to articles with titles such as "Why every developer should learn Assembly" or "Relying on NPM packages considered harmful". I appreciate the core of truth inside the sentiment, and the author isn't _wrong_, but it won't matter over time. AI coding ability will improve, whether it's writing, debugging or planning. It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will. And yes, it won't be optimal or totally secure, and the abstractions might be questionable, but... how is that really different from most real software projects anyway?

  • fieldcny 20 hours ago

    Software is the connective tissue of the world. Generating mediocre-quality results (which is the best outcome you can hope for if you don’t really understand what you are looking at) is not just lazy, it can be dangerous. Do the world’s best engineers make mistakes? Of course they do, but that’s why building high-quality software is a collaborative process: you have to work with others to build better systems. If you aren’t, you are wasting your time.

    As of now (and this could change, but that doesn’t change the moral and ethical obligations), software engineers are richly rewarded specifically because they should be able to write and understand high-quality code; the code they write is the foundation on which our entire modern world is built.

  • tauchunfall 20 hours ago

    >It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will.

    What backs up this claim? And when will it reach that point?

    We could very well have reached a plateau right now, which would mean that looking at previous trends in improvement does not allow us to predict future improvements, if I understand it correctly.

  • yapyap 20 hours ago

    That is a hellish outlook on the future. To be clear, I don’t think you’re wrong: if companies can squeeze more out of devs by forcing them to use AI, I bet they will, move fast and break stuff and all that, but it’s still quite the bummer.

    • futuraperdita 20 hours ago

      I'd argue it's a hell many other people see daily, and we've been privileged to have the space to care about craft. Corporations have never cared about the craft. The business is paying me to make, and the moment they can get my level of quality and expertise from someone much cheaper, or a machine itself, I'm gone. That dystopia has always been present and we just haven't had to stare it down as much as some other industries have.

    • satvikpendem 20 hours ago

      I don't think it's really any different from how most products are made currently. Do you think most startups care about security and other things that would slow down their initial release? All the rest is tech debt that can be solved once product market fit is solved.

      The only thing I'd worry about is when no one knows how to solve these when everyone relies on AI.

      • mentalgear 20 hours ago

        > All the rest is tech debt that can be solved once product market fit is solved.

        Even then, it's mostly never.

        • satvikpendem 20 hours ago

          Indeed, most startups fail, so it doesn't really matter in the end how well their code is created.

    • ghaff 20 hours ago

      I don't have a real opinion of the value at this point but, to the degree that there are significant productivity enhancement tools available for developers (or many other functions), and they refuse to use them, companies should properly mark those folks down as low performers with the associated consequences.

      "I don't want to use the web."

      • reshlo 20 hours ago

        “It would enhance productivity” is not a sufficient justification for requiring someone to do something. Ignoring safety regulations would often enhance productivity, but I’m sure you understand why we shouldn’t do that.

        • satvikpendem 20 hours ago

          Ignoring safety regulations would not enhance productivity in the long term, so that example doesn't quite prove the point. Productivity enhancement in general is sufficient justification for a company, as otherwise they can simply fire you, hence, to them, it is sufficient.

          • ghaff 18 hours ago

            I was assuming that other requirements associated with the software were otherwise met. If you're simply less productive, all other things being equal, you should probably at least be eased out, especially if you're refusing to use appropriate tools (assuming those tools actually do enhance productivity).

rahkiin 21 hours ago

I only use the line-completion AI that comes with Rider. I think it is a reasonable mix: classic code completion but with a bit more smarts to it, like suggesting a string for a Console.Write. But it does not write new lines, as indicated by the author.

didip 20 hours ago

Why? Clearly AI tools make life easier.

I could drive a manual car, but why? Automatic transmission is so much more convenient. Furthermore, for some use-cases FSD is even more convenient.

Another example: I don't want to think about the gait movement of my robots, I just want them to move from A to B.

With programming, same thing: I don't want to waste time typing `if err != nil {}`, I want to think about the real problem. Ditto on happy-case unit tests. I don't want to waste my carpal tunnel prone wrists on those.

So on and so forth. Technology exists to make life more convenient. So why reject technology?

  • alfiedotwtf 5 hours ago

    The fun factor: as an example, I specifically bought a manual car because automatics are boring to drive :)

jamez1 20 hours ago

Skill loss works both ways. You might miss out on forming early skills in using LLMs effectively, and end up playing catch-up 3-5 years from now if LLMs render all the skills you hold void.

It is also likely LLMs will change programming languages; we will probably move to more formal, type-safe languages that LLMs can work with better. You might be good at your language, but find the world shifts to a new one where everyone has to use LLMs to be effective.

  • sceptic123 16 hours ago

    Is there really that much skill involved in using LLMs effectively? Most of the criticisms I see end up being countered with something along the lines of "you're not using the right model". That implies that much of the skill people talk about is less important than picking the correct model to use (which tends to be the more expensive one).

    And in your LLM future, who will maintain all of the legacy systems that are written in languages the LLMs don't end up assimilating? It's reasonably safe to assume there will be plenty of work left there.

comrade1234 21 hours ago

It’s replaced Google search for me when trying to look up a specific problem. I’d actually rather use Google, because the results from AI are too long and wordy and give too many answers/options, but something has happened to Google that has made it useless. It started when they began putting Reddit at the top of the results, and it’s just getting worse over time.

  • paroneayea 20 hours ago

    I mean, isn't what happened to Google, leading people to use generative AI tools instead of searching, that web searches started filling with generative AI garbage, both in terms of web content and in terms of Google itself generating it?

shmichael 19 hours ago

Inventing the automobile has clearly made humanity less fit. Should we stop driving?

No. Going back to the stone age is not the solution. For most of us, commuting without a vehicle will be impractical. So will coding without AI, especially as AI improves.

To retain human competency, we will have to find a novel solution. For walking, we created concentrated practice time: gyms and outdoor runs. Some evolution of LeetCode, or even AI-guided training, might be the solution for preserving coding skill.

  • tasuki 14 hours ago

    > Inventing the automobile has clearly made humanity less fit. Should we stop driving?

    Yes, pretty please.

    I live in a town of 400 thousand; it's basically 10 kilometers across. Very easily walkable. Why does everyone drive? I'm about as fast on foot as the people stuck in morning traffic. I'm also enjoying my time more than they are. (And I'd enjoy it even more if there weren't so many cars around!)

    I don't understand people who drive to the gym to walk there. They could just walk to the gym and back, instead of going to the gym...

  • hyperjeff 8 hours ago

    > Inventing the automobile has clearly made humanity less fit. Should we stop driving?

    Perhaps an apt analogy. One could argue that the lure of convenience of automobiles led to one of the worst decisions of the 20th century: restructuring society around them, causing a self-perpetuating reliance feedback loop with many destructive side effects (physical, environmental and cultural). We should pause a bit and not rush headlong into AI without trying to think the path forward through. It's a decision that we will all make together as a culture. There are many current troubles with AI already, even if these systems made no mistakes at all.

greyman 20 hours ago

I also stopped using AI code editors, but for different reasons. I realized that, with advances like Gemini 2.5 Pro, AI will soon be able to implement whole features given the correct prompt. So the real skill is how to prompt the AI and maintain the overall architecture of the project. I wonder if IDEs like Cursor or Cline will even be needed in the future; as for myself, I stopped investing in learning them. I currently use 2.5 Pro + the Repo Prompt app, which prepares the prompt and then applies the result to the codebase semi-automatically.

xenodium 20 hours ago

> I chose to make using AI a manual action

I also find explicit/conscious LLM interaction a happy medium.

Building tooling into my editor to expedite this conscious usage made it much more enjoyable, since I didn't have to context-switch into another app (i.e. it can take my current text selection, or the error under the cursor, etc.): https://github.com/xenodium/chatgpt-shell?tab=readme-ov-file...

mrweasel 19 hours ago

My refusal to use AI/LLMs is a little more political. Until these companies start behaving more ethically and stop pushing the cost of doing business onto others, i.e. scraping data relentlessly and without respect for copyright and licensing, I don't feel like supporting them.

strofocles 21 hours ago

I think the conclusions and the advice for new programmers are valid. The comparison with FSD is not very relevant in my opinion (though it may be, I just don't see it), as it is a very different kind of skill. I would separate preference from "objective truth" and from your goals. I think as a technologist you need to keep using AI, if for no other reason than to keep up with the progress of the technology and have first-hand experience. For projects that I mainly do to learn, I disable autocompletion (Cursor) and type every single character by hand. For other projects, where I am more interested in the end result, I allow autocompletion, and most of the time I read and make sure I understand the generated code.

submeta 20 hours ago

That’s like saying “I don’t use C anymore because that makes me forget how to use machine language.” Humans build ever more complex systems by building higher abstractions. I don’t need to know how electricity works to switch the light on and off. Learn how to build differently using AI tools. You cannot stop the trend by sticking to old ways.

askonomm 21 hours ago

I stopped using any sort of AI once I started feeling my overall problem-solving ability disappear (as in, from memory, intuitively, the one thing that makes me valuable in the job market), and I don't want to end up as some glorified copy-paste vessel. Not to mention, being a copy-paste vessel is only fun for manager-type people, I figure. Reviewing code is the least fun part of my job; using AI to write code is essentially just spending all your days reviewing code. No thanks. I like to hammer my own nails, not watch some idiot who can't learn and has to be babysat at every turn try to hammer them.

  • imhoguy 5 hours ago

    Actually, LLMs help me to start coding at all; without them I would procrastinate and not even start. I don't enjoy project bootstrapping, and they help greatly with setup. Once a project gets a nice and quick development feedback loop I can drop LLMs, as they really slow me down at that point, especially with deeper or newer abstractions. I may have undiagnosed ADHD.

  • dotancohen 20 hours ago

    I completely agree with you.

    However, when I'm working on my own time, fixing those poorly-hammered nails can take 1/4 the time _most of the time_. If I see that the AI (artificial intelligence) is unable to handle the task, then I redelegate it to my own NS (natural stupidity) and bang it out slowly but properly.

    AI makes coding boring and tedious. But it happens faster so I can take on more projects, or have more time for other fun things that I enjoy more than actually coding.

    • askonomm 20 hours ago

      The interesting part here is how it differs for you and me. I work on quite complex systems most of the time, and I write things much faster myself. AI needs so much hand holding that it's equivalent to trying to sprint while carrying a backpack filled with bricks. And also, it changes the way I think, which is really scary to me. I very quickly stop thinking about what a good solution to a problem is, and instead start thinking about how to convince AI to come up with a good solution to a problem. I basically become a manipulator, not a creator. A Steve to a Wozniak. If I wanted that I'd have gone into sales or management instead.

      I also wonder where this relying-on-AI cycle ends. You need good problem-solving skills to be able to use AI to make somewhat working solutions. The more you depend on AI, the worse your problem-solving skills become. And then what? Since you're no longer capturing knowledge in your own brain, your moat as a problem solver is also slowly disappearing, no? And once the job market figures out you actually can't do anything without an AI, or the tech has moved on and what you knew before you started using AI has become irrelevant, who will even hire you anymore?

      I see this a lot with juniors entering the job market nowadays, unable to problem solve (many don't even know how to copy and paste text), and sure enough, they get fired a lot.

  • croes 21 hours ago

    For me, it robs me of the feeling of achievement when I finish a complex task with AI.

    It’s not me who figured it out, so it’s not my achievement.

  • fire_lake 21 hours ago

    Are you less productive though? Businesses ultimately only care about what gets shipped.

    • askonomm 20 hours ago

      I have found that unless your job is making simple CRUD apps and you really don't care about quality at all, yes, you will be less productive. The amount of tiny problems slipping through is immense, and it is known that code reviews are ineffective at catching all the bugs; on top of that, the AI has no idea of the general code style or conventions, no ability to learn if I tell it, and so forth. In the end I spend at least as much time massaging the AI into producing a somewhat working thing as I would spend coding it myself, and that at the cost of much, much worse quality. And I myself am also miserable, since I'm basically spending my entire day communicating with a kindergarten child and teaching it, over and over again, except that a child eventually learns and AI doesn't. That's horrible. I don't want to spend my days doing that.

      Companies I work at tend to value quality and correctness over productivity, as ultimately low-quality and incorrect software makes for unhappy clients whom we have to service out of our own pockets later on anyway, as part of the warranty we provide for our software, ending up costing us a lot more in money and reputation. But, like I said, I'm also not seeing any productivity gain, not unless you throw every good practice out of the window and pretend to be the only dev on the team, as the AI will willfully rewrite large chunks of the app and destroy your colleagues' work in the process.

      And for what? So that you can be a manager? What is your moat at the end of this? Will you advertise on your resume the ability to manipulate AI via text prompts as your skill? I suppose if you want to be a manager, maybe that's appealing. I don't. I want to actually make things, and be an active and intellectual part in making those things, not just a manager monitoring butts in (virtual) seats.

    • zwnow 20 hours ago

      People working for these kinds of businesses should get their bums to a different business asap. Would you trust a bridge that was built in a rush?

Surac 21 hours ago

Amen, brother. Yes, delegating all the work to AI will be a net loss.

lordofmoria 20 hours ago

What? Note for any juniors reading this: DO NOT TRY THIS AT HOME.

Does the author enjoy writing code primarily because they enjoy typing?

Do they not have the mental discipline to think and problem-solve whilst using the living heck out of an AI autocomplete?

What's the fun in manually typing out code that has literally been written, copied, copied again, and then re-copied so many times before that the LLM can predict it?

Isn't it more dangerous to not learn the new patterns / habits / research loops / safety checks that come with AI coding? Surely their future career success will depend on it, unless they are currently working in a very, very niche area that is guaranteed to last the rest of their career.

I'm sorry, this is a truly unnatural and absurd reaction to a very natural feeling of being out of our comfort zone because technology has advanced, which we are currently all feeling.

  • sceptic123 16 hours ago

    What are the safety checks you talk about here?

    • lordofmoria 11 hours ago

      Sorry, I probably phrased that poorly: when you’re coding with AI, you should get in the habit of spending more time checking for security mistakes. Not sanitizing inputs, not scoping things properly. Same mistakes a junior or mid-level would make, but unlike them, the AI will not doubt itself and highlight particular code it wrote, asking “is this right?”. So you need to develop the habit of being careful.
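
      For a concrete (made-up) TypeScript illustration of the sanitization point; the table, function names and `$1` placeholder style here are all hypothetical and vary by driver:

          // Hypothetical shape for a parameterized query, just to make the contrast concrete.
          type ParamQuery = { text: string; values: unknown[] };

          // The pattern AI output often falls into: user input spliced straight into the SQL string.
          function ordersForUserUnsafe(userId: string): string {
            return `SELECT * FROM orders WHERE user_id = '${userId}'`; // injectable
          }

          // What to turn it into during review: parameterized text plus values, handed to your driver.
          function ordersForUserSafe(userId: string): ParamQuery {
            return { text: "SELECT * FROM orders WHERE user_id = $1", values: [userId] };
          }

      Scoping is the same kind of check: make sure the id comes from the session, not from whatever the request happens to send.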

drivingmenuts 14 hours ago

I am, despite my distrust of code-generating LLMs, using one to write a piece of software that I have intended to write for a while, but haven't, thanks to laziness and inertia.

It's interesting, but I'm finding I'm spending a lot of time just figuring out how to describe something to the LLM. A bit less than writing it myself, really, except for the Qt-related stuff, which I've read up on and practiced a bit, but I wouldn't say I'm competent, yet. The generated code is OK and it works the first time, exactly the way I describe it to the LLM. I believe you can easily spot the problem there.

It will take a while to learn how to integrate AI into my workflow, and I can't say it's exactly an enjoyable experience, but I feel like it's something I need to do. I do feel it's a crutch, though.

I will say, there are two areas where it shines so far: writing tests and making me practice my code reviewing.

We'll see how it goes. It's going to take amazing results to ever get me to pay a monthly fee for something running remotely that seems like it should run locally.

DeathArrow 20 hours ago

I agree that you shouldn't over-rely on AI, and that you shouldn't rely on AI to write code when you are learning.

However, in most codebases I can't see that happening. Once the codebase is complex enough, AI will not work, will take more time to use than writing the code yourself, or will break the existing code.

The author himself said that AI isn't usable for anything more complex than a university project, yet he complains he lost his skills because he used AI.

As far as I see it, in the current stage of AI, it has limited usage.

You can use it to start a greenfield project or start a new feature that is independent of the rest of the codebase. It will help with most of the setup and boilerplate.

Past that, it will either not be able to do what you need, take more time, or break the code. So it pretty much doesn't let you modify large codebases or add logic. But you can still use it to generate a function, a method or a service, provided that function, method or service does not require much context and you don't need to modify the rest of the codebase to accommodate it.

I see AI as merely an accelerator, not solving problems by itself but helping you solve problems faster, sometimes. I think it is very similar to IntelliCode in Visual Studio and other editor tools, which don't write code but provide method autocompletion, small suggestions, syntax highlighting and formatting, and warn you when you make syntax errors.

Were I using a text editor instead of a good IDE, my speed of development would be slower.

ilrwbwrkhv 21 hours ago

I completely agree with this assessment. I also keep my editor separated from the AI stuff. The way I talk about it is that using AI completions creates distance from the code. Just as a sports car driver sits close to the car and as low to the ground as possible, I think of non-AI editing as being seated inside a low-to-the-ground sports car, while AI coding is floating above the car and controlling the steering with wires.

ohgr 21 hours ago

I never started. They only generate substandard garbage and I have higher standards than that.

  • sieszpak 21 hours ago

    I’m using Sonnet 3.7 now, previously used Sonnet 3.5, and I recommend it. If you know what you want to do, the work becomes pleasant and orders of magnitude faster. IMO, you need to learn how to code from scratch with LLMs. Remember: garbage in, garbage out. The Pareto principle also applies here: AI does 80% in the first shot.

    • zwnow 20 hours ago

      I like to code with minimal emissions, something that's never talked about with LLMs for some reason.

      • satvikpendem 20 hours ago

        I recall the actual emissions figures for AI usage are much lower than commonly touted in the media. And if one really cares, efficient local models exist, but I doubt the average Photoshop user thinks about their emissions in their usage of the application while simultaneously fear-mongering about AI art, which I've seen happen before.

        • zwnow 20 hours ago

          Still, the training of bigger and bigger models is expensive. Models that nobody asked for in the first place. I don't drive a car, and I also won't use LLMs. I live near dikes and I'd really prefer to be able to live here for a few more decades...

          • vaylian 19 hours ago

            It is correct that training a model is a lot more expensive than running that same model once. But when you have many people using AI, you also get a lot of emissions.

            The key challenge with greenhouse emissions is that the sources are so diverse and so distributed. We need to look at all the emitters, even if they are only in the range of 0.1-1.0% of global emissions.

          • satvikpendem 20 hours ago

            > I don't drive a car, and I also won't use LLMs.

            Okay, I think you are at the extreme end compared to most people, in the US at least (among many other countries as well), so it will be difficult to convince you of things most people might want or need. People also live in the woods in log cabins; they might not have asked for central heating, but for most people, that sure does help.

            • zwnow 20 hours ago

              I live in the country with probably the biggest car lobby. It just disgusts me that families now have 3-4 cars instead of 1. One is fine; it worked 20 years ago, so why doesn't it work today? People just became so entitled to getting anywhere any time they want, with zero regard for their carbon footprint. Maybe I'm biased because we actually learned about that in school, so I try to minimize mine... But yeah, I won't change shit by being sparing.

    • robin_reala 21 hours ago

      So you’re 100-1,000 times faster when you use LLMs to help?

      • frainfreeze 20 hours ago

        In things that you don't do often? Definitely. I bet every backend developer loves it for frontend tasks, at minimum. And things like makefiles and bash scripts become easy and enjoyable to do well. No need to leave the editor to look up errors, or at least to get some ideas and pointers on them. It all depends on how you use the tool; for us it has been a great help, and God mode for many things.

    • ohgr 20 hours ago

      I'm the person who has to clean up the "garbage out". Most people have no idea if it's garbage or not. In fact I'd argue that they have little ability to assess quality, having never been exposed to it. That's the only reason this technology has any interest: not because it's good, but because people aren't good enough to tell when it isn't.

      On that note, I'm sure Sonnet 3.7 does indeed approach the asymptote of not good enough better than Sonnet 3.5.

  • welder 21 hours ago

    Same, I always have Copilot autocomplete turned off. I do use the chat, but rarely.

    I was using it as docs but had to stop because it gives straight-up wrong answers while sounding so confident. It's just faster to go directly to the docs or use Dash.app.

  • handfuloflight 21 hours ago

    Indeed, you get at least what you put in.

Barrin92 20 hours ago

It just never made any sense to me. If we're talking real production codebases, not a one-off hobby script, working with these systems just didn't get me anything.

If you're in a codebase of tens or hundreds of thousands of LOC, these systems don't work well, which leaves only two options: you enter some never-ending chat in which you have to have conversations with these systems while they act like a dimwitted intern, or you just give up on being a serious software engineer and pray and commit things you don't understand.

If I have to understand everything anyway, I might as well just write it myself instead of talking to a bot that yaps without end; if I went with option two, I should be fired because I'd be unqualified for my job. You're basically just backloading your problems.

senko 21 hours ago

tl;dr for those who won't read past the headline: he still uses AI, but copy-pastes code to and from ChatGPT to have a greater feeling of control.

  • BergAndCo 10 hours ago

    And the reason for this is that he "forgot to drive after not driving for a few months". Which is weird, because I haven't driven for years but I can still instantly do it when I need to, or instantly ride a bike after years of not cycling.