mikeocool 3 hours ago

Reading the comments and posts about both Claude Code and Codex on Reddit (and often hacker news), it’s hard to imagine they’re not extremely astroturfed.

There seems to be a constant stream of not terribly interesting or unique "my Claude Code/Codex success story" blog posts that manage to solicit so many upvotes.

  • konishipolis 3 hours ago

    my suspicion is that much (or at least some) of the negative sentiment towards claude code is from folks who were on it early (when claude code was even more widely used than codex) and built intensive workflows around it. when anthropic tightened quotas to make them more equitable across users on each plan, those folks were much more likely to be impacted.

    this is obviously pure conjecture, but perhaps the OE folks had automated their multiple roles and now they need to be more involved.

    • ffsm8 18 minutes ago

      Eh, honestly I've had some health issues since the vibe coding craze started. Normally I'm one of the people who try things like that out - mostly because I don't actually have any hobbies beyond coding and generally find such things fun.

      As I got better around June/July, I finally found the energy to try it out. It was working incredibly well at the time. It was so fun (for me) that I basically kept playing with it every day after finishing work. So for roughly 1.5 months I spent basically every free minute each day on it, along with side explorations during work hours when I could get away with it.

      Then I had to take another business trip in mid-August, and when I finally came back in September it was unrecognizable - and from my perspective, it definitely hasn't recovered to how ultrathink+opus performed back then.

      You can definitely still use it, but you need to take a massively more hands-on approach.

      At least my opinion is not swayed by their reduced quota ... But to stay in line with the sentiment analysis this article is about: I haven't tried Codex up to this point either. Which I will, eventually.

  • eddiewithzato an hour ago

    Truthfully I find sonnet-4.5 better at Rust code than Codex (medium/high). Haven't tried anything else (like react/typescript) since I only use AI for issues/problems I don't understand.

  • fragmede 3 hours ago

    In life, it helps to be skeptical, so the real question is: where do I find real-life humans to ask about their experiences? And even then, they could still be paid actors. Though I've often wondered how that would work. Like, the marketing department staffed by hot people finds developers and then offers to Venmo them $500 to write something nice online about the product? It's a big Internet, and there are a lot of people on Upwork, so I'm not saying it isn't happening, but I've never gotten an email asking me to write something nice about Claude Code in exchange for a couple of bucks.

    • 100721 an hour ago

      One thing worth taking into account is the practice of finding people who actually like the product and then paying them to write an honest review. I find this much closer to ethical than paying people who may never have used the product exclusively for positive reviews, but it has a similar net effect of distorting the sentiment by amplifying a subset of opinions - so still not ideal, but at least it's rooted in honesty.

      If you haven’t been vocal about your support of products in general, you wouldn’t show up on the radar for these “opportunities.”

  • nickstinemates 3 hours ago

    Meanwhile I am talking about unique shit with Claude Code trying to draft on that sentiment for little to no traction with them. We've built the best way to automate and manage production infrastructure using these models and no one gives a shit. It's so weird.

    • 100721 an hour ago

      > Meanwhile I am talking about unique shit with Claude Code trying to draft on that sentiment for little to no traction with them.

      What does this mean? What do you mean unique shit? What do you mean when you say you’re trying to draft on the sentiment? What is “them” referring to?

      Genuinely. I’m not being (deliberately) obtuse, just trying to follow. Thanks

    • typpilol 2 hours ago

      Best according to?

extr 4 hours ago

Notice how pricing is the top discussion theme. People love free shit and it's hard to deny Codex usage limits are more generous. My 2c as someone who uses both tools pretty consistently in an enterprise context:

- Codex-medium is better if you have a well-articulated plan you "merely" need to execute on, need help finding a bug, have some specific complex piece of logic to tweak, or truly need a ton of long-range context to reason about an issue. It's great and the usage limits are very generous!

- Sonnet 4.5 is better for everything else. That means for me: non-coding CLI ops, git ops, writing code with it as a pair programmer, OOD tasks, big new chunks of functionality that are highly conceptual, architectural discussion, etc. I generally approve every edit and often interrupt it. The fast iteration and feedback is key.

I probably use CC 80% of the time with Codex the other 20%. My company pays for CC and I don't even look at the cost. Most of my coworkers use CC over Codex. We do find the Codex PR reviewer to be the best of any tool out there.

Codex also gets a lot of play on Twitter because many of the most prolific voices there are solo devs who are "building in public". A greenfield, solo project is the ideal (only?) use case for running 5 agents in parallel or whatever. Codex is probably amazing at that. But it's not practical for building in enterprise contexts IMO.

  • loveparade 40 minutes ago

    Interesting, my experience has been the opposite. I've been running Codex and Sonnet 4.5 side by side the past few weeks, and Codex gives me better results 90% of the time, pretty much across all tasks. Where Claude really shines is that it's much faster than codex. So if I know exactly what I want or if it's a simpler task I feel comfortable giving it to Claude because I don't want to wait for Codex to work through it. Claude cli is also a much better user experience than codex cli. But Codex gets complex things right more consistently.

  • qsort 4 hours ago

    They are similar enough that using one over the other is at most a small mistake. I prefer Claude models (perhaps I'm more used to them?) but Codex is also very good.

    • extr 4 hours ago

      Totally agree. A lot of it is simply personal preference at this point.

  • another_twist 2 hours ago

    > better for everything else. That means for me: non-coding CLI ops, git ops, writing code with it as a pair programmer, OOD tasks, big new chunks of functionality that are highly conceptual, architectural discussion..

    I would argue this is the wrong way of using these tools. Writing out a defined plan in plain English and then having codex/claude implement it is better, since that way we understand the intention. You can always have codex come up with an abstract plan first, iterate on it, and then implement. Kind of like how we would implement software in real life.

  • quintu5 4 hours ago

    For larger tasks that I know are parallelizable, I just tell Claude to figure out which steps can be parallelized and then have it go nuts with sub-agents. I’ve had pretty good success with that.

    • DanielAtDev 2 hours ago

      I need to try this because I've never deliberately told it to, but I've had it do it on its own before. Now I'm wondering if that project had instructions somewhere about that, which could explain why it happened.

the_duke 4 hours ago

In my experience, gpt5-codex (medium) with codex-cli is notably better than Sonnet 4.5 with claude-code. (Note: I've never tried Opus.)

It is slower, but the results are much more often correct and it doesn't rush into half-baked solutions/dumb approaches as eagerly.

I'd much rather wait 5 minutes than have to clean up manually or try to coax a model into doing things differently.

I also wouldn't be surprised if the slowness was partially due to OpenAI being quite resource constrained. They are repeatedly complaining about not having sufficient compute.

Bigger picture: I think all the AI coding environments are incredibly immature. There are many improvements to be unlocked.

  • ripped_britches 4 hours ago

    That’s falsifiable quite easily by measuring tokens per second.

    Rather, the real reason codex takes longer is that it does more work to read more context.

    IMO the results are much better with codex, not even close
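
    A rough sketch of what that measurement could look like, assuming the official openai Python client and tiktoken; the model name, prompt, and encoding below are placeholders/approximations, not the exact setup either tool uses:

        import time
        import tiktoken
        from openai import OpenAI

        client = OpenAI()
        enc = tiktoken.get_encoding("cl100k_base")  # rough proxy for the model's real tokenizer

        parts = []
        start = time.monotonic()
        stream = client.chat.completions.create(
            model="gpt-5-codex",  # placeholder model name
            messages=[{"role": "user", "content": "Refactor this function ..."}],  # placeholder prompt
            stream=True,
        )
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                parts.append(chunk.choices[0].delta.content)
        elapsed = time.monotonic() - start

        tokens = len(enc.encode("".join(parts)))
        print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")

    If tokens/second stays roughly flat while wall-clock time grows, the extra latency is coming from reading more context and making more tool calls, not from slower generation.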

  • fragmede 3 hours ago

    Where codex falls short is in background processing - both running a daemon in the background and using its output as context while simultaneously staying interactive for the user - and in subagents, i.e., doing multiple things in parallel. Presumably codex will catch up, but for now, that puts Claude Code ahead for me.

    As far as which one is better, it's highly dependent on what we're each doing, but I will say that I have this one project where bare "make" won't work and a script needs to be run instead. I have instructions to call that script in multiple .md files, and codex is able to call the script instead of make, but it keeps forgetting that and tries to run make, which fails and gets it confused. (Claude code running on macOS host but build on Linux vm.) I could work around it, but that really takes the "shiny" factor off of codex+GPT-5 for me.

    • another_twist 2 hours ago

      Honestly I think the simplicity of codex not doing anything fancy-pants like background coding is what gives it an edge. I am happy to wait a while and even to repeat context to it (it helps me remember stuff anyway) if it types out the right thing.

visiondude 5 hours ago

Ah bots analyzing bots. Seems openai has a larger bot army than Anthropic rn

  • candiddevmike 3 hours ago

    Need to go to conferences and actually talk to people to understand what Real People (TM) think of GenAI.

  • aaronSong 4 hours ago

    openai's crawling is the best. just following anthropic's way

another_twist 2 hours ago

Regular codex user. It's my typing assistant. Allows me to be the ideas guy when writing software. Codex makes plenty of mistakes when generating large blocks of code, but it's easier to clean up and consolidate with a refactoring pass once the typing has been done.

_heimdall 3 hours ago

It's interesting to me that Codex has such high sentiment. I'm definitely an outlier on the more principled end of the spectrum, but I refuse to use OpenAI products.

I take issue with the AI industry in general and its hand-wavy approach to risk, but OpenAI really is on another level in my book. While I don't trust the industry's approach to AI development, with OpenAI I don't trust the leadership's intentions.

  • gorjusborg 2 hours ago

    > It's interesting to me that Codex has such high sentiment.

    Me too, so much so that I doubt this is legitimate. This blog post is the only place I've seen people 'raving' about codex.

    Claude Code is the current standard all others are measured against.

kazinator 4 hours ago

Sucks-Rules-o-Meter, but 2025.