vulk 11 hours ago

On a side note, Alex's books are a breath of fresh air for someone who is learning. They are always updated to the latest version of Go, and if there is something new, the old code base is updated and the new concepts are introduced, while you are notified and sent the new version of the book.

I've never seen that before. All the other learning sources I have are just abandoned; often something breaks and you have to spend a good amount of time figuring out how to fix it, which can just discourage you from going on.

Kudos to Alex, that is how it should be done.

makkes 11 hours ago

The code he provides doesn't compile and needs to be changed like so:

  --- main_before.go      2025-10-15 09:56:16.467115934 +0200
  +++ main.go     2025-10-15 09:52:14.798134654 +0200
  @@ -13,8 +13,10 @@
  
          slog.Info("starting server on :4000")
  
  +       csrfProt := http.NewCrossOriginProtection()
  +
          // Wrap the mux with the http.NewCrossOriginProtection middleware.
  -       err := http.ListenAndServe(":4000", http.NewCrossOriginProtection(mux))
  +       err := http.ListenAndServe(":4000", csrfProt.Handler(mux))
          if err != nil {
                  slog.Error(err.Error())
                  os.Exit(1)
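
For reference, the complete corrected program looks roughly like this (a sketch assuming Go 1.25+, with a trivial mux standing in for the handlers in the post):

  package main

  import (
          "log/slog"
          "net/http"
          "os"
  )

  func main() {
          // Trivial mux as a placeholder for the post's handlers.
          mux := http.NewServeMux()
          mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                  w.Write([]byte("OK"))
          })

          slog.Info("starting server on :4000")

          // NewCrossOriginProtection returns a *http.CrossOriginProtection;
          // its Handler method wraps the mux and rejects cross-origin
          // unsafe requests based on Sec-Fetch-Site/Origin.
          csrfProt := http.NewCrossOriginProtection()

          err := http.ListenAndServe(":4000", csrfProt.Handler(mux))
          if err != nil {
                  slog.Error(err.Error())
                  os.Exit(1)
          }
  }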

  • alex_edwards an hour ago

    Thanks for pointing this out, what a facepalm. I've fixed it in the post.

nchmy 9 hours ago

I deeply appreciate this thorough review of CSRF protection via headers. I've been looking into the topic to see if I can get rid of csrf tokens, and it seems like I can now - if I ignore/don't care about the 5% of browsers that don't support the required headers.

It makes me wonder though - most browser APIs top out around 95% coverage on caniuse.com. What are these browsers/who are these people...? The modern web is very capable and can greatly simplify our efforts if we ignore the outliers. I'm inclined to do so, but I'm also open to counterarguments.

  • johannes1234321 8 hours ago

    Those 5% are probably a wild collection of devices: TVs with embedded browsers, old phones, and other computers elsewhere.

    As an example: my late grandfather, 100 years old, kept records of his stamp collection in Excel. He used the computer for Wikipedia as well, but we didn't upgrade it as he was comfortable; upgrading to a later Windows to run a newer browser would have been too much of a change and would rather have made him stop doing what brought him fun. The router etc. blocked the worst places, and frequent backups allowed restores, so the actual risk was low.

    Anecdote aside: there are tons of those machines all over.

    And then another big one: bots claiming to be something which they aren't.

    • veeti 8 hours ago

      There are a lot of people happily browsing away on unsupported Apple devices that don't get any more Safari updates. Lots of strange WebKit edge cases to be found that don't exist in any other browser.

      • nchmy 7 hours ago

        Apparently less than 1.5% of global internet users are on versions of safari that don't support Sec-Fetch-Site.

  • kijin 8 hours ago

    If a browser is too old to send either the Sec-Fetch-Site header or the Origin header, it will probably ignore Referrer-Policy and always set the Referer header, which contains the origin.

    So I wonder why the author didn't consider falling back to the Referer header, instead of relying on an unrelated feature like TLS 1.3. Checking the referrer on dangerous (POST) requests was indeed considered one way to block CSRF back in the day. Assuming all your pages are on the same https origin, is there an edge case that affects Referer but not the Origin header?

    • nchmy 6 hours ago

      Caniuse.com shows both the Origin and Referer headers have 96.3% support, with Sec-Fetch-Site not far behind at 94.2%. So it's probably a moot point.

      I've read in various places, though, that Referer has all sorts of issues and gotchas, such that it isn't really a reliable way of doing this.

      https://security.stackexchange.com/questions/158045/is-check...

      • kijin 6 hours ago

        Caniuse.com looks wrong in this case. It marks old versions of Internet Explorer and Android browser as either unknown or not supporting Referer, when in fact they only lacked support for hiding the Referer in potentially insecure situations. That's the point I was trying to make above. Even old browsers send it, even when they shouldn't. This behavior is uncontrollable, even by potential attackers.

        A missing Referer header probably doesn't mean much one way or another. But you can at least block requests with Referer pointing to URLs outside of your origin. This fallback would seem preferable to the fail-open policy described in the article (request always allowed if neither Sec-Fetch-Site nor Origin headers are present).
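
        To make that concrete, a rough sketch of such a fallback middleware in Go (the function name and the allowedOrigin parameter, e.g. "https://example.com", are mine, not from the article; uses only net/http and net/url):

          // Unsafe requests that carry neither Sec-Fetch-Site nor Origin must
          // present a Referer from our own origin, instead of being allowed
          // through. (An empty Referer is also blocked here; relaxing that is
          // a policy choice.)
          func refererFallback(allowedOrigin string, next http.Handler) http.Handler {
                  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                          switch r.Method {
                          case "GET", "HEAD", "OPTIONS":
                                  next.ServeHTTP(w, r) // safe methods pass through
                                  return
                          }
                          modern := r.Header.Get("Sec-Fetch-Site") != "" ||
                                  r.Header.Get("Origin") != ""
                          if modern {
                                  next.ServeHTTP(w, r) // the normal check applies
                                  return
                          }
                          ref, err := url.Parse(r.Referer())
                          if err != nil || ref.Scheme+"://"+ref.Host != allowedOrigin {
                                  http.Error(w, "request blocked", http.StatusForbidden)
                                  return
                          }
                          next.ServeHTTP(w, r)
                  })
          }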

  • anal_reactor 9 hours ago

    From a business perspective it makes a lot of sense to just drop that bottom 5%. Actually, many businesses support Chrome only; they don't even support Firefox.

    The technological counterargument, though, is that you should allow people to tinker and do weird shit. Once upon a time tech wasn't about maximizing stock value, it was about getting a Russian game crack to work, and making Lara's boobs bigger. Allowing weird shit is a way to respect the roots of modern tech, and allow hobbyists to tinker.

    • nchmy 9 hours ago

      Thanks for confirming. I don't know that it has to be framed as a "business perspective" though. I'm a solo dev for a non-profit project, so ignoring the 5% is just a matter of pragmatism.

      I most definitely do not care about tinkerers, and in fact would generally classify them as something akin to attackers. I just want to allow as many people to use the app as possible, while keeping things simple and secure.

    • wongarsu 8 hours ago

      For a lot of businesses, 5% of revenue is a lot more than the cost of supporting older browsers.

      What shifts the discussion a bit is that many of the bottom 5% aren't lost customers. If your website doesn't work on my minority browser, smart TV, or PS Vita, I might be willing to just try it in Chrome instead.

    • cryptonym 9 hours ago

      Empathy and accessibility. Does it make sense to have a ramp in front of your shop for < 5% of customers?

      • nchmy 8 hours ago

        My question, though, is who are the 5% of users in this case who are using some arcane browsers? Surely that's largely a choice; physical disabilities are not.

        It doesn't seem unreasonable to say to those folks "we're evidently not using the same web".

        • cryptonym 8 hours ago

          Grandma or poor folk with their old device may not be "largely a choice"

          • nchmy 7 hours ago

            Don't major browsers essentially auto-update? And to the extent that a device is so old that it can't support newer versions, surely it must be VERY old and is perhaps somewhat likely to be replaced sooner rather than later.

            I think I'll probably carry on with not supporting browsers that don't have Sec-Fetch-Site. The alternative, CSRF tokens, actually causes me immense issues (they make caching very difficult, if not impossible).

            (and I say all of this as someone who is specifically building something for the poorest folks. I'm extremely aware of and empathetic to their situation).

          • todotask2 7 hours ago

            It still depends on the target audience. Some websites or apps are single-page applications (SPAs); can older devices handle that? For example, my mum’s Android phone was too slow to even load a page.

            Secondly, users should upgrade their devices to stay safe online, since vulnerable people are often scammed or tricked into downloading apps that contain malware.

            So we should not cater to outdated browsers when they could pose a risk.

            • nchmy 6 hours ago

              Yeah, I'm very amenable to this take. WordPress, for example, is infamous for its extreme backwards compatibility. But that often results in many sites being on ancient versions of PHP (and surely other tech as well). I'm of the opinion that they should all be running currently supported versions of PHP and everything else. If you can't use my plugin because your server is shit, so be it.

        • littlestymaar 8 hours ago

          It's not comparable to a physical disability, but gatekeeping the people who just don't want to be tracked all day by Google doesn't sound right to me, though.

          • nchmy 7 hours ago

            There are plenty of other Chromium browsers - Vivaldi seems to do a good job in this regard.

            Also, Firefox exists, though they don't seem to care about privacy much anymore either.

            And, of course, Safari, which is terrible in most regards.

            • littlestymaar 7 hours ago

              > There are plenty of other Chromium browsers - Vivaldi seems to do a good job in this regard.

              Being Chromium derivatives, they don't really have a say in what's included in “their” browser, though.

              > Also, Firefox exists

              Well, you disregarded it as an “arcane browser” right above.

              • nchmy 6 hours ago

                In what way did I call Firefox arcane? Firefox generally has better support for web features than Safari.

                • littlestymaar 6 hours ago

                  It has less than 5% market share left…

ale 13 hours ago

Are CSRF attacks that common nowadays, though? Even if your app is used by the 5% of browsers that don’t set the Origin header, the chances of that being exploited are even more minuscule. Besides, most webdevs reach for token-based auth libraries before even knowing how to set a cookie header.

  • littlecranky67 10 hours ago

    Curious about that too. In a modern web app I always set HttpOnly cookies to prevent them being exposed to JavaScript, and SameSite=strict. Especially the latter should prevent CSRF.

    • jeremyscanvic 9 hours ago

      Erratum: what I'm saying here only applies to cookies with the attribute SameSite=None, so it's irrelevant here; see the comments below.

      (Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get a hold on user sessions (often mediated through cookies). It doesn't really help against CSRF attacks though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along and if no CSRF protection is there (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question but hope it's useful information nonetheless!

      • RagingCactus 9 hours ago

        The SameSite cookie flag is effective against CSRF when you put it on your session cookie, it's one of its main use cases. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... for more information.

        SameSite=Lax (the default Chrome applies to cookies that don't set an explicit SameSite attribute) will protect you against POST-based CSRF.

        SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist as GET is not a safe method that should be allowed to trigger state changes, but in practice some applications do it). It does, however, also make it so users clicking a link to your page might not be logged in once they arrive unless you implement other measures.

        In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
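
        In Go terms, issuing such a session cookie might look roughly like this (a sketch; the cookie name and token value are illustrative, not anything from the article):

          // Issue a session cookie that browsers withhold from cross-site
          // POSTs (SameSite=Lax) and hide from JavaScript (HttpOnly).
          func setSessionCookie(w http.ResponseWriter, token string) {
                  http.SetCookie(w, &http.Cookie{
                          Name:     "session",
                          Value:    token,
                          Path:     "/",
                          HttpOnly: true,
                          Secure:   true,
                          SameSite: http.SameSiteLaxMode,
                  })
          }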

        • hmry 9 hours ago

          This page has some more information about the drawbacks/weaknesses of SameSite, worth a read: https://developer.mozilla.org/en-US/docs/Web/Security/Attack...

          You usually need another method as well.

          • RagingCactus 8 hours ago

            Yes, you're definitely right that there are edge cases and I was simplifying a bit. Notably, it's called SameSite, NOT SameOrigin. Depending on your application that might matter a lot.

            In practice, SameSite=Lax is already very effective in preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as the Sec header, a custom "Protect-Me-From-Csrf: true" header, or if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
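
            A rough sketch of the custom-header idea in Go (the header name is taken from the example above; everything else is illustrative): plain HTML forms can't set custom headers, and cross-origin scripts that try will trigger a CORS preflight, so requiring one on unsafe requests blocks simple form-based CSRF:

              func requireCSRFHeader(next http.Handler) http.Handler {
                      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                              switch r.Method {
                              case "GET", "HEAD", "OPTIONS":
                                      next.ServeHTTP(w, r)
                                      return
                              }
                              // Forms can't add this header; cross-origin fetch/XHR
                              // attempts to add it are stopped by CORS preflight.
                              if r.Header.Get("Protect-Me-From-Csrf") != "true" {
                                      http.Error(w, "missing CSRF header", http.StatusForbidden)
                                      return
                              }
                              next.ServeHTTP(w, r)
                      })
              }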

        • jeremyscanvic 9 hours ago

          Thanks for correcting me - I see my web sec knowledge is getting rusty!

  • zwnow 12 hours ago

    Also, can't you just spoof the Origin header?

    • masklinn 10 hours ago

      A CSRF is an attack against a logged-in user, so it has to be mediated via their browser.

      If you can spoof the origin header of a second party when they navigate to a third party, a CSRF is a complete waste of whatever vulnerability you have found.

    • kevinyew 11 hours ago

      You can, if you want to deliberately CSRF yourself for some reason - it's there to protect you, but spoofing it doesn't give you any special access you wouldn't otherwise have.

      The point is that arbitrary users' browsers out in the world won't spoof the Origin header, which is protecting them from CSRF attacks.

teiferer 13 hours ago

CSRF: Cross-Site Request Forgery

From https://developer.mozilla.org/en-US/docs/Web/Security/Attack...

In a cross-site request forgery (CSRF) attack, an attacker tricks the user or the browser into making an HTTP request to the target site from a malicious site. The request includes the user's credentials and causes the server to carry out some harmful action, thinking that the user intended it.

cientifico 12 hours ago

Killing all the fun.

Remember when you could trick a colleague into posting on Twitter, Facebook... just by sending a link?

CSRF fixes are great for security - but they've definitely made some of the internet's harmless mischief more boring.

nmadden 12 hours ago

Enforcing TLS 1.3 seems like a roundabout way to enforce this. Why not simply block requests that don’t have an Origin/Sec-Fetch-Site header?

  • nchmy 9 hours ago

    I don't understand - the article is literally about Origin/Sec-Fetch-Site.

    • nmadden 9 hours ago

      The article has a whole section about requiring those headers by forcing the use of TLS 1.3 — the theory being that browsers modern enough to support 1.3 are also modern enough to support the headers. But why not just enforce the headers?
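
      (For what it's worth, the enforcement side is a one-line tls.Config setting in Go; a minimal sketch, assuming Go 1.25+ and existing cert/key files:)

        package main

        import (
                "crypto/tls"
                "log"
                "net/http"
        )

        func main() {
                mux := http.NewServeMux() // placeholder for real handlers

                srv := &http.Server{
                        Addr:    ":4000",
                        Handler: http.NewCrossOriginProtection().Handler(mux),
                        TLSConfig: &tls.Config{
                                // Refuse clients that can't speak TLS 1.3 at all.
                                MinVersion: tls.VersionTLS13,
                        },
                }

                // cert.pem and key.pem are assumed to exist.
                log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
        }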

      • kokada 9 hours ago

        If your case is just supporting browsers and not things like curl, this seems fine. But when the headers are not set, the CSRF protections are "disabled" precisely to support that case, where you may want to make the request using something like curl.

        • nmadden 27 minutes ago

          I guess. But it would only impact you if you’re using cookies with curl (I assume the middleware is only applied to requests with cookies?) — and it seems pretty easy to add a -H ‘sec-fetch-site: none’ in that case.
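
          Something like this, presumably (a sketch of enforcing header presence rather than failing open; not the stdlib behaviour, and it would sit in front of the normal cross-origin check):

            // Reject unsafe requests that carry neither Sec-Fetch-Site nor
            // Origin, instead of letting them through.
            func requireFetchMetadata(next http.Handler) http.Handler {
                    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                            switch r.Method {
                            case "GET", "HEAD", "OPTIONS":
                                    next.ServeHTTP(w, r)
                                    return
                            }
                            noMeta := r.Header.Get("Sec-Fetch-Site") == "" &&
                                    r.Header.Get("Origin") == ""
                            if noMeta {
                                    http.Error(w, "missing fetch metadata", http.StatusForbidden)
                                    return
                            }
                            // Presence only; the normal check still validates the values.
                            next.ServeHTTP(w, r)
                    })
            }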

      • nchmy 8 hours ago

        I see what you mean. You were asking why TLS in addition to Sec-Fetch-Site. The sibling comment seems to have addressed it.

hmcamp 7 hours ago

Would have liked to have seen a complete working example at the end that addresses all the concerns.

NewJazz 13 hours ago

Do most languages have good support for TLS 1.3 as the client?

Zababa 11 hours ago

"cop" as an abbreviation for "cross-origin protection" is delightful

dorianmariecom 10 hours ago

Rails solved this a while ago ;)

  • nchmy 9 hours ago

    I don't use rails. How did they solve it?

    • brokegrammer 8 hours ago

      >Have we finally reached the point where CSRF attacks can be prevented without relying on a token-based check (like double-submit cookies)?

      Rails uses a token-based check, and this article demonstrates a token-less approach.

      Rails didn't solve CSRF, btw; the technique was invented long before Rails came to life.

      • nchmy 8 hours ago

        Yes, I assumed this is what they were ignorantly pointing towards.

        Indeed, CSRF tokens are an ancient concept. WordPress, for example, introduced nonces a couple of years before Rails. Though it does appear that Rails might have been the first to introduce CSRF protection in a seemingly automated way.

        • brokegrammer 7 hours ago

          True, it does seem like Rails introduced configuration-free, token-based CSRF protection, which "solved" CSRF for traditional server-rendered apps.

          I believe the new technique is easier to use for SPA architectures because you no longer need to extract the token from a cookie before adding it to request headers.

tankenmate 12 hours ago

I would never rely on headers such as "Sec-Fetch-Site"; having security rely on client-generated (correct) responses is just poor security modelling (don't trust the client). I'll stick to time-bounded HMAC cookies; then you're not relying on the client properly implementing any headers, and it will work with any browser that supports cookies.

And TLS v1.3 should be a requirement; no HTTPS, no session, no auth, no form (or API), no cookie. And HSTS, again, should be the default, but with encrypted connections and time-bounded CSRF cookies the threat window is very small.
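
For what it's worth, a sketch of the kind of time-bounded HMAC token described above (names are illustrative; key management, session binding, and delivery as a cookie are left out; imports: crypto/hmac, crypto/sha256, encoding/hex, strconv, strings, time):

  // makeToken binds a session ID to an issue time and signs both.
  func makeToken(key []byte, sessionID string, now time.Time) string {
          msg := sessionID + "|" + strconv.FormatInt(now.Unix(), 10)
          mac := hmac.New(sha256.New, key)
          mac.Write([]byte(msg))
          return msg + "|" + hex.EncodeToString(mac.Sum(nil))
  }

  // verifyToken rejects tokens that are malformed, expired, or forged.
  func verifyToken(key []byte, token string, maxAge time.Duration) bool {
          parts := strings.SplitN(token, "|", 3)
          if len(parts) != 3 {
                  return false
          }
          ts, err := strconv.ParseInt(parts[1], 10, 64)
          if err != nil || time.Since(time.Unix(ts, 0)) > maxAge {
                  return false
          }
          mac := hmac.New(sha256.New, key)
          mac.Write([]byte(parts[0] + "|" + parts[1]))
          want := hex.EncodeToString(mac.Sum(nil))
          return hmac.Equal([]byte(want), []byte(parts[2]))
  }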

  • hmry 11 hours ago

    CSRF is about preventing other websites from making requests to your page using the credentials (including cookies) stored in the browser. Cookies can't prevent CSRF; in fact, they are the problem to be solved.

    • tankenmate 9 hours ago

      Somewhere, somehow, at some point, auth needs to be done. And this is done with cookies (be it a CSRF token, auth token, JWT, etc.). There has to be some form of mechanism for a client to prove that a) it is the client it claims to be, and therefore b) it has permission to request what it needs from the server.

      And, the server shouldn't trust the client "trust me bro" style.

      So, at the end of the day it doesn't matter whether it is a "rose by another name"; i.e. it doesn't matter whether you call it a CSRF token, auth token, JWT, or whatever, it still needs to satisfy the following: a) the communication is secure (preferably encrypted), b) the server can recognise the token when it sees it (headers, of which cookies are one type, etc.), c) the server doesn't need to trust the client (it's easiest if the server creates the token, but it could also be a trusted OOB protocol like TOTP), and d) it identifies a given role (again, it's easiest if it identifies a unique client, like a user or similar).

      So a name is just a name, but there needs to be a cookie or a cryptographically secure protocol to ensure that an untrusted client is who it says it is. Cookies are typically easier than crypto secure protocols. Frankly it doesn't really matter what you call it, what matters is that it works and is secure.

      • RagingCactus 9 hours ago

        I work as a pentester. CSRF is not a problem of the user proving their identity, but instead a problem of the browser as a confused deputy. CSRF makes it so the browser proves the identity of the user to the application server without the user's consent.

        You do need a rigid authentication and authorization scheme just as you described. However, this is completely orthogonal to CSRF issues. Some authentication schemes (such as bearer tokens in the authorization header) are not susceptible to CSRF, some are (such as cookies). The reason for that is just how they are implemented in browsers.

        I don't mean to be rude, but I urge you to follow the recommendation of the other commenters and read up on what CSRF is and why it is not the same issue as authentication in general.

        Clearly knowledgeable people not knowing about the intricacies of (web) security is actually an issue that comes up a lot in my pentesting when I try to explain issues to customers or their developers. While they often know a lot about programming or technology, they frequently don't know enough about (web) security to conceptualize the attack vector, even after we explain it. Web security is a little special because of lots of little details in browser behavior. You truly need to engage your suspension of disbelief sometimes and just accept how things are to navigate that space. And on top of that, things tend to change a lot over the years.

        • tankenmate 6 hours ago

          Of course CSRF is a form of authorisation: "should I trust this request? Is the client authorised to make this request? I.e. can the client prove that it should be trusted for this request?" It may not be "logging in" in the classic sense of "this user needs to be logged into our user system before I'll accept a form submit request", but it is still a "can I trust this request in order to process it?" model. You can wrap it up in whatever names and/or mechanism you want, it's still a trust issue (web or not, form or not, cookie or not, hidden field or not, header or not).

          Servers should not blindly trust clients (and that includes headers passed by a browser claiming they came from such and such a server / page / etc.); clients must prove they are trustworthy. And if you're smart, your system should be set up such that attacking it costs more than complying with it.

          And yes, I have worked both red team and blue team.

          • dagss 4 hours ago

            You say you should "never trust the client". Well, trust has to be established somehow, right? Otherwise you simply cannot allow any actions at all (air gap).

            Then, CSRF protection prevents a class of attacks directed against a client you have actually decided to trust, attacks that fool the client into doing bad stuff.

            All the things you say about auth: Already done, already checked. CSRF is the next step, protecting against clients you have decided to trust.

            You could say that someone makes a CSRF attack that manages to change these headers of an unwitting client, but at that point absolutely all bets are off; you can invent hypothetical attacks against all current CSRF protection mechanisms too, which are all based on data the client sends.

            (If HN comments cannot convince you why you are wrong I encourage you to take the thread to ChatGPT or similar as a neutral judge of sorts and ask it why you may be wrong here.)

            • tankenmate 4 hours ago

              • dagss 3 hours ago

                Yes, this is documenting one particular way of doing CSRF protection. A specific implementation.

                The OP is documenting another implementation to protect against CSRF, one which is unsuitable for many since it fails to protect the 5% of older browsers, but it is still an interesting look at the road ahead for CSRF, and in some years perhaps everyone will change how this is done.

                And you say that isn't OK, but in my opinion you have not properly argued why not.

        • seethishat 8 hours ago

          It's very complicated and ever evolving. It takes dedicated web app pentesters like you to keep up with it... back in the day, we were all 'generalists'... we knew a little bit about everything, but those days are gone. It's too much and too complicated now to do that.

      • hmry 9 hours ago

        I don't understand what you are getting at. CSRF is not another name for auth. You always need auth, CSRF is a separate problem.

        When the browser sends a request to your server, it includes all the cookies for your domain. Even if that request is coming from a <form> or <img> tag on a different website you don't control. A malicious website could create a form element that sends a request to yourdomain.com/api/delete-my-account and the browser would send along the auth cookie for yourdomain.com.

        A cookie only proves that the browser is authorized to act on behalf of the user, not that the request came from your website. That's why you need some non-cookie way to prove the request came from your origin. That's what Sec-Fetch-Site is.

      • nchmy 9 hours ago

        I don't think this is accurate. As your parent comment said, CSRF defenses (tokens, Origin/Sec-Fetch-Site) serve a different purpose from an auth token/JWT. The latter says that your browser is logged in as a user. The former says "this request actually came from a genuine action on your page, rather than pwned.com disguising a link to site.com/delete-account".

        You're conflating the two types of auth/defense.

        • tankenmate 5 hours ago

          You're misunderstanding my point: Sec-Fetch-Site is not a replacement for CSRF tokens, however they are carried. They can be cookies (classic CSRF tokens, auth tokens, JWTs; all of these can be made to work for the client to prove to the server that it is allowed to submit a form, and came from the "right" form, some obviously more easily than others), a header (such as X-CSRF-Token in Ruby on Rails, Laravel, and Django; X-XSRF-Token in AngularJS; CSRF-Token in Express.js's csurf middleware; X-CSRFToken in Django), a TOTP code, etc. The Sec-Fetch-Site header is a defence-in-depth mechanism, not a replacement for CSRF protection, however that is achieved (classic cookie mechanism or otherwise).

  • tankenmate 10 hours ago

    All the downvotes, but not a single comment as to why. The "Sec-Fetch-Site" header primarily protects the browser against JavaScript hijacking, but does little to nothing to protect the server.

    This is probably apocryphal, but when Willie Sutton was asked why he kept robbing banks, he quipped, "that's where the money is". Sure, browser hacking occurs, but it's a means to an end, because the server is where the juicy stuff is.

    So headers that can't be accessed by JavaScript are way down the food chain and only provide lesser defence in depth if you have proper CSRF tokens (which you should have anyway, to protect the far more valuable server resources which are the primary target).

    • nchmy 9 hours ago

      I must be missing something. What does JavaScript have to do with this? My understanding is that CSRF is about people getting tricked into clicking a link that makes, for example, a POST request to another site/origin that makes an undesired mutation to their account. If the site/origin has some sort of auth (e.g. a session cookie), it'll get sent along with the request. If the auth cookie doesn't exist (the user isn't logged in/isn't a user), the request will fail as well.

      • tankenmate 9 hours ago

        There's server security and there's client security. From what I've seen in these comments, people are focused on client security and are either a) ignoring server security, or b) don't understand server security.

        But server security is the primary security, because it's the one with the resources (in the money analogy, it's the bank).

        So yes, we do want to secure the client, but if the attacker has enough control of your computer to get your cookies, then it's already game over. Like I said, you can have time-bounded CSRF tokens (be they cookies or whatever else, URL-encoded, etc., who cares) to prevent replay attacks. But at the end of the day, if an attacker can get your cookies in real time you're done for; it's game over already. If they want to do a man-in-the-middle attack (i.e. get you to click on a fake "proxy" URL), then having the "secure" flag should be enough. Having the server check the cookie against the client's IP address, time, HMAC, and other auth attributes will then prevent the attack. If the attacker takes control of your end device, you've already lost.

        • nchmy 8 hours ago

          Sorry, but you seem to be lost.

          I, the article, and most comments here quite explicitly talked about server security via auth and CSRF protections.

          None of this has anything to do with browser security, such as stealing csrf tokens (which tend to be stored as hidden fields on elements in the html, not cookies). MOREOVER, Sec-Fetch-Site obviates the need for csrf tokens.

          • tankenmate 6 hours ago

            "MOREOVER, Sec-Fetch-Site obviates the need for csrf tokens.", you're just posting misinformation, you are flat out wrong.

            "It is important to note that Fetch Metadata headers should be implemented as an additional layer defense in depth concept. This attribute should not replace a [sic] CSRF tokens (or equivalent framework protections)." -- OWASP; https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...

  • jebronie 11 hours ago

    I don't understand why your post is flagged. You are 100% right. The point of CSRF protection is that -you can't trust the client-. This new header can just be set in curl, if I understand correctly. Unlimited form submissions, here I come!

    • eptcyka 11 hours ago

      CSRF protection protects the user by not allowing random pages on the web to use resources from a target website without the user being aware of it. It only makes sense when serving people using browsers. It is not a defense against curl or skiddies.

      • nchmy 9 hours ago

        To elaborate/clarify a bit: we defend against curl with normal auth, correct? Be it session cookies or whatever. That plus Origin/Sec-Fetch-Site (and TLS, secure cookies, HSTS) should be reasonably secure, no?

        • tankenmate 9 hours ago

          Indeed, you need some form of CSRF protection, but Sec-Fetch-Site is primarily focused on keeping the browser secure, not the server. Having said that, it's nice defence in depth for the server as well, but not strictly required as far as the server is concerned.

          • nchmy 9 hours ago

            I'm confused. In my mind, you only really need to keep the server secure, as that's where the data is. Auth cookies and CSRF protections (e.g. Sec-Fetch-Site) are both used towards protecting the server from invalid requests (not logged in, or not coming from your actual site).

            What are you referring to when you talk about keeping the browser secure?

            • tankenmate 9 hours ago

              The Sec-Fetch-Site header can't be read / written by JavaScript (or WASM, etc.); cookies (or some other tokens), on the other hand, can be. In most circumstances, allowing JavaScript to access these tokens allows for "user friendly" interfaces where a user can log in using XMLHttpRequest / an API rather than a form on a page. OOB tokens, on a one-off auth basis or continuously (i.e. OAuth, TOTP with every request), are more secure, but obviously require more engineering (and come with their own "usability" / "failure mode" trade-offs).

              • nchmy 8 hours ago

                > The Sec-Fetch-Site header can't be read / written by JavaScript

                Perfect. It's not even meant or needed to be. The server uses it to validate the request came from the expected site.

                As I and others have said in various comments, you seem to be lost. Nothing you're saying has any relevance to the topic at hand. And, in fact, is largely wrong.

                • tankenmate 6 hours ago

                  "Nothing you're saying has any relevance to the topic at hand. And, in fact, is largely wrong."; your confidence in your opinion doesn't make you right.

                  Prove it.

    • kokada 9 hours ago

      This is not what this is supposed to protect against, and if you are using http.CrossOriginProtection you don't even need to add any header to the request:

      > If neither the Sec-Fetch-Site nor Origin headers are present, then it assumes the request is not coming from web browser and will always allow the request to proceed.
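
      Roughly, the documented rule amounts to something like this for unsafe methods (a sketch of the described behaviour, not the actual stdlib source; imports errors, net/http, net/url):

        // Returns nil if the request may proceed under the documented rule.
        func crossOriginCheck(r *http.Request) error {
                switch r.Header.Get("Sec-Fetch-Site") {
                case "same-origin", "none":
                        return nil // same origin, or user-initiated (address bar, bookmark)
                case "":
                        // No Sec-Fetch-Site; fall back to the Origin header below.
                default:
                        return errors.New("cross-origin request detected")
                }
                origin := r.Header.Get("Origin")
                if origin == "" {
                        return nil // neither header present: assume a non-browser client
                }
                u, err := url.Parse(origin)
                if err != nil || u.Host != r.Host {
                        return errors.New("origin does not match request host")
                }
                return nil
        }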

      • nchmy 6 hours ago

        Wait, but if those headers are missing, then isn't there a vulnerability if someone is using an old browser and clicks on a malicious link? Do we need to also check the user agent or something else?

        • kokada 4 hours ago

          Exactly, the post talks about this too: older browsers will be vulnerable. This probably affects only a small share of the population, and it is even lower if you limit your service to accepting TLS 1.3 only (for this to be useful you of course need to enable HTTPS, otherwise the attacker can just strip the headers from your request).

          If you can't afford to do this, you still need to use CSRF tokens.

          • nchmy 2 hours ago

            I suppose that we could just reject anything that doesn't have these tokens, depending on whether you want to allow curl etc... I might just do that, in fact.