Show HN: An open source access logs analytics script to block bot attacks

github.com

32 points by krizhanovsky a day ago

This is a small PoC Python project for analyzing web server access logs to classify and dynamically block bad bots, such as L7 (application-level) DDoS bots, web scrapers and so on.

We'll be happy to gather initial feedback on usability and features, especially from people with good or bad experience with bots.

*Requirements*

The analyzer relies on three Tempesta FW-specific features, which you can still get with other HTTP servers or accelerators:

1. JA5 client fingerprinting (https://tempesta-tech.com/knowledge-base/Traffic-Filtering-b...). This is HTTP- and TLS-layer fingerprinting, similar to the JA4 (https://blog.foxio.io/ja4%2B-network-fingerprinting) and JA3 fingerprints. The latter is also available in Envoy (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extension...) or as an Nginx module (https://github.com/fooinha/nginx-ssl-ja3), so check the documentation for your web server.

2. Access logs are written directly to the ClickHouse analytics database, which can consume large data batches and quickly run analytic queries. For other web proxies besides Tempesta FW, you typically need to build a custom pipeline to load access logs into ClickHouse. Such pipelines aren't that rare, though (a sketch follows this list).

3. The ability to block web clients by IP or JA5 hashes. IP blocking is probably available in any HTTP proxy.
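
To illustrate the second requirement, here is a minimal sketch of such a pipeline for a non-Tempesta setup, assuming a combined-format access log and a made-up ClickHouse table; the table layout, column names and paths are illustrative only (Tempesta FW writes its own schema natively, and a real pipeline would also carry a JA5/JA3 fingerprint column):

    # Illustrative pipeline: parse combined-format access log lines and
    # batch-insert them into ClickHouse. Table layout and column names
    # are made up for this example.
    import re
    from datetime import datetime
    from clickhouse_driver import Client  # pip install clickhouse-driver

    LOG_RE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<uri>\S+) [^"]*" '
        r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
    )

    client = Client(host='localhost')
    client.execute('''
        CREATE TABLE IF NOT EXISTS access_log (
            ts DateTime, ip String, method String, uri String,
            status UInt16, bytes UInt64
        ) ENGINE = MergeTree ORDER BY ts
    ''')

    def parse(line):
        m = LOG_RE.match(line)
        if not m:
            return None
        ts = datetime.strptime(m['ts'], '%d/%b/%Y:%H:%M:%S %z').replace(tzinfo=None)
        return (ts, m['ip'], m['method'], m['uri'], int(m['status']),
                0 if m['bytes'] == '-' else int(m['bytes']))

    def load(path, batch_size=10000):
        batch = []
        with open(path) as f:
            for line in f:
                row = parse(line)
                if row:
                    batch.append(row)
                if len(batch) >= batch_size:
                    client.execute('INSERT INTO access_log VALUES', batch)
                    batch = []
        if batch:
            client.execute('INSERT INTO access_log VALUES', batch)

    load('/var/log/nginx/access.log')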

*How does it work*

This is a daemon which:

1. Learns normal traffic profiles: means and standard deviations for client requests per second, error responses, bytes per second, and so on. It also remembers client IPs and fingerprints.

2. Watches for spikes in the z-score (https://en.wikipedia.org/wiki/Standard_score) of these traffic characteristics; it can also be triggered manually. When triggered, it enters data model search mode.

3. For example, the first model could be the top 100 JA5 HTTP hashes producing the most error responses per second (typical for password crackers), or the top 1000 IP addresses generating the most requests per second (L7 DDoS). This model is then verified.

4. The daemon repeats the query over a long enough window in the past to see whether a large fraction of the clients shows up in both query results. If it does, the model is bad and we go back to the previous step to try another one. If not, then we have (likely) found a representative query.

5. Transfers the IP addresses or JA5 hashes from the query results into the web proxy blocking configuration and reloads the proxy configuration on the fly. (A rough sketch of the whole loop follows this list.)
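
Here is a rough sketch of that loop, reusing the hypothetical access_log table from the pipeline sketch above; the thresholds, file path and blocking directive are also hypothetical, not the project's actual queries or Tempesta FW syntax:

    # Illustrative detection loop: z-score trigger, candidate model query,
    # historical verification, and emitting a blocking config.
    from datetime import datetime, timedelta
    from clickhouse_driver import Client  # pip install clickhouse-driver

    client = Client(host='localhost')

    def rps_stats(minutes):
        """Mean and stddev of per-second request counts over the last `minutes`."""
        mean, std = client.execute('''
            SELECT avg(c), stddevPop(c) FROM (
                SELECT count() AS c FROM access_log
                WHERE ts > now() - INTERVAL %(m)s MINUTE
                GROUP BY ts   -- DateTime already has one-second granularity
            )
        ''', {'m': minutes})[0]
        return mean, std

    def current_rps():
        return client.execute('''
            SELECT count() / 60 FROM access_log
            WHERE ts > now() - INTERVAL 1 MINUTE
        ''')[0][0]

    def top_clients(start, end, limit=1000):
        """Candidate model: the `limit` busiest IPs in [start, end).
        Swapping `ip` for a JA5 hash column gives the fingerprint-based models."""
        rows = client.execute('''
            SELECT ip FROM access_log
            WHERE ts >= %(s)s AND ts < %(e)s
            GROUP BY ip ORDER BY count() DESC LIMIT %(l)s
        ''', {'s': start, 'e': end, 'l': limit})
        return {ip for (ip,) in rows}

    def run_once():
        mean, std = rps_stats(minutes=24 * 60)        # steps 1-2: learned profile
        z = (current_rps() - mean) / std if std else 0.0
        if z < 3.0:                                   # no anomaly, nothing to do
            return

        now = datetime.now()
        suspects = top_clients(now - timedelta(minutes=5), now)   # step 3

        # Step 4: the same query far enough in the past should NOT return mostly
        # the same clients, otherwise the model also matches normal traffic.
        week_ago = now - timedelta(days=7)
        history = top_clients(week_ago, week_ago + timedelta(minutes=5))
        if suspects and len(suspects & history) / len(suspects) > 0.5:
            return                                    # bad model, try another one

        # Step 5: push the result into the proxy blocking config and reload it.
        with open('/etc/webshield/blocked.conf', 'w') as f:
            for ip in sorted(suspects):
                f.write(f'block_ip {ip};\n')          # hypothetical directive
        # ...trigger an on-the-fly proxy configuration reload here.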

goodthink 6 hours ago

My advice to little sites: Basic Auth stops bots dead in their tracks. The credentials have to be discoverable of course, which makes it useless for most sites, but it's not that inconvenient. Once entered, browsers offer to use the same creds on subsequent visits. I have several apps that use 3D assets that are large enough for it to become worrisome when hundreds of bots are requesting them day and night. Not any more, lol.

  • krizhanovsky 6 hours ago

    That's good advice, thank you.

    In our approach we do our best not to affect the user experience. E.g., consider a company website with a blog. The company does its best to draw a larger audience to its blog, products and so on. I guess a good part of that audience would be lost if a website they see for the first time required authentication.

    However, for returning, and especially regular, clients I think that is a really simple and good solution.

gpi 13 hours ago

Where can I learn more about JA5? John Althouse (what JA stands for) hasn't published anything about JA5 yet.

imiric 20 hours ago

Thanks for sharing!

The heuristics you use are interesting, but this will likely only be a hindrance to lazy bot creators. TLS fingerprints can be spoofed relatively easily, and most bots rotate their IPs and signals to avoid detection. With ML tools becoming more accessible, it's only a matter of time until bots are able to mimic human traffic well enough, both on the protocol and application level. They probably exist already, even if the cost is prohibitively high for most attackers, but that will go down.

Theoretically, deploying ML-based defenses is the only viable path forward, but even that will become infeasible. As the amount of internet traffic generated by bots surpasses the current ~50%, you can't realistically block half the internet.

So, ultimately, I think allow lists are the only option if we want to have a usable internet for humans. We need a secure and user-friendly way to identify trusted clients, which, unfortunately, is ripe to be exploited by companies and governments. All proposed device attestation and identity services I've seen make me uneasy. This needs to be a standard built into the internet, based on modern open cryptography, and not controlled by a single company or government.

I suppose it already exists with TLS client authentication, but that is highly impractical to deploy. Is there an ACME protocol for clients? ... Huh, Let's Encrypt did support issuing client certs, but they dropped it[1].

[1]: https://news.ycombinator.com/item?id=44018400

  • krizhanovsky 5 hours ago

    This is quite insightful, thank you.

    This particular project, WebShield, is simple and didn't take too long to develop. Basically, with this project we're trying to figure out what can be built when you have fingerprints and traffic characteristics in an analytics database. It seems easy to make PoCs with these features.

    For now, if this tool can stop some dumb bots, we'll be happy. We definitely need more development and more sophisticated algorithms to fight against some paid scraping proxies.

    It's more or less simple to classify DDoS bots because they have a clear impact: system performance degrades. For some bots we can also define a target, both for the bots and for the protection system, e.g. the booked slots for visa appointments. For some scrapers this is harder.

    Another opportunity is to dynamically generate classification features and verify the resulting models, build web page transition graphs, and so on.

    That's a good point about potentially blocking ~50% of the Internet. For DDoS we _mitigate_ an attack rather than _block_ it, so we should probably do the same for bots: rate limit them instead of blocking them outright.

    Technically, we can implement verification of client-side certificates, but, yes, the main problem is adoption on the client side.
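
    For illustration only, this is roughly what requiring and verifying client certificates looks like with Python's standard ssl module (the file paths are placeholders; Tempesta FW would do this in its own TLS stack):

        import http.server
        import ssl

        # Require a client certificate signed by our own CA; the TLS
        # handshake fails for clients that don't present a valid one.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile='server.crt', keyfile='server.key')
        ctx.load_verify_locations(cafile='client-ca.crt')
        ctx.verify_mode = ssl.CERT_REQUIRED

        httpd = http.server.HTTPServer(('0.0.0.0', 8443),
                                       http.server.SimpleHTTPRequestHandler)
        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
        httpd.serve_forever()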