janvdberg 15 hours ago

My first command is always 'w'. And I always urge young engineers to do the same.

There is no shorter command to show uptime, load averages (1/5/15 minutes), and logged-in users. Essential for quick system health checks!
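
For example (illustrative output, not from the article):

  $ w
   10:02:37 up 12 days,  3:42,  2 users,  load average: 0.08, 0.12, 0.10
  USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
  alice    pts/0    192.0.2.10       09:58    1:02   0.04s  0.04s -bash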

  • mmh0000 13 hours ago

    It should also be mentioned that the Linux load average is a complex beast[1]. However, a general rule of thumb that works for most environments is:

    You always want the load average to be less than the total number of CPU cores. If it's higher, you're likely experiencing a lot of waits and context switching (a quick check is sketched below).

    [1] https://www.brendangregg.com/blog/2017-08-08/linux-load-aver...
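
    A quick way to eyeball that rule of thumb (illustrative numbers, not a real box):

      $ nproc
      16
      $ cat /proc/loadavg
      12.34 10.87 9.95 3/1581 40231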

    • tanelpoder 5 hours ago

      On Linux this is not true: on an I/O-heavy system - with lots of synchronous I/Os done concurrently by many threads - your load average may be well over the number of CPUs without there being a CPU shortage. Say you have 16 CPUs and the load average is 20, but on average only 10 of those 20 threads are in Runnable (R) mode and the other 10 are in Uninterruptible sleep (D) mode: you don't have a CPU shortage in this case.

      Note that synchronous I/O completion checks for previously submitted asynchronous I/Os (both with libaio and io_uring) do not contribute to system load as they sleep in the interruptible sleep (S) mode.

      That's why I tend to break down the system load (demand) by sleep type, system call, and wchan/kernel stack location when possible (a minimal example of that breakdown is sketched below the link). I've written about the techniques and one extreme scenario ("system load in thousands, little CPU usage") here:

      https://tanelpoder.com/posts/high-system-load-low-cpu-utiliz...
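
      One minimal way to get that R vs. D breakdown (not from the linked post; the column widths and filters are just one possible invocation):

        # count threads per process state (R = runnable, D = uninterruptible sleep)
        $ ps -eLo state= | sort | uniq -c

        # for the D-state sleepers, see where in the kernel they are waiting
        $ ps -eLo state,wchan:32,comm | awk '$1 == "D"' | sort | uniq -c | sort -rn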

    • lotharcable 5 hours ago

      The proper way is to have an idea of what the load normally is before you need to troubleshoot issues.

      What a 'good' load is depends on the application and how it works. On some servers something close to 0 is a good thing; on others a load of 10 or lower means something is seriously wrong.

      Of course, if you don't know what a 'good' number is, or you are trying to optimize an application and looking for bottlenecks, then it is time to reach for different tools.

  • Propelloni 14 hours ago

    Me too! So much so that I add it to my .bashrc everywhere.
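
    A minimal sketch of what that might look like (the interactive-shell guard is my own addition):

      # ~/.bashrc - print uptime/load/users at the start of each interactive shell
      case $- in
        *i*) w ;;
      esac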

__turbobrew__ 15 hours ago

If you like this post, I would recommend “BPF Performance Tools” and “Systems Performance: Enterprise and the Cloud” by Brendan Gregg.

I have pulled out a few miracles using these tools (identifying kernel bottlenecks or profiling programs using ebpf) and it has been well worth the investment to read through the books.

  • wcunning 13 hours ago

    Literally did miracles at my last job with the first book, and that got me my current job, where I used it again to prove which libraries had what performance... Seriously valuable stuff.

    • __turbobrew__ 9 hours ago

      Yea it is kind of cheating. I was helping someone debug why their workload was soft locking. I ran the profiling tools and found that cgroup accounting for the workload was taking nearly all the CPU time on locks. From searching through the Linux git logs I found that cgroup accounting in older kernels used global locks. I saw that newer kernels didn't have this, so we moved to a newer kernel and all the issues went away.

      People thought I was a wizard lol.
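
      For anyone curious, kernel time like that tends to show up with a plain stack-sampling one-liner (a generic example, not necessarily the exact tool used above):

        # sample kernel stacks at 99 Hz for 10 seconds, then print the hottest ones
        $ sudo bpftrace -e 'profile:hz:99 { @[kstack] = count(); } interval:s:10 { exit(); }'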

ch33zer 12 hours ago

Almost all of these have been replaced for me with below: https://developers.facebook.com/blog/post/2021/09/21/below-t...

It is excellent and contains most things you could need. The downside is that it isn't yet a standard tool, so you need to get it installed across your fleet.
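
If you want to try it, the basic workflow is roughly this (subcommand names from memory - double-check against the docs):

  $ below live                 # htop-style live view of system, cgroup and process stats
  $ below record               # run as a service to keep historical samples on disk
  $ below replay -t "3m ago"   # step back through recorded history (see --help for accepted time formats)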

  • benreesman 7 hours ago

    Oh man, nostalgia city. I vividly remember meeting atop's time-travel debugging at 3am in Menlo Park in 2012. Wild times.

5pl1n73r 9 hours ago

After this article was written, `free -m` on many systems gained an "available" column, which estimates how much memory is free for new workloads (roughly free plus reclaimable cache). It's nicer than the "-/+ buffers/cache" line shown in this old article.

  $ free -m
                 total        used        free      shared  buff/cache   available
  Mem:            3915        2116        1288          41         769        1799
  Swap:            974           0         974
CodeCompost 15 hours ago

> At Netflix we have a massive EC2 Linux cloud

Wait a minute. I thought Netflix famously ran FreeBSD.

  • craftkiller 15 hours ago

    My understanding was their CDN ran on FreeBSD, but not their API servers. But I don't work for Netflix.

    • diab0lic 15 hours ago

      Your understanding is correct.

      • achierius 12 hours ago

        Why did they not choose to use it for both (or neither)? I.e., what reasons for using FreeBSD on CDN servers would not also apply to using it for API servers?

        • seabrookmx 11 hours ago

          They are extremely different workloads, so... everything?

          The CDN servers are basically appliances, and are often embedded in various data centers (including those run by ISPs) to aggressively cache content. They care about high throughput and run a single workload. Being able to fine-tune the entire stack, right down to the TCP/IP implementation, is very valuable in this case. Since they ship both the hardware and the software, they can tightly integrate the two.

          By contrast, API workloads are very heterogeneous. I'd have to imagine the ability to run any standard Linux software there is also a big plus. Linux also clearly has much more vetting on cloud providers than FreeBSD.

          • aflag 11 hours ago

            Can't you fine-tune Linux as well? Does FreeBSD somehow perform better on a CDN workload? I find it difficult to imagine that the reason is performance, but I don't know what the reason is.

  • drewg123 15 hours ago

    The CDN runs FreeBSD. Linux is used for nearly everything else.

louwrentius 14 hours ago

The iostat command has always been important for observing HDD/SSD latency numbers.

SSDs especially get treated like magic storage devices with infinite IOPS at Planck-scale latency.

Until you discover that SSDs that can do 10 GB/s don't do nearly so well (not even close) when you access them from a single thread doing random I/O at a queue depth of 1.
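
A quick way to see the difference for yourself (the iostat flags match the article; the fio run and file path are just an example):

  # per-device latency and queue stats every second (r_await/w_await are in ms)
  $ iostat -xz 1

  # single-threaded 4k random reads at queue depth 1 - this measures latency, not bandwidth
  $ fio --name=qd1-randread --filename=/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
        --ioengine=psync --runtime=30 --time_based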

  • wcunning 13 hours ago

    That's where you start down the eBPF rabbit hole with bcc/biolatency and other block-device histogram tools. Further, the cache hit rate and block-size behavior of the SSD/NVMe drive can really affect things if, say, your autonomous vehicle logging service uses MCAP with a chunk size much smaller than a drive block... Ask me how I know.
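
    For reference, the bcc version of that tool is typically run like this (the install path varies by distro):

      # block I/O latency histograms: 10-second interval, printed once, broken down per disk
      $ sudo /usr/share/bcc/tools/biolatency -D 10 1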

rkachowski 14 hours ago

it's 10 years later - what's the 60 second equivalent in 2025?

  • wcunning 13 hours ago
    • BlackLotus89 12 hours ago

      PSI (pressure stall information) is missing.

      I always use a configured (F2 setup menu) htop, which isn't mentioned either. Always enable the PSI meters in htop (some Red Hat systems I work with still don't offer them...).

      If you have ZFS, enable those meters as well; htop also has an I/O tab - use it!
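
      If you just want the raw numbers, PSI is also exposed directly under /proc on kernels that have it enabled (illustrative output):

        $ cat /proc/pressure/io
        some avg10=1.32 avg60=0.87 avg300=0.45 total=98765432
        full avg10=0.98 avg60=0.61 avg300=0.30 total=87654321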

ImPostingOnHN 13 hours ago

Maybe I missed it, but checking available disk space is often a good step in diagnosing misbehaving systems.
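
The usual quick checks (and don't forget inodes - a full inode table produces "no space left" errors even with free blocks):

  $ df -h      # filesystem usage by mount point
  $ df -i      # inode usage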

babuloseo 15 hours ago

he forgot about rusttop

  • AnyTimeTraveler 7 hours ago

    I'm pretty sure that that didn't exist in 2015 ;)