👋 Hi, this is Gabriel with this week’s learnings. I write about software, startups and things that interest me enough to learn about. Thank you for your readership.
This week I’m sharing my top learnings about measuring CPUs, coroutines, call stacks, and more. Hope it’s helpful!
Learned while reading
Read Understanding Software Dynamics Chapters 1 and 2, Coroutines for Go, and assorted online sources I didn’t keep track of (sorry) to learn more about call stacks.
Transaction latency is a crucial metric in modern datacenters. It follows a probability distribution, with tail latency referring to the slowest transactions (e.g., the 99th percentile, or p99). The primary sources of poor tail latency in datacenters are contention for shared resources — CPU, memory, disk/SSD, network — and software critical sections that require locks. All of these have one thing in common: waiting. Waiting is the fundamental performance bottleneck.
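To make the percentile idea concrete, here’s a small Go sketch that computes p50 and p99 from a latency sample using the nearest-rank method. The function name and the sample data are made up for illustration; real monitoring systems typically use streaming estimators (e.g., histograms) rather than sorting raw samples.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the p-th percentile (0 < p <= 100) of a latency
// sample using the nearest-rank method. A sketch, not a library-grade
// implementation: no interpolation, no input validation.
func percentile(latenciesMs []float64, p float64) float64 {
	s := append([]float64(nil), latenciesMs...) // copy to avoid mutating the caller's slice
	sort.Float64s(s)
	rank := int(math.Ceil(p / 100 * float64(len(s))))
	if rank < 1 {
		rank = 1
	}
	return s[rank-1]
}

func main() {
	// Hypothetical sample: most transactions are fast, a few are slow.
	lat := []float64{1.1, 1.2, 1.0, 1.3, 1.2, 1.1, 1.4, 9.5, 1.2, 48.0}
	fmt.Printf("p50 = %.1f ms, p99 = %.1f ms\n",
		percentile(lat, 50), percentile(lat, 99))
	// prints "p50 = 1.2 ms, p99 = 48.0 ms" — the median looks fine,
	// but the tail is dominated by the one slow (waiting) transaction.
}
```

Note how the median hides the outlier entirely; that asymmetry is exactly why tail latency gets its own metric.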
Coroutines and threads both provide concurrency, but they differ in parallelism and in how they handle shared data. Coroutines enable concurrent execution without the complexity of synchronization, because only one coroutine runs at a time. Threads, on the other hand, offer parallel execution but require careful synchronization (e.g., locks) when accessing shared data to avoid conflicts.
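A minimal sketch of the coroutine pattern in Go, in the spirit of the "Coroutines for Go" article: two goroutines alternate strictly over unbuffered channels, so the generator and its caller never run at the same time and the shared counter needs no lock. The names (`counter`, `firstN`) are my own, not from the article.

```go
package main

import "fmt"

// counter is a coroutine-style generator: it hands a value to the
// caller over an unbuffered channel, then blocks until resumed, so
// caller and coroutine never execute simultaneously.
func counter(yield chan<- int, resume <-chan struct{}) {
	for i := 1; ; i++ {
		yield <- i // pass control (and a value) to the caller
		<-resume   // block until the caller resumes us
	}
}

// firstN drives the coroutine and collects its first n values.
// (The goroutine is leaked once we stop resuming it; a real
// implementation would add a cancel channel.)
func firstN(n int) []int {
	yield := make(chan int)
	resume := make(chan struct{})
	go counter(yield, resume)
	out := make([]int, 0, n)
	for len(out) < n {
		out = append(out, <-yield)
		resume <- struct{}{} // let the coroutine advance
	}
	return out
}

func main() {
	fmt.Println(firstN(3)) // prints [1 2 3]
}
```

The unbuffered channels are what make this a coroutine rather than ordinary parallelism: every send blocks until the matching receive, so control is handed back and forth instead of shared.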
Each thread maintains its own call stack, which keeps track of the sequence of function calls and their respective stack frames. These stack frames hold local variables, function parameters, and return addresses. By having separate call stacks, threads can execute independently and maintain their own function call history.
Learned while listening to podcasts
“Understand, identify, execute.” A.k.a. slow down to speed things up.
Nice read :)