Git Super-Power: The Three-Way Merge

tl;dr: git config --global merge.conflictstyle diff3

In my previous post, I preached about the one true way to merge MRs in a git workflow. The answer is obviously to rebase to resolve conflicts, and then add a merge commit to record the MR for posterity.

What I did not talk about is that there is a reason that pushes so many people to heresy. And that reason is the fear of having to resolve merge conflicts!

The thing is that merge conflicts are really not hard to resolve when using the right method. But it’s not the default one. The default one looks like this:

Your fancy code editor might display the “green” and “white” versions in two separate panes, but the principle is the same.

Basically, you see the results of two different changes, and you have to guess how to make them come together. So you need to understand what each one is doing. Many people see it as a painful task that they much prefer to do only once, with a single merge commit, instead of resolving conflicts commit after commit during a rebase.
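To make this concrete, here is a minimal sketch in a throwaway repository (the file and branch names are hypothetical) that reproduces those default conflict markers:

```shell
# Create a throwaway repository where two branches change the same line.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name You
echo "puts 'hello world'" > hello.rb
git add hello.rb && git commit -qm 'initial version'
git checkout -qb feature
echo "puts 'hello mundo'" > hello.rb
git commit -qam 'translate world'
git checkout -q -                  # back to the default branch
echo "puts 'hola world'" > hello.rb
git commit -qam 'translate hello'
git merge feature || true          # the merge conflicts, as expected
cat hello.rb                       # shows the <<<<<<< / ======= / >>>>>>> markers
```

With the default style, `hello.rb` only shows the two competing results, not the common ancestor they both started from.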

But all of this is coming from the fact that you do not have enough information locally to resolve the conflict. What you would need to know is what each change was, not just its result. Thankfully, there is an option just for that in git! By setting merge.conflictstyle to diff3, your conflicts will look like this (example from the git documentation):

#! /usr/bin/env ruby

def hello
<<<<<<< ours
  puts 'hola world'
||||||| base
  puts 'hello world'
=======
  puts 'hello mundo'
>>>>>>> theirs
end

Instead of having 2 versions of the puts line, we have 3. Since 3 > 2, it’s much better!

More seriously, what diff3 adds is the middle version, which corresponds to the common ancestor. The versions above and below show how each side changed it.

So, from this, you can see at a glance that we are resolving a conflict between two changes:

  • Change “hello” into “hola”
  • Change “world” into “mundo”

As a human, we can deduce that the effect of these two changes is:

  puts 'hola mundo'

Thanks to this approach, you do not need to understand the rest of the branch. Most of the time, you can just look at a few lines of code and resolve the conflict almost mechanically. Not as nice as automatic resolution, but far from insurmountable.

So set that setting globally, and fear merge conflicts no more!

git config --global merge.conflictstyle diff3

Merging Responsibly

tl;dr: I do not like merging the main branch into feature branches, and I do not like squashing MR/PRs

git commit

git has emerged as the obvious choice for SCCS (Source Code Control System). The consensus is that it is unambiguously better than the previous standard, SVN. There is some debate as to what is the current best SCCS, especially regarding the confusing command-line interface of git1, but the first mover effect gave the advantage to git, and network effects make the space a winner-take-all situation.

However, we are not here to discuss this, which is a settled issue in practice (just use git). Instead, I want to talk about another debate, which is still far from being settled. I want to talk about what happens when you want to merge a PR (pull request, in GitHub vernacular) or MR (merge request, at GitLab).

git merge

Okay, let’s say we have the typical situation represented in the graph below.

As is traditional with git history, time goes upwards. Each colored disk represents a commit. Each commit points to the parent commit it builds upon.

In git parlance, we want to merge the feature branch into the main branch. This way, the changes from the feature branch will also be present in the main branch. Conceptually, this looks like this:

The merge commit (brown disk) has two parents

This is pretty and intellectually satisfying, but it hides the complexity that makes things hairy in practice. When merging the branch, there might be conflicts: there could be changes in both the main branch and the feature branch that affect the same portion of the code.

git won’t always know how to resolve that automatically. You can do the manual resolution in GitHub or GitLab, but it is usually more comfortable to do it locally from the comfort of your IDE2. In general, that means you merge the main branch into the feature branch, push the updated feature branch, and get it merged into the main branch. This looks like this:

But that’s still mostly fine. And we could probably make it so that we only have one merge commit. But the practice of merging the main branch into the feature branch is opening the floodgates for git history messes. Because merge conflicts are painful, you’ll want to resolve them often. And you might have a long-lived branch that you keep having to make ready for merging, until you realize you need more changes. And so:

And that’s just a single branch. When you apply this madness to a project where multiple developers are actively contributing, this is what your history becomes:

From Resistance Against London Tube Map Commit History

git bisect

Let’s take a step back. Why are we keeping all this stuff in the git history? Does the fact that we resolved conflicts really need to be remembered for posterity? What do we actually want there?

What really matters is the final state of the code. We might need to keep the code of each previous released version in case we need to backport a fix. But we do not need all the little changes in between for that.

git history gives you two things.

First, you can check in what context a little change was made. If you are unsure why a piece of code is there, just use git blame to find the commit which added it3.

Second, you can test with any intermediate version of your software. Oh, a bug was introduced between version 2.17.6 and version 2.18.0? Instead of combing through the full diff between these versions, just find at which commit the bug appears. And if your history is clean enough, you should be able to git bisect to find that commit in logarithmic time.

And that second one is really a magical power. Understanding a bug is much easier when you are looking at a 10-line diff than when you have basically the whole codebase as a suspect.

And guess what, merge commits are useless for either of these. Worse, if the offending commit is a merge commit, you will have to understand the conflicts it was trying to resolve. And if there is a bug here, it probably means the conflict was not trivial. Oh, and looking at the diff of the merge commit by itself is useless, so you will need to read potentially multiple commits from the two branches it brings together.

Thankfully, we can avoid all this.

git rebase

What we really want to keep is the changes that were brought by the feature branch. So why not do just that? Just reproduce the changes on top of the main branch. You will still have conflicts, but you will resolve them by updating the feature commits, instead of adding a merge commit.

The yellow commits are reproduced on top of the main branch. If there are no conflicts, this is trivial. If there are conflicts, the corresponding feature commit is updated.
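The corresponding commands, sketched on a throwaway repository (branch and file names are hypothetical), would look like this:

```shell
# Rebase the feature branch on top of main, then merge it with an
# explicit merge commit to keep the MR visible in the history.
set -e
cd "$(mktemp -d)"
git init -q -b main                # -b requires git >= 2.28
git config user.email you@example.com
git config user.name You
echo base > file.txt && git add file.txt && git commit -qm 'base'
git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm 'feature work'
git checkout -q main
echo more >> file.txt && git commit -qam 'progress on main'
git checkout -q feature
git rebase -q main                 # replay the feature commits on top of main
git checkout -q main
git merge -q --no-ff -m "Merge branch 'feature'" feature
git log --oneline --graph          # linear history, plus one merge commit
```

The `--no-ff` flag forces the merge commit even though main could fast-forward, which is exactly the "merge commit after rebasing" strategy discussed below.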

The nice thing about this is that the individual rebased commits still make sense. Just by looking at their diff, you can tell what they are doing. With a merge commit, this can be more difficult.

git squash

There is one more thing. Once you have rebased, you can keep the branch as is, or you might make some more changes. There are three options:

  • fast-forward: always rebase the branches, even if there are no conflicts, to keep a purely linear history; this gives you a very nice history, but you lose the MR/PR as a unit, which can be useful for context
  • merge-commit: add a merge commit, whether it is necessary (even without conflict, two branches might not be built upon one another) or not; this gives you most of the advantages of rebasing, but you also keep track of the MR/PR (e.g. a link in the merge commit)
  • squash: combine all the commits of the branch into a single one; so no merge commit and a purely linear history, and you can keep track of the MR/PRs (in the squash commits), but you lose the intermediate commits

My personal projects are usually fast-forward, since I do not bother with PR/MRs. However, for projects with collaboration, I actually prefer merge commits after rebasing.

To finish, merge commits after rebasing have the nice side effect that git branch --merged still works. That lets you clean up your local branches once your PR/MRs are merged into the main branch. Squash commits rewrite your branch on GitHub/GitLab, making it different from your local one, and fast-forwarding might rebase your branch even after you resolved conflicts.

  1. I won’t dispute that the command-line interface of git makes little sense, but I think this is overblown. As a beginner, the few commands you need are mostly straightforward. The intermediate phase is a bit more painful as you learn the various effects of git reset. However, this is a temporary state, and you quickly reach a point where you use git seamlessly. ↩︎
  2. Vim ↩︎
  3. Of course, if git blame just finds a reformatting commit, or a commit which merely moved the code, you will need to git blame again, until you find the commit that actually interests you. ↩︎

ViHN: Vim for Hacker News

tl;dr: I made ViHN to read Hacker News without having to move my hands away from the keyboard. It’s freely available on Firefox and Chrome.

This is about yet another Hacker News extension. But other people with my obsession for killing the mouse might enjoy it.

Most of the content of this article is just an ancient1 talking about the good ol’ times. But you can jump directly to the description of the extension.

btw, I use Vim

I know that I am not the only one who had a keyboard-zealot phase.

When GNOME Shell happened, I was forced to go looking for a new window manager. I ended up installing Ratpoison. Like the more well-known i3, Ratpoison is a tiling window manager.

The peak of UI. Wait, that’s not GNOME Panel.

That means that, instead of floating windows you usually move around with the mouse, you split the screen into “tiles” however you like, and assign windows to them. By default, the screen is made of a single tile, so windows are full-screen. Then, you might split it in two halves, to show two windows side-by-side (for instance, a text editor and a Wikipedia page). It might seem pointless since you can also do this with a regular (“stacking”) window manager, but the important difference is the workflow.

With a regular window manager, positioning windows can feel somewhat limited. As a beginner, you might use the mouse to arrange some windows so that they display all the information you need in a convenient way. But that takes several seconds. And people like me do not have seconds to waste — we could be using it to read memes on Reddit instead.

Why would you use maximized windows, when you can remind yourself you are using Arch, btw

So we learn the keyboard shortcuts to arrange the current window in the left/right half of the screen. Unfortunately, there are times when you would like to show three windows at the same time. And then, you have to go back to using the mouse. Can you imagine the horror?

With a tiling window manager, we can use keyboard shortcuts to display as many windows as we want, however we want. Want to display 81 different windows in a 9×9 grid to make pixel art, er, maximize your productivity? Of course, just a few keystrokes.

The real benefit, however, is that no one else will be able to use your computer. You will have switched to another tiling window manager by the time they figure out the previous one.

Note: Ratpoison is a misnomer, however, since you can still use your mouse to open links in Firefox instead of Tab-Tab-Tab-Tab-Tab-Tab-Tab-Tab-Tab-Tab-Tab-Tab-Return like a normal person2. Because of Valve, you even have to click on terrorists, instead of just spotting the “g” in the middle of random punctuation. Thankfully, you can live in the TTY, read documents with fbi, and watch videos with libcaca.

A goblin raid in Dwarf Fortress. They killed a dog!

Since then, I have strayed from the rightful path, and switched to xfce43. I even tried Visual Studio Code for a while (with Vim key bindings, of course), but I always come back to Vim.

Hacker News

When I was younger, and more innocent, I used to kill time by looking for script kiddies hideouts with advanced search keywords such as “hacker forum”. I would enjoy watching various people trying to impress each other with technical nonsense.

Yes, this is exactly what using a computer was like back then, with 3D interfaces in which you could fly, and everything4

At some point, I stumbled upon Hacker News, which is actually not a hideout for script kiddies5. It’s actually a place to promote great business ideas, discuss major security flaws underpinning infrastructure of geopolitical importance, and explore advanced technical topics. And not even half the content is about the latest technological fad6.

Hacker News, an actual forum for hackers

Unfortunately, it uses that quirky system where the discussion is organized in trees, and people vote for the most interesting comments. In older forums, you simply had to read the 3000-message-long thread to find the few insightful remarks. And with 20 messages per page, that would be barely 150 pages to read!

These fancy vote-based forums allow you to skip entire side-discussions of tangentially related concepts. It’s such a shame that you won’t have to learn about the proper way to repair drywall in an American house when you were just curious about the newest JavaScript framework.

In any case, the point is that reading a comment tree involves more than just scrolling down a linear thread. Instead, you might want to go back up to the parent, skip to the next sibling discussion, hide a sub-tree, or upvote or reply to a specific comment.

And as discussed above, I am not going to do all that hard work with a mouse pointer.

Procrastinating More Efficiently

At first, I just wrote myself a user script to use J, K, and other keys to navigate Hacker News comments, after having done something similar for SlateStarCodex. That was mostly good enough, but I was not as systematic with code versioning at the time, and the format did not particularly encourage it. So I eventually lost that script.

This led me to Refined Hacker News, which is actually pretty good. Unfortunately, some things are broken, and it can take a while to load. And since it is unmaintained, I did not have any hope of being able to fix the most annoying issue (the extension sometimes not working at all).

Navigating comments with Refined Hacker News

So, I finally made my own browser extension for Firefox and Chrome. It’s called “ViHN”. “Vi” as a shorthand for Vim7, and “HN” for “Hacker News”. Get it? I know, it’s very subtle.

It is primarily focused on keyboard-based navigation, introduces no visible delay, and minimizes surprises. Of course, it has many Vim-like key bindings to navigate comments (not just J and K). You can look at the README for the list, but I have also included it below to save you a mouse click if you are not using Vimium.

Also, I stole the idea from bakkot’s SlateStarComments to quickly find new comments. I never quite liked the idea of trying to remember what comments the user might have seen. This is easily confounded by reloading the page by mistake, not having the time to read everything at that particular time, not using the same browser or computer, and so on. Instead, ViHN shows a list of comments in reverse chronological order. You can go through that list (of course using key bindings), and stop when you get to those you already read.

Searching recent comments in late SlateStarCodex

Among other features, you can quick-reply to, edit, and delete comments, preview comments when replying or editing, and it highlights OP.

A less visible feature is that it manages requests (votes, favorite, flag, persistent collapse) and spreads them over time to avoid the drastic rate limiting of Hacker News. If a request results in a 503 error (non-standard “too many requests”), ViHN will retry it later. This materializes in the form of “…” instead of “favorite”/“flag” in links, and a spinning arrow for votes8.

But this would not be worthy of Hacker News if it was not used as an occasion to bike-shed a particular JavaScript framework. The extension is written in Vanilla JS. This enables maximal performance with minimum overhead and zero build time.

Full-featured Vanilla JS bundle

Key bindings

This section is reproduced from the README, and also available in the extension by pressing ? to toggle help.

Navigate Comments/Stories

j: Next comment/story
k: Previous comment/story
J: Next sibling comment
K: Previous sibling comment
g: Go to first story/comment
G: Go to last story, last root comment or “More” link
H: Focus on story/comment at the top of the screen (high)
M: Focus on story/comment in the middle of the screen
L: Focus on story/comment at the bottom of the screen (low)
n: Switch to Newest Items
h: Parent comment/story (see “Follow links” below)
p: Parent comment/story (see “Follow links” below)

Note: You can also select an item by clicking in its bounding box.

Follow links

o: Open story link/comment
O: Open story link/comment in background
c: Open comment thread
C: Open comment thread in background
b: Open both story link and comment thread
B: Open both story link and comment thread in background
h: Follow “context” link (go to comment thread, but focus on current comment)
p: Follow “parent” link (go to parent’s page, and focus on parent comment/story)
1: Open 1st link in comment (hold Shift to open in background)
9: Open 9th link in comment (hold Shift to open in background)
0: Open 10th link in comment (hold Shift to open in background)

Note: When on the “XXX more comments” link, you can hit either of [lLcC] to go to the next page of comments.

Note: The digits of the numeric keypad work as well to open links in comments. However, this can only open links in foreground.

Note: When using AZERTY, the key bindings to open links inside comments still work like in QWERTY. Hit the 1 key without shift (like typing &) to open the 1st link in foreground. Hit the 1 key with shift (like typing 1) to open the 1st link in background. Same for the other link numbers.


m: Collapse/uncollapse comment tree
u: Upvote story/comment, or cancel vote
d: Downvote story/comment, or cancel vote
f: Favorite/un-favorite story/comment of the current page
F: Flag/unflag story/comment of the current page
r: Comment on story, or reply to comment (with preview)
e: Edit comment (with preview)
D: Delete comment
Ctrl+Return: Submit current form

Navigate Newest Items

In the Newest Items list, the following key bindings are available:

l: Show selected comment/story
j: Next comment/story
k: Previous comment/story
J: Jump 10 down
K: Jump 10 up
g: Go to top story/comment
G: Go to last story/comment
n: Switch back from Newest Items

  1. I mean, I was born in the previous millennium! ↩︎
  2. Of course, you could just use Vimium, but where is the fun in that? ↩︎
  3. While writing this article, I learned that Linus Torvalds also switched to Xfce4 at this time. So, maybe, I am not such a bad person. ↩︎
  4. No, it wasn’t. The still is from the marvelous Hackers movie. But we did have Packard Bell Navigator, which was just as good (no, it wasn’t). ↩︎
  5. Or maybe it is, and I have become one without even realizing. ↩︎
  6. I like making fun of Hacker News, but it is genuinely a pretty good place to keep oneself informed when you are into computers. If you want some more fun, read every Hacker News thread ever. ↩︎
  7. I only ever use Vim in practice, never vi, never ed, so I like to pretend it’s all the same. ↩︎
  8. There is no visible feedback for collapsing since that happens immediately, and the request can be handled asynchronously. ↩︎

Rewriting NHK Easier in Rust

The Christmas holidays are the perfect time of the year to shut oneself in, talk to no one, and pour days into staring at a computer screen.

So, naturally, I rewrote my longest-actually-running project in Rust 🦀.

The Challenge

Many Words

NHK Easier serves news written in simple Japanese for language learners. It has a couple of features to make the learning process easier. The most important and most complex one shows the corresponding dictionary entry when hovering a word.

The EDICT dictionary with proper nouns is 11 MB compressed with bzip2. We do not want to send that for a single Web page. Easy! We will only send the dictionary entries for the words that appear on the page.

The thing is, there are no spaces between words in Japanese. Let’s take the following sentence: ねこがたべる
A Japanese speaker understands the words as:

  • ねこ (neko): cat
  • が (ga): follows the subject of the sentence
  • たべる (taberu): eat

So, they would read the sentence as “The cat eats”. However, even if it only uses words out of a dictionary, a naive algorithm could split the sentence as:

  • ね (ne): root
  • こがた (kogata): small-size
  • べる (beru): bell

Of course, this interpretation does not make sense. But, to rule it out, the algorithm would need to understand Japanese syntax. And now we are doing NLP.

Instead, the safe approach is just to display all possibilities to the user. If the user hovers ね, we should suggest:

  • ねこ (neko): cat
  • ね (ne): root

If they hover こ, we should suggest:

  • こがた (kogata): small-size
  • こ (ko): child
  • こ (ko): counter for articles

In other words, the basic task is to iterate over all substrings, look up the corresponding dictionary entries, and send them to the user.
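A minimal sketch of that basic task, with a toy in-memory dictionary standing in for the real EDICT data:

```python
# For every substring of the text, look up the dictionary entries.
# `dictionary` is a toy stand-in for the real EDICT data.
def entries_for_text(text, dictionary):
    found = {}
    for start in range(len(text)):
        for end in range(start + 1, len(text) + 1):
            word = text[start:end]
            if word in dictionary:
                found[word] = dictionary[word]
    return found

dictionary = {
    "ねこ": "cat",
    "ね": "root",
    "こがた": "small-size",
    "こ": "child",
    "たべる": "eat",
    "べる": "bell",
}
print(entries_for_text("ねこがたべる", dictionary))
```

Note the quadratic number of substrings: this is where the thousands of lookups per article come from.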

But then, it gets worse.


Despite what you may have been told, Japanese does have grammar and inflection. And it’s agglutinative. This means that “he was being made to eat” could become the single word “tabesaserareteita”. To figure out that we should display the entry for “taberu” (eat) when the user hovers this word, we need to apply “deinflection rules”.

Basically, a deinflection rule tells you to replace a suffix of the word with a shorter one. But there are several, and you do not know which one is correct, so you just try them all. And then, you need to recursively deinflect these new words in turn. At some point, one of these words might match a dictionary entry. But you still need to check them all.
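As a sketch, with made-up rules (the real rule set is much larger), the recursive deinflection could look like this:

```python
# Try every deinflection rule on every candidate, recursively, and keep
# all the forms we generate; the two rules below are illustrative only.
RULES = [
    ("ていた", "る"),      # past progressive -> dictionary form
    ("させられる", "る"),   # causative-passive -> dictionary form
]

def deinflect(word):
    candidates = {word}
    stack = [word]
    while stack:
        current = stack.pop()
        for suffix, replacement in RULES:
            if current.endswith(suffix):
                shorter = current[: -len(suffix)] + replacement
                if shorter not in candidates:
                    candidates.add(shorter)
                    stack.append(shorter)
    return candidates

# たべさせられていた ("tabesaserareteita") eventually yields たべる ("taberu", eat)
print(deinflect("たべさせられていた"))
```

Every candidate then goes through the dictionary lookup above, which multiplies the work further.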

For a short article that ChatGPT 4 translates to 162 English words, we need to do 5,631 lookups. Of these, only 427 match at least one dictionary entry. In total, we output 679 dictionary entries for common nouns.

Early Solution

My first attempt was pretty straightforward. However, it used way too much memory. So I added an index: after a binary search, it would give me the offset of the entry in the EDICT file. This avoided creating multiple Python objects for each entry, and I relied on Linux file caching to avoid disk seeks.
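A sketch of that index, under the assumption of a simple line-based file format (the real EDICT format differs):

```python
# A sorted (word, byte offset) index: binary-search the word, then read
# its entry directly from the raw file bytes. The format is illustrative.
import bisect

def lookup(word, index, raw):
    i = bisect.bisect_left(index, (word, 0))
    if i < len(index) and index[i][0] == word:
        offset = index[i][1]
        end = raw.index(b"\n", offset)
        return raw[offset:end].decode("utf-8")
    return None

raw = "ねこ /cat/\nたべる /eat/\n".encode("utf-8")
index = sorted([("ねこ", 0), ("たべる", raw.index("たべる".encode("utf-8")))])
print(lookup("たべる", index, raw))
```

Only the small index lives in Python objects; the entries themselves stay as raw bytes until they are actually needed.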

Running a retrospective benchmark tells me it took about 1.1 s to find the relevant entries for the previous story. It was so slow that I precomputed sub-dictionaries for all pages. At least, I was paying this cost once per page, and not once per view. But it meant higher complexity (generating the dictionaries and serving them), lower flexibility (updating all the sub-dictionaries or creating a new kind of page would be cumbersome), and extra storage (1.5 GB for the sub-dictionaries of 8,997 pages).

In the benchmark below, you can see that the latency without contention is about 80 ms, and the maximal throughput is less than 13 requests/second. Running parallel requests actually lowers it!

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    79.05ms   12.37ms 137.93ms   84.80%
Requests/sec:    106.21
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:      7.99

Note: -H 'Connection: close' is required because of #488. Without it, wrk measures latency incorrectly. This actually caused all my initial benchmarks to be off by about 40 ms. I only learned about this because of a discrepancy when redoing the benchmarks for this article, where doing more work would take less time!

Note: I have run the benchmark of this article on the index page of 2024-01-12. It contains 4 stories, with 4 sub-dictionaries for common nouns, and 4 sub-dictionaries for proper nouns.

And that was the situation from 2018 to 2023.

Optimizing Python

Of course, the solution was Rust!

But, before I indulged in the pagan ritual of RIIR, I took a bit of time to properly optimize the Python version. This would give me a better point of comparison on the speedup achieved by switching languages.

Using profiling allowed me to identify various avenues of improvement:

  • move the data back into a dictionary; it would use too much memory to run on my tiny VPS, but only speed mattered at that point
  • deduplicate candidates from the deinflection
  • stop deinflecting twice (I noticed I was doing so due to poor function naming)

These optimizations combined reduced the time needed to generate the entries from 1,200 ms per story to 24 ms (-98 %).

Going further, I used a trie to quickly look up the matching deinflection rules to apply (-50 %). I also only kept relevant entries from the main dictionary (-15 %). I was basically down to about 10 ms per story. With 4 stories on the main page, I was at 310 ms to first byte, compared to 230 ms in the version with precomputed sub-dictionaries.
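The trie stores the rule suffixes reversed, so a single backward scan of the word collects every applicable rule. A sketch, again with illustrative rules:

```python
# Store deinflection-rule suffixes reversed in a trie: walking the word
# backwards then finds all matching rules in one scan.
def build_trie(rules):
    root = {}
    for suffix, replacement in rules:
        node = root
        for char in reversed(suffix):
            node = node.setdefault(char, {})
        # "$rules" cannot collide with the single-character keys
        node.setdefault("$rules", []).append((suffix, replacement))
    return root

def matching_rules(word, trie):
    matches = []
    node = trie
    for char in reversed(word):
        if char not in node:
            break
        node = node[char]
        matches.extend(node.get("$rules", []))
    return matches

trie = build_trie([("ていた", "る"), ("た", "る")])
print(matching_rules("たべさせられていた", trie))
```

Instead of testing every rule against every candidate, the cost becomes proportional to the length of the longest matching suffix.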

Optimizing Django

Thanks to benchmarking, I realized my page loads were atrocious either way. So I optimized them as well, to better see the effect of generating sub-dictionaries. The line_profiler was very useful to identify hotspots in Django views:

  • I ditched Django’s ORM query-builder for some complex stories, without changing the SQL query sent to the database; just building that query using dynamic Python objects at every page load was taking a very significant amount of time
  • I avoided using additional SQL queries to search for a specific story, when I could just use plain Python over the list of results (using SQL queries was an oversight encouraged by how Python-like Django’s ORM feels)
  • used indexes1 (this actually yielded a smaller performance improvement than what I expected)
  • avoided converting fields in an SQL query

Running the benchmark again, we see that these changes brought the load time down to 9 ms, with most of that time spent generating HTML from Jinja2 templates, and some in SQL queries and Django’s ORM query builder. Even better, serving queries on 2 physical cores with hyperthreading allows increasing the throughput to 124 requests/second!

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.94ms    2.07ms  34.97ms   90.65%
Requests/sec:    106.21
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:    124.42

At that point, things are starting to look pretty good. However, integrating on-the-fly generation of sub-dictionaries in Python would get us back into meh territory:

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    94.42ms    7.79ms 141.33ms   90.48%
Requests/sec:     10.49
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:     10.49

Notice in particular how the multithreaded benchmark does not improve throughput. This is because all threads need to access the full dictionary objects. And, with Python’s GIL, that means all threads must take their turn.

So, let’s Rust!

A bit of Rust

First, I migrated the Python module that handled the generation of sub-dictionaries. I kept mostly the same logic, compiling the Rust code to a native Python module, making the transition transparent to the rest of the project.

Simply writing in Rust already yielded significant benefits, with many low-level optimizations enabled by the type system, and with lifetimes reducing the number of allocations and deallocations.

The one thing I kept in mind was to avoid cloning objects unnecessarily. For instance, the module does not copy the entries from the main dictionary, but instead uses references as long as possible, until returning to Python. In fact, the whole file is kept as a single string, and all entries and keys (individual words) point into that string.
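A sketch of that layout in Rust; the tab-separated line format is hypothetical, but the principle is the real one: the file stays in one String, and every key and entry is a &str borrowing from it.

```rust
// Keep the whole dictionary file as a single String; keys and entries
// are &str slices into it, so no per-entry allocation is needed.
// The "key<TAB>entry" line format is illustrative, not the real EDICT one.
struct Dictionary {
    data: String,
}

impl Dictionary {
    fn entries(&self) -> impl Iterator<Item = (&str, &str)> {
        self.data
            .lines()
            .filter_map(|line| line.split_once('\t'))
    }
}

fn main() {
    let dict = Dictionary {
        data: "ねこ\t/cat/\nたべる\t/eat/\n".to_string(),
    };
    for (key, entry) in dict.entries() {
        println!("{key} => {entry}");
    }
}
```

The borrow checker guarantees that no slice outlives the backing String, which is what makes this zero-copy design safe.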

Doing this reduced the time to generate the sub-dictionary to about 1 ms. Also, using Rust references everywhere instead of Python strs reduced the memory footprint to 50 MB, making it viable in production.

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    22.17ms    2.51ms  32.16ms   71.95%
Requests/sec:     44.16
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:     48.35

This is better. But notice that we are still not doing better with multiple threads. This is because we still need to release the GIL. Once we do, we see some improvement:

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.26ms   14.29ms 193.46ms   98.24%
Requests/sec:     44.57
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:     76.25

However, calling this from Python still took about 13 ms out of the 23 ms to load the index page with 4 stories. With two sub-dictionaries per story, you would expect about 8 ms. But I did not investigate this overhead further, since I was planning to rewrite it all in Rust anyway.

All of the Rust

Having already used Axum to write a Web API, and Askama for templating, porting the Web service to Rust was unchallenging.

I did fight a bit with sqlx, however.

The two most popular ways to interact with a database in Rust are Diesel, which provides an ORM, and sqlx, which takes raw SQL. To me, the most important thing is being able to use static type checking to catch basic errors in the queries. Diesel achieves this with its query-builder; sqlx does that by verifying the SQL queries against a real database at compile time using PREPARE statements.

// with Diesel

// with sqlx
sqlx::query("UPDATE users SET name = $1 WHERE id = $2")

Using a query-builder adds a leaky layer of abstraction you need to fight with. This is especially frustrating when you already know the SQL query you want to write, but need to figure out how to make Diesel generate it for you. I already had experience with Diesel, so I opted to try sqlx, to see whether it would deliver on its promises, or whether querying a database at compile-time would add too much friction.

In fact, sqlx works mostly as described. It only becomes painful when you want to avoid making allocations for each field of each returned row! If you try to do it, you will actually lose type safety, since the queries won’t be checked anymore. This is an open issue.

$ wrk -c1 -t1 -H 'Connection: close'
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.25ms    1.22ms  22.02ms   88.10%
Requests/sec:    107.24
$ wrk -c4 -t4 -H 'Connection: close'
Requests/sec:    264.78

In any case, the Rust rewrite got rid of the overhead, reducing the total time to generate a page to 9 ms, virtually all of it being spent on generating sub-dictionaries!

Result of profiling the Web service using samply. I only performed requests to the index page, which is handled by the archive route. We can see that 95% of its time is spent generating sub-dictionaries.


Compiled languages with static typing have the reputation of being faster at the cost of being lower-level, involving more house-keeping, and having more gotchas.

However, in my experience, Rust actually feels high-level, almost Python-like! There are some things that can surprise a beginner, like the Borrow checker, but this is usually easily solved with a .clone(). After all, using Python is pretty close to using Rc and .clone() everywhere, implicitly!
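The analogy can be made concrete with a toy sketch: cloning an Rc only bumps a reference count, which is roughly what Python does implicitly for every object.

```rust
use std::rc::Rc;

fn main() {
    // Cloning an Rc copies a pointer and bumps a counter, not the data;
    // Python's reference counting does this implicitly on every assignment.
    let original = Rc::new(String::from("a big expensive value"));
    let shared = Rc::clone(&original);
    assert_eq!(Rc::strong_count(&original), 2);
    assert_eq!(*shared, *original); // both point to the same allocation
    println!("{shared}");
}
```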

In addition, the migration can often be done piecemeal.

Without too much effort, Rust gives very significant performance improvements. The first factor to go further is to minimize allocations and data copies, which is much easier in Rust than in C or in C++, thanks to references and lifetimes that track everything for you.

In fact, looking at the reversed callstack of samply’s measurements, it looks like copying is still the thing where I spend most of my time:

There is definitely still room for improvement in my refactor, but it is more than enough to convince me that this was worth it!

  1. You should very much use indexes. ↩︎
Competitive Programming Computer Science Programming

Dynamic Programming is not Black Magic

This year’s Advent of Code has been brutal (compare the stats of 2023 with those of 2022, especially day 1 part 1 vs. day 1 part 2). It included a problem to solve with dynamic programming as soon as day 12, which discouraged some people I know. This specific problem was particularly gnarly for Advent of Code, with multiple special cases to take into account, making it basically intractable if you are not already familiar with dynamic programming.

However, dynamic programming itself is mostly natural when you understand what it does. And many common algorithms are actually just the application of dynamic programming to specific problems, including omnipresent path-finding algorithms such as Dijkstra’s algorithm.

This motivated me to write a gentler introduction and a detailed explanation of solving Day 12.

Let me start with a rant. You can skip to the following section if you want to get to the meat of the article.

The Rant

Software engineering is terrible at naming things.

Now, let’s take a look at “dynamic programming”. What can we learn from the name? “Programming” must refer to a style of programming, such as “functional programming”, or maybe “test-driven development”. Then, “dynamic” could mean all sorts of things.

Guess what: it means none of that, and it has nothing to do with being “dynamic”. It is an idea that you can use to design an algorithm, so there is a link to “programming”; I will grant it that.

Edit: There is a reason why it is named this way. When you look at the historical meaning of “programming”, the expression made sense. See niconiconi’s comment.

So, what is it?

Basic Caching

Let’s say we want to solve a problem by splitting it into smaller, similar problems. Basically, a recursive function. Often, we end up having to solve the same smaller problems many times.

The typical example is Fibonacci, where you want to evaluate f(n), which is defined as f(n - 1) + f(n - 2). If we implement it naively, we will end up evaluating f(1) many times:

Call tree for evaluating f(6), the 6-th Fibonacci number. To evaluate f(6), we need to evaluate both f(5) and f(4). To evaluate f(5), we will need f(4) and f(3). Already, we see that we are going to need f(4) in two places. If we go further, we see that we will need f(1) eight times, which happens to be f(6).

In fact, in this case, since only f(1) adds anything to the overall result, the number of times we will need f(1) is equal to f(n). And f(n) grows very fast as n grows.

Of course, we can avoid doing this. We can just cache the results (or memoize f, in terrible academic vernacular).

In the example, once we have evaluated f(4) once, there is no need to evaluate it again, saving 3 evaluations of f(1). By doing the same for f(3) and f(2), we get down to 2 evaluations of f(1). In total, f(…) is evaluated 7 times (f(0), f(1), f(2), f(3), f(4), f(5), f(6)), which is just n + 1.
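To make the saving concrete, here is a small sketch (the helper names are mine) that counts how many times f is actually invoked, with and without functools.cache:

```python
from functools import cache

def naive_calls(n):
    # count the invocations of a naive recursive Fibonacci
    calls = 0
    def f(k):
        nonlocal calls
        calls += 1
        return k if k < 2 else f(k - 1) + f(k - 2)
    f(n)
    return calls

def memoized_calls(n):
    # same function, but each distinct argument is now computed only once
    calls = 0
    @cache
    def f(k):
        nonlocal calls
        calls += 1
        return k if k < 2 else f(k - 1) + f(k - 2)
    f(n)
    return calls

print(naive_calls(6))     # 25 invocations without the cache
print(memoized_calls(6))  # 7 invocations with it: one per distinct argument
```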

This is theoretically (asymptotically) optimal. But we can look at this in a different way.

Optimized Caching

With memoization, we keep the recursion: “to solve f(6), I need f(5), which will itself need f(4) […] and f(3) […], and f(4), which will itself need f(3) […] and f(2) […]”. Basically, we figure out what we need just when we need it.

Instead, we can make the simple observation that we will need f(0) and f(1) for all other evaluations of f(…). Once we have them, we can evaluate f(2), which we will in turn need for all subsequent evaluations of f(…).

You can think of it as plucking the leaves (the nodes without descendants) from the call tree we saw before, repeating until there are no more nodes. In other words, performing a topological sort.

With the example, if we have some array F where we can store our partial results:

  • F[0] = f(0) = 0
  • F[1] = f(1) = 1
  • F[2] = f(2) = f(1) + f(0) = F[1] + F[0] = 1 + 0 = 1
  • F[3] = f(3) = f(2) + f(1) = F[2] + F[1] = 1 + 1 = 2
  • F[4] = f(4) = f(3) + f(2) = F[3] + F[2] = 2 + 1 = 3
  • F[5] = f(5) = f(4) + f(3) = F[4] + F[3] = 3 + 2 = 5
  • F[6] = f(6) = f(5) + f(4) = F[5] + F[4] = 5 + 3 = 8

With this approach, we do not have any recursive call anymore. And that is dynamic programming.

It also forces us to think clearly about what information we will be storing. In fact, in the case of Fibonacci, we can notice that we only need the last two values. In other words:

  • F[0] = f(0) = 0
  • F[1] = f(1) = 1
  • F[2] = f(2) = previous + previous_previous = 1 + 0 = 1
  • F[3] = f(3) = previous + previous_previous = 1 + 1 = 2
  • F[4] = f(4) = previous + previous_previous = 2 + 1 = 3
  • F[5] = f(5) = previous + previous_previous = 3 + 2 = 5
  • F[6] = f(6) = previous + previous_previous = 5 + 3 = 8

So, we can discard other values and just keep two of them. Doing this in Python, we get:

def fibo(n):
    if n == 0:
        return 0
    previous_previous = 0
    previous = 1
    for _ in range(n - 1):
        current = previous_previous + previous
        (previous, previous_previous) = (current, previous)
    return previous

I like that this gives us a natural and systematic progression from the mathematical definition of the Fibonacci function, to the iterative implementation (not the optimal one, though).
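As an aside on that “not the optimal one” remark: the asymptotically faster approach is “fast doubling” (equivalent to exponentiating a 2×2 matrix). The sketch below is mine, not part of the original progression, and relies on the standard doubling identities:

```python
def fibonacci(n: int) -> int:
    # fast doubling, using the identities:
    #   f(2k)     = f(k) * (2 * f(k + 1) - f(k))
    #   f(2k + 1) = f(k)**2 + f(k + 1)**2
    def pair(k: int) -> tuple[int, int]:
        # returns (f(k), f(k + 1))
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)  # f(2 * (k // 2))
        d = a * a + b * b    # f(2 * (k // 2) + 1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return pair(n)[0]

print(fibonacci(6))  # 8, like before, but in O(log n) arithmetic operations
```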

Now, Fibonacci is more of a toy example. Let’s have a look at

Edit Distance

The edit distance between two strings is the smallest number of edits needed to transform one string into the other one.

There are actually several versions, depending on what you count as an “edit”. For instance, if you only allow replacing a character by another, you get Hamming distance; evaluating the Hamming distance between two strings is algorithmically very simple.
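For reference, a minimal sketch of the Hamming distance (only defined for strings of equal length), to show just how simple it is:

```python
def hamming(A: str, B: str) -> int:
    # count the positions where the two strings differ
    assert len(A) == len(B)
    return sum(x != y for x, y in zip(A, B))

print(hamming("kitten", "mitten"))  # 1
```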

Things become more interesting if you allow insertion and deletion of characters as well. This is the Levenshtein distance. Considering the title of the present article, this is of course something that can be solved efficiently using ✨ dynamic programming ✨.

To do that, we’ll need to find how we can derive a full solution from solutions to smaller problems. Let’s say we have two strings: A and B. We’ll write d(X, Y) for the edit distance between strings X and Y, and x for the length of string X. We need to formulate d(A, B) from any combination of d(X, Y) where X is a substring of A and Y a substring of B1.

We’re going to look at a single character. We’ll use the last one. The first one would work just as well, but using a middle one would not be as convenient. So, let’s look at A[a - 1] and B[b - 1] (using zero-indexing). We have four cases:

  • A[a - 1] == B[b - 1], then we can ignore that character and look at the rest, so d(A, B) = d(A[0..a - 1], B[0..b - 1])
  • A[a - 1] != B[b - 1], then we could apply any of the three rules. Since we want the smallest number of edits, we’ll need to select the smallest value given by applying each rule:
    • substitute the last character of A by that of B, in which case d(A, B) = d(A[0..a - 1], B[0..b - 1]) + 1
    • delete the last character of A, in which case d(A, B) = d(A[0..a - 1], B) + 1
    • insert the last character of B, in which case d(A, B) = d(A, B[0..b - 1]) + 1
  • A is actually empty (a = 0), then we need to insert all characters from B2, so d(A, B) = b
  • B is actually empty (b = 0), then we need to delete all characters from A, so d(A, B) = a

By translating this directly to Python, we get:

def levenstein(A: str, B: str) -> int:
    a = len(A)
    b = len(B)
    if a == 0:
        return b
    elif b == 0:
        return a
    elif A[a - 1] == B[b - 1]:
        return levenstein(A[:a - 1], B[:b - 1])
    else:
        return min([
            levenstein(A[:a - 1], B[:b - 1]) + 1,
            levenstein(A[:a - 1], B) + 1,
            levenstein(A, B[:b - 1]) + 1,
        ])

assert levenstein("", "puppy") == 5
assert levenstein("kitten", "sitting") == 3
assert levenstein("uninformed", "uniformed") == 1
# way too slow!
# assert levenstein("pneumonoultramicroscopicsilicovolcanoconiosis", "sisoinoconaclovociliscipocsorcimartluonomuenp") == 36

As hinted by the last test, this version becomes very slow when comparing long strings with lots of differences. In Fibonacci, we were doubling the number of instances for each level in the call tree; here, we are tripling it!

In Python, we can easily apply memoization:

from functools import cache

@cache
def levenstein(A: str, B: str) -> int:
    a = len(A)
    b = len(B)
    if a == 0:
        return b
    elif b == 0:
        return a
    elif A[a - 1] == B[b - 1]:
        return levenstein(A[:a - 1], B[:b - 1])
    else:
        return min([
            levenstein(A[:a - 1], B[:b - 1]) + 1,
            levenstein(A[:a - 1], B) + 1,
            levenstein(A, B[:b - 1]) + 1,
        ])

assert levenstein("", "puppy") == 5
assert levenstein("kitten", "sitting") == 3
assert levenstein("uninformed", "uniformed") == 1
# instantaneous!
assert levenstein("pneumonoultramicroscopicsilicovolcanoconiosis", "sisoinoconaclovociliscipocsorcimartluonomuenp") == 36

Now, there is something that makes the code nicer and more performant, but it is not technically necessary. The trick is that we do not actually need to create new strings in our recursive function. We can just pass around the lengths of the substrings, and always refer to the original strings A and B. Then, our code becomes:

from functools import cache

def levenstein(A: str, B: str) -> int:
    @cache
    def aux(a: int, b: int) -> int:
        if a == 0:
            return b
        elif b == 0:
            return a
        elif A[a - 1] == B[b - 1]:
            return aux(a - 1, b - 1)
        else:
            return min([
                aux(a - 1, b - 1) + 1,
                aux(a - 1, b) + 1,
                aux(a, b - 1) + 1,
            ])
    return aux(len(A), len(B))

assert levenstein("", "puppy") == 5
assert levenstein("kitten", "sitting") == 3
assert levenstein("uninformed", "uniformed") == 1
# instantaneous!
assert levenstein("pneumonoultramicroscopicsilicovolcanoconiosis", "sisoinoconaclovociliscipocsorcimartluonomuenp") == 36

The next step is to build the cache ourselves:

def levenstein(A: str, B: str) -> int:
    # cache[a][b] = levenstein(A[:a], B[:b])
    # note the + 1 so that we can actually do cache[len(A)][len(B)]
    # the list comprehension ensures we create independent rows, not references to the same one
    cache = [[None] * (len(B) + 1) for _ in range(len(A) + 1)]
    def aux(a: int, b: int) -> int:
        if cache[a][b] is None:
            if a == 0:
                cache[a][b] = b
            elif b == 0:
                cache[a][b] = a
            elif A[a - 1] == B[b - 1]:
                cache[a][b] = aux(a - 1, b - 1)
            else:
                cache[a][b] = min([
                    aux(a - 1, b - 1) + 1,
                    aux(a - 1, b) + 1,
                    aux(a, b - 1) + 1,
                ])
        return cache[a][b]
    return aux(len(A), len(B))

assert levenstein("", "puppy") == 5
assert levenstein("kitten", "sitting") == 3
assert levenstein("uninformed", "uniformed") == 1
# instantaneous!
assert levenstein("pneumonoultramicroscopicsilicovolcanoconiosis", "sisoinoconaclovociliscipocsorcimartluonomuenp") == 36

The last thing we need to do is to replace the recursion with iterations. The important thing is to make sure we do that in the right order3:

def levenstein(A: str, B: str) -> int:
    # cache[a][b] = levenstein(A[:a], B[:b])
    # note the + 1 so that we can actually do cache[len(A)][len(B)]
    # the list comprehension ensures we create independent rows, not references to the same one
    cache = [[None] * (len(B) + 1) for _ in range(len(A) + 1)]
    for a in range(0, len(A) + 1):
        for b in range(0, len(B) + 1):
            if a == 0:
                cache[a][b] = b
            elif b == 0:
                cache[a][b] = a
            elif A[a - 1] == B[b - 1]:
                # since we are at row a, we have already filled in row a - 1
                cache[a][b] = cache[a - 1][b - 1]
            else:
                cache[a][b] = min([
                    # since we are at row a, we have already filled in row a - 1
                    cache[a - 1][b - 1] + 1,
                    # since we are at row a, we have already filled in row a - 1
                    cache[a - 1][b] + 1,
                    # since we are at column b, we have already filled column b - 1
                    cache[a][b - 1] + 1,
                ])
    return cache[len(A)][len(B)]

assert levenstein("", "puppy") == 5
assert levenstein("kitten", "sitting") == 3
assert levenstein("uninformed", "uniformed") == 1
# instantaneous!
assert levenstein("pneumonoultramicroscopicsilicovolcanoconiosis", "sisoinoconaclovociliscipocsorcimartluonomuenp") == 36

Now, if you really want to grok dynamic programming, I invite you to try it yourself on the following problems, preferably in this order:

  1. longest common subsequence (not to be confused with longest common substring, but you can do that one too with dynamic programming)
  2. line wrap
  3. subset sum
  4. partition
  5. knapsack

Once you are comfortable with dynamic programming, Day 12 should become much less daunting!

Advent of Code, Day 12

In the Advent of Code of December 12th, 2023, you have to solve 1D nonograms. Rather than rephrasing the problem, I will let you read the official description.

.??..??...?##. 1,1,3

This can be solved by brute force. The proper technique for that is backtracking, another terrible name. But the asymptotic complexity is exponential (for n question marks, we have to evaluate 2^n potential solutions). Let’s see how it goes with this example:

  • .??..??...?##. 1,1,3 the first question mark could be either a . or a #; in the second case, we “consume” the first group of size 1, and the second question mark has to be a .
    1. ..?..??...?##. 1,1,3 the next question mark could be either a . or a #; in the second case, we “consume” the first group of size 1, and the next character has to be a ., which is the case
      1. .....??...?##. 1,1,3 the backtracking algorithm will continue to explore the 8 cases, but none of them is a valid solution
      2. ..#..??...?##. (1),1,3
        • and so on…
    2. .#...??...?##. (1),1,3
      • and so on…

There are 32 candidates, which would make 63 list items. I’ll spare you that. Instead, I want to draw your attention to the items 1.2 and 2:

  • 1.2. ..#..??...?##. (1),1,3
  • 2. .#...??...?##. (1),1,3

They are extremely similar. In fact, if we discard the part that has already been accounted for, they are more like:

  • 1.2. .??...?##. 1,3
  • 2. ..??...?##. 1,3

There is an extra . on the second one, but we can clearly see that it is actually the same problem, and has the same solutions.

In other words, just like with Fibonacci, the total number of cases is huge, but many of them will just be repeats of other ones. So we are going to apply memoization. And then, dynamic programming.

When we implement the “backtracking” algorithm outlined above, we get something like this (not my code):

def count_arrangements(conditions, rules):
    if not rules:
        return 0 if "#" in conditions else 1
    if not conditions:
        return 1 if not rules else 0

    result = 0

    if conditions[0] in ".?":
        result += count_arrangements(conditions[1:], rules)
    if conditions[0] in "#?":
        if (
            rules[0] <= len(conditions)
            and "." not in conditions[: rules[0]]
            and (rules[0] == len(conditions) or conditions[rules[0]] != "#")
        ):
            result += count_arrangements(conditions[rules[0] + 1 :], rules[1:])

    return result

Note that the program above handles ? by treating it as both . and #. The first case is easy, but the second case needs to check that it matches the next rule; and for that, it needs to check that there is a separator afterwards, or the end of the string.

Since it’s Python, to memoize, we just need to add @cache.

To make it dynamic programming, we use the same trick as in the example of the edit distance: we pass the offset in the string and the offset in the rules as parameters in the recursion. This becomes:

def count_arrangements(conditions, rules):
    def aux(i, j):
        if not rules[j:]:
            return 0 if "#" in conditions[i:] else 1
        if not conditions[i:]:
            return 1 if not rules[j:] else 0

        result = 0

        if conditions[i] in ".?":
            result += aux(i + 1, j)
        if conditions[i] in "#?":
            if (
                rules[j] <= len(conditions[i:])
                and "." not in conditions[i:i + rules[j]]
                and (rules[j] == len(conditions[i:]) or conditions[i + rules[j]] != "#")
            ):
                result += aux(i + rules[j] + 1, j + 1)

        return result
    return aux(0, 0)

Then, we implement our own cache and fill it in the right order:

def count_arrangements(conditions, rules):
    cache = [[0] * (len(rules) + 1) for _ in range(len(conditions) + 1)]
    # note that we iterate over the indices in reverse order here
    for i in reversed(range(0, len(conditions) + 1)):
        for j in reversed(range(0, len(rules) + 1)):
            if not rules[j:]:
                result = 0 if "#" in conditions[i:] else 1
            elif not conditions[i:]:
                result = 1 if not rules[j:] else 0
            else:
                result = 0
                if conditions[i] in ".?":
                    # since we are at row i, we already filled in row i + 1
                    result += cache[i + 1][j]
                if conditions[i] in "#?":
                    if (
                        rules[j] <= len(conditions[i:])
                        and "." not in conditions[i:i + rules[j]]
                    ):
                        if rules[j] == len(conditions[i:]):
                            # since we are at row i, we already filled in row i + rules[j] > i
                            result += cache[i + rules[j]][j + 1]
                        elif conditions[i + rules[j]] != "#":
                            # since we are at row i, we already filled in row i + rules[j] + 1 > i
                            result += cache[i + rules[j] + 1][j + 1]
            cache[i][j] = result
    return cache[0][0]

And, voilà! You can also have a look at a Rust implementation (my code, this time).

Note: In this case, it looks like the dynamic programming version is slower than the memoized one. But that’s probably due to it being written in unoptimized Python.

Note: Independently of using a faster language and micro-optimizations, the dynamic programming version lets us see that we only need the previous column. Thus, we could replace the 2D array with two 1D arrays (one for the previous column, and one for the column being filled).
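A sketch of that two-array variant (my adaptation, following the same recurrence as the 2D version): prev holds the finished column for rules[j + 1:], and curr is the column being filled for rules[j:].

```python
def count_arrangements(conditions, rules):
    n = len(conditions)
    prev = [0] * (n + 1)  # column j + 1: counts for conditions[i:] with rules[j + 1:]
    curr = [0] * (n + 1)  # column j, currently being filled
    for j in reversed(range(len(rules) + 1)):
        for i in reversed(range(n + 1)):
            if j == len(rules):
                # no rules left: valid only if no "#" remains
                result = 0 if "#" in conditions[i:] else 1
            elif i == n:
                result = 0  # characters exhausted, but rules remain
            else:
                result = 0
                if conditions[i] in ".?":
                    result += curr[i + 1]
                if conditions[i] in "#?":
                    k = rules[j]
                    if k <= n - i and "." not in conditions[i:i + k]:
                        if k == n - i:
                            result += prev[i + k]
                        elif conditions[i + k] != "#":
                            result += prev[i + k + 1]
            curr[i] = result
        prev, curr = curr, prev  # the finished column becomes the "previous" one
    return prev[0]

print(count_arrangements(".??..??...?##.", (1, 1, 3)))
```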


I’ll concede that dynamic programming is not trivial. But it is far from unreachable for most programmers. Being able to understand how to split a problem into smaller problems will enable you to use memoization in various contexts, which is already a huge improvement over a naive implementation.

However, mastering dynamic programming will let us understand a whole class of algorithms, better understand trade-offs, and make other optimizations possible. So, if you have not already done them, I strongly encourage you to practice on these problems:

  1. longest common subsequence (not to be confused with longest common substring, but you can do that one too with dynamic programming)
  2. line wrap
  3. subset sum
  4. partition
  5. knapsack

And don’t forget to benchmark and profile your code!

  1. Excluding, of course, d(A, B) itself ↩︎
  2. B could be empty as well, in which case we need to insert 0 characters ↩︎
  3. Note that we could permute the inner and outer loops as shown below. In this case, it works just as well:
    for b in range(0, len(B) + 1):
        for a in range(0, len(A) + 1):

The Secret to a Green CI: Efficient Pre-commit Hooks with checkout-index


Linting in the CI

Most people are familiar with the concept of “CI” as in “CI/CD”.

For the ones who are not, it usually refers to a set of automatic tests that are run when a developer sends their code for review. These tests typically include “linters”, which will check that the code follows certain stylistic rules, and detect common mistakes.

It can be pretty annoying to wait for the CI to complete, only to discover that it failed because I forgot a “;” in TypeScript, added an extra “;” in Python, did not format my Rust code properly, etc.

Of course, I can always run the proper linting command before sending my code to make sure everything is okay. And what were the options to pass to cppcheck again? Oh right, I am supposed to use clang-tidy in this project. What’s the proper syntax to call mypy? Oh, and I do not want to run eslint on all the source files, it takes too long, which files did I change since the last time?

I know, I’ll just write a script. I will have to find out which files were changed since the last time it was called, to avoid verifying everything every time. I could use the git history for that, but what was the last commit when the script was run? I would have to store that somewhere…

Things would be simpler if I just ran it on every commit. But then I am definitely going to forget. If only I could run that script for every commit. Oh, and I should ignore changes that I have not committed, since I am not going to send these, and it could create false positives and false negatives. I could use git stash, but I should be extra cautious about not messing things up.

The pre-commit hook

Fortunately, this is a common predicament, and a solution for this exists: pre-commit hooks.

In git, a hook is a script (or executable in general) that is run automatically at certain times. For instance, when running git push, or when making a commit.

When creating a repository, git automatically populates the hooks with some examples. Take a look at .git/hooks/ in your repository.

The hook when making a commit is .git/hooks/pre-commit. So if you make a script and put it there, it will be run when you do git commit. Try it by writing the following code in .git/hooks/pre-commit. Do not forget to make it executable with chmod +x .git/hooks/pre-commit.

#!/usr/bin/env bash
echo "Hello from the pre-commit hook!"
exit 1

If you git add some files and run git commit, you should see:

$ git commit
Hello from the pre-commit hook!

git did not open up your editor to let you write a commit message. In fact, no commit was actually made. The reason is that we made the script exit with the error status “1”. For success, that would be “0”. The status of the script is the same as that of the last executed command. Here, the last executed command is exit 1, which unsurprisingly exits with status “1”.

So, for your TypeScript project, you can just use something like the script below, and no more errors!

#!/usr/bin/env bash
npx eslint src/
npx tsc

Well, not quite.

Making it Work Right

First, we are writing a shell script. And, although they can be convenient, shell scripts come with their share of surprises. In the previous pre-commit script, a linting error found by eslint won’t stop the commit. We can see why in this example:

$ cat false-true
#!/usr/bin/env bash
false
true
$ ./false-true
$ echo $?
0

The false command exits with “1”. But it does not stop the execution of the script! So true is then run, which exits with “0”. This behavior is usually unwanted. Thankfully, we can disable it by adding set -e:

$ cat false-true
#!/usr/bin/env bash
set -e
false
true
$ ./false-true
$ echo $?
1

That’s better!

We can also set the -E option, to ensure that the ERR trap is inherited in functions. What that means is that, if you do some clean-up in your bash script (usually done with traps), and happen to write a function where a failure occurs, the clean-up is done properly. Setting the -u option will give you a proper error when you make a typo in a variable name, instead of just behaving like the variable contains the empty string.

Note: We can also set the “-o pipefail” option, but it comes with its own surprises. Basically, without “-o pipefail”, if you have “command1 | command2”, the script will only fail if command2 fails. With “-o pipefail”, the script will also fail if command1 fails. This can be what you want. But sometimes, command1 failing just means that there is nothing to check. In short, use this option on a case-by-case basis.

Alright, we are now linting the files in the current directory before making a commit. That’s good. But that’s not quite what we want. What if we only commit some of the changes?

To run the linting command on the exact state corresponding to the commit we are about to do, we could just stash the changes, but that’s dangerous. What if the script fails? What if the script succeeds? What if there is nothing to stash? What if we make a mistake somewhere and randomly corrupt the working directory?

Okay, there is a simple rule to avoid all that: don’t touch the working directory. Don’t stash, don’t move files, don’t do anything in the current directory.

Instead, just copy the state corresponding to the commit in a temporary directory far away from the working directory.

But my repository has so many files! It’s going to take so long to copy the full thing!

We’re not cloning the repository. Just doing a checkout-index. And git is fast.

The proper way to create a working directory is with mktemp, which will avoid any collision. Also, we are not going to leave random files lying around, so we ensure that the temporary directory is removed once the script finishes (succeeds, fails, crashes, is killed, etc.). In Bash, the proper way to do this is with traps. So we do that immediately. Finally, we copy the state of the current commit to that temporary directory with checkout-index:

TEMPDIR=$(mktemp -d)
git checkout-index --prefix=$TEMPDIR/ -af

Once this is done, we can just cd to that temporary directory and run our linting commands. And we’re done.

Well, almost. You did not expect it to be that easy, did you?

Making it Work Fast

See, modern build tools such as npm and cargo do this clever thing where they put dependencies in a local subdirectory (node_modules/ for npm, target/ for cargo).

It’s definitely an improvement over Python’s virtual environments. However, it does mean that, when we run our linting command from the temporary directory, we start from scratch1. Every time we try to make a commit.

Thankfully, we can just tell npm and cargo to use the usual subdirectory2. But where is this subdirectory again? We could use pwd, but maybe the user is in another subdirectory of the project. Maybe we could try visiting the parent directories until we find node_modules/ or target/?

There is a much simpler way. The thing we are looking for is usually easy to locate from the repository’s root. Often, it is just directly at the repository’s root. So, if the repository is at ~/src/my-project, we should look in ~/src/my-project/node_modules/ and ~/src/my-project/target/. Conveniently, we can just ask git for the repository’s root:

GIT_ROOT=$(git rev-parse --show-toplevel)

Then, we’ll just set the relevant environment variables:

export NODE_PATH="${GIT_ROOT}/node_modules"
export CARGO_TARGET_DIR="${GIT_ROOT}/target"

Now things are starting to look pretty good!

But we can do better.

If you only change 1 file out of 10,000, you might want to just check that one file and skip the rest. We can do just that.

First, let’s get the list of the files we should check:

git diff --cached --name-only --diff-filter=AM

Here, the “AM” value passed to diff-filter is two flags, “A” for “added files” and “M” for “modified files”. We can filter further with grep to keep only the files ending in “.py” for example.

Then, we feed this list as arguments to our linting executable. This is done easily with xargs, which will take each line from its input and pass it as an argument to the command. For instance:

(echo a; echo b; echo c) | xargs ls
# same as
ls a b c

Since the list might be empty, remember to use --no-run-if-empty. Without it, you will run the linting command without argument, which might fail, or worse3, lint all the files. So we could have:

git diff --cached --name-only --diff-filter=AM |
    grep '\.[hc]$' |
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast c; }  # lint from temp directory

Note: The paths listed by git diff are relative to the root of the repository, so you’ll want to run the linting command from that root, even if all the files are in a subdirectory. This could also mean that you may have to explicitly point the linting command to the configuration file. For instance, if your TypeScript files are in a front/ directory along with the Biome configuration file, you will need to do:

git diff --cached --name-only --diff-filter=AM | grep -E '\.tsx?$' |
    (cd $TEMPDIR; xargs --no-run-if-empty npx @biomejs/biome check --config-path="front/")

The Script

Finally, with all the pieces coming together, we have a pre-commit hook which is both reliable and fast. Don’t forget to add the pre-commit script itself to your repository, and enable it with ln -s ../../pre-commit .git/hooks/ (you cannot directly track files in .git/).

You might not need TypeScript and Rust and Python and C and C++. But you can just pick the relevant parts.

#!/usr/bin/env bash
# Usage: copy this file to .git/hooks/

# Exit at first error
set -Eeu

# To handle partially committed files, we must copy the staged changes to a
# separate location
# See also
TEMPDIR=$(mktemp -d)
git checkout-index --prefix=$TEMPDIR/ -af

# keep using the same node_modules/ and target/ directories, not a new one in
# the temporary directory this avoids re-parsing everything from scratch every
# time we run the script
GIT_ROOT=$(git rev-parse --show-toplevel)

# TypeScript
export NODE_PATH="${GIT_ROOT}/node_modules"
git diff --cached --name-only --diff-filter=AM | grep -E '\.tsx?$' |
    (cd $TEMPDIR; xargs --no-run-if-empty npx @biomejs/biome check)

# Rust
export CARGO_TARGET_DIR="${GIT_ROOT}/target"
(cd $TEMPDIR; cargo fmt --check)
(cd $TEMPDIR; cargo clippy --all -- --deny warnings)

# Python
(cd $TEMPDIR/back; ruff check .)

# C
git diff --cached --name-only --diff-filter=AM |  # list modified files
    grep '\.[hc]$' |  # filter C files
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast c; }  # lint from temp directory

# C++
git diff --cached --name-only --diff-filter=AM |  # list modified files
    grep '\.[hc]pp$' |  # filter C++ files
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast cxx; }  # lint from temp directory

For C and C++, I use the following lint-fast script:

#!/usr/bin/env bash
# Exit at first error
set -Eeuo pipefail

# the first argument selects the mode ("c" or "cxx"); the files to check follow
MODE=$1
shift

# trailing space check
if grep -Hn '\s$' "$@"; then
    echo "Error: trailing space"
    exit 1
fi

# check for NULL instead of nullptr in C++ files
if test "${MODE}" = "cxx"; then
    grep -Hn --color NULL "$@" && exit 1
fi

GCCFLAGS=(
    -fsyntax-only  # do not actually compile
    -Werror        # fail script when issue is found
    -Wall -Wextra  # usual paranoid options
)

# C++ specific options
if test "${MODE}" = "cxx"; then
    # note: one flag might come from the pre-commit hook
    GCCFLAGS+=(-xc++)  # treat the input files as C++
fi

gcc "${GCCFLAGS[@]}" "$@"

CPPCHECK_FLAGS=(
    --suppress=unusedFunction        # no point when we only check a subset of the files
    --suppress=missingIncludeSystem  # disable lookup for system includes
    --suppress=useStlAlgorithm       # sometimes makes stupid suggestions
    --suppress=missingInclude        # otherwise, it complains when not enough files are included
    --suppress=unmatchedSuppression  # otherwise, it will complain it has not found a suppressed warning
    --enable=all                     # enable everything else
    --error-exitcode=1               # fail script when issue is found
    --quiet                          # only print something when there is an issue
    # ignore inlined JS code
)

cppcheck "${CPPCHECK_FLAGS[@]}" "$@"

# C++ specific options
CLANG_FLAGS=(
    --quiet                   # only print something when there is an issue
    --warnings-as-errors='*'  # fail script when issue is found
)
if test "${MODE}" = "cxx"; then
    clang-tidy "${CLANG_FLAGS[@]}" "$@" --
fi
  1. Of course, if we’re linting with a call to npx, it will probably just fail since the command is not installed in the local project. ↩︎
  2. Yes, I know, I just said we shouldn’t touch the working directory. But we’re not, npm and cargo are! … Okay, let’s just say it’s a special case and we are being very careful here. ↩︎
  3. This is worse because this could stay under the radar for a long time and waste time. An error can usually be diagnosed quickly and fixed once and for all. ↩︎

Learning Morse with Koch

If you take an interest in learning Morse code, you will quickly hear about the “Koch method” and the “Farnsworth method”. You will also read many opinions stated with absolute certainty, often contradicting each other. Some of the advice is outright non-actionable.

I have recently read Koch’s Dissertation on Learning Morse Code. Yeah, I have unusual ways to spend my time. Anyway, the reason is that I wanted to understand what Koch actually claimed, and what he actually proved.

I start with a rant about reading a badly scanned German paper from almost 90 years ago, and a small digression about who Koch is, but you can directly skip to the main part of the article. Or directly to the conclusion.

The Rant

There is an English translation of the dissertation, but it is a machine-generated one. Although significant effort went into making it look nice, the content is often too mangled to be usable.

The only version I could find of Koch’s dissertation is an old scan. Although it was uploaded in 2020, the file was created in 2008. Unfortunately, this is a time when JBIG2 was a thing1. Sure enough, when extracting the images from the PDF, I see various files ending in “.jb2e” or “.jb2g”.

Internet Archive serves a text version of the document. Since the document was typewritten and not handwritten, the OCR actually did a pretty good job. However, since it was clearly OCR-ed after the conversion to JBIG2 took place, some characters are mixed up, in particular:

  • h and b
  • n and u
  • c and e

Of course, most of the time, the resulting word did not make sense, and I just needed to fix the typo. In German. By the way, do you know about agglutination? Oh, and since it’s 1935, it uses various words that are obsolete today. Also, it predates the orthography reform of 1996, so “ß” is still widely used. The OCR did not like that.

Thankfully, it’s 2023, and machine translation has removed all the borders, and made using foreign languages transparent. Or not.

Maybe it’s just German, but neither DeepL nor Google Translate could make natural sentences out of the transcript. At least, they were mostly grammatically correct. But, between the fact that they still followed German word order and expressions, and the fact that they poorly split sentences (sometimes duplicating them), it was far from a piece of cake.

Of course, after the transcription and the translation, we are not done. The argument itself is not a shining piece of literature. It was written by an engineer for a limited audience. On a deadline. And a typewriter.

Koch often repeats himself (in separate paragraphs, so it’s definitely not the machine translation), uses interminable sentences (the translation did not change the length), sometimes keeps the actual point implicit, and sometimes omits information altogether.

Extracting the gist of this paper was like peeling onions: each layer you remove only makes you cry harder.

Dr.-Ing. Ludwig Koch

Finding information about Ludwig Koch is not easy. It does not help that there were many other people named Ludwig Koch, including several doctors, and even a Prof. Dr. Ludwig Koch, born in 1850. And another teaching today. One thing to keep in mind is that our Koch was not a Doctor of Philosophy (PhD). His dissertation granted him the title of Doctor of Engineering, which would make him Dr.-Ing. Ludwig Koch.

What we do have to work with is the curriculum vitae attached to the end of the dissertation, where Koch states that he was born in 1901 in Kassel.

There was a “Dr-Ing. Koch” in the Nazi’s club2 of the technical university of Dresden in 1944, but it looks like it was Paul-August Koch, who studied textile and paper technology. The only other reference I have been able to find online of Morse-Koch is a 1971 document referring to “Dr.-Ing. Koch Ludwig”, which would match his title. He would have been 69 at the time.

Koch defended his thesis at the technical university Carolo-Wilhelmina. The main reporter was Prof. Dr. phil. Bernhard Herwig, who studied psychology3 and was mostly interested in studying working conditions for the good of the fatherland4. The co-reporter was Prof. Dr.-Ing. Leo Pungs, who was interested in more technical aspects of radio engineering as the dean of the Department of Electrical Engineering5. For the record, it looks like Prof. Dr. Phil. Bernhard Herwig was in the Nazi’s club, but Prof. Dr.-Ing. Leo Pungs was not6.

I tried to clarify whether Koch was actually interested in Morse code in and by itself, or was only using it as an arbitrary example to apply Arbeitspsychologie. He did have a degree in electrical engineering, but I found no evidence that he continued studying the learning of Morse code after he defended his thesis. In fact, there is no other trace of him in the academic record.

We think it was the same Koch famous for nature recordings


No, he’s not. Sound-recordist Koch was born in 1881.

Koch was able to teach a class of students to copy code at 12 words per minute in under 14 hours. However, the method wasn’t often used until recently.


I do not know why they think that “the method wasn’t often used until recently”. Koch’s method was known in the United States at least since The Learning of Radiotelegraphic Code in 1943.

Koch Dissertation

The bold parts are a summary of the summary I posted before. The rest are my comments.

For other summaries of the dissertation, look here, or here. I did my own summary independently of these, and before reading them.

We study how holism can help improve work conditions.

First, we need to tackle the fact that the dissertation is heavily oriented in favor of Gestalt psychology, or holism, which was very in at the time. Koch worked under the supervision of Prof. Dr. phil. Bernhard Herwig, who was mostly concerned with improving work productivity for the fatherland (see previous section), but it does not look like he was particularly focused on holism. This point would require more investigation, but my understanding is that Koch was somewhat biased towards holism due to the historical context, but not overtly.

When experienced radio operators send Morse code at speeds lower than 10 wpm, the resulting timings are distorted

However, this was done with only 4 subjects. They do show the same overall pattern, but there are significant differences between them. So, it is hard to conclude with anything as precise as a “10 wpm threshold”. The associated graphs would tend to show that the distortion is still very visible even at 16 wpm (80 chars/min). The main point stands, but the exact threshold below which sending counts as “slow” is very debatable.

Experienced radio operators have difficulty receiving Morse code at lower speed

This is associated with the graph below. Again, the main point is pretty clear: the accuracy of reception falls off a cliff as the speed decreases. However, the speed where this happens is again debatable. 90 % is already a pretty poor accuracy for an experienced operator. That’s one error every ten characters!

Success rate of experienced operators receiving low-speed standard Morse code. The abscissa is the speed in chars/min and the ordinate is success rate in percent.

If we look more closely at the graph, we notice that the curves bend downwards somewhere above 10 wpm. And if we pay attention, we notice that there are actually very few data points from which the curves are drawn. There is not enough information to know when the accuracy falls below 95 %, for example. There might be multiple data points in the upper right corner, but it’s hard to tell without the raw data, which is not available.

In short, slow speed is terrible, but there is no clear evidence that 10 wpm and above should be considered good enough.

Koch considers three training procedures:

  • Naive: first, learn all the characters at a slow speed, then increase the speed
  • Farnsworth7: first learn all the characters at a high speed, but with ample spacing, then decrease the spacing, then increase the speed
  • Koch: first learn the rhythm at 12 wpm, then learn 2 characters, then add other characters one at a time and finally increase the speed

Because this matches my intuition, I would like to say that the previous experiments show that the naive approach is clearly mistaken. But, looking at it objectively, that is not what the experiments show. What they do show is that using low-speed Morse is not strictly easier than using high-speed Morse, to the extent that experienced operators do not automatically master low-speed Morse.

The main point of Koch can be summed up in two statements:

  1. At lower speeds, we need to think, while higher speeds rely on automatic recognition
  2. Starting to practice Morse with the thinking-mode is a waste of time

Regarding statement 1, it is hard to find explicit evidence, but it is at least sound, given that operators just do not have the time to think about each individual dit and dah at high speed. However, this does not preclude the possibility of progressing to automatic recognition at lower speeds. Also, at higher speeds, it is definitely possible to think about what you hear, just on a more selective basis.

For statement 2, I want to highlight the fact that the thinking- and automatic- modes directly map to the cognitive and autonomous stages in Fitts and Posner’s model. Thus, there is nothing inherently unnatural in progressing from thinking-mode to automatic-mode. The harsh plateau when switching from the former to the latter might just be an unavoidable step in the learning process8.

In this part, the arguments of Koch for statement 2 are not supported (nor invalidated) by evidence. The evidence comes later in the paper from comparing Koch method with the other procedures.

Harder characters (X, Y, P, Q) should not be learned at the very start. Koch uses L, F, C, K, R, P, X, Y, Q, D, B, G, A, N, Z, U, V, W, J, H, S, I, E, O, M, T.

That makes sense, at least to avoid frustration. As far as I know, Koch did not systematically investigate the learning order, so that’s a minor point.

Same for similar characters (E/S/H/5, U/V, D/B, G/6, W/J)

Same as above.

Training sessions should be spread out in time instead of packed together, and should not exceed half an hour.

Spoiler: this is the truest claim of the paper. But it is not surprising. The spacing effect has been known since even before Koch was born. In fact, Koch cites a paper from 1897 on this exact topic.

The experiment was conducted with students, tradesmen and businesswomen with two or three sessions a week, so it was at a disadvantage relative to military training

As one knows, psychology is the study of psychology students. To be fair, Koch states that it was not just students. But, since we do not have the raw data, we cannot tell whether it was 10 % students, or 90 % students.

In any case, this argument is kind of weird. Koch argues that his study was biased in a way that should have made good results harder to obtain than in military training. But Morse code training is offered to anyone who joins the military, while students are a select class of individuals, especially at the time.

In addition, he just argued that short sessions spread out in time were more effective than long, packed sessions (like the military does).

Two-tone method: using different tones for the dits and dahs helps at the start of the training

This is an idea that I see very rarely when I read about the Koch method online. Actually, I think the only instance is at LICW. I find it very appealing, since it sounds like it could really help with recognizing the characters more easily at first, and thus improve the feedback loop for learning.

However, the evidence for its effectiveness provided in Koch’s dissertation is very weak.

Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

The argument is that the two-tone curve is always above the one-tone curve. However, there are a number of issues:

  1. We do not know how many participants there were in each case, so we cannot assess the significance of the result
  2. The curves have lots of variation, which suggests that random differences were not smoothed out by the law of averages
  3. The relation between the two curves does not show a continuous advantage, since the spread does not increase over time. In fact, they even merge for a time!
  4. Even if we ignore the fact that they merge, the spread is mostly set during the second half-hour. We would expect to see the most variance at the beginning of the training, when students are not yet familiar with the process and differ in their learning patterns. So this might explain the spread entirely.

Again, the two-tone method could actually be useful, but we cannot tell from the evidence. If anything, we do learn that the advantage is marginal at best.

In fact, LICW did try the two-tone method, but they found no advantage:

we experimented with introducing two-tone characters to new students and the feedback was overwhelmingly negative

Long Island CW club

The characters were added as H, F, A, B, G, C, D, E, CH, L, K, I, O, T, M, P, Q, R, S, N, U, W, V, Y, X, Z (no J)

With the new methods, students take 14 hours to master Morse code, instead of 10 weeks

This is definitely the most important claim of the paper. Military training typically uses several hours every day for months on end. Reducing this to only 14 hours, even if spread over several weeks, is phenomenal.

Well, 14 hours with 3 half-hour sessions a week is still about 10 weeks. So the whole gain would be about spreading the sessions instead of packing them. But still, let’s say that the claim is that this is only made possible thanks to the new method.
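For the record, the arithmetic behind that remark:

```python
# 14 hours of training, split into half-hour sessions, three per week
sessions = 14 * 2     # 28 half-hour sessions
weeks = sessions / 3  # about 9.3 weeks, i.e. roughly the same 10 weeks

assert sessions == 28
assert 9 < weeks < 10
```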

This is supported by the two following graphs9.

Learning curve for the Farnsworth method. The abscissa is the number of weeks of practice; the ordinate is the speed attained by the student.
Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

Alas, there are a number of confounding factors that are not addressed.

One. In the second graph, we can see that the training was considered complete once students had mastered the 26 letters of the alphabet. This excludes digits and punctuation, which are necessary for normal operation. But the first graph is actually not based on Koch’s experiments. It was “kindly provided by the Deutsche Verkehrsfliegerschule”, a military-training institution. Of course, we have no more information, so we do not know how many characters they had to learn.

In comparison, it took me less than 20 hours to reach 26 mastered characters (lesson 25 at LCWO) at 20 wpm, but 35 more to master all LCWO lessons. I did follow the Koch method, in that I trained at 20 wpm from the get-go, without extra spacing. But the point is that, at 26 characters, I was still very far from being done.

Of course, the first graph could be about learning only the 26 letters, but it’s never stated so, and we have no way to check.

Two. The target speed is set to 12 wpm, which is actually pretty low. Koch himself states that the passing speed is usually at 20 wpm:

Radio communication in the ship service is under the same conditions, i.e. perfect mastery of Morse code at 100 char/min. The examination conditions of the radio operator course are, for example:

“Keying a telegram of 100 words of 5 letters each in 5 minutes on the Morse apparatus in perfect form. Receiving a code telegram of 100 words and an open text of 125 words of 5 letters each in 6 minutes; transcribing the latter on the typewriter.”

Koch claims that going from 12 wpm to 20 wpm would be relatively easy, since all the characters have been learned in the automatic-mode. However, this was never tested, so there is no concrete evidence showing this. As an illustration, I took 55 hours to master 20 wpm (with 41 characters), but I am barely reaching 28 wpm 30 hours later.

Three. Remember how, in the previous quote, it said “perfect form”? Koch sets the bar at “less than 10% error rate”. It took me 44 hours to reach lesson 40 of LCWO at 10 % error rate, but still 11 more hours to get it down under 5 %.

And now, for the most damning issue.

Aptitude test: those who fail to master 4 characters at a speed of 60 char/min in the first 2 half-hours never mastered all characters.

This could indeed be a useful aptitude test. However, it points to a fundamental flaw with the study. What does it mean by “never”?

Does Koch mean that these students were allowed to continue the practice for months until he wrote his dissertation? And, more importantly, that he counted their learning time in the graph above?

Or does it mean that they were allowed to continue the practice until the 28th session? In that case, it should show as a value strictly lower than 26 for the number of mastered letters at the end of the experiment.

Or does it mean that they were removed from the experiment early, when they gave up, or when it became clear that they could not keep up?

In any case, this is never addressed in the paper. Yes, military training programs eliminate students aggressively as they fall behind. And it could even be that Koch had a much higher success rate. But we do not know. At all.


Conclusion

Koch brings up interesting ideas. However, none are supported by the evidence he puts forth, except one: spaced repetition. That one is actually well-known and supported by evidence, and Koch himself recognizes this.

To stress the point: it is very well possible that some parts of this method actually have a positive impact, but we do not have enough evidence from this paper to conclude. I have read many accounts of people saying that the Koch method, or the Farnsworth method, or whatever method helped them. But, until we have concrete evidence, we cannot conclude that one approach is better than the others.

Then, how should I learn Morse code?

Just use the Koch method on LCWO at 20 wpm. Lower it if that is too fast. Practice 10 minutes a day. Every day. And keep at it.

But you just said that the Koch method was not supported by evidence!

Yup. But it is not disproven either, and no other method is proven to work better10. Don’t get stuck in analysis paralysis trying to find the one best method. Use Koch because it’s the one for which there are the most tools, including LCWO, the most convenient one. Or just because I said so. Use 20 wpm because that’s the default in various places. Or because that’s what I did. But don’t worry about lowering it if you really cannot manage.

What does matter is actually practicing. So spaced repetition and discipline.

How long does it take to learn Morse Code?

Count 50 to 100 hours if you want to be comfortable at 20 wpm. Count a minimum of 10 to 20 hours for the bare basics if you’re aiming for 12 wpm.

This is a personal question but on average students get on the air doing their first QSO in 3-4 months.

We do not have requirements but recommend as a minimum 2 classes per week and 15–20 minutes daily practice.

FAQ of Long Island CW Club

We teach the code at 12 words per minute. This speed was determined by Koch to be optimum for initial learning of CW.  It is also optimum for making first contacts on the air.

Description of Morse code training at Long Island CW Club

Should I learn the Visual Representation?

Sure, if you find it fun. It’s a pretty easy goal, and that could be quite enough if you want to shine in escape games. If you intend to actually use it over the air, don’t wait until you have learned it, and just start practicing what you actually want to do.

Should I Learn Mnemonics?

No. Seriously, they’re all terrible, and it’s not that hard to learn the 26 letters, even if it’s just for an escape game.

  1. JBIG2 is a “clever” image compression format in that it notices when two parts of the image look similar, and only stores one of them. So, if two characters look a bit similar in a document scan, it could decide to merge them, making them identical. This obviously did not go well. ↩︎
  2. Dresden 1944–1945 program, page 31 of the document, page 33 of the document (note: “NSD” stands for “Nationalsozialisticher Deutscher Dozentenbund”) ↩︎
  3. Carolo-Wilhelmina 1929–1930 program, page 13 of the document, page 12 of the PDF, left half ↩︎
  4. Braunschweig during the National Socialism, pages 118–134 ↩︎
  5. Carolo-Wilhelmina 1929–1930 program, page 8 of the document, page 10 of the PDF, left half ↩︎
  6. School year 1944–1945 program, pages 21, 22 of the document, page 15 of the PDF, right half and page 16, left half ↩︎
  7. It was only named “Farnsworth method” much later though, as analyzed by the Long Island CW Club ↩︎
  8. I do not claim this is the case. But, so far in the paper, we have no evidence either way. ↩︎
  9. Koch also refers to another study, Methoden der Wirtschaftspsychologie (Handbuch der biologischen Arbeitsmethoden, page 333), which has a graph similar to the first one. However, the plateau is described as happening at 14 wpm, not 12 wpm. It also goes up to 22 wpm for reception, not merely 16 wpm. ↩︎
  10. Again, based on Koch’s dissertation. Other research might actually prove that one method works better. I have just not read one yet. ↩︎

Tiny Docker Containers with Rust

The Magic

Thanks to Python, I have gotten used to Docker images that take minutes (or dozens of minutes) to build, and hundreds of megabytes to store and upload.

FROM python:3.14

# install everything
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libsome-random-library0 libanother-library0 \
        libmaybe-its-for-pandas-i-dunno0 libit-goes-on0 liband-on0 \
        liband-on-and-on0 \
    && rm -rf /var/lib/apt/lists/*

# install more of everything
COPY requirements.txt .
RUN pip3 install -r requirements.txt

ENTRYPOINT ["gunicorn", "--daemon", "--workers=4", "--bind=", "app:create_app()"]

But there is another way. The Rust way. Where you build your image in a second, and it only takes 5 MB1.

FROM scratch
COPY target/release/my-binary .
ENTRYPOINT ["./my-binary"]

The Trick

This is possible thanks to the points below.

  1. Rust is compiled to binary
  2. Rust is statically2 compiled to binary
  3. You can easily use musl3 with Rust

The first point means that there is no need for a runtime to interpret the script or the bytecode.

The second point means that the binary contains the code for all the libraries4. And, more specifically, only the required code. So no need to install them externally, you remove some overhead, and the total size is reduced.

With these two, you get down to just a few runtime dependencies. Namely, glibc and the ELF loader5.

$ ldd target/release/my-binary
	linux-vdso.so.1 (0x00007fffbf7f9000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f7a759b2000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f7a758d3000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7a756f2000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f7a762a6000)

But you can go further! We do not have to use glibc. It turns out you can just link musl statically. And with Rust, using musl is only a matter of installing musl-tools and adding --target=x86_64-unknown-linux-musl to your cargo build command.

And, since we just got rid of our last runtime dependency, we do not even need the ELF loader! So now, we get:

$ ldd target/x86_64-unknown-linux-musl/release/my-binary
	statically linked

All the user code is now in a single file and will talk to the kernel directly through system calls (the syscall instruction, on x86-64).
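Putting the pieces together, the multi-stage build mentioned in the footnote could look like this. This is a sketch, assuming a pure-Rust project whose binary is named my-binary (C dependencies would additionally need a musl toolchain):

FROM rust:1 AS build
WORKDIR /src
RUN rustup target add x86_64-unknown-linux-musl
COPY . .
RUN cargo build --release --target=x86_64-unknown-linux-musl

FROM scratch
COPY --from=build /src/target/x86_64-unknown-linux-musl/release/my-binary .
ENTRYPOINT ["./my-binary"]

The first stage compiles the static binary; the final image contains nothing but that one file.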


This is a nice trick, and can help you if you really care about small Docker images, but it does have a drawback: targeting musl typically gives you lower performance. So keep this in mind!

  1. Sure, I am simplifying things here. In practice, you would also build the binary in a Dockerfile, and then use staging. But that does not change the point. ↩︎
  2. Static linking means including external libraries within the binary, as in opposition to dynamic libraries, where they are not included. Binaries that were dynamically linked will need to be provided with the external libraries at runtime (.so files on Linux, .dll files on Windows). ↩︎
  3. musl is an implementation of libc, the standard library for C. Virtually all binaries depend on the libc. glibc is the most common implementation of libc on Linux. ↩︎
  4. Of course, it won’t work if a library is using FFI to talk to OpenSSL, for instance. ↩︎
  5. The ELF loader is another program that helps your program to start up. In particular, it tells your program where to find the dynamically linked libraries. ↩︎

Koch’s Dissertation on Learning Morse Code

This post is a summary of the dissertation that Dr.-Ing. Ludwig Koch submitted in 1935 to become doctor engineer. This is what is referred to by “Koch method” in the context of learning Morse code. I will publish my commentary in another post.

You can find the scan, the transcription, and the full translation in the dedicated repository. The scan comes from the Internet Archive.

Edit: For another full translation, look here. For other summaries of the dissertation, look here, or here. I did my own summary independently of these, and before reading them.

Koch uses char/min as the unit for speed, where a character is 10.2 dots. In the modern convention, we use the length of “PARIS”, that is, 50 dots, including the word separation. So, divide the char/min value by 5 to convert into words per minute (wpm).
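In other words, the conversion is a simple division. A tiny helper (the function name is mine) makes the speeds quoted below easier to follow:

```python
def char_per_min_to_wpm(cpm: float) -> float:
    """Convert Koch's char/min speed unit to modern words per minute.

    One modern word ("PARIS") is 50 dots and Koch's average character
    is 10.2 dots, so a word is roughly 5 of Koch's characters.
    """
    return cpm / 5

assert char_per_min_to_wpm(100) == 20  # the professional requirement
assert char_per_min_to_wpm(60) == 12   # Koch's training speed
```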

Work-Psychological Study of The Process of Receiving Morse Code

Or, a new training procedure for radio operators

A. Introduction

Work psychology aims at improving the effectiveness of workers by studying how the human mind adapts to work procedures. The profession of radio operator is chosen since their main activity is that of receiving Morse signals, which can be easily studied and quantified.

B. General Considerations

Professional radio operators are required to perfectly understand Morse code at 100 chars/min. Even with automation, man remains indispensable in certain cases. Morse code encodes characters as a series of tones: short elements, or “dots” (·), and long elements, or “dashes” (–). In standard Morse code, the proportions are:

  • a: length of a dot
  • b: length of a dash, equal to 3 a
  • c: separation between two elements, equal to a
  • d: separation between two characters, equal to 3 a
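These proportions can be checked against figures quoted elsewhere in the post: E (·) lasts 1 dot, Ch (––––) lasts 15, and the modern reference word “PARIS” (with its 7-dot word separation) lasts 50. A small sketch, with helper names of my own:

```python
# Morse timings in dots: dot = 1, dash = 3, gap between elements = 1,
# gap between characters = 3, gap between words = 7.
MORSE = {"E": ".", "P": ".--.", "A": ".-", "R": ".-.",
         "I": "..", "S": "...", "CH": "----"}

def char_dots(symbol: str) -> int:
    """Duration of one character in dots, element gaps included."""
    elements = [1 if e == "." else 3 for e in MORSE[symbol]]
    return sum(elements) + (len(elements) - 1)  # 1-dot gap between elements

assert char_dots("E") == 1    # a single dit
assert char_dots("CH") == 15  # four dahs: 3+1+3+1+3+1+3

# "PARIS", with 3-dot gaps between characters and a 7-dot word gap:
word = ["P", "A", "R", "I", "S"]
assert sum(char_dots(c) for c in word) + 3 * (len(word) - 1) + 7 == 50
```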

In the human mind, a concept is not the sum of its parts, but an indivisible whole, or “gestalt” in German. The reader recognizes whole words instead of deciphering each letter. The same applies to Morse code.

C. Rhythm when Sending

We first study the effect of transmission speed on the psychological process. For this, experienced operators send the sequence “B C V Q F L H Y Z X” and the exact timings are recorded. Characters have different durations — E (·) takes 1 dot, but Ch (––––) 15 — so the speed is an average.

Four panels: Subject 1, Subject 2, Subject 3, Subject 4.

Average values of a, b, c, d for the 4 operators, at different speeds. Abscissa is the speed in chars/min and ordinate a, b, c and d, in a unit proportional to the duration.

At low speed, a and c are close (except in subject 4), as they should be. However, b is more than 3 a. More importantly, d is much longer than b, even though it should be equal to it. The longer pauses between the characters actually correspond to the fact that, between the pauses, characters are sent faster than the standard proportions would allow.

Four panels: Subject 1, Subject 2, Subject 3, Subject 4.

Lengths of a, b, c, d stacked. From the abscissa upwards, this would read as the letter “a”, including a letter pause at the end. The recorded timings are marked with “s”, and the standard timings as “o”.

Subject 4, who sent at standard proportions at low speeds, belongs to the optical learning type, and learned Morse from the visual symbols first. She can send reliably at a speed of 110-120 chars/min but cannot receive at more than 60-70 chars/min.

Thus, the operators adapt the proportions to keep the character “whole”.

Success rate of experienced operators receiving low-speed standard Morse code. The abscissa is the speed in chars/min and the ordinate is the success rate in percent.

To verify this, an automatic device was set to send 30 characters at the standard proportions. Experienced operators had trouble receiving the characters at low speeds. So the “whole” effect appears at a speed of 50 chars/min.

Characters themselves are not sent regularly. The table below shows the difference between standard proportions, and the effective rhythm at slow speed.

Character  Standard  Slow-speed rhythm
L          ·–··      ·  –··
F          ··–·      ··–  ·
C          –·–·      –·  –·
Q          ––·–      ––  ·–
X          –··–      –  ··–
Y          –·––      –  ·––
K          –·–       –  ·–
D          –··       –  ··

Similarly, the operator breaks the character down in subunits. In standard Morse, C is –·–·. But it is often broken into –· –· (dah dit … dah dit) or, more rarely, into –·– · (dah dit dah … dit). Other characters, such as S (···) cannot be broken down this way.

In summary, standard Morse code is only usable at a speed of 50 char/min or above.

D. Learning to Receive

I. Classical Training Procedures

a) The Analytical Method

First, the student memorizes the visual representation of the Morse code, sometimes with mnemonics. Then, they start training to receive auditory Morse code at a low speed, and gradually increase the cadence. The disadvantages of this method are:

  1. Having to learn the visual representation first
  2. Waiting for a character to finish
  3. No holistic effect at the beginning
  4. Having to switch from non-holistic to holistic sound patterns
  5. Having to switch from counting the dots and dashes to recognizing the sound pattern
  6. Adapting as the rhythm changes
b) The Sound Image Method

Learning curve for the sound image method. The abscissa is the number of weeks of practice; the ordinate is the speed attained by the student.

Students learn the visual representation. When practicing reception, a longer pause is used between the individual characters, which are sent at a high speed from the beginning. Students use the pause between characters to think about what they just heard. As the speed increases, this becomes impossible, and they have to switch to immediate recognition. This shows as a plateau in the learning curve around a speed of 50 char/min. The disadvantages of this method are:

  1. Having to learn the visual representation first
  2. Thinking about the elements of the sound pattern during the long breaks.
  3. Having to switch from counting the dots and dashes to recognizing the sound pattern
  4. Adapting as the rhythm changes

II. A New Learning Procedure

To help the student directly recognize the sound pattern, the training should not teach the visual representation and should use a high speed from the start. Since the plateau is around 50 char/min, it should start with at least 60 char/min. Higher speeds make it hard to learn new characters, so just use 60 char/min.

Because of this higher starting speed, the training should start with just two characters. Other characters should be added one by one gradually as the previous ones are mastered. The speed is not increased until all characters are mastered.

The training starts with two very different characters. At first, the student is not told what the two characters are and just puts marks on the answer sheet.

This trains the student to recognize the sound pattern as a whole. Afterward, the characters corresponding to the sounds are named.

When writing the characters, a common mistake is to pause when not grasping a character immediately, to try to think about it. However, this causes a cascading effect where the next characters are not grasped immediately either, or even entirely missed. To avoid this, the student should just write a dot when they do not grasp a character immediately.

The student should master these first two characters after about 10 minutes. Then, a letter is added, again without revealing which it is at first. Once the student accurately puts marks when this new character is sent, the character is revealed. This process is then repeated for each character. Once all characters are mastered, the speed can be increased.

Some characters are harder to learn, in general X, Y, P, and Q. They should be added in the last third of the training. Characters were added in the order L, F, C, K, R, P, X, Y, Q, D, B, G, A, N, Z, U, V, W, J, H, S, I, E, O, M, T:

Some Morse code characters are hard to distinguish, such as S (···) and H (····), U (··–) and V (···–), D (–··) and B (–···). At a higher speed, distinguishing characters composed of dashes is easier: O (–––) and Ch (––––), G (––·) and 6 (–····), W (·––) and J (·–––). The characters V (···–) and B (–···) are too different to be confused. Starting the training with similar characters is not efficient: E (·), I (··), S (···), H (····).
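The confusable pairs above share a simple structure: one pattern is the other with a single extra element inserted. A small sketch can make this concrete; the patterns are standard Morse, but the single-insertion criterion is my own illustration, not an analysis from the paper:

```python
# Standard Morse patterns for the characters discussed in the text
# ("." = dot, "-" = dash).
MORSE = {
    "E": ".", "I": "..", "S": "...", "H": "....",
    "U": "..-", "V": "...-", "D": "-..", "B": "-...",
    "O": "---", "CH": "----", "G": "--.", "W": ".--", "J": ".---",
}

def one_insertion_apart(a: str, b: str) -> bool:
    """True if b can be obtained from a by inserting a single element."""
    if len(b) - len(a) != 1:
        return False
    # Try deleting each element of b and compare the result with a.
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def confusable_pairs(patterns):
    """All pairs of characters whose patterns differ by one inserted element."""
    names = sorted(patterns)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if one_insertion_apart(patterns[x], patterns[y])
        or one_insertion_apart(patterns[y], patterns[x])
    ]
```

By this criterion S/H, U/V, D/B, and O/Ch all come out as confusable, while V and B (same length, very different rhythm) do not, matching the observation above.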

Characters should be added gradually, one by one. An experiment was conducted where the characters were added several at a time: T, M, O, Ch, then E, I, S, H, then D, B, G, then U, V, W. Each addition set back the student's progress.

Training sessions should be spread out in time instead of packed together, and should not exceed half an hour.

The experiment was conducted with students, not with soldiers in training, so the schedule was not regular. In the first half-hour, trainees could learn up to 5 characters. Further characters are added more slowly. Before a new character is added, the error rate should be below 10 %. The duration of the training was limited to 24 to 28 half-hour sessions.
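The rule for introducing characters can be sketched as a small function. Only the 10 % error threshold and the one-at-a-time progression come from the text; the function name and bookkeeping are my own:

```python
# Sketch (assumed logic) of the character-introduction rule described above:
# a new character is added only once the error rate on the current set drops
# below 10 %, and nothing more is added once the whole order is mastered.

ERROR_THRESHOLD = 0.10  # 10 % error rate, as stated in the text

def next_character(order, mastered_count, last_error_rate):
    """Return the next character to introduce, or None if not ready or done."""
    if mastered_count >= len(order):
        return None  # all characters mastered; the speed can now be increased
    if last_error_rate >= ERROR_THRESHOLD:
        return None  # keep practicing the current set of characters
    return order[mastered_count]

# Example with the introduction order used in one of the experiments:
ORDER = ["L", "F", "C", "K", "R", "P", "X", "Y", "Q", "D", "B", "G", "A",
         "N", "Z", "U", "V", "W", "J", "H", "S", "I", "E", "O", "M", "T"]
```

Note that the speed itself never changes inside this loop: it stays at 60 char/min until the alphabet is complete.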

In addition, early practice can be helped by using a different pitch for the dots and for the dashes. The pitches are gradually brought together during the training. Merging the two pitches after the 15th or after the 20th letter makes no difference.

Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

The characters were added in the order H, F, A, B, G, C, D, E, CH, L, K, I, O, T, M, P, Q, R, S, N, U, W, V, Y, X, Z (no J):

Plateaus correspond to the more difficult characters: P, X, V, Y, Q, Z. Introducing these characters in the first third of the training, but not at the very beginning, helps ensure they are mastered by the end of the training without discouraging a student who has just started.

Learning curve, with the hard characters added between the 10th and 13th half-hour of practice. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

13 to 14 year-old students have the same progression rate as adults.

Mastery of the alphabet at 60 char/min is reached in 14 hours with this method, against 7 weeks with the classical ones.

E. Aptitude Test

Those who fail to master 4 characters at a speed of 60 char/min within the first 2 half-hours never master all characters. The beginning of the training thus doubles as an aptitude test, before the candidate spends dozens of hours in training. Students should be grouped by aptitude to avoid slowing down the most apt. Some students may be impeded by slow writing, but this can be remedied with practice. Those accustomed to concentrating perform best, in particular those of the acoustic imagination type. Among about 100 subjects, none of the diagnoses made on the basis of the first two half-hours led to a wrong judgment. Biegel uses a similar aptitude test, but she starts at 40 char/min.


Where to Start with Rust

As a Rust evangelist, I have been asked a few times for resources to start learning Rust. I have compiled my thoughts in this article. The resources are sorted in a rough order: you should take a look at the first ones before moving on to the later ones. Do not take it as gospel, and do peek at the rest of the list before completing even the official tutorial.

The Rust Programming Language

This is the official tutorial for Rust. It is also known as “the Rust book”. I do not recommend you read it entirely, but you should read at least the first 12 chapters. Of course, the rest of the chapters are worth a read, but the more advanced topics are better covered by dedicated resources.

Rust for functional programmers

If you are used to programming in Haskell or OCaml, this tutorial might help you get up to speed quicker.

Tour of Rust

For a more hands-on approach, you can use this tutorial, which drops you in a Rust playground with minimal exercises that each train you on a new concept.

Rust by Example

If the Tour of Rust is not enough, you can also try this one to complement your knowledge.

Learning Rust With Entirely Too Many Linked Lists

This tutorial is about getting over one of the main pain points of Rust: cyclic references. It goes over various solutions, how they work, and their pros and cons. A must-read if you want to become more comfortable with Rust.

Learn Rust the Dangerous Way

You can see this tutorial as an extension of the previous one. When you are used to developing in C, switching to idiomatic Rust can seem like a daunting task. This website will take you from a C program (the n-body problem from the Benchmarks Game) to a fully idiomatic and optimized Rust one.


At this point, you should be getting comfortable with the syntax and main idiosyncrasies of Rust. This collection of exercises will let you hone your skills.

Zero To Production In Rust (book)

To go further than writing small snippets of Rust, you will want to start taking into account higher-level considerations. This book narrates every step in setting up a simple mailing-list server in Rust.

Macros in Rust

Writing macros is a fairly advanced topic in Rust. This tutorial will get you more familiar with writing declarative macros (or “macros by example”).

The Rustonomicon

Most of the time, you should not need to touch unsafe. But, when you do, you enter a parallel world where every step you take could lead you to the dreaded undefined behavior. Using unsafe safely requires upholding invariants that the compiler normally checks for you. This tutorial will help you get started with this.

Rust for Rustaceans (book)

Like all non-trivial software projects, Rust has its quirks and unintuitive mechanisms. This book is a very dense explanation of some of this crust. You might also enjoy the author’s YouTube channel.

This Week in Rust

This is the one-stop-shop to keep learning more about Rust, its ecosystem and its community. You do not need to follow all the links, or even read it from start to finish, but skimming the latest edition every week will let you stay on top of things.

Rust Koans

I conclude this list of heavy technical content with some light reading, likely inspired by the Rootless Root.