The Secret to a Green CI: Efficient Pre-commit Hooks with checkout-index


Linting in the CI

Most people are familiar with the concept of “CI” as in “CI/CD”.

For those who are not, it usually refers to a set of automatic tests that are run when a developer sends their code for review. These tests typically include “linters”, which check that the code follows certain stylistic rules, and detect common mistakes.

It can be pretty annoying to wait for the CI to complete, only to discover that it failed because I forgot a “;” in TypeScript, added an extra “;” in Python, did not format my Rust code properly, etc.

Of course, I can always run the proper linting command before sending my code to make sure everything is okay. And what were the options to pass to cppcheck again? Oh right, I am supposed to use clang-tidy in this project. What’s the proper syntax to call mypy? Oh, and I do not want to run eslint on all the source files, it takes too long, which files did I change since the last time?

I know, I’ll just write a script. I will have to find out which files were changed since the last time it was called, to avoid verifying everything every time. I could use the git history for that, but what was the last commit when the script was run? I would have to store that somewhere…

Things would be simpler if I just ran it on every commit. But then I am definitely going to forget. If only that script could run automatically on every commit. Oh, and I should ignore changes that I have not committed, since I am not going to send these, and they could create false positives and false negatives. I could use git stash, but I should be extra cautious about not messing things up.

The pre-commit hook

Fortunately, this is a common predicament, and a solution for this exists: pre-commit hooks.

In git, a hook is a script (or executable in general) that is run automatically at certain times. For instance, when running git push, or when making a commit.

When creating a repository, git automatically populates the hooks with some examples. Take a look at .git/hooks/ in your repository.

The hook when making a commit is .git/hooks/pre-commit. So if you make a script and put it there, it will be run when you do git commit. Try it by writing the following code in .git/hooks/pre-commit. Do not forget to make it executable with chmod +x .git/hooks/pre-commit.

#!/usr/bin/env bash
echo "Hello from the pre-commit hook!"
exit 1

If you git add some files and run git commit, you should see:

$ git commit
Hello from the pre-commit hook!

git did not open up your editor to let you write a commit message. In fact, no commit was actually made. The reason is that we made the script exit with the error status “1”. For success, that would be “0”. The status of the script is the same as that of the last executed command. Here, the last executed command is exit 1, which unsurprisingly exits with status “1”.

So, for your TypeScript project, you can just use something like the script below, and no more errors!

#!/usr/bin/env bash
npx eslint src/
npx tsc

Well, not quite.

Making it Work Right

First, we are writing a shell script. And, although shell scripts can be convenient, they come with their share of surprises. In the previous pre-commit script, a linting error found by eslint won’t stop the commit. We can see why in this example:

$ cat false-true
#!/usr/bin/env bash
false
true
$ ./false-true
$ echo $?
0

The false command exits with “1”. But it does not stop the execution of the script! So true is then run, which exits with “0”. This behavior is usually unwanted. Thankfully, we can disable it by adding set -e:

$ cat false-true
#!/usr/bin/env bash
set -e
false
true
$ ./false-true
$ echo $?
1

That’s better!

We can also set the -E option, to ensure that the ERR trap is inherited in functions. What that means is that, if you do some clean-up in your bash script (usually done with traps), and happen to write a function where a failure occurs, the clean-up is done properly. Setting the -u option will give you a proper error when you make a typo in a variable name, instead of just behaving like the variable contains the empty string.
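Here is a quick sketch of what -E changes (the function f and the failing false inside it are made up for the demonstration): an ERR trap set at the top level only sees failures that happen inside functions when errtrace is on.

```shell
#!/usr/bin/env bash
# Run each case in a child bash so the demo does not affect this script.

# With -E, the ERR trap fires for the failure inside the function.
with_E=$(bash -c '
    set -E
    trap "echo trap fired" ERR
    f() { false; return 0; }   # the failure happens inside f
    f
')

# Without -E, the failure inside the function goes unnoticed by the trap.
without_E=$(bash -c '
    trap "echo trap fired" ERR
    f() { false; return 0; }
    f
')

echo "with -E: '${with_E}'"       # → with -E: 'trap fired'
echo "without -E: '${without_E}'" # → without -E: ''
```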

Note: We can also set the “-o pipefail” option, but it comes with its own surprises. Basically, without “-o pipefail”, if you have “command1 | command2”, the script will only fail if command2 fails. With “-o pipefail”, the script will also fail if command1 fails. This can be what you want. But sometimes, command1 failing just means that there is nothing to check. In short, use this option on a case-by-case basis.
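A minimal illustration, run in child shells so the two statuses can be compared side by side:

```shell
#!/usr/bin/env bash
# Without pipefail, a pipeline's status is that of its last command only.
no_pf=$(bash -c 'false | true; echo $?')
echo "without pipefail: $no_pf"   # → without pipefail: 0

# With pipefail, a failure anywhere in the pipeline fails the whole pipeline.
with_pf=$(bash -c 'set -o pipefail; false | true; echo $?')
echo "with pipefail: $with_pf"    # → with pipefail: 1
```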

Alright, we are now linting the files in the current directory before making a commit. That’s good. But that’s not quite what we want. What if we only commit some of the changes?

To run the linting command on the exact state corresponding to the commit we are about to do, we could just stash the changes, but that’s dangerous. What if the script fails? What if the script succeeds? What if there is nothing to stash? What if we make a mistake somewhere and randomly corrupt the working directory?

Okay, there is a simple rule to avoid all that: don’t touch the working directory. Don’t stash, don’t move files, don’t do anything in the current directory.

Instead, just copy the state corresponding to the commit in a temporary directory far away from the working directory.

But my repository has so many files! It’s going to take so long to copy the full thing!

We’re not cloning the repository. Just doing a checkout-index. And git is fast.

The proper way to create our temporary directory is with mktemp, which will avoid any collision. Also, we are not going to leave random files lying around, so we ensure that the temporary directory is removed once the script finishes (succeeds, fails, crashes, is killed, etc.). In Bash, the proper way to do this is with traps. So we do that immediately. Finally, we copy the state of the current commit to that temporary directory with checkout-index:

TEMPDIR=$(mktemp -d)
trap 'rm -rf "$TEMPDIR"' EXIT
git checkout-index --prefix="$TEMPDIR/" -af

Once this is done, we can just cd to that temporary directory and run our linting commands. And we’re done.

Well, almost. You did not expect it to be that easy, did you?

Making it Work Fast

See, modern build tools such as npm and cargo do this clever thing where they put dependencies in a local subdirectory (node_modules/ for npm, target/ for cargo).

It’s definitely an improvement over Python’s virtual environments. However, it does mean that, when we run our linting command from the temporary directory, we start from scratch1. Every time we try to make a commit.

Thankfully, we can just tell npm and cargo to use the usual subdirectory2. But where is this subdirectory again? We could use pwd, but maybe the user is in another subdirectory of the project. Maybe we could try visiting the parent directories until we find node_modules/ or target/?

There is a much simpler way. The thing we are looking for is usually easy to locate from the repository’s root. Often, it is just directly at the repository’s root. So, if the repository is at ~/src/my-project, we should look in ~/src/my-project/node_modules/ and ~/src/my-project/target/. Conveniently, we can just ask git for the repository’s root:

GIT_ROOT=$(git rev-parse --show-toplevel)

Then, we’ll just set the relevant environment variables:

export NODE_PATH="${GIT_ROOT}/node_modules"
export CARGO_TARGET_DIR="${GIT_ROOT}/target"

Now things are starting to look pretty good!

But we can do better.

If you only change 1 file out of 10,000, you might want to just check that one file and skip the rest. We can do just that.

First, let’s get the list of the files we should check:

git diff --cached --name-only --diff-filter=AM

Here, the “AM” value passed to diff-filter is two flags, “A” for “added files” and “M” for “modified files”. We can filter further with grep to keep only the files ending in “.py” for example.
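For example, with a made-up file list standing in for the output of the git diff command above:

```shell
#!/usr/bin/env bash
# stand-in for: git diff --cached --name-only --diff-filter=AM
staged='src/app.py
README.md
tests/test_app.py'

# keep only the Python files
py_files=$(printf '%s\n' "$staged" | grep '\.py$')
echo "$py_files"
```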

Then, we feed this list as arguments to our linting executable. This is done easily with xargs, which takes each line from its input and passes it as an argument to the command. For instance:

(echo a; echo b; echo c) | xargs ls
# same as
ls a b c

Since the list might be empty, remember to use --no-run-if-empty. Without it, you would run the linting command without arguments, which might fail, or worse3, lint all the files. So we could have:

git diff --cached --name-only --diff-filter=AM |
    grep '\.[hc]$' |
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast c; }  # lint from temp directory
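To double-check that behavior in isolation, here is a hypothetical sketch with echo standing in for the linter (this needs GNU xargs; BSD xargs already skips empty input by default):

```shell
#!/usr/bin/env bash
# With no input at all, xargs would normally still run the command once;
# --no-run-if-empty makes it skip the run entirely.
without_input=$(printf '' | xargs --no-run-if-empty echo linting)

# With actual input, each line becomes an argument.
with_input=$(printf 'a\nb\nc\n' | xargs --no-run-if-empty echo linting)

echo "empty: '$without_input'"   # → empty: ''
echo "files: '$with_input'"      # → files: 'linting a b c'
```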

Note: The paths listed by git diff are relative to the root of the repository, so you’ll want to run the linting command from that root, even if all the files are in a subdirectory. This could also mean that you may have to explicitly point the linting command to the configuration file. For instance, if your TypeScript files are in a front/ directory along with the Biome configuration file, you will need to do:

git diff --cached --name-only --diff-filter=AM | grep -E '\.tsx?$' |
    (cd $TEMPDIR; xargs --no-run-if-empty npx @biomejs/biome check --config-path="front/")

The Script

Finally, with all the pieces coming together, we have a pre-commit hook which is both reliable and fast. Don’t forget to add the pre-commit script itself to your repository, and enable it with ln -s ../../pre-commit .git/hooks/ (you cannot directly track files in .git/).

You might not need TypeScript and Rust and Python and C and C++. But you can just pick the relevant parts.

#!/usr/bin/env bash
# Usage: copy this file to .git/hooks/

# Exit at first error
set -Eeu

# To handle partially committed files, we must copy the staged changes to a
# separate location, and clean it up when the script exits
TEMPDIR=$(mktemp -d)
trap 'rm -rf "$TEMPDIR"' EXIT
git checkout-index --prefix="$TEMPDIR/" -af

# keep using the same node_modules/ and target/ directories, not new ones in
# the temporary directory; this avoids re-parsing everything from scratch
# every time we run the script
GIT_ROOT=$(git rev-parse --show-toplevel)

# TypeScript
export NODE_PATH="${GIT_ROOT}/node_modules"
git diff --cached --name-only --diff-filter=AM | grep -E '\.tsx?$' |
    (cd $TEMPDIR; xargs --no-run-if-empty npx @biomejs/biome check)

# Rust
export CARGO_TARGET_DIR="${GIT_ROOT}/target"
(cd $TEMPDIR; cargo fmt --check)
(cd $TEMPDIR; cargo clippy --all -- --deny warnings)

# Python
(cd $TEMPDIR/back; ruff check .)

# C
git diff --cached --name-only --diff-filter=AM |  # list modified files
    grep '\.[hc]$' |  # filter C files
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast c; }  # lint from temp directory

# C++
git diff --cached --name-only --diff-filter=AM |  # list modified files
    grep '\.[hc]pp$' |  # filter C++ files
    { cd $TEMPDIR; xargs --no-run-if-empty ./lint-fast cxx; }  # lint from temp directory

For C and C++, I use the following lint-fast script:

#!/usr/bin/env bash
# Usage: lint-fast (c|cxx) FILE...
# Exit at first error
set -Eeuo pipefail

MODE="$1"
shift

# trailing space check
if grep -Hn '\s$' "$@"; then
    echo "Error: trailing space"
    exit 1
fi

# check for NULL instead of nullptr in C++ files
if test "${MODE}" = "cxx"; then
    grep -Hn --color NULL "$@" && exit 1
fi

GCCFLAGS=(
    -fsyntax-only  # do not actually compile
    -Werror        # fail script when issue is found
    -Wall -Wextra  # usual paranoid options
)

# C++ specific options
if test "${MODE}" = "cxx"; then
    GCCFLAGS+=(-std=c++17)  # note: pick your project's standard here
fi

gcc "${GCCFLAGS[@]}" "$@"

CPPCHECK_FLAGS=(
    --suppress=unusedFunction        # no point when we only check a subset of the files
    --suppress=missingIncludeSystem  # disable lookup for system includes
    --suppress=useStlAlgorithm       # sometimes does stupid suggestions
    --suppress=missingInclude        # otherwise, it complains when not enough files are included
    --suppress=unmatchedSuppression  # otherwise, it will complain it has not found a suppressed warning
    --enable=all                     # enable everything else
    --error-exitcode=1               # fail script when issue is found
    --quiet                          # only print something when there is an issue
)

cppcheck "${CPPCHECK_FLAGS[@]}" "$@"

# C++ specific options
if test "${MODE}" = "cxx"; then
    CLANG_FLAGS=(
        --quiet                   # only print something when there is an issue
        --warnings-as-errors='*'  # fail script when issue is found
    )
    clang-tidy "${CLANG_FLAGS[@]}" "$@" --
fi
  1. Of course, if we’re linting with a call to npx, it will probably just fail since the command is not installed in the local project. ↩︎
  2. Yes, I know, I just said we shouldn’t touch the working directory. But we’re not, npm and cargo are! … Okay, let’s just say it’s a special case and we are being very careful here. ↩︎
  3. This is worse because it could stay under the radar for a long time and waste time. An error can usually be diagnosed quickly and fixed once and for all. ↩︎

Learning Morse with Koch

If you take an interest in learning Morse code, you will quickly hear about the “Koch method” and the “Farnsworth method”. You will also read many opinions stated with absolute certainty, often contradicting each other. Some advice is outright non-actionable.

I have recently read Koch’s dissertation on learning Morse code. Yeah, I have unusual ways to spend my time. Anyway, the reason for this is that I wanted to understand what Koch actually claimed, and what he actually proved.

I start with a rant about reading a badly scanned German paper from almost 90 years ago, and a small digression about who Koch is, but you can directly skip to the main part of the article. Or directly to the conclusion.

The Rant

There is an English translation of the dissertation, but it is a machine-generated one. Although significant effort has been made to make it look nice, the content is often too mangled to be usable.

The only version I could find of Koch’s dissertation is an old scan. Although it was uploaded in 2020, the file was created in 2008. Unfortunately, this is a time when JBIG2 was a thing1. Sure enough, when extracting the images from the PDF, I see various files ending in “.jb2e” or “.jb2g”.

Internet Archive serves a text version of the document. Since the document was typewritten and not handwritten, the OCR actually did a pretty good job. However, since it was clearly OCR-ed after the conversion to JBIG2 took place, some characters are mixed up, in particular:

  • h and b
  • n and u
  • c and e

Of course, most of the time, the resulting word did not make sense, and I just needed to fix the typo. In German. By the way, do you know about agglutination? Oh, and since it’s 1935, it uses various words that are obsolete today. Also, it predates the orthography reform of 1996, so “ß” is still widely used. The OCR did not like that.

Thankfully, it’s 2023, and machine translation has removed all the borders, and made using foreign languages transparent. Or not.

Maybe it’s the German, but neither DeepL nor Google Translate could make natural sentences out of the transcript. At least, they were mostly grammatically correct. But, between the fact that they still followed German word order and expressions, and the fact that they poorly cut sentences (sometimes duplicating them), it was far from a piece of cake.

Of course, after the transcription and the translation, we are not done. The argument itself is not a shining piece of literature. It was written by an engineer for a limited audience. On a deadline. And a typewriter.

Koch often repeats himself (in separate paragraphs, so it’s definitely not the machine translation), uses interminable sentences (the translation did not change the length), sometimes keeps the actual point implicit, and sometimes omits information altogether.

Extracting the gist of this paper was like peeling onions: each layer you remove only makes you cry harder.

Dr.-Ing. Ludwig Koch

Finding information about Ludwig Koch is not easy. It does not help that there were many other people named Ludwig Koch, including several doctors, and even a Prof. Dr. Ludwig Koch, born in 1850. And another teaching today. One thing to keep in mind is that our Koch was not a Doctor of Philosophy (PhD). His dissertation granted him the title of Doctor of Engineering, which would make him Dr.-Ing. Ludwig Koch.

What we do have to work with is the curriculum vitae attached to the end of the dissertation, where Koch states that he was born in 1901 in Kassel.

There was a “Dr-Ing. Koch” in the Nazi’s club2 of the technical university of Dresden in 1944, but it looks like it was Paul-August Koch, who studied textile and paper technology. The only other reference I have been able to find online of Morse-Koch is a 1971 document referring to “Dr.-Ing. Koch Ludwig”, which would match his title. He would have been 69 at the time.

Koch defended his thesis at the technical university Carolo-Wilhelmina. The main reporter was Prof. Dr. phil. Bernhard Herwig, who studied psychology3 and was mostly interested in studying working conditions for the good of the fatherland4. The co-reporter was Prof. Dr.-Ing. Leo Pungs, who was interested in more technical aspects of radio engineering as the dean of the Department of Electrical Engineering5. For the record, it looks like Prof. Dr. Phil. Bernhard Herwig was in the Nazi’s club, but Prof. Dr.-Ing. Leo Pungs was not6.

I tried to clarify whether Koch was actually interested in Morse code in and by itself, or was only using it as an arbitrary example to apply Arbeitspsychologie. He did have a degree in electrical engineering, but I found no evidence that he continued studying the learning of Morse code after he defended his thesis. In fact, there is no other trace of him in the academic record.

We think it was the same Koch famous for nature recordings


No, he’s not. Sound-recordist Koch was born in 1881.

Koch was able to teach a class of students to copy code at 12 words per minute in under 14 hours. However, the method wasn’t often used until recently.


I do not know why they think that “the method wasn’t often used until recently”. Koch’s method was known in the United States at least since The Learning of Radiotelegraphic Code in 1943.

Koch Dissertation

The bold parts are a summary of the summary I posted before. The rest are my comments.

For other summaries of the dissertation, look here, or here. I did my own summary independently of these, and before reading them.

We study how holism can help improve work conditions.

First, we need to tackle the fact that the dissertation is heavily oriented in favor of Gestalt psychology, or holism, which was very in at the time. Koch worked under the supervision of Prof. Dr. phil. Bernhard Herwig, who was mostly concerned with improving work productivity for the fatherland (see previous section), but it does not look like he was particularly focused on holism. This point would require more investigation, but my understanding is that Koch was somewhat biased towards holism due to the historical context, but not overtly.

When experienced radio operators send Morse code at a speed lower than 10 wpm, the resulting timings are distorted

However, this was done with only 4 subjects. They do show the same overall pattern, but there are significant differences between them. So, it is hard to conclude with anything as precise as a “10 wpm threshold”. The associated graphs would tend to show that the distortion is still very visible even at 16 wpm (80 chars/min). The main point stands, but the exact speed below which sending counts as “slow” is very debatable.

Experienced radio operators have difficulty receiving Morse code at lower speeds

This is associated with the graph below. Again, the main point is pretty clear. The accuracy of reception falls off a cliff as the speed decreases. However, the speed where this happens is again debatable. 90 % is already a pretty poor accuracy for an experienced operator. That’s one error every ten characters!

Success rate of experienced operators receiving low-speed standard Morse code. The abscissa is the speed in chars/min and the ordinate is success rate in percent.

If we look more closely at the graph, we notice that the curves bend downwards somewhere above 10 wpm. And if we pay attention, we notice that there are actually very few data points from which the curves are drawn. There is not enough information to know when the accuracy falls below 95 %, for example. There might be multiple data points in the upper right corner, but it’s hard to tell without the raw data, which is not available.

In short, slow speed is terrible, but there is no clear evidence that 10 wpm and above should be considered good enough.
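A side note on units, since the dissertation switches between chars/min and wpm: by the usual convention, one word is 5 characters, so the two scales convert as sketched here:

```shell
#!/usr/bin/env bash
# One "word" is 5 characters by convention, so wpm = chars/min ÷ 5.
for chars_per_min in 60 80 100; do
    echo "$chars_per_min chars/min = $((chars_per_min / 5)) wpm"
done
# → 60 chars/min = 12 wpm
# → 80 chars/min = 16 wpm
# → 100 chars/min = 20 wpm
```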

Koch compares three procedures for learning Morse code:

  • Naive: first, learn all the characters at a slow speed, then increase the speed
  • Farnsworth7: first learn all the characters at a high speed, but with ample spacing, then decrease the spacing, then increase the speed
  • Koch: first learn the rhythm at 12 wpm, then learn 2 characters, then add other characters one at a time and finally increase the speed

Because this matches my intuition, I would like to say that the previous experiments show that the naive approach is clearly mistaken. But, looking at it objectively, that is not what the experiments show. What they do show is that using low-speed Morse is not strictly easier than using high-speed Morse, to the extent that experienced operators do not automatically master low-speed Morse.

The main point of Koch can be summed up in two statements:

  1. At lower speeds, we need to think, while higher speeds rely on automatic recognition
  2. Starting to practice Morse with the thinking-mode is a waste of time

Regarding statement 1, it is hard to find explicit evidence, but it is at least sound, given that operators simply do not have the time to think about each individual dit and dah at high speed. However, this does not preclude the possibility of progressing to automatic recognition at lower speeds. Also, at higher speeds, it is definitely possible to think about what you hear, just on a more selective basis.

For statement 2, I want to highlight the fact that the thinking- and automatic- modes directly map to the cognitive and autonomous stages in Fitts and Posner’s model. Thus, there is nothing inherently unnatural in progressing from thinking-mode to automatic-mode. The harsh plateau when switching from the former to the latter might just be a required passage in the learning process8.

In this part, the arguments of Koch for statement 2 are not supported (nor invalidated) by evidence. The evidence comes later in the paper from comparing Koch method with the other procedures.

Harder characters (X, Y, P, Q) should not be learned first. Koch uses the order L, F, C, K, R, P, X, Y, Q, D, B, G, A, N, Z, U, V, W, J, H, S, I, E, O, M, T.

That makes sense, at least to avoid frustration. As far as I know, Koch did not systematically investigate the learning order, so that’s a minor point.

Same for similar characters (E/S/H/5, U/V, D/B, G/6, W/J)

Same as above.

Training sessions should be spread out in time instead of packed together, and should not exceed half an hour.

Spoiler: this is the truest claim of the paper. But it is not surprising. The spacing effect has been known since even before Koch was born. In fact, Koch cites a paper from 1897 on this exact topic.

The experiment was conducted with students, tradesmen and businesswomen, with two or three sessions a week, so it was at a disadvantage relative to military training

As one knows, psychology is the study of psychology students. To be fair, Koch states that it was not just students. But, since we do not have the raw data, we cannot tell whether it was 10 % students, or 90 % students.

In any case, this argument is kind of weird. Koch argues that his study was biased in a way that should have made good results harder to achieve than in military training. Morse code training is offered to anyone who joins the military, while students are a select class of individuals, especially at the time.

In addition, he just argued that short sessions spread out in time were more effective than long, packed sessions (like the military does).

Two-tone method: using different tones for the dits and dahs helps at the start of the training

This is an idea that I see very rarely when I read about the Koch method online. Actually, I think the only instance is at LICW. I find it very appealing, since it sounds like it could really help with recognizing the characters more easily at first, and thus improve the feedback loop for learning.

However, the evidence for its effectiveness provided in Koch’s dissertation is very weak.

Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

The argument is that the two-tone curve is always above the one-tone curve. However, there are a number of issues:

  1. We do not know how many participants there were in each case, so we cannot assess the significance of the result
  2. The curves have lots of variation, which suggests random differences were not smoothed out by the law of averages
  3. The relation between the two curves does not show a continuous advantage, since the spread does not increase over time. In fact, they even merge for a time!
  4. Even if we ignore the fact that they merge, the spread is mostly set during the second half-hour. We expect to see the most variance at the beginning of the training, when students are not yet familiar with the process, and differ in their learning patterns. So it might explain the spread entirely.

Again, the two-tone method could actually be useful, but we cannot tell from the evidence. If anything, we do learn that the advantage is marginal at best.

In fact, LICW did try the two-tone method, but they found no advantage:

we experimented with introducing two-tone characters to new students and the feedback was overwhelmingly negative

Long Island CW club

The characters were added in the order H, F, A, B, G, C, D, E, CH, L, K, I, O, T, M, P, Q, R, S, N, U, W, V, Y, X, Z (no J)

With the new methods, students take 14 hours to master Morse code, instead of 10 weeks

This is definitely the most important claim of the paper. Military training typically uses several hours every day for months on end. Reducing this to only 14 hours, even if spread over several weeks, is phenomenal.

Well, 14 hours with 3 half-hour sessions a week is still about 10 weeks. So the whole gain would be about spreading the sessions instead of packing them. But still, let’s say that the claim is that this is only made possible thanks to the new method.

This is supported by the two following graphs9.

Learning curve for the Farnsworth method. The abscissa is the number of weeks of practice; the ordinate is the speed attained by the student.
Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

Alas, there are a number of confounding factors that are not addressed.

One. In the second graph, we can see that the training was considered complete once students had mastered the 26 letters of the alphabet. This excludes digits and punctuation, which are necessary for normal operation. But the first graph is actually not based on Koch’s experiments. It was “kindly provided by the Deutsche Verkehrsfliegerschule”, a military-training institution. Of course, we have no more information, so we do not know how many characters their students had to learn.

In comparison, it took me less than 20 hours to reach 26 mastered characters (lesson 25 at LCWO) at 20 wpm, but 35 more to master all LCWO lessons. I did follow the Koch method, in that I trained at 20 wpm from the get-go, without extra spacing. But the point is that, at 26 characters, I was still very far from being done.

Of course, the first graph could be about learning only the 26 letters, but it’s never stated so, and we have no way to check.

Two. The target speed is set to 12 wpm, which is actually pretty low. Koch himself states that the passing speed is usually at 20 wpm:

Radio communication in the ship service is under the same conditions, i.e. perfect mastery of Morse code at 100 chars/min. The examination conditions of the radio operator course are, for example:

“Keying a telegram of 100 words of 5 letters each in 5 minutes on the Morse apparatus in perfect form. Receiving a code telegram of 100 words and an open text of 125 words of 5 letters each in 6 minutes; transcribing the latter on the typewriter.”

Koch claims that going from 12 wpm to 20 wpm would be relatively easy, since all the characters have been learned in the automatic-mode. However, this was never tested, so there is no concrete evidence for it. As an illustration, it took me 55 hours to master 20 wpm (with 41 characters), but I am barely reaching 28 wpm 30 hours later.

Three. Remember how, in the previous quote, it said “perfect form”? Koch sets the bar at “less than 10% error rate”. It took me 44 hours to reach lesson 40 of LCWO at 10 % error rate, but still 11 more hours to get it down under 5 %.

And now, for the most damning issue.

Aptitude test: those who fail to master 4 characters at a speed of 60 char/min in the first 2 half-hours never mastered all characters.

This could indeed be a useful aptitude test. However, it points to a fundamental flaw with the study. What does it mean by “never”?

Does Koch mean that these students were allowed to continue the practice for months until he wrote his dissertation? And, more importantly, that he counted their learning time in the above graph?

Or does it mean that they were allowed to continue the practice until the 28th session? In that case, it should show as a value strictly lower than 26 for the number of mastered letters at the end of the experiment.

Or does it mean that they were removed from the experiment early, when they gave up, or when it became clear that they could not keep up?

In any case, this is never addressed in the paper. Yes, military training eliminates students aggressively as they fall behind. And it could even be that Koch had a much higher success rate. But we do not know. At all.


Conclusion

Koch brings interesting ideas. However, none are supported by the evidence he brings forth. Except one, which is to use spaced repetition. This one is actually well-known and supported by evidence. Koch himself recognizes this.

To stress the point: it is entirely possible that some parts of this method actually have a positive impact, but we do not have enough evidence from this paper to conclude. I have often read people say that the Koch method, or the Farnsworth method, or whatever method, helped them. But, until we have concrete evidence, we cannot conclude that one approach is better than the others.

Then, how should I learn Morse code?

Just use the Koch method on LCWO at 20 wpm. Lower it if that is too fast. Practice 10 minutes a day. Every day. And keep at it.

But you just said that the Koch method was not supported by evidence!

Yup. But it is not disproven either, and no other method is proven to work better10. Don’t get stuck in analysis paralysis trying to find the one best method. Use Koch because it’s the one for which there are the most tools, including LCWO, the most convenient one. Or just because I said so. Use 20 wpm because that’s the default in various places. Or because that’s what I did. But don’t worry about lowering it if you really cannot manage.

What does matter is actually practicing. So spaced repetition and discipline.

How long does it take to learn Morse Code?

Count 50 to 100 hours if you want to be comfortable at 20 wpm. Count a minimum of 10 to 20 hours for the bare basics if you’re aiming for 12 wpm.

This is a personal question but on average students get on the air doing their first QSO in 3-4 months.

We do not have requirements but recommend as a minimum 2 classes per week and 15–20 minutes daily practice.

FAQ of Long Island CW Club

We teach the code at 12 words per minute. This speed was determined by Koch to be optimum for initial learning of CW.  It is also optimum for making first contacts on the air.

Description of Morse code training at Long Island CW Club

Should I learn the Visual Representation?

Sure, if you find it fun. It’s a pretty easy goal, and that could be quite enough if you want to shine in escape games. If you intend to actually use it over the air, don’t wait until you have learned it, and just start practicing what you actually want to do.

Should I Learn Mnemonics?

No. Seriously, they’re all terrible, and it’s not that hard to learn the 26 letters, even if it’s just for an escape game.

  1. JBIG2 is a “clever” image compression format in that it notices when two parts of the image look similar, and only stores one of them. So, if two characters look a bit similar in a document scan, it could decide to merge them, making them identical. This obviously did not go well. ↩︎
  2. Dresden 1944–1945 program, page 31 of the document, page 33 of the document (note: “NSD” stands for “Nationalsozialisticher Deutscher Dozentenbund”) ↩︎
  3. Carolo-Wilhelmina 1929–1930 program, page 13 of the document, page 12 of the PDF, left half ↩︎
  4. Braunschweig during the National Socialism, pages 118–134 ↩︎
  5. Carolo-Wilhelmina 1929–1930 program, page 8 of the document, page 10 of the PDF, left half ↩︎
  6. School year 1944–1945 program, pages 21, 22 of the document, page 15 of the PDF, right half and page 16, left half ↩︎
  7. It was only named “Farnsworth method” much later though, as analyzed by the Long Island CW Club ↩︎
  8. I do not claim this is the case. But, so far in the paper, we have no evidence either way. ↩︎
  9. Koch also refers to another study, Methoden der Wirtschaftspsychologie (Handbuch der biologischen Arbeitsmethoden, page 333), which has a graph similar to the first one. However, the plateau is described as happening at 14 wpm, not 12 wpm. It also goes up to 22 wpm for reception, not merely 16 wpm. ↩︎
  10. Again, based on Koch’s dissertation. Other research might actually prove that one method works better. I have just not read one yet. ↩︎

Tiny Docker Containers with Rust

The Magic

Thanks to Python, I have gotten used to Docker images that take minutes (or dozens of minutes) to build and hundreds of megabytes to store and upload.

FROM python:3.14

# install everything
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    libsome-random-library0 libanother-library0 \
    libmaybe-its-for-pandas-i-dunno0 libit-goes-on0 liband-on0 \
    liband-on-and-on0 \
    && rm -rf /var/lib/apt/lists/*

# install more of everything
COPY requirements.txt .
RUN pip3 install -r requirements.txt

ENTRYPOINT ["gunicorn", "--daemon", "--workers=4", "--bind=", "app:create_app()"]

But there is another way. The Rust way. Where you build your image in a second, and it only takes 5 MB1.

FROM scratch
COPY target/release/my-binary .
ENTRYPOINT ["./my-binary"]

The Trick

This is possible thanks to the points below.

  1. Rust is compiled to binary
  2. Rust is statically2 compiled to binary
  3. You can easily use musl3 with Rust

The first point means that there is no need for a runtime to interpret the script or the bytecode.

The second point means that the binary contains the code for all the libraries4. And, more specifically, only the required code. So no need to install them externally, you remove some overhead, and the total size is reduced.

With these two, you get down to just a few runtime dependencies. Namely, glibc and the ELF loader5.

$ ldd target/release/my-binary
	linux-vdso.so.1 (0x00007fffbf7f9000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f7a759b2000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f7a758d3000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7a756f2000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f7a762a6000)

But you can go further! We do not have to use glibc. It turns out you can just statically compile musl. And with Rust, using musl is only a matter of installing musl-tools and adding --target=x86_64-unknown-linux-musl to your cargo build command.

And, since we just got rid of our last runtime dependency, we do not even need the ELF loader! So now, we get:

$ ldd target/x86_64-unknown-linux-musl/release/my-binary
	statically linked

All the user code is now in a single file and will talk to the kernel directly using int 0x80.


This is a nice trick, and can help you if you really care about small Docker images, but it does have a drawback: targeting musl typically gives you lower performance. So keep this in mind!

  1. Sure, I am simplifying things here. In practice, you would also build the binary in a Dockerfile, and then use a multi-stage build. But that does not change the point. ↩︎
  2. Static linking means including external libraries within the binary, as in opposition to dynamic libraries, where they are not included. Binaries that were dynamically linked will need to be provided with the external libraries at runtime (.so files on Linux, .dll files on Windows). ↩︎
  3. musl is an implementation of libc, the standard library for C. Virtually all binaries depend on the libc. glibc is the most common implementation of libc on Linux. ↩︎
  4. Of course, it won’t work if a library is using FFI to talk to OpenSSL for instance ↩︎
  5. The ELF loader is another program that helps your program to start up. In particular, it tells your program where to find the dynamically linked libraries. ↩︎
Morse Code

Koch’s Dissertation on Learning Morse Code

This post is a summary of the dissertation that Dr.-Ing. Ludwig Koch submitted in 1935 to become doctor engineer. This is what is referred to by “Koch method” in the context of learning Morse code. I will publish my commentary in another post.

You can find the scan, the transcription, and the full translation in the dedicated repository. The scan comes from the Internet Archive.

Edit: For another full translation, look here. For other summaries of the dissertation, look here, or here. I did my own summary independently of these, and before reading them.

Koch uses char/min as the unit for speed, where a character is 10.2 dots. In the modern convention, we use the length of “PARIS”, that is, 50 dots, including the word separation. So, divide the char/min value by 5 to convert to words per minute (wpm).
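The conversion is a plain division by 5; a one-line helper, just as a sanity check:

```python
def char_per_min_to_wpm(cpm: float) -> float:
    # One "word" (PARIS, 50 dots) is roughly 5 average characters,
    # so a speed in char/min divides by 5 to give wpm
    return cpm / 5

print(char_per_min_to_wpm(100))  # 20.0
print(char_per_min_to_wpm(60))   # 12.0
```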

Work-Psychological Study of The Process of Receiving Morse Code

Or, a new training procedure for radio operators

A. Introduction

Work psychology aims at improving the effectiveness of workers by studying how the human mind adapts to work procedures. The profession of radio operator is chosen since their main activity is that of receiving Morse signals, which can be easily studied and quantified.

B. General Considerations

Professional radio operators are required to perfectly understand Morse code at 100 chars/min. Even with automation, man remains indispensable in certain cases. Morse code encodes characters as a series of short tones, or “dots” (·), and long tones, or “dashes” (–). In standard Morse code, the proportions are:

  • a: length of a dot
  • b: length of a dash, equal to 3 a
  • c: separation between two elements, equal to a
  • d: separation between two characters, equal to 3 a
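These proportions can be checked against the 50-dot “PARIS” convention mentioned in the introduction. A minimal sketch (the 7-dot word separation is the modern convention implied by PARIS = 50 dots; it is not part of Koch's list above):

```python
# Timings in dots, following the proportions above; the 7-dot word
# separation is an assumption taken from the modern convention
DOT, DASH, ELEM_GAP, CHAR_GAP, WORD_GAP = 1, 3, 1, 3, 7

CODE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def char_dots(symbols: str) -> int:
    # elements, plus the gaps between them
    elements = sum(DOT if s == "." else DASH for s in symbols)
    return elements + ELEM_GAP * (len(symbols) - 1)

def word_dots(word: str) -> int:
    chars = sum(char_dots(CODE[c]) for c in word)
    return chars + CHAR_GAP * (len(word) - 1) + WORD_GAP

print(word_dots("PARIS"))  # 50
```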

In the human mind, a concept is not the sum of its parts, but an indivisible whole, or “gestalt” in German. The reader recognizes whole words instead of deciphering each letter. The same applies to Morse code.

C. Rhythm when Sending

We first study the effect of transmission speed on the psychological process. For this, experienced operators send the sequence “B C V Q F L H Y Z X” and the exact timings are recorded. Characters have different durations — E (·) takes 1 dot, but Ch (––––) 15 — so the speed is an average.


Average values of a, b, c, d for the 4 operators, at different speeds. Abscissa is the speed in chars/min and ordinate a, b, c and d, in a unit proportional to the duration.

At low speed, a and c are close (except in subject 4), as they should be. However, b is more than 3 a. More importantly, d is much longer than b, even though it should be equal to it. The longer pauses between the characters actually correspond to the fact that, between the pauses, characters are sent faster than the standard proportions would allow.


Lengths of a, b, c, d stacked. From the abscissa upwards, this would read as the letter “a”, including a letter pause at the end. The recorded timings are marked with “s”, and the standard timings as “o”.

Subject 4, who sent at standard proportions at low speeds, belongs to the optical learning type, and learned Morse from the visual symbols first. She can send reliably at a speed of 110-120 chars/min but cannot receive at more than 60-70 chars/min.

Thus, the operators adapt the proportions to keep the character “whole”.

Success rate of experienced operator receiving low-speed standard Morse code. The abscissa is the speed in chars/min and the ordinate is success rate in percent.

To verify this, an automatic device was set to send 30 characters at the standard proportions. Experienced operators had trouble receiving the characters at low speeds. So the “whole” effect appears at a speed of 50 chars/min.

Characters themselves are not sent regularly. The table below shows the difference between standard proportions, and the effective rhythm at slow speed.

Character   Standard   As sent at low speed
L           ·–··       · –··
F           ··–·       ··– ·
C           –·–·       –· –·
Q           ––·–       –– ·–
X           –··–       – ··–
Y           –·––       – ·––
K           –·–        – ·–
D           –··        – ··
Similarly, the operator breaks the character down in subunits. In standard Morse, C is –·–·. But it is often broken into –· –· (dah dit … dah dit) or, more rarely, into –·– · (dah dit dah … dit). Other characters, such as S (···) cannot be broken down this way.

In summary, standard Morse code is only usable at a speed of 50 char/min or above.

D. Learning to Receive

I. Classical Training Procedures

a) The Analytical Method

First, the student memorizes the visual representation of the Morse code, sometimes with mnemonics. Then, they start training to receive auditory Morse code at a low speed, and gradually increase the cadence. The disadvantages of this method are:

  1. Having to learn the visual representation first
  2. Waiting for a character to finish
  3. No holistic effect at the beginning
  4. Having to switch from non-holistic to holistic sound patterns
  5. Having to switch from counting the dots and dashes to recognizing the sound pattern
  6. Adapting as the rhythm changes
b) The Sound Image Method

Learning curve for the sound image method. The abscissa is the number of weeks of practice; the ordinate is the speed attained by the student.

Students learn the visual representation. When practicing reception, a longer pause is used between the individual characters, which are sent at a high speed from the beginning. Students use the pause between characters to think about what they just heard. As the speed increases, this becomes impossible, and they have to switch to immediate recognition. This shows as a plateau in the learning curve around a speed of 50 chars/min. The disadvantages of this method are:

  1. Having to learn the visual representation first
  2. Thinking about the elements of the sound pattern during the long breaks
  3. Having to switch from counting the dots and dashes to recognizing the sound pattern
  4. Adapting as the rhythm changes

II. A New Learning Procedure

To help the student directly recognize the sound pattern, the training should not teach the visual representation and should use a high speed from the start. Since the plateau is around 50 char/min, it should start with at least 60 char/min. Higher speeds make it hard to learn new characters, so just use 60 char/min.

Because of this higher starting speed, the training should start with just two characters. Other characters should be added one by one gradually as the previous ones are mastered. The speed is not increased until all characters are mastered.

The training starts with two very different characters. At first, the student is not told what the two characters are and just puts marks in the answer sheet:

This trains the student to recognize the sound pattern as a whole. Afterward, the characters corresponding to the sounds are named.

When writing the characters, a common mistake is to pause when not grasping a character immediately, to try to think about it. However, this causes a cascading effect where the next characters are not grasped immediately either, or even missed entirely. To avoid this, the student should just write a dot when they do not grasp a character immediately.

The student should master these first two characters after about 10 minutes. Then, a letter is added, again without revealing which it is at first. Once the student accurately puts marks when this new character is sent, the character is revealed. This process is then repeated for each character. Once all characters are mastered, the speed can be increased.

Some characters are harder to learn. In general, X, Y, P, Q. They should be added in the last third of the training. Characters were added as L, F, C, K, R, P, X, Y, Q, D, B, G, A, N, Z, U, V, W, J, H, S, I, E, O, M, T:

Some Morse code characters are hard to distinguish, such as S (···) and H (····), U (··–) and V (···–), D (–··) and B (–···). At a higher speed, distinguishing characters composed of dash is easier: O (–––) and Ch (––––), G (––·) and 6 (–····), W (·––) and J (·–––). The characters V (···–) and B (–···) are too different to be confused. Starting the training with similar characters is not efficient: E (·), I (··), S (···), H (····).

Characters should be added gradually, one by one. An experiment was conducted where the characters were added several at a time: T, M, O, Ch, then E, I, S, H, then D, B, G, then U, V, W. Each addition reset the student’s progress.

Training sessions should be spread out in time instead of packed together, and should not exceed half an hour.

The experiment was conducted with students, not with soldiers in training, so the schedule was not regular. In the first half-hour, trainees could learn up to 5 characters. Other characters are added more slowly. Before adding a new character, the error rate should be below 10 %. The duration of the training was limited to 24 to 28 half-hour sessions.

In addition, early practice can be helped by using a different pitch for the dots and for the dashes. The pitches are gradually brought together during the training. Merging the two pitches after the 15th or after the 20th letter makes no difference.

Learning curve for the single-pitch (dashed line) and two-pitch (solid line) methods. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

The characters were added as H, F, A, B, G, C, D, E, CH, L, K, I, O, T, M, P, Q, R, S, N, U, W, V, Y, X, Z (no J):

Plateaus correspond to the more difficult characters: P, X, V, Y, Q, Z. Adding these characters in the first third of the training, but not at the very beginning, helps ensure they are mastered by the end of the training, without discouraging the student who has just started.

Learning curve, with the hard characters added between the 10th and 13th half-hour of practice. The abscissa shows the number of half-hour sessions, and the ordinate the number of characters mastered.

13 to 14 year-old students have the same progression rate as adults.

Mastery of the alphabet at 60 char/min is reached in 14 hours with this method, against 7 weeks with the classical ones.

E. Aptitude Test

Those who fail to master 4 characters at a speed of 60 char/min in the first 2 half-hours never mastered all the characters. So the beginning of the training also serves as an aptitude test, before the candidate spends dozens of hours in training. Students should be grouped by aptitude to avoid slowing down the most apt. Some students could be impeded by slow writing, but this can be remedied with practice. Those accustomed to concentrating do best, in particular those of the acoustic imagination type. Among about 100 subjects, none of the diagnoses made on the basis of the first two half-hours led to a wrong judgment. Biegel does a similar aptitude test, but she starts at 40 char/min.


Where to Start with Rust

As a Rust evangelist, I have been asked a few times for resources to start learning Rust. I have compiled my thoughts in this article. The resources are listed in rough order: take a look at the first ones first, and at the others later. Do not take this as gospel, though, and do peek at the rest of the list before even completing the official tutorial.

The Rust Programming Language

This is the official tutorial for Rust. It is also known as “the Rust book”. I do not recommend you read it entirely, but you should read at least the first 12 chapters. Of course, the rest of the chapters are worth a read, but the more advanced topics are better covered by dedicated resources.

Rust for functional programmers

If you are used to programming in Haskell or OCaml, this tutorial might help get you up to speed quicker.

Tour of Rust

For a more hands-on approach, you can use this tutorial, which drops you in a Rust playground with minimal exercises that each train you on a new concept.

Rust by Example

If the Tour of Rust is not enough, you can also try this one to complement your knowledge.

Learning Rust With Entirely Too Many Linked Lists

This tutorial is about getting over one of the main pain points of Rust: cyclic references. It goes over various solutions, how they work, and their pros and cons. A must-read if you want to become more comfortable with Rust.

Learn Rust the Dangerous Way

You can see this tutorial as an extension of the previous one. When you are used to developing in C, switching to idiomatic Rust can seem like a daunting task. This website will take you from a C program (the n-body problem from the Benchmarks Game) to a fully idiomatic and optimized Rust one.


At this point, you should be getting comfortable with the syntax and main idiosyncrasies of Rust. This collection of exercises will let you hone your skills.

Zero To Production In Rust (book)

To go further than writing small snippets of Rust, you will want to start taking into account higher-level considerations. This book narrates every step in setting up a simple mailing-list server in Rust.

Macros in Rust

Writing macros is a fairly advanced topic in Rust. This tutorial will get you more familiar with writing declarative macros (or “macros by example”).

The Rustonomicon

Most of the time, you should not need to touch unsafe. But, when you do, you enter a parallel world where every step you take could lead to the dreaded undefined behavior. Using unsafe safely requires thinking about many things. This tutorial will help you get started with it.

Rust for Rustaceans (book)

Like all non-trivial software projects, Rust has its quirks and unintuitive mechanisms. This book is a very dense explanation of some of this crust. You might also enjoy the author’s YouTube channel.

This Week in Rust

This is the one-stop-shop to keep learning more about Rust, its ecosystem and its community. You do not need to follow all the links, or even read it from start to finish, but skimming the latest edition every week will let you stay on top of things.

Rust Koans

I conclude this list of heavy technical content with some light reading, likely inspired by the Rootless Root.


Client-Side Password Hashing

tl;dr: When you think about it, hashing passwords only client-side (in the browser, in JavaScript) is not as terrible as you might think. And if you do most of the work client-side, you get something that is actually pretty secure.

Storing Passwords

So, I have been working on modernizing a Web application and fixing some of its issues. As someone who has spent way too much time implementing cryptographic primitives and cracking password hashes for fun, one of the main problems was that passwords were only hashed client-side.

For the context, we have learned quite a long time ago that storing raw passwords in a database was a bad idea. If the database leaked, all passwords were immediately compromised. Since people tend to reuse the same passwords for various services, it meant that compromising a non-sensitive service would allow an attacker to access very sensitive accounts of many individuals.

So we encrypt them. Except we don’t. If the database leaks, the encryption key will probably leak as well. Instead, we cryptographically hash the password. That is, we transform it into something completely different, irreversibly. So, “correct horse battery staple” would become “c4bbcb1fbec99d65bf59d85c8cb62ee2db963f0fe106f483d9afa73bd4e39a8a”.

How your password is saved when you register, and how it is checked when you log in

Because the same password always gets hashed the same way, we can authenticate people by comparing the hashes instead of the passwords themselves. No need to remember the password in the database! And if the database leaks, an attacker cannot find “correct horse battery staple” from “c4bbcb1fbec99d65bf59d85c8cb62ee2db963f0fe106f483d9afa73bd4e39a8a”.

Well, except they can.

Slowing down attackers

Although cryptographic hashes are designed to be hard to reverse directly, an attacker who got access to the hashes can just take a guess, hash it, and check whether it matches the hash of a user. If it does, they have found a password.

Since cryptographic hashes are also designed to be very fast, an attacker can try many guesses in a short amount of time.

The average password entropy is 40 bits. The median is likely to be lower. A modern CPU can compute more than 1 million SHA-256 hashes per second. So trying every password with 40 bits of entropy would take less than 2 weeks.
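The back-of-the-envelope arithmetic is easy to verify:

```python
# Exhausting a 40-bit password space at one million hashes per second
guesses = 2 ** 40
rate = 1_000_000          # SHA-256 evaluations per second on one CPU
days = guesses / rate / 86_400
print(round(days, 1))     # about 12.7 days: indeed less than 2 weeks
```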

The number of characters is not everything. What matters is how it is generated. Don’t pick a password: since humans are bad at randomness, the entropy will be even lower!

Thankfully, we can make things more difficult for the attacker. What if, instead of storing SHA-256(password), we stored SHA-256(SHA-256(password))? It would make things twice as hard for the attacker, and would not change much when we validate a user’s password for authentication. We can go further! Instead of computing SHA-256 twice, we can make it take ten, a hundred, a thousand iterations! We can choose a number such that hashing the password takes 10 ms on the server. In that case, it would take the attacker 350 years of CPU time! This is basically PBKDF2.
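Python's standard library exposes this iterated construction as `hashlib.pbkdf2_hmac`. A minimal sketch; the salt and iteration count are illustrative values, and a real system would store a random per-user salt:

```python
import hashlib

salt = b"demo-salt"       # illustrative; use a random per-user salt
iterations = 100_000      # tune until one hash takes ~10 ms on your server

def slow_hash(password: bytes) -> bytes:
    # Iterated HMAC-SHA-256, making each guess proportionally slower
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

stored = slow_hash(b"correct horse battery staple")
# At login, hash the submitted password the same way and compare
print(slow_hash(b"correct horse battery staple") == stored)  # True
print(slow_hash(b"Tr0ub4dor&3") == stored)                   # False
```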

The more iterations, the longer it will take to crack a password. Of course, verifying the password when authenticating will also take longer, but it should only be a few milliseconds.

Except that the attacker can do something that we cannot: it can try many guesses in parallel. Today’s CPUs can have dozens of cores. GPUs scale even more. And, with Bitcoin/blockchain/web3/NFTs, people have started building dedicated hardware to hash SHA-256 en masse. For a few thousand dollars, you can get hardware to crack SHA-256 at tens of billions of guesses per second. Even with our clever scheme, the attacker is back to 2 weeks to try every password1.

That’s cheating!

Yeah, it is! But we can do something about that. GPUs and dedicated hardware are designed for very specific tasks. We can take advantage of that to make it very hard to crack passwords with them. The trick is to replace SHA-256 with a hash function that requires a lot of memory to compute. It turns out, people have designed a very good one: Argon2id2.
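Argon2id itself needs a third-party package, but Python's standard library ships scrypt, the earlier memory-hard function mentioned in the footnote, which is enough to sketch the idea:

```python
import hashlib

# scrypt as a stand-in for a memory-hard hash; parameters are illustrative
digest = hashlib.scrypt(
    b"correct horse battery staple",
    salt=b"demo-salt",      # a real system uses a random per-user salt
    n=2 ** 14, r=8, p=1,    # cost parameters: ~16 MiB of memory per hash
    maxmem=2 ** 26,         # allow the memory the parameters require
)
print(len(digest))  # 64-byte digest by default
```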

That’s basically the state of the art in storing passwords3.

Client-Side Only Hashing

The client (your web browser) just sends the password over the network. Then, the server hashes it before storing or checking it.

Now, that’s quite a bit of work for our server. We need to do a significant amount of computation every time a user tries to connect. What if we had the user’s client do it instead?

Instead of sending the password, and having the server hash it, let’s imagine we use some JavaScript to hash the user’s password with Argon2id, and then send it over the network. Now, the server only needs to compare the hash to the one in the database!

Now, the client does most of the work.

There is one issue though: from the point of view of our Web application, the actual credential is not the password anymore, but the hash. So if an attacker gets the hash, they can authenticate as the user. In other words, a database leak means the credentials leak!

It’s not as bad as in the initial scenario, however. These credentials are only valid for our service. And it is still very hard to find the original passwords from the hashes.

Mostly Client-Side Hashing

We can improve this with a simple observation: since it is very hard to get the password back from the hash, we can treat the hash like a high-entropy password.

In this case, brute-force attacks become irrelevant, and we do not need to iterate or use memory-hard functions. Instead, we could just use a fast hash. It looks like this:

  1. The browser computes an intermediate hash with a slow memory-hard function
  2. The browser sends the intermediate hash to the server
  3. The server hashes the intermediate hash with a fast function, getting the final hash
  4. The server compares the final hash with the one in the database
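The four steps above can be sketched as follows, with stdlib scrypt standing in for the memory-hard function (Argon2id in the post) and a fixed illustrative salt:

```python
import hashlib

SALT = b"per-user-salt"  # illustrative; use a random per-user salt

def client_intermediate(password: bytes) -> bytes:
    # Step 1: slow, memory-hard hash, computed in the browser
    return hashlib.scrypt(password, salt=SALT,
                          n=2 ** 14, r=8, p=1, maxmem=2 ** 26)

def server_final(intermediate: bytes) -> bytes:
    # Step 3: fast hash on the server; only this value is stored
    return hashlib.sha256(intermediate).digest()

stored = server_final(client_intermediate(b"hunter2"))

# Steps 2 and 4: the browser sends the intermediate hash,
# the server re-hashes it and compares with the database
attempt = client_intermediate(b"hunter2")
print(server_final(attempt) == stored)  # True
```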

From the point of view of the server, the effective credential is the intermediate hash. However, if the database leaks, the attacker only gets the final hash. It would be very hard to find the intermediate hash directly from the final hash, since the input space is very large. And it would be very hard to find the user’s password from the final hash, since a memory-hard function is involved.

The nice thing is that all the heavy work is offloaded to the browser, while still keeping the password secure!

The one drawback that I can think of is that this is not enforced by the back-end. If the front-end (especially a third party one) does not do proper memory-hard hashing, passwords might become exposed.

A positive side effect is that the back-end never sees raw passwords. It can be a good mitigation for inadvertently leaking information in the logs. Leaking the intermediate hash still allows an attacker to authenticate as a user, but they are not able to get their raw password and compromise the user’s accounts on other websites.


A priori, client-side hashing is a terrible idea from the point of view of security. However, looking more closely at it, it looks like it can actually make sense. But we should still use hashing on the server to avoid any risk of leaking credentials.

  1. Note that, without the iterated hashing, it would take just 2 minutes with dedicated hardware. So that’s still better than not doing it. ↩︎
  2. scrypt came first, and is pretty good as well ↩︎
  3. There is also salting, but that’s orthogonal to this discussion ↩︎

Strongly Typed Web Apps

I Am Old

I like software to be correct. I know. Controversial, right? But as I have gained experience tinkering with various technologies, and building hobby project after hobby project, with some professional projects in-between, I have come to realize that I do not want to just pray and hope. I want to know that my programs work. And I do not want to spend 99 % of my time manually testing them.

When I first encountered Python, coming from a QBasic, C, and C++ background, it was bliss not having to declare types manually. Instead of wasting time writing boilerplate, I could actually make my programs do what I wanted them to do.

But, as my projects grew more ambitious, I had to become stricter and stricter to make sure I found bugs as early as possible. So I wrote tests, set up linters, wrote tests, added back typing through type annotations, and wrote tests.

Type annotations get a bad rap, but they are actually useful, unlike C types.

In C, types are mostly about memory layout, and not so much about correctness. When it comes to semantics, a char is an int, a short is an int, and an int could be a long, or a float. Also, an int could be a short1. And of course, you can add strings to arrays, pointers to integers, and floating point values to enumerations. Because, in the end, it’s all the same to C2.

With type annotations, you start to get the point that functional aficionados have been trying to make since forever. “Yeah, that value could be None, so you need to check for that before you use it. By the way, you’re evaluating the addition between a hash table and a list; don’t.” Thanks, compiler3.
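In Python terms, a tiny sketch of the kind of mistake annotations catch (the function and names are made up for the example):

```python
from typing import Optional

def find_user_id(users: dict[str, int], name: str) -> Optional[int]:
    # dict.get returns None when the key is missing
    return users.get(name)

uid = find_user_id({"alice": 1}, "bob")
# Writing `uid + 1` here is flagged by mypy: Optional[int] is not int.
# The annotation forces the None check:
if uid is not None:
    print(uid + 1)
```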

And when you start thinking like that, you might want to say: “Okay, this variable holds euros. I really do not want to add it to kilowatts.”. That’s when you know it’s too late. But now you have implemented a full-fledged system for dimensional analysis in your type system. And it is glorious. You have forgotten what it was meant for, though.
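A minimal version of that euros-versus-kilowatts check can be had with `typing.NewType` (the names here are invented for the example):

```python
from typing import NewType

Euros = NewType("Euros", float)
Kilowatts = NewType("Kilowatts", float)

def add_prices(a: Euros, b: Euros) -> Euros:
    return Euros(a + b)

total = add_prices(Euros(3.0), Euros(4.5))
print(total)  # 7.5
# add_prices(Euros(3.0), Kilowatts(2.0)) is rejected by mypy,
# even though both are plain floats at runtime
```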

What does this have to do with Web Apps? Seriously, not much. But there is a point!

Typed APIs

Web Apps have two fully separated software components talking to each other: the frontend, and the backend. And it’s very easy to introduce bugs at the interface between the two. Especially when they use different languages4.

Of course, the solution is XML. As everyone knows, you simply write an XSD or DTD specification for your format. Then, you can easily transform your data for consumption thanks to XSLT, and even handle presentation with XSL. You might even have something working within ten years!

Okay, okay, everyone is doing JSON these days. Thankfully, we do have a way to describe what a JSON document contains: Swagger OpenAPI.

Fine. So you write this OpenAPI thing. Then what?

Well, that’s where the current picture is still lacking. We now have three components and two interfaces:

  • backend code
  • OpenAPI specification
  • frontend code

How do we get out of manually checking everything and make sure it works?

First, we can generate some frontend code from the OpenAPI specification, which will handle all communications with the backend. For this, I use openapi-typescript-codegen. Okay, that one mostly works fine.

Now, the fun part is to make the backend code and the OpenAPI code work together.

The convention is to generate the OpenAPI specification from the backend code, and not backend code from the OpenAPI specification. In Python, you might be familiar with Flask-RESTful Flask-RESTplus Flask-RESTX. In Rust, you could be using Utoipa.

In principle, this is great: you add some annotations to your backend code, and you can be sure that your frontend code sees the same types!

However, this is still a very immature space. You can very easily document the wrong type in the annotations, and you won’t notice the problem until you try to actually use the generated OpenAPI specification in the front-end.


Things have definitely improved since the days I was writing PNG parsers in QBasic. But we still have ways to go until we really pick all the low-hanging fruit for software correctness. I want the compiler to shout at me more often!

  1. Yes, I am aware of type ranks. The point is that none of this helps you make your program more correct. ↩︎
  2. Sure, you cannot add a struct to a bit-field. But guess how you pass a struct around? ↩︎
  3. Or mypy in this case. ↩︎
  4. Sure, you might enjoy using JavaScript in your backend, since you already have to use it for your frontend. Not. ↩︎

Overkill Debugging

Hack that Code!

I have a not-so-secret technique that I use whenever I encounter a bug that evades my grasp. Or when I am lazy. Okay, mostly when I am lazy.

The idea is no more than applying the concept of binary search to source code:

  1. Remove half the code
  2. Compile and run
  3. If the bug is still present, repeat with the remaining code
  4. Otherwise, repeat with the code that was removed in 1

Of course, this naive approach is rarely practical. For one, removing half the code indiscriminately tends to make things not compile. Also, sometimes, a bug manifests because of the way the first half and the second one interact.

So, in practice, it looks more like:

  1. Remove some big chunk of code
  2. If that fails to compile/run, try another chunk, or reduce the size
  3. Compile and run
  4. If the bug is still present, repeat with the remaining code
  5. Otherwise, put back the chunk of code, and try removing another one (can be a sub-chunk of the initial chunk)

Note: sometimes, you will still need to do a mini-refactor to keep making progress.

This works surprisingly well. And it is very satisfying to remorselessly hack through code that was written painstakingly over hours, months, or even years. And then to end up with a small subset of code that clearly exemplifies the bug.
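The loop above is mechanical enough to sketch in a few lines of shell. Below is a toy version, not a real tool (C-Reduce, described next, does this properly): it takes a file and a command that exits 0 while the bug is still reproducible, and greedily deletes chunks of lines. The function name and structure are my own illustration, and it relies on GNU sed.

```shell
#!/usr/bin/env bash
# Toy delta-debugger: reduce_file FILE TEST greedily deletes chunks of
# lines from FILE, keeping a deletion only while TEST (a command that
# exits 0 when the bug is still reproducible) keeps passing.
# Uses GNU sed's -i flag.
reduce_file() {
    local file=$1 test_cmd=$2
    local chunk start total
    chunk=$(( ($(wc -l < "$file") + 1) / 2 ))
    while [ "$chunk" -ge 1 ]; do
        start=1
        total=$(wc -l < "$file")
        while [ "$start" -le "$total" ]; do
            cp "$file" "$file.bak"
            # Tentatively delete lines start..start+chunk-1
            sed -i "${start},$((start + chunk - 1))d" "$file"
            if sh -c "$test_cmd" >/dev/null 2>&1; then
                total=$(wc -l < "$file")   # bug still present: keep deletion
            else
                mv "$file.bak" "$file"     # bug gone: put the chunk back
                start=$((start + chunk))
            fi
        done
        chunk=$((chunk / 2))
    done
    rm -f "$file.bak"
}
```

For instance, `reduce_file main.c ./check.sh` with a (hypothetical) check.sh that compiles the program and tests for the bug.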

Automate the Fun Parts

Software engineers might be the group of people who work the hardest to make themselves redundant.

[Comic: xkcd’s “Is It Worth the Time?”, whose title text reads: “Don't forget the time you spend finding the chart to look up what you save. And the time spent reading this reminder about the time spent. And the time trying to figure out if either of those actually make sense. Remember, every second counts toward your life total, including these right now.”]

I am guilty of spending way too much time automating tasks that only took a trivial amount of time

So, of course, someone already automated the process I described above. It is called C-Reduce. It is meant to find bugs in the compiler rather than in the compiled program, but the principle is the same. If your program is in C and you can write a small script that tests whether the bug is present, then it will automatically slash through your code.

To give you an idea, consider the code below.

#include <stdlib.h>
#include <stdio.h>

void f(void) {
    int x = 42;
    int y = 83;
    int z = 128;
    printf("%d\n", x + y);
}

void g(void) {
    int x = 1;
    int y = 2;
    int z = 3;
    printf("%d\n", x + y);
}

int h(void) {
    return 42 * 43;
}

int main(void) {
    f();
    g();
    h();
    return EXIT_SUCCESS;
}

#include <stdarg.h>
#include <time.h>

As well as the following Bash script.

#!/usr/bin/env bash
set -Eeuo pipefail
gcc -o main main.c
./main | grep -q ^125$

Then, running creduce with that script and main.c reduces the C source file to:

main() { printf("%d\n", 42 + 83); }

This was really a toy example, but this program is very useful when you have large source files. For instance, C++ ones *shivers*.

Audio Millisecond Mistiming

I have been writing a tool to practice Morse code using jscwlib, the library behind LCWO. I want to play an infinite flow of words in Morse code. The algorithm is the fruit of decades of research:

  1. Play a word
  2. When it has finished playing, play another word

While dogfooding, I realized that the speed sometimes increased unexpectedly. This was subtle enough that I did not notice the transition from normal speed to the higher speed.

Over my next practice sessions, I kept an eye (well, an ear) on this phenomenon, and eventually realized that:

  • The space between the words did get shorter
  • The duration of the characters remained the same

From these two observations, I was able to devise a more reliable way to detect the presence of the bug. First, I made all the words just the letter “e”, which is a single “dit” in Morse. Second, I measured the time between the starts of two consecutive words with new Date(). Finally, I showed the durations on the console.

This is not a very precise method, but it enabled me to see the duration of the spaces shrinking millisecond by millisecond over a few seconds. The effect became noticeable after a few hundred words were played.

With this, I could test the presence of the bug in mere seconds, instead of minutes of practice. So I went slashing.

After a short while, I found the culprit, wrote a fix, and opened a PR!


In the end, it took me much more time to notice the bug, understand what it looked (sounded) like, and find a way to check for it effectively, than to actually identify its origin in the source code.

Amateur Radio

HamFox: Forking Firefox for Fun and no Profit

Null Cipher

The amateur radio service does not allow encryption of communications. This is a fundamental rule meant to ensure that the amateur service is not used for unintended purposes, such as commercial ones.

As a consequence, we cannot transmit passwords securely over the amateur frequencies. But we can use cryptographic authentication. Just not anything that would be used to hide the contents of the communications.

Cryptographic authentication is something that HTTPS/TLS allows out-of-the-box. We just need to make sure that we are not using encryption. In this article, we’ll see how to do this with Mozilla Firefox.

Forking Firefox

The code base of Firefox is an unwieldy Mercurial repository. Maintaining a fork would be cumbersome.

Git has won. Learning Mercurial for a small project adds quite a bit of friction, and a Mercurial repository could not be hosted on GitHub or GitLab. Of course, I could use git-remote-hg, but that adds another layer of abstraction, and another source of problems. I tried both, but it turned out not to matter.

Since the official build process assumes that you are using the official repository with the official URLs and so forth, forking the repository itself would also require patching the build process. And I really did not want to touch that.

Another approach is to make a repository that just contains the patches to apply to the Firefox codebase, and a script that fetches the Firefox repository, applies the patches, and builds the project. This is gluon.

If you look into the HamFox repository, you will see that it contains just a few small files. The most important one is the null cipher patch itself.

The Patch

Grepping through the code for the TLS names of the ciphers, we uncover an array sCipherPrefs that lists the supported ciphers. Here is an extract:

static const CipherPref sCipherPrefs[] = {
    // ...
    {"security.ssl3.rsa_aes_256_sha", TLS_RSA_WITH_AES_256_CBC_SHA,
     /* ... */},
    // ...
};

Each entry in the array is a CipherPref. The definition is given right above the array:

// Table of pref names and SSL cipher ID
typedef struct {
  const char* pref;
  int32_t id;
  bool (*prefGetter)();
} CipherPref;

The first field is the string used to toggle the cipher in the about:config page. There is no point in creating a toggle for this cipher, so we’ll just leave it empty.

The second field is the identifier of the cipher; we find that the macros are defined in security/nss/lib/ssl/sslproto.h. When looking for the null ciphers, we find:

$ grep NULL security/nss/lib/ssl/sslproto.h
#define SSL_RSA_WITH_NULL_MD5                  TLS_RSA_WITH_NULL_MD5
#define SSL_RSA_WITH_NULL_SHA                  TLS_RSA_WITH_NULL_SHA
#define TLS_NULL_WITH_NULL_NULL                 0x0000
#define TLS_RSA_WITH_NULL_MD5                   0x0001
#define TLS_RSA_WITH_NULL_SHA                   0x0002
#define TLS_RSA_WITH_NULL_SHA256                0x003B
#define TLS_ECDH_ECDSA_WITH_NULL_SHA            0xC001
#define TLS_ECDHE_ECDSA_WITH_NULL_SHA           0xC006
#define TLS_ECDH_RSA_WITH_NULL_SHA              0xC00B
#define TLS_ECDHE_RSA_WITH_NULL_SHA             0xC010
#define TLS_ECDH_anon_WITH_NULL_SHA             0xC015
#define SRTP_NULL_HMAC_SHA1_80                  0x0005
#define SRTP_NULL_HMAC_SHA1_32                  0x0006
#define SSL_FORTEZZA_DMS_WITH_NULL_SHA          0x001c

The SSL_* ones are just aliases for the TLS_* ones. We do want authentication. So, this eliminates TLS_NULL_WITH_NULL_NULL. We then have the choice between MD5, SHA-1 and SHA-256, and I opted for the last one, so TLS_RSA_WITH_NULL_SHA256.

The third field is a pointer to the function that tells us whether the toggle is enabled or not. There is no toggle, but we can just write a function that always returns true.

We replace the contents of the array with a single entry containing these fields, and we’re good!

To minimize the risk of conflict when the upstream code changes, I have put the other ciphers in a block comment instead of removing them altogether from the code. With this, we get:

diff --git a/security/manager/ssl/nsNSSComponent.cpp b/security/manager/ssl/nsNSSComponent.cpp
index 5844ffecfd..e2da79480a 100644
--- a/security/manager/ssl/nsNSSComponent.cpp
+++ b/security/manager/ssl/nsNSSComponent.cpp
@@ -1054,9 +1054,15 @@ typedef struct {
   bool (*prefGetter)();
 } CipherPref;
+bool AlwaysTrue(void) {
+  return true;
+}
+
 // Update the switch statement in AccumulateCipherSuite in nsNSSCallbacks.cpp
 // when you add/remove cipher suites here.
 static const CipherPref sCipherPrefs[] = {
+    {"", TLS_RSA_WITH_NULL_SHA256, AlwaysTrue},
+    /*
@@ -1103,6 +1109,7 @@ static const CipherPref sCipherPrefs[] = {
     {"security.ssl3.rsa_aes_256_sha", TLS_RSA_WITH_AES_256_CBC_SHA,
+    */
 // These ciphersuites can only be enabled if deprecated versions of TLS are
Amateur Radio

HTTPS Without Encryption

As I have discussed previously, we can use cryptography over the amateur radio service, as long as we disable encryption. I have shown how to do this with SSH. Now, let’s see how to do this with HTTPS.

Modifying a Web browser, or even just a Web server, is an involved process, so we’ll start small, with the openssl command-line tool. On with crypto-crimes! No, not that crypto.

OpenSSL Server

The openssl command-line tool can be used like netcat to open a TLS session. To start a server, instead of:

$ nc -nlvp 4433
Listening on 4433

We’ll do:

$ openssl s_server
Could not open file or uri for loading server certificate private key from server.pem
4067D713C67F0000:error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:../crypto/store/store_register.c:237:scheme=file
4067D713C67F0000:error:80000002:system library:file_open:No such file or directory:../providers/implementations/storemgmt/file_store.c:267:calling stat(server.pem)

Uh oh.

Right. We’re doing TLS, so we’ll need some cryptographic key.

If you are on a Debian-based Linux distribution (e.g. Ubuntu), you can always use the “snakeoil” key:

$ sudo openssl s_server -key /etc/ssl/private/ssl-cert-snakeoil.key -cert /etc/ssl/certs/ssl-cert-snakeoil.pem -www
Using default temp DH parameters

Note that we need to run the command as root to be able to read the private key. The -www option starts a basic Web server behind the TLS server, so you can test the setup by browsing to https://localhost:4433. It will serve a Web page giving you the command line you used, as well as debugging information about the TLS connection.

Note: your Web browser does not know the issuer of this certificate, so it will show a scary warning as shown below. This is expected, so you can just click on “Advanced…” and then on “Accept the Risk and Continue”.

Never trust the people at localhost

We’ll want to create our own cryptographic key instead of using the “snakeoil” one. We’ll also need a corresponding “.pem” file, or certificate, that gives a name (“CN”, for “Common Name”) to this key. This is done with the following commands.

$ openssl genrsa -out server.key 4096
$ openssl req -new -x509 -key server.key -out server.pem -subj "/CN=localhost"
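If you want to check what ended up in the certificate, you can ask openssl to decode it back. The sketch below regenerates the key and certificate in a temporary directory so that it stands on its own; the file names match the ones above.

```shell
# Recreate the key and self-signed certificate in a scratch directory,
# then print the certificate's subject, which should show the Common
# Name we just set.
cd "$(mktemp -d)"
openssl genrsa -out server.key 4096
openssl req -new -x509 -key server.key -out server.pem -subj "/CN=localhost"
openssl x509 -in server.pem -noout -subject
```

The -dates option, added alongside -noout, similarly shows the validity period.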

Once this is done, we can start a TLS server with:

$ openssl s_server -key server.key -cert server.pem -www

As with the “snakeoil” certificate, you can check the connection from your Web browser:

The warning sign on the padlock in the address bar is telling you that the connection might not be fully secure. Do not worry. It is going to become worse. Much worse.

OpenSSL Client

We’ve seen the equivalent of nc -nlvp 4433. Now, let’s see the equivalent of nc -v localhost 4433:

$ openssl s_client

By default, it will try to connect to port 4433 on the local loopback. If you have s_server running without the -www option in another terminal, you will be able to send messages from one terminal to the other.

Note: if you do not remove the -www option from the s_server command, you will not see your inputs as you might expect, since they will be either ignored (server side) or sent as part of the HTTP protocol (client side).

Note: input is sent line-by-line, so you won’t see your message in the other terminal until you hit Return.

If you want to feel like a leet-h4x0r, you can use the s_client command to connect to any public TLS server. For instance, instead of clicking on the link, you may run the following command:

$ openssl s_client -connect perdu.com:443

Then, type the following and hit Return twice (you need to send a blank line).

GET / HTTP/1.1

You should see the HTML for the Web page being printed in your terminal:

<html><head><title>Vous Etes Perdu ?</title></head><body><h1>Perdu sur l'Internet ?</h1><h2>Pas de panique, on va vous aider</h2><strong><pre>    * <----- vous &ecirc;tes ici</pre></strong></body></html>

Disabling Encryption

What we’ve done so far is basically the equivalent of the commands shown below, except everything was encrypted.

$ nc -nlvp 4433
$ nc -v localhost 4433
$ nc perdu.com 80
GET / HTTP/1.1

To convince ourselves of it, we can use Wireshark. Once it’s started, select the “any” interface, type tcp.port==4433 as the filter, and hit Return.

Mind like a blank slate. Like a baby. Or me when I have to read reduced standard C++ headers.

Oh! And go play with the s_server and s_client commands like we’ve seen above. You should see stuff like this:

Seriously, Wireshark is cool. It knows stuff.

This shows network packets being exchanged on the TCP port 4433 (the one used by default by s_server and s_client). The one I have selected corresponds to the transmission of “Hello World” from the client to the server. You can see it. It’s right here in the “Encrypted Application Data”. Okay, you cannot really see it, since it’s encrypted, but take my word for it.

Now that we have done all of that, we can ask ourselves the fun question: how can we make all of this pointless?

Let’s disable all this “confidentiality-preserving” technobabble “encryption” stuff.

First, let’s ask openssl what encryption modes it supports. On my computer, I get:

$ openssl ciphers -s -v
TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384
DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256
DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256
AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD
AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD
AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256
AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1

Each line is a possible setting (“cipher suite”). What matters is the Enc= part. All of these settings are using some mode of “AES” as encryption. “AES” stands for “Advanced Encryption Standard”. Booooring.

We want no security. None. Zilch. Seclevel zero.

$ openssl ciphers -s -v 'NULL:@SECLEVEL=0' 
TLS_AES_256_GCM_SHA384         TLSv1.3 Kx=any      Au=any   Enc=AESGCM(256)            Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256   TLSv1.3 Kx=any      Au=any   Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256         TLSv1.3 Kx=any      Au=any   Enc=AESGCM(128)            Mac=AEAD
ECDHE-ECDSA-NULL-SHA           TLSv1   Kx=ECDH     Au=ECDSA Enc=None                   Mac=SHA1
ECDHE-RSA-NULL-SHA             TLSv1   Kx=ECDH     Au=RSA   Enc=None                   Mac=SHA1
AECDH-NULL-SHA                 TLSv1   Kx=ECDH     Au=None  Enc=None                   Mac=SHA1
NULL-SHA256                    TLSv1.2 Kx=RSA      Au=RSA   Enc=None                   Mac=SHA256
NULL-SHA                       SSLv3   Kx=RSA      Au=RSA   Enc=None                   Mac=SHA1
NULL-MD5                       SSLv3   Kx=RSA      Au=RSA   Enc=None                   Mac=MD5 

That’s more like it! We have several options to choose from. You see this Mac=MD5? I know we are trying to make every security researcher on the planet faint, but we still respect ourselves. We’ll use NULL-SHA256. It’s the most secure of the totally insecure cipher suites. Trust me, I have a PhD in cryptography.

Let’s redo the s_server and s_client stuff again, with this cipher suite.

$ openssl s_server -key server.key -cert server.pem -cipher NULL-SHA256:@SECLEVEL=0 -tls1_2
$ openssl s_client -cipher NULL-SHA256:@SECLEVEL=0
Hello World

Note: we need to add the -tls1_2 option; otherwise, OpenSSL will offer TLS 1.3. The cipher suite we have selected is only available for TLS 1.2, and OpenSSL uses a separate option, -ciphersuites (yes, seriously) to configure TLS 1.3 cipher suites. Just add -tls1_2 and don’t think about it too much.

Hey, did you notice we were doing IPv6? That’s totally transparent, thanks to the magic of the OSI model. Wait no, TCP/IP is not actually OSI. Also, TCP and IP are not independent. Also learning the OSI model will probably never be useful to you. No, that had nothing to do with the contents of this article. Mostly.

Totally different, you see?

Actually, yes. 48656c6c6f20576f726c64 is the hexadecimal representation of the ASCII encoding of Hello World:

Who can’t read ASCII fluently?
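You can reproduce the decoding with standard shell tools (od is POSIX, so this should work anywhere):

```shell
# Hex-encode "Hello World", exactly as it appears on the wire with the
# NULL cipher: od prints the bytes in hexadecimal, tr strips the spacing.
printf 'Hello World' | od -An -tx1 | tr -d ' \n'; echo
# 48656c6c6f20576f726c64
```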

Great! Now, we’re really back at Netcat-level security.

Note: sure, the client is checking the certificate of the server. And we could do all that with a certificate signed by Let’s Encrypt so that it’s not actually fully useless. Oh yeah, right, no, the Web browser is all panicky about the “no security” thing and won’t even let you enable the NULL-SHA256 cipher.

The error messages are getting worse. It means we are going in the right direction.

Client Authentication

Okay, okay, wait up. There is one more thing we can do with this, and it might serve some kind of purpose. The browser is checking the authenticity of the server. But what if, like, the server checked the authenticity of the client, man?

Yes, we can do this!

Modern Web browsers are a bit fussy when we disable encryption, so the command lines in this section will not do so. That way, you can follow along in your Web browser. But know that you can combine them with the NULL cipher, and test with the s_client command.

First, let’s tell the server to check the authenticity of the client. We tell it to use server.pem as the authority for client certificates. That is, to be considered valid, client certificates must have been signed by the private key of the server.

$ openssl s_server -key server.key -cert server.pem -verify 1 -CAfile server.pem
verify depth is 1
Using default temp DH parameters

Then, let’s create a certificate for the client. Since it should be signed by the server’s key, that’s what we do.

$ openssl genrsa -out client.key 4096
$ openssl req -new -key client.key -out client.csr -subj '/CN=John Doe'
$ openssl x509 -req -CA server.pem -CAkey server.key -in client.csr -out client.pem
$ openssl pkcs12 -export -out client.p12 -in client.pem -inkey client.key -passout pass:

Note: the last one is only useful if you want to import the client certificate in a Web browser.
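To convince yourself that the signing worked, you can ask openssl to verify the client certificate against the server’s. The sketch below rebuilds the whole chain in a temporary directory so that it runs on its own; the file names match the ones above, and I add -CAcreateserial so that older OpenSSL versions do not complain about a missing serial file.

```shell
# Rebuild the server key/certificate and the signed client certificate
# in a scratch directory, then check that client.pem chains up to
# server.pem.
cd "$(mktemp -d)"
openssl genrsa -out server.key 4096
openssl req -new -x509 -key server.key -out server.pem -subj "/CN=localhost"
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr -subj '/CN=John Doe'
openssl x509 -req -CA server.pem -CAkey server.key -CAcreateserial \
    -in client.csr -out client.pem
openssl verify -CAfile server.pem client.pem
# client.pem: OK
```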

And now, we can use it to authenticate:

$ openssl s_client -cert client.pem -key client.key

On the server’s side, you should see something like:

$ openssl s_server -key server.key -cert server.pem -verify 1 -CAfile server.pem
verify depth is 1
Using default temp DH parameters
depth=1 CN = localhost
verify return:1
depth=0 CN = John Doe
verify return:1
subject=CN = John Doe
issuer=CN = localhost

Note: alternatively, you can add the -www option to s_server and navigate to https://localhost:4433. At the bottom, you’ll see “no client certificate available”. You can change that by loading the client certificate in your Web browser.

Note: to add the certificate in Firefox, go to:

  1. “Settings”
  2. “Privacy & Security”
  3. “Certificates”
  4. “View Certificates”
  5. “Your Certificates”
  6. “Import…”
  7. Select the client.p12 file
  8. Give an empty password

Okay, but… Why?

Good question! When we do this, we are allowing the client to authenticate using a secret file on their computer (the private key). This could be used in an actual Web service to let people connect to their account without using a password. That’s a good thing: since we’re going to disable encryption, we had better not have passwords flying over the air!


Now, all of this was a bit academic. But it has allowed us to test the waters of disabling encryption and using client authentication. It will also come in handy when we want to check that a modified Web browser can connect to an s_server instance with NULL encryption, or that we can authenticate to an HTTPS service with a client certificate.

But these will be topics for other posts!