This is a discussion between John Ousterhout and Robert Martin, who in “Clean Code” advocated omitting comments and splitting code into extremely small functions. Ousterhout puts that to the test by asking Martin to explain an algorithm Martin presented in “Clean Code”, one that generates a list of prime numbers. It turns out that Martin essentially does not understand his own code because of the way it is written - and his rewrite even introduces a performance regression!
Ousterhout: Do you agree that there should be comments to explain each of these two issues?
Martin: I agree that the algorithm is subtle. Setting the first prime multiple as the square of the prime was deeply mysterious at first. I had to go on an hour-long bike ride to understand it.
[…] The next comment cost me a good 20 minutes of puzzling things out.
[…] I refactored that old algorithm 18 years ago, and I thought all those method and variable names would make my intent clear – because I understood that algorithm.
[Martin presents a rewrite of the algorithm]
Ousterhout: Unfortunately, this revision of the code creates a serious performance regression: I measured a factor of 3-4x slowdown compared to either of the earlier revisions. The problem is that you changed the processing of a particular candidate from a single loop to two loops (the increaseEach… and candidateIsNot… methods). In the loop from earlier revisions, and in the candidateIsNot method, the loop aborts once the candidate is disqualified (and most candidates are quickly eliminated). However, increaseEach… must examine every entry in primeMultiples. This results in 5-10x as many loop iterations and a 3-4x overall slowdown.
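To make the structural point concrete, here is a rough sketch in Java of the two shapes Ousterhout is contrasting. This is my own reconstruction, not the code from the book or from the discussion; only the names primeMultiples, increaseEach… and candidateIsNot… come from Ousterhout’s description, everything else is illustrative.

    class CandidateCheckSketch {
        // Earlier-revision shape: a single loop over primeMultiples that advances
        // the multiple of each known prime and bails out as soon as the candidate
        // turns out to be one of those multiples. Most composites are rejected
        // after only a few iterations.
        static boolean isPrimeSingleLoop(int candidate, int[] primes, int[] primeMultiples, int count) {
            for (int i = 0; i < count; i++) {
                while (primeMultiples[i] < candidate)
                    primeMultiples[i] += 2 * primes[i];   // advance to the next odd multiple
                if (primeMultiples[i] == candidate)
                    return false;                         // disqualified: stop immediately
            }
            return true;
        }

        // Rewrite shape: a first pass ("increaseEach…") that must advance every
        // entry in primeMultiples, then a second pass ("candidateIsNot…") that
        // does the check. The first pass has no early exit, which is where the
        // extra loop iterations and the 3-4x slowdown come from.
        static boolean isPrimeTwoLoops(int candidate, int[] primes, int[] primeMultiples, int count) {
            for (int i = 0; i < count; i++)
                while (primeMultiples[i] < candidate)
                    primeMultiples[i] += 2 * primes[i];
            for (int i = 0; i < count; i++)
                if (primeMultiples[i] == candidate)
                    return false;
            return true;
        }
    }

In the two-loop shape the cost of advancing the multiples is paid in full for every candidate, whether or not it survives, which matches the slowdown Ousterhout measured.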
It gets even more hilarious when one considers where Martin took the algorithm from, and who designed it originally:
Martin took it from Donald E. Knuth’s seminal 1984 article on Literate Programming:
http://www.literateprogramming.com/knuthweb.pdf
In this article, Knuth explains that the source code of a program should ideally be understood as a by-product of an explanation directed at humans, one that lays out reasoning, design, invariants, and so on. He presents a system which can automatically extract and assemble program source code from such a text.
Even more interesting, the algorithm was not invented by Knuth himself. It was published in 1970 by Edsger Dijkstra in his “Notes on Structured Programming” (with a second edition in 1972).
In this truly fascinating and timeless text, Dijkstra writes about software design by top-down problem decomposition, about proving properties of program modules by analysis, about using invariants to compose larger programs from smaller algorithms and to design new data types, and about how all of this makes software maintainable. He uses the prime number generation algorithm as an extended example, and he stresses multiple times that both the architecture and the invariants need to be documented in their own right to make the code understandable. (If you want that feeling of standing on the shoulders of giants, you should read what Dijkstra, Knuth, and also Tony Hoare and Niklaus Wirth wrote.)
So, Robert Martin is proven wrong here. He does not even understand, and could not properly maintain, the code from his own book. Nor did he realize that his code is hard for others to understand.
(I would highly recommend Ousterhout’s book.)



I think you missed the part where Martin clarified that the book code was just a quick refactor of some other code, and that he agrees it’s not a great example. He acknowledged that if readers can’t understand the code, it is necessarily unclear. He also fixed the performance regression in his refactor, providing a version that’s even faster than Ousterhout’s. He also made a fair case for why he thought Ousterhout’s comments were unclear, and imo he had fair points. I think I kinda prefer Martin’s final version over Ousterhout’s, but I don’t have a strong preference.
I think a lot of the disagreement came from their personal experiences, which are hard to really invalidate. In Martin’s experience, comments can become outdated or just outright state something different from what the code does. To him, these comments are misleading. Ousterhout has a different experience, and neither is invalid. I can imagine that in a codebase with more abstraction or deep method nesting, comments can more easily become outdated; whereas with less abstraction or less method nesting, it’s easier to keep comments updated as they’re more likely to be in the same file (or near it).
I think some of Ousterhout’s comments were a bit pedantic and not real concerns. I found him a bit dogmatic at the start of this discussion, though he was somewhat more open to Martin’s viewpoint near the end. Martin came across as less dogmatic than his book, and more receptive to criticism, I thought.
I did find myself agreeing with Martin a couple of times over Ousterhout. Take the clearTotals example, which resets two properties to 0. It’s then used in a method that calls clearTotals and then returns some stuff. Ousterhout prefers inlining those two assignments; Martin puts them in a clearTotals method. I prefer Martin’s solution here, or at least think Ousterhout’s solution should add a one-line comment.
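For reference, the shape of that example is roughly the following; the field and method names here are my own placeholders, not the exact code from the discussion.

    class TotalsSketch {
        private int totalA = 0;
        private int totalB = 0;

        // Martin's preference: the reset gets its own named method.
        private void clearTotals() {
            totalA = 0;
            totalB = 0;
        }

        String reportMartinStyle() {
            clearTotals();
            return buildReport();        // "then returns some stuff"
        }

        // Ousterhout's preference: inline the two assignments where they are used,
        // possibly with the one-line comment suggested above.
        String reportOusterhoutStyle() {
            totalA = 0;                  // reset the running totals before reporting
            totalB = 0;
            return buildReport();
        }

        private String buildReport() {
            return "totals: " + totalA + ", " + totalB;
        }
    }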
I don’t think Clean Code is entirely invalid, and I think some of Ousterhout’s criticism isn’t really valid. Some of the examples in the book are bad, and some of the rules, when applied dogmatically, are also bad. Ousterhout’s book is generally better, but I think it’s wrong to dismiss Martin’s professional experience outright or to “dunk” on it.
Interesting discussion!