
Be a multiplier

You may have heard the analogy that some software engineers add productivity, while others multiply it. Today I’d like to dig a little deeper into this and share my own thoughts.

What does it mean?

For those who haven’t heard the phrase before, let me try to unpack my understanding of it. Consider a tale of two programmers – let’s call them Alice and Bob, to pick two arbitrary names. Alice’s boss gives her a task to do: she is told to add a new thingamajig to the whatchamacallit code. She’s diligent, hardworking, and knows the programming language inside out. She’s had many years of experience, and especially knows how to add thingamajigs to whatchamacallits, because she’s done it many times before at this job and her last job. In fact, she was hired in part because of her extensive experience and deep knowledge of whatchamacallits, and at this company alone she must have added over a hundred thingamajigs.

Because of her great skill and experience, she gives her boss an accurate time estimate: it’s going to take her one week. She knows this because she did almost exactly the same thing two weeks ago (as she has many times before), so she’s fully aware of the amount of work involved. She knows all the files to change, all the classes to reopen, and all the gotchas to watch out for.

One week later, she’s finished the task exactly on schedule. Satisfied with the new thingamajig, her boss is happy with the value she’s added to the system. Her boss is so glad he hired her, because she’s reliable, hardworking, and an expert at what she does.

Unfortunately for the company, Alice’s great experience gets her head-hunted by another company, where she’s offered a significantly higher salary and accepts immediately. The company mourns the loss of one of their greatest, who soon gets replaced by the new guy – Bob.

Bob is clearly wrong for the job by all standards, but some quirk of the job market and the powers-that-be landed him in Alice’s place. He has no prior experience with whatchamacallits, let alone thingamajigs. And he doesn’t really know the programming language either (but he said he knows some-weird-list-processing-language-or-something-I-don’t-remember-what-he-said-exactly, and said that he’d catch on quickly). His new boss is very concerned and resents the hire, but the choice was out of his hands.

On his first week, his boss asks him to add a thingamajig to the whatchamacallit code, as Alice had done many times. He asks Bob how long it will take, but Bob can’t give a solid answer – because he’s never done it before. It takes Bob an abysmal 2 weeks just to figure out what thingamajigs are exactly, and why the business needs them. He keeps asking questions that seem completely unnecessary, digging into details that are completely irrelevant to the task. Then he goes to his boss and says it will take him 3 weeks to do it properly. “3 weeks! OMG, what have I done? Why did we hire this idiot?”

There’s not much to be done except swallow the bitter pill. “OK. 3 weeks”. It’s far too long. The customers are impatient. But, “oh well, what can you do?”

3 weeks later Bob is not finished. Why? Well again, he’s never done this before. He’s stressed. He’s missing deadlines in his first months on the job, and everyone’s frustrated with him. When all is said and done, and all the bugs are fixed, it takes him 2 months to get this done.

By now there is a backlog of 5 more thingamajigs to add. His boss is ready to fire him, but he optimistically dismisses the 2 months as a “learning curve”, and gives Bob another chance. “Please add these 5 thingamajigs. How long will it take you?”

Bob can’t give a solid answer. He swears it will be quicker, but can’t say how long.

The next day Bob is finished adding all 5 thingamajigs. It took him 30 minutes to add each one, plus a few hours debugging some unexpected framework issues. What happened? What changed?

What happened is that during the first 10 weeks Bob spent at his new job, he had been solving a big problem he noticed almost immediately. There were 150 thingamajigs in the whatchamacallit codebase, and they all followed a very similar pattern. They all changed a common set of files, with common information across each file. The whole process was not only repetitive, but prone to human error because of the amount of manual work required. Bob did the same thing he’s always done: he abstracted out the repetition, producing a new library that lets you define just the minimal essence of each thingamajig, rather than having to know or remember all the parts that need to be changed manually.

To make things even better, Charlie – another employee who was also adding thingamajigs – can use the same library and achieve similar results, also taking about 30 minutes to add one thingamajig. So now Charlie can handle the full load of thingamajig additions, leaving Bob to move on to other things.

Don’t do it again

The development of the new library took longer than expected, because Bob had never done it before. This is the key: if you’ve done something before, and so you think you have an idea of the work involved in doing it again, this may be a “smell” – a hint that something is wrong. It should light a bulb in your mind: “If I’ve done this before, then maybe I should be abstracting it rather than doing almost the same thing again!”

You could say, in a way, that the best software engineers are the ones that have no idea what they’re doing or how long it will take. If they knew what they were doing, it means they’ve done it before. And if they’ve done it before then they’re almost by definition no longer doing it – because the best software engineers will stop repeating predictable tasks and instead get the machine to repeat them [1].

Adding and Multiplying

In case you missed the link to adding and multiplying, let’s explore that further. Let’s assign a monetary value to the act of adding a thingamajig. As direct added value to the customer, let’s say the task is worth $10k, to pick a nice round number ($1k of that goes to Alice, and the rest goes to running expenses of the company, such as paying for advertising). Every time Alice completed the task, which took her a week, she added $10k of value. This means that Alice was adding productive value to the company at a rate of $250 per hour.

Now Bob doesn’t primarily add value by doing thingamajigs himself, but instead develops a system that reduces an otherwise 40-hour task to 30 minutes. After that, every time a thingamajig is added, by anyone, $10k of value is added in 30 minutes. Bob has multiplied the productivity of thingamajig-adders by 80 times. In a couple more weeks, Bob would be able to add more value to the company than Alice did during her entire career [2].

Is it unrealistic?

The short answer is “no”. Although the numbers are made up, the world is full of productivity multipliers, and you could be one of them. Perhaps most multipliers don’t add 7900% value, but even a 20% value increase is a big difference worth striving for.

The laws of compound interest also apply here. If every week you increase 10 developers’ productivity by just 1%, then after a year the team is producing the equivalent of almost 7 extra developers’ work every day, and after 2 years roughly 18.
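For the sceptical, here’s the back-of-the-envelope compounding behind those numbers (a throwaway sketch – the 10 developers and the weekly 1% are as made up as everything else in this post):

    using System;

    class CompoundingDemo
    {
        static void Main()
        {
            const int developers = 10;
            const double weeklyGain = 0.01;                        // 1% productivity gain per week

            double afterOneYear  = Math.Pow(1 + weeklyGain, 52);   // ~1.68x per developer
            double afterTwoYears = Math.Pow(1 + weeklyGain, 104);  // ~2.81x per developer

            // Extra "developers' worth" of output the team now produces:
            Console.WriteLine(developers * (afterOneYear - 1));    // ~6.8
            Console.WriteLine(developers * (afterTwoYears - 1));   // ~18.1
        }
    }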

The alternative

What happens if Bob was never hired? Would the company crash?

Perhaps, but perhaps not. What might happen is that Microsoft, or some big open source community, would do the multiplying for you. They would release some fancy new framework that does thingamajigging even better than the way Bob did it, because they can dedicate many more people to the task of developing the library. The company would take 5 years before deciding to start using the fancy new framework, in part because nobody on the team knew about it, and in part because by then they’d have 250 thingamajigs to migrate and the expected risks would be too high for management to accept. But in the end, most companies will catch on to new trends, even if they lag behind and get trodden on by their competitors.

Final notes

In the real world, it’s hard to tell Alice from Bob. They’re probably working on completely different projects, or different parts of the same project, so they often can’t be directly compared.

From the outside it just looks like Bob is unreliable. He doesn’t finish things on time. A significant amount of his work is a failure, because he’s pushing the boundaries of what’s possible. The work that is a success contributes to other people’s success as much as his own, so he doesn’t appear any more productive relative to the team. He also isn’t missed when he leaves the company, because multiplication happens over time. When he leaves, all his previous multiplicative tools and frameworks are still effective, still echoing his past contributions to the company by multiplying other people’s work. Whereas when an adder leaves the company, things stop happening immediately.

Who do you want to be – an adder or a multiplier?


  1. This is not entirely true, since there is indeed some pattern when it comes to abstracting code to the next level, and those who have this mindset will be able to do it better. Tools that should come to mind include generics, template metaprogramming, proper macro programming, reflection, code generators, and domain-specific languages.

  2. How much more do you think Bob should be paid for this? 

Needy Code

Today I was reading about an implementation of a MultiDictionary in C# – a dictionary that can have more than one value with the same key, perhaps something like the C++ std::multimap. It all looks pretty straightforward: a key can have multiple values, so when you index into the MultiDictionary you get an ICollection<T> which represents all the values corresponding to the key you specified. This got me wondering: why did the author choose to use an ICollection<T> instead of an IEnumerable<T>? This wasn’t a question specific to MultiDictionary, but more of a philosophical musing about the interfaces between different parts of a program.

What is IEnumerable/ICollection?

In case you’re not familiar with these types, the essence is that IEnumerable<T> represents a read-only sequence of items (of type T), which can be iterated sequentially using foreach. Through the use of LINQ extension methods we can interact with IEnumerable<T> at a high level of abstraction. For example we can map an IEnumerable<T> x using the syntax x.Select(...), and there are methods for many other higher-order functions. Another great thing about IEnumerable<T> is that its type parameter T is covariant. This means that if a function is asking for an IEnumerable<Animal>, you can pass it an IEnumerable<Cat> – if it needs a box of animals, a box of cats will do just fine.
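Here’s a minimal sketch of both points – the high-level LINQ mapping and the covariance – using toy Animal and Cat classes invented purely for illustration:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Animal { public string Name = ""; }
    class Cat : Animal { }

    static class EnumerableDemo
    {
        // Asks only for a read-only sequence of animals.
        static void PrintNames(IEnumerable<Animal> animals)
        {
            foreach (var a in animals)
                Console.WriteLine(a.Name);
        }

        static void Main()
        {
            IEnumerable<Cat> cats = new List<Cat> { new Cat { Name = "Felix" } };

            // Covariance: an IEnumerable<Cat> is happily accepted where an
            // IEnumerable<Animal> is expected.
            PrintNames(cats);

            // LINQ lets us work at a higher level of abstraction, e.g. mapping:
            IEnumerable<string> names = cats.Select(c => c.Name);
            Console.WriteLine(names.Count());
        }
    }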

An ICollection<T> is an IEnumerable<T> with some extra functionality for mutation – adding and removing items from the collection. This added mutability comes at the cost of covariance: the type parameter of ICollection<T> is invariant. If a function needs an ICollection<Animal>, I can’t give it an ICollection<Cat> – if it needs storage for animals, I can’t give it cat storage because it might need to store an elephant.
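The same toy classes show why the compiler has to refuse the equivalent conversion for ICollection (again, just a sketch):

    using System.Collections.Generic;

    class Animal { }
    class Cat : Animal { }
    class Elephant : Animal { }

    static class CollectionDemo
    {
        // Needs somewhere to *store* animals, so it demands ICollection<Animal>.
        static void Shelter(ICollection<Animal> storage)
        {
            storage.Add(new Elephant());   // perfectly legal for an ICollection<Animal>
        }

        static void Main()
        {
            ICollection<Cat> cats = new List<Cat>();

            // Shelter(cats);  // does not compile: ICollection<T> is invariant,
            //                 // otherwise an Elephant could end up in the cat list.

            Shelter(new List<Animal>());   // fine
        }
    }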

What to choose?

IEnumerable<T> is a more general type than ICollection<T> – more objects can be described by IEnumerable<T> than by ICollection<T>. The philosophical question is: should you use a more general type or a more specific type when defining the interface to a piece of code?

So, if you’re writing a function that produces a collection of elements, is it better to use IEnumerable or ICollection [1] (assuming that both interfaces validly describe the objects you intend to describe)? If you produce an IEnumerable then you have fewer requirements to fulfill, but are free to add more functionality that isn’t covered by IEnumerable. That is, even if the code implements all the requirements of ICollection, you may still choose to deliberately only expose IEnumerable as a public interface to your code. You are committing to less, but free to do more than you commit to. This seems like the better choice: you’re not tied down because you’ve made fewer commitments. In the future if you need to change something, you have more freedom to make more changes without using a different interface.

If you are consuming a collection of elements, then the opposite is true. If you commit to only needing an IEnumerable then you can’t decide later that you now need more – like deciding you need to mutate it. You can demand more and use less of what you demanded, but you can’t demand less and use more than you asked for. From this perspective, even if your code only relies on the implementation of IEnumerable, it seems better to expose the need for ICollection in its interface, because that doesn’t tie your code down as much.
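To make the two positions concrete, here’s a hypothetical sketch (the function names are mine, not from the MultiDictionary code): the producer commits only to IEnumerable even though it builds a List internally, while the needy consumer demands an ICollection it doesn’t strictly need yet.

    using System.Collections.Generic;
    using System.Linq;

    static class GenerosityDemo
    {
        // Producer: internally builds a List<int>, but commits only to IEnumerable<int>,
        // so the internal collection type can change later without breaking callers.
        static IEnumerable<int> ProduceSquares(int count)
        {
            var result = new List<int>();
            for (int i = 0; i < count; i++)
                result.Add(i * i);
            return result;
        }

        // Needy consumer: demands ICollection<int> even though today it only reads,
        // leaving itself room to mutate the collection in a future version.
        static int SumAll(ICollection<int> numbers)
        {
            return numbers.Sum();
        }

        static void Main()
        {
            IEnumerable<int> squares = ProduceSquares(10);

            // SumAll(squares);  // won't compile: we only hold an IEnumerable<int>
            System.Console.WriteLine(SumAll(squares.ToList()));   // caller supplies the richer type
        }
    }

Notice the friction in Main: because the consumer demanded more than the producer committed to, the caller has to bridge the gap with ToList() – which is exactly the reusability cost discussed next.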

This strikes me as very similar to producers and consumers in the real world. The cost of producing a product benefits from committing to as little as possible, while the “cost” of consuming a product benefits from demanding as much as possible. But there is also another factor: a consumer who demands too much may not be able to find the product they’re looking for. That is, more general products can be used by more people. In software terms: a function or class with very specific requirements can only be used in places where those very specific requirements are met – it’s less reusable.

Is it Subjective?

Perhaps this all comes down to a matter of taste or circumstance. Code that’s very “needy” [2] is less reusable, but easier to refactor without breaking the interface boundaries. Code that’s very “generous” (the opposite: it produces very specific types but consumes very general types) is more reusable but more restricted and brittle in the face of change.

I personally prefer to write code that is very generous. That is, given the same block of code, I write my interfaces to the code such that I consume the most general possible types but produce the most specific possible types. In many ways this seals the code – it’s very difficult to change it without breaking the interface. But on the other hand, it makes different parts more compatible and reusable. If I need to make a change, I either change the interface (and face the consequences), or I simply write a new function with a different interface.

I also think that it may be better to write very generous interfaces to code that is unlikely to change and will often be reused, such as in libraries and frameworks. Domain-specific code which is likely to change across different versions of the program may benefit from being more needy.

MultiDictionary

Although this whole discussion is not specifically about the MultiDictionary, the idea was seeded by my encounter with MultiDictionary. So let’s spend a moment considering how I might have implemented MultiDictionary myself.

Firstly, most of this discussion so far has been centered on how you would define the interface, or function signature, of a piece of code that either produces or consumes generic types. I haven’t really gone into what that code should actually look like, but rather what its interface should look like given that the code already has intrinsic demands and guarantees. If we apply this to the MultiDictionary, then it probably has its most generous signature already. The code already implements ICollection, so it’s generous to expose it in its interface. The dictionary itself doesn’t really consume any interfaces beyond its basic insert and delete functions, so we can’t talk about its generosity as a consumer [3].

On the other hand, if I had written the code myself I possibly would have exposed only IEnumerable. Not because I write needy interfaces [4], but because it just seems like a better representation of the underlying principle of a multi-dictionary. As the code stands, there are two different ways of adding a value to the dictionary: one by calling the add method on the dictionary, and one by calling the add method of the collection associated with a key. This duplication strikes me as a flaw in the abstraction: the MultiDictionary is probably implemented using a Dictionary and some kind of collection type, such as a List (I imagine Dictionary<TKey, List<TValue>>). It could just as easily, although perhaps not as efficiently, have been implemented as a List of key-value pairs (List<KeyValuePair<TKey,TValue>>). Would the author still have exposed ICollection<TValue>? Probably not. This seems like a leaky abstraction in some ways, and I would probably have avoided it. In my mind, exposing IEnumerable<TValue> is a better choice in this particular case. The neediness would actually simplify the interface, and make it easier to reason about.
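To make that concrete, here’s a rough sketch of the kind of multi-dictionary I have in mind – not the actual MultiDictionary implementation, just my guess at the Dictionary<TKey, List<TValue>> approach, with only IEnumerable<TValue> exposed:

    using System.Collections.Generic;
    using System.Linq;

    // A minimal multi-dictionary that only ever hands out IEnumerable<TValue>,
    // so the dictionary's own Add is the only way to put values in.
    class SimpleMultiDictionary<TKey, TValue>
    {
        private readonly Dictionary<TKey, List<TValue>> _items =
            new Dictionary<TKey, List<TValue>>();

        public void Add(TKey key, TValue value)
        {
            if (!_items.TryGetValue(key, out var list))
            {
                list = new List<TValue>();
                _items[key] = list;
            }
            list.Add(value);
        }

        // Exposing IEnumerable<TValue> rather than ICollection<TValue> means callers
        // can't mutate the dictionary through the values they get back.
        public IEnumerable<TValue> this[TKey key]
        {
            get
            {
                return _items.TryGetValue(key, out var list)
                    ? (IEnumerable<TValue>)list
                    : Enumerable.Empty<TValue>();
            }
        }
    }

There’s now exactly one way to add a value – through the dictionary’s own Add – and, as far as the interface is concerned, the values you get back are read-only.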

Perhaps this is about not saying more than you mean. You define upfront what you mean by a MultiDictionary, and then you describe exactly that in the interface, before implementing it in code. If you mean it to be a set of collections, then ICollection is great. If you mean it to be a Dictionary which happens to support duplicate keys, then ICollection seems to say too much.

 


  1. For the remainder of the post I will use the term IEnumerable to mean IEnumerable<T>, and the same for ICollection 

  2. “Needy” = produces very general types but consumes very specific types 

  3. Actually, I could argue that since ICollection is mutable and accepts data in both directions, the exposing of ICollection makes it a consumer in a way that IEnumerable wouldn’t, but I won’t go into that 

  4. Perhaps a better word here would be stingy 

The speed of C

Plenty of people say C is fast [1][2]. I think C is faster than other languages in the same way that a shopping cart is safer than a Ferrari.

Yes, a shopping cart is safer.

… but it’s because it makes it so hard to do anything dangerous in it, not because it’s inherently safe.

I wouldn’t feel comfortable in my shopping cart rattling around a Ferrari race track at 150 mph, because I’d be acutely aware of how fast the tarmac is blazing past my feet and how inadequate my tools are to handle things that might go wrong. I would probably feel the same way if I could step through the inner workings of a moderately sized Python program through the lens of a C decompiler.

To put it another way: almost everything that makes a program in language XYZ slower than one in C probably uses a feature which you could have synthesized manually in C (as a design pattern instead of a language feature), but that you chose not to simply because it would have taken you painfully long to do so.

I bet you could write C-like code in almost any language, and it would probably run at least as fast (with a reasonable optimizer). You could also drive a Ferrari as slowly as a shopping cart and it would be at least as safe. But why would you?


[1] 299 792 458 m/s
[2] the_unreasonable_effectiveness_of_c.html