16 Questions for the acquisition due diligence
Measuring productivity and rewriting code from scratch
Hey folks, I hope y'all are having a great week. This week's Business of Software is a lot more technical. I've been reading some software engineering books that were gathering dust in my Kindle, so I've been thinking about engineering a lot this week.
For the non-technical folks on this list, I apologize! I will try to strike a better balance between technical and non-technical content in the coming weeks.
Article of the week
Remember last year, when people "hopped on a plane" for reasons that would be unthinkable today? This week's article is from that time—a time when I was part of a team that did due diligence for a possible acquisition.
I have to admit: I was in over my head. I was new to my role, excited about the opportunity, but I had zero experience in that area. My goal was to figure out if, from an engineering perspective, the team and the product were worth acquiring. I did what I had to do: I got to work, asked for help from more experienced managers, and came up with 16 questions I should answer to guide my decision.
I think other people in the same position could benefit from these questions, so I'm sharing them in this article.
16 Questions for the Acquisition Due Diligence →
Have you ever been through the process yourself? Have any extra tips? Hit reply and let me know!
Measuring the performance of development teams
Will Larson said that every company he's worked at has, at some point, started a task force to define and measure developer productivity – and produced something unsatisfying. My experience has been similar. That's why I found the definition of performance in Accelerate quite interesting.
Their definition comprises four metrics: Deployment Frequency, Lead Time, Change Failure Rate, and Mean Time to Recover.
"In 2017 we found that, when compared to low performers, the high performers have: 46 times more frequent code deployments; 440 times faster lead time from commit to deploy; 170 times faster mean time to recover from downtime; 5 times lower change failure rate (1/5 as likely for a change to fail)" - Accelerate.
Deployment frequency: the frequency of code delivery works as a proxy for batch size, as teams that work on small batches tend to perform better. It could be measured as the number of deployments in a given period (month or sprint) or as a daily average.
Lead time: the time it takes to implement, test, and deliver a feature. A sufficiently robust ticket tracking system could help in calculating this metric.
Change failure rate: How many changes result in degraded service and require remediation (such as a hotfix, a rollback, or a fix-forward). This metric could be challenging to track unless you can trace and link incident reports to deployments.
Mean time to recover: How long does it take to resolve an incident, either by rolling back a deployment or shipping a fix? If you're already tracking your change failure rate, it's just a matter of calculating the mean time to resolve the linked incident tickets.
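As a rough sketch, the four metrics above could be computed from deployment and incident records. The data, field names, and time period below are all hypothetical, just to show the arithmetic behind each metric:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when each change was committed,
# when it was deployed, and whether it required remediation.
deployments = [
    {"committed": datetime(2020, 6, 1, 9),  "deployed": datetime(2020, 6, 1, 14), "failed": False},
    {"committed": datetime(2020, 6, 2, 10), "deployed": datetime(2020, 6, 3, 11), "failed": True},
    {"committed": datetime(2020, 6, 4, 8),  "deployed": datetime(2020, 6, 4, 9),  "failed": False},
    {"committed": datetime(2020, 6, 5, 13), "deployed": datetime(2020, 6, 5, 16), "failed": False},
]

# Hypothetical incident tickets linked to the failed deployment above.
incidents = [
    {"opened": datetime(2020, 6, 3, 11, 30), "resolved": datetime(2020, 6, 3, 13)},
]

period_days = 30  # the measurement window

# Deployment frequency: deployments per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead time: mean time from commit to deploy, in hours.
lead_time_hours = mean(
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
)

# Change failure rate: fraction of deployments needing remediation.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to recover: mean incident duration, in hours.
mttr_hours = mean(
    (i["resolved"] - i["opened"]).total_seconds() / 3600 for i in incidents
)

print(f"Deployments/day:  {deployment_frequency:.2f}")
print(f"Lead time (h):    {lead_time_hours:.1f}")
print(f"Failure rate:     {change_failure_rate:.0%}")
print(f"MTTR (h):         {mttr_hours:.1f}")
```

In practice, the hard part isn't the math: it's getting clean, linked data out of your deployment pipeline and ticket tracker in the first place.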
None of these metrics would work in a culture where people are afraid of making mistakes, or one that uses metrics as goals. If there's any incentive in place to improve a metric, it will be gamed and lose its value.
Link of the week
Things You Should Never Do, Part I by Joel Spolsky
I'm revisiting a classic software engineering article this week. It was published on April 6, 2000. That's right, 20 years ago, which in internet years is like 140 human years.
Joel describes how, in his view, the decision to rewrite the code from scratch is what killed Netscape Navigator (did I mention the article is old?). For 20 years, he has been saying that rewrites are "the single worst strategic mistake that any software company can make." Still, organizations are marching into this minefield day in and day out.
He argues that we, engineers, always want to rewrite code because of a fundamental law of programming: it's harder to read code than to write it. We invariably think code is hard to understand, and we believe we could do a much better job if we rewrote it from scratch.
Old code is often confusing because, after years of battle-testing in production, it packs hotfixes, forward-fixes, patches, and a little hack here and there.
When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.
I happen to have some experience in this area. I've been part of a large rewriting project that didn't get anywhere for three years. We tried approaching it from many angles and using different techniques, to no avail. Only when we stopped trying to rewrite the whole thing and decided to extract code, refactor, and rewrite just tiny bits did we start seeing some progress.
I do not plan to make that same mistake ever again.
That's it for this week; thanks for reading! I hope you have a great weekend, and I'll see you here again next Friday!
As always, if you feel like someone could benefit from the content here, feel free to hit that Share button below. I'll appreciate it. 🙇🏻‍♂️