We all understand the value of unit testing, so why do so few organizations maintain unit tests for their in-house applications? We can no longer pretend that unit testing is a universal panacea for producing less-buggy applications. Instead, we should be prepared to actively justify the use of unit tests, and be savvier about where in the development cycle unit-testing resources are most effectively spent.
Despite the pervasive dictum within software engineering that unit tests must exist for all code, such tests are little used in the development of enterprise applications. In over fifteen years of consulting, I can count on the fingers of one hand the number of organizations maintaining unit tests for their applications. Why do so many organizations ignore such a popular software practice? To answer this question we first need to explore the myths and real-world practices of unit testing, and then describe the points within development where its value is well established.
Myths
Two of the more widely held myths that prop up the dogma of unit testing concern its professed benefits. The first, and most loudly claimed, states that unit testing inevitably saves money. The second, zealously professed by many developers and engineers, promises a reduction in the bug count.
The Saving Money Myth
The idea that unit testing inevitably lowers the cost of application development rests on the reasonable assumption that fixing a bug as soon as possible saves money. Graphs such as the one below typically compare two applications' costs to support this claim. Each solid black line represents an application's total cost over time.
The application labeled “No Unit Tests” starts out with lower expenses than the one labeled “Unit Tests”, as shown by the red line. That is hardly surprising, since writing the tests themselves costs developer time. Over time, though, the “No Unit Tests” application costs more money because of increased support, bug fixes, deployment, lost customers, and so on. According to this assumption about cost savings, organizations can even calculate the “Breakeven” point at which their unit testing pays off, and at any time past this point they can read off the total savings earned from writing unit tests, as shown by the green line on the far right.
There is a problem with the assumptions underlying these cost profiles: they do not factor in the financial repercussions of delaying the delivery of today's enterprise applications.
The first problem stems from the increasing importance of enterprise applications to an organization's profitability. For example, it is not unusual for a company to build some widget that will save identifiable staff several hours a day. Delaying the widget's release in order to write unit tests causes not only angst but measurable lost productivity. I have rarely seen a developer convince an end user that delaying a much-anticipated feature is justified just to prevent a few potential bugs.
The second problem is the cost associated with the consequences of writing tests. Time expended coding unit tests keeps other features unbuilt and idling on the backlog, and it disrupts the development process. The work of implementing unit tests cannot readily be scaled up to prevent this backlog because, in most organizations, only one or two people possess the domain knowledge to add new or enhanced features to a specific application. When these developers code unit tests, they are not coding new features.
When we update our cost assumptions for unit tests to include just these two factors, the graph changes as shown below. The “Unit Tests” application's total-cost-over-time line shifts up, with dramatic implications.
Suddenly the savings advantage of the “Unit Tests” application shrinks. To make matters worse, it also takes longer for an organization to realize these diminished savings, as shown by the rightward movement of the “Breakeven” point. No wonder that IT management taking a financial viewpoint will wish to minimize unit testing: adding tests to a project may not save the organization as much money as it had previously calculated.
The Reduce Bugs Myth
The ‘reduce bug count’ myth originated in the early days of software engineering, springing from the reality of working with the tools and technologies available at the time.
When enterprise applications were built 20 years ago with C and C++, unit tests helped to minimize the number of bugs escaping into the wild. Even the crudest test caught defects attributable to undeclared variables, mixed-case method names, missing header files, calling functions without parentheses, and the like.
With today's improved integrated development environment (IDE) tools and managed-memory runtimes, many of the lifesaving C/C++ checks once handled via unit testing became superfluous. The technologies of choice for enterprise applications, such as .NET and Java, placed many golden-oldie bugs on the extinction list. Unit testing was no longer required to catch all those pesky technical code flaws.
Improved technology further eroded unit testing's standing as the premier bug-management tool by promoting software construction practices that leverage multiple components to create an enterprise application. Dynamically cobbling together different data stores, libraries, and web services via dependency injection, configuration files, and plug-ins spawned an entirely new breed of defects against which unit testing proved only marginally effective.
The graphic below depicts a simplified scenario in which “SomeMethod” pulls from several components to complete its task.
In such a scenario, only some form of integration or user-acceptance testing will verify that “SomeMethod” behaves appropriately. Properly constructed unit tests only ensure that a component works as coded. Service X and Service Y may each pass their respective unit tests, yet still fail to produce the desired results when end users execute “SomeMethod” within a specific context.
Practices
If the preceding discussion leaves one believing that unit tests provide only mythical benefits, real-world experience suggests otherwise. In several circumstances they play a critical role in helping to deliver enterprise applications. This section discusses a few of them.
Building Libraries
Even the most powerful frameworks and runtimes omit features that an organization requires. Because of this, organizations wisely create their own custom libraries for all their developers. As these libraries change over time, whether via refactoring or feature updates, unit testing is essential for quickly validating each change, and can save an organization a great deal of time and money.
Enhancing Technology
Sometimes the required technology cannot work its magic alone. Unaided, it allows developers to write buggy code all too easily. In such situations, enhancing the technology with unit testing comes to the rescue.
Developing browser-based user interfaces is an example of this situation. Despite the existence of such powerful plug-ins for building web-based user interfaces as Flex and Silverlight, HTML and JavaScript remain the tools of choice for responsive web applications. Although JavaScript is powerful, it looks more like C++ than C#. This means developers need unit tests to avoid the classic pitfalls: loose type checking, improperly cased names, overwritten functions, and so on.
Helping Startup Projects
When a new project with several engineers starts from scratch, it initially generates much code that must ultimately be synchronized. Until all of the project's components are reconciled, each engineer usually needs a way to execute their code without the other components. Unit-testing tools and products can provide exactly that.
As an aside, I find that test-driven development (TDD) demonstrates this value proposition well. It, too, does not require developers to have all of their components' dependencies lined up and running. TDD allows engineers to write a lot of code productively and safely, courtesy of unit testing, with only the scantiest knowledge of the project's other components.
Implementing Smoke Tests
Regularly deploying code to any environment can be a risky proposition for even the smallest fix or feature. We repeatedly relearn this fact. (Remember that little tweak that broke the entire application and left the testing staff without work?) In order to combat this problem many organizations write a few key unit tests which are automatically executed immediately after a build and before deployment. These few “smoke tests” provide enough coverage to ensure the basic functionality of the application.
If smoke testing sounds like continuous integration (CI), there is a reason: they share similar objectives, albeit at vastly different scope. CI typically demands organizational commitment and resources; incorporating smoke tests typically requires a few commands from the build manager.
Writing Clearer Code
Crafting unit tests forces developers to think as consumers of their own code. This gives them extra motivation to build an application programming interface (API) that is clear and easy to use.
The drive to write better unit tests generally motivates better design, too. By incorporating unit tests into their components, developers are encouraged to follow many good software-design tenets, such as avoiding hidden dependencies and coding functions with a well-defined mission.
Note: Bug Management Practices
If unit testing provides less-than-ideal bug-management support, what options exist for software developers and engineers? Once integration and UAT tests are complete, prevailing practice suggests two broad strategies for managing the bugs that remain: one passive and one active.
The passive policy amounts to the organization providing a full-featured help desk. In such an environment, end users report bugs, which get triaged until resolved. While an effective practice from a managerial perspective, it tends to frustrate end users and place developers in a reactive mode.
Actively managing bugs is an alternative strategy of growing popularity. It requires applications to self-report exceptions and similar failures, whether via home-grown or third-party tools such as Red Gate's SmartAssembly. This strategy acknowledges the difficulty of preventing bugs, on the belief that learning about them as soon as possible, without waiting for explicit user complaints, mitigates the pain.
Conclusion
Forgive me if I have left any readers thinking ill of unit testing. Nothing could be further from my intent! The goal was to challenge the premise that writing unit tests is a wise practice in all contexts. Because the IT culture within the enterprise-application space is no longer so uncritically accepting of the value of unit testing, it is up to software engineers and developers to actively justify the cost of building unit tests in terms of benefits to the enterprise. I have noted in this article several circumstances where unit testing is vital for the timely delivery of robust applications, and I strongly suspect that many enterprise-application engineers and developers who read this will think of more.
We can no longer rely on a general acceptance of the myth that unit testing is a universal panacea, but need to focus unit testing on aspects of development where it is most effective, and be prepared to actively justify its use.