Efficient SDLC

To be tested continuously, as the term CI suggests, the code under test must run fast, fail fast, be as idempotent as possible, be easily configurable, and produce adequate, quickly actionable output and logging

  • If requirements like these are taken into account up-front, they dramatically change the way we think about the code we release. They may sound like irrelevant concerns at the beginning of a development cycle, but practice shows that in the end, software that satisfies such requirements is stronger, more resilient, and of higher quality. It also means we are “forced” to think about the big picture even when “simply coding” in our seemingly isolated environments
  • If the code we release is to be tested continuously, it must be “nice” to the database, file system, and any limited or expensive resource it may hit. It is also very desirable that the state of any “external” resource be left intact after the test (not always possible, but at least desirable)
  • Notifications of test failures should be concise and well targeted – this is why we should be concerned with efficient and precise error logging and reporting at all stages of development
  • During Agile planning it is very useful to keep Unit Testing in mind. If the scope of a Task makes a Unit Testing effort impossible, it could be a warning sign and prompt an additional breakdown of the Task or Story (not a rule set in stone, but a useful point to keep in mind)
  • Thinking of Fast tests vs. Slow tests, or Unit Tests vs. Integration Tests, and their effects on the CI process helps to optimize that process, make the most of it, and, in addition, better understand the code base itself
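The point above about being “nice” to external resources can be sketched in a few lines. This is a hypothetical harness (the file-processing “code under test” and all names are illustrative): it exercises logic that touches the file system, and guarantees in a finally block that the external state is restored no matter how the test ends.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical example: the code under test writes to a scratch file, and the
// harness guarantees the file system is left exactly as it was found.
class ScratchFileHarness {

    static String runWithScratchFile(String input) {
        try {
            Path scratch = Files.createTempFile("unit-test-", ".tmp");
            try {
                Files.writeString(scratch, input.toUpperCase()); // "code under test"
                return Files.readString(scratch);
            } finally {
                Files.deleteIfExists(scratch);                   // restore external state
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The same shape applies to database rows or queues: acquire, exercise, and unconditionally restore in the finally block, so the next test (or the next CI run) starts from a clean slate.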


Knowledge Transfer and Documentation by example

  • One of the best ways to explain how a system works internally is to show a fellow engineer a few running Unit Tests. Of course, tests can be too fine-grained to reveal the big picture. This is why it is always good to have a few very straightforward, high-level tests that simply demonstrate the essence of a feature or selected piece of logic. We can think of them as Demo Tests, or Spec Tests. In a team environment, and especially in a distributed team environment, this can mean a lot
  • A good Unit Test self-documents the code and can efficiently complement or illustrate any Javadoc entry. More importantly, “documentation” of this kind will naturally be kept up to date – otherwise the build will break 🙂
  • A good Unit Test can demonstrate not only a feature, but the idea or concept behind it
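A Demo/Spec Test of the kind described above might look like this. Everything here is illustrative (the pricing rule is invented for the example): the point is that the test reads like a short demo of the feature’s essence, not like fine-grained coverage.

```java
// A hypothetical "Spec Test": deliberately high-level, it reads like a short
// demo of the feature rather than exhaustive coverage. The pricing rule and
// all names are illustrative, not from a real system.
class PricingSpecTest {

    // Feature essence: orders above 100.00 (10,000 cents) get 10% off.
    static long priceWithDiscountCents(long totalCents) {
        return totalCents > 10_000 ? totalCents * 90 / 100 : totalCents;
    }

    static void specLargeOrdersGetTenPercentOff() {
        assert priceWithDiscountCents(20_000) == 18_000; // big order: discounted
        assert priceWithDiscountCents(5_000) == 5_000;   // small order: full price
    }
}
```

A newcomer can read the two assertions and understand the feature in seconds – which is exactly the knowledge-transfer role a Spec Test plays.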


POC and R&D work, plus some non-standard applications

  • When prototyping a new feature or doing preliminary research, we often end up writing standalone programs that eventually become throwaway code. Implementing these tasks as Unit Tests has certain merit: they become part of the code base (but not of the production code line), they are easily executable within the standard testing framework, available to be revisited later, and discoverable by others for review and study (again, this does not apply to every situation, but it is something to think about)
  • Many proprietary frameworks and 3rd-party systems feature internal or proprietary configuration parameters that frequently lack documentation. We often make use of such parameters, usually after considerable effort to tune them for the needs of our applications. But those parameters and their effects may change from release to release, or from environment to environment (for example, Oracle DB and its optimizer with its proprietary hints, or the merely suggested JDBC fetch sizes used for batch processing). If the effects and baselines of such “boundary” configurations are captured as part of automated Unit Testing, any unwanted change in behaviour can be caught proactively, before it reaches the Production environment. For example, a Unit Test can automatically produce and capture the explain plan for a query used in Java code, and assert the absence of Full Table Scans. This technique is also useful for detecting changes in data selectivity that affect the applicability of existing indexes.
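The assertion side of that explain-plan idea can be sketched as pure logic. In a real test the plan text would be captured via JDBC (on Oracle, for instance, by running `EXPLAIN PLAN FOR <query>` and reading the output of `DBMS_XPLAN.DISPLAY`); here we only show the guard over an already-captured plan string, with sample plan fragments invented for illustration.

```java
// Sketch of the assertion logic only: planText is assumed to be the textual
// explain plan already captured from the database by the test setup.
class ExplainPlanGuard {

    static boolean hasFullTableScan(String planText) {
        return planText.toUpperCase().contains("TABLE ACCESS FULL");
    }

    static void assertNoFullTableScan(String planText) {
        if (hasFullTableScan(planText))
            throw new AssertionError("Full table scan detected:\n" + planText);
    }
}
```

Run against a baseline query in CI, a guard like this turns a silent optimizer regression into an immediate, well-targeted build failure.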


Efficient bug fixing and troubleshooting

  • When Unit Test coverage of a module or feature is sufficient, we can often completely avoid the expensive cycle of deploying and running the application in a container, the requirements of database connectivity and other configuration, and the need to attach to the running process with an IDE or debugger of choice. Running a few unit tests (even after some modifications) to reproduce the scenario, based on data collected from error logs or from user input, may be all that is required to locate the culprit


Happy Unit Testing!

Formal Unit Testing is usually treated as a trivial and boring subject concerned with tedious testing of low-level implementations. Developers often see it as overhead – something that takes time away from feature development and requires constant maintenance as code changes and evolves. But it is not so, or at least this is not the whole story…


The truth is that, approached properly, Unit Testing becomes a powerful method or technique that not only assures a certain quality of the end product, but naturally enlightens all stages of the Software Development Life Cycle (SDLC). It is not something that needs to be done at the end of the development cycle, but rather a discipline that should be practiced from the very inception of the design process. And as such, it becomes a fundamental design principle in its own right – just like the famous Open/Closed Principle, for example. For the sake of this post, let’s name this design principle the Principle of Testability (PoT).


This principle by no means guarantees “bug-free software.” Rather, it implies that the software product should be designed and implemented in such a way that it can be efficiently tested on any chosen level and in any chosen environment (from a local machine to a CI server)! It also promises that if we focus on achieving and retaining Testability throughout the development cycle, the whole software ecosystem naturally aligns, self-organizes, and yields high-quality results on many levels.


Let’s now review a few practical examples that illustrate how PoT naturally leads toward strong design and implementation solutions


Selected design principles and best practices

  • Favoring Composition over Inheritance

It is much easier to test code designed around Composition than code based on Inheritance – when we can “take the system apart,” we can focus on testing its individual parts in isolation, which is in fact the definition of Unit Testing. Thinking of PoT will innately promote Composition
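A minimal sketch of that idea, with all names invented for illustration: the calculator is assembled from a small TaxPolicy part rather than inheriting behaviour, so each piece can be handed its own focused test.

```java
// Composition sketch: PriceCalculator is composed of a TaxPolicy it receives,
// so FlatTax and PriceCalculator are each testable in isolation.
interface TaxPolicy {
    long taxOnCents(long cents);
}

class FlatTax implements TaxPolicy {
    private final int percent;
    FlatTax(int percent) { this.percent = percent; }
    public long taxOnCents(long cents) { return cents * percent / 100; }
}

class PriceCalculator {
    private final TaxPolicy taxPolicy;   // composed, not inherited
    PriceCalculator(TaxPolicy taxPolicy) { this.taxPolicy = taxPolicy; }
    long totalCents(long netCents) { return netCents + taxPolicy.taxOnCents(netCents); }
}
```

Had PriceCalculator instead extended a tax-calculating base class, the tax logic could not be exercised (or replaced) without dragging the whole hierarchy into every test.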

  • Favoring Convention over Configuration

From such trivial aspects as a naming convention for discovering tests by name pattern (e.g., FooManager vs. FooManagerTest), to conventions for discovering properties and context files at certain locations at runtime – lightweight Conventions can be an extremely useful and efficient tool for a multipurpose testing framework. PoT leads to preferring flexible Conventions over involved Configurations – and not only in the domain of testing, but potentially in the respective areas of the software product itself
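The FooManager/FooManagerTest convention above amounts to a couple of lines of logic – which is exactly its appeal: discovery by pattern, with no configuration file listing the tests.

```java
// Minimal sketch of a naming convention: the test class for a given class is
// derived purely by pattern, so no explicit configuration is needed.
class TestNameConvention {

    static String testClassFor(String className) {
        return className + "Test";
    }

    static boolean looksLikeTestClass(String className) {
        return className.endsWith("Test");
    }
}
```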

  • Adhering to the Open/Closed Principle

All software systems evolve over time and require changes and modifications. Focusing on PoT will naturally promote keeping public APIs sealed, as well as leaving their original tests intact. Instead of changing existing code and making a considerable effort to ensure backward compatibility by modifying all corresponding tests, it is simply wiser and safer to add new flavors of the APIs and complement them with a number of new tests – which, in essence, is favoring OCP over other possible approaches
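In code, the “add a new flavor instead of changing the sealed one” move might look like this (the ReportService and its rendering rule are invented for the example):

```java
// OCP sketch: the original API stays untouched; a new flavor is added next to
// it and covered by new tests of its own.
class ReportService {

    // Original, sealed API - existing callers and their tests stay intact.
    String render(String title) {
        return "[" + title + "]";
    }

    // New flavor added alongside, instead of modifying render(String)
    // and having to rework all of its existing tests.
    String render(String title, boolean uppercase) {
        String base = render(title);
        return uppercase ? base.toUpperCase() : base;
    }
}
```

The old tests for `render(String)` keep passing unmodified, which is itself evidence that backward compatibility was preserved.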

  • Adopting IoC and DI

For efficient testing of the core logic in classes that depend on other classes and resources, it is extremely important to be able to exclude all unrelated dependencies at runtime. For example, a database-based audit trail or logging may be completely irrelevant when testing the actual business logic of a given method; but because auditing is an integral part of the method, it requires full setup and access to the database during the test. When DI is used, we can easily provide an alternative audit manager implementation that does nothing at runtime. Thinking of these concerns up-front will naturally lead toward the use of IoC and DI, enabling mocking and stubbing later on for the sake of testing
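Here is a plain-Java sketch of that audit example (all names illustrative): the production AuditTrail would write to the database, while the test injects a no-op, so the business logic runs with no database at all.

```java
// DI sketch: the audit dependency is injected, so a do-nothing implementation
// can replace the database-backed one during tests.
interface AuditTrail {
    void record(String event);
}

class NoOpAudit implements AuditTrail {
    public void record(String event) { /* intentionally does nothing */ }
}

class TransferService {
    private final AuditTrail audit;              // injected, not hard-wired
    TransferService(AuditTrail audit) { this.audit = audit; }

    long applyFeeCents(long amountCents) {
        long fee = amountCents / 100;            // the business logic under test
        audit.record("fee=" + fee);              // irrelevant to this test
        return amountCents - fee;
    }
}
```

With an IoC container the wiring of NoOpAudit vs. the real implementation moves into configuration, but the enabling design decision is the same: the dependency arrives from outside.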

  • Favoring “Fail Fast” behaviour

When we think about PoT up-front, we are aware that any test may legitimately fail at any point and, as such, may leave the system in some “unfinished state” that prevents subsequent re-execution. Detecting failure as early as possible, aborting further processing (including asynchronous processing), and resetting the state of the system are properties required for efficient unit testing. At the same time, they will most probably benefit the system as a whole
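A small fail-fast sketch (the batch job and its rules are invented for illustration): all inputs are validated before any work starts, so a bad input aborts immediately with a precise error instead of leaving a half-processed state behind.

```java
import java.util.List;

// Fail-fast sketch: validate everything up-front, before touching any state,
// so a failing test aborts instantly and the system stays re-runnable.
class BatchJob {

    static long totalCents(List<Long> amounts) {
        if (amounts == null || amounts.isEmpty())
            throw new IllegalArgumentException("no input amounts");
        for (Long a : amounts)
            if (a == null || a < 0)
                throw new IllegalArgumentException("invalid amount: " + a);

        long total = 0;                          // only now does real work start
        for (long a : amounts) total += a;
        return total;
    }
}
```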

  • Built-in retries, recovery, and self-healing features

When an application runs in Production, most failure-related use cases are interactive – the user may be prompted to try again, asked to re-login, or told to call support. During a test we cannot rely on such luxuries. We sometimes need to retry a number of times simply as part of a valid test, or to allow a failure in a negative test, followed by restoration of a valid state (even if asynchronously) before the next test. Thinking of these concerns up-front may benefit the overall design, and may help in choosing an adequate 3rd-party framework based on such capabilities (a good example is the @Rollback(true) feature of the Spring Testing Framework)
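A built-in retry can be as small as this sketch (a hand-rolled helper, not from any framework): attempt the action up to a fixed number of times, and rethrow the last failure only if no attempt succeeds.

```java
import java.util.function.Supplier;

// Minimal built-in retry: try the action up to maxAttempts times and rethrow
// the last failure if none of the attempts succeeds.
class Retry {

    static <T> T withRetries(int maxAttempts, Supplier<T> action) {
        if (maxAttempts < 1)
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;                        // remember and try again
            }
        }
        throw last;
    }
}
```

A test can then treat a flaky external call as part of a valid scenario instead of failing on the first transient error; a production code path can reuse the very same helper.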


Optimal granularity and composition of the system

  • Self-contained and properly scoped classes, packages, jar and war files of optimal composition

It is hard to strike the ideal breakdown, but thinking about PoT in this context will help a lot in the quest for proper balance. A single class’s content may be revised when we think up-front about the complexity of the corresponding test class. We may opt to break one monolithic class into a number of composing classes simply to simplify and better scope their corresponding test classes. The same may apply to packages and deployment modules. Another aspect that comes to mind is the benefit of keeping test classes in the same packages as their corresponding “parents.” This allows testing of protected methods (such protected methods can be a legitimate trade-off against the traditionally prescribed private methods – especially those with heavy logic)

  • Using API parameters adequate to the application tier or layer

Often we see parameters that clearly belong to the presentation layer make their way down to, for example, the persistence layer (think of an HTTP Request, or Session, or some ThreadLocal context established at the entry point into the system). It may simplify the signature, but it completely hides the required inputs and the need for validation, making such a method very hard to test. In addition, it is conceptually wrong. This can be avoided by applying PoT early enough – more precisely, at the time of class/method creation
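The contrast is easy to see in a sketch (the class and method are hypothetical): instead of accepting an HTTP Request and fishing values out of it, the lower-layer method declares exactly the inputs it needs, which also makes it trivial to call from a test.

```java
// Layer-adequate parameters: the persistence-layer method takes explicit
// values, not a presentation-layer object, so inputs are visible, validated,
// and trivially supplied by a unit test.
class OrderLookup {

    static String buildLookupKey(String customerId, String region) {
        if (customerId == null || customerId.isEmpty())
            throw new IllegalArgumentException("customerId is required");
        if (region == null || region.isEmpty())
            throw new IllegalArgumentException("region is required");
        return region + ":" + customerId;
    }
}
```

A variant taking `HttpServletRequest` would need a servlet container (or a mock of it) just to exercise this one string operation.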

  • Resisting the tendency to take commodity services provided by the Container for granted

Modern application containers provide a full array of commodity services at our disposal: enterprise messaging, transaction management, distributed cache services, etc. These services are easily available almost everywhere in the code. While the container offers ease and transparency in accessing these otherwise hard-to-configure services, the application itself becomes fully entangled with its container. We realize this when trying to unit test areas of the application: as the use of container-provided services proliferates through the code, it becomes harder and harder to test the application outside of a running container. PoT suggests a number of approaches to this subtle and often overlooked problem. Typically they boil down to creating a well-defined layer where the application registers the services available in the container and, overall, treats them as something external. There we can choose to register the services provided in some alternative way (mocked or standalone), or not register them at all – provided that the code consistently checks for their availability and has some default behavior where that is acceptable (example: methods under test may broadcast audit events using JMS; this can be completely irrelevant to the business logic we are trying to test; if we consistently check for JMS provider availability before using it, we can avoid exceptions during the test phase, log a relevant message, and proceed). In the end, addressing, or at least acknowledging, concerns like these will result in a stronger, more flexible, and more resilient architecture
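The JMS-availability idea above can be sketched like this (MessageSender stands in for the real messaging API; all names are illustrative): the broadcaster treats the container-provided sender as external and possibly absent, checks for it before use, and proceeds either way.

```java
// Sketch of treating a container service as external and optional: when the
// sender is absent (e.g. outside the container), broadcasting degrades
// gracefully instead of throwing.
interface MessageSender {
    void send(String message);
}

class EventBroadcaster {
    private final MessageSender sender;          // may be null outside the container
    EventBroadcaster(MessageSender sender) { this.sender = sender; }

    boolean broadcast(String event) {
        if (sender == null) {
            return false;    // in a real system: log "sender unavailable" and proceed
        }
        sender.send(event);
        return true;
    }
}
```

In a unit test, the same seam accepts an in-memory recorder instead of the real provider, so the surrounding business logic can be exercised with no messaging infrastructure at all.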

  • Wire-On / Wire-Off mechanisms

One setting or wiring may enable or disable a whole feature or module. This can be extremely important when you want to test some potentially invasive new code without “releasing” it or making it available in the UI. Being concerned with mere Testability in a case like this may bring into focus the originally overlooked but beneficial functionality of Wire-On/Wire-Off
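A Wire-On/Wire-Off switch can be as simple as this sketch (the pricing feature and all names are invented for illustration; in practice the flag might come from a property file or DI wiring):

```java
// Wire-On/Wire-Off sketch: a single flag routes between the existing code
// path and a new, potentially invasive one, so the new path can be tested
// in full without being "released" to users.
class FeatureWiring {

    private final boolean newPricingEnabled;     // e.g. a property or DI wiring
    FeatureWiring(boolean newPricingEnabled) { this.newPricingEnabled = newPricingEnabled; }

    long priceCents(long baseCents) {
        return newPricingEnabled
                ? baseCents * 95 / 100           // new code path, off by default
                : baseCents;                     // existing behaviour, untouched
    }
}
```

Tests wire the feature on and exercise it thoroughly, while production wiring leaves it off until it is ready.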

(to be continued)