Thursday, June 05, 2008

Agile Intelligence: Test Driven Development

Test Driven Development (TDD) is a widely popular practice among Agile practitioners. As the name suggests, this approach puts extra emphasis on passing tests. I will draw up the popular activity block diagram a little later; before that, let's try to understand what this approach intends to achieve.

By and large, developers are used to coding by looking at the software requirements specification, aka the spec. Developers, or project teams that are not aware of mature engineering processes (software or otherwise), tend to write line after line of code by looking at the spec without realizing the purpose. They tend to forget that the code they produce is also intended to run at some point. So they code, and they code, only to realize at the very end that all those beautifully thought-out lines are of no use, because they did not compile in the first place. Even if they were lucky enough to compile, they didn't run. And for the luckiest fellows, even if the code ran, it did not run as desired. The bottom line is, all that effort was meaningless. For these developers, TDD brings a gentle reminder in its very name: guys, don't just code; do some testing as well.

Test harness and test bed preparation concepts have been around for a while, primarily to prime the developer's mind and encourage testing. At a later stage, professional testers took these two terms to a more sophisticated level by incorporating theories and mature practices. Nevertheless, if we consider the older practice, all it talked about was unit testing the code after completing the smallest possible unit (say, a method within a class). While development is on, the same test on the same unit has to be repeated many times, since new code keeps getting developed and interlinked with the existing code. Instead of putting developers through the rigor of manually repeating the same tests, they were encouraged to write test code so that the testing could be done by programs. Historically, these were the first attempts at asking developers to eat their own cooking.
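As an illustration, here is a minimal sketch of such an automated unit test, using Python's built-in unittest module; the Discount class and its method names are hypothetical, invented only to show the idea of a test that can be re-run every time the code changes.

```python
import unittest


class Discount:
    """Hypothetical unit under test: computes a discounted price."""

    def __init__(self, rate):
        self.rate = rate

    def apply(self, price):
        # Return the price after deducting the discount rate.
        return price - price * self.rate


class DiscountTest(unittest.TestCase):
    """Automated unit test: runs by program, not by hand, on every change."""

    def test_apply_reduces_price_by_rate(self):
        self.assertEqual(Discount(0.1).apply(100), 90)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(Discount(0).apply(100), 100)


if __name__ == "__main__":
    unittest.main()
```

Once such a test exists, re-running it after every change costs nothing, which is exactly what the manual repetition made so painful.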

TDD formalizes the process from a different perspective. TDD assumes that the developer is not required to develop code that is perfect on all counts. Rather, it introduces the idea of developing code that passes the fitness tests employed, not necessarily with flying colors. To put this into practice, it asks the developer to develop just enough code to pass the test cases. The diktat is to write the test cases first, then do the development, and do it in such a way that the code passes those test cases. Sometimes the developer doesn't write the test cases at all; a tester writes them for him or her. The developer then uses those test cases to conduct the testing and remain compliant with TDD.
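To make this rhythm concrete, here is a minimal sketch in Python; the price_with_tax function and the ten percent tax rule are hypothetical, chosen only to illustrate the sequence. The test is written first and fails because the function does not exist yet; the developer then writes just enough code to make it pass, and nothing more.

```python
import unittest

# Step 1: write the test case first. Running the suite at this point
# fails, because price_with_tax has not been written yet.

class PriceWithTaxTest(unittest.TestCase):
    def test_adds_ten_percent_tax(self):
        self.assertAlmostEqual(price_with_tax(100.0), 110.0)


# Step 2: write just enough code to pass the test case above.
# No configuration, no rounding rules, nothing the test does not demand.

def price_with_tax(net_price):
    return net_price * 1.10


if __name__ == "__main__":
    unittest.main()
```

Any further behavior (a different tax rate, rounding, currency handling) would first be captured as another failing test case, and only then implemented.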

I am not too sure if the image above is readable enough, so you may need to click on it to see the big picture. This is the famous diagram depicting classic TDD. The process starts with requirements preparation, followed by test case preparation for the customer and the developer. By customer test cases we can safely mean UAT or business cases. From left to right, the blocks run from requirements to test cases, with the last block being development. The backward arrows indicate successful completion of each of these activities, i.e. the developed code passes the developer test cases successfully, and so on.
I have already talked about the benefits of unit testing before code release, and the diagram shows how TDD implements that. There is a significant risk too. The success of the code depends on the adequacy of the test cases and their coverage of the requirements, both explicit and implicit. With this, the developer gets tempted to simply pass the test cases instead of keeping the overall objective in mind. For a business consultant this may be far more detrimental than it sounds, since the consultant is also expected to keep the respective business objectives in mind. Fortunately, the lack of coverage gets detected when the customer sits in for testing. The flip side is that the customer would need to sit in to test the same piece several times, and this would dilute the overall value proposition of your institutional ability to deliver a business solution. The only emphasis I would like to make here is not to stretch TDD beyond the point where it actually helps; and at the same time, build this practice into your developer self. Testing your own code before release will always remain a good practice.