Wednesday, March 6, 2013

Why Agile and Test Driven Development (Part 2)

Classic physics...
(Originally published by me on www.servicevirtualization.com)

In part 1, we briefly examined why application development is challenged: architectures must become more complex to address the increasingly complex needs of the business.  We also briefly looked at the primary goals of Agile (SCRUM, specifically) and Test Driven Development (TDD), with the promise of further scrutiny to see why, although they are steps in the right direction toward better management of this complexity, they still fall short.


The impact of increased application complexity is an increased rate of change in all of the cogs and wheels that must (hopefully) mesh together to produce the final result the business expects.  Over the same amount of time, this increased rate of change yields more points where failure can occur.  And given the same distribution of probabilities, this ultimately yields a greater number of defects.
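To make that arithmetic concrete, here is a minimal sketch (the component counts and per-change defect probability are illustrative assumptions, not figures from the article): if each change point carries the same independent chance of introducing a defect, doubling the number of change points doubles the expected defect count.

```python
# Illustrative only: with a fixed per-change defect probability,
# more change points yield proportionally more expected defects.
def expected_defects(change_points: int, defect_probability: float) -> float:
    """Expected number of defects if each change fails independently."""
    return change_points * defect_probability

# Doubling the number of changing parts doubles the expected defects.
simple_app = expected_defects(200, 0.05)    # 10.0 expected defects
complex_app = expected_defects(400, 0.05)   # 20.0 expected defects
```

The point is not the specific numbers but the proportionality: complexity multiplies opportunities for failure even when nothing else about the team or process changes.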

This becomes a problem even in SCRUM and TDD, but for very different reasons:

...becomes code creation
In SCRUM, the functionality added in any given sprint requires multiple application components to be written and validated.  Teams either take the "everyone jump in the pool together" approach, where all components are written simultaneously and the potential for error is large, or they introduce artificial latencies, because functionality further along the project timeline has prerequisites that must be written and validated before anything can be built on top of them.
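One common workaround for those artificial latencies is a hand-written stand-in for the component that has not been built yet, so work on its consumers can proceed in parallel.  A minimal sketch, with hypothetical names (`PricingServiceStub`, `checkout_total` are not from the article):

```python
# Hypothetical stand-in for a pricing component still under development.
class PricingServiceStub:
    """Returns canned answers so consumers need not wait for the real thing."""
    def quote(self, sku: str) -> float:
        # Canned response; the real component will compute this dynamically.
        return 9.99

def checkout_total(pricing, skus) -> float:
    """Consumer logic that can be written and validated against the stub."""
    return sum(pricing.quote(sku) for sku in skus)

total = checkout_total(PricingServiceStub(), ["A1", "B2"])  # 19.98
```

The catch, of course, is that someone must write and maintain every stub by hand, and a stub's canned behavior can drift from the real component's, which is part of what motivates the discussion in part 3.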

In TDD, the objective is to identify use cases (written against the business requirements) that are not met by the application in its current state, and then make the code changes necessary to turn failure into success.  In other words, the potential for failure is already realized; the challenge is that the onus is on the developer to understand the complete set of defects within the components being modified.  This is extremely challenging at best and NP-complete at worst (if the portion being changed is large enough).  As a result, validating the overall application is also difficult or nearly impossible, depending on its size and complexity.
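The failure-becomes-success cycle described above can be sketched with Python's `unittest`: a test is written first against the business requirement, fails against the current code, and then the code is changed until it passes.  The requirement and all names here are hypothetical illustrations.

```python
import unittest

# Hypothetical business requirement: orders over 100 get a 10% discount.
def order_total(subtotal: float) -> float:
    # This branch was added only after the test below failed against
    # the original (discount-free) implementation.
    if subtotal > 100:
        return subtotal * 0.9
    return subtotal

class TestDiscountRequirement(unittest.TestCase):
    # Written first, against the requirement; it fails ("red") until
    # order_total is changed to apply the discount ("green").
    def test_large_orders_are_discounted(self):
        self.assertAlmostEqual(order_total(200.0), 180.0)

    def test_small_orders_are_unchanged(self):
        self.assertAlmostEqual(order_total(50.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```

Note that the tests only cover the requirement the developer thought to write down; defects elsewhere in the modified components remain invisible, which is exactly the gap described above.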

In the third and final part, we'll examine how Service Virtualization (also known as LISA) by CA Technologies addresses both of these scenarios to substantially reduce the risks to successful project completion.
