Tuesday, March 12, 2013

Why Agile and Test Driven Development (Part 3)

(Originally published by me on www.servicevirtualization.com)

Complexity yields defects
In part 2, we examined why SCRUM and TDD exhibit problems when measured by the number of defects each yields. Before we can begin to understand why Service Virtualization helps address both of those problems, it's worth elaborating on the statements made in part 2.

You'll recall the equation to the right, presented last time. c represents the degree of complexity, which correlates directly with the amount of code that must be written to meet the business requirements that produced the complexity in the first place. Because t is fixed and c continues to trend upward (over several releases), the number of defects will also increase over time. Therefore, t is the primary constraint around which everything else revolves.
[Figure: Expected number of defects]
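The original image isn't reproduced here, so as a rough sketch of the relationship described above (my own simplification, not necessarily the exact equation in the figure):

    % Expected number of defects D grows with complexity c and shrinks as the
    % available time t grows; with t fixed, D rises whenever c rises.
    E[D] \propto \frac{c}{t}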

In SCRUM, an increase in the complexity of an individual sprint, or of the total application expressed as the sum of the functionality in each sprint, as seen to the left (where s is a single sprint, n is the total number of sprints required to implement the full set of business requirements, and c_s is the code to be developed in any given sprint), results in a corresponding increase in the number of defects produced in a fixed amount of time.
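Written out with the definitions above (my transcription of the idea in the figure, so treat the exact form as an assumption):

    % Total complexity is the sum of the code developed in each sprint, so with
    % t fixed, expected defects grow as sprints accumulate functionality.
    c_{\text{total}} = \sum_{s=1}^{n} c_s, \qquad E[D] \propto \frac{1}{t} \sum_{s=1}^{n} c_s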

TDD is slightly different. Here, the time must be "divided" (figuratively, not mathematically) into the time required to implement the tests that initially fail and the time required to implement the code that satisfies the test conditions. As complexity increases, the number of tests that must be written also increases, because the number of execution paths increases. But, as we see in the illustration to the right, if more time must be allocated to writing tests, less time is left to write the code that satisfies those tests.
[Figure: Mutually exclusive goals in TDD]
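One way to state that trade-off symbolically (again my sketch, not the original figure):

    % The fixed budget t is split between writing the initially failing tests
    % and writing the code that makes them pass; as complexity c grows, so does
    % t_tests, squeezing the time left for implementation.
    t = t_{\text{tests}}(c) + t_{\text{code}}, \qquad t_{\text{code}} = t - t_{\text{tests}}(c)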

So what about Service Virtualization? You've undoubtedly read elsewhere that the four primary benefits of CA's solution are...
  1. Shift quality left. Allowing developers to test code earlier in the SDLC substantially compresses each release cycle.
  2. Reduce infrastructure costs. Providing an environment that behaves "just like the real McCoy" but runs on commodity hardware reduces the need to purchase expensive development and test environments.
  3. Enable performance readiness. Stability is one aspect of availability management, but so is scalability. While many companies still struggle to understand and implement cloud bursting to handle peak usage periods, their applications must still handle increased load or suffer outages due to events like "the Oprah effect."
  4. Manage test scenarios. Drastically reducing the need to acquire and consume production data (which has a long turnaround time for each request because of the data scrubbing required for regulatory reasons) makes testing both more effective and more efficient.
With SCRUM, shifting quality left is the benefit we are primarily concerned with here. Examined in more detail, this is possible because complexity is reduced by removing the constraints imposed by other components, or even entire applications or pieces of infrastructure, that would normally prevent efficient validation of the code produced.

"But hold on!" you say. "If complexity is directly related to the amount of code produced, aren't you contradicting yourself by implying that Service Virtualization reduces the amount of code that needs to be written?" You are correct and clarification is necessary: c in the equations at the top refers to the complexity of the code that is the responsibility of each individual developer. It does not include the additional complexity that results from other aspects of the architecture that are out of the control of the developer.

So if a developer depends on other components in the application, authored by other developers, or on downstream infrastructure, then that developer's complexity increases further, since those dependencies represent variability the developer cannot control. Therefore, overall complexity is reduced when that variability is removed by replacing live systems with virtualized services that behave like their live equivalents but do not change while the developer is writing code to interact with them.
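To make the idea concrete, here is a minimal, generic sketch of a virtualized dependency: a stand-in HTTP endpoint that returns stable, canned responses so a developer's code can be exercised without the real system changing underneath it. (This is an illustration of the concept only, with invented paths and payloads; it is not how CA's product works internally.)

    # A stand-in for a live downstream service: responses are fixed and
    # repeatable, so the code under development sees no uncontrolled variability.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    # Hypothetical canned data keyed by request path.
    CANNED_RESPONSES = {
        "/accounts/1234": {"accountId": "1234", "balance": 100.00, "status": "OPEN"},
    }

    class VirtualServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CANNED_RESPONSES.get(self.path)
            payload = json.dumps(body if body is not None else {"error": "not found"}).encode()
            self.send_response(200 if body is not None else 404)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Point the code under development at http://localhost:8080
        # instead of the live system.
        HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()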

With TDD, test scenario management is the benefit we are primarily concerned with here. Because developers have a fixed amount of time to write both the failing tests and the code that addresses those failures, the probability that the data driving the test harnesses has enough variance to cover every permutation (or even a large percentage of the permutations) that could cause a test to fail is incredibly small. This pushes defect discovery to the quality organization, in spite of the best efforts of TDD as a discipline to avoid exactly that scenario.
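A small, hypothetical example of why data coverage becomes the bottleneck: even three input fields with a handful of values each multiply into dozens of permutations, and production-like data has orders of magnitude more. (The function, fields, and business rule below are invented purely for illustration.)

    # Data-driven test sweeping every permutation of a few input fields.
    # The point is the combinatorics, not the assertion itself.
    import itertools
    import unittest

    ACCOUNT_TYPES = ["CHECKING", "SAVINGS", "LOAN"]
    STATUSES = ["OPEN", "CLOSED", "FROZEN"]
    BALANCES = [-50.0, 0.0, 99.99, 1000000.0]

    def can_withdraw(account_type, status, balance, amount):
        # Hypothetical business rule under test.
        return status == "OPEN" and balance >= amount and account_type != "LOAN"

    class WithdrawalRules(unittest.TestCase):
        def test_all_permutations(self):
            # 3 x 3 x 4 = 36 permutations from just three fields; services driven
            # by production-like data have orders of magnitude more.
            for acct, status, balance in itertools.product(ACCOUNT_TYPES, STATUSES, BALANCES):
                with self.subTest(acct=acct, status=status, balance=balance):
                    # The oracle deliberately restates the rule; in real work it
                    # would come from the business requirement being tested.
                    expected = status == "OPEN" and balance >= 20.0 and acct != "LOAN"
                    self.assertEqual(expected, can_withdraw(acct, status, balance, 20.0))

    if __name__ == "__main__":
        unittest.main()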

Furthermore, the time needed to develop the test harnesses themselves creates inefficiencies. CA's Service Virtualization solution includes a full-featured test script IDE with a Visio-like interface that makes test development much easier than it is with the data-panel-based applications from other vendors.

Questions or comments? Leave a comment below!

Wednesday, March 6, 2013

Why Agile and Test Driven Development (Part 2)

[Figure: Classic physics...]
(Originally published by me on www.servicevirtualization.com)

In part 1, we briefly examined the reasons why application development is challenged: namely, architectures have to be more complex to address the increasingly complex needs of the business. We also briefly looked at the primary goals of Agile (SCRUM, specifically) and Test Driven Development (TDD), with the promise of further scrutiny to see why, although they take steps in the right direction toward better management of this complexity, they still fall short.


The impact of increased application complexity is an increased rate of change in all of the cogs and wheels that must (hopefully) mesh together to produce the final result the business expects. Over the same amount of time, this increased rate of change yields more points where failure can occur. And given the same distribution of probabilities, more failure points ultimately yield a greater number of defects.
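As a back-of-the-envelope model (my simplification, not a formula from the original post):

    % If each of the n change points fails independently with probability p, the
    % expected number of defects grows linearly with the number of change points.
    E[D] \approx n_{\text{changes}} \cdot p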

This becomes a problem even in SCRUM and TDD, but for very different reasons:

[Figure: ...becomes code creation]
In SCRUM, the functionality that needs to be added in any given sprint will require multiple application components to be written and validated. This may be approached from the "everyone jump in the pool together" perspective, where all components are written simultaneously and the potential for error is therefore large, or artificial latencies are introduced because functionality further along the project timeline has prerequisites that must be written and validated before it can be built upon.

In TDD, the objective is to understand the use cases (written against the business requirements) that are not met by the application in its current state and then make the necessary code changes so that failure becomes success. In other words, the potential for failure is already realized, but the onus is on the developer to understand the complete set of defects that exist within the components being modified. This is extremely challenging at best and NP-complete at worst (if the portion that is changing is large enough). As a result, validating the overall application is also challenging or nearly impossible, depending on its size and complexity.

In the third and final part, we'll examine how Service Virtualization (also known as LISA) by CA Technologies addresses both of these scenarios to substantially reduce the risk to successful project completion.