This past weekend was my high school's 30-year reunion get-together in my hometown of Beaufort, SC, which is home to Parris Island and the awesome Marine Corps Air Station across town. My companion purchased the tickets for us both using reward miles on her airline of choice, via that airline's website. She is a frequent flier on that airline for professional reasons, so her profile contained her rewards program number and her TSA PreCheck number. I, on the other hand, don't fly on this particular airline much, so I don't belong to their rewards program. In fact, I don't fly often at all; my job typically keeps me in the greater NYC metro area, where I can drive everywhere I need to go, so I never applied for a TSA PreCheck number either.
Imagine my dismay, then, when a look at our boarding passes revealed that mine said "TSA Pre" on it. "That's odd," we thought, and we were sure that some other system would flag it as a mistake. That didn't happen, though, and I was allowed to waltz through the high-speed line without taking my shoes or belt off and to walk through the much more lenient metal detector rather than the full-body scanner.
Still, we chalked it up to a fluke. After all, my companion printed the boarding passes at home. We wouldn't have that luxury for the return trip and would be required to use the kiosk at the ticketing counter, and we were sure that this wouldn't happen again. It did, however, and once again I was able to make it through security with the greatest of ease.
I can't say with 100% certainty what the cause was, but it is most likely an application defect somewhere in the backend systems of either the airline or Sabre (which, according to Wikipedia, is the largest global distribution system provider for air bookings in North America). Regardless, you can understand why this is a real problem: anyone with malicious intent could exploit it easily, with very harmful results.
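To make that concrete, here is the kind of check that should have caught the problem. This is purely a hypothetical sketch; the function and field names (assign_precheck, known_traveler_number, precheck_eligible) are my own invention, not the airline's or Sabre's actual data model.

```python
# Hypothetical sketch only: field names are invented for illustration
# and are not the airline's or Sabre's real data model.

def assign_precheck(passenger: dict) -> dict:
    """Set the PreCheck indicator only when the passenger has a
    Known Traveler Number on file."""
    boarding_pass = {"name": passenger["name"], "precheck_eligible": False}
    if passenger.get("known_traveler_number"):
        boarding_pass["precheck_eligible"] = True
    return boarding_pass


def test_companion_without_ktn_is_not_precheck():
    # A passenger booked from someone else's rewards profile, with no
    # Known Traveler Number of their own, must not inherit PreCheck.
    companion = {"name": "Me", "known_traveler_number": None}
    assert assign_precheck(companion)["precheck_eligible"] is False
```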
We've all read various research reports on the impact of application defects, but here are a few numbers for you:
- 29% of an application developer's time is spent in some part of the problem resolution process, whether that is root cause determination or remediation of the actual defect.[1]
- 6.7 days elapse from the time a defect is first observed until it is fixed.[1]
- The annual impact of application defects on the U.S. economy is $60 billion.[2]
- It costs 30 times as much to fix a defect if it is detected in production.[2]
[1] Forrester Research
[2] NIST
These numbers, while interesting, really hit home when you consider that organizations are already under intense pressure to release new features and functionality more quickly than ever before. This is compounded by the fact that many organizations still have not fully made the transition from Waterfall to an Agile SDLC, so the process of releasing those new features remains cumbersome. And when a lot of time is spent remediating application defects, less time is spent developing new code or writing more unit tests.
What can be done? One of the obvious solutions is to switch to an Agile SDLC methodology. Other viable solutions that address significant parts of the problem are:
Virtualize the backend systems. Using a solution like CA Service Virtualization, or a similar offering from IBM or HP, you can isolate the code under test from its live downstream dependencies so that defects uncovered during testing are found more quickly. Furthermore, the virtual services such solutions create eliminate the need to have access to downstream systems all of the time, allowing developers to work in parallel and cutting the time from inception to release by 30% or more.
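As a rough illustration of what a virtual service amounts to (a hand-rolled sketch of the concept, not how CA Service Virtualization or its competitors are actually configured; the endpoint and response shape are invented), here is a stand-in for a downstream reservation-lookup API that a developer could run locally while the real system is unavailable:

```python
# Minimal hand-rolled "virtual service": a local stand-in for a downstream
# reservation-lookup API so development and testing can proceed without the
# live backend. Endpoint paths and response fields are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/reservations/ABC123": {
        "passenger": "J. Doe",
        "precheck_eligible": False,  # deterministic data for the test case
    }
}

class VirtualReservationService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of
    # the real downstream system.
    HTTPServer(("localhost", 8080), VirtualReservationService).serve_forever()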
Accelerate testing. This means more than just developing test scripts that can be automated. Automatic generation of test scripts - "automating the automation" - and the synthetic generation of test data can eliminate the multi-week wait between regression suite runs that is otherwise needed while the DBAs pull a new copy of the production data and mask it to meet regulatory standards. CA Continuous Application Insight, CA Data Finder, BMC AppSight, and IBM Optim provide various capabilities in these areas.
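To give a flavor of the test data piece, here is a toy generator. The field names and distributions are invented for illustration, and real test data management tools do far more (preserving referential integrity across systems, for instance), but the idea is the same: produce realistic-looking, entirely fabricated records on demand, so nothing has to be copied or masked from production.

```python
# Toy synthetic test-data generator: fabricated passenger records, seeded
# so regression runs are repeatable. No production data is copied or masked.
import random
import string

def synthetic_passenger(seed: int) -> dict:
    rng = random.Random(seed)  # deterministic per seed
    record_locator = "".join(rng.choices(string.ascii_uppercase, k=6))
    has_ktn = rng.random() < 0.3  # assume ~30% of passengers have PreCheck
    return {
        "record_locator": record_locator,
        "known_traveler_number": rng.randint(10**8, 10**9 - 1) if has_ktn else None,
        "frequent_flier": rng.random() < 0.5,
    }

# Generate a regression data set in seconds instead of waiting weeks for a
# masked copy of production.
dataset = [synthetic_passenger(i) for i in range(1000)]
```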
Get insight into your application. Applications are complex, and long gone are the days when anyone fully understood what happens after a request is issued to a downstream component or to another application you integrate with. Being able to see the components being invoked, along with the data used in each invocation, is invaluable to a developer, especially if the tester can generate a document containing that information the moment the defect is first observed.
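A toy way to picture the kind of record such tools keep (this is not how CA Continuous Application Insight or the other products are implemented, and the component and function names are invented): a decorator that captures each invocation and its data so the trace can be attached to the defect report the first time the problem is seen.

```python
# Toy call-capture decorator: records which components were invoked and with
# what data, so the trace can be attached to a defect report immediately.
import functools
import json

invocation_log = []

def traced(component_name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            invocation_log.append({
                "component": component_name,
                "function": func.__name__,
                "args": repr(args),
                "result": repr(result),
            })
            return result
        return wrapper
    return decorator

@traced("boarding-pass-service")  # hypothetical component name
def issue_boarding_pass(passenger_id):
    return {"passenger_id": passenger_id, "precheck_eligible": True}

issue_boarding_pass("PAX-42")
print(json.dumps(invocation_log, indent=2))  # attach this to the defect report
```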
Contrast that with the effort to reset the environment, re-execute the test case, and get the defect to happen again for the exact same reason, all while the tester documents everything they were doing. Forrester once reported that 25% of all defects are rejected as irreproducible, and that nearly half of their respondents said they spend a cumulative hour or more per defect documenting what happened. CA Continuous Application Insight, CA APM, and BMC AppSight (and others, undoubtedly) provide various capabilities in this area.
The point is that application defects will always exist. But there is no legitimate reason why something as simple as the defect I personally encountered has to exist. And when the stakes are as high as they are with a plane full of passengers, it is the responsibility of every organization to ensure that the applications it produces are of the highest quality.