"Pró-gress must pro-gréss" - Unknown
Nowhere is the contribution of velocity and acceleration to the success of a business more evident than in the world of technology. Companies must evolve if they intend to stay in business. In fact, Gartner said in 2014:
"In 2014, CEOs must focus on leading their organizations to think like and become more like 'tech' companies, because within a few years, digital business capabilities will dominate every industry. Urgent action is needed because first-mover advantage is common in digital business, and fast followers must be very fast." (CEO Resolutions for 2014 - Time to Act on Digital Business, published by Gartner in March 2014.)
Contrast this with the Fortune 500. Established in 1955, the list has chronicled the largest 500 U.S. companies ever since. Yet in spite of the implied size and strength of the companies on that original list, fewer than 20% of them are still in existence. Some of that attrition can be attributed to events such as mergers and acquisitions, but it is reasonable to assume that the majority of those companies disappeared because they could not adapt to the drumbeat of change in the marketplace.
Is this the inevitable fate of most companies? I would argue that it isn't. What is needed to avoid it is the ability to recognize the need to evolve. The mainframe is a great example: the vast majority of industry pundits were ready to shepherd its decline, and ultimately its demise, once the Y2K efforts had concluded. Not only has the mainframe survived; its use has increased since that time. (See the 2014 Mainframe Study published by BMC in January 2015.)
If the oldest computing platform can continue to evolve, that raises the question: why haven't all legacy, general-purpose computing applications done the same? Workload automation is a great example. Originally designed as a job scheduling system, its purpose was simply to coordinate the execution of tasks, or jobs, developed by other IT operations or application development teams. The problem is that the number of systems with which workload processes may interact has grown exponentially over the years. Organizations are thus faced with a Sophie's choice: either continue to retain staff with specialized knowledge of systems now considered ancient, or undertake a costly modernization effort to replace those systems with modern equivalents that, assuming such replacements exist, place a lower staffing burden on the organization.
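To make that original job-scheduling role concrete, here is a minimal sketch of the core idea: jobs declare dependencies on one another, and the scheduler runs each job only once its upstream jobs have completed. The job names and the trivial run function below are hypothetical placeholders for illustration, not any particular product's API.

```python
# Minimal sketch of a job scheduler's core loop: execute jobs only after
# their declared upstream dependencies have completed. Job names and the
# run() stub are hypothetical placeholders.
from collections import deque

# Each job lists the jobs it depends on, forming a simple DAG.
jobs = {
    "extract_orders":  [],
    "extract_billing": [],
    "merge_feeds":     ["extract_orders", "extract_billing"],
    "load_warehouse":  ["merge_feeds"],
    "nightly_report":  ["load_warehouse"],
}

def run(job):
    # A real workload automation tool would dispatch this to a target
    # system (mainframe, ERP, Hadoop cluster, ...); here we just print.
    print(f"running {job}")

def schedule(jobs):
    """Run jobs in dependency order via Kahn's topological sort."""
    indegree = {j: len(deps) for j, deps in jobs.items()}
    dependents = {j: [] for j in jobs}
    for j, deps in jobs.items():
        for d in deps:
            dependents[d].append(j)
    ready = deque(j for j, n in indegree.items() if n == 0)
    while ready:
        job = ready.popleft()
        run(job)
        # A finished job may unblock the jobs that depend on it.
        for nxt in dependents[job]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

schedule(jobs)
```

The scheduling logic itself is a few dozen lines; what has ballooned over the years is the list of systems that run() must be able to talk to, which is precisely the staffing-versus-modernization dilemma described above.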
Even where such modernization is possible, companies are frequently loath to undertake it because of the heightened operational risk incurred during the initiative itself and immediately afterward, until an absence of production errors has proven the effort successful. While hard data on this reluctance is scarce, we can see it exhibited within the realm of IT's newest darling, DevOps. In 2013, IDG published a survey in which 45% of respondents said that application release automation was a key enabler for DevOps, yet only 11% had implemented such automation. Considering that DevOps was born in 2008, the rate of adoption has been pitiful, to say the least.
All is not lost, however. Some workload automation systems remain antiquated in their capabilities, but only in the sense that their peers have evolved past them. Integration with existing infrastructure, monitoring capabilities, and operationally focused features (such as SAP system copy or big data warehouse processing) already exist in solutions from companies exhibiting thought leadership.
In summary, there is no need to settle for less. While companies tend to mark time with their existing operational systems footprint, there is typically no requirement to do so. If your systems are not providing the capabilities you need in a way that lets you optimize your run rate, perhaps you should look elsewhere for systems that do.