If posed with this question, leaders of most organizations will put their own companies into the minority camp that actually measures project success. And in the vast majority of cases, they would be wrong. As I pointed out in a prior post on this topic, there are unfortunately as many ways of measuring success as there are projects being delivered. The lack of a standard way of measuring success leads most organizations to ignore the one thing that truly matters – “Did the software deliver the targeted dollar returns, by way of cost savings and/or revenue increases, that were initially planned for?” This is the Seilevel definition of success for commercial software implementation, and one that we are urging the industry to standardize on.
Measuring project success is not the same as evaluating the success or failure of the development and delivery of a given piece of software with a set of features. If the software is never delivered or is very poorly adopted, then we have an automatic and default measurement – failure. In these cases, it really does not matter whether the organization explicitly measures project success, because we know that none of the financial goals will be met by software that never saw the light of day or is barely used. Failed projects are analyzed and either abandoned, because they never made business sense to begin with, or resurrected for redevelopment. The problem of measuring success then ironically lies with the projects that are actually delivered, deployed, and in wide use! By default, we have defined success as the “absence of failure.”
So why then do most organizations not measure true project success – the economic value delivered by the software? Here are the top six reasons, based on my observations from working on many large implementations.
1. Lack of Understanding the Importance of Measurement
The use of the default measure of success – “not a failure” – is largely the result of a lack of understanding and a false assumption. The belief is that if a project is actually delivered and used, it must, by extension, have delivered the desired financial returns. This is a dangerous assumption to make, and one that is impossible to validate unless measurements are actually made.
2. Lack of Understanding What to Measure
This goes directly to the heart of the problem – most organizations do not use the financial returns generated by a software deployment as the only valid measure of project success. Without this knowledge, most measurements, if they are made at all, do not truly measure success.
3. Lack of Understanding How to Measure
This is a non-trivial problem even for organizations that do attempt to measure project success by nailing down the financial returns the business realized from a software development exercise. Ascribing causality to software, especially for measured revenue that is impacted by a wide range of external and internal factors, is not easy to do. In many cases, the means to measure performance must be developed before a measurement can even be taken. For example, if a financial target of reducing support costs by $100,000 is predicated on 5,000 problems being solved by an online interactive knowledge base, we must first be able to measure how many problems are actually solved using the online tool, in addition to the actual dollars saved. If there is no way to count the problems solved online, that capability has to be built – either alongside the original feature or after the fact – before any measurement is possible.
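The arithmetic behind the knowledge-base example can be sketched in a few lines. This is a hypothetical illustration only: the $20-per-ticket figure is simply what the stated targets imply ($100,000 ÷ 5,000 resolutions), and the function name and the actual-resolution count are made up for the sketch.

```python
def savings_from_deflection(problems_solved_online: int,
                            cost_per_support_ticket: float) -> float:
    """Dollars saved when problems are resolved by the knowledge base
    instead of generating a paid support ticket."""
    return problems_solved_online * cost_per_support_ticket

# The targets imply an assumed cost of $20 per avoided support ticket.
target_savings = 100_000.0
target_resolutions = 5_000
implied_cost_per_ticket = target_savings / target_resolutions  # $20.00

# An actual reading is only possible if online resolutions are instrumented.
actual_resolutions = 3_200  # illustrative number, not real data
actual_savings = savings_from_deflection(actual_resolutions,
                                         implied_cost_per_ticket)
print(f"Actual savings: ${actual_savings:,.0f} of ${target_savings:,.0f} target")
```

The point of the sketch is the dependency it exposes: without the `actual_resolutions` count, which only exists if measurement was built into or added to the feature, the savings calculation cannot be performed at all.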
4. Not Part of the Development Process
Most good Development Processes I have seen start with some kind of justification for a feature or application – typically in the form of a Cost Benefit Analysis or, if they are using the Seilevel Methodology, a Business Objectives Model – and culminate with a Lessons Learned exercise for future improvement once the software has been delivered and deployed. There is NO formal process step for measuring success at any time after delivery is completed. Unless success measurement gets baked into the actual business processes around delivering software, it will almost certainly never get done.
5. No Budgets for Success Measurement
In Corporate America, it is not possible to buy so much as a pencil unless a budget exists for the purpose and the needed funds have been requested, approved, and made available to be spent. In my experience, I have never seen a formal budget set aside for success measurement. Project Managers will include the personnel needed and an estimate of the time they will spend on a Lessons Learned or Formal Sign Off process step when they create their project plans and budget estimates. There will, however, never be a budget request for the manpower and time needed to measure project success. If no one is paying for it, trust me, it will never get done.
6. No Person or Team is Responsible for Measuring Success
In most organizations, I do not believe success measurement is the assigned responsibility of any person, department, or team in either the IT or Business groups. It is entirely possible that IT and Business each measure success in their own unique ways once software has been delivered, but it is never done holistically. Unless these measures of success are defined at the beginning of a project and measured systematically later, no meaningful and actionable information will be available to either group once a project is complete. If no specific person or team in an organization is tasked with measurement, it simply will not get done.
The bottom line is that success measurements do not get done because we have a systemic failure. Most organizations are unaware of the importance of measurement and do not know what to measure or how to measure it. This lack of understanding is clearly manifested in their organizational structure and budgetary actions – no one is responsible for measuring project success, and even if someone wanted to do it, there is no money to perform the measurements.
Organizations throughout the world spend billions of dollars a year on development projects. Yet no one can tell us if they really got their money’s worth. Is it just me, or is that crazy?