Wednesday, January 19, 2011

Patterns of Success - Peter Hill

Peter Hill is a recent acquaintance that I met through Capers Jones. He lives in Melbourne, Australia, and runs an organization called the International Software Benchmarking Standards Group (more about that in the interview). I found him to have very deep insight into the fundamental reasons why software projects are successful.

John - Peter, could you tell us something about the ISBSG  and what you are trying to achieve?

Peter - The name ISBSG is a bit unfortunate now because our focus is less on benchmarking standards and more on estimation and project management. In the early 1990s a group of IT professionals came together because they felt that the industry would benefit from the collection of data on software development, and from analysis of that data to see if there were practices that could be publicized to help the industry improve its performance.
They came up with a questionnaire to collect that data. Then there were discussions with people from other countries who had heard about the project. They wanted to use the questionnaire to collect data in their own countries and, to cut a long story short, they decided to create an international organization and a common repository. Over the years we have grown the number of member countries until we now have twelve. We have a small permanent staff that collects data, analyzes it, and publishes papers and books, including the third edition of Practical Software Project Estimation, which has just been published by McGraw-Hill. We also make our data available at a reasonable cost. So in summary, the ISBSG is a not-for-profit organisation that exists simply to try to help the IT industry improve its performance.

John - How large is the repository? I don't know how you would measure the size. Number of projects?

Peter - We have two repositories. One, which is relatively recent, focuses on the metrics of applications that are under maintenance and support. That currently has 500 applications. However, for this interview we will focus on the other repository, which comprises Development and Enhancement projects. That repository has 5,600 projects that come from all over the world, not only from the member countries.

John - One of the things I am really curious about is that since you are collecting data from all these countries, if you were to normalize all factors that influence project performance except the country where the project was done, do you see any differences? Or is a software project the same all over the world?

Peter - We don't report our findings by country. We want to protect the integrity of the data and prevent a country sending the ISBSG data that has been doctored to make it appear that they have high productivity... that Outer Mongolia has the best software developers in the world.

John - Oh I see. So outsource your projects to Outer Mongolia where they will be done better?

Peter - I can answer your question in one way... we have done a study to compare onshore vs offshore projects. We have published this as a special report that is available from the web site: 'Outsourcing, Offshoring, In-house – how do they compare?' If you look at what we call the project development rate, the number of hours to deliver a Function Point, then offshore is about 10% more productive, and speed of delivery is much better for offshore vs onshore. Unfortunately, the offshore projects deliver with a much higher defect rate... almost three times higher.

John - I can see offshore projects being divided into two categories: one where the sourcing company has a division offshore and the two have worked together on several projects; the second where a sourcing company enters into a contract with another firm, like Tata or Infosys, to perform the project. I would imagine that the performance of the first type of project would be better than the second.

Peter - You may be right, but in our report we haven't differentiated between the two types of offshore development: in-house and outsourced. However, I do think your idea has some validity because one of the significant drivers of project performance is communication. You would expect communication to be better within a single organisation even if the development team is offshore. The ISBSG does collect what we call 'soft factors'; for some offshore projects communication was listed as a problem.
The general impact of "outsourcing" a project (not offshoring it), even when it is one division of a company doing work for another, is a 20% decrease in the productivity rate; defect densities are worse by about 50%, with speed of delivery being similar.

John - I used to work at IBM during the period when the company was moving to build a significant offshore capability and offer clients projects with mixed onshore/offshore delivery. We struggled with early deliveries and adapted our practices, infrastructure, and tools to get better. I suspect that in your repository if you captured the level of experience a team has had with doing offshore work, both in sourcing and delivering, you would find that the more experienced teams have better results.

Peter - I think that is a sound assumption. In general we have found that projects benefit from an experienced and stable team; an experienced project manager; known technology; and stable requirements. These all lead to successful outcomes. Other contributing factors include having an educated customer, single-site delivery, and a smaller number of deliverables (including documents).
Something to remember about the statistics I am giving you is that the ISBSG does not keep data on failed projects. We are given data on projects that have been delivered into production. So our data is biased. It is also biased because it is not a random survey of all projects but is data volunteered by organizations that probably have a bias towards getting better at Software Engineering. The mere fact that the organization is aware of the ISBSG is an indicator of its maturity. So our data probably represents the upper end of the industry. 

John - Does this give users of the repository false expectations on what might be possible for their organization?

Peter - We try to offset that bias by providing a ‘how to use’ document with all the material we send out. We have what we call a reality check to make sure a consumer of the data is using it wisely.

John - So, let's move on to some Patterns of Success. We have already touched on many factors that lead to successful outcomes, but tell me what you think the top few contributing factors are for a project to be successful.

Peter - The things that stand out are the size of the development team and the project complexity. The more complex a project, the higher the risk of not delivering; the larger the team size, the lower the productivity. Another significant contributor to success is an organization pursuing process improvement via CMMI. Even organizations at only CMMI level two demonstrate higher productivity. You might think, "Hold on, what about all the bureaucratic overhead to comply with CMMI?" It turns out these organizations have only slightly slower speed of delivery, somewhat higher productivity, and much better defect rates.

John - Why do you think that complying with a CMMI model makes a difference?

Peter -  I think it reflects on something you said earlier... level of maturity. These people have looked at what they were doing and have decided that they could get better and are making an investment in doing so. This focus of the organization on process improvement instills a work attitude that influences day to day behaviors.

John - If all these factors are in place for a given project, will that project have an outcome that is an order of magnitude better than average? What does the distribution curve look like as we keep piling on improvements?

Peter - I have not yet mentioned the programming language, which is a major influence on productivity. For all these improvements there is an initial negative impact at the time of introduction. For example, if a company takes on a modern programming language, productivity is terrible at first, and then over two years of use it improves until it is twice the old baseline. So if it used to take 18 hours per function point, it now takes 9. These are big improvements.
Here is one example we have seen from our analysis... Putting aside the size of the project, there is a correlation between team size and productivity. A team of 1-4 developers achieves a median productivity of 6 hours per function point. Teams of 5-8 achieve 9.5 hours per function point. More than 9 people in the team results in 13 hours per function point. So if you can split things up so that they are delivered by small teams, then overall productivity is greatly improved.
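[Editor's note] The team-size figures Peter quotes lend themselves to a quick back-of-the-envelope comparison. The sketch below is illustrative only: the function names, the band boundaries, and the idea of using the median PDRs as point estimates are assumptions of mine, not ISBSG's method.

```python
# Back-of-the-envelope effort comparison using the median PDR
# (hours per function point) figures quoted in the interview.
# Function names and band boundaries are illustrative assumptions.

def median_pdr(team_size: int) -> float:
    """Median hours per function point for a given team size."""
    if team_size <= 4:
        return 6.0      # teams of 1-4 developers
    if team_size <= 8:
        return 9.5      # teams of 5-8 developers
    return 13.0         # larger teams (13 h/FP quoted for more than 9 people)

def estimated_effort_hours(function_points: int, team_size: int) -> float:
    """Point estimate: project size in function points times the median PDR."""
    return function_points * median_pdr(team_size)

# A 400 FP project: one team of 10 vs. the same work split across two teams of 5.
one_big_team = estimated_effort_hours(400, 10)        # 400 * 13.0 = 5200 hours
two_small_teams = 2 * estimated_effort_hours(200, 5)  # 2 * 200 * 9.5 = 3800 hours
print(one_big_team, two_small_teams)
```

On these medians, partitioning the same 400 function points across two small teams would cut the effort estimate by more than a quarter, which is the point Peter makes about splitting work.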

John - And that assumes that each developer has individual productivity equal to everyone else's. And we know from historical data that there is a fourfold difference from worst to best individual productivity. As a hiring manager I was always looking for top-talent individuals to bring into the company and create small teams with superstars. However, to use a sports analogy, this could sometimes be like an all-star game: a team of superstars might underperform another team that has a better fit of individuals and more experience working together. Does this kind of phenomenon ever show up in the reports in your repository?

Peter - We can't collect the experience levels of the individuals that make up the teams reported to our repository. We do collect some experience data on the project manager.

John - On the second topic of the interview, Failures to Launch: you have not collected data in the repository on failures. I think it would be a fascinating addition... to be able to submit an anonymous report on a failed project and the root causes of the failure.

Peter - That is an area where a researcher needs to ask probing questions. Capers Jones collects data in that manner, and he also participates as an expert witness in litigation over failed projects. He would be a better source than the ISBSG. Can I switch the conversation around and talk about some more things that work really well?

John - Sure

Peter - We have done a lot of research on what works and what does not (a report called 'Techniques and Tools – their impact on projects'). We have collected a lot of projects where iterative development is used; Rapid Application Development and Agile Development are examples. Agile projects show a 30% improvement in productivity, and speed of delivery is improved by about 30% as well. We don't have enough data yet to comment on quality improvements.

Another method that seems to work very well is Joint Application Development. Productivity is 10% above average and speed of delivery is 20% above average.
Some things don't seem to make much difference. For example, Object Oriented Development does not seem to improve productivity or speed of delivery over the average.
I don't know if people are still using CASE tools, but they had a positive impact on project performance, particularly much lower defect rates. Given the early rush to CASE tools and the investment in learning to use them, I am not sure people got the dramatic improvements they had hoped for when the tools were so popular.

John - I have reviewed one of the ISBSG questionnaires, and among a lot of questions you ask whether the project is using a particular method such as Agile, TSP, or RAD. As these reports flow into the repository over time, have you seen any trends forming? An increase or decrease in use of a specific method?

Peter - The increase in people using Agile development is the most significant trend we have seen. We also saw a dramatic increase in the use of Java and C++ several years ago. There is always a problem with perceived technical silver bullets. As I said earlier, our data underlines that it takes time for a project team to become competent with a new method or new technology. So when there is a sudden general adoption of something new, we see a corresponding decrease in project performance until the ‘silver bullet’ is absorbed.
One of the very interesting findings we have at the ISBSG is that over the last fifteen years of collecting data we have seen no improvement in average productivity across the industry. All the improvements I mentioned earlier have not been adopted universally by the industry and are offset by projects continuing to develop in a chaotic manner. So there has been no overall improvement in the way we develop applications. I got the ISBSG analyst to look at average productivity over time, grouped into five-year periods. This is the result (for all software development – New and Enhancements), shown as the median number of hours taken to produce a function point of software:

                      Median PDR (hours per FP)
 1989-1993              7.6
 1994-1998              6.7
 1999-2003            11.0
 2004-2008            12.6
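[Editor's note] For readers curious how a table like this is derived: here is a minimal sketch of bucketing per-project delivery rates into five-year periods and taking the median. The sample records are fabricated for illustration (chosen so the medians match Peter's table); the real analysis runs over thousands of ISBSG projects.

```python
# Minimal sketch: bucket projects into five-year periods and report the
# median PDR (hours per function point). The records below are made up
# for illustration; they are not ISBSG data.
from collections import defaultdict
from statistics import median

projects = [                    # (completion year, hours per function point)
    (1990, 7.2), (1992, 8.0),
    (1995, 6.5), (1997, 6.9),
    (2000, 10.5), (2002, 11.5),
    (2005, 12.0), (2007, 13.2),
]

by_period = defaultdict(list)
for year, pdr in projects:
    start = 1989 + 5 * ((year - 1989) // 5)   # periods begin at 1989
    by_period[f"{start}-{start + 4}"].append(pdr)

for period in sorted(by_period):
    print(f"{period}  {median(by_period[period]):.1f}")
```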

John - Wow. That is a significant finding. Has quality improved or is it flat as well?

Peter - A recent analysis we did for the report 'Software Defect Density' shows that it is flat. The ISBSG data shows no evidence of any improvement in defect rates over the fifteen-year period. Certain projects using particular methods or tools can show dramatic improvements, but they are offset by projects with poorer quality results, such that the overall median defect rate has not improved.
So much for silver bullets!

John - I guess that satisfies my need for a "Failure to Launch"... in this case the whole industry has been stagnant in improving project performance over the last decade.

John - So the last part of the interview is the NEXT BIG THING. And for you this will be interesting because I wonder if you see in the latest data submissions that ISBSG is getting any indicators of what might be an important trend?

Peter - Oh, that is a difficult one. What we are seeing is that the really successful projects tend to follow the fundamentals: they have small, stable teams, with an experienced project manager, working with languages and infrastructure that they are familiar with. Projects that follow these patterns always outperform projects trying something new. This may disappoint you, but all the work we have done looking for real silver bullets tells us that there are none.

John - Thank you very much for your time and your insights.
