Wednesday, December 22, 2010

Patterns of Success - Capers Jones

During my consulting years at Yourdon I had used Barry Boehm's COCOMO estimation model. I was always on the lookout for something that was independent of lines of code, so I grabbed onto "Applied Software Measurement" by Capers Jones. This was back in the early 1990s, and I used Function Point analysis as common practice in estimating my projects for clients.
While I have never worked directly with Capers, I have come to respect his objective analysis of software engineering. I believe he takes a very scientific approach to gathering as accurate a data set as possible, and he is always open to new measurements from other sources.
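For readers who have not used these models, here is a minimal sketch of the kind of calculation involved. The coefficients are Boehm's published Basic COCOMO values for an "organic" project; the 100 lines-of-code-per-function-point backfiring ratio and the 500 FP example are illustrative assumptions of mine, not figures from Capers' data.

```python
# Minimal sketch: backfire a function point count into KLOC, then apply
# Basic COCOMO (organic mode). Coefficients 2.4/1.05 and 2.5/0.38 are
# Boehm's published values; 100 LOC per FP is an assumed illustrative ratio.

def basic_cocomo_organic(kloc: float) -> tuple[float, float]:
    """Return (effort in person-months, schedule in calendar months)."""
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule

def estimate_from_function_points(fp: int, loc_per_fp: int = 100) -> None:
    """Convert a function point count to KLOC and print a rough estimate."""
    kloc = fp * loc_per_fp / 1000.0
    effort, schedule = basic_cocomo_organic(kloc)
    print(f"{fp} FP ~= {kloc:.0f} KLOC -> "
          f"{effort:.0f} person-months over {schedule:.0f} calendar months")

estimate_from_function_points(500)  # an assumed mid-sized business application
```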

John - Thanks for taking the time to speak with me on the topic of Patterns of Success. You have done a lot of work in Software Engineering. Is there a particular focus area you would like to cover today?


Capers - Most of my clients ask me to help them improve their software development and maintenance processes in order to raise productivity and quality. I have worked with about 600 companies and government organizations: the US Army, Navy, Air Force, the IRS, NASA, even a couple of state governments.


John - So you've seen a few projects in your time? What have been some patterns of success?


Capers - The main weakness I have seen in companies, the place where they botch up and fail, is quality control for very large applications. They can do OK on small projects where programming is the main activity. But when the project is bigger and there are lots of requirements errors, or design errors, or architecture errors, they don't know how to deal with them. So the bigger the application, the more of the cost goes into bug removal, and the more the bugs originate in front-end project artifacts such as requirements and design. There are techniques to deal with this, like formal inspections and quality function deployment, but there is low penetration of these techniques in the industry.
Now there are other problems as well. Change control management is an issue. The rate at which change is introduced during a project is about one percent per month, so over a three-year project the final deliverable will have changed by about 36% from the original requirements. A lot of companies don't handle that very well, and some contracts don't even include provisions for these kinds of changes.
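To make that arithmetic concrete, here is a small sketch of how the creep Capers describes adds up. The 1%-per-month rate comes from his figure above; the starting size of 10,000 function points is an assumed example, and the growth is applied linearly, as in his 36%-over-36-months statement.

```python
# Requirements creep at ~1% of the original scope per month (Capers' figure),
# applied to an assumed starting size of 10,000 function points.

def scope_growth(initial_fp: float, months: int, monthly_rate: float = 0.01) -> float:
    """Linear creep: each month adds ~1% of the original scope."""
    return initial_fp * (1 + monthly_rate * months)

start = 10_000
for month in (12, 24, 36):
    final = scope_growth(start, month)
    print(f"after {month:2d} months: {final:,.0f} FP "
          f"(+{(final - start) / start:.0%})")
# after 36 months: 13,600 FP (+36%)
```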


John - That is why Agile development works well on some projects: its ability to accommodate change.


Capers - That's what Agile was designed for: embedded users and many iterations within a release. But if you are Microsoft trying to develop Windows 7 with over a thousand developers and a million users, Agile has problems. It is hard to come up with a "user" to embed in a development team who represents the market accurately, and it is hard to coordinate a team of 1,000 developers using face-to-face Agile techniques.


John - What about other projects you have seen that were successful, and how they did it? For example, I was looking at a chart you produced comparing different software development techniques, and the one that stood out to me was the set of projects with >85% reuse of components. They beat everyone on quality and productivity.

[Chart comparing development approaches, extracted from a spreadsheet sent by Capers Jones]


Capers - That is true, but there are only a restricted number of applications today where that level of reuse is possible. Some are applications like compilers, where a lot can be reused in going from one language to another. Similarly, in accounting packages the functions remain consistent and can be certified as reusable.
You might be able to get 20-25% reuse on a typical application.
But since we are talking about reuse, a very successful story has been the ERP vendors. In essence, they replace several redundant departmental applications in an organization with a single package. This is a type of reuse. Unfortunately, most ERP packages have defects that only the vendor can fix, and most companies still need an IT shop to extend and integrate functionality into areas of the company that the ERP does not cover. But these packages have been a net benefit over custom development for most medium to large organizations.
While reuse today may be limited, I think it holds great promise for the future. Methods such as Agile and RUP are only minor improvements. They are similar to applying first aid and stopping the bleeding. To really make software cost-effective and achieve consistent quality, we have to stop custom design and hand coding and switch to construction from certified components. Here is an excerpt from my upcoming book "The Economics of Software Quality" that shows what I mean.

...Let us leave software for a moment and consider automobiles.  If the automotive industry were at the same level of technology as the software industry, when a reader of this book wanted a new car, it would not be possible to visit a dealer and buy one.
Instead, a team of automotive designers and mechanics would be assembled and the reader's requirements would be recorded and analyzed.  Then the car would be custom-designed to match the user's needs. 
Once the design was finished, construction would commence.  However, instead of assembling the automobile from standard components, many of the parts would have to be hand-turned on metal lathes and other machine tools.  Thus instead of buying an automobile within a day or two, the customer would have to wait a period of months for construction.
Instead of spending perhaps $30,000 for a new automobile, custom design and custom construction would make the car about as expensive as a Formula 1 race car, or in the range of $750,000.
Although no doubt the automobile would have been tested prior to delivery, it probably would contain more than 100 defects, with 25 of them serious, that would need to be removed prior to safe long-distance driving by the owner.
(If the automobile were built under the same legal conditions as software end-user license agreements there would be no warranty, expressed or implied.  Also, ownership of the automobile would not be transferred but would remain with the company that built the automobile.  There would also be restrictions on the number of other drivers who could operate the automobile.  If there were more than three drivers, additional costs would be charged.)
Worse, if something breaks or goes seriously wrong after the car is delivered, there would probably not be neighborhood mechanics who could fix it.  Due to the custom hand-made parts, a repair center with a great deal of machinery would be needed.  Something as basic as replacing the brakes might cost more than $5,000 instead of $300.

With a large percentage of custom-designed, hand-made parts, maintenance would be an expensive proposition.  Worse, small variations in the hand-made parts would decrease reliability over time, which would lead to more breakdowns.  But due to the high replacement cost, the owner would be stuck with the unpleasant choice of paying ever higher annual maintenance costs, or spending close to another million and waiting another year for a new version, which might not be any better than the original.....



John - Where have you seen companies and projects that have gotten everything right?


Capers - There are a lot of things that work. But if you look at an established company like an IBM or an HP or a Microsoft, you see thousands of developers scattered over dozens of locations around the world, and these teams often have varying levels of software engineering sophistication. There was a study done by IBM that showed about one-third of its labs were using advanced techniques, a third were OK, and a third were below average. But if you look at the best locations in the best companies, they tend to be very proactive in quality control, they are very proactive in change control, and they also have a very good system for tracking accumulated costs and accumulated problems so that management can immediately correct any issues.
The companies that have a significant number of teams doing things very well are:


  • IBM
  • HP
  • Microsoft
  • Raytheon
  • Sony
  • Northrop Grumman
  • parts of Boeing
  • Motorola
  • Google


It's probably the top tier of companies you've already heard of, because they do other things very well.


John - All the companies you mention are large organizations that have been in business for a while. Does this mean that it takes time and money to get good at software engineering?


Capers - Yes. They have to invest $10-12K per person per year over a 5-6 year period to see the results that they have. They also have to set aside 5-7 days of training per year to achieve this.
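To put those figures in context, here is a quick back-of-the-envelope calculation. The 1,000-person organization is an assumed example; the per-person cost, duration, and training days are the midpoints of the ranges Capers quotes, with the training days read as a per-person figure.

```python
# Back-of-the-envelope scale of the investment Capers describes, for an
# assumed 1,000-person software organization. Figures are midpoints of his
# $10-12K/year, 5-6 year, and 5-7 training-day ranges.

headcount = 1_000
cost_per_person_year = 11_000   # midpoint of $10-12K
years = 5.5                     # midpoint of 5-6 years
training_days_per_year = 6      # midpoint of 5-7 days, assumed per person

total_cost = headcount * cost_per_person_year * years
total_training_days = headcount * training_days_per_year * years

print(f"Total investment: ${total_cost:,.0f}")             # $60,500,000
print(f"Total training days: {total_training_days:,.0f}")  # 33,000
```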


John - If we take the other extreme in size and longevity... the stereotypical Silicon Valley start-up, don't they achieve phenomenal results with just a few people in a garage?


Capers - Well, their productivity and quality are actually not very good. They have very bright people who are able to launch a product, but the real success is in their innovation. The point is that really neat innovation has value that will attract customers. The quality and productivity of new applications often depend upon smart individuals rather than methods.


John - I am thinking of a U-shaped curve of success: we start with a startup that has great innovation, but as it gets bigger the innovation is stifled, and yet it is not big enough to have made the investments in software engineering that the larger companies have.


Capers - That's a good point. When you start small, up to 25 people, you can do well just on the quality of your personnel. From 25-100 people you need to put in some formal processes. At 100-500 people, with project teams of 10-20, you need rigorous quality control: instead of just testing, you need up-front inspections. And there is a gap with the mid-sized companies that have not made the investment in software engineering improvements and are trying to deliver with brute-force waterfall methods.


John - In the area of failures, you have already mentioned some of the factors. It seems that for every metric of success there is a corresponding measure of failure. But could you tell us about a specific project you were involved with that was a dramatic failure... and why?


Capers - I was an expert witness in a trial between a state government and a vendor. The state was trying to consolidate applications used at the county level into a single state system. But the counties all did things differently, and for political reasons some counties did not like other counties and would not accept the state system requirements if they thought they were derived from a competing county. The vendor who was trying to develop the system did not quite understand the requirements in the first place, was careless, did not do up-front inspections or static analysis, and truncated the testing phase in order to meet the installation date. When it was installed, it was hard to learn, so users did not like it. The performance was about 12% slower than the legacy systems, and the errors the users made because of the learning curve increased by about 30%. And there were a high number of defects in the software. One of the worst of these was an intermittent bug that caused modifications being entered in one person's billing record to actually be written onto someone else's record. This resulted in innocent people being accused of nonpayment, while the deadbeats were not pursued.


John - So what was the outcome of the trial?


Capers - The judge thought that the state had not given the vendor enough time to correct mistakes so the vendor was given an extension of one year on their contract.


John - What do you think THE NEXT BIG THING will be, about three years from now?


Capers - There are a couple of areas where I think we will see something interesting. The first is something I am hearing about in the press and the scientific literature: true holographic displays will start to be used as computer interfaces. Today they are being prototyped in university labs, and they are small and expensive, but that could change once the technology can be manufactured in volume.


John - What would we use these displays for?


Capers - It would open up the door for 3D system models that could include the dynamics of current performance. You could see what happens when viruses enter the system, things like that. I think that would improve system quality five-fold and double productivity.


John - And what would another BIG THING be?


Capers - I think that development teams will take advantage of social networking tools like wikis, Facebook, and Twitter, integrating them with current development platforms to allow distributed teams to achieve the same levels of productivity that face-to-face offices allow.


John - I want to thank you for sharing your insights with us.
