Sunday, May 22, 2011

Patterns of Success: Intermission

Over the last few months I have spoken with fourteen industry leaders who shared with me their thoughts on what makes an organization successful in their particular domain of expertise.
This was not a random survey. It was based on my going through my social network and finding people who:

A) Had achieved a level of public recognition of their industry expertise
B) Were willing to speak with me and have their ideas published.

Most of the participants are from the software industry and reflect on some aspect of information technology.

Recently, I was able to summarize and share the opinions of these experts at a Triangle Technology Executives Council Meeting.

Instead of presenting the material using PowerPoint, I experimented with Prezi.
Here is the presentation.
There are lots of ways to view the Prezi. I suggest clicking on More/Full Screen and then using the right arrow key to move through the material as a first pass.

Since I was limited to 20 minutes for my presentation, I only spoke on a subset of the findings. That is why the focus of the presentation shifts to findings highlighted in blue. However, one of the cool things about Prezi is that at any time you can stop moving along the author's directed path, manually move around the screen, zooming in and out, and then resume the path by hitting the arrow key.

So were there any common opinions of the experts?

For success patterns, the idea of keeping projects simple was common: keep the number of developers, the number of requirements, and the number of technologies used smaller rather than larger. Another common idea was that projects with frequent feedback loops to correct errors were more successful.

For failure patterns, it seemed that most of the time the experts spoke of an anti-success pattern. So if they thought a pattern of success was to keep a project small and simple, then an example of failure was a large complex project.

For the next big thing, I was focusing on three years into the future. Some of the experts wanted to look farther out, but for the nearer horizon a theme that came up often was the increasing impact of mobile technology.

I said that this post is an intermission. I want to line up another group of experts and gather some more opinions. If you meet the two criteria I have stated above, please contact me to discuss.

Finally, I want to thank the experts who participated in Patterns of Success for sharing their opinions with us.

Thursday, April 21, 2011

Mobile blogging at NCT4G

This is an experiment in creating a blog post from my Android device. I am sitting in a quiet corner waiting to present at the North Carolina Technology 4 Good conference. Jim Ray and I will be talking about cloud computing. The presentation itself is an experiment, since I used Google Sites to create and share the content. I tried to use the Google Docs presenter, but it was too limited in how it could display things (e.g., I wanted to click on a logo and jump to a URL).
So this conference has a lot of talks on using the web to support a non-profit organization. For example, Jeff Stern spoke on how social media is being used at the Museum of Life and Science in Durham.
Well, back to mobile blogging...
Not sure if it is a great fit for me and my HTC Inspire. The version I am using to compose this does not have all the easy widgets and composition editing I have on my laptop. Plus, I am still slow using the soft keyboard, so this is taking longer than I like. I will need to go back later to add links.
I don't want to use this to create simple status updates... that's why I have Twitter.

Wednesday, April 6, 2011

Patterns of Success - Andy Hunt

Last year, as Chair of our IEEE Computer Society chapter, I put together a meeting on "Practical Software Development". Karen Smiley did a great job of documenting the meeting on her blog. It was not until that meeting that I realized that one of the signers of the Agile Manifesto lived in my back yard. Since then, I have heard Andy speak on a couple of other occasions. I have been impressed by his no-nonsense, very pragmatic approach to software development.
I am on a quest to interview as many of the signers of the Agile Manifesto as I can.


John - Andy, tell me what is going on with you lately.

Andy - Lots of new and exciting things in the works. At any given time we probably have 50-80 new titles that are being worked on. We keep coming out with what we think is interesting. That is the metric we use when considering a new title: will it be something that we would want to read? Unfortunately, what we see with a lot of other publishers is that the content may be interesting in an academic sense, but it is not helpful. If it does not help you with your job or with becoming a better person, then it is a waste.

John - I have found a few archetypal books in my career. In fact, remember last year at the IEEE session we met Fred Brooks. His Mythical Man-Month was such a book for me. Other books like Software Engineering Economics, Applied Software Measurement, Structured Analysis and System Specification, and so on.

Andy - You know, all those books have a timeless quality. Take Brooks' book, for example: substitute modern computers and operating systems for the IBM 360 / OS 360 and the book would still be useful to us. That book addresses problems with people working together, and that has not changed very much since the '60s. And that is the same issue that I have been focusing on in my recent work. We have better languages, we have better tools, we have better methodologies, but we're still having the same old problems.

John - So for you and in your career what have been your archetypal books?

Andy - It's interesting because some of them have been off the beaten path. For me, when a book hits you is as important as what it says. For example, Jim Coplien's "Advanced C++ Programming Styles and Idioms". That was one of those things where you get to the top of the hill and see over and say "Oh! It could be like this!" Just one of those aha moments. And with Brooks' book, I remember first reading it and thinking "Poor fools back in the day. We are so much better off today." I was a lot younger back then.

John - You said earlier that the thing that has not changed over the years is people. Their psychology, the politics, the cultural norms have not changed much.

Andy - And that is the same all over. You know, the one book that changed my life the most was "The Pragmatic Programmer". It is interesting how that came about. Dave and I did not set out to write a book. We were out doing the consulting game, advising clients on how to develop software more effectively, and we would tell a client anecdotal examples from other clients or things we had read on Usenet. Eventually, all this storytelling was taking too much time, so we decided to write a little white paper, a pre-read for a client before we came on board: a bullet list of ideas we had collected. And like many projects, this one had some scope creep and ended up being the first draft of our book. And we knew nothing about publishing a book. We got some advice from a friend of ours who was trying to become a novelist. He said, "Look on your bookshelf, pick the books that you really like, and approach that publisher." So we contacted Addison-Wesley, and we thought they would laugh us right out the door, but they sent us a contract. So we wrote the book, we typeset the book, we edited the book. From that experience we eventually realized that we could publish books ourselves, and in 2003 we started the publishing business.

John - One of the other people I interviewed for Patterns of Success is Ed Yourdon. He was also an author who started a publishing company (Yourdon Press). In fact, he bought one of the first commercial Unix systems so that he could typeset the books. Of course today, with places like Lulu and Amazon, it is easy for an author to self-publish.

Andy - It is easy to go from a Word document to a published book, but such books are missing several key elements. We assign a development editor to each author. We curate the proposals that come in. There is significant value added to the author's initial draft. For good books in the future, that team of author and editor must still be in place somehow.

John - What about the marketing side of the business? A good publisher will get the title and synopsis in front of a broad audience to create demand.

Andy - I have never cared for the word "marketing". It sounds too much like convincing someone to buy something that they don't really want. With a really good book, all you need is some publicity to let people know the book exists. And that is what we do: we make sure people hear about it. For example, one of our recent books, "Hello, Android", is just flying off the shelves because it is a really good book.

John - It's a really good book, it's dealing with a subject that is of great interest, and developers are flocking to the Android platform to strike it rich. But it is a good example of how a publisher can keep a book current. Android is coming out with a new version every six months. Unlike The Mythical Man-Month, which is timeless, the half-life of Hello, Android is about three months.

Andy - And we do offer e-books to our customers in addition to ink-and-paper books. And if a customer buys an e-book, they get automatic updates as the content is revised to keep it current.

John - So let's talk about some Patterns of Success. For the last few minutes we have been at more of a meta-level: how you get great concepts out to the public, like your publishing efforts. But I would like to look at the actual ideas that you have tried with customers that have been the most successful for them.

Andy - The number one thing that I have seen projects adopt that has brought the most success is feedback mechanisms. When writing the book "Practices of an Agile Developer" with Venkat Subramaniam, we needed to come up with a definition of Agile. So we said "Agile development uses feedback to make adjustments in a highly collaborative environment". The most successful project I was ever on had a highly technical user right next door. We got immediate answers to the issues that would pop up during development. The key is that it is all about feedback, and using the feedback. So at all levels of feedback, from pair programming to User Acceptance Testing, there are two parties, the producer of the feedback and the consumer of the feedback, and there are two activities: identifying and clearly communicating the feedback, and hearing and taking action on the feedback. Feedback will not work unless both parties and both types of activities are in place.

John - So at one end of our feedback spectrum we have the classic waterfall scenario where development gets requirements from a customer and then goes off for several months until User Acceptance Test when the customer may or may not get what they needed. If we could quantify the feedback instances it would be a low number. At the other end of the spectrum is the nirvana that you are speaking of with rich and continuous feedback that is immediately acted on.
Is it possible to have too much feedback?

Andy - It is always possible to overdo it. If you get feedback that is not telling you anything, then it becomes noise. However, since Agile encourages everything to be done in small increments, the feedback is not dumped all at once. It is not a torrent; it should be a gentle spring shower.
An analogy I use for this is how you drive a car. It is a continuous series of small adjustments to the car, the traffic, the environment. This has taken on a whole new meaning as I am teaching my child how to drive.

John - I don't know about you, but the way I learned to drive, was my dad took me out in our subdivision, where there was no traffic, easy straight roads, and taught me the fundamentals of gas/brake/steering before he took me out on the highway.

Andy - That is a great extension of the analogy. Many companies make a fundamental mistake in adopting Agile practices by trying too much, too quickly, taking people fresh out of school and tossing them on a mission-critical project that is already under an aggressive schedule. And of course it blows up. At several companies I have recommended that they create an A-list team, made up of the skilled, experienced people, to be used on difficult projects, and a farm team with the newbies and less skilled people, used on less critical projects. And they always tell me "What a great idea!" However, in most companies seniority and implied expertise are correlated with how many years you have been out of school. And that is not a good correlation. Another bad gauge of effectiveness is certifications. In my Wetware book, I quote a study showing that just acquiring knowledge in a subject area does not improve effectiveness. It is the experience of applying that knowledge to a real-world problem that makes people effective. So certificate programs by themselves are useless.

John - It seems that in addition to individuals needing education and then experience, teams made up of these individuals need to work together to gain a level of team effectiveness. One of the other improvements that Agile fostered was the small, self governing team, using frequent retrospectives to improve their effectiveness.

Andy - I think that is another area where we can fault our education system. It places so much emphasis on individual learning and accomplishment. When people get together to solve a problem they call it cheating. So recent graduates have no experience with pair programming or continuous builds.
Another fundamental problem I see is that humans are not wired to do Agile. I wrote a blog recently on Why Johnny can't be Agile that points out some of the conflicts between recommended Agile Practices and the way our brain works.

John - So do you overcome these innate deficiencies through process or checklists? How do teams learn to do things correctly?

Andy - Same way we do anything else: through awareness and feedback. Awareness is a big thing. A lot goes by that we are oblivious to. It's funny, but as consultants we go into a company, and after a while we make some observation and suggestion, and the client is amazed at how insightful we are, when really it is that they have become so used to the situation that it is invisible to them.
There is an undercurrent in most of the current literature on adopting Agile, that says you will not be successful on your own. You need an outside coach who can observe with a fresh pair of eyes.

John - Of the Agile Practices, of which there are dozens, which do you think are the most challenging for a company to adopt?

Andy - Pair programming is really hard; retrospectives (when done correctly) are difficult. At the other end of the spectrum there are some practices that are easy to adopt and have a tremendous bang for the buck. The daily 10-minute stand-up meeting is a great way to communicate status within a team and improve cohesiveness. Unit testing based on TDD-style thinking is another big win. Both unit tests and stand-ups are a variation on the feedback success pattern.

John - So let's shift gears. You thought that the number one success factor was having feedback loops. Over your career, when you have encouraged clients to embrace feedback loops, what have been some of the dramatic failures, and why?

Andy - I think the primary reason is you get feedback that is politically undesirable. People who are made aware of a problem choose to ignore it because they would have to admit they made a mistake, and they fear they will lose power, prestige, or their job. All projects have symptoms and warning signs of failure. Teams are always aware of what the problems are but are reluctant to make hard decisions. Sometimes a problem is known at the team level but kept hidden from management; sometimes the team reports the problem to immediate management, but they hide it from upper management. The hierarchical reporting structures in companies often impede corrective actions.

John - I did a blog post a while back on Learning from Mistakes. There are some institutionalized practices including the Failed Product Review at companies like Toyota, or the US Army After Action Report. I compared them to Retrospectives. In all cases for the process to work the culture must have a degree of trust where people will not come to harm because they share their opinions.

Andy - We often tell clients, "Fix the problem, don't fix the blame." I think smaller, more entrepreneurial teams are more able to accomplish this because they share a common vision and have gelled as one team.

John - It does seem that the larger corporations have more difficulty in general with adopting these practices.

Andy - I think in industries with non-tangible products, we are seeing a shift towards smaller companies. The barrier to entry in software is becoming lower and lower. So we see two- or three-person companies developing remarkable initial products.

John - Of the dozen or so interviews that I have done so far in Patterns of Success, I think you are the third or fourth person to mention this phenomenon of micro-sized companies that quickly enter the market.

Andy - There is a great book by Paul Graham called Hackers and Painters in which he tells the story of how he built a storefront generator with a small team using server-side Lisp. They could envision and deploy a new feature in an afternoon, when their competitors were looking at a six-month release cycle. They kicked butt, they got bought out, and it became Yahoo! Store.
At one point after the Manifesto was signed, there were a bunch of us, Fowler, Cockburn, at a speakers' table at a conference someplace, and the question was put to the table: "If you could do one thing for a team to improve their productivity, what would it be?" And to a person the common answer was "Fire the bottom two-thirds of the team." Get a small number of really sharp people and you will be better off.

John - Sounds like a corollary to Brooks's Law. Instead of "Adding people to a late project makes it later," it becomes "Adding people to a project makes it later." It does not matter whether you are late or not.

John - So let's move on to THE NEXT BIG THING. If we were doing this interview again in three years, what would be the new idea taking hold?

Andy - I don't know if I could pick out something specific, but in general we see the acceleration of the fact that mobile rules. The desktop computer will become less relevant to how we work. So we are migrating from desktop to laptop to tablet to smartphone.

John - So in addition to becoming smaller, they are mobile. How will this mobility affect our work?

Andy - It makes a huge difference. I follow the music industry (Ed: Actually, Andy does more than that. Check it out) and look at how it has changed: from a physical vinyl or cassette collection, to archives of mp3 files downloaded to devices but still a personal collection. Now, with always-available high-speed internet connections, we are seeing a streaming mindset where you subscribe to a cloud-based music universe of all recorded music. That's a huge change. If you look at books or software delivery, there is lots of innovation based on being always connected to the cloud.
Dave and I run our business off our laptops. We can do anything we need to do from that platform.

John - And pretty soon you will be able to do it off your iOS or Android platform.

Andy - I can almost run the business from my iPad now. There are just a few things that we need when editing and building books that are not on the iPad. Yet.
Look at what that will do to our society. Look at our transportation system based on commuting patterns, our housing based on commuting patterns, taxation based on commuting patterns. When we can work when and where we want, the social landscape will change.

John - Thanks for sharing your insights with us.

Wednesday, March 30, 2011

Patterns of Success - Sam Adams


I first met Sam Adams back in 1992. I was an independent consultant giving advice on object technology and Sam was working at Knowledge Systems Corporation, helping customers learn how to develop applications using Smalltalk.
He had this kind of magic trick where he would sit in front of the computer and ask somebody to describe a business problem and as the person was talking he would be building the application in front of your eyes. Every 5-10 minutes he would present the latest iteration and ask if this was the solution he/she was talking about. Very Agile development before its time. Sam and I both moved on to IBM where we were part of IBM's first Object Technology Practice. In 1996, Sam was named one of IBM's first Distinguished Engineers and has spent the past 10 years in IBM Research.

John - Thanks for joining me on the Patterns of Success interview series. What kind of projects have you been working on recently?

Sam - Last year I worked on IBM's Global Technology Outlook (GTO). Every year IBM Research goes through an extensive investigation of major trends and potential disruptions across all technologies that are relevant to IBM's business. My GTO topic area was peta-scale analytics and ecosystems. This topic emerged from our thinking about commercialization of our current BlueGene high performance computing technology as we push higher toward exascale computing. Another major influence was the coming disruptions in systems architecture anticipated when very large Storage Class Memories (SCM) become affordable over the next 5 years.

John - Let me calibrate this another way. When you talk about BlueGene and the peta-scale, how does that compare to the recently popular Watson computer that won the Jeopardy! match?

Sam - In terms of raw computing power, Watson is about an order of magnitude less powerful than a BlueGene/P, which can provide sustained calculations at 1 petaflop.

John - That helps.

Sam - Another trend that we considered, and an area I have been working on for the last three years, is the single-core to multi-core to many-core transition. How are we going to program these things? How are we going to move everybody to a massively parallel computing model? One problem we are working on is that CPU availability is no longer the limiting factor in our architectures. The most critical factor these days is I/O bandwidth and latency. As we move to a petaflop of computing power, we need to be able to feed all those cores, as well as empty them of results, very, very quickly. One of the things we realized is that this scale of compute power will need a new model of storage, something beyond our current spinning-disk-dominated approach. Most current storage hierarchies are architected assuming that CPU utilization is the most important factor. In the systems we envision, that is no longer the case. Current deep storage hierarchies (L1 - L2 - DRAM - Fast Disk - Slow Disk - Tape) have lots of different latencies and buffering built in to deal with the speed of each successive layer. Petascale systems such as those we envision will need a very flat storage hierarchy with extremely low latency, much closer to DRAM latency than that of disks.

John - It seems to me that one of the more significant successes in this area has been the map/reduce, Hadoop movement used by Google for their search engine. How does the research you are working on compare/contrast to this approach?

Sam - We see two converging trends: the supercomputing trend, with massively parallel computing being applied to commercial problems, and the trend of big data / big analytics, which is where Hadoop is being used. The growth of data on the internet is phenomenal, something like 10-fold growth every five years. The business challenge is how you gain insight from all this data and avoid drowning in the flood. Companies like Google and Amazon are using Hadoop-style architectures to achieve amazing results with massive data sets that are largely static, or at least "at rest". In the Big Data space, we talk about both data-at-rest and data-in-motion. The storage problem and map/reduce analytics are largely focused on massive amounts of data at rest. But with data-in-motion you have extreme volumes of fast-moving data with very little time to react. For instance, imagine a stream of data like all the transactions from a stock exchange being analyzed in real time for trends. IBM has a product called InfoSphere Streams that is optimized for such data-in-motion applications.
So the combination of many-core supercomputers, data-at-rest analytics, and data-in-motion analytics at the peta-scale is where the leading edge is at today.
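(Ed: For readers unfamiliar with the map/reduce model Sam mentions, here is a minimal single-process sketch of the idea using a word count, the classic example. Real frameworks like Hadoop distribute these same phases across many machines; everything here, including the function names, is just illustrative.)

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (key, value) pair for every word in a document.
    for word in doc.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key. In a distributed system
    # this is the step that moves data between machines.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: fold each group of values into a single result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data at rest", "data in motion", "big big data"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
# counts["data"] == 3, counts["big"] == 3
```

Because each mapper and each reducer works independently on its own slice of the data, both phases can fan out across thousands of machines, which is what makes the model a good fit for data-at-rest analytics.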

John - So with data-in-motion stream analytics, is one not limited by the performance of the front-end dispatcher, which looks at each event in the stream and then decides where to pass it? If the stream keeps doubling, will that component not eventually choke?

Sam - Everything is bound by the ingestion rate. However, the data is not always coming in on the same pipe. Here you are getting into one of the key architectural issues: the system interconnect. Most data centers today use 1 GbE or 10 GbE interconnect bandwidth. This becomes a bottleneck, especially when you are trying to move hundreds of terabytes of data all around the data center.

John - So as much as we hold Google up as a massive computing system, with its exabytes of storage and its zillions of processors, it is dealing with a very parallel problem, with all the search queries coming in over different communications infrastructure to different data centers and touching unrelated data sets. Compare this to a weather forecasting application that can divide the problem into separate cells for parallel operation but must assemble all the results to produce the forecast.

Sam - The most difficult parallel computing problems are the ones that require frequent synchronization of the data and application state. This puts a severe strain on I/O, shared resources, locking of data, etc. 
At the end of the day the last bastion we have for performance improvements is in reducing the latency in the system. And to reduce end-to-end latency, we must increase the density of the system. Traditional chip density has just about reached its limit because of thermal issues (There has been some work at IBM Zurich that could shrink a supercomputer to the size of a sugar cube). Beyond increasing chip density there has been a growth in the number of cores, then the number of blades in a rack, and the number of racks in a data center, and the number of data centers that can be shared for a common problem. While each tier of computing increases the computing power enormously, the trade-off is that the interconnect latency increases significantly and eventually halts further improvement in overall system performance.
One big area for innovation in the next 5-10 years will be how we increase this system density, primarily by reducing the interconnect latency at each computing tier. The ultimate goal would be for any core to access any memory element at almost the same speed.

John - So in your area of research on high performance computing, particularly working with customers who have tried to adopt some of these emerging ideas, what have been the successful outcomes, and did customers do anything special to be successful? I guess because you are in IBM Research, even the work with a customer is considered an experiment with a high risk of failure.

Sam - If you look at the whole shift towards massive parallelism, the successes have, unfortunately, all been in niches. I say unfortunately because we would love to have some general solution that applies to all computing problems. Take the example we spoke of earlier, Google using massively parallel computing to solve its search problem. They have optimized their solution stack from the hardware up through the OS to their application architecture. It solves their problem, but it is a niche solution.
The functional programming folks have introduced languages like Haskell that support concurrency and parallelism. The problem with functional programming is that the programming model provided in the various languages is not intuitive enough and is difficult for the large majority of programmers to grasp. Contrast this with the success of the object-oriented movement: the programming model mapped cleanly onto the real world and still allowed the programmer to manage the organizational complexity.

John - And in the OO programming model each object is separated from other objects by a defined set of sending and receiving communications. So, in theory, these objects could be distributed and run concurrently.

Sam - We need something like that to be successful with high performance parallel computing... a programming model that allows someone to develop in the abstract without explicitly thinking about the issues involved with the underlying system implementation, and then a very clever virtual machine that can map the code to the chips / cores / blades / servers / data centers so that the best performance is achieved.
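(Ed: A rough, present-day analogue of the "develop in the abstract, let the runtime map the work to the hardware" model Sam describes is an executor pool: the programmer writes a plain map and leaves scheduling to the library. This sketch is far short of the clever virtual machine he envisions, but it shows the shape of the idea.)

```python
from concurrent.futures import ThreadPoolExecutor

def cost(n):
    # Some per-item work; the caller never says which worker runs it.
    return sum(i * i for i in range(n))

def parallel_map(fn, items):
    # The programmer states *what* to compute; the executor decides how to
    # schedule it across workers. (A thread pool keeps this sketch simple
    # and portable; CPU-bound work in CPython would use a process pool.)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, items))

results = parallel_map(cost, [10, 100, 1000])
# results[0] == 285 (the sum of squares 0..9)
```

The gap Sam points to is that today the programmer still chooses the pool type and size by hand; his envisioned runtime would make that mapping decision, across cores, blades, and data centers, automatically.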

John - It seems like some of the successes have been because the nature of the problem happened to fit the ability of the technology at that time. 

Sam - To a point. For example, in the Google search problem it is often quite challenging for the programmer to figure out the map and reduce details so that the job runs efficiently. So the successes have been in niche areas where the structure of the application could be exploited to use parallelism successfully.

John - Like with weather forecasting. Because the forecast is based on the combination of many cells, with each cell representing the physical conditions within a given space, the calculations for each cell are the same, with the results varying depending on the initial conditions. To increase the accuracy of the forecast, increase the number of cells in the model. The algorithm stays the same; you just need more resources.

Sam - If you increase the number of cells (for example, going from a 10 km resolution to a 1 km resolution), you also have to increase the frequency of the calculation, because the physical conditions change more rapidly for any one cell at that resolution. This requires a lot more resources, but the algorithm does stay basically the same. An excellent example of a niche solution. IBM Research actually did this with a project called Deep Thunder.
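(Ed: The cell-based structure John and Sam describe can be sketched as a stencil computation. This toy version uses a hypothetical one-dimensional grid and a plain three-point average as the per-cell rule; it is not real forecasting physics, just the shape of the algorithm.)

```python
def step(cells):
    # One time step: every interior cell is updated by the SAME rule
    # applied to its own neighborhood -- here, a three-point average.
    # The rule is identical for every cell; only the inputs differ,
    # which is why each step parallelizes so naturally across cells.
    n = len(cells)
    return [
        cells[i] if i in (0, n - 1)  # hold boundary cells fixed
        else (cells[i - 1] + cells[i] + cells[i + 1]) / 3.0
        for i in range(n)
    ]

def simulate(cells, steps):
    # Doubling the resolution doubles the number of cells AND, as Sam
    # notes, typically requires more (smaller) time steps as well.
    for _ in range(steps):
        cells = step(cells)
    return cells

# A small initial disturbance in the middle of the grid diffuses outward.
state = simulate([0.0, 0.0, 10.0, 0.0, 0.0], steps=50)
```

Note that while the cells update independently within a step, every cell needs its neighbors' values from the previous step, so the workers must synchronize at each step boundary, exactly the kind of frequent synchronization Sam identified earlier as the hard case.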

John - Now tell me about some failures to launch: examples of where the technology just did not work out as expected, and some of the reasons why.

Sam - Rarely do I see the issue being the emerging technology itself. More often, it is the surrounding ecosystem of people, business models, and other systems not being willing to adapt to the disruption the emerging technology introduces. Could we have built an iPhone thirty years ago? Well, maybe. But it would not have mattered. The ecosystem was not in place: a wireless internet, an app store business model, third-party developers building apps like Angry Birds or Twitter, a generation of consumers familiar with carrying cell phones. All these elements needed to be in place. Somebody has to come up with a compelling application of the emerging technology that demonstrates real value in order to move people over Moore's chasm.

John - So bringing us back to the area of emerging high performance computing... Is this a reason why IBM develops computers like Watson? To demonstrate a compelling application of the technology?

Sam - We tackle these grand challenge problems for a couple of reasons. One of them is to actually push technology to new levels. But the other is to educate people on what might be possible. After developing Watson to solve a problem on the scale of Jeopardy! we will see pilots using data in fields like medicine and energy and finance. Domains that have enormous amounts of unstructured data. 

John - Final topic is THE NEXT BIG THING. In the area of high performance computing what do you think we will see in about three years that will be a disruptive innovation?

Sam - I think there will be widespread adoption of storage class memory. This means hundreds of gigabytes to petabytes (on high-end systems) of phase change memory or memristor-based memory. Flash memory will be used early on, but it has some issues that will not let it scale to the higher end of what I envision. What you are going to see is a movement away from disk-based systems. Even though disks will continue to decrease in cost, you reach a tipping point where the cost of storage class memory is cheap enough when you consider its 10,000 times lower latency.
The other significant change will be the many-core processors available for servers. By many-core, I mean at least 100 cores. This will dramatically increase the capacity for parallel processing on typical servers and open up fresh territory for innovation.
Taken together, these two trends will produce systems that are very different architecturally from those we see today. For example, we will see the emergence of operating systems based on byte-addressable persistent memory instead of the classic file metaphor. Content-addressable memories will also become more common, which will support more biomorphic styles of computing.

John - So if this three year projection of many-core processors and storage class memory comes to pass, how will our day-to-day lives be different?

Sam - I think you will see a lot more mass customization of information. Custom analytics, tuned to your needs at that time, will produce predictions of what you might be interested in at that very moment. Aside from the obvious retail applications, like the shopping scene in "Minority Report", think how this could impact healthcare, government, engineering and science. Consider how these timely yet deep insights could affect our creativity.

John - Thanks for sharing your insights with us Sam.

Wednesday, March 23, 2011

Patterns of Success - Ward Cunningham

When I joined the Object Technology Practice at IBM, Sam Adams taught me about a cool way of capturing an object model called CRC. He had gotten this technique from an ex-colleague at Tektronix... Ward Cunningham.
My use of CRC and other personal interactions with Ward are covered on my web site.
As his LinkedIn profile states:
 "I have devoted my career to improving the effectiveness of technical experts, mostly by creating new computer tools, but also by radically simplifying methods."


This will be the focus of my interview with Ward for Patterns of Success.


John - Ward, thanks for taking the time for this interview. As I explained in the email I am looking to cover three topic areas:

  • Patterns of Success
  • Failures to Launch
  • THE NEXT BIG THING
But first I wanted to ask you about the work you are doing at AboutUs as the Chief Inventor. I got an account at AboutUs back in 2008, but never really used it that much. Then in preparation for this interview I thought I would go back and dust it off to become familiar with the changes. I currently use Google Sites, Blogger, LinkedIn, and Twitter to give eTechSuccess an internet presence. What is the value add that AboutUs will provide me?


Ward - I would say our focus now is on helping small businesses use those services, and especially on search engine optimization. We realize that what matters is getting traffic to the site, getting the right people to the site.


John  - So in the areas of Patterns of Success, what are some patterns that you have seen over the years?


Ward - I think there are a couple of different kinds of success. One is getting your job done on time. And the key there is not to make the job bigger than it needs to be. We are sometimes unsure of what we are supposed to do, so we do everything we might be asked to do. Sometimes developers avoid having a conversation with the customer asking, "Would it be OK if we just did this?" A large part of Agile is the notion that we plan often, so we do not make these giant plans of everything we might want. Instead we say, "Maybe we should do the first half and see if maybe that's enough."


John - Is that just a matter of not knowing where we are going until we get there, meaning these big plans try to anticipate things way out in the future, OR is it that we think better in the small, in smaller units of complexity?


Ward - It's more that it's easy to imagine software that has an almost unbounded number of problems that you don't think about in the beginning. For example, I once wrote a report program that sorted on the first column only, in ascending order. People told me it was a terrible program.


John - Why did they think it was terrible?


Ward - Oh, because it should sort on any column, or select any combination of columns to sort on. And the problem was not programming it that way, but that I could not make an easy-to-use interface that would explain how it would work. When my users opened my report sorted on the first column, it was easy to understand and they could get on to reading the report.


John  - So it was good enough to get the job done?


Ward - It was good enough for the moment. In XP terms that would be called taking a split. Let's split the functionality into a release that is basic, and then talk about adding extras in a later release.
So the idea is to be willing to do less, and that is a skill that comes with confidence. If you do not feel that you need to defend your programming ability or your ability to conceive a system, then it is easier to do something in a minimal way. I think that has grown into the concept of a minimally marketable product. That is at the product level, but as an individual programmer it sure feels good to get something done at the end of the day.
So a very important skill is the ability to separate out of a big project lots of little projects that are worth doing and doing quickly.


So that is one type of success. But I want to shift to another kind of success that I call exceeding expectations. When it comes to exceeding expectations I have a little saying... "The path to exceeding expectations probably does not go through meeting expectations."
In other words, if you are going to delight somebody, you are going to give them something that they didn't expect. So if the first thing you do is everything that is expected, and the second is something beyond that... it is too linear. It is like delivering the asked-for twelve sort functions and saying you are exceeding expectations by giving the customer fourteen sort functions.
For example, nobody asked for wiki, so how was it that I was able to make something so popular? Well, there is a certain minimalism that allowed me to make it, but more important, there are things in there that were not expected, like spec linking, just because I was playing around with HyperCard and trying to figure out what it could do. So instead of trying to meet expectations, you have to redefine the problem and ask, what if they asked for this? Could I do that better than this?
One thing I discovered pretty early on is that if I went into staff meetings and delighted people with one thing they would forget about all the things I was supposed to do.


John - But there is a kind of genius... an inventing light bulb going on in the developer's head when they're listening to the customer saying what they want and offering up the unexpected. There's something about their own domain knowledge, thinking outside the box, their inventiveness that allows them to give back the unexpected. What is it? Are there just some individuals who can do this? Or is there a prescription that someone can follow to achieve the result?


Ward - The formula is to do a lot of it. Over many attempts to build software, you build up patterns that you can draw on to solve the next problem. I look back at my own career, and I started computer programming for fun. I did not take the class my high school offered when they got a computer; instead I sneaked in during my free period, made up problems, and solved them. And even during my professional career, I have done a lot of good work for my employers and clients, but the stuff I am most known for I just did for fun. That willingness to invest your own time in a project gives you the freedom to turn a problem around and play with different solutions.


John  - We've been speaking of wiki as a collaboration tool. Have you had a chance to play around with Google Wave?


Ward - Yes. I thought Wave was fantastic. I told people that Wave was more like wiki than wiki. I think one of the things that happened to Wave was that people did not know how to write in the medium. When wiki first started, people did not understand that you need to revise the document relentlessly to make it match your current understanding.
People ended up using Wave in a very conversational way instead of this emergent-document way. When they could not get in touch with the people they needed to, they would just stop using Wave.


John - Well, let's hope Google has learned some patterns from Wave and refactors that knowledge into some of their new and improved services.


Ward - If we want to talk about a Failure to Launch, then Wave would be a good example. You need a critical mass of participation to be successful. That is also a classic problem with wikis. Companies will tell me that they need some of that wiki stuff, and when it fails, it is because the community around the wiki never formed correctly. First, people need to be given a sense of what they are supposed to do in the wiki. Then you have to help them do it until they get good. Ah! Here is a formula for being successful at propagating ideas:


  1. You have to have a technology, a computer tool that supports the propagation.
  2. You have to have a methodology, a way to use the tool to deliver its promise.
  3. You need to have a community, the correct number of people using the tool and following the methodology.
In fact, I talked to a group at Microsoft once, and they told me that they had a wiki but were not getting much use from it. I asked them how they were using it, and they told me they would put meeting notes in. I asked what the entry was called, and they said "Meeting notes December 19th." I said that entry name did not roll off the tongue... they were just replacing a paper system. They did not have a methodology... so I gave them one: at the end of the meeting, take the last five minutes to ask what the three most important ideas surfaced were, and what the proper name for each idea would be, so that it becomes a page on the wiki and enters the vocabulary of the community. This is a style of note taking that gives the wiki power.


John - Have you written down this methodology of how to use a wiki?


Ward - No, you might get me started on that, though. There is a nice book on wikipatterns that I wrote the foreword for. It included several patterns for how an organization should launch a wiki.


John  - So give me another dramatic failure to launch.


Ward - Hmmm. You know, most of my ideas flop. But each failure sensitizes me to the missing element for success... whenever I fail, it teaches me something I don't know how to do.


John  - Let me change gears to our final topic... THE NEXT BIG THING. What do you think it will be three years out?


Ward - I recently made a prediction for Cutter Consortium. Something that could happen but isn't happening. And that something is in the way systems are evolving... Software as a Service. I think that there will be refactoring across system and organizational boundaries. We need to allow APIs to evolve without allowing things to break.


John - What technology does this refactoring run on?


Ward - Well, I have seen something in the Eclipse platform called refactoring scripts. I can save a refactoring script and send it to you, and you can run it against your program without me having to know the internals of your program. In order for this to work, I would need dozens of examples of use of my API, including refactoring scripts that can be applied to my demo programs. As part of my SLA, I would promise to provide refactoring scripts for each of my API demo programs whenever I made a change to the interface.
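(As a rough illustration of the idea Ward describes: a refactoring recorded as data, shipped by the API provider, and replayed against client code. The script format and the `apply_refactoring` helper below are hypothetical, invented for this sketch; Eclipse's actual refactoring scripts are recorded by the IDE and operate on Java code.)

```python
# Hypothetical sketch of a shareable "refactoring script".
import re

# A script the API provider ships alongside a breaking change:
# "we renamed place_order to submit_order".
rename_script = [
    {"kind": "rename", "old": "place_order", "new": "submit_order"},
]

def apply_refactoring(source, script):
    """Apply each rename in the script as a whole-word substitution."""
    for op in script:
        if op["kind"] == "rename":
            source = re.sub(r"\b%s\b" % re.escape(op["old"]), op["new"], source)
    return source

# The consumer runs the provider's script against their own code,
# without the provider ever seeing that code.
client_code = "order_id = api.place_order(sku, qty)\n"
migrated = apply_refactoring(client_code, rename_script)
# migrated is now "order_id = api.submit_order(sku, qty)\n"
```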


John - So let's use a concrete example. Suppose we both work at Walmart and are working with Procter & Gamble on a new Order Management System. We have an API and would send P&G some refactoring scripts that could modify their Order Management System. Right?


Ward - We have this dream that if we hold the API constant we can change anything behind it without impacting our users. But I believe that is false. Anything worth doing is exposed through the API.
If this is going to really work, we will need to evolve those services so they do not have intimate knowledge of the other side. That's why I say refactoring across organizations: suites of demo programs, and scripts that can be applied to them, available to the community of cooperating companies.


John - So if I go to the Eclipse Foundation, will I find an example of refactoring scripts?


Ward - Yep. In fact, when I was doing research for my Cutter article, I found a blog post that gave that example, but it did not suggest refactoring across organizations. In fact, when I spoke with the authors, they did not think it would be a good idea. They thought the API should remain stable. But if we are going to make something that has emergent properties, instead of rewriting big programs over and over again, we need to figure out a way for them to evolve.


John - Well thank you for taking the time to speak with me and share your ideas.





Friday, March 4, 2011

Mobile Device Dilemma part IV

I finally upgraded my mobile device from a BlackBerry Bold to an HTC Inspire. Many of the trade-off issues are documented in parts I-III. The four BlackBerrys below are my last four mobile devices (photo taken with the HTC Inspire).



My AT&T contract for the Bold was due to expire this April, and that was the trigger for me to upgrade. In the process of getting the Inspire, I considered a few other options. Below are my factors in deciding to go with the Inspire.

Atrix - This device was available at the same time as the Inspire, and while I liked the screen resolution, I was not attracted to the screen size. However, the main reason for not going with the Atrix was that I would not take advantage of the peripheral options like the laptop and multimedia docks.

Infuse - This was a close second in my decision process. However, I did not want to wait until May/June to get the device, and Samsung's build quality and Android update frequency had turned me off. I was attracted to the Super AMOLED+ display with Gorilla Glass, but not enough to wait.

Samsung S2 - While announced for Europe at Mobile World Congress, this device would probably not arrive on AT&T until late summer. I am not sure I would get that big a boost from the dual core. I was intrigued by the Near Field Communication chip, but I think that is something I will latch onto in my next phone two years from now.

Thunderbolt - The "4G" from AT&T is a joke out of the box. If I had been willing to take my family plan over to Verizon, I could have jumped on LTE. However, I decided that where I live in Raleigh would not light up until midsummer, and I did not want to wait.

Given all the above options, I chose the Inspire because of HTC's build quality, and the fact that in a few weeks I will root the phone to get much faster network data speeds. Stay tuned for my adventures in rootville.

Twenty-four hours later:

  • I am still getting used to the on-screen keyboard but can see myself going up the learning curve.
  • The battery life was a concern before purchase but is not really an issue for me now. I have the power saver enabled and have not run low yet. With my typical use, I got a full fifteen hours.
  • I have been downloading some apps and widgets from the HTC Sense Hub and the Android Market. Very easy. Instead of using one of the six alternative screens beyond Home, I find myself opening the all-apps view and scrolling around. Figuring out a better way to get to a specific app is something I need to work on.
  • I saw a post on malware in Android apps and decided to purchase an anti-virus app (Anti-Virus Pro).
  • Finally, I am disappointed that there is not tighter integration with my Google Apps account. Gmail, Calendar, and Contacts are well integrated, but others like Docs are based on a web interface, and you have to manually create a bookmark or shortcut to access them quickly.

Wednesday, March 2, 2011

Patterns of Success - Jim Stikeleather

One of the benefits of our modern social networking tools like LinkedIn is being able to meet people virtually. Jim reached out and invited me to join his network on LinkedIn. For people I have not met before, I like to review their background a bit before hitting the accept button. In Jim's case, he had been CTO at Perot Systems, MeadWestvaco, and others. I asked him if he would be interested in participating in Patterns of Success, and he said yes.













John - Thanks for spending some time with me on this interview. First off, how did you find out about me? What drove you to send me an invitation on LinkedIn?

Jim - The tool itself makes recommendations based on common connections. We had several people in our intersecting networks so I asked you to join my network.

John - By the way, have you played with the LinkedIn Social Map?

Jim - Yes! It is very interesting how it clusters individuals into different groupings that show the concentrations of your career over time. I got 6-7 clusters, mainly associated with companies I had worked for.

John -  Tell me about what you do at Dell Services as the Chief Innovation Officer.

Jim - We are still forming the Innovation Group here at Dell Services. We have worked up the team's initial charter, and our charter is likely to be a constant work in progress – in fact, the role of an innovation office always should be a work in progress. In prior work at places like Perot Systems, where I was CTO, I was looking over the horizon at emerging technologies and figuring out their impact on our business. That is sort of what I am doing at Dell, but at Dell the CTO is much more focused on products with an 18-24 month horizon. So, Innovation is the new title. As we were combining the acquired Perot Systems into the existing Dell Services, we decided to create this office that would look over the horizon farther and more broadly than the current CTO does. Initially we needed to get a good definition of what innovation is at Dell: how to measure it, how to know when you are successful. We also needed to develop a repeatable innovation process. In most companies, innovation occurs in an ad hoc fashion, almost by accident.
We have defined the process and it starts with Visioning. What we do in Visioning is to look at environmental trends. Trends in laws or culture or business that could influence the adoption of technology. For example, there are more and more laws dealing with privacy on the internet. How will these laws impact current application portfolios or future development?
We don't look initially at trends in technology because we feel that the legal/cultural/business trends need to be in place first before a technology will take hold.
So we paint this picture of the direction the world wants to move, and then use techniques like Metcalfe's Law to understand the value of connections between the different trends. This helps us decide which technologies to focus on and understand which applications of the technologies could bring the most value.
Next we go into an Innovating phase where we will pick a promising technology and do some trial applications to see how well it really works. Based on these results we will select a few to take into the final phase of Production.



John - So when you are doing these steps of innovation is this only for Dell Services OR are you creating a services offering to take to your clients?

Jim - We are taking this down two paths. One is for Dell Services, but the other is an offering for our customers. For example, a customer who has a particular problem and wants to issue an innovation challenge to solve it. We can help the customer understand where innovation can be applied, both to Products/Services and to the Processes used to manufacture and sell those Products/Services. So a customer can issue a challenge to some community (inside the company, outside the company, or both) to feed ideas into the process that changes Products/Services/Processes.

John - A while back I listened to a lecture on YouTube by Douglas Merrill about innovation at Google.



He described innovation as being a combination of transformational, incremental, and incremental with a side effect. Do you see innovation in similar shades?

Jim - Yes. We see innovations that, if applied to existing Products/Services/Processes, are almost Six Sigma continuous improvements. Then as you move farther away from existing Products/Services/Processes, the changes are larger and require organizational change or new technologies. Finally, if it is a completely new business model with a completely new technology, then it is a game changer... a disruptive innovation.


John - Is there a correlation between the high risk/reward of a game changer vs. the low risk/reward of an incremental innovation?

Jim - That is where the practical and academic literature just falls on its face. I don't think there is a correlation. You can't predict the financial reward from the potential of the disruption. Geoffrey Moore talks about companies that focus on differentiating parts of their business falling into one of three categories. The first is when you are competing in a market with all the other competitors; then you need to be always optimizing to compete against them. The second is when the market shifts and you need to innovate to keep up. Finally, there is the opportunity to be in a new market. This is where I disagree with some of my colleagues, who think you need to move your company to where that market is. I think you should try to move the market to where your company is already able to operate effectively. The problem is figuring out where the market is going to move. There is a famous quote attributed to Henry Ford: "If I had listened to all the market researchers I would have built a faster horse."
I think the key is not to swing for home runs. Instead try out several innovative ideas at relatively low investment and see which ones gain market acceptance before investing significantly in those innovations.

John - That reminds me of what I learned about McDonald's innovation program. This was many years ago, so it might have changed, but at the time they had an innovation program that would collect ideas for changes to the restaurants. Hundreds would enter an evaluation cycle, and an initial short list would be formed based on analysis. Then the short-listed ideas would each be tried out in a test restaurant. Those that worked out well would be rolled out to the entire franchise. What I thought was really exciting was that McDonald's saw this as an ongoing program of improvement and waited until an innovation was proven before investing heavily.

Jim - We tell people that innovation is not R&D. R&D is about taking capital and turning that into knowledge. Innovation is about taking knowledge and turning it into capital. The key with innovation is to discover how to modify what I already know into something that is better. One of the neat things that is starting to happen is that with cloud computing platforms a start-up can try out a new innovation at very low cost. I think you will see more and more of the innovation taking place in small start-ups because the cost to fail is so low. Then if they reach a point of sustainability they will be acquired by a larger company.

John - I did an interview with Ed Yourdon a few weeks back, and we arrived at a similar point of view. He thought that with the advances in mobile technology and cloud computing, we would see new apps and businesses created by high school students, quickly creating apps that go into app stores, with some of them becoming wildly popular.

Jim - Right. As the technology has advanced and the costs have come down, the cost to fail has been reduced, so people are more willing to try something out.

John - Over the last few years, as you were dealing with these waves of technology, what have customers done in innovation to be successful?

Jim - That's an extremely interesting question. I suspect that going forward the patterns might be different than they were in the past. In the past, innovations were driven largely to satisfy the business world. What you are seeing now is a lot of innovation being driven from the consumer side: mobile devices, converged communications, social networks like Facebook. All of these game changers were initially developed to satisfy a consumer need. Business innovations often followed from them. On the consumer side of innovation there is less concern about being perfect. If it is good enough and offers the opportunity for follow-on improvements, then the rate of innovation goes up.

John - Well, let's use Facebook as an example of a success. We even have a movie we can use as a reference.


We have the initial game-changing idea launched after several weeks of furious work, then the business becomes almost self-sustaining, because all the real value is in the content created by the users. And the more users you have, the more people want to join. There is a critical mass of success driven by the participation of the consumer.

Jim - The value proposition on the social media side really follows Metcalfe's Law: the more connections you have, the more you value the network. And with Wikipedia, it is a source of information that is good enough. It is not as authoritative as a refereed research paper, but it is your first source of information, and because each article has a network of authors, you can be sure of its currency.
So the real value of a product/service is no longer its stand-alone value, but its value in the context of its ecosystem.
So one of the predictors of success will be a company's ability to create a network, an ecosystem, around its product/service.
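(Metcalfe's Law, which Jim invokes above, values a network in proportion to its number of possible connections, n(n-1)/2 for n users. A quick sketch of why adding users compounds value:)

```python
def metcalfe_value(n):
    """Number of possible pairwise connections among n users (n choose 2)."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the possible connections,
# which is why network effects compound.
print(metcalfe_value(10))   # 45
print(metcalfe_value(20))   # 190
print(metcalfe_value(40))   # 780
```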

John - It is always easy in retrospect to highlight one of the big winners of today and talk about how innovative they were starting out. But I do believe that a lot of the success is a matter of luck: having the right product/service available when the market is ready for it.

Jim - In management theory there is this thing called superstitious learning: we were successful, we did the right things, we followed our strategy, when in fact we were lucky. With the lessons you take away from these successes, you need to maintain the context.
The key to long term success is that once you get lucky and have an initial success, can you execute as a business to grow the success?
I think a very common pattern of failure is a company that is initially successful, then cash-starves itself and is not able to meet market demand.

John - The flip side to success is failure. I call it a Failure to Launch. You have mentioned a few patterns of failure. Any other examples?

Jim - Oh wow! It's funny, because you tend to forget the failures, but those are the ones you should remember best. One I always felt bad about was Convergent Technologies. They built AT&T's first Unix servers; they also built computers for Unisys and Burroughs. They were brilliant engineers. They built a tablet computer called the Workslate that had a word processor, a spreadsheet, and a voice recorder. A very early iPad. It was ahead of its time, and the market ecosystem was not ready for it.
Along the same branch of technology we have the Apple Newton. The device itself had remarkable stand-alone technology, but because it predated wireless connectivity, it could not link its user to the wider world. In general, while PDAs were not a dramatic failure, they did not take off nearly like the smartphones of today.

John - Final topic is THE NEXT BIG THING. What do you think will be a game changing innovation that we will be talking about three years from now?

Jim - I think the big game changer is that we will be thinking of software applications in a radically different way. In Anderson's Long Tail model, there is a market of very specialized applications in the tail that have been difficult to create and market because the traditional costs have been too high. But with cloud computing providing development and hosting capability, and app stores and internet marketplaces providing distribution, the costs go way down. So we will see 1-2 person companies developing very unique solutions for mini-markets. So the NEXT BIG THING will be the opposite of what we have seen from vendors like Microsoft, with their broad-functionality, shallow-domain-depth, Swiss Army knife Office products.

John - As a consumer, will I go to a trusted brand like Microsoft and get the special app I want through configuration, OR will I be searching an app store and finding the unique app that fits my needs, developed by a high school student?

Jim - That is the question of the broker. It might provide some life to traditional systems integration firms to assemble the solution for you. It is very difficult to predict the ecosystem of companies that will provide these unique solutions.
More on the 10-year horizon, we will see a lot more metadata and semantics associated with data and application functionality, so that the searching and assembling can be done automatically. Then, based on our query, the system will assemble a one-time, ephemeral application and its data to solve our need.

John - Thank you very much for sharing your insights with us.