Thursday, December 3, 2009

a mobile device dilemma

The exclusive deals that device manufacturers are making with service providers will ultimately cause a family crisis for me.
Today my family of three is using an AT&T unlimited family plan. On that plan we have three devices:

  • Blackberry Bold - me
  • iPhone 3GS - wife
  • iPhone - daughter

I picked the Blackberry last year when my old one died. It was not my ideal choice. I really wanted to jump on the Android bandwagon: I am an avid Google user and wanted a state-of-the-art integration experience. Alas, I did not do so at the time because:
1. AT&T did not have an Android device (I suspect they may never have one).
2. The T-Mobile HTC G1 was not a game-changing device.

However, at that time my wife and daughter were using simpler devices, mainly for voice and texting, and I probably could have migrated the family over to T-Mobile.

BUT since I got my Blackberry (which has OK Google apps), my wife and daughter became iFans. At this point, I would have to pry the iPhone from my wife's cold, dead hands.

So you see my dilemma. At some point there will be this wonderful, amazing device (like an HTC HD2 running Android 3.x) on a service provider like Verizon and I will either have to ignore it, convince my family to abandon their iPhones, or split the family unit into separate accounts ($$$).

I had hopes that Google's push for an open network would have worked out. Being able to purchase any device and activate it on any network (radio compatibility assumed) would make this little family crisis in the making disappear.

Wednesday, December 2, 2009

What is the perfect Agile Tool? - It Depends.

Within the Agile/Scrum/XP community there is a love/hate relationship with tools that support the development process. I used to call these Computer Aided Software Engineering (CASE) tools, but that term has fallen out of favor: CASE is more associated with the waterfall world that the Agile Manifesto revolted against. There is one camp of Agilistas that will only use 3x5 index cards posted on a board in a war room. And even when tooling is considered, the complexity of the tool is a major discussion point.
Last night, I attended an agile tools shootout hosted by the aRTP group. We looked at the following tools:
  • Zen
  • PivotalTracker
  • Rally
  • ScrumWorks Pro
  • Microsoft Team System
  • IBM Rational Team Concert
Actually, the demos were given in two separate rooms and I was only able to personally see the Zen, PivotalTracker, ScrumWorks Pro, and Rational demos. From the demos and what I could see from their web sites, I would broadly separate the Microsoft and IBM tools from the rest and put them into the more complex category. However, this is because both vendors are attempting to cover the full development cycle, making sure that detailed design and coding are covered. For example, Team Concert was demoed as an Eclipse plugin with source code control, build management, real-time notifications, and project management features all enabled.
Other tools, such as Zen and PivotalTracker tended to be more like electronic 3x5 cards with electronic boards. They did offer the advantage over a manual system of being able to automatically calculate burn down and other statistics.
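The burn-down statistic these tools compute automatically is simple arithmetic: the ideal line drops linearly from the sprint's total committed work to zero, and the actual line is the work remaining each day. A minimal sketch of the calculation (the sprint length and story-point data are invented for illustration):

```python
# Minimal sketch of a sprint burn-down calculation, as an electronic
# tool might do it. The sprint data below is made-up illustration data.

def burndown(total_points, remaining_by_day):
    """Return (ideal, actual) remaining-work lines for a sprint."""
    days = len(remaining_by_day)
    # Ideal line: total work shrinking linearly to zero over the sprint.
    ideal = [total_points * (1 - day / (days - 1)) for day in range(days)]
    return ideal, list(remaining_by_day)

ideal, actual = burndown(40, [40, 38, 35, 35, 30, 24, 20, 14, 8, 2])
# On any day, actual[i] > ideal[i] means the team is behind plan.
behind = [i for i in range(len(ideal)) if actual[i] > ideal[i]]
```

The same numbers a team would read off a wall chart, but computed for free on every update.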
In the middle of the complexity spectrum were Rally and ScrumWorks Pro because they added more project management features and the ability to integrate with other tools.

So which would I pick? Like any good consultant the answer is "Depends".

It depends on the size and complexity of the organization using the tool.
It depends on the target architecture and technologies used (e.g. one tool I did not classify above is JIRA/Grasshopper, which is specifically used for Ruby development).
It depends on the level of control over tool content needed (e.g. several tools were SaaS, raising security concerns).
It depends on the sophistication of the developers.
It depends on the level of formality required of the process (e.g. if federal certification of the software is required then more traceability and reporting will be needed)
It depends on the risk tolerance of the users (go with a small, flexible, rapidly changing tool/company OR stick with a slower-moving but more stable large vendor).

I did enjoy the exposure to the tools and we will be holding another shootout in the future.

Thursday, November 5, 2009

Virtual Conferences

In a previous post I mentioned attending the Enterprise 2.0 conference being held in Boston, MA... only I was not able to be in Boston... I was sitting in my home office in Cary, NC.

Today, I upped my game and simultaneously attended Enterprise 2.0 from San Francisco, CA and the Internet Summit from Raleigh, NC. Plus I did some billable work for a client and wrote this post. Am I having a productive day? Or am I devolving to the attention span of a squirrel? Anyway, here is how I would compare the two conferences from a virtual attendance perspective:

Enterprise 2.0
This conference offers a quality virtual experience. There are several remote mechanisms offered to gather information:
Twitter - #e2conf, 2719 followers. On a typical full conference day there were 17 tweets from the conference itself; however, approximately 1500 tweets related to the conference occurred in the same time period. For example,

RT @terrigriffith: Linden Labs brilliant. Gifted laser pointers allowed responses to ?s shown on screen
BTW I thought the use of the lasers for audience participation was cool, but they need to give me a virtual laser as well.

Facebook - 881 fans. There are posts for each of the sessions, posts by exhibitors, and posts by participants.
LinkedIn - 50 members, 1 discussion posted. This is a weaker presence for this conference.
Video on Demand - David Berlind hosts demo presentations from the exhibit floor. Each session is about 10 minutes and what I like is that David asks some penetrating questions that expose weaknesses as well as strengths of the vendor's product.
Streaming Video - The A/V team uses multiple camera shots, with some good closeups of the speaker, and interweaves slide images into the stream. Well-managed scheduling.

Conference Blog - The blog is written by approximately 8 people and includes immediate summaries of current conference presentations (e.g. Integrating Google Wave into the Enterprise) but also has entries between conferences (e.g. 10 posts in September).

Internet Summit -
This conference also focuses on the emerging web technologies and business models with content that covers both enterprise as well as consumer use of the web. They support the following virtual attendance tools:

Twitter - #isum09, 777 followers, all conference status posts. There were approximately 225 tweets per hour from individuals at midday during the conference. Lots of commentary on speaker points that correlates directly with the video stream.
Facebook - 155 fans, 22 posts mainly status announcements.
Video on Demand - Replay of the main tent sessions.
Streaming Video - The A/V work is a little rough in a few areas: 1) while listening to Richard Jalichandra of Technorati, I was not able to see the slides he was showing; 2) just before a session was to start, the A/V person turned on the video of the stage, including mikes that picked up some amusing "open mike" comments from the speakers; 3) they have a background music channel that they sometimes forget to turn off when the presentation starts.
However, during one panel session I experienced a first for me... the speaker was referencing a good book "Naked Conversations" by Scoble and Israel... while continuing to listen to the panel, I reached over to my Kindle, found the book online, and purchased / downloaded it.

I enjoyed both conferences but think the better virtual conference goes to Enterprise 2.0 put on by TechWeb.

So what is coming over the horizon? Recently, Ruven Cohen hosted a beta CloudCamp in the Cloud. This was a completely virtual event. A replay is available on UStream.
So why is this not just a webinar? Well, during the unpanel session the participating audience could ask questions, which would be displayed, and then people could raise their hands (virtually) to discuss the question raised.
I think that using a better social networking platform for real-time sharing of data / voice / video will give a better result.
There have also been several virtual conferences held in Second Life. The New Media Consortium (NMC) has thousands of members who participate in virtual conferences. The nice thing about a Second Life conference is the immersive experience... your avatar walks around the conference, talking with speakers and other participants, strolling through an exhibitor floor, and even participating in pre/post conference social events. Unlike the remote participation in conferences like the two discussed above, a Second Life conference gives me the opportunity to actively contribute. After a virtual conference session I can walk up to the presenter, introduce myself, and ask a question. What is lacking is the ultimate realism of being at a real-life conference, or even of watching a presenter over a good video stream. A lot of subtle communication is nonverbal, through facial expressions and hand gestures. Perhaps someday we will have technology that captures real-time video of our faces and renders it onto an avatar.

Given the cost of travel and time away from work, I imagine that more and more conferences will be moving to remote access if not complete virtual participation.

Saturday, August 8, 2009

BarCamp 2009

This Saturday I attended my first BarCamp. The BarCamp RDU 2009 was hosted by Red Hat at their HQ on the NC State University campus in lovely Raleigh. Approximately 200 people attended and from that audience 36 people were able to present on a diverse set of topics.
Similar to the CloudCamp I participated in last fall, the BarCamp is run in an "unconference" format. At the beginning of the day, people who thought they had a good idea would put the name of their topic on a sheet of paper, queue up for the podium, and, when their turn came, pitch the concept to everyone for a minute or two. Then they would paste their paper into an available room/time slot, and after everyone was done pitching, the voting began. People walk up to the wall (see picture) and, if they like a concept, put a mark on its paper. This allows the organizers to shuffle presentations between small and large rooms based on interest, and if no one voted for your topic you can gracefully cancel.

Here are the topics that were presented this year:

  • HTML 5 Discussion
  • Power Present in 15 Minutes
  • Learn how to Juggle
  • Secrets of Effective Nomading
  • Recommender Systems: Lessons Learned
  • Free: Profit Killer, Inevitable, Necessary, or all of the above
  • Static on the Line: How to handle feedback
  • Server side Javascript with Dojo
  • Palm Pre: Development for noobs
  • Potpourri for $500
  • Bughouse (a chess variant with two boards and four people)
  • Polyphasic Sleep Q&A
  • The Intersection of Usability, Accessibility, and SEO
  • Building your A Team
  • Rapid Return on Investment: Achieve 12 month break even using emerging technologies
  • CALEA: Lawful Intercept
  • GeekDads
  • Soft Appliances
  • How to do Social Networking when there is no "Network"
  • What's up with OpenSocial
  • WTF is Biz Dev
  • Intro to jQuery
  • Alternate JVM Language overview
  • Polka! - Triangle Vintage Dance
  • When things go horribly wrong
  • Which Languages and Technologies will be around in 10 years?
  • Productivity of a Submariner
  • Google Wave
  • Managing the performance of servers in a large network... on the cheap
  • Webkit Debugger
  • How Smart Startups Win
  • The Small Business Web
  • Zombie
  • Self Publishing Roundtable
  • Query optimization in PostgreSQL
  • Twitter Roundtable

As you can see this is not your typical technical conference. For example, I was exposed to the community of Polyphasic Sleepers for the first time at this conference.

So here are the talks that I went to:

Free - presented by Martin Smith

Marty led a discussion on the aspects of marketing concerned with niche markets (à la The Long Tail) and with new business models where content/services are offered free to the consumer and revenue is generated via ads or via premium services offered to the free subscribers.

From a long tail perspective we discussed some of the benefits for a business that operates in that market:

Distribution of Risk - Palm is betting the business on the Pre. If it does not succeed, the company will probably not survive. If, instead of a single product, a company were able to offer a large number of products to niche markets, the risk would be distributed. One commenter mentioned that some products (like cell phones or pharma) require large production volumes to offset the development expense.

Marty mentioned the increasing complexity of the Internet and recommended Nonzero by Robert Wright as good background on how our society is evolving to deal with increasing complexity.

Someone else in the room said that the Long Tail principle applied to more than commercial products. She thought that ideas were also finding small niche groups of people. And those people tended to be more passionate about the idea and more likely to take action in the small group.

Rapid Return on Investment - John Baker

That’s right; I was able to get the new material that Chris Hanebeck and I have been working on in front of this audience as a beta test of concepts.

The basic premise that Chris and I have is that one can find projects in a company that can achieve break-even ROI within twelve months. We use a combination of out-of-the-box thinking, emerging technologies, and discovery of analogous solutions from other industries to achieve the results. To get a copy of the material I presented, go to my website. A few people drifted out of the room during the presentation, and afterwards a participant mentioned to me that the examples I used (RFID in the supply chain) were probably not familiar to the BarCamp audience. I am planning to develop a version that focuses on emerging Internet technologies and will be ready for next year.

What’s up with OpenSocial - Dave Johnson

This one was definitely more technical. Dave presented the basics of OpenSocial and the progress some companies like LinkedIn, Google, Ning, and Yahoo are making using the standard to share data and gadgets associated with social networking. The official site has a wealth of information. And Dave has his own personal blog where he covers OpenSocial and other efforts like BarCamp.

With so many companies investing in OpenSocial, it would seem that current problems (e.g. poor security) will be solved.

What Languages and Technologies will be around in 10 Years? - Jeff Terrell

Jeff is graduating from UNC Chapel Hill and wanted to speculate with the audience on what languages/technologies it might make sense to invest time in learning. For example, will Ruby on Rails be around for a long time?

This led to a diverse discussion on a wide range of topics:

  • The language/technology will depend on the solution being developed. COBOL is still being maintained on mainframes in banks while C is common on embedded systems.
  • The "browser" based interface is likely to continue to grow in its ability to support more and more applications.
  • The browser-based rendering engine is complemented by the continued penetration of always-available high-bandwidth wireless networks.
  • The current keyboard I/O may be replaced by gestures or by voice recognition.
  • Augmented Reality will become more common (see Layar).

Google Wave - Joe Gregorio

Joe demoed Google Wave, using a couple of other members of the audience to mutate the wavelets being created. He also showed how Robots work (very cool implications) and how Gadgets (using a semi-OpenSocial structure) can be dropped into a wave.

Google wants Wave to be a true replacement for email (and I suspect a lot more) and therefore is opening up the control of its future to Open Source.

The audience (this was the most popular session I attended during the day) asked a ton of questions. For example, how will Wave work for a user on an airplane (assuming they are disconnected) who continues for four hours to mutate the wave they left the ground with? What happens when they land and resync? Joe explained that the Operational Transforms would be processed and that the wave would be left in a correct state for all users. Joe said Google does not promise that the results are meaningful, just consistent. So some common sense will need to be applied to the approach. One idea I have for that: since most documents created by a team effort have a division of labor, it might make sense to add an optional check-in/check-out protocol. Just before leaving on my trip, I check out Chapter 4 in the wave, and when somebody else tries to touch it, that person is told to wait until the material is checked back in for common use.
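That optional check-in/check-out idea can be sketched as a simple advisory lock per section of the shared document. This is purely illustrative (the class, section names, and users are all invented) and none of it is Wave's actual API:

```python
# Hypothetical sketch of the optional check-out protocol described
# above: an advisory lock per section of a shared document.
# Nothing here is Google Wave's real API.

class SectionLocks:
    def __init__(self):
        self._owners = {}  # section name -> user currently holding the lock

    def check_out(self, section, user):
        """Claim a section before going offline; fails if someone else holds it."""
        owner = self._owners.get(section)
        if owner is not None and owner != user:
            return False  # told to wait until it is checked back in
        self._owners[section] = user
        return True

    def check_in(self, section, user):
        """Release a section for common use, e.g. after landing and resyncing."""
        if self._owners.get(section) == user:
            del self._owners[section]

locks = SectionLocks()
assert locks.check_out("Chapter 4", "john")      # claimed before the flight
assert not locks.check_out("Chapter 4", "mary")  # mary is told to wait
locks.check_in("Chapter 4", "john")
assert locks.check_out("Chapter 4", "mary")      # free again after check-in
```

The lock is advisory, not enforced by the transform engine, which keeps Wave's consistency guarantee intact while adding the "common sense" layer on top.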

Google Wave is really cool.

So my Saturday at BarCamp was well worth the time and if you have not experienced one yourself I encourage you to jump on the web and see if one will be happening in your location soon.

Wednesday, August 5, 2009

Learning from Mistakes

I attended a brown bag conference call hosted by the Industrial Research Institute on the topic of "Learn from New Product Failures". Our speaker was Jim Hlavacek, one of the authors of the paper by that name published in Research-Technology Management. They have experience reviewing failed projects at manufacturing companies, based on techniques used in the medical profession: in many teaching hospitals, when there is an adverse patient outcome, the clinical team undergoes a Mortality and Morbidity Conference, where the course of treatment is reviewed by an objective team and mistakes are identified.
Hlavacek reported experiences at companies like Intuit, Toyota, and 3M where a regular process of review is built into the engineering/marketing culture.
Under Hlavacek's approach, the Failed Product Review (FPR) would consist of the following:

Name of the failed venture
Dates project began and was terminated or shelved
New Venture leader and cross-functional team members
Objective and qualified principal investigators
Face-to-face interviews with people who were participants on the project
Face-to-face interviews with OEM and end-use customers who were involved
Face-to-face interviews with distributors/dealers and/or key suppliers
Obtain all e-mails, business plans, documents, trials, and project presentations
Develop timelines and milestones of critical events or decisions
Document the unfavorable outcomes with data
Develop fishbone diagrams for the project and processes
Develop root-cause analysis of the fishbone diagrams
What went well for the project
What went wrong for the project
Lessons learned and corrective actions

This approach to learning from mistakes reminded me of some similar approaches I am familiar with. Those of you with a military background may have participated in an After Action Review. This is used primarily during training exercises to understand what happened, evaluate everyone's performance, and discuss what could be done better. From the US Army manual comes a similar outline for an AAR:

Introduction and rules
Review of objectives and intent
Training objectives
Commander's mission/intent
OPFOR commander's mission/intent
Relevant doctrine, tactics, techniques, and procedures

Summary of recent events (what happened)
Discussion of key issues

Chronological order of events
Battlefield operating system
Key events/themes/issues
Discussion of optional issues

Soldier/Leader skills
Tasks to sustain/improve

Discussion of force protection (safety)

Closing Comments

The final example is from my experience as a Certified Scrum Master. Under SCRUM, a small team produces a product release using a succession of short, time-boxed miniprojects called Sprints. Sprints typically take anywhere from 2-4 weeks for the team to produce a functional version of the product. After every Sprint the team gets together for a Retrospective Meeting. In this meeting the team discusses the following:

What worked well last Sprint that we should continue doing?
The practices that worked well during the previous Sprint should be identified and continued in the coming Sprint.

What didn’t work well last Sprint that we should stop doing?
The team or customers should identify practices that worked against the team during the last Sprint and focus on stopping those things during the next Sprint.

What should we start doing?
The team identifies practices that should be implemented during the coming Sprint that will help them work better together.

Out of this discussion comes a list of actions that the Scrum Master captures and is responsible for implementing during the next Sprint.
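The three retrospective questions map naturally onto a start/stop/continue action list. A minimal sketch of how a tool (or even a spreadsheet macro) might capture that outcome; the function, item texts, and names are invented for illustration:

```python
# Minimal sketch of capturing retrospective outcomes as
# start/stop/continue actions with an owner for follow-up.
# The example items and names are invented.

def run_retrospective(continue_items, stop_items, start_items, scrum_master):
    """Collect the three lists into one action log owned by the Scrum Master."""
    actions = []
    for verb, items in (("continue", continue_items),
                        ("stop", stop_items),
                        ("start", start_items)):
        for item in items:
            actions.append({"action": f"{verb}: {item}", "owner": scrum_master})
    return actions

log = run_retrospective(
    continue_items=["daily stand-up at 9:15"],
    stop_items=["mid-sprint scope changes"],
    start_items=["pairing on tricky stories"],
    scrum_master="Pat",
)
```

Making the Scrum Master the owner of every item mirrors the process above: the team generates the list, but one person is accountable for seeing it implemented in the next Sprint.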

So let's compare/contrast these three approaches:

  1. The FPR is conducted when things go wrong. The AAR and Retrospective occur for all outcomes.
  2. Both the FPR and AAR require the participation of one or more objective reviewers. The Retrospective depends on the team and, sometimes, invited guests.
  3. The FPR and AAR both use detailed analysis to find root causes. The Retrospective is more ad hoc.
  4. The AAR assumes that participants will change behavior based on issues being surfaced; the FPR has a deliverable of lessons learned but no clear followup for change; and the Retrospective has a set of actions, with the Scrum Master responsible for seeing they are implemented immediately.

In my opinion, the key to making any of these techniques work is a culture of trust surrounding the proceedings, where everyone understands that people make mistakes and that, for the majority of participants, mistakes that are surfaced will not be used to punish them. I say majority because, if the same individual is constantly exposed as making repeated mistakes and not correcting behavior, it may result in termination.
Over my career, I have participated in and led a lot of project/product reviews. Almost always, people are defensive and guarded about what happened. To set the right tone in establishing a review program, executives should lead by example: being willing to have their own actions reviewed, and demonstrating to their subordinates that surfaced mistakes are not used to influence annual appraisals.

Friday, July 10, 2009

Pampered Pooch 2

I am a sucker for a "59 Minute Scrum" hosted by Bob Galen. I had participated in one last November as documented in this Blog. Last night the IIBA hosted Bob and I was again a member of a six person team producing a brochure for the Pampered Pooch Day Care. I wanted to get another feel for the team dynamics during the sprints and to compare the results with the last session.

Here are my observations:

1. While the aRTP session was made up mainly of programmers and the IIBA session of business analysts (duh), there was little difference in the results of producing a brochure. I guess if Bob ran a session at the Pet Care Services Association the outcome might be different.

2. Before we started the Day 2 Sprint, Bob pulled the four Scrum Masters aside and told two of them to go back and emphasize the quality and completeness of the brochure, and told the remaining two Scrum Masters to tell their teams to push for as much content as possible in the time remaining. The results were telling. The two teams pushing for quantity delivered 13 and 8 user stories respectively, while the two focused on quality delivered 6 and 4 user stories. So teams will listen to the direction of the Scrum Master. Ultimately, to have a released product, both the quantity and the quality need to be good enough. So is it better to get lots of 60%-quality content in early sprints and then tighten it up all at once towards the end of the iteration? Or do you push for 80%-quality content and achieve less content per sprint? A real trade-off that the team needs to decide based on the coupling/cohesion of the user stories. If the stories have few dependencies, push for higher quality per sprint. With lots of dependencies, you need the total content present to debug and refactor.

Wednesday, June 24, 2009

Enterprise 2.0 Conference

I had some local commitments, so I could not fly up to Boston again (see earlier post, Johnny goes to Harvard) and thought I would miss the Enterprise 2.0 Conference. Bummer. Little did I know that these people are trying to practice what they preach. I joined the Twitter stream #e2conf and am getting real-time tweets from all over the conference. Also there is a blog where I can read material from most of the presenters and see the opinions of other attendees.
Finally, there is an e2TV video stream of the general sessions and vendor demos.
This morning I participated in the Launchpad contest. Four vendors who were finalists got a chance to do a quick demo for the audience and then the audience voted for a winner via SMS.
The four finalists were:

Bantam Networks - they have an enterprise project workspace that allows teammates to communicate, share info, and manage relationships.

Brainpark - they also create a social network for projects. The difference seems to be an engine that suggests to the user people whose skills might help, docs with info that might help, and feeds/links with info that might help.

Manymoon - another social network for developers. This one has a good integration with Google apps and with external participants.

YouCalc - this is a different one... an analytics environment that can pull info from a lot of different sources and present graphical analysis. It is also a product based on "wikinomics": the apps are created by the user community, and if you create an app it must be made publicly available for others to use and modify. The data that is analyzed remains private.

So the voting took place (I thought Manymoon was really good) and the winner was YouCalc with 53% of the vote.
As I finish up this post I am listening to a demo of Lotus Live from IBM. Last night they won the big Buyer's Choice Award for best product of the show. This was a vote by attendees.

This type of virtual conference experience still lacks the level of deal-making/networking that can happen in a f2f environment, but you can't beat the price (FREE), not to mention skipping the airplane and that ride in from Logan airport (ugh).

Thursday, June 18, 2009

It's Raining, It's Pouring

I attended a webinar today where IBM discussed their Cloud Computing initiative, including their "Cloudburst" offering. David Dworkin of the Tivoli business unit took the audience through justifications for going to cloud, including a survey conducted last year of companies that had implemented a cloud application. The top three reasons for going to cloud at these companies were:
  1. Innovation
  2. Time to Profitability
  3. Reduced costs
IBM recommends that companies first move to internal clouds that reside safely inside the corporate firewall but consolidate various departmental applications. They claim such a move will have the following benefits:
  • Can reduce IT Labor costs by 50%
  • Can improve capital utilization by 75%
  • Reduce provisioning cycle times from weeks to minutes
  • Can reduce end user IT support costs by 40%
In my opinion, it seems Amazon EC2 is aimed more at SMB start-ups needing quick roll-out and low up-front costs, while IBM is aiming at Fortune 500 companies with large IT budgets under pressure.

During the webinar the host asked the audience (I did not see how many were attending) a couple of survey questions...

Which best describes your organization's level of adoption of Cloud Computing services?
None, but not evaluating.
None, but currently evaluating one or more services.
Currently getting ready to trial a Cloud Computing service.
Limited trial adoption of one Cloud Computing service.
Currently running one or more crucial set of business tasks through the Cloud.

David thought this was a little surprising compared with survey results IBM sponsored last year. He speculated that companies may be using clouds without being aware of it.

What are the biggest reasons your organization has yet to migrate any services off to the Cloud?
Concerns over security
Need to "own" and manage the data center
Regulatory obstacles
Management does not see the potential for quick ROI
No skepticism, just looking for the right solution

Of course, remember that this population had already self-selected as having enough interest in cloud to invest time in attending the webinar, so these answers do not represent the general market.

Friday, June 5, 2009

Web2.0 revisited

On June 4th I presented my Web2.0 briefing to 21 participants representing 17 companies at an event hosted by Matrix Resources. Matrix offers briefings to its customers as a complimentary service, and from the comments I heard as people were settling in for my talk, it seemed to be a much-appreciated program.
During the event I captured some informal statistics from the audience on specific Web2.0 usage patterns.

As a percentage of participants how many:

Use Wikipedia?- 100% as reader. 0% as author.
Author a blog? - 0%
Use Digg? - 10%
Have a LinkedIn Account? - 76%
Have a Facebook Account? - 76%
Have used Craigslist? - 76%
Participate in Second Life? - 0%
Use Twitter? - 24%
Use Ajax to develop apps? - 15%

So what does this mean? Like most statistics from small samples... not much. But I like to ask people and see if any trends are appearing that differ from the official surveys.

Also during the briefing we had a lot of discussion about how companies developing web2.0 apps are realizing revenue. The table below is my analysis of some of the more popular companies I mentioned in the talk.

I have added this table to my Web2.0 presentation page 32. In general I found that the strategy for most of these companies is to give the functionality away for free and rapidly grow a large user base. As the application matures and more users are locked in they obtain revenue streams through a combination of advertising and premium services.

Tuesday, April 28, 2009

Johnny goes to Harvard

Last weekend I attended the Deep Agile 2009 conference at Harvard University. This two-day conference was put on by Agile Bazaar, an ACM chapter dedicated to the improvement of all things agile. Approximately 90 attendees participated, and I thought this was a very well run production. Kudos to the Agile Bazaar volunteers and especially Nancy Van Schooenderwoert, who chaired the program.
The links above give the highlights from the events and I only want to add my own observations:

Jack Ganssle represented the non-agile development community and gave several presentations on his approach to embedded systems development. He favors object oriented development and follows Bertrand Meyer's Design by Contract process.

James Grenning was one of the original signers of the Agile Manifesto. He says he attended that event for the skiing, but I suspect he had more involvement... He is a strong advocate of Test Driven Development, Pair Programming, and SCRUM. He showed all these elements to the audience, but in some cases they were the regular versions without a significant embedded twist.

Russell Hill is a development manager at Key Technologies. He is an advocate of Test Driven Development for embedded and, throughout his presentations, talked about his experiences at Key building a reusable set of boards for the various products Key manufactures. This system includes hard real-time behavior developed for an FPGA connected to a Motorola microprocessor handling the UI and control logic. He brought a valuable perspective to the conference on what is achievable for embedded development using agile.

Here are my notes from a panel session:

In practice, how can a HW-based system be delivered incrementally?
James - I don't do a lot of HW dev BUT "is it working?" is a good test.
Russell - Our HW is based on FPGAs and has some flexibility. But boards would be developed incrementally.
Jack - There is a lot of religion in agile.... HW tends to be late and broken.... and we don't anticipate that for SW

Do you expect to modify an embedded system on a 2-3 week sprint cycle?
James - would like visible progress without necessarily being deliverable
Russell - Key Technology can turn around a HW change very rapidly (2 days!)

When an embedded system is used for a consumer product who is the product owner?
Russell - We have thousands of customers; the Marketing dept and Field Services represent the interests of the customer. Nancy - Any problems with that? Russell - Sometimes. Only in the last couple of years has marketing been a strong participant.
James - this is an organizational problem (of getting participation)

How to reconcile "HW requires long lead times" vs "Agile is incremental" ?
Russell - The board was not available for a couple of years. SW was written to spec and we were able to deliver 2 weeks after the HW release.
Jack - Up front commitment to HW features and to the schedule is different for embedded.
James - At the beginning of an agile project one needs a vision of both HW and SW.

How do you handle schedule constraints?
James - in agile nothing is negotiable until it is late.
Russell - When we are late our internal customers know about it real soon. Visibility to decision makers is important.
Jack - This is not unique to agile. Quality, Schedule, Features pick any two. Jack thinks features should be unconstrained while keeping Quality and Schedule fixed.

Any experience including HW engineers in the SCRUM?
Russell - interested but not active daily
James - integrate early and often. Some teams use HW engineers as customer. One client in Finland does overnight board turns.

For an embedded system, can a product backlog include User Stories that target either HW or SW? If so, what does a HW User Story look like?
Russell - We have not done that.
James - if you try it write a paper.
Jack - I think it would be very difficult.

I am concerned about the lack of upfront design. What if a User Story requires a big change?
Russell - we just experienced this with a project that required a substantial redesign. The requirement was given a year ago on an eight-year project.

In classic agile the key to development is decomposition of the epic story into slices. How do you decompose in embedded?
Russell - Frankly, we struggle getting our HW engineers to think agile, but it is getting better. We had one example where a story to eliminate a spurious image required changes to the sensor, the HW platform, and the SW.

What role if any does architecture have in agile?
James - we work on it every day. Every time a sprint is completed there is an architecture that supports SW completed to that date.
Russell - Some of our architecture is harder to change.
Jack - Architecture and Design in the embedded world are a lot less malleable. Up-front design is very important.
Nancy - Some companies have a culture of detail which is driven by politics.

My question was the one about HW User Stories... I am still looking for a way to weave EE participation into a high-tempo delivery using agile approaches. I will take James up on his suggestion to write a paper when I have it all figured out.

Tuesday, March 3, 2009

mobile check-in

As a road warrior I am always interested in anything that will improve my experience at the airport. I recently read that Delta Air Lines and the TSA have teamed up to trial a paperless boarding pass at the Memphis airport. The traveler can download an electronic boarding pass that is displayed on the device. The TSA uses a 2D bar code scanner to verify the boarding pass, and Delta uses its current gate scanner. As a time saver, the time at security and at the gate is not reduced; however, I save the time spent in line at a kiosk picking up a boarding pass.
Getting the TSA and Delta personnel familiar with the mobile device is a good step toward what I hope will be the next big change... Near Field Communication. Instead of an optical scan, an NFC-enabled device can be waved over a reader and convey the same information as the 2D bar code. So why is that an improvement?
1. In an NFC-enabled device the eBoarding Pass is kept as encrypted content on a separate chip that is more secure than the device's general memory. Hackers can attack a mobile device in several ways and could steal or corrupt the information. NFC devices resist unauthorized access.
2. NFC will be used primarily as an electronic replacement for credit/debit cards. The tap-and-go, ISO 14443-based interaction will become second nature to users of the device.
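Whether the pass travels as a 2D bar code or over NFC, the core idea is the same: the payload is a short string the airline has cryptographically signed, so a scanner can detect tampering without calling home. Here is a minimal sketch of that idea using an HMAC over a made-up field layout; the passenger name, flight, key, and format are all hypothetical, not Delta's or the TSA's actual scheme (real mobile passes use the IATA BCBP format with its own security data).

```python
import hmac
import hashlib

# Hypothetical issuing key held by the airline; illustrative only.
SECRET = b"airline-issuing-key"

def issue_pass(passenger: str, flight: str, seat: str) -> str:
    """Encode the pass fields and append an HMAC tag so a scanner
    can detect any tampering with the payload."""
    payload = f"{passenger}|{flight}|{seat}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_pass(token: str) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, _, tag = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

token = issue_pass("DOE/JOHN", "DL1234", "14C")
print(verify_pass(token))   # unmodified pass verifies

tampered = token.replace("14C", "1A")  # change the seat, keep the old tag
print(verify_pass(tampered))           # verification fails
```

The NFC secure element improves on this by keeping the key material and payload in tamper-resistant hardware rather than in general device storage.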

Friday, January 23, 2009

SES becomes eTechSuccess

When I restarted Software Engineering Strategies last year I had planned to incorporate as an LLC at the beginning of 2009. I submitted all my paperwork and was then told by the North Carolina Secretary of State that the word “engineering” is reserved for companies licensed by the North Carolina Board of Examiners for Engineers and Surveyors, which in turn requires the company to be two-thirds owned by certified professional engineers. My options were to request a letter of non-objection to using the E word, to become certified and then licensed, or to change the identity of my company. I chose the latter.

So what to name my new and improved company? I had not been 100% satisfied with the SES name because I was doing a lot more than offering strategies on software engineering. So I went back to my web site and read the opening sentence: “My passion has been the application of emerging technologies to successfully solve real world business problems.” The first rename was to call it Emerging Technology Services, but while that LLC name was not in use in NC, the domain names were hard to come by. Finding a good domain name these days is a struggle. So the next variation was Emerging Technology Success, because I thought a key experience I had was the successful application of an emerging technology. The statistics I quoted on my website are: “Over a period of nine years while I was the practice executive we had approximately 2000 engagements with customers. These could range from small two day workshops up to multi-year development projects. Of these 2000 engagements approximately 100 were considered ‘Troubled’. This meant that the project had slipped and the contract profitability was at risk and/or customer satisfaction was bad. Of these 100 Troubled projects all were eventually resolved. We never had a contract canceled due to our failure to perform.”

So a key idea was taking an inherently risky technology and being able to successfully deliver an application that provided business value. So Emerging Technology Success it was. But the words just did not trip off the tongue, and I imagined people trying to type it, so I contracted it to eTechSuccess, which sounds pretty cool. I also added the little wave logo to the business card and website to symbolize waves of technology. It reminded me of the times when I was a young boy on vacation in Florida. We stayed on the Atlantic side and I really liked playing in the surf. But when the surf was high it felt like the waves would keep crashing on me, and I would barely recover from one before the next would try to topple me over. Eventually I learned a strategy for coping with the surf, just as over many years of working with emerging technologies I have learned how to successfully ride each new wave of technology.

Tuesday, January 13, 2009

Failure to Launch

When I attended Cloud Camp last November I ran a session called "Failure to Launch" in which members of the audience talked about their early experiences with Cloud Computing. I was looking for lessons learned and possibly some unique risk factors associated with Cloud Computing. What I heard was that projects in the cloud are influenced by factors common to most other emerging technology projects. The best example was given by Uri Budnik of RightScale. He had to keep the customer anonymous but did share the following:

Name of Project - Planned Major News Event for major news media 

Project Dates - Project began three weeks before hard news deadline

What happened? - The system was unacceptably slow in early versions and could not be improved. Some of the content would not load. The customer introduced a last-minute architectural change the morning of the event that required a rollback in order to launch.

Lessons Learned - Need more thorough testing.

I heard some similar profiles from other participants about classic software engineering problems...  scope creep, lack of testing, lack of communication with stakeholders, unrealistic schedule expectations.