Tuesday, February 9, 2010

Simple taxonomy of electronic media

With three permissions we can characterize eight forms of communication.
It is assumed that the user can read all messages. The other permissions are:
  • user can write x--
  • world can read -x-
  • world can write --x
000 is a private read-only file
100 is a private writable file
010 is a public read-only file
110 is a write-once blog like Twitter
001 is an email inbox where the user can't edit messages
101 is an email inbox where the user can edit
011 is ???
111 is a Wiki

Google Buzz is like a Wiki.
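The whole taxonomy fits in a small lookup table. Here is a minimal Python sketch using the labels from the list above; the bit order is (user can write, world can read, world can write), and the user is always assumed to be able to read.

  # The taxonomy above as a lookup table. Bit order is
  # (user can write, world can read, world can write); the user can always read.
  TAXONOMY = {
      (0, 0, 0): "private read-only file",
      (1, 0, 0): "private writable file",
      (0, 1, 0): "public read-only file",
      (1, 1, 0): "write-once blog like Twitter",
      (0, 0, 1): "email inbox where the user can't edit messages",
      (1, 0, 1): "email inbox where the user can edit",
      (0, 1, 1): "???",
      (1, 1, 1): "a Wiki",
  }

  def classify(user_writes, world_reads, world_writes):
      """Return the form of communication for a permission triple."""
      return TAXONOMY[(int(user_writes), int(world_reads), int(world_writes))]

  print(classify(True, True, True))   # -> a Wiki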

Thursday, February 4, 2010

Translation

The work I did at IBM leading and guiding the development process has taught me much that can be applied beyond the field of software. Every time I talk to another engineer, I find that we talk about problems and processes in more or less the same language. When I talk to a writer or a teacher and explain my methodology in ways that are conventional for my field, I find that the terms and concepts still make sense in those fields, although the language may be different.

Tuesday, February 2, 2010

SDLC Best Practices for Startups

In my work as a development lead at IBM Rational I learned about the software development life cycle and the best practices as they were taught by Rational and used by our organization. This blog is an attempt to apply those concepts to the management of software development in a small company.

I recently approached a local startup with the idea. I proposed to teach them the principles of SDLC management. What I wanted to learn was how these ideas would affect their process and how this in turn would affect their growth. My theory is that the growth of a development group is inhibited from time to time by the way it is organized. Growth tends to level off (temporarily) as the group reorganizes to meet new challenges. What would happen if the structures and practices necessary for a large group were already in place before the group grew large? Would their growth curve be smoother? Would the group produce more in three years than a group that grew organically?
The way to find out is to do the experiment. The control groups are everywhere. What I want is a company that is willing to take on the overhead of more "process" as an investment toward a long-term result.
I found this to be very difficult to sell.

I had more success with a friend who wants to start an e-learning web site. She is a teacher and has a curriculum that she believes has a market. Her challenge at this point is to enlist technical help from contract developers and make the best use of the contractors' abilities. What I did with her was to go over the ideas captured elsewhere in this blog and teach her the terminology she needed to communicate effectively with her technical staff.
She found it easy to understand what I described but the idiom was somewhat new for her. What we were able to do was to translate her knowledge of what she wanted into language that a developer would understand. This made her interviews for new technical help much easier. Here's what she wrote:
"... just had a meeting with a potential technical contractor and I felt very confident in talking with him about what I need and how I’d want the process of working together to go – he actually congratulated and thanked me for knowing that they would like a document with my requirements mapped out on it and he was impressed that I understood that process."

Now I don't have any influence on how her contractors will do their work, but I have been able to help her start off on a solid footing as she learns how to manage a development effort.

Sunday, January 31, 2010

Design for Test and Reliability

Design for Test is part of standard practice in the microcontroller industry. It is discussed in the context of software engineering as well, but it is absolutely essential to the makers of large-scale microprocessors. The key concept is that the programming interface to a CPU does not allow access to many of the internals of the chip, so additional logic is added to the hardware to allow test and diagnosis of the device outside the scope of normal use. DFT is necessary because even a minor flaw can prevent the device from executing any of its normal diagnostic self-test operations.

Some years ago I had the opportunity to apply this principle to software architecture.
The application was a data abstraction layer in a multi-tier Windows application. The app used a relational database for persistence and presented the user with a set of objects. The layers in the design were as follows:
  • GUI
  • Business logic
  • Object model
  • Abstraction
  • ODBC
  • Database
The use of ODBC allowed a choice of databases with minimal configuration in the front end.
The part that I designed and built was the Abstraction layer, and my goal was to make it as generic as possible. To accomplish this I put metadata in the database that defined the object model. The code in the Abstraction layer knew almost nothing about the business objects and their relationships to each other. It knew that there was such a thing as an object, that an object had properties, that objects could have relationships and dependencies, and what the property types could be. At first, all the other details were held as data in the database, matching the expectations of the business logic. This allowed us to change the front end and the object model with little or no impact on the design or implementation of the Abstraction layer.
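Here is a rough sketch of what such a metadata-driven layer can look like, written in Python with sqlite3 purely for illustration; the original implementation sat on ODBC and was not Python, and the meta_object and meta_property table names are invented for the example.

  import sqlite3

  class AbstractionLayer:
      def __init__(self, conn):
          self.conn = conn
          self.model = self._load_model()

      def _load_model(self):
          """Bootstrap: read the object model itself out of the database."""
          model = {}
          for obj_id, name in self.conn.execute("SELECT id, name FROM meta_object"):
              props = [r[0] for r in self.conn.execute(
                  "SELECT name FROM meta_property WHERE object_id = ?", (obj_id,))]
              model[name] = props
          return model

      def load(self, object_type, key):
          """Fetch one instance as a plain dict of property name -> value."""
          props = self.model[object_type]
          row = self.conn.execute(
              "SELECT %s FROM %s WHERE id = ?" % (", ".join(props), object_type),
              (key,)).fetchone()
          return dict(zip(props, row))

  # Tiny in-memory demo: the layer learns about "Customer" only from metadata.
  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE meta_object (id INTEGER, name TEXT);
      CREATE TABLE meta_property (object_id INTEGER, name TEXT);
      CREATE TABLE Customer (id INTEGER, name TEXT, city TEXT);
      INSERT INTO meta_object VALUES (1, 'Customer');
      INSERT INTO meta_property VALUES (1, 'name'), (1, 'city');
      INSERT INTO Customer VALUES (7, 'Acme Corp', 'Boston');
  """)
  layer = AbstractionLayer(conn)
  print(layer.load("Customer", 7))   # {'name': 'Acme Corp', 'city': 'Boston'}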

One way to apply the DFT principle to software is to add code that does some sort of internal self-test or consistency check. In this case, we didn't add any code that wasn't used in normal operation. In fact, we went in the other direction and stripped out any code we didn't absolutely need to get the Abstraction function to run. This led to an implementation that was very generic and concise. Even the definition of the generic data object was in the database.
Obviously some bootstrap code was needed to make this work. And in the revised and rewritten code in the second release we took out the initial bootstrap to improve performance.
The effect on test was simply that no unit test was needed for this part of the implementation. It either worked or it didn't. Like the FORTH language, which has a tiny interpreter, we started with a very compact core and built out from there.

Sunday, January 24, 2010

Velocity and Scale

These two concepts are central to my approach. If these concepts are understood and mastered, your great idea will have no limits. Here's how it works.

Velocity refers to the speed at which a new idea can move from the real world (a designer or customer) through the development process and back out to the customer in a new version of your product. Your process needs to do two things:
  1. optimize resources by managing the velocity of each idea, and
  2. track the progress of each idea to measure velocity.
You cannot control what you do not measure, at least in science.

Scale refers to the way systems change as they grow. An example from biology is how organisms are built at different sizes. Exoskeletons don't work above a certain size: elephants have huge legs while spiders don't, etc. The rules of physics determine what is possible at every geometric scale. Surface-to-volume ratios constrain the shape of living things.

Your product is a living thing. It has a complex metabolism that will not scale well under the conditions of stress caused by your success and growth. You might be lucky, or very smart, but why not be systematic and consider the effects of scale on the velocity of the parts of your process?

This blog will continue to explore these ideas. The problems of scale will be the subject of a later article.

To measure velocity, we need to set up checkpoints and we need to mark the ideas so that we can track them through the process. If we rely on rough time-to-market measures and just look at the endpoints of the process, we miss the chance to find choke points along the way. We need a way to break the development process into manageable steps, and that is what the earlier posts in this blog have started to explore.
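As a sketch of what checkpoints and marked ideas might look like in practice, here is a hypothetical checkpoint log and a few lines of Python that turn it into per-stage durations; the idea ID, stage names, and dates are invented for illustration.

  from datetime import date

  # Hypothetical checkpoint log: idea id -> ordered list of (stage, date reached).
  checkpoints = {
      "IDEA-42": [
          ("captured",    date(2010, 1, 4)),
          ("specified",   date(2010, 1, 11)),
          ("implemented", date(2010, 1, 25)),
          ("tested",      date(2010, 2, 1)),
          ("released",    date(2010, 2, 8)),
      ],
  }

  def stage_durations(history):
      """Days spent between consecutive checkpoints for one idea."""
      return [(later_stage, (later - earlier).days)
              for (_, earlier), (later_stage, later) in zip(history, history[1:])]

  for idea, history in checkpoints.items():
      print(idea, stage_durations(history))
  # Unusually long durations point at the choke points mentioned above.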

Once we can measure velocity we can also start to control it. It is not wise to move as fast as possible at all times. In the business world there is always pressure to produce results but there is also a trade-off between speed and quality. To improve quality we must be prepared to control the velocity of each step of the process.

The key to tracking ideas through the development process is to identify the artifacts that represent the ideas at each step. The artifacts are almost always documents of some kind. Only in the world outside can ideas be free-form. The type of artifact used to represent an idea will vary for each step and for each business context. For a very small development group, a single document may hold many ideas and a scrap of paper may contain a valuable idea. For a very large organization, a change management database is essential and provides great efficiency and flexibility at high cost.

Monday, December 21, 2009

Requirements

Requirements artifacts are the documents that are created, managed, and published by business analysts or product management to define the scope and content of the development project.

Requirements must be captured and tracked in a version control system just like any other class of project artifact. In addition, specific requirements must be labeled in a way that allows items to be tracked through the entire life cycle: specifications, implementation, test, release, and support.

The granularity or specificity of the labels employed depends on the rigor of the project. A product with very high quality standards, such as a medical application, will have very detailed requirements that can be tracked in precise detail.
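As an illustration of label-based tracking, here is a minimal sketch that assumes each requirement carries a simple ID and each downstream stage records which IDs it covers; the IDs and stage names are hypothetical.

  # Hypothetical requirement IDs and the stages that claim to cover them.
  requirements = {"REQ-101", "REQ-102", "REQ-103"}
  coverage = {
      "specification":  {"REQ-101", "REQ-102", "REQ-103"},
      "implementation": {"REQ-101", "REQ-102"},
      "test":           {"REQ-101"},
  }

  for stage, covered in coverage.items():
      missing = requirements - covered
      if missing:
          print(f"{stage}: not yet traced -> {sorted(missing)}")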

Requirements are often sketched out in natural language and carry informal instructions to the design specification team. A very effective way to organize functional requirements is to use the concepts of User Roles and Scenarios to develop Use cases. This process creates a formal model of the system. The model can then be analyzed to create formal specifications of the components of the system.

An advantage of this approach is that formal requirements and specifications can be created as relatively small modules that can be worked on separately. This allows development and test of the modules or components of the system to proceed in parallel and supports the use of an Agile development process.
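A bare-bones sketch of the Role and Scenario structure behind a Use case might look like the following; the content is invented, but it shows why each use case is a small, self-contained module that can be specified, built, and tested on its own.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class UseCase:
      name: str
      role: str                                           # the User Role that drives it
      scenario: List[str] = field(default_factory=list)   # ordered steps

  checkout = UseCase(
      name="Purchase course",
      role="Student",
      scenario=[
          "Student selects a course",
          "System displays price and terms",
          "Student confirms payment",
          "System grants access to the course material",
      ],
  )
  print(checkout.name, "-", checkout.role, "-", len(checkout.scenario), "steps")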

Friday, December 18, 2009

Customer to Requirements

Customer reports are triaged into categories. Some will be treated as defects in the product.

Defect reports must be prioritized and fed back into the development cycle at the appropriate point. For example, critical defects can be sent directly to development for action, while defects with low impact can be sent to the design specification group for consideration.
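A rough sketch of this triage rule, with category names and the severity threshold chosen purely for illustration, might look like:

  def triage(report):
      """Route a customer report to the next step in the cycle."""
      if report["kind"] == "defect":
          if report["severity"] == "critical":
              return "development"          # critical defects go straight to development
          return "design specification"     # low-impact defects go to the design spec group
      return "product management"           # requests feed the requirements gathering process

  print(triage({"kind": "defect", "severity": "critical"}))  # -> development
  print(triage({"kind": "request", "severity": "low"}))      # -> product management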

Many customer requests are valuable inputs to the product management, analysis, and requirements gathering process.

All customer reports must be captured in a database, typically one that is optimized for the purpose, called a defect tracking system.