Sunday, January 31, 2010

Design for Test and Reliability

Design for Test (DFT) is standard practice in the microcontroller industry. It is discussed in the context of software engineering as well, but it is absolutely essential to the makers of large-scale microprocessors. The key concept is that the programming interface to a CPU does not allow access to many of the internals of the chip, so additional logic is added to the hardware to allow test and diagnosis of the device outside the scope of normal use. DFT is necessary because even a minor flaw can otherwise prevent a device from running any of its diagnostic self-tests.

Some years ago I had the opportunity to apply this principle to software architecture.
The application was a data abstraction layer in a multi-tier Windows application. The app used a relational database for persistence and presented the user with a set of objects. The layers in the design were as follows:
  • GUI
  • Business logic
  • Object model
  • Abstraction
  • ODBC
  • Database
The use of ODBC allowed a choice of databases with minimal configuration in the front end.
The part that I designed and built was the Abstraction layer, and my goal was to make it as generic as possible. To accomplish this I put metadata in the database that defined the object model. The code in the Abstraction layer knew almost nothing about the business objects or their relationships to each other. It knew only that there was such a thing as an object, that an object had properties, that objects could have relationships and dependencies, and what the property types could be. All the other details were stored as data in the database, matching the expectations of the business logic. This allowed us to change the front end and the object model with little or no impact on the design or implementation of the Abstraction layer.
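A metadata-driven object model of this kind can be sketched in a few lines. This is a minimal illustration in Python, not code from the original system (which was a Windows/ODBC application); the type names, property names, and dict-based "tables" are all invented stand-ins for the database metadata.

```python
# Metadata standing in for database tables: object types, their
# typed properties, and relationships between types. All names here
# are invented for illustration.
OBJECT_TYPES = {
    "Customer": {"name": "string", "credit_limit": "decimal"},
    "Order":    {"order_date": "date", "total": "decimal"},
}
RELATIONSHIPS = {
    ("Customer", "Order"): "one-to-many",
}

class GenericObject:
    """Knows only that objects have typed properties; the specific
    business model comes entirely from the metadata."""
    def __init__(self, type_name):
        if type_name not in OBJECT_TYPES:
            raise ValueError(f"unknown object type: {type_name}")
        self.type_name = type_name
        self.properties = {}

    def set(self, prop, value):
        # Validate against the metadata, not hard-coded knowledge
        # of any particular business object.
        if prop not in OBJECT_TYPES[self.type_name]:
            raise ValueError(f"{self.type_name} has no property {prop!r}")
        self.properties[prop] = value

c = GenericObject("Customer")
c.set("name", "Acme")
print(c.properties)  # {'name': 'Acme'}
```

Changing the business model then means changing rows of metadata, not the code of the abstraction layer.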

One way to apply the DFT principle to software is to add code that performs some sort of internal self-test or consistency check. In this case, we didn't add any code that wasn't used in normal operation. In fact, we went in the other direction and stripped out any code we didn't absolutely need for the Abstraction layer to run. This led to an implementation that was very generic and concise. Even the definition of the generic data object was in the database.
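The self-test style mentioned at the start of this paragraph (the approach we chose *not* to take) might look like the following sketch. It is a generic illustration, not code from this project: a structure keeps a redundant running total purely so a diagnostic routine can check it against the primary data.

```python
# Generic illustration of the "self-test" style of DFT in software:
# a structure that can verify its own invariants on demand.

class Inventory:
    def __init__(self):
        self.items = {}   # name -> quantity (the primary data)
        self.total = 0    # redundant running total, kept for checking

    def add(self, name, qty):
        self.items[name] = self.items.get(name, 0) + qty
        self.total += qty

    def self_test(self):
        # Consistency check used only for diagnosis, never in the
        # normal data path -- the software analogue of DFT logic.
        assert self.total == sum(self.items.values()), "totals diverged"
        assert all(q >= 0 for q in self.items.values()), "negative quantity"
        return True

inv = Inventory()
inv.add("widget", 5)
print(inv.self_test())  # True
```

The extra check code exists only to be exercised outside normal use, exactly the property the chip designers build into hardware.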
Obviously some bootstrap code was needed to make this work. In the revised and rewritten code of the second release we took out the initial bootstrap to improve performance.
The effect on test was simply that no unit test was needed for this part of the implementation: it either worked or it didn't. Like FORTH, which is built on a tiny interpreter, the system started from a very compact core and was built out from there.

Sunday, January 24, 2010

Velocity and Scale

These two concepts are central to my approach. Understand and master them, and your great idea will have no limits. Here's how it works.

Velocity refers to the speed at which a new idea can move from the real world (a designer or a customer) through the development process and back out to the customer in a new version of your product. Your process needs to do two things:
  1. optimize resources by managing the velocity of each idea, and
  2. track the progress of each idea to measure velocity.
You cannot control what you do not measure, at least in science.

Scale refers to the way systems change as they grow. An example from biology is how organisms are built at different sizes: exoskeletons don't work above a certain size, and elephants have proportionally massive legs while spiders manage on slender ones, because weight grows with volume while the strength of a limb grows only with its cross-sectional area. The rules of physics determine what is possible at each geometric scale, and surface-to-volume ratios constrain the shape of living things.

Your product is a living thing. It has a complex metabolism that will not scale well under the conditions of stress caused by your success and growth. You might be lucky, or very smart, but why not be systematic and consider the effects of scale on the velocity of the parts of your process?

This blog will continue to explore these ideas. The problems of scale will be the subject of a later article.

To measure velocity, we need to set up checkpoints and we need to mark the ideas so that we can track them through the process. If we rely on rough time-to-market measures and just look at the endpoints of the process, we miss the chance to find choke points along the way. We need a way to break the development process into manageable steps, and that is what the earlier posts in this blog have started to explore.
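The checkpoint idea above can be sketched concretely: record a timestamp each time a marked idea passes a checkpoint, then compute the time spent in each stage rather than only end to end. This is an illustrative sketch; the stage names, idea identifier, and in-memory layout are invented, and a real system would persist these records in a database.

```python
# Sketch of tracking ideas through process checkpoints to measure
# velocity. Stage names and data layout are invented for illustration.
from datetime import datetime, timedelta

checkpoints = {}  # idea id -> list of (stage, timestamp)

def mark(idea_id, stage, when):
    """Record that an idea passed a checkpoint at a given time."""
    checkpoints.setdefault(idea_id, []).append((stage, when))

def stage_durations(idea_id):
    """Time spent between consecutive checkpoints -- the per-stage
    numbers that end-to-end time-to-market measures hide."""
    stamps = checkpoints[idea_id]
    return {f"{a}->{b}": t2 - t1
            for (a, t1), (b, t2) in zip(stamps, stamps[1:])}

t0 = datetime(2010, 1, 4)
mark("idea-42", "proposed", t0)
mark("idea-42", "designed", t0 + timedelta(days=3))
mark("idea-42", "shipped",  t0 + timedelta(days=17))
for stage, dt in stage_durations("idea-42").items():
    print(stage, dt.days, "days")
```

Here the design-to-ship stage dominates, which is exactly the kind of choke point an endpoint-only measure would miss.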

Once we can measure velocity we can also start to control it. It is not wise to move as fast as possible at all times. In the business world there is always pressure to produce results but there is also a trade-off between speed and quality. To improve quality we must be prepared to control the velocity of each step of the process.

The key to tracking ideas through the development process is to identify the artifacts that represent the ideas at each step. The artifacts are almost always documents of some kind. Only in the world outside can ideas be free-form. The type of artifact used to represent an idea will vary for each step and for each business context. For a very small development group, a single document may hold many ideas and a scrap of paper may contain a valuable idea. For a very large organization, a change management database is essential and provides great efficiency and flexibility at high cost.