#287 from Innovative Leader, Volume 6, Number 7
Mr. Fohl is
president of Technology Integration Group, Inc., 681 South St.,
Carlisle, MA 01741 (phone 508-371-0194).
Let’s say that
you’re a general manager trying to get a new product out.
What you’d like to do is get production started as soon
as possible, coordinate the marketing push and convince the
financial people to free up the money for all of this.
All these steps lend themselves to nice neat charts such as
PERT, except for one ominous block: when will product development be completed?
While some products can be developed by systematically putting together known
components in predictable ways, many cannot.
The development process involves pushing into unknown
technical territory in search of hitherto unavailable performance,
cost and/or reliability. In
other words, some inventing has to be done.
These blocks of time can be large, often running for years.
Often the only source of information about progress toward the performance
goals of development programs is a series of mind-numbing meetings
in which the various protagonists defend their failures to meet
(usually overly optimistic) expectations.
From time to time, a senior R&D manager reminds you
that “you can’t schedule invention,” but you should commit
to building the production facility anyway!
Clearly, it would be a whole lot better if we had a more
objective measure of whether and when the goals of the development
program will be achieved.
We were faced
with this situation regularly at a major lighting company and
decided to see if there weren’t some better tools.
What we felt was needed were indices of the state of the
development process that could be tracked in a straightforward
way, similar to the way in which companies routinely track
financial numbers. These parameters should be simple and easily understandable
by those in production planning, market research, sales, finance
and legal. It’s
essential that they personally feel comfortable with the data.
On the other hand, the parameters must truly reflect the
state of the program’s progress.
Simply tracking statistics such as the number of people on
the job or the number of experiments carried out tells the company
very little. As it
turned out, there was very little in the business literature on
this subject, so we set out on our own.
The first problem
is to decide what you’re going to track. We tried to collect all the data available on ongoing and
past programs. That
brought us immediately to the next problem:
how to get real
data. Aside from the sometimes weak recording and discarding of
data, there was a certain suspicion when we (as “outsiders”)
began to look at development program details.
It’s necessary to build a level of trust.
We found a few
parameters which were good indicators of the state of progress in
the development process. These
parameters were simply the properly
stated objectives of the product development program.
This may sound trivial, but sometimes these objectives
aren’t clear or don’t remain constant.
In order to make progress, performance objectives must be
clear in everyone’s mind. This
focuses all the small, often unrecorded, decisions made and
lessons learned toward the desired result.
At the same time, the degree to which the various critical
technical factors become understood and integrated into the
program is measured by the improvement of these performance parameters.
In our case, typical parameters were the lifetime and light output of the lamps
under development. When
the lamps reached the pilot production stage, we tracked
production parameters such as units per operator hour, or the
percent of lamp starts resulting in satisfactory products.
In some complex projects it was necessary to subdivide them
into programs with single objectives.
An example was an electroluminescent display program in
which one subprogram was aimed at improving pixel brightness;
another was aimed at reducing broken connections.
From the point of
view of outsiders, technical “breakthroughs” seem like flashes
of inspiration occurring nearly instantaneously.
When one is on the inside, the trees tend to obscure the
forest. It was, then,
with some surprise that we noticed that progress toward the goals
was quite systematic in most cases.
Moreover, when we
looked closer at the details of the programs where systematic
progress wasn’t seen, it was due to one or more of the following:
- The objectives of the program changed.
- The program was disrupted by personnel or location changes.
- The recording and utilizing of test results was poor.
- The program was operating at the physical limits of the technology.
At this stage, we
had already found two valuable ways to use these tools for
managing the development process.
One is that when the performance parameters were plotted as
a function of time or the number of experiments carried out, the
rate of progress was frequently smooth enough that
extrapolations to future achievement of program objectives could be
made with some confidence. Perhaps
more importantly, these extrapolations can be presented
convincingly to nontechnical executives.
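As an illustration of the kind of extrapolation described here, a minimal sketch follows. The numbers are invented for the example (a hypothetical lamp-lifetime program with a 2,000-hour goal); the article itself reports no specific figures.

```python
import numpy as np

# Hypothetical performance parameter (lamp lifetime in hours) recorded
# after each batch of experiments; all values are invented.
experiments = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
lifetime = np.array([210, 340, 455, 590, 700, 820, 930, 1060], dtype=float)

# Fit a straight line to the progress curve.
slope, intercept = np.polyfit(experiments, lifetime, 1)

# Extrapolate: roughly how many experiments until the program goal?
goal = 2000.0
experiments_needed = (goal - intercept) / slope

print(f"rate of progress: {slope:.1f} hours per experiment")
print(f"projected experiments to reach goal: {experiments_needed:.1f}")
```

The same plot, drawn by hand on graph paper, is all the method actually requires; the point is that a smooth trend supports a defensible projection.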
The other use is that flawed programs can be quickly
diagnosed and fixed.
Beyond these points, we noticed a number of interesting characteristics
in the behavior of different types of development programs.
A common type of program seeks to squeeze some marginal gain on
the state of the art out of an existing technology. This can require sophisticated and expensive methods and
yield important market advantages, but it means the program is
operating near the physical limits of the technology.
Under these circumstances, the curve of growth of the performance parameters
shows diminishing returns. That
is, the rate of improvement slows as the program progresses. When the physical limits are reached, the curve becomes
horizontal and no further progress is possible unless a new
technology is introduced.
Another, less obvious, phenomenon is that the fluctuations in the results from
test to test diminish as the program progresses.
This is due to steadily increasing control over the various
factors affecting performance.
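One simple way to picture diminishing returns is a program that closes a fixed fraction of the remaining gap to the physical limit on each round of work. This is a sketch with invented numbers, not a model the article proposes:

```python
# A mature program approaching a hypothetical physical limit: each round
# of work closes half of the remaining gap, so gains shrink steadily.
limit = 100.0        # invented physical limit of the technology
performance = 60.0   # invented starting value of the performance parameter
gains = []
for _ in range(6):
    gain = 0.5 * (limit - performance)  # half the remaining gap per round
    performance += gain
    gains.append(round(gain, 2))

print(gains)       # each improvement is smaller than the last
print(performance) # approaches, but never exceeds, the limit
```

The flattening curve this produces is the plateau described above: no further progress without a new technology.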
We also studied
programs which involved technologies in their infancy, very far
from realizing their full potential.
An example is that of electroluminescent display panels. When working with these unexploited technologies, the shapes
of the curves of progress show increasing returns, and the rate of
progress increases as the program progresses.
We believe this is due to the program being able to
concentrate its attention on the more important factors as it
gains experience. It spends less and less time chasing up blind alleys.
In these immature technology programs, the behavior of fluctuations in the results
is also quantitatively different from that seen in mature
technologies. Whereas fluctuations in test results tend to diminish with
time in mature technology programs, fluctuations in the immature
technology programs tend to increase.
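The contrast in fluctuation behavior can be quantified simply by comparing the scatter of early versus late test results. The data below are invented to illustrate the two patterns described above:

```python
import statistics

# Invented test results: a mature program (scatter shrinks as control
# improves) vs. an immature one (scatter grows as progress takes off).
mature = [70, 78, 73, 76, 74, 75.5, 74.8, 75.2, 75.1, 74.9]
immature = [10, 11, 10.5, 10.8, 14, 9, 18, 8, 25, 12]

def early_late_spread(results):
    """Standard deviation of the first half vs. the second half."""
    half = len(results) // 2
    return (statistics.stdev(results[:half]),
            statistics.stdev(results[half:]))

print(early_late_spread(mature))    # spread shrinks: control improving
print(early_late_spread(immature))  # spread grows: territory still unknown
```

A widening spread is thus not necessarily bad news; in an immature technology it can accompany the takeoff of real progress.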
Sometimes there’s also a period in which the performance
parameter fluctuates but doesn’t show any real improvement.
Then suddenly, progress takes off, showing growth and
larger fluctuations. This
seems to be the result of the process of assembling large blocks
of relevant information which, however, does not manifest itself
in overt performance until it comes together in a coherent
framework. As is
sometimes said, “We learned a lot of things not to do.”
Putting these two
types of behavior together is instructive. In the early stages of a developing technology, the rate of
progress increases with time and the curve of progress bends
upward. In the later
stages of development, the curve bends downward.
Thus, over the whole history of the development of a
technology, the curve forms a shape like a flattened letter S.
This is precisely what is seen in the literature on the
historic development of technologies such as steam engines.
But there has been relatively little explanation for it.
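The flattened-S shape described above is conventionally modeled with a logistic curve, whose growth rate rises before the midpoint and falls after it. A sketch, with all parameter values invented:

```python
import math

# Logistic model of cumulative technology performance:
#   P(t) = L / (1 + exp(-k * (t - t0)))
# L = ultimate potential (physical limit), k = steepness, t0 = midpoint.
L, k, t0 = 100.0, 0.5, 10.5

def performance(t):
    return L / (1 + math.exp(-k * (t - t0)))

# Period-over-period gains rise before the midpoint (increasing returns)
# and fall after it (diminishing returns) -- the flattened S.
gains = [performance(t + 1) - performance(t) for t in range(20)]
peak_period = gains.index(max(gains))
print(peak_period)  # the fastest progress sits near the midpoint t0
```

The increasing-returns phase of an infant technology and the diminishing-returns phase of a mature one are simply the two halves of this curve.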
A full S curve is
rarely seen in a single development program. Usually the goals are more modest. Products are introduced when the performance meets a
marketable level, not when the full potential is achieved.
Development programs operating near the maximum potential
are usually launched when a good deal of the potential has already
been discovered and exploited.
The most obvious
application of these ideas is to make all aspects of the product
development process more predictably concurrent. Production facilities can be planned and built as the
development program progresses.
Marketing research and product introduction plans can be
implemented with a more certain idea of program timing.
Another application is the use of these tracking methods to manage the
programs themselves. A
cautionary word: if
the use of the data becomes punitive, the data will become unreliable.
Besides identifying problems, there are other management functions that
tracking of performance parameters can help.
There’s always the risk that pouring resources into a
program can actually impede it (see, for example, The
Mythical Man-Month by Frederick P. Brooks, Jr.).
Conversely, reduction of resources can cause the program to
become subcritical. By
monitoring the performance parameter progress curves, a rapid and
accurate measure of the effect of such actions can be made.
The shapes of
these curves can give valuable clues on the state of a technology.
If the growth curves show plateaus, perhaps that technology
has been milked dry and a new approach should be sought.
On the other hand, we sometimes noticed that, although a
program had reached its performance objectives, it had not yet
reached the limits of its technology; continued effort in the same
direction seemed capable of yielding, and in fact did yield, significant gains.
An intriguing way
of using these methods is to track the progress of your
competitors. In several instances, we were able to get samples of products which a
competitor introduced that were similar to products we had
under development. Some of these introductions were premature, and we were able to compare the
state of the competitor’s performance parameters with our
records. This allowed
us to estimate just when the competitor was going to have a truly
equivalent product. Of
course, our strategy was to introduce an improved product at that
point and send him back to the bottom of his learning curve!