
Measurements Part 1


Shifty Sigma

Now that we have talked about defining Six Sigma and 10 stupid Six Sigma tricks, I’ll move on to the measure phase and discuss some things that work and don’t work in measuring the effect of Six Sigma.

The new metric that Six Sigma brought to us was the sigma index, so let's start with it. Whenever I refer to this index, I will write out "sigma" and put scare quotes around it. This will get tiresome, but it's necessary to differentiate the index from σ, the statistical symbol for the standard deviation. I'll tell you up front that I'm not a big fan of the sigma index for a couple of reasons, which I promise I'll get to.

But first, what is “sigma”?

This is a bit convoluted, I warn you.

A “sigma” by any other name . . .
Let’s say you have a process that produces output that’s measured on a continuous scale such as thickness or time. To make it easier, let’s assume that the process produces output that’s normally distributed and that the specification is symmetrical about the target.

You validate your measurement system so that you can trust your gauge, and perform a short-term data collection to characterize the output of your process.

Short term here is defined as an amount of time where your process is probably only affected by variation common to the process. The duration of “short term” depends on the process, so usually we take consecutive measurements at random intervals over time. (You might recognize this as the basis for rational subgrouping on an x-bar and r or s chart.) The idea is that you can get a “snapshot” of the underlying variation of the process. You make a histogram and find something like this:

Figure 1: Short-term process output

Note that you don’t need 16,000 data points to estimate your process. I just used a lot of data points to get a nice graph.

That looks pretty good. The probability of going out of specification is pretty small, and there's a lot of stuff near the target. In fact, you find that the standard deviation of the process (sigma without scare quotes, or σ) is one-sixth of the distance from the process average to either specification limit; in other words, each specification limit sits six standard deviations away from the average (thus "six sigma").
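To make that concrete, here's a minimal sketch in Python. The spec of 7 to 13 around a target of 10 and the standard deviation of 0.5 are hypothetical numbers chosen so that each specification limit is six standard deviations from the average:

```python
from statistics import NormalDist

# Hypothetical process: symmetric spec of 10 +/- 3, short-term sigma = 0.5
usl, lsl, mu, sigma = 13.0, 7.0, 10.0, 0.5

# Distance to the nearest spec limit, in standard deviations
z = min(usl - mu, mu - lsl) / sigma
print(z)  # 6.0

# Short-term probability of a nonconforming part (both tails combined)
p_out = NormalDist().cdf(-(usl - mu) / sigma) + NormalDist().cdf(-(mu - lsl) / sigma)
print(p_out * 1e6)  # about 0.002 ppm
```

With the process centered, the chance of exceeding a limit six standard deviations away really is vanishingly small.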

Sometimes, however, the process average shifts due to unusual occurrences (special causes). If you have a control chart on the process, you can detect even subtle shifts pretty quickly. If you don’t, you probably won’t catch a shift unless it’s bad enough so that it goes out of specification, long after the original special cause occurs. In Six Sigma lore, we assume that a process drifts up to ±1.5 standard deviations over time.

There has been a lot of discussion about the applicability of this ±1.5 standard deviation shift. So let's be clear: There's no reason to expect that any particular process average will shift ±1.5 standard deviations over time. However, before the old-guard statisticians say, "I told you so, you young whippersnappers," there was also no reason to think that a process would shift by ±1 standard deviation back when we were requiring a Cpk of 1.33, either. There's more to the difference between a Cpk of 1.33 and "sigma" than the magnitude of the shift, however. The basis for a Cpk of 1.33 is that even if your process shifts ±1 standard deviation, a normally distributed process's natural process limit would just fall on a specification limit, giving you time to react. To calculate Cpk you're required to run a control chart, so you'll easily detect the shift within a few points and be able to correct the process. With "sigma," however, we're conceding that shifts of ±1.5 standard deviations are unavoidable and building in extra room inside the specification to absorb them. This is weird to me: Either you have a process in control with a control chart, in which case you'll detect the shifts long before you're likely to make a part out of specification, or you don't have a process in control, in which case you can't actually predict anything about the future, and calculating "sigma" or Cpk is useless anyway.

I like Cpk better in this case since Cpk explicitly requires control (stability over time), which in turn requires that you have a control chart so that you can react quickly if it shifts, and so that it doesn’t end up like Figure 2. I also like it better because if you have a Cpk you have the data you need for Cpm, my preferred process-characterization metric.
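For reference, the two indices compare like this. The sketch below uses the standard definitions, Cpk = min(USL − μ, μ − LSL)/3σ and Cpm = (USL − LSL)/(6√(σ² + (μ − T)²)), with the same hypothetical spec of 7 to 13, target 10, and σ = 0.5 as before:

```python
import math

def cpk(mu, sigma, lsl, usl):
    # Capability relative to the nearest specification limit
    return min(usl - mu, mu - lsl) / (3 * sigma)

def cpm(mu, sigma, lsl, usl, target):
    # Taguchi capability index: penalizes being off-target as well as spread
    tau = math.sqrt(sigma**2 + (mu - target)**2)
    return (usl - lsl) / (6 * tau)

# On-target "six sigma" process: both indices agree
print(cpk(10.0, 0.5, 7.0, 13.0))        # 2.0
print(cpm(10.0, 0.5, 7.0, 13.0, 10.0))  # 2.0

# The same process shifted up 1.5 standard deviations (to 10.75)
print(cpk(10.75, 0.5, 7.0, 13.0))       # 1.5
```

Note how Cpm, unlike Cpk, folds the distance from target directly into the denominator, which is why it rewards centering and not just staying inside the limits.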

OK, so back to the ±1.5 standard deviation shift in the average. If we presume that our process does that, the specification limits over time won’t always be six standard deviations away from the output average. In the worst case, they will be only four-and-a-half, as below:

Figure 2: Short-term process output shifted up 1.5 standard deviations

Even with the shift, you would expect a process to make a nonconforming part only 3.4 out of a million times (3.4 ppm).
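The famous 3.4 ppm figure follows directly from that worst-case picture: one spec limit ends up 4.5 standard deviations from the shifted average, the other 7.5. A quick check in Python:

```python
from statistics import NormalDist

# Worst-case 1.5-sigma shift: nearer spec limit is 4.5 sigma away,
# farther limit is 7.5 sigma away.
ppm_near = NormalDist().cdf(-4.5) * 1e6  # tail beyond the nearer limit
ppm_far = NormalDist().cdf(-7.5) * 1e6   # the other tail is negligible
print(round(ppm_near + ppm_far, 1))      # 3.4
```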

So, this output that over the short term has six standard deviations between the process average and the specification limits, and which, due to the ±1.5 standard deviation shift could have as few as 4.5 standard deviations between the process average and the specification limits, is indexed to six “sigma.” (When I explain this to Six Sigma clients they jokingly ask if they can get 25 percent of their money back, as they’re only getting 4.5 sigmas, not the six they thought they were paying for. At least I think they’re joking.)

There are two different ways to calculate "sigma." One is based on assuming a normal distribution; the other is based on estimating the proportion nonconforming, either by fitting a curve to non-normal data or by counting defects against the number of opportunities for a defect.

If over the short term the distribution is normal and in control, "sigma" is the z-score of the specification limit closest to the mean:

"sigma" = min(USL − μ, μ − LSL) / σ

where μ is the average and σ is the standard deviation. Figure 1 above would have a "sigma" of 6. If you have collected data over a long period of time, so that your data are subject to those putative shifts in average, then you calculate the z-score based on those data and the specifications and add 1.5 to it. If you have non-normal data that's in control, you can model a distribution, predict which tail will have the largest proportion exceeding the specification limit, look up this probability to get a z, and use that to get "sigma."
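As a minimal sketch of the z-score approach, here's the short-term calculation in Python. The sample data and the spec limits of 8.9 and 11.1 are hypothetical numbers for illustration:

```python
from statistics import NormalDist, mean, stdev

def sigma_index(data, lsl, usl, long_term=False):
    """z-score of the spec limit nearest the mean, plus the conventional
    1.5 shift allowance if the data span the long term."""
    mu, s = mean(data), stdev(data)
    z = min(usl - mu, mu - lsl) / s
    return z + 1.5 if long_term else z

# Illustrative short-term sample (hypothetical numbers)
data = [9.4, 10.2, 9.9, 10.5, 9.7, 10.1, 9.8, 10.3]
print(round(sigma_index(data, lsl=8.9, usl=11.1), 2))

# For in-control but non-normal output: model the distribution, estimate the
# proportion beyond the nearer spec limit, and convert that to a z-score.
p_out = 0.00135  # hypothetical modeled tail proportion
z_equiv = -NormalDist().inv_cdf(p_out)
print(round(z_equiv, 2))  # 3.0
```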

The other way that "sigma" is determined, which is used for non-normal continuous data as well as discrete data, is to predict the parts per million that won't be within specification and look up that value on a table indexed to a normal distribution. You can create this table in an Excel spreadsheet by making a column for "sigma" and filling it with 0, 0.1, 0.2, etc., on up to 6 (or higher if you feel you might need it). Next to that, label a column "Defects per Million Opportunities (DPMO)" and enter:

=1000000*NORMSDIST(1.5-A4)

replacing A4 with whatever the cell address is of your first "sigma" cell, and copy it down. (This equation is for the long-term "sigma.") Now estimate from your data the number that would be out of specification in a million tries (DPMO) and look up the corresponding "sigma." If you would rather just enter in the DPMO, you can use this formula:

=1.5-NORMSINV(A4/1000000)

Again, replacing A4 with the cell address of where you are entering the DPMO.
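If you'd rather skip the spreadsheet, the same DPMO lookup (with the conventional 1.5-shift built in) can be sketched in Python:

```python
from statistics import NormalDist

def dpmo_from_sigma(sigma_index):
    # Long-term convention: DPMO = 1,000,000 * P(Z > sigma_index - 1.5)
    return 1_000_000 * NormalDist().cdf(1.5 - sigma_index)

def sigma_from_dpmo(dpmo):
    # Inverse lookup, same 1.5-shift convention
    return 1.5 - NormalDist().inv_cdf(dpmo / 1_000_000)

print(round(dpmo_from_sigma(6.0), 1))  # 3.4
print(round(sigma_from_dpmo(3.4), 2))  # 6.0
```

Note that the two functions are inverses of each other, so a "six sigma" process and 3.4 DPMO are just two names for the same point on the table.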

The good part
The idea is that you can index any process output to “sigma,” including processes that only produce discrete data (such as number of errors) and compare them. It was introduced to me as “A measure that’s so simple, even a manager could understand it.” If a manager’s “sigma” is six, good. If it isn’t, improvement is needed. Now this is out-the-door quality, so it’s also easy to convey that if your target is 3.4 ppm nonconforming out the door, you had better have a lot better conformance than that in each process step. Sounds good, so what could be wrong with “sigma”? Next month I’ll tell you why I don’t like to use it and why “sigma” can encourage bad behavior.

Special thanks to Michael Petrovich and his program MVPstats, which makes these graphics so easy to generate.

Please post your questions and comments at the Discussion Board.
