Show Me The Money
 Created: 16 April 2008
 Written by Steven Ouellette
If you have been following my articles for the last few months, you know that we’re almost done with an experimental analysis and that today we will be doing the final step—making our company money. If you haven’t been following my articles, then you should probably be flogged with a soggy noodle until you admit to your other crimes in a tearful confession posted on YouTube.
So to recap: We are trying to choose the best material for a gear that we make. We have set up an experiment testing four new materials along with the current material (Material 5). We properly designed and executed the experiment and measured the wear for each material, with higher numbers meaning longer wear. The data can be found here. However, wear isn’t our only criterion; we also have to consider the cost of the materials and the losses due to nonconformance. Here are the material costs:
Table 1 — Material Costs
Material 1   $0.0375/unit
Material 2   $0.0625/unit
Material 3   $0.0625/unit
Material 4   $0.0729/unit
Material 5   $0.0417/unit
In addition, as Genichi Taguchi showed us, we incur losses as we deviate from the target, both on average and due to variation. We have a lower specification limit of 38, a target, if we can get it, of 52 (as that will allow us to match our competitors), and no upper specification. I proposed a remediation cost of $1 per unit, which is the loss of getting a part right at specification. If we put the spec and the Taguchi Loss Function on a graph, we get Figure 1.
Figure 1 — The Taguchi Loss Function, Specification Limit, and Remediation Cost 

We found significant differences in wear variability across the materials, using the absolute deviation from averages (ADA) as our measure of variation. We performed a Tukey honestly-significant-difference (HSD) post-hoc analysis to determine which materials were significantly different from the others, summarized in the following table:
Table 2 — Homogeneous Subsets from Tukey Test on ADAs

Material   S1      S2
5          0.875
3          1.000
1          1.531   1.531
4          1.563   1.563
2                  2.5
(To read the table: numbers appearing in the same column indicate materials that cannot be called different from each other. There are two subsets, designated S1 and S2, and each material is shown on a row. The numbers in the subsets are the averages, in this case the average ADAs.)
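One way to see what the table encodes: two materials can be called different only if they never share a homogeneous subset. A small sketch of that reading rule, using the Table 2 values (the M1–M5 labels are just shorthand):

```python
# ADA averages from Table 2 (shown for context)
ada = {"M5": 0.875, "M3": 1.0, "M1": 1.531, "M4": 1.563, "M2": 2.5}

# The two homogeneous subsets as the table shows them; S1 and S2 overlap
# in M1 and M4.
subsets = {
    "S1": ["M5", "M3", "M1", "M4"],
    "S2": ["M1", "M4", "M2"],
}

def distinguishable(a, b):
    """True if a and b never appear together in any homogeneous subset."""
    return not any(a in s and b in s for s in subsets.values())

print(distinguishable("M5", "M2"))  # True  -> significantly different
print(distinguishable("M1", "M2"))  # False -> cannot be called different
print(distinguishable("M1", "M5"))  # False -> shares S1 with M5
```

This is exactly the "A = B and B = C but A ≠ C" situation discussed below: M1 is indistinguishable from both M5 and M2, yet M5 and M2 differ.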
We also found significant differences among the means of the five materials, and using post-hoc analysis with the Games-Howell procedure we came up with the following homogeneous subsets:
Table 3 — Homogeneous Subsets of Average Wear

Material   S1       S2       S3       S4
5          40.125
1          41.625   41.625
3                   44.5     44.5
2                   45       45
4                                     52.25
So that’s what we know so far. In the last article, I challenged you to determine which (if any) of the materials was the best choice. As it stands now, you could justify almost any of the materials based on one criterion or another. Material 4 has the highest average wear, but high variability too. Material 1 is the cheapest and is about the same on average as our current material, but it might be a little more variable, and we can’t even make spec with the current material. Material 3 seems like a happy medium: it might give a little higher wear than our current product, and it has pretty low variability. Heck, there are probably people who would say to stick with the current material, because we have experience with it.
This is a case where, depending on your criterion, you could end up selecting different materials. But the goal of Six Sigma is to minimize the costs of poor quality, and that’s the criterion we should be using.
Luckily, the Taguchi loss function allows us to estimate the relative losses of the different materials. The losses aren’t necessarily exactly the dollar amount that you will experience, but the relative proportions of the cost differences are probably pretty close.
To estimate the losses, we need point estimates of the averages and variances. You could use the sample means, but we know that these averages are subject to sampling error, and some of the differences between materials may be due to chance alone. So to get the best possible estimate of the “real” averages, we can “pool,” or average, the ones that are statistically indistinguishable from one another.
The tricky bit is when a material falls into two or more categories. In statistics we get to say funny things such as, “A = B and B = C but A ≠ C.” (Yeah, that one is a sure-fire laugh a minute with a statistician. A mad, high-pitched one in a small, padded room, but a laugh nonetheless.) But in real life, we know that B can’t be equal to both A and C if A and C are different, so we have to make a choice. It’s the same with our materials.
One technique taught by my mentor Jeffrey Luftig is to pool across the homogeneous subsets using the following rules:

- If a setting appears by itself in only one subset, it is its own point estimate (like Material 4 below).
- If a setting appears in two or more subsets, we can’t use it in all of them. If it always occurs in those subsets with the same block of other settings, we pool the block to get the estimate (like Materials 2 and 3 below).
- If a setting occurs in more than one subset, but not always with the same other settings, it is its own point estimate (like Material 1 below).
- Any settings that are left appear in only one subset and are pooled together (like Material 5 below).
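Applying these rules to the wear means from Table 3 can be sketched as follows. The grouping itself was decided by hand from the rules above; the code just does the averaging:

```python
# Sample means from Table 3 (M1..M5 are shorthand for Materials 1..5)
means = {"M5": 40.125, "M1": 41.625, "M3": 44.5, "M2": 45.0, "M4": 52.25}

# Pooling decisions from the rules: M4 sits alone in S4; M1 straddles
# subsets without a constant companion block, so it stands alone; M2 and
# M3 always appear together, so they pool; M5 is what's left in S1.
pooled_groups = [["M4"], ["M1"], ["M2", "M3"], ["M5"]]

point_estimates = {}
for group in pooled_groups:
    pooled_mean = sum(means[m] for m in group) / len(group)
    for m in group:
        point_estimates[m] = pooled_mean

print(point_estimates["M3"])  # 44.75 (pooled with M2)
print(point_estimates["M4"])  # 52.25 (its own estimate)
```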
Table 4 — Generating Pooled Point Estimates of Wear

Material   S1       S2       S3       S4      Point Estimate of μ
5          40.125                             40.125
1          41.625   41.625                    41.625
3                   44.5     44.5             44.75
2                   45       45               44.75
4                                     52.25   52.25
The same rules hold for pooling to get point estimates of the variability, but be careful here—we used the ADAs to make the statistical decision, but what we need is the variance. So we use the homogenous subsets to tell us what to average, but we average the variances, not the ADAs. (And of course you know you can’t take averages of standard deviations…)
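A quick numeric illustration of why we average variances rather than standard deviations. The two within-material variances below are hypothetical, chosen only so that their pooled value matches the 3.8125 shown for Materials 1 and 4 in Table 5:

```python
import math

# Hypothetical within-material variances (the real split isn't in the article)
var_m1, var_m4 = 2.554, 5.071

# With equal sample sizes, the pooled variance is the simple average:
pooled_var = (var_m1 + var_m4) / 2          # 3.8125
pooled_sd = math.sqrt(pooled_var)

# Averaging the standard deviations instead gives a different (wrong) answer:
wrong_sd = (math.sqrt(var_m1) + math.sqrt(var_m4)) / 2

print(pooled_sd, wrong_sd)  # averaging SDs understates the pooled spread
```

Because the square root is concave, the average of the standard deviations is always less than or equal to the standard deviation of the averaged variances, so the shortcut systematically understates variation.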
Table 5 — Generating Pooled Point Estimates of Wear Variances

Material   S1      S2      Point Estimate of σ²
5          0.875           1.3485
3          1.000           1.3485
1          1.531   1.531   3.8125
4          1.563   1.563   3.8125
2                  2.5     8.571
Now, heading into this analysis, I want to make a picture so that I can see what’s going on. Using the pooled means and standard deviations we have just calculated, I generated Figure 2 to illustrate the output we would expect from each material.
Figure 2 — Taguchi Loss Function and Material Distributions 

Looking at this, I could probably rule out anything that has a tail going below that lower spec limit; those materials certainly aren’t at a Six Sigma level. So realistically, I’m probably only looking at Materials 3 and 4 as my two potentials. But we will go ahead and calculate losses for them all.
To calculate the losses, we use the following formula:

L_x = (C_x / Δ²) × [σ² + (μ − N)²]

Where L_x is the dollar loss associated with each individual unit produced, C_x is the remediation cost, Δ is the distance from the specification limit to nominal, σ² is the process variance, μ is the process mean, and N is nominal. (Note that an argument could be made that any gears in excess of the target should not count as a loss, since there is no upper specification. Personally, I think that would be dangerous, because high variation has its own inherent cost that should be accounted for, even if it is in a “good” direction.)
At this point, it’s all math:
Treatment    n   C_x   Δ    N    μ        σ²       (μ − N)²    Loss/Unit    Units/Month   Total Yearly Loss   Price to Door   Total Yearly Cost
Material 1   8   $1    14   52   41.625   3.8125   107.6406    $0.568638    200,000       $1,364,732          $90,000         $1,454,732
Material 2   8   $1    14   52   44.75    8.571     52.5625    $0.311906    200,000       $748,573            $150,000        $898,573
Material 3   8   $1    14   52   44.75    1.3485    52.5625    $0.275056    200,000       $660,135            $150,000        $810,135
Material 4   8   $1    14   52   52.25    3.8125     0.0625    $0.019770    200,000       $47,449             $174,960        $222,409
Material 5   8   $1    14   52   40.125   1.3485   141.0156    $0.726348    200,000       $1,743,234          $100,080        $1,843,314

(The yearly figures are based on a volume of 200,000 units per month, or 2,400,000 per year.)
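The Loss/Unit column can be reproduced directly from the formula and the pooled point estimates:

```python
# Constants from the article: C = $1 remediation cost, Δ = 14, N = 52
C, DELTA, N = 1.0, 14.0, 52.0

materials = {  # name: (pooled mean, pooled variance)
    "Material 1": (41.625, 3.8125),
    "Material 2": (44.75, 8.571),
    "Material 3": (44.75, 1.3485),
    "Material 4": (52.25, 3.8125),
    "Material 5": (40.125, 1.3485),
}

for name, (mu, var) in materials.items():
    loss_per_unit = (C / DELTA**2) * (var + (mu - N) ** 2)
    print(f"{name}: ${loss_per_unit:.6f}/unit")
# Material 4 comes out lowest, at about $0.019770/unit, matching the table.
```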
Finally, we can answer the question of which material will result in the most bottom-line savings to the process. Even though it’s the most expensive material, because Material 4 is right on target it saves us the most money. Compared to our current material, we estimate that it will reduce costs by $1,620,905 per year, even accounting for the fact that it’s going to cost us $74,880 more per year to buy the material. The “hidden” costs of variation off the target have been large: $1.84 million. Our current gear material doesn’t consistently meet the spec, and we were told that its average wear is less than our competitors’, so the real cost (and real savings with Material 4) is likely to be even higher—Deming’s “unknown and unknowable.”
Bring it on home
Using the Taguchi loss function to estimate the losses associated with different experimental settings can be very helpful in selecting the best one. But to get there, you have to understand how to do an ANOVA to look for the differences in means. In order, you would:

1. Test for normality (even though ANOVA is robust to departures from normality with equal sample sizes, we might need to know about non-normality for later steps).
2. Test for equal variances using Levene’s test (to find out whether the settings change the variation).
   - If the variances are unequal, perform post-hocs to determine where the differences are.
3. Test for equal means using ANOVA.
   - If the means are unequal, perform post-hocs to determine where the differences are.
4. Generate point estimates for the means and variances.
5. Use the Taguchi loss function to account for differences in conformance to target due to variation and/or means, and select the optimum to reduce losses.
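The first few steps of this sequence can be sketched with SciPy’s standard tests. The data below is simulated (the article’s dataset isn’t reproduced here); with real data you would load the wear measurements instead:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated wear data: five materials, 8 observations each, means roughly
# matching the article's point estimates
groups = [rng.normal(mu, 1.5, 8) for mu in (41.6, 44.8, 44.8, 52.3, 40.1)]

# 1. Normality check per group (Shapiro-Wilk)
for g in groups:
    w_stat, w_p = stats.shapiro(g)

# 2. Equal variances (Levene's test)
lev_stat, lev_p = stats.levene(*groups)

# 3. Equal means (one-way ANOVA)
f_stat, anova_p = stats.f_oneway(*groups)

print(f"Levene p = {lev_p:.3f}, ANOVA p = {anova_p:.4g}")
# If either test rejects, follow with post-hocs (Tukey HSD on ADAs for
# variances, Games-Howell for means) to find where the differences lie,
# then pool the point estimates and apply the loss function.
```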
The Taguchi loss function is no panacea, nor does it give exact dollar numbers, but it does give a pretty reliable relative rating of where the losses are generated. It is, of course, highly dependent on knowing what the optimum nominal really is. Businesses without this type of market intelligence really need to fix that first, because in the absence of a target, everything seems good, which means you can get anything. In my experience, the absence of a target is the source of a lot of “mysterious” and “inexplicable” problems that disappear once a target is put in place.
What you want to avoid is making decisions without an understanding of the financial effects. Sure, in this case choosing the most wear-resistant material turned out to be best, but you can easily picture circumstances where that wouldn’t be the case. By performing post-hocs, you might find that while the expensive setting happened to be higher this time, it’s really not statistically distinguishable from a cheaper setting.
And hey, thanks for staying with me through this analysis. I hope that you can use these tools to save yourself and your company a lot of money. I know I had fun, and I think you did too.
But, I could be wrong.
Special thanks again to MVPstats and PHAST™ for making the analysis easy.