“In his more than 30 years at Ford Motor Company, Richard has made outstanding contributions to the development of our new products,” said Ford Motor Company President and Chief Executive Officer Alan Mulally. “Richard’s feel for the customer and technical expertise has helped improve all of our brands, and his drive and determination have been a great inspiration to our engineers as we have accelerated the development of our new products that people really want and value.”
Parry-Jones’ present responsibilities will be assumed by Derrick Kuzak, group vice president, Global Product Development, and members of the global product development leadership team. In particular, Gerhard Schmidt, 61, vice president, Research & Advanced Engineering, will assume the role of chief technical officer in addition to his present responsibilities.
Parry-Jones, 56, is known throughout the industry for his expertise in vehicle development – particularly in the area of driving dynamics and refinement. He also is highly regarded for his work on product safety and environmental initiatives.
He was named “Man of the Year” in 1994 by the top British publication Autocar, and in 1997 by the U.S. magazine Automobile. In 2001, he received the Golden Gear Award for outstanding automotive achievement from the Washington Automotive Press Association.
He is an elected fellow of the Royal Academy of Engineering and the Institution of Mechanical Engineers. In 1995, he was awarded an honorary doctorate by Loughborough University in the UK for his contributions to the motor industry and engineering education and training. He also was awarded an honorary Doctorate of Science from Cranfield University in June 2007.
Since 2001, Parry-Jones has been a visiting professor within the Department of Aeronautical and Automotive Engineering at Loughborough University. In 2005 he was awarded a CBE (Commander of the Order of the British Empire) in HM The Queen’s New Year honors list for services to the automobile industry.
Title: Group Vice President and Chief Technical Officer, Ford Motor Company
Parry-Jones also serves as a visiting professor within the Department of Aeronautical and Automotive Engineering at England’s Loughborough University, a position to which he was appointed on Oct. 17, 2001.
Equally passionate about customer satisfaction, Parry-Jones is helping Ford teams drive consumer insight and quality initiatives through the product engineering process to deliver cars and trucks with improved initial quality, long-range durability, technological innovation and the surprise-and-delight features that anticipate the wants and needs of customers.
He was named group vice president on Jan. 1, 1998, and appointed chief technical officer by the Board of Directors on Aug. 1, 2001. He was named as the senior executive for Mazda oversight on Nov. 15, 2001.
From May 1994 to December 1997, Parry-Jones was vice president of the Product Development Group in Europe. Vehicles developed under his direction there include the Ford Focus, Ka, Fiesta, Puma, Mondeo, Cougar and Galaxy.
Born in Wales in 1951, Parry-Jones joined Ford’s Product Development Group in 1969 as an undergraduate trainee. He went on to graduate with a First Class Honours Degree in mechanical engineering from the University of Salford, Manchester, in 1973, then held a number of product roles in Ford's UK operations.
He was appointed manager of Small Car Programs in 1982 and played a leading role in the development of the European Escort (1981 European Car of the Year) and the introduction of the European Sierra in 1983. He was named executive engineer of Ford’s Technological Research in Europe in 1985, before adding responsibility for Vehicle Concepts a year later.
Fluent in both English and German, Parry-Jones’ international experience includes an assignment as director of Vehicle Concepts Engineering in the United States in 1988, before taking charge of Manufacturing Operations at Ford’s Cologne, Germany, assembly plant in 1990.
Named chief engineer for Vehicle Engineering in 1991, Parry-Jones is well known for his expertise in vehicle development, particularly in the area of driving dynamics and refinement, and his professional accomplishments have been widely recognized.
Parry-Jones was named “Man of the Year” in 1994 by the top British publication Autocar and in 1997 by the U.S. magazine Automobile. In 2001 he received the Golden Gear Award for outstanding automotive achievement from the Washington Automotive Press Association and was honoured as “Marketing Statesman of the Year” by the Sales and Marketing Executives of Detroit.
He is an elected fellow of both the Royal Academy of Engineering and the Institution of Mechanical Engineers. In 1995 he was awarded an honorary doctorate by Loughborough University in recognition of his outstanding contributions to the motor industry and engineering education and training. In early 2005 he was awarded a CBE (Commander of the Order of the British Empire) in HM The Queen's New Year honours list for services to the automobile industry.
Group Vice President
Global Product Development & Quality
Ford Motor Company
May 17, 2001
Thanks very much for that introduction and it's a pleasure to be here.
Over the next 45 minutes I want to talk about Engineering for Corporate Success in the New Millennium. Specifically, my address focuses on statistical engineering techniques and the power they bring to the engineering process in automotive design.
The business model for success and sustainability has changed. This change is causing upheaval at traditional manufacturers such as Ford. As the result of many developments such as the globalisation of markets, the reduction in trade restrictions, and the ever-increasing pervasiveness and utility of the internet, consumers are now extremely powerful. To survive in this environment, a producer must create desire among a very well informed, very savvy, very demanding customer base.
Of course, this new model does not eliminate a company's responsibility to create wealth and increase shareholder value. A company must have a sustainable business case if it is to have a future. The capital market's judgment, in the long term, is a reflection of how well a company can sustainably add value for its customers. Hence, succeeding with shareholders requires success with the consumer.
Thus, to succeed today, a company's core competencies need to include the ability to quickly understand consumers and exceed their expectations. This means translating their needs and anticipating their future wants, and then being so proficient and efficient in responding to that insight that the consumer becomes passionate about the product. It goes beyond fulfilling or stimulating rational needs to connecting with customers at a deep, emotional level.
It requires a focus on true customer insight and first-rate skills in engineering and manufacturing to create innovative solutions to consumers' emotional and rational needs. These two areas - consumer insight and engineering and manufacturing quality - are the themes I want to address.
The modern engineer has a crucial role in both. Today, the engineer is a driving force behind the insight and innovation that brings the consumer back. Relying on the marketing department to supply customer wants is no longer sufficient. Engineers must meet and live with their consumers to understand their underlying needs, their aspirations, their lifestyles, and their dreams. Engineers have to get out there and talk to consumers. They need to interact with consumers in a visceral way to understand their needs and emotions. This means doing observational research. You cannot understand what customers want just by asking them because many of them cannot articulate their needs and wants.
For example, attending Ford consumer clinics has taught me to look for where the consumer has improvised. I look for things like bungee cords, rubber bands, cardboard boxes, and accessories. These are instances where consumers have a need that we have failed to anticipate and fulfill. They may not even be aware of that need and perhaps when asked would not express it as one. The improvisation is a clue for a better product and that's where we need to use observational research. And engineers must understand the major societal forces that are shaping the way those consumers will behave in the future and anticipate the needs those behaviors may generate. No easy task!
Let us start with the consumer. Modern consumers are not just better informed, they are more globally aware and more pro-active about obtaining information. Their world is fast-paced. Their time is precious. They are impatient and particular.
They expect products and services to be personal and customized, and they judge the quality of the products and services they buy not against competitive or comparable products but against their most satisfying purchasing experience - regardless of sector or industry. Consumers now benchmark between industries. An outstanding experience buying a CD or using a computer or staying in a hotel sets a new standard, a new expectation, against which all future product and service experiences - books, restaurants, coats and cars - will be judged.
How do engineers gain insight into consumers? Once upon a time, we segmented consumers by simple demographics such as age, gender, and income. They were envisioned as generic groups moving randomly from one trend to another. Today, we recognize that trends are not random and that there is no "generic" consumer. Even within broad labels, consumers are very different. Companies such as Ford have developed sophisticated tools to understand the dynamics of the market. Increasingly, we are clustering customers into precise sub-groups with common emotional needs, physical needs, lifestyles, values, and aspirations. We are also developing relationships on a one-to-one basis with consumers.
We are getting better at understanding the consumer. You might be thinking, "We're engineers. This is all well and good, but it's all touchy-feely stuff. Give me the data." Because we're engineers, we need tools and models to capture consumer insights. A useful tool to model customer enthusiasm for a product is the Kano model. In the Kano model, the abscissa reflects the corporation's degree of success in responding to the customer need. The ordinate measures customer enthusiasm as a consequence of that degree of achievement.
There are three basic categories of customer satisfaction that, when delivered with flawless execution, combine to create customer enthusiasm. They are basic quality, performance quality and surprise and delight quality.
The basic quality reflects the absolute minimum that a customer expects. For example, a customer expects to be able to shift gears without any notchy feel. If we fail to deliver this, she or he is very dissatisfied. If we do deliver it, well that is just what the customer expected.
Performance quality covers aspects for which the better we do, the more the customer enthuses about the product. Two examples are fuel economy and the level of craftsmanship in the car.
Surprise and delight aspects are probably the most difficult to deliver. These often differentiate a product from the rest of the competition.
If we fail to deliver surprise and delight features the customer is not dissatisfied, since the feature was not necessarily expected. However, surprising, delightful features cause the customer to enthuse about the product and build loyalty and brand image.
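For those who want the data, these three categories can be sketched as simple curves of enthusiasm against execution. The functional forms below are my own illustrative assumptions - the Kano model is normally drawn qualitatively - but they capture the shapes just described:

```python
import math

def kano_satisfaction(execution: float, category: str) -> float:
    """Illustrative Kano curves: map degree of execution (0..1, the abscissa)
    to customer enthusiasm (the ordinate, roughly -1..+1).

    These exact functional forms are assumptions chosen for illustration.
    """
    if category == "basic":
        # Failure is deeply dissatisfying; flawless delivery only reaches neutral.
        return -math.exp(-6.0 * execution)
    if category == "performance":
        # Enthusiasm grows in proportion to how well we execute.
        return 2.0 * execution - 1.0
    if category == "delight":
        # Absence is neutral; presence creates disproportionate enthusiasm.
        return math.exp(6.0 * (execution - 1.0))
    raise ValueError(f"unknown Kano category: {category}")
```

The asymmetry is the point: a basic-quality miss costs far more enthusiasm than a basic-quality success earns, while a delight feature can only add.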
For example, in the Ford Focus we were determined to deliver superior ride and handling for a small family hatchback by using a control blade suspension system.
We also wanted to provide an innovative high-seating package for a family car. Consumers did not expect to be able to get in and out of the car so easily and so comfortably. But they can. Ford is operationalising the Kano model on all new products, such as the Focus, as a way to identify and communicate consumer insights.
In turn, the Kano model provides the engineer fundamental direction on what a product must do to meet and exceed consumer expectations. It then becomes the engineer's task to synthesize products that align with the Kano model for his or her program.
How the engineer designs the product, and particularly the tools that are used by the engineer, is the second key area of tonight's address - specifically manufacturing quality and the treatment of variability in engineering design. Variation is an old problem, and it is the curse of many manufacturing processes. Try as we may, we cannot make the next part exactly like the one that preceded it. The treatment of variation in manufacturing, and the formulation of strategies to deal with it, dates back to the 1920s and the invention of the control chart by Shewhart. Variation impairs our ability to predict product performance - including the performance experienced by the consumer.
Although this is an old problem, new ways of managing variation at the R&D stage are being developed as a result of the synergies between engineering and statistical science. But, rather than talk about theories of what might be if the world was perfect, I will instead use a series of case studies - real world examples, like those used in business classes - to make my points. And then I will finish with an appeal for help. So let's start with a simple illustration of variability in manufacturing - paint.
Henry Ford once said, "You can have any color you want as long as it is black." This quote is frequently used out of context and misinterpreted to suggest that the auto industry is not concerned with what customers actually want. Nothing could be further from the truth. What consumers want is our obsession. Even Henry Ford, back in the early days of the twentieth century, was trying to provide what consumers wanted - affordable, reliable, available, individual mass transportation. With the technology at the time, black paint was the easiest color to apply and the quickest to dry. These attributes made the manufacturing process more efficient, which kept costs down and made the product affordable - which was the critical consumer want.
Speaking of quotations being used out of context, a few years before Henry Ford was painting his cars black, Benjamin Disraeli (who was the British Prime Minister near the end of the 19th Century) stated, "Lies, damn lies, and statistics." In Disraeli's time the word "statistics" meant, "Data collected and tabulated for use by the state." This quote is often used to deride statistical analysis. My goal today is to demonstrate the value of statistical analysis to engineering. I will demonstrate how this combination of engineering science (the study of physics and materials) and statistical science (the empirical modeling of variability) is a necessary tool to achieve what is demanded by our customers - a consistent level of superlative performance.
Paint colour is one of the more easily recognisable characteristics. It allows customers to customize and personalize their vehicles and vehicle manufacturers provide a vast array of colours to satisfy a wide range of desires.
I want to illustrate how variability plays a key part in meeting these desires, by allowing Ford to offer different colours at the consistent level of quality that customers have a right to expect. One key paint attribute necessary to deliver high quality is film thickness. Too much thickness and the paint will sag and run; too little and it will fail to cover properly. To complicate things even further, different colours have different "hiding power" and so they require different thicknesses to achieve the required colour tone, appearance, gloss level, and so on. It is tempting to assume that the best way to keep the paint thickness on target would be to adjust the amount of paint being applied through the gun, based on deviations from the target. There are simple adjusters on the nozzles, so this simple feedback mechanism surely could do no harm.
Well, it so happens that if the process is already relatively stable with random fluctuations around the target thickness value, this kind of intervention is disastrous.
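A quick simulation makes the point. The numbers below are invented, and the adjustment rule - compensate after every part by the full deviation just observed - mirrors Deming's well-known funnel demonstration rather than any actual booth procedure:

```python
import random
import statistics

def simulate(n=10000, sigma=1.0, tamper=False, seed=42):
    """Simulate a paint process that is already stable around its target.

    Without tampering, each film-thickness reading is target + random noise.
    With tampering, the operator adjusts the gun after every part by the
    full deviation just observed. All numbers are illustrative.
    """
    rng = random.Random(seed)
    target, adjustment = 0.0, 0.0
    readings = []
    for _ in range(n):
        value = target + adjustment + rng.gauss(0.0, sigma)
        readings.append(value)
        if tamper:
            adjustment -= value - target  # "correct" by the last deviation
    return statistics.stdev(readings)
```

The tampered process ends up with roughly 1.4 times (the square root of two) the standard deviation of the hands-off process - intervention on a stable process adds variation rather than removing it.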
This chart shows how the variation has increased, not decreased. That is the last thing we want! The chart indicates (correctly) that we have been tampering with the process, and it calls out for us to use a different strategy to reduce variation. What we need to do is look for some process variables that are related to or influence the characteristic we are trying to improve. It turns out that one of the key factors which causes variation in paint thickness is the variation in airflow in the paint booth. Paint is applied in a fine mist, and draughts make it difficult to keep an even thickness.
The data in this Figure shows how paint thickness depends on airflow. As you can see, for a given spray rate, the greater the airflow, the thinner the paint tends to be. That is, variation in airflow will transmit variation to paint thickness, by virtue of the relationship, or correlation, to use the statistical term.
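The statistical mechanism here is simple variance transmission: if thickness depends on airflow with slope b, then airflow variation of size sigma shows up in thickness as roughly |b| times sigma. A sketch with invented numbers (the slope, levels, and noise below are assumptions, not measured booth data):

```python
import random
import statistics

rng = random.Random(7)
slope, intercept = -0.08, 30.0   # assumed: microns of film per unit of airflow
airflow = [100 + rng.gauss(0, 10) for _ in range(5000)]   # draughty booth
thickness = [intercept + slope * a + rng.gauss(0, 0.3)    # gun's own small noise
             for a in airflow]

sd_air = statistics.stdev(airflow)
sd_thickness = statistics.stdev(thickness)
transmitted = abs(slope) * sd_air  # variation inherited through the correlation
```

In this sketch the transmitted component dominates the gun's own noise, which is exactly why controlling airflow, rather than fiddling with the nozzles, is the right lever.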
To avoid sags, runs, and other defects, we want to reduce or eliminate variability in paint thickness. So, if we can control airflow, we can reduce the contribution that it makes to paint thickness variability. In our plants, we have achieved control over airflow by installing acoustic anemometers that measure air velocity within the paint booth. To maintain target airflow, a control system determines the required changes to fan speeds and dampers.
This chart shows what this has done for us. The intervention with airflow control has reduced variation in the paint thickness. The standard deviation has shrunk significantly from 4.2 to 1.3. The use of the Shewhart control chart helps us decide the correct strategy for reducing variation in paint film thickness. In this case, the data tells the truth, and I am sure Disraeli would agree.
Now that we have a process with less variation, we can tune it. We can change the spray rate and reset the paint thickness for each colour as required while avoiding quality problems such as incorrect thickness being induced by excessive variation in airflow. And Henry Ford would like this; we can now paint cars in colours other than black, with lower cost and higher quality. And a significant byproduct is a reduction in wasted paint, which helps the environment.
This is a consistent theme as you reduce variability. You don't just improve the quality of the product. You reduce waste and save money. This is literally a win-win-win solution and there's no new technology here. It is simply the application of statistical tools. By now most of the people in the audience must be thinking, "Here we go again, more manufacturing bashing. The way to deal with variability is to get those manufacturing guys to reduce the variability in the manufacturing plant." For my next example, I'd like to talk about how variability is also the curse of product design.
Designs vary in the way they function in the field because of customer usage and operating environment. These sources of variation have to be addressed, and it is the job of the design engineer to come up with innovative ways to do just that. The challenge of engineering is to design products that are as insensitive as possible to these sources of variation. That is, they must be robust. By "robust," I do not mean, "built like a tank." As I remarked earlier on the challenges of the modern business equation, we need to have robust designs that are also cost efficient. Fortunately, there are techniques available to achieve this goal.
For example, a few years ago, as we were developing our Zetec engine family, we wanted to improve the start time of the engine. The customer's level of confidence in a vehicle (an emotional need) is impaired by start times significantly longer than one second and further impaired by significant variability in these start times. The major sources of variation are fuel quality (the customer usage) and ambient temperature (the operating environment). In robustness terminology, these sources of variation are called noise factors. The crucial technical feature in getting an engine to fire is to deliver precisely the right mixture of fuel and air at the tip of the spark plug at the moment the electronic engine controller gives the signal to the spark plug to ignite the mixture.
In this particular situation, the product engineer has a number of design variables (called control factors) with which he or she can experiment to try to counteract the effects of the noise. Once the engineer has decided which of the control factors and noise factors are crucial and need to be studied, she or he can capture all this information in a parameter diagram. Let me just mention a couple of things about this parameter diagram. First, the ideal measure of the function of the fuel presentation system is that, for a given amount of fuel injected into the cylinder, a given amount should present itself at the tip of the spark plug, uniformly mixed with air.
Note that other outputs of the system are labeled as error states - in this case emissions caused by inefficient burning of the mixture during the start cycle, and fuel leaks.
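In software terms, a parameter diagram is just structured bookkeeping. A minimal sketch, populated with the engine-start items mentioned in this talk (only four of the seven control factors are named here, so only those appear):

```python
from dataclasses import dataclass, field

@dataclass
class ParameterDiagram:
    """Minimal p-diagram: signal, ideal function, noises, controls, errors."""
    signal: str
    ideal_function: str
    noise_factors: list = field(default_factory=list)
    control_factors: list = field(default_factory=list)
    error_states: list = field(default_factory=list)

engine_start = ParameterDiagram(
    signal="fuel injected into the cylinder",
    ideal_function="same amount of fuel at the spark-plug tip, uniformly mixed with air",
    noise_factors=["fuel quality", "ambient temperature"],
    control_factors=["injector type", "inlet valve timing",
                     "spark plug position", "injector location"],
    error_states=["start-cycle emissions", "fuel leaks"],
)
```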
I will return to the consequences of the error states in a moment, but first, let's look at the number of control factors listed. In this case, it is seven altogether. These seven factors include injector type (of which there were six varieties on offer) and six other factors that could be investigated, with three different values each. It doesn't take long to work out that there are 6 × 3^6 = 4374 possible combinations of engine design to be evaluated. Now of course we can't evaluate all of these. To make a judicious selection, we need to use a statistical technique - namely a statistical design of experiments. By using this technique the relative performance of any combination in the 4374 varieties of engine design can be predicted. As much as I would love to, time won't allow me to get into too much detail on the theory of experimental design and how it works this evening, but it is a fundamental tool in optimizing engineering designs.
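The arithmetic of that design space is easy to verify, and it shows why a structured fraction is essential. The subset taken below is purely illustrative - the real experiment used a properly constructed orthogonal array, not a simple slice:

```python
from itertools import product

# One injector factor at six levels, plus six factors at three levels each.
injector_types = range(6)
other_factors = [range(3)] * 6

full_factorial = list(product(injector_types, *other_factors))
# 6 * 3**6 = 4374 candidate engine designs - far too many to build and test.

# A designed experiment runs only a structured fraction. As a stand-in for a
# real orthogonal array, take every 243rd run: 18 runs that still exercise
# each injector type three times.
fraction = full_factorial[::243]
```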
As a taster, here is a detail of some of the data collected in this experiment. You'll note that all of the factors were changed simultaneously, and in a structured way.
The highlighted part of this table shows that every combination of levels of any two variables is tested at least once. It is my experience that this way of experimenting is counter-intuitive to many engineers, who wrongly believe that the only way to conduct experiments is to hold everything constant and vary one thing at a time. I believe this stems from our deterministic thinking, which is a byproduct of the way we teach engineers. They study a world governed by the laws of physics, with a mindset firmly in verification mode, driven by a deductive nature. For empirical investigations, where we do not always understand perfectly how the physics works, particularly because of the impact of variation, our mindset needs to be one of discovery through inductive logic. In such instances, the right way to proceed is through statistically designed experiments to make the discovery process as efficient as possible.
It turns out that, after appropriate analysis of the data for this example, the inlet valve timing, position of the spark plug in the cylinder, and the injector location, are shown to have the largest effect on reducing the effect of the noise factors on the fuel-to-air ratio. With the right selection of nominals for these variables we can make the fuel-to-air ratio more robust to temperature and fuel quality, and hence improve start time across the range of these field conditions.
Here's the variability in fuel-to-air ratio with the initial design nominals.
Here the variability is reduced with the addition of improved design nominals. Note that the variability in fuel-to-air ratio is greatly reduced for the improved design and this leads to improved start times. To reiterate, this optimisation has been achieved by an informed choice of the nominals for the design variables, and in this example, we found our optimized design without the need to consider expensive hardware, such as heated fuel injectors.
At this point, it is worth looking at the error states such as emissions during the starting phase. Because we have optimised the fuel delivery to the tip of the plug, we need to inject less to start the engine. This has a dramatic, positive impact on emissions. "Spare fuel" is minimised and emissions are minimised. Consequently, we are kinder to the environment while improving customer satisfaction.
Now, let's look at another engine example that is not empirical. This is an example where we actually use the same technique around deductive theory. And this relates to the work that we do on engine mapping.
We have to work hard to remove variation, but it is very easy to induce it unintentionally, and its effect spreads like a virus. Engines employ electronic engine controllers (EECs) to balance and optimize fuel economy, torque, and emissions, as well as to respond simultaneously to driver inputs for speed, load, and other demands. The techniques for calibrating and mapping an engine controller are largely empirical. To build the empirical models to program EECs, we draw heavily on the statistical techniques of regression and likelihood-based modelling.
In the early days, errors in the programming of the EEC were not too critical. But, as the marketplace dictates more stringent requirements, we are getting smarter in the way we model engine behaviour. One of the things we discovered was that the torque generated by the engine was not quite a symmetrical function of the spark timing (degrees before top dead center) that we chose for ignition. Hitherto, we had been fitting fairly complicated polynomials to the engine space, which, when it came down to it, were actually quadratic in the spark variable. As you know, quadratic functions must be symmetrical about their turning point, so there is a clue that something is wrong.
One look at a typical "spark sweep" (enhanced with more data than we usually collect) and you will see the problem. This graph shows the sweep data with a quadratic curve through the points representing the least square solution.
This graph shows the error that occurs when we do that curve-fitting exercise. The true peak is actually shown by the yellow diamond and the red circle shows the fitted-curve peak. As you can see, we're introducing significant error by our curve-fitting technique. This is a systematic error signal - not random variation - that is represented in the right-hand curve.
Here is a different view. The graph illustrates the residual variation that the quadratic does not capture. It has a magnitude of about ±3 units of torque, which may not sound too bad, but note that there is systematic variation in the residual as a function of spark. To illustrate the synergies between engineering and statistical science, let's see what happens when we make an intelligent choice of the type of curve that we choose to fit.
We know from the physics that the curve is asymmetrical and has only one turning point. We resisted the temptation to fit the curve with a higher-order polynomial. The clever solution is to fit two quadratics that join at a particular point (the knot). This achieves the best fit. If we use this form for the spark sweep, we get the following graphs on the same data as the last slide.
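The "two quadratics joined at a knot" idea is a quadratic spline, and it can be fitted by ordinary least squares using a truncated-power basis: 1, s, s-squared, plus (s - knot) squared applied only beyond the knot. The sweep data below is synthetic - an invented asymmetric torque curve, not the actual Ford data - but it shows the mechanics:

```python
def lstsq(X, y):
    """Least squares via the normal equations and Gauss-Jordan elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Synthetic asymmetric spark sweep: a cubic term skews the torque curve.
spark = [10 + i for i in range(41)]                     # degrees BTDC
torque = [100 - 0.06 * (s - 32) ** 2 - 0.004 * (s - 32) ** 3 for s in spark]

knot = 32
quad = [[1.0, s, s * s] for s in spark]
spline = [[1.0, s, s * s, max(0.0, s - knot) ** 2] for s in spark]

def sse(X, b):
    """Residual sum of squares for a fitted coefficient vector."""
    return sum((t - sum(c * x for c, x in zip(b, row))) ** 2
               for row, t in zip(X, torque))

fit_quad, fit_spline = lstsq(quad, torque), lstsq(spline, torque)
```

On this synthetic sweep, the spline's residual sum of squares falls well below the plain quadratic's, and the asymmetric peak is no longer mislocated.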
Note two things.
One, the improvement in the fit of the curve is dramatic, as evidenced by a range of residual variation of now only ±1 unit of torque.
And two, there is now no systematic pattern in the residual. This graph bears a remarkable resemblance to the paint data graph I discussed a few minutes ago.
There is a connection here. In both manufacturing and engineering design, large variation is a sign of trouble. We need strategies to deal with it in the appropriate manner, so that we are left with as little variation as possible with no deterministic pattern.
My next example examines how studying variability in manufacturing processes can improve product durability. The product is carpet and the subject is wear. As any homeowner can tell you, carpets wear out, and this is true for the carpeting in automobiles. However, premature wear-out is unacceptable to the consumer - and the consumer gets to define what premature means. Last year, we determined that we have an opportunity to improve carpet durability. We formed a team with our supplier, with the goal of improving the long-term durability of our carpets.
To get started, we reviewed returned parts and determined that the primary root cause was a breakdown of the adhesive. I make it sound simple - yet it took a significant amount of work to get to that conclusion.
Having reached it, we decided that we needed a more robust adhesive system, so we used a p-diagram to study the physics of the situation. The p-diagram identified the critical noise factors that the product had to manage in order to satisfy consumer expectations. It also identified the control factors that the engineer could use to tune the product to those expectations.
Let's have a quick look at this generic list of noise factors. Our engineers use this list for robust engineering. It's a useful starting point for thinking about what factors of a particular system fall into these categories.
In this particular case, we tested the rate of carpet weight loss using an abrader machine. Using our knowledge of adhesive technology and our understanding of the failure modes, nine key control factors were identified, all focused on the manufacturing process. These included the "curtain temperature" or bake temperature, the resin type and amount, and the extrusion speed.
We designed the experiment to explore the different possible settings for these parameters in a particularly clever way. By selecting this particular set of nine columns - the columns with arrows at the top - we measured the main effects of eight of the nine factors completely clear of any interactions with other parameters. Most of the other interactions are gathered in pairs in the remaining columns. This slide illustrates the finer points of careful and detailed planning of experiments - a skill that is nowhere near as highly refined as it needs to be for us to be efficient. It is extremely critical as engineers embark on empirical studies, intending to induce facts about engineering systems that they cannot deduce from known theory, while wrestling with increasing test costs, smaller budgets, and shorter product development cycle times.
Most traditional testing methods and strategies do not take advantage of this method. In this particular case study, the traditional testing method was enhanced by the addition of objective performance metrics like the work done on the carpet and the rate of weight loss. The test consisted of producing samples under conditions identified in the DOE and then running the samples through the abrader apparatus I showed earlier to rapidly induce wear.
Let's look at some of the results. Note the random variation around the relationship between each factor and its absolute effect on carpet wear. Yet, we have one outstanding entry - the so-called CD, or carpet draw. This one parameter has a big impact on the transfer function.
This graph, which shows the sensitivity plots, illustrates the point. For all the factors we experimented with, the one that shows the highest sensitivity to weight loss or wearing of the carpet is, in fact, the carpet draw, or CD, factor. Therefore, we isolated the important factor to spend our time and money on, and we've gone off and improved it.
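Estimating a main effect from an orthogonal design is mechanically simple: average the response at the factor's high level and subtract the average at its low level. A toy version with three of the carpet factors at two coded levels - the wear numbers are invented, chosen so that carpet draw dominates, as it did in the real study:

```python
from itertools import product

factors = ["carpet_draw", "curtain_temperature", "resin_amount"]
runs = list(product([-1, 1], repeat=3))   # full 2^3 factorial, coded levels

# Hypothetical wear responses (grams lost per test), one per run, in order.
wear = [9.1, 8.7, 9.4, 9.0, 6.2, 5.8, 6.5, 6.1]

def main_effect(col):
    """Mean response at the high level minus mean response at the low level."""
    hi = [w for run, w in zip(runs, wear) if run[col] == +1]
    lo = [w for run, w in zip(runs, wear) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
# Because the design is orthogonal, each estimate is clear of the others.
```

Ranking the absolute effects isolates the factor worth spending time and money on - in this toy data, as in the real study, it is the carpet draw.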
Here's the final result. This is a Weibull plot. We've moved the durability of the carpet significantly to the right. It's a significant improvement. In terms of learning, our supplier also came out ahead. The supplier told us that before using this approach, they were testing thousands of samples in order to get marginal improvements in variability and product performance. With this new approach, they ran 192 samples, including replicates, and have solidly quantified what the new process can deliver. This supplier has now developed in-house training seminars on statistical experimentation methods for use on all their products.
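A "shift to the right" on a Weibull plot can be quantified by fitting Weibull parameters to the before and after life data. This sketch uses simulated cycles-to-failure (the parameter values are invented; only the 192-sample count comes from the talk) and compares characteristic life and B10 life.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated cycles-to-failure for the old and improved processes
# (made-up Weibull shape/scale values, purely illustrative).
old = stats.weibull_min.rvs(c=2.0, scale=100.0, size=192, random_state=rng)
new = stats.weibull_min.rvs(c=2.0, scale=160.0, size=192, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero) to each data set.
c_old, _, scale_old = stats.weibull_min.fit(old, floc=0)
c_new, _, scale_new = stats.weibull_min.fit(new, floc=0)

# B10 life: the number of cycles by which 10% of samples have failed.
# A rightward shift on the Weibull plot shows up as a larger
# characteristic life (scale) and a larger B10.
b10_old = stats.weibull_min.ppf(0.1, c_old, scale=scale_old)
b10_new = stats.weibull_min.ppf(0.1, c_new, scale=scale_new)
print(scale_new > scale_old, b10_new > b10_old)
```

Reporting the fitted scale and B10 for both processes is one conventional way to "solidly quantify what the new process can deliver" from a few hundred samples.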
Before we move into the next case study, I'd like to ask the following question: Since variation in the manufacturing process can transmit variation to the product function, is it possible for the engineering design to remain insensitive to this error transmission? The treatment of this subject requires extensive knowledge of both the underlying engineering theory - that's our deductive approach - and the theory of transmitted variation through the appropriate response function. Once again, the use of statistical science is needed to enhance the engineering approach. The secret in this next case was to exploit the curvature in the response function.
Let's look at another humble example - the electric window lifter. This example involves torque variation in automotive power window systems. I am pleased to note that this work was conducted in conjunction with one of our suppliers.
This is the equation for torque for this particular motor. The torque it generates is a function of the wire diameter on the windings, the length of the wire on one wrap, the wire conductivity, length of armature, length of magnet, internal diameter of the motor housing, magnet thickness, rotor core diameter, magnet angle, and a magnet material constant. This equation allows us to understand the relationship between torque output of the motor and critical parameters in the design.
What you see here is that the gap between the magnet and the housing is an important parameter in terms of transmitting its variation across to torque. At the current nominal value, variation in that gap produces significant variation in motor output torque.
What did we do? We increased the gap between the magnet and the housing.
You can see here what has happened to the variation in the torque. Just by selecting a different nominal value, we've significantly reduced the variation in the torque of the motor. Unfortunately, the motor no longer delivers the required target torque value. Fortunately, we can use a combination of the design variables in the numerator of the equation to adjust the output and bring the torque back to target.
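The two-step move can be demonstrated numerically on any response that is curved in the noisy variable. The model below is made up for illustration (torque inversely proportional to the gap, with a lumped numerator constant standing in for wire diameter, turns, and so on); it is not the actual motor equation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (made-up) response: torque falls off with the magnet-to-
# housing gap g, so the slope dT/dg = -k/g**2 shrinks as g grows.
def torque(k, g):
    return k / g

k = 1.0
sigma_g = 0.05                      # manufacturing variation in the gap
target = torque(k, 1.0)             # required torque at the original nominal

def sd_torque(k, g_nom, n=100_000):
    g = rng.normal(g_nom, sigma_g, n)
    return torque(k, g).std()

# Step 1: move the nominal gap to a flatter part of the curve.
sd_before = sd_torque(k, 1.0)
sd_after = sd_torque(k, 2.0)        # variation drops, but torque is now low

# Step 2: use a numerator variable (the lumped constant k here) to bring
# the mean back on target without undoing the variance reduction.
k_adj = 2.0 * k
sd_final = sd_torque(k_adj, 2.0)
print(sd_after < sd_before, sd_final < sd_before)
```

The point of the sketch: after retargeting, the torque is back on target while its standard deviation is still well below the original, because the adjustment variable scales the mean without restoring the steep local slope in the gap.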
I hope you noticed that we've done something subtle but rather profound. In designing this system, we first reduced variation. Then we adjusted the system back to its target. I would suggest that this is counter to the current paradigm in engineering.
In this paradigm, with its deterministic thinking, the first priority is to get the output on target by manipulating the physics. Dealing with the resulting variation is a second priority, which in turn is addressed by persuading our manufacturing colleagues to tighten all of the tolerances. This paradigm needs to change.
I'd like to go back to the torque example to illustrate the point. If we're still not happy with the resulting variation shown in the graph - and that's possible - we may indeed have to tighten the production variation in the manufacturing of one or two components. The question is "Which ones?"
We can exploit an idea, first published in the technical literature over 40 years ago, which targets the components for variability reduction based on the gradient of the response function relative to the inherent manufacturing variation in the components.
You can see two different design variables - X1 and X2. Each has a different slope. Variation in each design variable about its chosen nominal value causes very different amounts of variation in the response, or output - the output that matters to the consumer, on the left-hand scale. There is clearly more scope for reducing manufacturing variability in the component on the left, with design variable X1 and its steeper slope, than in the one on the right.
For X1, a given reduction in the variation of the design variable causes a much greater reduction in the variability of the important response or output. For X2, by contrast, the same expensive reduction in the variability of the design parameter gains almost nothing in terms of variation in the output. Choosing the design variable with the steeper slope is clearly the better strategy.
Remember the equation showing the torque output of the motor and how it relates to the design parameters? Here are all those design parameters laid out along the bottom and, clearly, using this graph we can see which ones it will pay to go after to reduce variability. The height of the bars in this graph corresponds to the amount of relative variation transmitted to the motor torque by each component.
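The bar chart described here follows from first-order variance transmission: each component's contribution to the output variance is roughly (dT/dx_i)^2 times that component's manufacturing variance. The model and all nominal values and standard deviations below are invented for illustration; the real motor equation has more terms.

```python
import numpy as np

# First-order variance transmission:
#   sigma_T^2  ~=  sum_i (dT/dx_i)^2 * sigma_i^2
# evaluated at the nominal design point.
def torque(x):
    # Made-up multiplicative model: wire diameter D, wrap length lD,
    # armature length lA, magnet thickness M, magnet-to-housing gap g.
    D, lD, lA, M, g = x
    return D**2 * lD * lA * M / g

nominal = np.array([0.5, 30.0, 40.0, 4.0, 1.0])
sigma   = np.array([0.01, 0.5, 0.2, 0.15, 0.02])   # component std devs

# Numerical gradient of torque at the nominal point (central differences).
eps = 1e-6
grad = np.array([
    (torque(nominal + eps * e) - torque(nominal - eps * e)) / (2 * eps)
    for e in np.eye(5)])

# Bar heights: each component's share of the transmitted torque variance.
contrib = (grad * sigma) ** 2
share = contrib / contrib.sum()
for name, s in zip(["D", "lD", "lA", "M", "g"], share):
    print(f"{name}: {100 * s:.1f}%")
```

Ranking these shares tells you which tolerances it will pay to tighten; components with both a steep local gradient and wide manufacturing spread dominate the bar chart.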
Clearly, we should concentrate on reducing manufacturing variation in the wire diameter (D) on the windings, the length of the wire on one wrap (lD), length of armature (lA), and magnet thickness (M).
This example highlights two important ideas. First, it demonstrates the importance of strong cross-functional teamwork in developing variation reduction strategies, engineering designs and manufacturing processes capable of delivering consistent, superbly functioning products.
Second, it illustrates the effectiveness of using variation reduction strategies at the design stage. I want to close by talking a little bit about the lessons learned from this work, and how we can better prepare our engineers for their craft.
We've gone through five case studies. As systems become more sophisticated, the complexity of problems increases. Just think of the complex set of interacting systems consumers demand in their vehicles today. Relying on the deductive methods alone - in the absence of strength in the inductive methods - is becoming a barrier to achieving the performance our consumers want. Strengthening our inductive skills is a key issue.
I like to think about engineering tasks in three areas. There's the deductive area where we do the physics. Frankly, we're in good shape on that. The second area is how do you make sure you're engineering the right things? How do you make sure you know the consumer and are discovering what the consumer wants? That's an area that we haven't been too good at, but that's changing and it's changing for the positive.
The third area of engineering is what I've talked about today. That is, having done the deductive work right, having connected with our consumers correctly, then how do we execute flawlessly? How do we manufacture high volumes with very little variation in function for our consumers? This is an area where we need to do a lot more work. I don't see this as a competition. I see the two as complementary. I'm not anti-deductive.
This is an interactive thing. It's not one size fits all. It's really using the deductive and inductive methods in concert to engineer and realize better results for our consumers and for our companies. I believe that most of the ideas I've talked about today have been around for years, if not decades; and, yet, for some reason, they haven't managed to gain enough traction to become part of everyday engineering. They are not the normal way we do engineering. Many of these very powerful ideas have not permeated the curricula of engineering universities.
We are spending an enormous amount of time at Ford Motor Company teaching and training our engineers to use these skills. It's horrendously time-consuming. I shudder to think what's happening in smaller companies, because it requires a huge commitment of resources to do this. It requires a commitment on the part of our employees as well as on the part of management. Many of our tier one, tier two, and tier three suppliers may never have the chance to step up to the mark on this. It is incumbent upon us to leverage the pockets of excellence that are available in this area today.
There is an awakening awareness of the need to change the way we train and teach engineers so that robustness becomes a feature of our output. We need to take the initiative. We need to focus on building these skills from the very beginning in the undergraduate engineering programs, graduate schools and all the way through postgraduate studies. We will have an even better chance to build on those skills once the graduate enters the industrial environment.
Lack of change in this area won't dissuade us or deter us from increasing the resources we are spending on retraining our engineers. But it seems to me that the error signal opening up between the real-world needs of engineers practicing in the workplace and the skills they're being equipped with by the engineering professions and the engineering education system is one that I don't think any of us want to see continue to build. I would love to work personally with anybody who's interested in helping to turn it around by becoming more competitive through inductive literacy in these kinds of methods.
What I'm sharing with you is a journey that we followed - that I followed. A journey of discovery where we realized that many of the ways in which we engineer are inadequate for doing the job that we now need to do for our consumers. So that's really the crux of my message today. We are working in an extremely exciting profession.
It's undergoing massive change that is technology and consumer market-driven. These tools represent another exciting opportunity to enhance the value we can add and the satisfaction we can get from our jobs. It's not perhaps quite as glamorous as innovation, but I think it adds more value for our consumers.
Clausing, D. (1994). Total Quality Development: A Step-by-Step Guide to World-Class Concurrent Engineering. ASME Press, New York.
Shewhart, W.A. (1939). Statistical Method from the Viewpoint of Quality Control. Dept. of Agriculture, Washington, D.C. (reprinted by Dover Publications, 1986).
Grove, D.M. & Davis, T.P. (1992). Engineering, Quality, and Experimental Design. Addison-Wesley-Longman, Harlow.
Davis, T.P. & Lawrance, A.J. "Engine Mapping: A Two-Stage Regression Approach Based on Spark Sweeps". In Statistics for Engine Optimization, edited by S. Edwards, H.P. Wynn and D.M. Grove. Professional Engineering Publishing, IMechE, London.
Morrison, S.J. (1957). "Variability in Engineering Design". Applied Statistics, Vol. 6, 133-138.