Six Sigma and Simulation: Part 3

By Jeff Joines (Associate Professor In Textile Engineering at NCSU)

This is the final installment of the three part series on Six Sigma, Lean Sigma, and Simulation. The first part explained the Six Sigma methodologies and linkages to simulation while the second part discussed where simulation could be used directly in the two six sigma processes (DMAIC and DMADV). The final installment will demonstrate how simulation can be used to design Lean Six Sigma Processes.

Recently, the Six Sigma continuous improvement methodology has been combined with the principles of lean manufacturing to yield a methodology named Lean Six Sigma. Recall that Six Sigma is a continuous improvement methodology used to control/reduce process variability, while Lean manufacturing is a management/manufacturing philosophy that deals with the elimination of waste and is derived from the Japanese Toyota Production System (TPS). When people think of Lean, they conjure up Just-in-Time (JIT) manufacturing (i.e., parts or information arrive just when they are needed and not before). The elimination of waste is key in Lean systems, and Toyota defines three types of waste: muda (non-value-added work), muri (overburden), and mura (unevenness). Most people think of the non-value-added form of waste when referring to Lean (e.g., a part sits in queue for ten minutes before being processed for one minute, which represents ten minutes of non-value-added time). Many of the Lean tools deal with eliminating this form of waste (muda). Toyota identified seven original common wastes (paraphrased from “Lean Thinking”) that Lean tries to eliminate.

  • Transportation (moving products that are not actually required to perform the processing)
  • Inventory (all raw materials, work-in-progress and finished products not being currently processed)
  • Motion (people or equipment moving or walking more than is required to perform the processing)
  • Waiting (waiting for the next production step (i.e., queue up))
  • Overproduction (production ahead of demand, causing items to have to be stored, managed, protected, and possibly disposed of)
  • Over processing (unnecessary processing due to poor tool or product design, e.g., an over-engineered product that the customer doesn’t need or pay for, or a 99% defect-free rate when the customer is willing to accept 90%)
  • Defects (the effort involved in inspecting, fixing defects, and/or replacing defective parts)

Lean Six Sigma utilizes the continuous improvement methodology (DMAIC) as a data-driven approach to root cause analysis, continuous improvement, and lean project implementations. Lean encompasses a wide range of Lean tools that are used to implement changes, as seen in Figure 1. Many of the tools still go by their Japanese names (e.g., Poka Yoke, or mistake proofing).

Figure 1: Graphical Representation of 24 Lean Tools and Their Broader Categories (Kelly Goforth’s Master Thesis at NCSU)

As was the case for the Six Sigma methodology, simulation modeling and analysis can be used in many facets of a Lean implementation and can be quite critical in making decisions. Most improvements have to be documented and analyzed, and simulation modeling and analysis can easily be used to ascertain the benefits of the improvements to the current process before actual implementation. The following are just a few cases where I have applied simulation.

Value stream maps are a critical step in becoming lean and should be used first to identify areas of improvement before applying tools randomly. Value stream maps (VSMs) differ from process flow maps in that VSMs contain all the value-added and non-value-added steps/activities, include the information flow along with the material flow to make the product, form a closed circuit from the customer back to the customer, and take into account the customer’s takt time (i.e., the pace at which product must be delivered to match customer demand). In developing a VSM, typically a snapshot of just a few key products is mapped for a particular day. Once the current-state VSM is developed, areas of improvement as well as the lean tools to achieve these improvements are identified; future-state maps are then generated to illustrate the improvement potential. The value stream map can be used to develop a simulation model, and a wide variety of demand streams and SKUs can be experimented with to determine the value-added (VA) and non-value-added (NVA) times, etc.
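
Takt time and the VA/NVA split fall out of a VSM with simple arithmetic, and a model can recompute them under different demand assumptions. A minimal sketch in which the shift length, demand, and step times are all invented for illustration:

```python
def takt_time(available_seconds: float, demand_units: float) -> float:
    """Takt time = available working time / customer demand."""
    return available_seconds / demand_units

# Assumed: one 8-hour shift minus two 15-minute breaks, demand of 450 units/day.
available = (8 * 60 - 30) * 60        # 27,000 seconds of working time
takt = takt_time(available, 450)      # 60 seconds per unit

# Value-added vs. non-value-added times (seconds) from a hypothetical VSM.
va_times = [35, 42, 28]               # actual processing at each step
nva_times = [600, 1200, 300]          # queueing, transport, waiting

va_ratio = sum(va_times) / (sum(va_times) + sum(nva_times))
print(f"takt = {takt:.0f} s/unit, VA ratio = {va_ratio:.1%}")  # → takt = 60 s/unit, VA ratio = 4.8%
```

A process simulation built from the VSM would replace the fixed NVA times with queueing behavior driven by the experimental demand streams.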

In the early 1900s, Ford utilized fixed-flow assembly lines (i.e., one production line made up of all the machines needed to produce one car in a sequential line) to maximize throughput. However, when the number of products and part categories increased while lot sizes decreased, manufacturing moved to functional layouts (i.e., job shops) where machines were grouped based on function (e.g., drilling machines). Parts would then flow to all the groups necessary to be produced, which introduced great flexibility but also increased travel time, waiting, WIP, defects owing to machine setup, etc. The lean concept of cellular manufacturing decomposes the manufacturing system into groups of dissimilar machines that can process a set of part families, which ideally decreases transportation and setups and balances load. These cells are a mix of smaller job shops and flow assembly lines combined. Determining these part families and groups of machines is quite complicated. Simulation can be used to establish a baseline for comparison of the proposed new systems. The new systems can be simulated with varying demand variation and maintenance issues to test the design of the cellular groups before the machines are moved or set up in the new manufacturing system.

When people think of Lean they associate it with JIT, and simulation has been applied the most in this area. Pull scheduling systems differ from push systems (i.e., a forecast of a set of parts is sent to the first process, and parts are then pushed through the system until completion) in that parts are not produced until they are needed. Kanbans (signals) are sent back to the previous process to replenish parts only when they have been used by the current process. Pull systems ideally have lower WIP and faster throughput but typically only work for stable demand streams. For example, we worked with a large company building a new plant with fairly large lead-times. Parts of the organization had been very successful in implementing pull scheduling systems to fill their stock inventories. The company had put demand leveling in place as a way to deal with wide customer demand variations. They initially asked us to evaluate where they should place supermarkets (i.e., places to store inventory (kanbans)), what the size of the respective kanbans for each SKU should be, etc. After building several simulation models utilizing their historical demand streams, we determined that the total volume being placed on the plant was like a tsunami that would engulf the supermarkets, essentially turning the plant into a push system anyway (i.e., everything sent to the first process (raw materials) and then processed to the end). We demonstrated through simulation that the supermarkets would have to be impractically large to be effective. The simulation model told them this before an enormous amount of money and time was spent developing the process and information system to handle it, so they could focus on other Lean areas.
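
The effect described above can be illustrated with a toy single-stage kanban model (all parameters invented, not the plant model from the project): when the same total demand arrives in surges larger than the supermarket can cover, the pull loop breaks down.

```python
def simulate(kanbans: int, lead_time: int, demands: list[int]) -> int:
    """Count demand units a kanban-controlled supermarket fails to cover."""
    on_hand = kanbans                 # supermarket starts full
    pipeline: list[int] = []          # arrival times of in-flight replenishments
    stockouts = 0
    for t, d in enumerate(demands):
        # Receive replenishment orders that have completed their lead time.
        on_hand += sum(1 for a in pipeline if a <= t)
        pipeline = [a for a in pipeline if a > t]
        for _ in range(d):
            if on_hand > 0:
                on_hand -= 1
                pipeline.append(t + lead_time)  # withdrawal sends a kanban upstream
            else:
                stockouts += 1                  # demand the pull loop cannot cover
    return stockouts

level = [2] * 200                     # leveled demand: 2 units every period
bursty = ([0] * 7 + [16]) * 25        # same total demand, arriving in surges

print(simulate(kanbans=10, lead_time=4, demands=level))   # → 0
print(simulate(kanbans=10, lead_time=4, demands=bursty))  # → 150
```

With leveled demand the ten kanbans absorb the lead time completely; with surges of 16 against a supermarket of 10, six units per surge go uncovered, which is the "tsunami" behavior in miniature.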

Most people are familiar with the last form of waste (mura) and its elimination through heijunka (production leveling). Production leveling/load balancing works in conjunction with pull systems, and these systems can again be simulated to see their impact as well as to determine where supermarkets (e.g., inventory buffers) need to be placed to achieve balance. Total Productive Maintenance (TPM) is another area where lean practitioners can benefit from simulation modeling, to ascertain the effect of different policies and schedules on the system.

For more information on Lean manufacturing and the lean philosophy, I recommend two books by James Womack et al.: “The Machine That Changed the World” and his latest book, “Lean Thinking.”


This three part series has hopefully shown how simulation practitioners possess a skill set that is extremely beneficial for Six Sigma, Design for Six Sigma, and/or Lean Six Sigma projects. These types of projects are not unique; they are just general simulation models that require us to learn the particular language. I find it easier to work on Six Sigma projects because the Lean and Six Sigma practitioners understand the statistical analysis necessary for input and output analysis, even though they typically have only used the normal distribution.

Six Sigma and Simulation: Part 2

By Jeff Joines (Associate Professor In Textile Engineering at NCSU)

This is the second of the three part series on Six Sigma, Lean Sigma, and Simulation. The first part explained the Six Sigma methodologies. Recall the goal of the DMAIC continuous improvement methodology is to control/reduce process variability of a current process or product while the Design for Six Sigma process DMADV is used to design a new process or product with minimal variability before creation. Simulation modeling can be employed in almost every phase of either methodology.


Six Sigma practitioners typically have to estimate the cost savings for each project, either to justify the project or to have it certified. However, most of these cost forecasts are made on point estimates of key parameters (i.e., raw material cost, customer/product demand, cost of capital, currency rates, etc.). By employing Monte Carlo simulation, variability and/or ranges on these point estimates can be incorporated to provide a more reliable estimate. Along these lines, when several projects have been proposed, simulations can be utilized to help management perform project selection based on resource constraints and objectives.
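
As a sketch of this idea, the snippet below replaces three point estimates with distributions and reports a range rather than a single number; every figure and distribution choice is illustrative, not taken from any real project:

```python
import random

random.seed(42)

def one_trial() -> float:
    """One sampled scenario of annual savings (all parameters assumed)."""
    material_cost = random.triangular(9.0, 12.0, 10.0)  # $/unit: low, high, mode
    demand = random.gauss(50_000, 5_000)                # units per year
    savings_rate = random.uniform(0.03, 0.07)           # fraction of spend saved
    return material_cost * demand * savings_rate

trials = sorted(one_trial() for _ in range(100_000))
mean = sum(trials) / len(trials)
p5, p95 = trials[5_000], trials[95_000]                 # 5th and 95th percentiles
print(f"mean savings ≈ ${mean:,.0f}, 90% interval ≈ ${p5:,.0f}–${p95:,.0f}")
```

The interval, not the mean alone, is what makes the forecast useful for comparing and selecting among competing projects.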

Analyze and Improve

During the Analyze and Improve phases, Design of Experiments (full, fractional, mixed, etc.) is the most common tool utilized; it provides a baseline to illustrate improvement when changes are made as well as identifying factors of interest to control or change. The normal baseline measure is the process capability index (Cpk), which is an indication of the ability of a process to produce consistent results – the ratio between the permissible spread and the actual spread of a process. The Cpk index takes into account off-centeredness and is defined as the minimum of (USL − Mean)/3σ and (Mean − LSL)/3σ, where USL and LSL are the upper and lower specification limits. A six sigma process is normally distributed with a Cpk value greater than 1.5.
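
The Cpk definition translates directly to code. A minimal sketch using simulated measurements and made-up specification limits:

```python
from statistics import NormalDist, mean, stdev

def cpk(samples: list[float], lsl: float, usl: float) -> float:
    """Cpk = min((USL - mean) / 3*sigma, (mean - LSL) / 3*sigma)."""
    mu, sigma = mean(samples), stdev(samples)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Simulated measurements from an assumed on-target process (mean 10, sigma 0.05).
data = NormalDist(mu=10.0, sigma=0.05).samples(1000, seed=7)

# Limits 9.7–10.3 sit 6 sigma from the mean, so the estimate should be near 2.0.
print(round(cpk(data, lsl=9.7, usl=10.3), 2))
```

A centered process with limits at ±6σ has a true Cpk of 2.0; the sample estimate wobbles slightly because σ is estimated from the data.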

Using the real system is better in terms of capturing all complexities, interactions, etc. However, as simulation practitioners, we recognize when that might not be possible or viable. The following lists examples where simulation modeling, in terms of Monte Carlo or process simulation, can be used.

  • If the product or process does not exist, as is the case in a Design for Six Sigma project, simulation models can be used to ascertain the capability of a new process or product before implementation. For example, the tolerance stack-up of individual parts or processes can be determined. Take parts or processes which are within tolerance individually (e.g., a bearing and a shaft) but whose assembly might not be capable owing to the tolerance stack-up problem, which occurs in manufacturing, service, and transactional processes.
  • The cost of performing a DOE with replications is too high (e.g., raw material cost, cost of shutting down current process). We have worked with companies in developing process and Monte Carlo simulation models that could be used to determine their capabilities and ascertain the potential improvement in their changes.
  • The time required to run the set of experiments makes it impractical to determine the baseline or ascertain the improvements of a process. While working with a large company and its six sigma process improvement team on a complex global supply chain, one of their projects was to reduce inventories of a series of products with a ten- to twelve-week lead time. The team had to evaluate six inventory policies, identify which one of three suppliers was best, etc. A DOE with sufficient replications would have taken years to complete, which would have made the project useless without the simulation model. Also, most of the data driving the model was based on lead-times, which are not normally distributed.
  • Think of systems where there are multiple processes that feed one another (e.g., departments, plants, etc.) which contain only five or six factors each. Transfer functions can be generated from a traditional DOE on each individual process but not the entire system. A simulation model can be used to combine each individual transfer function into determining the capability of the whole system as well as testing a wider range of values.
  • There are several environments where performing a DOE is impractical or impossible. For example, we have trained dozens of people associated with hospital systems from around the country in Six Sigma. Simulation modeling and analysis allows these practitioners to ascertain process capability with a model, because the real system cannot be used since patient care is at stake. Other environments where we have used simulation modeling instead of the real system are transactional processes, such as those in the banking or insurance industries.
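
The tolerance stack-up case in the first bullet lends itself to a quick Monte Carlo. In the sketch below the dimensions, tolerances, and clearance spec are invented: each part is individually capable (limits at 3σ), yet the assembly fallout is far worse than either part alone.

```python
import random

random.seed(0)
N = 100_000

fails = 0
for _ in range(N):
    shaft = random.gauss(25.000, 0.008)    # mm; spec 25.000 ± 0.024 (limits at 3 sigma)
    bore = random.gauss(25.030, 0.008)     # mm; spec 25.030 ± 0.024
    clearance = bore - shaft
    if not (0.010 <= clearance <= 0.050):  # assembly needs 10–50 µm of clearance
        fails += 1

print(f"estimated assembly fallout: {fails / N:.2%}")
```

Each part alone is out of spec only about 0.27% of the time, but the clearance combines both variances (σ ≈ 0.0113 mm), so roughly 8% of assemblies fall outside the clearance window, which is the stack-up problem in numbers.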


Simulation can also be used as a process control aid as the process is being implemented to determine potential problems.

Hopefully it is apparent that simulation experts already possess skills that can greatly help Six Sigma projects. These types of projects are not unique; they are just general simulation models we already know how to build. They only require us to learn the Six Sigma language as well as how to calculate Cpk statistics. I find it easier to work with Six Sigma people because of their statistical training for understanding input and output analysis, even though they typically have only used the normal distribution. In Six Sigma and Simulation: Part 3, the use of simulation in the Lean Sigma world will be addressed.

Six Sigma and Simulation: Part 1

By Jeff Joines (Associate Professor In Textile Engineering at NCSU)

This is a three part series on Six Sigma, Lean Sigma, and Simulation. The first blog will explain the Six Sigma methodology and the bridge to simulation analysis and modeling while the second and third parts will describe the uses of simulation in each of the Six Sigma phases and Lean Sigma (i.e., Lean Manufacturing) respectively.

“Systems rarely perform exactly as predicted” was the opening line of the blog Predicting Process Variability and is the driving force behind most improvement projects. As stated there, variability is inherent in all processes, whether these processes are concerned with manufacturing a product within a plant, producing product via an entire supply chain complex, or providing a service in the retail, banking, entertainment, or hospital environment. If one could predict or eliminate the variability of a process or product, then there would be no waste (or muda in the Lean world, which will be discussed in the third part) associated with a process, no overtime to finish an order, no lost sales owing to having the wrong inventory or lengthy lead-times, no deaths owing to errors in health care, etc., which ultimately leads to reduced costs. For any organization (manufacturing or service), reducing costs, lead-times, etc. is or should be a priority in order to compete in the global market. Reducing, controlling, and/or eliminating the variability in a process is key to minimizing costs.

Six Sigma is a business philosophy focusing on continuous improvement to reduce and eliminate variability. In a service or manufacturing environment, a Six Sigma (6σ) process would be virtually defect free (i.e., only allowing 3.4 defects out of a million operations of a process). However, most companies operate at four sigma, which allows roughly 6,200 defects per million. Six Sigma began in the 1980s when Motorola set out to reduce the number of defects in its own products. Motorola identified ways to cut waste, improve quality, reduce production time and costs, and focus on how the products were designed and made. Six Sigma grew from this proactive initiative of using exact measurements to anticipate problem areas. In 1988, Motorola was selected as the first large manufacturing company to win the Malcolm Baldrige National Quality Award. As a result, Motorola’s methodologies were launched, and soon their suppliers were encouraged to adopt the 6σ practices. Today, companies who use the Six Sigma methodology achieve significant cost reductions.
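
These defect counts come from the tail of a normal distribution with the conventional 1.5-sigma long-term shift; a short sketch reproduces both figures:

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities, one-sided, with a long-term mean shift."""
    z = sigma_level - shift                  # distance from shifted mean to the limit
    return NormalDist().cdf(-z) * 1_000_000  # tail area beyond the limit, per million

print(round(dpmo(6.0), 1))  # → 3.4
print(round(dpmo(4.0)))     # → 6210
```

Without the 1.5-sigma shift, a true 6σ process would allow only about 0.001 defects per million; the 3.4 figure is the industry convention for long-term performance.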

Six Sigma evolved from other quality initiatives, such as ISO, Total Quality Management (TQM), and Baldrige, to become a quality standardization process based on hard data and not hunches or gut feelings, hence the mathematical term, Six Sigma. Six Sigma utilizes a host of traditional statistical tools but encompasses them within a process improvement framework. These tools include affinity diagrams, cause-and-effect diagrams, failure modes and effects analysis (FMEA), Poka Yoke (mistake proofing), survey analysis (voice of the customer), design of experiments (DOE), capability analysis, measurement system analysis, statistical process control charts and plans, etc.

There are two basic Six Sigma processes (i.e., DMAIC and DMADV); both utilize data-intensive solution approaches and eliminate the use of gut feel or intuition in making decisions and improvements. The Six Sigma method based on the DMAIC process, which is utilized when the product or process already exists but is not meeting specifications or performing adequately, is described as follows.

    Define, identify, prioritize, and select the right projects. Once selected, define the project goals and deliverables.
    Measure the key product characteristics and process parameters to create a base line.
    Analyze and identify the key process determinants or root causes of the variability.
    Improve and optimize performance by eliminating defects.
    Control the current gains and future process performances.

If the process or product does not exist and needs to be developed, the Design for Six Sigma (DFSS) process (DMADV) has to be employed. Processes or products designed with the DMADV process typically reach market sooner, have less rework, cost less, etc. Even though DMADV is similar to the DMAIC method and starts with the same three step names, the two are quite different, as defined below.

    Define, identify, prioritize, and select the right projects. Once selected, define the project goals and deliverables.
    Measure and determine customer needs and specifications through voice of the customer.
    Analyze and identify the process options necessary to meet the customer needs.
    Design a detailed process or product to meet the customer needs.
    Verify the design performance and ability to meet the customer needs, where the customer may be internal or external to the organization.

Both processes use continuous improvement, looping from a later stage back to an earlier one. For example, if during the Analyze phase you determine a key input is not being measured, new metrics have to be defined; likewise, new projects can be defined once the Control phase is reached.

Now that we have defined six sigma, you may be wondering what the bridge to computer simulation and modeling is. Simulation modeling and analysis is just another tool in the Six Sigma toolbox. Many of the statistical tools (e.g., DOE) try to describe the dependent variables (Y’s) in terms of the independent variables (X’s) in order to improve them. Also, most of the statistical tools are parametric methods (i.e., they rely on the data being normally distributed or utilize our friend the central limit theorem to make the data appear normally distributed). Many of the traditional tools might therefore produce sub-optimal results or cannot be used at all. For example, if one is designing a new process or product, the system does not exist, so determining current capability or future performance cannot be done. The complexity and uncertainty of certain processes cannot be determined or analyzed using traditional methods. Simulation modeling and analysis makes none of these assumptions and can yield a more realistic range of results, especially where the independent variables (X’s) can be described as a distribution of values. In Six Sigma and Simulation: Part 2, a more detailed look at how simulation is used in the two six sigma processes (DMAIC and DMADV) will be discussed.