2-H-2 Develop measurements for continuous quality improvement and target setting

1) Quality assurance

Quality assurance is a management function that includes establishing specifications that can be met by suppliers; utilizing suppliers that have the capability to provide adequate quality within those specifications; applying control processes that ensure high quality products and services; and developing the means for measuring the product, service and cost performance of suppliers and comparing it with requirements (ISM Glossary, 2006).

A) Definition of quality - According to the ISM Glossary, quality has been defined in a number of ways, including: synonymous with "innate excellence"; a precise and measurable variable that is inherently present in the characteristics of the product or service; defined by the user, meaning that products and services must have clusters of attributes that groups of people (users) want "right the first time"; conformance and efficiency; design and measured conformance with no waste, meaning lower costs; performance at an acceptable price; conformance at an acceptable cost; or conformance to specifications, satisfying or surpassing customer needs throughout the life of the product or service.

There are several other definitions and dimensions of quality. The most widely used dimensions are discussed in the following paragraphs.

David Garvin of the Harvard Business School compiled one of the most respected collections of quality dimensions. Garvin found that most definitions of quality were transcendent or intuitively understood; product based or found in the components and attributes of a product; user based or customer satisfaction; manufacturing based or meets design specifications; and value based or perceived as providing good value for the price.

Using these five definitions of quality, Garvin developed a list of eight quality dimensions. These dimensions describe product quality specifically.

• Performance refers to the completion of one's contractual obligations.

• Features are distinctive and desirable characteristics of an item or service.

• Reliability is 1) the probability that a product will perform as specified (under normal conditions) without failure for a specified period of time; 2) a requirement in a specification that is part of the design criteria defining dependability.

• Conformance is likely the most traditional definition of quality. When a product is designed, certain numeric dimensions for the product will be established. These numeric product dimensions are referred to as specifications. Specifications typically are allowed to vary a small amount. This range of variation is called a tolerance.

• Durability is how a product tolerates stress or trauma without failing. An example of a product that is not very durable is a light bulb.

• Serviceability is the ease of repair for a product. A product is very serviceable if it can be repaired easily and cheaply.

• Aesthetics are the subjective sensory characteristics such as taste, feel, sound, look and smell.

• Perceived quality refers to the belief the customer has relative to quality of products.

Service quality is even more difficult to define than product quality. While services and production share many attributes, services have more diverse quality attributes than products. This often results from wide variation created by high customer involvement.

Parasuraman, Zeithaml and Berry, three marketing professors from Texas A&M University, published a widely recognized set of service quality dimensions. These dimensions have been used in many service organizations to measure quality performance. The set of dimensions includes: tangibles, service reliability, responsiveness, assurance and empathy.

• Tangibles include the physical appearance of the service facility, the equipment, the personnel and the communication materials.

• Service reliability differs from product reliability in that it relates to the ability of the service provider to perform the promised service dependably and accurately.

• Responsiveness is a key performance indicator representing a supplier's level of performance against the supply management professional's definition of responsiveness.

• Assurance refers to the knowledge and courtesy of employees and their ability to inspire trust and confidence.

• Finally, consumers of services desire empathy from the service provider. In other words, the customer desires caring, individualized attention from the service organization.

Just as there are many quality dimensions relating to production, there are several other dimensions of service quality, such as availability, professionalism, timeliness, completeness and pleasantness. It should be noted that service design strives to address these different service dimensions simultaneously. It is not sufficient for a service organization to provide only empathy if responsiveness and service reliability are inadequate.

Why does it matter that different definitions of quality exist? By sharing a common definition of quality, each department within an organization can work toward a common goal. In addition, understanding the multiple dimensions of quality desired by consumers can lead to improved product and service design.

B) Acceptance testing -Acceptance testing is defined in the ISM Glossary as test procedures that lead to formal acceptance of a new or changed product, process or system. For example, the overall condition of a given lot may be determined by inspecting only a portion or sample of the lot. For a software system, a user acceptance test plan is agreed to by the buyer and seller, carried out and then results are compared to pre-established severity thresholds to determine corrective action.

There are times when the receiving organization must inspect incoming materials from its suppliers. At these times, acceptance sampling is the technique that is used. When materials are received, the organization can utilize a range of alternatives, from 100 percent inspection to inspecting a relative few and drawing inferences about the whole shipment.

Is acceptance sampling needed? Acceptance sampling has been controversial. Some disagree with acceptance sampling because they are fundamentally opposed to the notion of an acceptable level of defects that is greater than zero. Also, many feel that the notion of the acceptable quality level (AQL) is counter to Deming's concepts of continual improvement. However, there is still a need for acceptance sampling in many different circumstances. Following are times when acceptance sampling might be needed:

• When dealing with new or unproven suppliers

• During start-ups and when building new products

• When product can be damaged in shipment

• With extremely sensitive products

• When product can spoil during shipment (such as agricultural seed)

• When problems with a certain supplier that have been noticed in the production process bring the supplier's performance into question

Acceptance sampling is a statistical quality control technique used in deciding to accept or reject a shipment of input or output. When compared with statistical quality control, acceptance sampling is defined by its occurrence after production has been completed. This can be either at the beginning of the process when receiving components, parts or raw materials from a supplier or at the end of production as in the case of final inspection. The focus will be on inspection of incoming materials.

Producer's and consumer's risk - Producer's risk is the risk associated with rejecting a lot of materials that actually has good quality. For example, a producer of a high-quality product has a customer that has concluded that the product has poor quality and returns the product; in this case, the producer has been judged inaccurately. Consumer's risk is the exact opposite: a shipment of poor-quality product has been received but judged to be of good quality. The consumer therefore pays for the product, uses it in the production process and suffers the consequences. Producer's risk is denoted by alpha (α) and consumer's risk is denoted by beta (β). The goal of acceptance sampling is to reduce producer's risk to low levels while maintaining consumer's risk at acceptable levels. The acceptable quality level (AQL) represents the process limit of a measured attribute averaged from a series of satisfactory lots. AQL is typically used for sample inspection (ISM Glossary, 2006). This concept of the acceptable quality level has been troublesome to many who consider it an acceptance of less-than-perfect quality. To statisticians it is simply an economic decision that is associated with producer's risk.

Lot tolerance percent defective (LTPD) is the level of poor quality that is included in a lot of goods. The differences between AQL and LTPD are sometimes confusing. Lots at the AQL or better should have an alpha (for example, 5 percent) or less chance of rejection; this is related to Type I error. Lots at the LTPD or worse should have a beta (say, 10 percent) or less chance of acceptance; this relates to Type II error. There is theoretically only one combination of sample size (n) and acceptance number (c) that meets both conditions simultaneously. In practice, both conditions usually cannot be met precisely, and the supply management professional must choose a combination of n and c that approximates both conditions. The selection of the sample size n and the acceptance number c is referred to as the sampling plan.

For the most part, the assignment of AQL, LTPD, alpha (α) and beta (β) is a management decision. Once these values are determined, values for n and c can be determined. The bottom line in acceptance sampling is that sampling plans are designed to give two things, n and c, where:

n = the sample size of a particular sampling plan, and

c = the acceptance number, or the maximum number of defective pieces allowed in the sample for the lot to be accepted.

A typical sampling plan can be stated in simple terms; that is, n = 20 and c = 5. This clearly communicates the bounds of the sampling plan: take a sample of 20 items and, if more than five are defective, reject the lot of materials. It is important to remember that the supply management professional should always randomize when selecting product from a supplier to be inspected.
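The mechanics behind evaluating an n and c combination can be illustrated with the binomial distribution. The sketch below uses the n = 20, c = 5 plan mentioned above; the AQL and LTPD values and the function name prob_accept are assumptions for illustration only, and real plans are usually selected from published tables (such as ANSI/ASQ Z1.4) or by searching for the n and c that best approximate the target alpha and beta.

```python
# A minimal sketch of evaluating a single sampling plan (n, c).
# The plan (n=20, c=5) comes from the text; the AQL and LTPD values below
# are illustrative assumptions, not figures from the source.
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot with true defect rate p:
    P(number of defectives in the sample <= c), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 20, 5            # sampling plan from the text
aql, ltpd = 0.02, 0.10  # assumed acceptable and rejectable quality levels

producers_risk = 1 - prob_accept(n, c, aql)   # alpha: rejecting a good lot
consumers_risk = prob_accept(n, c, ltpd)      # beta: accepting a bad lot

print(f"alpha (producer's risk) at AQL  = {producers_risk:.4f}")
print(f"beta  (consumer's risk) at LTPD = {consumers_risk:.4f}")
```

Plotting prob_accept over a range of defect rates produces the operating characteristic (OC) curve for the plan, which is how the trade-off between producer's and consumer's risk is usually visualized.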

Types of samples -The discussion has focused on sampling plans for single samples. This is not the limit of sampling plans. More complex sampling plans are referred to as multiple sampling plans or sequential sampling plans. With these sampling plans, the acceptance sampling rules might occur as follows:

n1 = sample size for sample #1

n2 = sample size for sample #2

nn = sample size for sample #n

c1 = acceptance number for sample #1

c2 = acceptance number for sample #2

cn = acceptance number for sample #n

r1 = rejection number for sample #1

r2 = rejection number for sample #2

rn = rejection number for sample #n

Multiple sampling plans have advantages over single sampling plans. A multiple sampling plan has a smaller average sample size than a single sampling plan offering the same amount of protection, because the decision can usually be made on the first phase of the sample. This results in smaller samples on average.
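To show how these acceptance and rejection numbers interact, the following sketch encodes the decision logic of a two-stage (double) sampling plan. The specific values of n1, c1, r1, n2 and c2, and the function name, are hypothetical illustrations, not values from the text.

```python
# Sketch of double (two-stage) sampling logic. All plan parameters are
# hypothetical illustrations, not values from the source.
from typing import Optional

N1, C1, R1 = 32, 1, 4   # first sample: accept at <= 1 defective, reject at >= 4
N2, C2 = 50, 5          # second sample size and combined acceptance number

def double_sample_decision(defects1: int, defects2: Optional[int] = None) -> str:
    if defects1 <= C1:
        return "accept on first sample"
    if defects1 >= R1:
        return "reject on first sample"
    # Result is inconclusive: a second sample of N2 items must be drawn.
    if defects2 is None:
        return "take second sample"
    total = defects1 + defects2
    return "accept" if total <= C2 else "reject"

print(double_sample_decision(1))      # accept on first sample
print(double_sample_decision(5))      # reject on first sample
print(double_sample_decision(2))      # inconclusive: take second sample
print(double_sample_decision(2, 2))   # accept (total of 4 <= 5)
```

Because many lots are resolved on the first, smaller sample, the average number of items inspected is lower than for a comparable single sampling plan.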

Acceptance sampling in continuous production - The single and double sampling plans described above are called lot-by-lot sampling plans, because each lot of materials received is sampled separately. Sometimes it is not feasible to collect products into lots because they are produced in a continuous manner. In these cases, acceptance sampling procedures for continuous production are used. These procedures typically involve alternating between 100 percent inspection and sampling inspection.
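One common way to operationalize that alternation is a clearance-number scheme in the spirit of Dodge's CSP-1 plan: inspect every unit until i consecutive good units are found, then inspect only a fraction f of units, and return to 100 percent inspection whenever a defect is found. The sketch below is illustrative only; the values of i and f and the function name are assumptions, not parameters from the text.

```python
import random

def continuous_inspection(units, i=10, f=0.1):
    """CSP-1-style continuous sampling sketch (illustrative parameters):
    100 percent inspection until i consecutive good units are seen, then
    inspect a fraction f of units; any defect found returns the procedure
    to 100 percent inspection. `units` is an iterable of booleans
    (True = defective)."""
    mode = "100%"
    consecutive_good = 0
    inspected = defects_found = 0
    for defective in units:
        if mode == "100%" or random.random() < f:
            inspected += 1
            if defective:
                defects_found += 1
                mode, consecutive_good = "100%", 0
            else:
                consecutive_good += 1
                if mode == "100%" and consecutive_good >= i:
                    mode = "sampling"
    return inspected, defects_found

stream = [random.random() < 0.02 for _ in range(1000)]  # simulated 2% defect rate
print(continuous_inspection(stream))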

C) Certification requirements - See Task 2-H-1 for information on certification requirements.

D) Quality documentation - Typically, documentation requirements are specified by either the customer or the registrar, and they may vary. Many of the requirements were outlined in Task 2-H-1. In essence, this simply requires developing standard operating procedures according to the customer's or registrar's requirements.

E) "Best-in-class" benchmarks -A benchmark is a standard or point of reference used in measuring or judging an organization's performance according to selected criteria (ISM Glossary, 2006).A benchmark is an organization that is recognized for its exemplary operational performance. A benchmark is not an average, it is the best.

Since benchmarks are outstanding organizations, benchmarking means to document performance and compare that performance to that of the best organizations. To facilitate the discussion, the terms "initiator organization" and "target organization" will be used. The initiator organization is the organization that initiates contact and studies another organization. The target organization is the organization that is being studied (also called a benchmarking partner). These are not static roles as the target organization often enters into a reciprocal agreement to observe the initiator organization. Besides providing inputs to improvement, benchmarking is useful for externally validating an organization's approach to its business. Several types of benchmarking are found in the quality literature and are defined below. Note that they are not all mutually exclusive.

• Process benchmarking is a performance comparison of business processes against an internal or external standard of recognized leaders. Most often the comparison is made against a similar process in another organization considered to be "best-in-class." This can involve studying process flows, operating systems, process technologies and the operations of target organizations or departments.

• Financial benchmarking usually involves using financial databases, whether CD-ROM based or Internet based. As more organizations place annual reports on the Internet, the Internet is expected to become an increasingly important tool for benchmarking financial performance.

• Performance benchmarking allows initiator organizations to assess competitive position by comparing products and services with target organizations. Performance issues may include cost structures, various types of productivity performance, speed of concept to market, quality measures and other performance evaluations.

• Product benchmarking is performed by many organizations when designing new products or upgrades to current products. This type of benchmarking often includes reverse engineering or dismantling competitors' products to understand the strengths and weaknesses of their designs.

• Strategic benchmarking involves the practice of observing how others compete. This is rarely industry specific as organizations go outside their own industries and learn lessons from organizations around the world. This typically involves target organizations that have been identified as "world-class" or "high-performance" such as Baldrige, Shingo or Deming prize winners.

• Functional benchmarking is another type of benchmarking. An example of functional benchmarking occurs in supply management: the Institute for Supply Management™ (ISM) provides a framework for the networking of supply management professionals, allowing for the functional sharing of information.

There are several primary purposes for benchmarking. These different purposes also imply differing levels of involvement in the benchmarking activities. Time consumption and costs may vary according to the purpose. The purposes of benchmarking range from just learning to becoming best-in-breed to achieving world-class leadership.

2) Quality management

A) Definition - Quality management is the function of planning, organizing, controlling and improving the quality of products and processes (ISM Glossary, 2006). This involves leadership and prioritization of quality improvement activities in the organization.

B) Meeting customer needs - One of the important determinants of quality is how well an organization satisfies or delights its customers. Often customers are defined as internal or external customers. Internal customers are those within the organization receiving internal goods or services. In a sense, an economic transaction takes place in internal services in that service providers are funded as a result of the services they provide to the organization as a whole. Other authors have used an abstraction of the term "internal customer" to include the person at the next step in a process. Therefore, the person who works at workstation number 3 can be considered the customer of the worker at workstation number 2.

External customers are the bill paying receivers of the work. The external customers are the ultimate people whom the organization is trying to satisfy with its products or services. If the organization has satisfied external customers, it will continue to prosper, grow and fulfill the objectives of the organization.

Another term that describes customers is "end user." An end user is someone who is at the end of the chain of events that results in the production of a product or service. Software developers that are programming software solutions for customers often use the term "end user." Service organizations have many titles for customers, including patient, registrant, stockholder, buyer, patron and many others. For both service and product producers, the customer is the focus of activities.

Often customer satisfaction surveys and focus groups are used to determine levels of customer satisfaction. Once these measures are in place, a baseline should be created and tracked over time to determine if improvement efforts have resulted in heightened levels of customer satisfaction.

C) Quality tools - In this section, the seven basic tools of quality, the seven managerial tools and various other tools will be defined. Kaoru Ishikawa, the inventor of the seven basic tools, was known for "democratizing statistics." What does this mean? Statistical concepts are difficult for many people to understand. For the average person, a means was needed to obtain the power of inferential statistics without the in-depth knowledge required to use parametric statistics correctly. Statisticians have long understood the importance of visual tools for communicating and understanding statistical concepts. Ishikawa adapted and invented these simple tools, known as the seven basic tools of quality (B7), so that the average person could analyze and interpret data. These tools have been used in thousands of organizations and by all levels of managers and employees with worldwide success. While an in-depth discussion of these tools can be found elsewhere, the following provides a simple definition:

• Histograms -The first tool of quality is a histogram. It is a diagram of values being measured versus the frequency with which each occurs. When a process is running normally (only common causes are present), the histogram is depicted by a bell-shaped curve (ISM Glossary, 2006).

• Pareto charts - Discussed in several areas of the Certified Professional in Supply Management (CPSM) exam specification, Pareto charts are graphs showing the frequency with which events occur, arranged in order of descending frequency. They are used to rank order the issues so that resources can be applied first to those with the largest potential return (ISM Glossary, 2006). These are actually histograms that are aided by the 80/20 rule, adapted by Joseph Juran from Vilfredo Pareto, the Italian economist. The 80/20 rule states that 80 percent of the problems are created by 20 percent of the causes, meaning that a vital few causes create most of the problems. This rule can be applied in many ways; the 80 percent and the 20 percent are only estimates. The good news is that by focusing on the vital few, failures can be better controlled, satisfaction of the most important customers can be increased, or 80 percent of the complaints can be eliminated. A small counting sketch of this idea appears after this list.

• Cause and effect (Ishikawa) diagrams - Often workers spend too much time focusing improvement efforts on the symptoms of problems rather than the causes. The Ishikawa cause and effect diagram is a good tool to help move to lower levels of abstraction in solving problems. The diagram looks like the skeleton of a fish, with the problem being the head of the fish, major causes being the ribs of the fish and subcauses forming smaller bones off the ribs. The facilitator moves to root causes by systematically asking brainstorming participants "Why?" This is sometimes referred to as the five whys.

• Check sheets - Check sheets are data-gathering tools that can be used in forming a histogram. Check sheets can be either tabular or schematic.

• Scatter diagrams - A scatter diagram is a graph used to analyze the relationship between two variables. One variable is plotted on the x-axis and the other on the y-axis. The graph will show possible relationships between them. Regression analysis and other statistical techniques can be used to quantify those relationships.

• Flowcharts or process maps -A flowchart is a diagram of the steps of a process. Each step is identified in sequence along with its key characteristics, such as time involved.

• Control charts - These graphs or diagrams are used in statistical process control (SPC) to record, measure and analyze variations in processes to determine whether or not outside influences are causing a process to go out of control. The objective is to identify and correct such influences to keep the process in control.
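As a concrete illustration of the Pareto chart referenced above, the following sketch tallies complaint categories, sorts them in descending order and reports the cumulative percentage. The categories and counts are invented for illustration and are not data from the text.

```python
# A minimal Pareto-analysis sketch. The complaint categories and counts are
# invented illustrations, not data from the text.
from collections import Counter

complaints = Counter({
    "late delivery": 120, "wrong item": 45, "damaged packaging": 30,
    "billing error": 20, "missing documentation": 10, "other": 5,
})

total = sum(complaints.values())
cumulative = 0
print(f"{'cause':<22}{'count':>7}{'cum %':>8}")
for cause, count in complaints.most_common():   # descending frequency
    cumulative += count
    print(f"{cause:<22}{count:>7}{100 * cumulative / total:>7.1f}%")
# The first few causes account for most of the total -- these are the
# "vital few" the Pareto chart is meant to expose.
```

Charting these sorted counts as bars with a cumulative-percentage line gives the familiar Pareto chart used to decide where to apply improvement resources first.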

The seven managerial tools for improvement - In addition to the seven basic tools of quality, there is another set of tools that focuses more on group processes and decision-making. These tools are known as the New Tools for Management. They have their roots in Japanese practice and date back to before World War II.

The New 7 Tools (N7) were developed by a research effort enacted by a committee of the Japanese Society for QC Technique Development.

• The affinity diagram -When solving a problem, it is often useful to first surface all of the issues associated with the problem. A tool to do this is the affinity diagram, which is a total quality management (TQM) tool in which the members of a team each sort data in silence, looking for associations. After sorting, the team identifies the associations found, the critical links and the issues that emerged.

• The interrelationship digraph -After completing the affinity diagram, it might be useful to better understand the causal relationships between the different issues surfaced. The tool for doing this is the interrelationship digraph.

• Tree diagrams -With the affinity diagram, teams identify key issues relating to a problem. The tree diagram is useful to identify the steps needed to address the given problem.

• Matrix diagram - The matrix diagram is similar in concept to quality function deployment in its use of symbols. As with the other N7 tools, matrix diagrams show the relationship between two, three or four groups of information. It also can give information about the relationship, such as its strength, the roles played by various individuals, or measurements (ISM Glossary, 2006).

• Prioritization matrices - These are similar to the analytical hierarchy process developed by Thomas Saaty. When different priorities are obtained from the tree and matrix diagrams, weighted matrices are used to prioritize which variables or topics should be emphasized. A simple weighted-scoring sketch appears after this list.

• Process decision program chart - A process decision program chart is a tool that is used to help brainstorm possible contingencies or problems associated with the implementation of some program or improvement.

• Activity network diagram - The activity network diagram is also known as a PERT diagram and is used in controlling projects. This is known as an activity-on-node (AON) chart.
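To make the prioritization matrix idea referenced above concrete, the following sketch scores improvement options against weighted criteria. The criteria, weights, options and ratings are invented for illustration; a full analytical hierarchy process would derive the weights from structured pairwise comparisons rather than assigning them directly.

```python
# A hedged sketch of a simple weighted prioritization matrix. The criteria,
# weights and scores are hypothetical illustrations, not data from the text.
criteria_weights = {"cost impact": 0.40, "ease of implementation": 0.25,
                    "customer impact": 0.35}

options = {
    "reduce incoming inspection": {"cost impact": 3, "ease of implementation": 4, "customer impact": 2},
    "supplier certification":     {"cost impact": 4, "ease of implementation": 2, "customer impact": 5},
    "new packaging spec":         {"cost impact": 2, "ease of implementation": 5, "customer impact": 3},
}

# Weighted score = sum of (criterion weight x rating) for each option.
scores = {name: sum(criteria_weights[c] * rating for c, rating in ratings.items())
          for name, ratings in options.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<30}{score:.2f}")
```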

The N7 tools are useful for managing longer projects that involve teams. With the B7 and N7 tools, the supply management professional has a reasonably good set of skills that will help in managing many projects. It is important to note that these tools have been successfully used within many different settings, cultures and education levels. The power of these tools is that with the tools and the PDCA (plan-do-check-act) cycle, organizations have a simple, easy-to-understand methodology for solving unstructured problems. The tools are especially useful when used in teams. Many of these tools are also fun to use, which is often seen as a plus. By using these tools effectively, unproductive meeting time is reduced to a minimum and good, fact-based decisions are made.

The 5S's are often useful to organizations beginning to implement quality improvement or just-in-time (JIT). The primary focus of the 5S's is to create a culture of waste reduction and minimization as well as efficiency. The first S stands for the Japanese word Seiri, which stands for separating or organizing and throwing away items not used. Seiton, the second S, stands for sorting and neatness in the workplace. Seiso is similar to preventive maintenance except that it applies to cleaning and making items shine. Seiketsu stands for standardization, meaning that standards are set for the workplace and the organization should operate in a consistent and standardized fashion. This might include standard paint colors for electrical lines or air hoses, standard labels for temperature gauges, or how customer service representatives answer the telephone. Also, procedures are standardized, which provides a great basis for ISO 9000 registration. Finally, Shitsuke is sustaining the discipline required to maintain the changes that have been made. The reason that the 5S's are useful in helping to start quality initiatives is that they help develop the discipline needed to improve quality. Such discipline requires cultural change that can occur as a result of implementing the 5S's.

D) Statistical tools - Statistical process control (SPC) is a technique utilizing the application of statistical control charts in measuring and analyzing the variation in processing operations. The methodology monitors the process to determine whether outside influences are causing the process to go out of control. The objective is to identify and correct such influences before defective products are produced, and thus keep the process in control (ISM Glossary, 2006).

If the process is tightly controlled, then its output (products) will be within allowable tolerance. SPC's primary activities are directed toward:

• Defining the process (using tools such as data collection, histograms, run charts and process capability)

• Reducing variation (using tools such as cause-and-effect diagrams, Pareto charts, brainstorming and team-based problem solving)
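A common starting point for the control charts and process capability work mentioned above is computing x-bar and R chart limits from subgroup data. The sketch below uses invented measurements; the constants A2, D3 and D4 are the standard published control-chart constants for subgroups of size 5.

```python
# A minimal sketch of x-bar and R control-chart limits. The measurement data
# are invented; A2, D3 and D4 are the standard constants for subgroup size 5.
subgroups = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [10.1, 10.4, 9.8, 10.0, 10.2],
    [9.9, 10.0, 10.1, 10.2, 9.8],
    [10.3, 10.1, 10.0, 9.9, 10.2],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # constants for subgroup size n = 5

xbars = [sum(g) / len(g) for g in subgroups]     # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]    # subgroup ranges
xbar_bar = sum(xbars) / len(xbars)               # grand average (center line)
r_bar = sum(ranges) / len(ranges)                # average range (center line)

print(f"X-bar chart: CL={xbar_bar:.3f}, "
      f"UCL={xbar_bar + A2 * r_bar:.3f}, LCL={xbar_bar - A2 * r_bar:.3f}")
print(f"R chart:     CL={r_bar:.3f}, UCL={D4 * r_bar:.3f}, LCL={D3 * r_bar:.3f}")
# Points outside these limits (or non-random patterns within them) signal
# special-cause variation that should be investigated.
```

Plotting each new subgroup's average and range against these limits is what turns the calculation into the control charts described earlier in this section.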
