This article serves as an introduction to a series about what is known, and what is unknown, about automotive quality. It is intended for a wide range of readers: those in the industry as well as end customers of every level of expertise, from “gear head” to novice. I am confident most readers will find at least pieces of this series useful, and that many of the concepts and content apply outside of automotive products as well.

I have worked in the automotive industry for almost 20 years, all of that time in quality control. I am not an engineer, and many consider that an advantage. It has been my experience that engineers often see their product as either “meets design intent” or not. Having worked in engineering groups for so long, my view sits somewhere between that of engineering and that of a typical customer. I tend to think along the lines of customer expectation instead of design intent, when the basis for design intent should be customer expectation.

For just a moment, jumping to the very end of the series, I contend that there is a better way to measure automotive quality, using data analytics to help. This is what we do at Enprecis Group. Our focus is always on helping Original Equipment Manufacturers (OEMs) collect the right data, turning customer feedback and language into actionable insight. This series is not just for manufacturers and suppliers, however, so please continue even, or especially, if you are not a current customer of Enprecis Group.

Now, with that introduction, let’s begin to explore: what exactly is quality?

 

“Who builds the best vehicles?”

I smile when I hear this question, and feel like a philosopher when the reply in my head is “well, that depends upon how you define and measure it.” Some in the industry believe this is simply a count of warranty claims. Some will defer to syndicated studies like J. D. Power’s Initial Quality Study or the Consumer Reports New Buyers’ Ratings. In short, there are many ways, and each of them has merit and flaws. There are many sources of data, and this is where data analytics add value: by matching results from each source against the others in order to give credible, well-rounded insights. I’m sure you are wondering, what flaws? While I have no intent to take shots at anyone, I think it is important to offer insight on some common flaws that are best to be aware of, or to look for, when reviewing the various available quality sources:

  • Seasonality, or survey/interview periods – Seasonality will limit some problems and emphasize others, which results in a less than complete picture of quality. For example, if the data gathering period is limited to winter months, there will naturally be fewer complaints about air conditioning (cooling) and more about heating and defrosting, more about squeaks and rattles due to the harshness of roads and the impact of cold temperatures, more about difficulty using buttons and knobs given the potential impact of wearing gloves, and so on. The time period of data acquisition needs to be accounted for when reviewing the data gathered.
  • How the potential list of problems is provided to the consumer to comment on – If provided a list or menu of potential flaws, a customer is more likely to check more boxes (often referred to as aided recall), regardless of severity or [loss of] satisfaction with the concern. It is very possible that an issue would not have been “mentioned” by a consumer had they not been provided a menu of choices. This is not to say that the issue is not real, but rather that it may not be severe enough to warrant mentioning. Understanding the difference between an aided mention and an unaided mention can speak to severity in the mind of the consumer.
  • Counting method – How do we sum up problems? Are all problems equal? Let’s look at an example: is a loose speaker attachment plus speaker static one issue or two? This depends on the source’s ability or willingness to distinguish between the two. If the counting method treats this as a single issue under “speaker trouble”, that comes at the cost of detail from which an engineer or the supplier of the speakers or audio system would benefit (see the first sketch after this list). Counting problems serves multiple masters, depending on your view of the problem and your responsibility for the issue. And just as with how we count problems, the severity of an issue also depends on the relationship our organization has to the problem.
  • Feature inclusion – Knowing what is being evaluated by the consumer is critical to our understanding of the issue data being collected. How can a source compare navigation issues from a vehicle without built-in navigation to vehicles with the feature? How about features like parking assist, automatic emergency braking, or automatic distance control? If I have an AM/FM radio, is it really fair to compare it to a full-featured infotainment system with a 9” touchscreen? Vehicle composition and the influx of new technologies and features create a varied landscape, not only between competing products but also within a vehicle and its various trim levels and optional content. The survey instrument needs a way to capture a complete picture of the vehicle’s content in order to understand the quality issues, including the impact that a lack of content has on the consumer’s opinion.
  • Warranty claims – While a great source of quality information, warranty data is complementary to the overall quality experience, since quality issues are not always warranty issues. Warranty can miss the severity and/or degree of an issue if the vehicle is never brought in to be “fixed”, or if the issue is not serviceable; the latter is often associated with problems the consumer describes as “difficult to use”.
  • Sample sizes – Nearly all sources are plagued by this flaw. Sample size and integrity dictate how much we can trust the analysis. No sample is completely accurate, so good research and data collection is based on attaining representative, robust information in which we can feel confident. Clarity on confidence levels and confidence bands is a needed input for anyone to correctly judge results (see the second sketch after this list).
  • Language and terminology – Terminology used by the engineering community may not be understood by a customer, and vice versa. Understanding how comfortable your target respondent is with the terminology being used in a survey will allow you to create a dialog with, and receive useful feedback from, these important evaluators. Speak the language of the audience you need feedback from, not that of the team you are doing the research for; this will afford you valuable insights from the consumers you engage with. Equally important is the ability to translate these consumer insights into a “language” that speaks to the engineers who need to address the issues.
  • Subjectivity – This is another consideration when attempting to understand consumer feedback. How much brake dust is “excessive”? How long is too long to cool the cabin from 120°F? What counts as “excessive wind noise” when a customer hears wind around the vehicle at 75 MPH? These are just a few examples of the subjective nature of consumer research. Engineers and analysts prefer specific and measurable targets for performance. It is important that we dive deep enough into these conversations to access this detail, remembering that our intent is always to let consumer expectations drive our design intent.
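
To make the counting-method point concrete, below is a minimal sketch in Python. The verbatims, category names, and coding rule are hypothetical, but it shows how the same handful of consumer comments rolls up to different totals depending on the granularity of the coding scheme.

```python
# Minimal sketch (hypothetical verbatims and category names): the same three
# consumer comments produce different problem counts depending on how
# granular the coding scheme is.
from collections import Counter

verbatims = [
    {"vin": "A", "text": "static from the front speakers"},
    {"vin": "A", "text": "driver-side speaker grille is loose"},
    {"vin": "B", "text": "static from the rear speakers"},
]

# Coarse scheme: everything audio-related becomes "speaker trouble".
coarse = Counter("speaker trouble" for v in verbatims)

# Granular scheme: distinguish the electrical symptom from the attachment issue.
def granular_code(text):
    return "speaker static" if "static" in text else "loose speaker attachment"

granular = Counter(granular_code(v["text"]) for v in verbatims)

print(coarse)    # Counter({'speaker trouble': 3})
print(granular)  # Counter({'speaker static': 2, 'loose speaker attachment': 1})
```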
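
And on sample sizes, here is a small sketch using illustrative numbers rather than real survey data. It computes a simple normal-approximation confidence interval for the share of respondents who mention a problem, and shows why a small sample deserves a wide band around its reported rate.

```python
# Minimal sketch (illustrative numbers): a 95% confidence interval for the
# share of respondents reporting a given problem, using the normal
# approximation. Small samples produce wide bands, which is why confidence
# levels need to be reported alongside any ranking.
import math

def proportion_ci(mentions, sample_size, z=1.96):
    p = mentions / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p - half_width, p + half_width

# 40 of 500 respondents mention the problem vs. 8 of 100 respondents.
print(proportion_ci(40, 500))  # roughly (0.056, 0.104)
print(proportion_ci(8, 100))   # roughly (0.027, 0.133), a much wider band
```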

On top of source flaws, there are also different ways that the public defines quality. Some believe that only defects count, others include how well components work relative to their subjective standards, and still others focus on the feel, touch, and look of materials (craftsmanship). Perhaps a door panel has nothing functionally wrong with it, but “feels cheap”. Is this a minor issue or a major one? It likely depends on the audience voicing the concern and their expectations. Furthermore, is it worse to have five minor issues, or one critical, chronic transmission issue?

There aren’t common answers to the situations or questions posed here. All of them are situational and completely dependent on strategic intent: how we define the design intent, which goes back to the consumer’s expectations. So, circling back to “Who builds the best vehicles?” The answer is that almost every brand or automaker (yes, almost every single one) builds and sells vehicles with very low rates of defects. Thanks to Dr. W. Edwards Deming, guru of statistical process control, decades of certification programs and quality management systems, and automotive suppliers sourced by multiple OEMs, there is little difference by customer standards in terms of defects. Almost every brand or automaker has models with very strong quality performance, and also models that are competitively considered poor performers. Further, since so many OEMs use the same suppliers, who not only manufacture components but also design and engineer parts with similar or imperceptible differences in OEM specifications, the field of competition is further homogenized.

Asking customers who the quality leaders are may yield some typical and repetitive answers, but I would argue much of that originates from reputation or experience, often influenced by the experiences of family members or friends. While certain brands are more often viewed as “among the quality leaders”, the reality is that nearly every brand has some leading products, and nearly every brand builds or sells at least one substandard product.

So while I am not naming brands, this isn’t me trying to evade giving an answer. On the contrary, this entire article is meant to give you “the answer”: all measures of quality are entirely dependent upon a person’s definition of quality. I could give you very different answers for which brands tend to score best in syndicated surveys for concerns/issues/problems, but the answers would be different if the definition were about craftsmanship, and different again if quality included the idea of satisfaction. Therein lies my next topic, or rather a continuation of this discussion of what quality is. To give you a peek into the next installment: when quality dismisses the thought of satisfaction, manufacturers may miss opportunities to win customer advocacy and loyalty. For example, staying with infotainment systems, if you are a customer who enjoys many features and connectivity, you will likely be much happier with a full-featured system than with a basic one, even though some of the features and menus may be difficult to use. Tech-savvy people will likely learn to use all of those features, and the difficulty will decrease, but people looking for basic functionality, even in a higher trim level vehicle, will likely mark those difficulties as problems. Quality does not exist in isolation from the rest of the customer’s expectations and experiences.

As we continue in this series, I will write in more detail about methodologies for measuring quality, offer suggestions about the importance of meeting customer expectations and how it often conflicts with traditional ranking methods, and share some best practices for resource prioritization.

 


ABOUT THE AUTHOR

Dave Girolamo

Director, Data Analytics

Dave is an automotive quality professional with over 19 years of experience integrating technology and statistics to create predictive models that support innovative design and lean manufacturing practices. His expertise has helped automobile manufacturers achieve multiple high-profile quality awards and streamline processes for optimal efficiency. His many years working directly with top automakers ensure that Enprecis Group’s technological innovations are developed with the day-to-day activities of quality and design engineers in mind.

For more information, please visit www.enprecisgroup.com/leadership

CONTACT US
Enprecis Group Communication Team
905-565-5777
social@enprecis.com