How much data is required for RA?

What is the minimum number of failure events needed for Reliability Analysis (RA)? If a company does not have good historical data, must we wait for those events to occur before we can build predictive models? A related question: what do we do with all the PM or overhaul data, where some parts are preventively replaced?

The form of RA of most practical interest is an analysis model that includes relevant condition based maintenance (CBM) data.[1] The answer then depends on how well the monitored (CBM) data reflects internal damage or external stress on the asset. If the intrinsic relationship between the condition data and the developing failure mode is strong, a relatively small amount of data (at least four life-cycles ending in functional or potential failure) is required. If the relationship is weak, more historical data is needed to achieve a good model. The RA software tells you how “good” your model is and to what degree it is acceptable for predictive use, that is, how much confidence you can place in decisions made using the model. The software measures this by indicating confidence bounds on the Remaining Useful Life Estimate (RULE) and by providing a standard deviation indicating the amount of scatter around the RULE. The modeling process guides you in cleaning and augmenting your data management methods so that confidence in CBM decision making grows continuously. More information on this subject is given in Confidence in Predictive Maintenance.
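To make the RULE and its scatter concrete, here is a minimal numerical sketch. It assumes a simple age-based Weibull model with hypothetical parameters and omits the condition (CBM) covariates that a full RA model would include; the point is only to show how a mean remaining life and a standard deviation around it arise from a fitted model.

```python
import math

# Hypothetical fitted Weibull parameters (illustration only, not from real data).
beta, eta = 3.0, 1500.0

def survival(t):
    """Weibull survival function S(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def remaining_life_stats(age, dt=1.0, horizon=20000.0):
    """Mean and standard deviation of remaining useful life for an asset
    that has survived to `age`, by numerically integrating the conditional
    survival function S(age + x) / S(age)."""
    s_age = survival(age)
    mean = 0.0      # E[X]   = integral of S_c(x) dx
    second = 0.0    # E[X^2] = integral of 2 x S_c(x) dx
    x = 0.0
    while x < horizon:
        p = survival(age + x) / s_age
        mean += p * dt
        second += 2 * x * p * dt
        x += dt
    var = second - mean ** 2
    return mean, math.sqrt(max(var, 0.0))

mean_rul, sd_rul = remaining_life_stats(1000.0)
print(f"RULE = {mean_rul:.0f} h, std dev = {sd_rul:.0f} h")
```

A narrow standard deviation relative to the RULE means the model is informative enough to schedule interventions with confidence; a wide one says the data (or the model) does not yet discriminate well.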

It is preferable to build a sample for RA and predictive modeling from past failure mode life cycles ending in potential failure rather than life cycles ending in functional failure, because functional failures usually have significant consequences. A potential failure, on the other hand, is an imminent failure, confirmed by the technician’s observations at the time of repair, and it carries relatively minor consequences.[2]

If a maintenance organization does not have good historical data (the case in most companies), it means that it does not have a good reliability information management process in place. Without such a process[3], systematic improvement in maintenance and reliability is impossible. With such a process in place, the RCM knowledge base grows with each new significant work order, and work orders will contain analyzable data. The LRCM process requires linking each work order, at the time of closure, to one or more RCM knowledge records based on what was found at the time of intervention. The work order should also indicate whether the failure mode failed (a functional or potential failure) or was preventively renewed without having failed (a suspension). Using this approach, good samples for analysis can be obtained automatically.[4] Reliability Analysis software such as EXAKT can be applied easily to these data samples.
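The payoff of recording each life cycle as either a failure or a suspension is that standard life-data methods can use both. As an illustration (a generic maximum-likelihood Weibull fit with right-censoring, not the EXAKT method itself, and with made-up durations), suspensions enter the likelihood through the survival function rather than the density:

```python
import math
from scipy.optimize import minimize

# Hypothetical life-cycle sample (hours). Each tuple is (duration, ended_in_failure):
# True = functional or potential failure; False = preventive renewal (a suspension).
cycles = [(1200, True), (1500, True), (900, False),
          (1800, True), (1100, False), (1600, True)]

def neg_log_likelihood(params):
    """Negative Weibull log-likelihood, treating suspensions as right-censored."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return float("inf")
    ll = 0.0
    for t, failed in cycles:
        z = (t / eta) ** beta
        if failed:
            # Exact failure time contributes log f(t).
            ll += math.log(beta / eta) + (beta - 1) * math.log(t / eta) - z
        else:
            # Suspension contributes log S(t) = -(t/eta)^beta.
            ll += -z
    return -ll

result = minimize(neg_log_likelihood, x0=[1.5, 1300.0], method="Nelder-Mead")
beta_hat, eta_hat = result.x
print(f"shape beta = {beta_hat:.2f}, scale eta = {eta_hat:.0f} h")
```

Discarding the suspended cycles, or mislabeling them as failures, would bias the fitted life distribution, which is why the work-order discipline described above matters.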

A PM work order (see the article What is PM) should provide two types of data (see the article What’s the right data).

© 2011 – 2014, Murray Wiseman. All rights reserved.

  1. Assuming that this condition monitoring or sensor data truly reflects the health state of a part or assembly.
  2. Training technicians to discriminate potential failures from suspensions when reporting their observations upon executing a work order is one of the most significant actions a maintenance manager can take if he wishes to build and use model-based decision techniques.
  3. Called Living RCM (LRCM).
  4. Conversely, if a clear one-to-one relationship does not exist between each failure mode in the RCM knowledge base and a combination of codes or catalog values selected when closing a work order, Reliability Analysis will be defeated.