Can system reliability be predicted from average component reliabilities?

Abstract The paper reveals that predicting system reliability on demand from the average reliabilities on demand of components is a fundamentally flawed approach. A physical interpretation of algebraic inequalities demonstrates that assuming average component reliabilities on demand entails an overestimation of the system reliability on demand for systems with components logically arranged in series and in series-parallel, and an underestimation of the reliability on demand for systems with components logically arranged in parallel. The key reason for these discrepancies is the variability of components of the same type. Techniques for countering variability by promoting asymmetric response through inversion have also been introduced. The paper demonstrates that variability during assembly operations can negatively affect the reliability of mechanical systems. Accordingly, techniques for reducing the variability of stresses during assembly operations are discussed. Finally, the paper discusses the reasons for the relatively slow adoption of domain-independent methods for improving reliability despite their numerous advantages.

An important contributing factor for the apparently insufficient attention to domain-independent approaches for enhancing reliability was the excessive emphasis on reliability prediction methodologies, specifically those associated with estimating system reliability using average component reliabilities. To achieve this, average failure rates of components were sourced from various reliability databases (e.g. MIL-STD-1629A, 1977). System reliability for very complex systems was calculated on the basis of the average failure (hazard) rates of the components building the systems. The shortcomings of this approach in generating accurate system reliability predictions led to growing disillusionment among researchers and practitioners. As a result, some authors (Knowles, 1993) questioned the validity of failure rate-based reliability predictions. Furthermore, the reliability on demand of a vast range of systems was based on the average reliability on demand of the components building the systems. Consequently, Section 2 of this article explores some of the reasons behind the failure of methods reliant on average component reliabilities on demand to accurately predict system reliability on demand.
A compilation and analysis of common mistakes in design that led to catastrophic failures has been presented in Petroski (1994). Domain-independent methods for improving reliability in design have been presented in Todinov (2019). Despite the clear advantages provided by the domain-independent methods for improving reliability, these methods have not been widely used to inform the design process. French (1999), for example, formulated several general principles for conceptual design. However, these principles did not focus on enhancing reliability or reducing technical risk. Pahl et al. (2007) also discussed general principles for engineering design. Yet, many of these principles either do not emphasise improving reliability and minimising risk or are overly specific (e.g. the principle of thermal design) and lack broad applicability. Collins (2003) explored engineering design from a failure prevention perspective, but no risk-reducing methods or principles with universal applicability were formulated. Thompson (1999) highlighted the necessity of effectively integrating maintainability and reliability considerations into the design process and emphasised the significance of failure mode and effects analysis (FMEA) in design. While FMEA, which is widely used in industry, is valuable for understanding how a component's malfunction can lead to system failure, it does not provide much guidance on domain-independent principles for designing for reliability.
Another problem is that the current approach to reliability improvement and risk reduction is almost entirely reliant on domain-specific knowledge and is conducted solely by experts in those domains.
By using domain-independent methods, rapid mental mapping can be achieved for challenging problems, thereby bolstering intuition. This often leads to surprising breakthroughs and swift outcomes. Take, for instance, the domain-independent principle of inversion (Todinov, 2019). Understanding this principle often leads to innovative approaches to improving reliability involving reversing position, motion direction, properties, features, states, or thought processes. A failure mode that appears in a specific position, orientation, motion, state, or property often vanishes when the position, orientation, motion or state is shifted to the opposite one while maintaining the system's essential functions.
The domain-independent method of algebraic inequalities (Todinov, 2023), for example, can be used for improving the reliability of any series-parallel system by asymmetric permutation of interchangeable redundancies, even when the reliabilities of the individual components are unknown.
The effectiveness of the domain-independent principles in improving reliability lies in the fact that solutions to reliability issues in one domain can be applied to other domains by using the same principle. For example, the problem of premature failure of one of several power transistors working in parallel can often be solved by the domain-independent principle "increasing the level of balancing", through a more uniform distribution of the electrical load across the transistors. The same principle can be used to achieve a uniform load distribution along the thread of bolted joints (Coria et al., 2020) and to eliminate premature failure of a shaft-hub connection based on a single key. The latter issue can be addressed by replacing the key with splines, which also increases the level of balancing and distributes the load more uniformly. The same principle can also be used to eliminate damage to the top of a pile driven into the ground by introducing an intermediate component that distributes the load more evenly (Orloff, 2006).
Despite the clear advantages of the domain-independent principles for improving reliability, their adoption has been relatively slow. This can be attributed to a number of factors.
The first contributing factor is the lack of awareness and education regarding these principles. Reliability engineers, as well as other professionals involved in reliability improvement, are still unaware of the potential benefits of using the domain-independent methods. There is a lack of educational resources and training programs available to help individuals and organisations learn about these methods and develop the necessary skills to implement them.
Traditional reliability engineering education programs have not yet incorporated these methods into their curricula, leading to a gap in knowledge and skills among practicing engineers.
Another contributing factor is that reliability improvement has traditionally relied upon methods such as active, standby, and k-out-of-n redundancy, the physics-of-failure approach, as well as strengthening the design by incorporating various types of reinforcement, selecting stronger or corrosion-resistant alloys, and condition monitoring. These techniques have been established and refined over many years and are familiar to reliability engineers and other professionals in the field. However, while useful in a number of cases, these techniques are associated with high implementation costs. These well-known methods created resistance to change, particularly for those who have become accustomed to relying on a small set of well-known, albeit costly, techniques. In contrast, many domain-independent methods, such as the method of inversion, are not normally associated with significant implementation costs.
A strong reason for the slow adoption of domain-independent methods for reliability improvement is the belief held by many engineers that their specialised knowledge and expertise in their field is sufficient to solve all reliability issues associated with their designs. These engineers often view domain-independent methods as less tailored to the specific reliability issues within their narrow domain and, for that reason, as less effective. As a result, they remain attached to their established methods, despite the potential benefits that domain-independent methods offer. This leads to mental inertia caused by conventional wisdom, tradition and entrenched beliefs. Comprehensive knowledge of a specific domain often hinders innovation. It makes domain experts resist novel ideas in their domains and limits the possibility of taking advantage of novel approaches to reliability improvement, which could positively impact the reliability and safety of their designs.
Another reason for the slow adoption of domain-independent techniques is the prominence of the physics-of-failure approach (Pecht et al., 1990). This approach, which emphasises the development of models based on underlying failure mechanisms, has been embraced by many reliability practitioners as the only reliable way to achieve improved reliability. However, while this approach has undoubtedly led to improvements in reliability on numerous occasions, it is not always practical or feasible to rely exclusively on physics-of-failure models.
The physical mechanisms underlying failure modes can be extremely complex and difficult to understand, leading to a great deal of uncertainty. Additionally, in some cases, failures are the result of multiple contributing factors, making it difficult to identify the root causes. For instance, corrosion fatigue involves two complex, interdependent and synergistic failure mechanisms, making it particularly challenging to understand.
Next, revealing the root causes of failure usually requires extensive research, which is costly and time-consuming. Thus, continuing the previous example, the complex mechanism of corrosion fatigue (Pao, 1996) cannot be captured and modelled effectively if only limited research is done on corrosion, fatigue and their interaction.
Root cause analysis is usually based on collecting data, and data collection is always associated with cost limitations. Acquiring the necessary reliable data capturing and quantifying different types of uncertainty is a difficult task which requires significant investment. Most importantly, physics-of-failure models, even when highly successful, cannot transcend the narrow domain they serve and cannot normally be used to improve reliability in another, unrelated domain.
In contrast, domain-independent principles, such as the principle of reducing the variability of reliability-critical parameters, take a more comprehensive and holistic approach to reliability improvement, considering common factors that impact performance.
In this regard, this paper explores in detail the impact of the variability of reliability-critical parameters on predictions related to system reliability on demand. This is done through the domain-independent method of algebraic inequalities: a physical interpretation of the classical arithmetic mean-geometric mean (AM-GM) inequality (Steele, 2004) and of a new algebraic inequality based on concave functions.
This paper also explores some domain-independent techniques for counteracting the variability of reliability-critical parameters.

Impact of variability on the product of quantities of the same type
The negative impact of variability on the product of quantities of the same type X can be demonstrated by a physical interpretation of the arithmetic mean-geometric mean algebraic inequality.
Consider the well-known arithmetic mean-geometric mean (AM-GM) inequality (Steele, 2004):

(x1 + x2 + ... + xn)/n ≥ (x1 x2 ... xn)^(1/n)    (1)

where x1, ..., xn are n positive real values representing various measurements of a quantity of the same type X.
Inequality (1) has a useful physical interpretation if presented in the equivalent form (2):

[(x1 + x2 + ... + xn)/n]^n ≥ x1 x2 ... xn    (2)
The right-hand side of inequality (2) then can be physically interpreted as the value of the product of n quantities of the same type X.
The expression (x1 + x2 + ... + xn)/n in the left-hand side of inequality (2) can be interpreted as the average quantity x̄ of type X. It is simply obtained by taking the average of the measurements characterising the quantity X. Inequality (2) can then also be rewritten as

x̄^n ≥ x1 x2 ... xn    (3)

The left-hand side of inequality (3) can be physically interpreted as the value of the product where each measurement has the same, average value x̄. Inequality (3) effectively states that the predicted magnitude of the product x̄^n, based on an average estimate of the quantity of type X, is higher than the actual value of the product from n separate measurements.
The larger the deviations of the quantity X from the average value x̄, the stronger the inequality (3). Inequality (3) transforms into an equality in the case of no variation of the quantity X, that is, for the perfectly balanced case x1 = x2 = ... = xn = x̄.
Here is an example. It is a well-established property that the overall gain (amplification factor) of multiple voltage amplifiers of the same type connected in series, with gains x1, x2, ..., xn, is given by the product x1 x2 ... xn of the gains of the individual amplifiers. Thus, the gain of a cascade of n voltage amplifiers of the same type can be calculated by using the right-hand side of inequality (3), where xi is the measured gain of the ith amplifier. The gain of the cascade can also be estimated from the left-hand side of inequality (3), based on the average gain x̄ of the amplifiers. This estimate, however, deviates significantly from the real value given by the right-hand side of inequality (3). Due to inherent variability, the measured gains of amplifiers of the same type will exhibit differences, and using the average gain in calculations will result in a considerable deviation of the estimated amplification factor from the true amplification factor.
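The amplifier example can be illustrated with a short numerical sketch. The gain values below are hypothetical, chosen only to show the direction of inequality (3):

```python
from math import prod

# Hypothetical measured gains of five voltage amplifiers of the same type
gains = [9.2, 10.5, 8.1, 11.3, 9.9]

true_gain = prod(gains)                   # right-hand side of inequality (3)
avg_gain = sum(gains) / len(gains)        # average gain of the amplifiers
estimated_gain = avg_gain ** len(gains)   # left-hand side of inequality (3)

# AM-GM guarantees the average-based estimate never undershoots the true gain
assert estimated_gain >= true_gain
```

The gap between `estimated_gain` and `true_gain` widens as the individual gains scatter further from their average, in line with the discussion above.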

Reliability on demand predictions for systems whose components are logically arranged in series
To simplify the analysis in assessing the impact of component variability on predicting the system reliability on demand, only systems with components working independently from one another will be considered. The impact of assuming average reliability on demand on predicting the system reliability on demand will be investigated through a system with components which are: (i) logically arranged in series (Figure 1a), (ii) logically arranged in parallel (Figure 1b) and (iii) logically arranged in series-parallel (Figure 2a).
Consider the section in Figure 1a, including a number of components of the same type but of different varieties, logically arranged in series and working independently of one another. Let xi (0 < xi < 1) be the reliability on demand of a component Ci of variety i, where i = 1, 2, ..., n. The different component varieties Ci can, for example, be associated with different suppliers, different working conditions, or different ages.
The negative impact of variability on the predicted system reliability for the series system in Figure 1a can be demonstrated by a physical interpretation of the arithmetic mean-geometric mean algebraic inequality (2).
The right-hand side of inequality (2) can be physically interpreted as the reliability on demand of a section composed of n components of different varieties, logically arranged in series (all components are of the same type).
The variables xi in inequality (2) represent the reliabilities on demand for components of different varieties, but all of the same type. Obtaining individual component reliabilities on demand for these different varieties is impractical. To do so would necessitate knowledge of the reliability on demand of every single component manufactured, for any age, working environment, duty cycle, number and type of material flaws, etc. This is why it is inevitable to use average values for predicting the reliability on demand of sections that contain components of the same type but of different varieties.
Due to differences in age, the presence of varying numbers of material and manufacturing flaws, inconsistencies in the manufacturing process, and variability in maintenance and working conditions, no two components of the same type have identical reliability. For example, the presence of material flaws significantly influences the reliability variation of components (Todinov, 2002, 2006). Thus, components of the same type and material, sourced from different suppliers, may exhibit considerable differences in their reliabilities. Such differences can be attributed to variations in the number, size, and location of inclusions and other imperfections within the high-stress zones of the components.
When dealing with n components of a particular type X, we are essentially dealing with a set of inhomogeneous components from n distinct varieties. Because the reliabilities on demand characterising these different varieties cannot be determined, this inherent inhomogeneity necessitates the use of the average component reliability on demand. For instance, if 639 out of 900 valves of type X respond to a command to close/open, the reliability on demand for valves of type X would be assessed by using the average value 639/900 = 0.71.
The expression in the left-hand side of inequality (2) can be interpreted as the average reliability on demand x̄ of the components of the selected type, regardless of their variety. It is simply obtained by taking the average of the reliabilities on demand characterising the separate varieties. The left-hand side of inequality (2) can then be physically interpreted as the reliability on demand of a section built with n components logically arranged in series, where each component has the same, average reliability x̄. The right-hand side of inequality (2) is the actual reliability of the system in Figure 1a. Inequality (2) effectively states that the predicted system reliability on demand, based on an average component reliability on demand x̄, is higher than the actual reliability on demand of the system.
The expression in the left-hand part of (2) is the average reliability on demand x̄ for the components of the selected type X (e.g. valve, sensor, seal, etc.), assessed as an average over the n varieties. Note that the reliabilities on demand xi characterising the n varieties are not known, which is why the system reliability on demand cannot be estimated by using these probabilities. Because the expression for x̄ cannot be evaluated from the xi, the ratio p_r/p is used instead, where p_r is the number of successfully operating (reliable) components of type X from past observations (statistics) and p is the total number of observed components.
Let us assume, for simplicity, that the number of component varieties is equal to the number n of components. Then, for the average reliability on demand x̄ of components of type X, the following equation holds:

x̄ = (x1 + x2 + ... + xn)/n = p_r/p    (4)

Equation (4) can be verified immediately considering that its left-hand side can be presented as

(1/n)x1 + (1/n)x2 + ... + (1/n)xn    (5)

which essentially represents the total probability associated with the successful operation of a component of type X. Indeed, a component of type X can operate successfully in n mutually exclusive ways. This includes the scenario where the component belongs to variety 1 and operates successfully (a compound event with probability (1/n)x1), the scenario where the component belongs to variety 2 and operates successfully (a compound event with probability (1/n)x2), and so on. The probability of successful operation of the component must approach p_r/p because this ratio is the empirical reliability on demand of the component.
Very similar reasoning also applies to the case where the number n of varieties is smaller than the number n_c of components in the system (n < n_c). Indeed, let n1, n2, ..., nn (n1 + n2 + ... + nn = n_c) be the numbers of components in the system from each variety (these numbers are also unknown). The total probability of successful operation of a component in the system is then given by the left-hand side of (6):

(n1/n_c)x1 + (n2/n_c)x2 + ... + (nn/n_c)xn = p_r/p    (6)

which must be equal to the empirical probability of successful operation p_r/p, where p_r is the total number of reliable components observed in the past (from statistics) and p is the total number of observed components.
The left-hand side of (6) is the weighted average of the reliabilities on demand characterising the n varieties. Indeed, a component in the system can operate successfully in n mutually exclusive ways. This includes the scenario where the component belongs to variety 1 and operates successfully (a compound event with probability (n1/n_c)x1), the scenario where the component belongs to variety 2 and operates successfully (a compound event with probability (n2/n_c)x2), and so on.
The total probability of a component operating successfully is then given by Equation (6).
To test Equations (4) and (6), Monte Carlo simulations were also performed, based on p = 100,000 observed components and n = 1, 2, ..., 10 component varieties. In an array, random values between 0 and 1 are initially assigned to the reliabilities on demand characterising the n varieties. Next, p = 100,000 components were selected by choosing their variety at random. Each randomly selected component was also virtually tested for reliable operation on demand by using the reliability on demand characterising its variety. At the end of the simulation, the ratio of the total number of reliable components p_r to the total number p = 100,000 of observed components was formed. The validity of Equations (4) and (6) was confirmed by each Monte Carlo simulation.
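The Monte Carlo check described above can be sketched in a few lines. The variable names and the fixed seed below are illustrative, not taken from the original simulation:

```python
import random

random.seed(42)  # reproducibility of the sketch

n = 5            # number of component varieties
p = 100_000      # number of observed components

# Random reliabilities on demand assigned to the n varieties
x = [random.random() for _ in range(n)]

counts = [0] * n   # components drawn from each variety
p_r = 0            # total number of reliable components
for _ in range(p):
    i = random.randrange(n)        # choose a variety at random
    counts[i] += 1
    if random.random() < x[i]:     # virtual test on demand
        p_r += 1

empirical = p_r / p
weighted_avg = sum(counts[i] / p * x[i] for i in range(n))  # left-hand side of (6)

# Equation (6): the empirical reliability approaches the weighted average
assert abs(empirical - weighted_avg) < 0.01
```

With p = 100,000 trials the sampling noise is of the order of 1/sqrt(p), so the empirical ratio and the weighted average agree to within a fraction of a percent.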
The discrepancy between the predicted and the actual system reliability on demand can be significant as the next numerical example demonstrates.
Let us consider 900 valves of the same type X but of three different varieties (e.g. valves from machine centres 1, 2 and 3). From past statistics, 639 of the monitored 900 valves are reliable on demand. Because only the total number of valves (900) and the total number of reliable valves are known, the reliability on demand for the valves of type X will be estimated from

x̄ = p_r/p = 639/900 = 0.71    (7)

Assume that a set of three valves on a pipeline are initially closed and must all open on command to allow fluid through the pipeline. This means that the valves are logically arranged in series (each valve must be operational for the system to be operational). Commonly, the reliability of the section consisting of these three valves is estimated on the basis of the average reliability on demand x̄ characterising the valves:

R_est = 0.71 × 0.71 × 0.71 ≈ 0.36

Because of the variability in component reliabilities on demand, the actual reliability on demand of the valve arrangement will be different from the estimated reliability on demand.
Considering the results (4) and (6), inequality (3) can be rewritten as

(p_r/p)^n ≥ x1 x2 ... xn    (8)

Assume, for the sake of simplicity, that 300 valves of type X have been manufactured at each of the three manufacturing centres (valves of three distinct varieties). Let the numbers of reliable valves from the different varieties be 288, 258 and 93, correspondingly.
Consequently, the reliability on demand for each variety is as follows: x1 = 288/300 = 0.96, x2 = 258/300 = 0.86 and x3 = 93/300 = 0.31. As can be verified, the following expression holds true for the average reliability on demand x̄:

x̄ = (0.96 + 0.86 + 0.31)/3 = 639/900 = 0.71

Suppose that a valve from each variety has been used to construct the section of three valves, logically arranged in series.
The actual reliability on demand of the section with three valves is

R_real = x1 × x2 × x3 = 0.96 × 0.86 × 0.31 ≈ 0.26

As can be verified, the following relationship holds for the reliability on demand of the valves of type X:

(p_r/p)^3 = 0.71^3 ≈ 0.36 > x1 x2 x3 ≈ 0.26

The estimated system reliability on demand (R_est = 0.36), derived from the average component reliability on demand, is 1.38 times higher than the actual reliability on demand (R_real = 0.26) of the section. The reason for this discrepancy is inequality (8).
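The numbers in this valve example can be reproduced with a short sketch:

```python
from math import prod

# Reliabilities on demand of the three valve varieties (from the example)
x = [288 / 300, 258 / 300, 93 / 300]      # 0.96, 0.86, 0.31

x_bar = sum(x) / len(x)                   # average reliability, 639/900 = 0.71
r_est = x_bar ** 3                        # prediction from the average, ~0.36
r_real = prod(x)                          # actual series reliability, ~0.26

# Inequality (8): the average-based prediction overestimates the reliability
assert r_est > r_real
print(f"estimated {r_est:.2f} vs actual {r_real:.2f}")
```

Note that the ratio 1.38 quoted in the text follows from the rounded values 0.36 and 0.26; the unrounded products differ slightly in the last digits.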
Indeed, according to expression (6), the average reliability on demand is given by

x̄ = (300/900) × 0.96 + (300/900) × 0.86 + (300/900) × 0.31 = 0.71

The larger the deviations of the reliabilities on demand characterising the different varieties from the average value x̄, the stronger the inequality (8) will be. Inequality (8) transforms into an equality in the case of no variation of the reliabilities on demand characterising the separate varieties. In this case, the estimated and the actual system reliability on demand practically coincide. Indeed, assume again that the total number of monitored valves of type X is 900 and the statistics indicated that the number of reliable valves is 639.
Because only the total number of observed valves (900) and the total number of observed reliable valves are known, the reliability on demand of a valve of type X will be estimated from x̄ = 639/900 = 0.71. Assuming that the valves in the section are logically arranged in series, the reliability of the section is estimated from R_est = 0.71 × 0.71 × 0.71 ≈ 0.36. Let the numbers of reliable valves characterising the different varieties be close values, with small variation: 230, 210 and 199, correspondingly. In this case, the reliabilities on demand characterising the different varieties are x1 = 230/300 ≈ 0.77, x2 = 210/300 = 0.70 and x3 = 199/300 ≈ 0.66, correspondingly. Suppose again that a valve from each variety has been used to construct the section of three valves, where the valves are logically arranged in series.
The real reliability of the section is then

R_real = x1 × x2 × x3 = 0.77 × 0.70 × 0.66 ≈ 0.35

which is now very close to the estimated value R_est = 0.36 of the reliability on demand of the section.
If there were no variability in the reliabilities of components of the same type, inequality (8) would become an equality, and there would be no discrepancy between the estimated and the actual system reliability. The greater the deviations of the component reliabilities from the average value, the more pronounced the inequality (8).
Deviations in reliabilities on demand of the separate varieties from the average reliability on demand characterising the corresponding type of component are inevitable, primarily due to differences in age, working conditions, material, and manufacturing flaws.Consequently, discrepancies between the predicted reliability on demand and the actual value will always exist.

Impact of variability on the system reliability predictions for systems with components logically arranged in parallel
Consider the system in Figure 1b with n components logically arranged in parallel. Consider the algebraic inequality

(1 - x1)(1 - x2) ... (1 - xn) ≤ [1 - (x1 + x2 + ... + xn)/n]^n    (9)

where 0 ≤ xi ≤ 1. This inequality is equivalent to the inequality

(1 - x1)(1 - x2) ... (1 - xn) ≤ (1 - x̄)^n    (10)

where x̄ = (x1 + x2 + ... + xn)/n. The last inequality can also be proved by using the AM-GM inequality, after making the substitution yi = 1 - xi. Indeed, according to the AM-GM inequality,

y1 y2 ... yn ≤ [(y1 + y2 + ... + yn)/n]^n

from which inequality (10) is obtained, and from it inequality (9) follows directly. Inequality (9) also has a useful physical interpretation.
Let xi (0 < xi < 1) be the reliability on demand of a component Ci of variety i, where i = 1, 2, ..., n. The different component varieties Ci can be from different suppliers, from different machine centres, or can be components of different age. The left-hand side of inequality (10) can then be physically interpreted as the actual probability of system failure on demand for the system in Figure 1b, composed of n components logically arranged in parallel.
The quantity x̄ in the right-hand side of inequality (10) can be interpreted as the average of the reliabilities on demand characterising the separate varieties. According to expression (7), this average value is equal to the ratio p_r/p of the total number of reliable components observed in the past to the total number of observed components.
Inequality (10) effectively states that, for a system in parallel, the predicted probability of system failure on demand, based on an average component reliability on demand x̄ = p_r/p, is always greater than the actual probability of system failure on demand, irrespective of the reliabilities on demand of the separate components.
The difference between the estimated and the real probability of system failure on demand can be significant, as the next numerical example demonstrates.
Let us consider 900 components of the same type X but of three different varieties (e.g. valves from machine centres 1, 2 and 3). From past statistics, 261 of the monitored 900 components are reliable on demand. Because only the total number of components (900) and the total number of reliable components are known, the reliability on demand for the components of type X is estimated from x̄ = 261/900 = 0.29. Now, suppose that a section consists of one component from each of these three varieties, and that the components are logically arranged in parallel (at least one of the components must be operational for the system to be operational).
The estimated probability of system failure on demand based on the average valve reliability is

F_est = (1 - 0.29)^3 = 0.71^3 ≈ 0.36

In an example symmetrical to one of the previous ones, assume that 300 valves of type X have been produced at each of three manufacturing centres, resulting in valves of three distinct varieties. The numbers of reliable valves from these varieties are 207, 42, and 12, respectively.
Consequently, the reliability on demand characterising each variety is as follows: x1 = 207/300 = 0.69, x2 = 42/300 = 0.14, and x3 = 12/300 = 0.04. As can be verified, the following expression holds true for the average reliability on demand x̄ characterising the valves of type X:

x̄ = (0.69 + 0.14 + 0.04)/3 = 261/900 = 0.29

Consider three valves, one from each of the three varieties, that are logically arranged in parallel and work independently of one another. In a parallel arrangement including components working independently of one another, the overall probability of failure on demand of the section is given by

F_real = (1 - 0.69)(1 - 0.14)(1 - 0.04) = 0.31 × 0.86 × 0.96 ≈ 0.26

The estimated probability of system failure on demand F_est = 0.36 is 1.38 times larger than the real value F_real = 0.26.
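The parallel-system example can be checked with the same kind of sketch as before:

```python
from math import prod

# Reliabilities on demand of the three valve varieties (from the example)
x = [207 / 300, 42 / 300, 12 / 300]       # 0.69, 0.14, 0.04

x_bar = sum(x) / len(x)                   # average reliability, 261/900 = 0.29
f_est = (1 - x_bar) ** 3                  # predicted probability of failure, ~0.36
f_real = prod(1 - xi for xi in x)         # actual probability of failure, ~0.26

# For a parallel system the average-based prediction overestimates failure,
# i.e. it underestimates the system reliability on demand
assert f_est > f_real
```

The symmetry with the series example is visible in the numbers: the same 0.36 and 0.26 appear, but here they are failure probabilities rather than reliabilities.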

Impact of variability on the system reliability predictions for series-parallel systems
Series-parallel systems of the type in Figure 2a are quite prevalent in various applications.These systems consist of components that are logically arranged in series with active redundancy at the component level for enhanced reliability and performance.
The negative effect of assuming average component reliabilities on the predicted system reliability on demand for the series-parallel system in Figure 2a can be demonstrated by a physical interpretation of the next inequality:

(1 - x1^m)(1 - x2^m) ... (1 - xn^m) ≤ [1 - ((x1 + x2 + ... + xn)/n)^m]^n    (12)

where m ≥ 2 is an integer exponent and x1, ..., xn are n real numbers for which 0 ≤ xi ≤ 1. Inequality (12) can be proved as follows.
From the basic properties of concave functions f(x) and g(x), f(kx + (1 - k)y) ≥ k f(x) + (1 - k) f(y) and g(kx + (1 - k)y) ≥ k g(x) + (1 - k) g(y), where 0 ≤ k ≤ 1, it can be shown easily that the sum h(x) = f(x) + g(x) of two concave functions f(x) and g(x) is a concave function, and by induction it can be deduced that the sum of n concave functions is also a concave function.
Consequently, the functions zi = ln(1 - xi^m) are concave because their second derivatives are all negative:

d²zi/dxi² = -[m(m - 1)xi^(m-2)(1 - xi^m) + m² xi^(2m-2)]/(1 - xi^m)² < 0

considering that m - 1 > 0 and 1 - xi^m > 0. Let wi be weights defined such that w1 = w2 = ... = wn = 1/n. According to Jensen's inequality (Steele, 2004), for a concave function f(x) the following inequality holds:

f(w1 x1 + w2 x2 + ... + wn xn) ≥ w1 f(x1) + w2 f(x2) + ... + wn f(xn)    (13)

As a result, the inequality

ln[1 - ((x1 + ... + xn)/n)^m] ≥ (1/n)[ln(1 - x1^m) + ... + ln(1 - xn^m)]

is obtained from (13), which is equivalent to

n ln(1 - x̄^m) ≥ ln(1 - x1^m) + ... + ln(1 - xn^m)    (14)

Since the exponential function e^x is strictly increasing, according to the properties of inequalities, the direction of inequality (14) will not change if both sides of (14) are exponentiated:

(1 - x̄^m)^n ≥ (1 - x1^m)(1 - x2^m) ... (1 - xn^m)    (15)

which yields inequality (12). For m = 2, inequality (12) becomes

(1 - x1^2)(1 - x2^2) ... (1 - xn^2) ≤ (1 - x̄^2)^n    (16)

Let xi (0 < xi < 1) be the probabilities of failure on demand of components Ci of variety i, where i = 1, 2, ..., n. The different component varieties Ci can be components of different age, sourced from different suppliers, or working in different conditions.
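Inequality (12) can also be spot-checked numerically for random values of xi and various exponents m. The following sketch (function name and parameters are illustrative) performs such a check:

```python
import random

random.seed(0)  # reproducible sketch

def holds(n: int, m: int, trials: int = 1000) -> bool:
    """Spot-check (1 - x1^m)...(1 - xn^m) <= (1 - xbar^m)^n for random xi in (0, 1)."""
    for _ in range(trials):
        x = [random.random() for _ in range(n)]
        x_bar = sum(x) / n
        lhs = 1.0
        for xi in x:
            lhs *= 1.0 - xi ** m          # actual reliability on demand
        rhs = (1.0 - x_bar ** m) ** n     # estimate from the average
        if lhs > rhs + 1e-12:             # tolerance for floating-point rounding
            return False
    return True

assert holds(n=3, m=2) and holds(n=5, m=3)
```

Such a random check is of course no substitute for the concavity proof above; it merely illustrates that the inequality survives sampling over many configurations.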
The left-hand side of inequality (16) can then be physically interpreted as the actual reliability on demand of the section in Figure 2a, composed of n subsections arranged in series, in each of which two components are logically arranged in parallel (all components are of the same type).
The expression (x_1 + x_2 + … + x_n)/n in the right-hand side of inequality (16) can be interpreted as the average probability of failure x̄ of the components from the selected type, regardless of their variety. It is equal to the ratio x̄ = p_f/p, where p_f is the number of failed components from type X from past observations (statistics) and p is the total number of observed components. Inequality (16) can also be rewritten as

(1 − x_1^2)(1 − x_2^2) … (1 − x_n^2) ≤ (1 − x̄^2)^n (17)

Inequality (17) effectively states that the predicted system reliability on demand, based on an average probability of failure on demand x̄ for the components of a particular type, is greater than the actual reliability on demand of the system.
This discrepancy can be significant as the next numerical example demonstrates.
Similar to the example in Section 2.3, consider a single type X of components, of three different varieties C1, C2 and C3, characterised by probabilities of failure on demand 0.69, 0.14 and 0.04, respectively, and working independently from one another. Now, suppose that the series-parallel system in Figure 2b includes two components from each of the three varieties. The actual reliability on demand of the series-parallel arrangement is:

R_real = (1 − 0.69^2) × (1 − 0.14^2) × (1 − 0.04^2) ≈ 0.51

Now suppose that the reliability on demand of the section is calculated on the basis of the average probability of failure on demand x̄ characterising the three varieties C1, C2 and C3, given by x̄ = p_f/p = 261/900 = 0.29. The estimated system reliability on demand based on the average probability of failure on demand is:

R_est = (1 − 0.29^2)^3 ≈ 0.77

The estimated value R_est = 0.77 is 1.51 times greater than the real reliability R_real = 0.51 of the section.
For m redundant components in each of n sections in series, the left-hand side of inequality (12) gives the actual reliability on demand for the system in Figure 3a, while the right-hand side of (12) gives an estimate of the system reliability on demand calculated on the basis of the average probability of failure characterising the different varieties C1, C2, …, Cn. For n = 3 sections with m = 3 redundant components in each section (Figure 3b), inequality (12) becomes

(1 − x_1^3)(1 − x_2^3)(1 − x_3^3) ≤ [1 − ((x_1 + x_2 + x_3)/3)^3]^3 (18)

where the left-hand side of (18) gives the actual reliability on demand of the system in Figure 3b and the right-hand side gives an estimate of the reliability on demand of the system in Figure 3b, calculated on the basis of the average probability of failure characterising the three varieties C1, C2 and C3. For components of the same type but of three different varieties C1, C2 and C3, characterised by probabilities of failure 0.69, 0.14 and 0.04, the left-hand side of inequality (18) gives:

R_real = (1 − 0.69^3) × (1 − 0.14^3) × (1 − 0.04^3) ≈ 0.67

for the real reliability on demand of the arrangement in Figure 3b.
If the reliability of the arrangement in Figure 3b is calculated on the basis of the average probability of failure x̄ = p_f/p = 261/900 = 0.29 characterising the three varieties C1, C2 and C3, the estimated system reliability on demand based on the average probability of failure is:

R_est = (1 − 0.29^3)^3 ≈ 0.93

The estimated value R_est = 0.93 is 1.39 times greater than the real reliability on demand R_real = 0.67 of the section.
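The two numerical comparisons above (m = 2 and m = 3) can be reproduced with a short calculation. The following sketch, with function names chosen purely for illustration, computes the actual and the estimated reliability on demand of a series-parallel arrangement with one variety per section and m redundant components in each section:

```python
def series_parallel_reliability(failure_probs, m):
    """Actual reliability on demand: one series section per variety,
    each section holding m redundant components in parallel.
    A section fails only if all m of its components fail."""
    r = 1.0
    for x in failure_probs:
        r *= 1.0 - x ** m
    return r

def estimated_reliability(failure_probs, m):
    """Estimate based on the average probability of failure on demand."""
    n = len(failure_probs)
    x_bar = sum(failure_probs) / n
    return (1.0 - x_bar ** m) ** n

probs = [0.69, 0.14, 0.04]  # varieties C1, C2, C3
print(round(series_parallel_reliability(probs, 2), 2))  # 0.51 (Figure 2b, actual)
print(round(estimated_reliability(probs, 2), 2))        # 0.77 (Figure 2b, estimate)
print(round(series_parallel_reliability(probs, 3), 2))  # 0.67 (Figure 3b, actual)
print(round(estimated_reliability(probs, 3), 2))        # 0.93 (Figure 3b, estimate)
```

In both cases the estimate based on the average probability of failure exceeds the actual reliability, in agreement with inequality (12).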

Impact of variability on the system reliability predictions for parallel-series systems
The system in Figure 4 includes m × n components of the same type and different variety. Monte-Carlo simulation experiments confirmed that no systematic overestimation or underestimation of the reliability on demand of the parallel-series system in Figure 4 exists. This means that for certain combinations of reliability on demand values of the components, a prediction based on an average component reliability on demand leads to underestimation, while for other combinations the prediction leads to an overestimation of the system reliability on demand.
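This behaviour can be illustrated with a small Monte-Carlo experiment. The sketch below assumes the structure of Figure 4 (n parallel branches, each a series of m components with individual probabilities of failure on demand); it is only an illustrative check with hypothetical function names, not the simulation reported in the paper:

```python
import random

def actual_reliability(x):
    """x: n branches, each a list of m failure probabilities.
    A branch works only if all its components work; the system
    works if at least one branch works."""
    p_all_fail = 1.0
    for branch in x:
        r_branch = 1.0
        for xi in branch:
            r_branch *= 1.0 - xi
        p_all_fail *= 1.0 - r_branch
    return 1.0 - p_all_fail

def estimated_reliability(x):
    """Prediction from the average failure probability of all m*n components."""
    flat = [xi for branch in x for xi in branch]
    x_bar = sum(flat) / len(flat)
    n, m = len(x), len(x[0])
    return 1.0 - (1.0 - (1.0 - x_bar) ** m) ** n

random.seed(0)
over = under = 0
for _ in range(10_000):
    x = [[random.random() for _ in range(3)] for _ in range(4)]  # n = 4, m = 3
    a, e = actual_reliability(x), estimated_reliability(x)
    over += e > a
    under += e < a
print(over > 0 and under > 0)  # True: the estimate errs in both directions
```

Unlike the series and series-parallel cases, the error has no fixed sign here, which is consistent with the absence of a systematic bias noted above.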
In conclusion, using average component reliabilities on demand to calculate system reliability on demand is not a dependable approach even for components working independently from one another. The extensive focus in the reliability literature on predicting system reliability by using average component reliabilities cannot be justified, because variability of the reliability on demand of components from the same type is always present.
Another key conclusion is that reducing the variability of components and reliability-critical parameters is crucial to the adequate estimation of system reliability.
Precision machining is an effective way to manufacture components to exact specifications, thereby reducing variability in their reliability-critical parameters. In addition, supplier quality management can ensure that components meet the required quality standards, with reduced variability of reliability-critical properties.
Conducting component testing is also essential in identifying potential variability issues and taking corrective action before the assembly process begins. Statistical process control techniques are an effective way to identify component variability and enable prompt corrective action to be taken. Implementing continuous monitoring and feedback along with statistical process control can ensure the quality and reliability of the produced components.
Removing sources of variability is another important technique for reducing component variability. It has been demonstrated in Todinov (2019) that to achieve the maximum reduction of variability, the sources of variation to be removed must be selected carefully, using a procedure based on the equation for the variance of a distribution mixture (Todinov, 2019).
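As an illustration of this selection idea, the sketch below uses the standard formula for the variance of a distribution mixture, V = Σ p_i(σ_i² + (μ_i − μ)²) with μ = Σ p_i μ_i. The function names and numbers are hypothetical, and the exact procedure in Todinov (2019) may differ in detail:

```python
def mixture_variance(weights, means, variances):
    """Variance of a distribution mixture:
    V = sum p_i * (sigma_i^2 + (mu_i - mu)^2), where mu = sum p_i * mu_i."""
    mu = sum(p * m for p, m in zip(weights, means))
    return sum(p * (v + (m - mu) ** 2)
               for p, m, v in zip(weights, means, variances))

def best_source_to_remove(weights, means, variances):
    """Try removing each source of variation in turn (renormalising the
    remaining weights) and return the index whose removal leaves the
    smallest mixture variance."""
    best_i, best_v = None, float("inf")
    for i in range(len(weights)):
        w = [p for j, p in enumerate(weights) if j != i]
        total = sum(w)
        w = [p / total for p in w]
        m = [x for j, x in enumerate(means) if j != i]
        v = [x for j, x in enumerate(variances) if j != i]
        candidate = mixture_variance(w, m, v)
        if candidate < best_v:
            best_i, best_v = i, candidate
    return best_i, best_v

# Hypothetical sources of variation: the third has an outlying mean,
# so removing it yields the largest reduction of the mixture variance.
idx, var = best_source_to_remove([0.4, 0.3, 0.3],
                                 [10.0, 10.0, 20.0],
                                 [1.0, 1.0, 1.0])
print(idx)  # 2
```

The example shows why the sources must be selected carefully: removing a source whose mean lies far from the mixture mean reduces the variance far more than removing a source with a large individual variance but a central mean.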

Reducing variability by self-balancing
Self-balancing can also be used to reduce variability of reliability-critical parameters. An example is gas turbines with a symmetric design that cancels out net axial forces by generating equal and opposite forces. This design cancels any variable forces because they always appear with the same magnitude and in opposite directions. In addition, the symmetric design eliminates the need for axial bearings, as there is no net axial force to be supported. As a result, the number of components required is reduced, which also promotes higher reliability (Matthews, 1998).
Reducing the variability of loading in assemblies or systems by self-balancing can be found in rotating mechanisms (Meraz, 2005). In rotating machinery, unbalanced forces can cause significant stress and wear on the components, leading to increased maintenance costs and decreased reliability over time. By implementing self-balancing mechanisms, the machines are able to dynamically adjust for any imbalances, thereby reducing the amount of stress on individual components and increasing the overall reliability of the system. For example, modern gas turbines use self-balancing mechanisms such as tilting-pad journal bearings and active magnetic bearings to compensate for any imbalances caused by changes in operating conditions, such as changes in temperature and pressure. These self-balancing mechanisms allow the turbines to operate more efficiently and with less stress on individual components, resulting in increased reliability and decreased maintenance costs over the lifetime of the machine.
Another example of reducing variability by self-balancing is the use of active vibration control (AVC) systems. AVC systems utilise sensors and actuators to actively control the vibration and oscillations of various components and systems within an aircraft, such as engines, wings, and landing gear. By reducing these vibrations and oscillations, AVC systems can improve the reliability of the aircraft by reducing wear and mitigating the risk of fatigue failure or other forms of mechanical failure.
The use of active noise control (ANC) systems in industrial and transportation settings is a similar technique. ANC systems use sensors and actuators to actively cancel out unwanted noise and vibration, thereby improving the overall acoustic environment and reducing the risk of noise-induced stress in workers. These systems work by measuring the incoming noise or vibration and generating an opposite sound or vibration signal to cancel it out in real time. For example, some heavy equipment used in construction and mining sites uses ANC systems to actively cancel out the noise and vibration generated by engines and hydraulic systems. By reducing the amount of noise and vibration that reaches the operator, ANC systems can reduce the risk of noise-induced hearing loss. Additionally, ANC systems can improve the overall reliability of the equipment by reducing wear and fatigue caused by excessive vibration and noise.
The Dynamic Variability Compensation System identifies real-time variability in mechanical performance and uses counteracting mechanisms to nullify these deviations. By pairing each variability with an opposing variability, the system creates a neutral state that enhances the overall mechanical reliability.
The first key component of the dynamic variability compensation system is the set of variability sensors. These measure mechanical inconsistencies or deviations, whether they are due to loading, wear, environmental factors, or any other influences.
The second key component is the set of compensation actuators. These are mechanisms designed to introduce an opposing variability or corrective measure.
The third key component is the predictive analysis unit.This is effectively a computing module that predicts future deviations based on past and current data.
The fourth key component is the control unit. It interprets sensor data, collaborates with the predictive analysis unit, and commands the compensation actuators.
Variability sensors constantly monitor the system, identifying any deviations or inconsistencies in real time. The control unit evaluates these deviations and, with the assistance of the predictive analysis unit, anticipates how they might evolve.
Once the nature and magnitude of a variability are identified, the control unit determines the counteracting variability needed to neutralise it. Compensation actuators are then triggered to introduce this counteracting variability, effectively neutralising the initial deviation. By identifying and neutralising deviations in real time, the system offers a proactive approach to improving mechanical reliability, ensuring systems remain balanced and operate at their peak performance.
For example, if one part of a system is overloaded, asymmetric movement of a counterweight is activated so that the stresses appearing at a particular critical region are counterbalanced (Ciupitu, 2018;Zhen et al., 2017).
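The interplay of the four key components can be sketched as a simple control loop. All class names and the exponential-smoothing predictor below are hypothetical illustrations under stated assumptions, not part of any described implementation:

```python
class VariabilitySensor:
    """Feeds back measured deviations from nominal (here, canned readings)."""
    def __init__(self, readings):
        self._readings = iter(readings)
    def read(self):
        return next(self._readings, 0.0)

class PredictiveAnalysisUnit:
    """Predicts the next deviation by exponential smoothing of past data."""
    def __init__(self, alpha=0.5):
        self._ema, self._alpha = 0.0, alpha
    def predict(self, deviation):
        self._ema = self._alpha * deviation + (1 - self._alpha) * self._ema
        return self._ema

class CompensationActuator:
    """Introduces the commanded opposing variability."""
    def apply(self, correction):
        return correction

class ControlUnit:
    """Interprets sensor data, consults the predictor, commands the actuator."""
    def __init__(self, sensor, predictor, actuator):
        self.sensor, self.predictor, self.actuator = sensor, predictor, actuator
    def step(self):
        deviation = self.sensor.read()
        expected = self.predictor.predict(deviation)
        correction = self.actuator.apply(-expected)  # opposing variability
        return deviation + correction                # residual deviation

# A constant 1.0 deviation is progressively neutralised by the loop.
unit = ControlUnit(VariabilitySensor([1.0] * 10),
                   PredictiveAnalysisUnit(), CompensationActuator())
residuals = [unit.step() for _ in range(10)]
print(round(residuals[0], 3), round(residuals[-1], 3))  # 0.5 0.001
```

The shrinking residuals illustrate the neutral state described above: each measured deviation is paired with an opposing, predicted correction until the net deviation is negligible.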
Potential applications can be found in manufacturing equipment. Machines requiring precision, like CNC machines, can benefit by ensuring each product is consistent despite machine wear or other influencing factors. Other potential applications are for robots working in dynamic environments, ensuring tasks are performed with low variability despite changing conditions.
The dynamic variability compensation system presents some limitations, one of which is the system complexity. Integrating multiple sensors, actuators, and computing modules complicates the design. This complexity leads to increased maintenance needs, increases the potential for more points of failure, and might require specialised training for personnel. Another limitation is the energy consumption. Constant monitoring and counteracting can increase the system's power demands.
Self-balancing mechanisms often require additional components, such as counterweights or compensating masses.This can increase the overall weight and size of the system, which can be a disadvantage in applications where weight and size are crucial factors.
Introducing self-balancing features may lead to increased costs, both in terms of component manufacturing and system maintenance.

Asymmetric response attained by inversion
Countering variability of controlling factors and properties can be achieved not only by exploiting symmetrical arrangements and geometry. In certain cases, promoting asymmetric response may also reduce significantly the negative impact of variability and delay the occurrence of a failure mode. This concept is illustrated in Figure 5a, where the inversion of the electromotor's position relative to its support introduces an asymmetric response related to the loading stress. In the original configuration (as shown in Figure 5a), most of the fluctuating loading stress is tensile, leading to a shorter fatigue life. However, when the position is inverted, most of the fluctuating loading stress becomes compressive, which enhances the fatigue life.
Another example of reducing variability by promoting asymmetric response through inversion can be seen in the enhancement of the reliability of a normally open mechanical switch soldered onto a printed circuit board (PCB), as shown in Figure 6a.
When a variable force, denoted as F, is applied during the operation of the normally open switch in Figure 6a, the soldered points on the printed circuit board experience fluctuating stress with a relatively large magnitude. Over multiple operations, this fluctuating stress loading can lead to premature fatigue cracking. By obtaining asymmetric response through inversion, the normally open switch can be transformed into a normally closed one. This alters the activation process; instead of closing the normally open contacts, the activation now requires opening normally closed contacts. In the design depicted in Figure 6b, an excessive force F applied to the button does not translate into an excessive variable load on the soldered points. Due to this inversion, the severity of the fatigue loading on the soldered points is reduced dramatically and the fatigue life is significantly enhanced.

Asymmetric response attained through nonlinear output
Asymmetric response countering variability can also be based on a non-linear output.
Systems with asymmetric response adjust the system's behaviour based on its operating conditions, ensuring that the system remains reliable under increased variability of the controlling factors. An example of exploiting the asymmetry of the output characteristic to counter the negative effect of variability is provided by metal oxide varistors and Zener diodes (Horowitz & Hill, 1989). These devices are designed to conduct very little current below a certain voltage threshold and then conduct a large amount of current once the voltage exceeds that threshold.
Under normal voltage conditions, the Zener diode conducts very little current, essentially acting as an open circuit.The electronic device connected to the circuit operates normally.
If there is a sudden increase in voltage above the threshold level, the Zener diode becomes highly conductive almost instantly, diverting the excess current away from sensitive electronic components. This rapid change in conductivity at a specific voltage threshold protects other components in the system from experiencing damaging high voltage levels. Once the surge is over and the voltage drops below the threshold, the Zener diode returns to its high-resistance state, ensuring that the normal operation of the device isn't affected.
As a result, by leveraging the asymmetry of the V-I characteristic (essentially non-conductive below a certain voltage and highly conductive above it), the Zener diodes improve reliability.They ensure that electronic devices remain protected from transient voltage spikes that could otherwise damage or reduce the lifespan of the connected equipment.
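An idealised sketch of this asymmetric V-I behaviour is given below; `zener_clamp` is a hypothetical helper and the 5.1 V threshold is merely a common Zener voltage chosen for illustration:

```python
def zener_clamp(v_in, v_threshold=5.1):
    """Idealised asymmetric response of a shunt Zener clamp: below the
    threshold the diode conducts almost no current and the load sees the
    input voltage unchanged; above it, the diode conducts heavily and
    holds the output at the threshold voltage."""
    return v_in if v_in < v_threshold else v_threshold

print(zener_clamp(3.3))   # 3.3 (normal operation is unaffected)
print(zener_clamp(40.0))  # 5.1 (a transient spike is clamped)
```

The step-like change of behaviour at the threshold is the asymmetric response: variability of the input below the threshold passes through unchanged, while variability above it is suppressed entirely.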

Reducing variability during assembly operations
Assembly operations are sources of increased variability of reliability-controlling factors. When assembling loaded components, it is crucial to ensure that the assembly process does not add any additional stresses to the components. Imbalances can cause significant problems such as increased wear, decreased lifespan and even failure. To reduce the magnitude and variability of assembly stresses during the assembling of loaded components, it is essential to use standardised assembly processes. For example, using standard precision alignment tools can greatly improve the level of balancing and reduce stresses. These tools, including laser alignment tools and dial indicators, can help ensure that components are properly aligned, which is crucial for reducing the variability of assembly stresses.
Employing controlled assembly processes to minimise variation is a powerful technique for reducing excessive assembly stresses. By implementing strict assembly instructions and quality control measures, variations in the assembly process can be minimised, leading to a reduction in assembly stresses. Similarly, implementing standard torque specifications and tightening sequences is another effective technique for reducing the variability of assembly stresses. It is also essential to use high-quality fasteners with consistent properties to ensure that they are tightened evenly.
A controlled assembly process can also be implemented by automating certain assembly tasks, which can help to reduce variability caused by human error and improve overall assembly reliability. In this respect, in order to improve consistency and accuracy during assembly, it is essential to use automation and robotics. Employing robots in assembly operations helps reduce variation and ensures that components are properly aligned.
Additionally, using special fixtures and tooling can make assembly easier and more accurate, further reducing variability.Support structures or jigs can provide additional stability and help ensure that components are properly aligned, leading to a reduction in assembly stresses.
Implementing static and dynamic balancing techniques during the assembly of rotating machinery is critical to reducing variability of stresses during operation.Balancing weights are often used to correct imbalances during assembly, and can be added or removed as necessary to achieve proper balance.
Designing products for assembly can help to reduce variability during assembly. Design modifications can be made to improve ease of assembly, further reducing the variability of assemblies. For example, redesigning component interfaces can ensure that they fit together more smoothly, making assembly operations easier, thereby reducing variability.

Conclusions
1. The paper reveals a fundamental flaw in the existing approach for predicting system reliability on demand.

2. Using average component reliabilities on demand to calculate system reliability on demand is a fundamentally flawed approach even for components working independently from one another, as it is prone to significant errors.

3. The impact of assuming average component reliabilities on demand on the predicted reliability on demand of systems with independently working components logically arranged in series has been revealed by using a physical interpretation of the arithmetic mean-geometric mean algebraic inequality. The estimated system reliability based on average component reliability on demand is always greater than the actual reliability on demand of the system.

4. The impact of assuming average component reliabilities on the predicted reliability on demand of series-parallel systems has been revealed by using the physical interpretation of a novel algebraic inequality based on concave functions. The estimated system reliability on demand based on average component reliability on demand is always greater than the actual reliability of the system.

5. The impact of assuming average component reliabilities on the predicted probability of failure of systems with components logically arranged in parallel has been revealed by using a physical interpretation of the arithmetic mean-geometric mean algebraic inequality. The estimated probability of system failure on demand based on average component reliability on demand is always greater than the actual probability of failure on demand of the system.

6. It has been demonstrated that if there were no variability in the reliabilities of components of the same type, there would be no discrepancy between the estimated value for the system reliability on demand and the real value. The deviation of the reliability of components of the same type from the average value is inevitable, due to differences in age, operating conditions, environment, material and manufacturing flaws.

7. Useful domain-independent techniques have been introduced for countering the variability of safety-critical factors (i) by self-balancing, (ii) by promoting asymmetric response and (iii) through assembly operations.

Figure 1. Reliability network of a system with components from n varieties, (a) logically arranged in series; (b) logically arranged in parallel.

Figure 2. (a) Reliability network of a series-parallel system with components from n varieties; (b) reliability network of a series-parallel system involving components of 3 varieties.

Figure 3. (a) Reliability network of a series-parallel system with components from n varieties and m redundancies in each section; (b) reliability network of a series-parallel system with components from 3 varieties and 3 redundancies in each section.

Figure 4. Reliability network of a parallel-series system with n parallel branches each including components from m varieties.

Figure 5. Inverting the relative position of an object with respect to its support delays a failure mode.

Figure 6. Enhancing the reliability of a mechanical switch by inversion from a normally open to a normally closed state.