Uncertainty propagation and sensitivity analysis in composite manufacturing cost estimation: ALPHA-framework and cost tool development

Abstract The presented ALPHA cost tool is a novel, highly flexible bottom-up parametric hybrid cost estimation framework. It combines the benefits of both methods with the aim of providing cost information during all product development phases. The software offers full transparency to the user and advanced two-level uncertainty management, enabling the user not only to understand a project's cost structure but also to identify its cost-driving parameters. The implemented sensitivity analysis makes the intrinsic uncertainty inevitably embedded in cost estimation graspable. Gaussian error propagation offers direct feedback without extra calculation time, while classic Monte Carlo simulation gives detailed insight through post-estimation analysis. Of the vast number of commercially available or self-developed cost tools, many probably already incorporate uncertainty measures similar to those proposed here. This article, however, shows the potential of the additional information obtainable from uncertainty propagation and demonstrates a way of integrating these risk considerations into a self-developed cost tool.


Introduction
The strength of composites is their superior mechanical properties paired with unmatched lightweight potential and a high degree of design freedom. But their main weakness is the complex and labor-intensive manufacturing process combined with high raw material costs.
Rising financial pressure, also in traditionally less cost-concerned fields like aerospace or racing applications, increases the need for cost estimation and control methods suitable for the composite manufacturing industry. Reliable cost estimation is the foundation of successful business operation, which makes it a core feature in streamlining production efficiency. Not only does correct bidding depend on cost estimation, but all design and production decisions also need to be evaluated economically with its help. Unfortunately, as the term estimation already indicates, this process is always afflicted with a varying degree of uncertainty. This uncertainty can come either from errors in estimating the individual input parameters or from unforeseeable economic changes [1][2][3].
It is therefore extremely advantageous not only to obtain a reasonable estimation, but also to know the likelihood of a specific result and the expected deviation based on model input uncertainty. Uncertainty analysis helps to comprehend the impact of these errors, and it allows identification of system-critical parameters, the so-called cost drivers. Recognizing these values is important to the estimation, as it tells the estimator which parameters to focus on.
In this article the authors will discuss cost estimation together with suitable methods for uncertainty management and will present their concept of what they consider an ideal cost tool for composite aerospace manufacturing. A case study will demonstrate the application of the developed ALPHA software estimation tool and the importance of uncertainty management within cost estimation. Additionally, this article also aims to provide a guideline on the development of a cost tool, the possible trade-offs that need to be made and other important things that need to be considered.

Cost estimation methods
In general, all cost estimation approaches can be classified according to one of the following three basic methods (analogous, parametric and bottom-up cost estimation) or as a combination of them. While the distinction into these three methods is usually sufficient, Niazi et al. [4] provide a more detailed approach for the classification of cost estimation models. Here only the fundamental principles of cost estimation will be discussed. Further and more detailed information on cost estimation, especially for composite aerospace manufacturing, can be found in [4][5][6][7].

Analogous cost estimation
The concept of analogous cost estimation follows the idea of using stored historic case knowledge to establish an estimation of the current project. The selection of the known past cases is governed by similarity. This can be done either rather simplistically, by selecting the most similar case and taking its cost values directly, or by establishing more complex similarity comparison methods that generate estimates based on multiple cases and their respective levels of resemblance. Where applicable, the analogous method provides a fast and easy way to obtain a reliable estimation. The downside is the need to correctly identify the required similar case and to adequately rate the economic impact of case deviations [8][9][10][11][12].
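As a minimal sketch of the simplest variant described above — selecting the most similar historic case and taking its cost value directly — the following illustrates the idea; the features, weights and cost values are hypothetical and not taken from any real database:

```python
# Minimal analogous-estimation sketch: choose the most similar stored case.
# Features, weights and costs are hypothetical illustrations.
cases = [
    {"area_m2": 1.2, "n_plies": 8,  "cost": 310.0},
    {"area_m2": 2.5, "n_plies": 12, "cost": 545.0},
    {"area_m2": 0.8, "n_plies": 6,  "cost": 220.0},
]
WEIGHTS = {"area_m2": 1.0, "n_plies": 0.1}

def distance(query, case):
    """Weighted dissimilarity between the new part and a stored case."""
    return sum(w * abs(query[k] - case[k]) for k, w in WEIGHTS.items())

def analogous_estimate(query):
    best = min(cases, key=lambda c: distance(query, c))
    return best["cost"]   # simplest variant: take the cost value directly

est = analogous_estimate({"area_m2": 1.0, "n_plies": 8})
```

A more elaborate variant would blend the costs of several near cases weighted by their resemblance, as mentioned above.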

Parametric cost estimation
The idea of parametric estimation is to statistically analyze historic cases in order to establish mathematical correlations between project parameters and project costs. Once found, these so-called cost estimation relationships (CERs) allow for rapid estimation as long as the current case is within the CER's historic data range. Finding this correlation can either happen manually, normally leading to causal relationships, or by applying, for example, neural networks, with the downside of generating black-box systems [13][14][15][16][17].
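A hypothetical example of how such a CER could be derived: a power-law relation cost = a · mass^b fitted to invented historic cases by log-log least squares. Both the data and the functional form are illustrative only, not from any real dataset:

```python
import math
import statistics

# Hypothetical historic cases: (part mass in kg, cost in EUR) — illustration only.
cases = [(1.0, 120.0), (2.0, 205.0), (4.0, 350.0), (8.0, 600.0)]

# Fit a power-law CER, cost = a * mass**b, via least squares in log-log space.
lx = [math.log(m) for m, _ in cases]
ly = [math.log(c) for _, c in cases]
mean_lx, mean_ly = statistics.fmean(lx), statistics.fmean(ly)
b = (sum((x - mean_lx) * (y - mean_ly) for x, y in zip(lx, ly))
     / sum((x - mean_lx) ** 2 for x in lx))
a = math.exp(mean_ly - b * mean_lx)

predict = lambda mass: a * mass ** b   # valid only inside the historic data range
```

As noted in the text, such a CER should only be evaluated within the mass range covered by the historic cases.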

Bottom-up cost estimation
While the two methods above rely on the evaluation of historic knowledge, the third method, bottom-up, works by decomposing the whole project into smaller, easy-to-estimate elements. These elements can be work steps, activities or design features. After generating individual estimates for each element, these are summed up to form the full estimate of the project. This procedure provides the highest flexibility of all methods and the freedom of independence from historic cases. But it also requires a large amount of design detail and process understanding from the estimator and takes the most time to conduct [18][19][20][21][22][23][24][25].

Method comparison
The parametric and analogous estimation techniques are especially suitable for fast and easy estimations with a limited amount of known details, as long as sufficient applicable historic case knowledge is available. If such knowledge is unavailable or a more detailed estimation is needed, the bottom-up technique is the best option. It is also the most flexible method, as it is not restricted by historic cases; on the downside, it needs the highest amount of user input and known design details. A comparison of the different techniques and their strengths and weaknesses is shown in Figure 1.
In cost estimation practice, different methods are used in different settings, and models are often a combination of two or all three basic methods; see Hueber et al. [6] for more details.

Bottom-up-parametric hybrid
For the development of our cost estimation model, ALPHA, we decided on a bottom-up-parametric hybrid approach. The idea was to combine the flexibility and applicability to new technologies of the bottom-up method with the early design stage capabilities of the parametric method. The aim was to develop a single estimation platform that allows cost estimation throughout all product development phases. This idea is displayed in Figure 2 in combination with a cost development curve taken from Duverlie and Castelain's study [11]. As can be seen, it is very important to establish cost control as early as possible in the development process [2]. In the beginning it is essential to get rough numbers with a minimum of effort and detail. As development progresses, the influence of smaller details becomes more central, and the estimation is required to allow for those to affect the outcome. This is when the higher level of detail of the bottom-up method becomes preferable.
In ALPHA this method combination was realized with a work-step-based bottom-up basic framework. The desired estimate is built up from single elements for every work step required in the desired manufacturing chain. This allows for fully customizable process chains and maximum estimation flexibility. It gives the cost engineer absolute control over the estimation process without constraints, while providing templates to increase the speed and reproducibility of the estimation. Each element is then estimated either by classic engineering cost build-up or by using a parametric equation for this work step, depending on the available data and the desired estimation detail. As an alternative or in addition to these individual step elements, a full-scale parametric model can be integrated into the estimation chain. This then allows the complete top-down estimation of the part. The whole program was designed to be easily adaptable to changing requirements, and new element templates can be generated and implemented rapidly. This concept can be seen in Figure 3, showing the available estimation methods and their combination in ALPHA [26]. By combining these two systems in one framework, it is possible, for example, to expand the full-scale parametric estimation with additional work steps which are not covered by the parametric equation. This could be an additional curing cycle, or the part could be integrated into a larger assembly with further production steps. But the most important benefit is the possibility of seamless transition between rapid and detailed estimation, which makes it ideal for application in all development stages. The full-level parametric model was developed in parallel by our project partner, first as a standalone tool and later intended as an extension module to be implemented within the ALPHA framework [27][28][29][30].
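The work-step-based build-up described above might be sketched as follows. The class and step names are hypothetical illustrations, not ALPHA's actual interfaces; each chain element contributes its own estimate, whether from an engineering build-up or a parametric equation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WorkStep:
    """One element of the bottom-up chain. `cost` may implement a classic
    engineering build-up or a parametric equation for that step."""
    name: str
    cost: Callable[[Dict[str, float]], float]

def estimate(chain: List[WorkStep], params: Dict[str, float]) -> float:
    """Sum the per-step estimates to form the full project estimate."""
    return sum(step.cost(params) for step in chain)

# A fully customizable, illustrative two-step chain (values invented):
chain = [
    WorkStep("preforming", lambda p: p["labor_rate"] * p["preform_time"]),
    WorkStep("curing", lambda p: p["machine_rate"] * p["cure_time"]),
]
total = estimate(chain, {"labor_rate": 60.0, "preform_time": 1.5,
                         "machine_rate": 30.0, "cure_time": 2.0})
```

New step templates are then just new `WorkStep` entries, which mirrors how the framework stays adaptable to new process chains.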
The ALPHA tool was developed specifically with single processes or parts in mind. In comparison to full system solutions this reduces program complexity and helps implementation in early stages, where it aims to support designers in their decision process. Full system estimation is of more interest on the strategic level and needs special experts to operate, while ALPHA is created as a tool for day-to-day use at the practical engineering level. But the developed methodology could be applied to full system estimation as well, if needed.
Several reasons exist why it might be beneficial to self-develop a cost tool instead of purchasing a commercially available solution. First, only self-development allows the freedom to choose the best-fitting method combination for one's desired application. Second, it offers the chance to generate exactly the needed process chains and process steps, and the flexibility to adapt and expand the capabilities on demand. Another big advantage is the achievable transparency of the estimation, as all used equations and data are accessible. In ALPHA the wish for clarity led to all equations being presented to the user while entering the input values. We consider this crucial, as hidden mathematics leads to a black-box system, which reduces the estimator's confidence in it.
The biggest downside is probably the required development time and resources, but one has to be aware that a commercial tool also requires adaptation time, effort to learn the tool and resources to adjust the databases to individual needs.

Sensitivity and uncertainty
Cost estimation is inherently affected by uncertainty originating from different sources. First are economic uncertainties arising from unpredictable changes on the market, like energy costs, fuel prices, etc., or unforeseeable changes in exchange rates, taxes or inflation. The second source is engineering errors resulting in wrong assumptions of required input quantities, falsely calculated production times or overlooked complications. In addition, a model always has to incorporate simplifications in order to stay comprehensible, which might cause errors if not correctly accounted for [1][3][31][32][33][34].

The goal of sensitivity analysis is to investigate the influence of the individual input parameters of a model on the overall model output [35][36][37]. It can generally be divided into two types: local and global sensitivity analysis. In the first, the influence of every input value on the result is calculated for a specific value of the input parameter; this result is then only valid within a small range around this defined value. In contrast, the global analysis uses the statistical distribution of the input parameters to analyze the influence over the full variable space [35][36][38][39][40][41].

Uncertainty propagation, Monte Carlo simulation and sensitivity indices
In the ALPHA cost estimation tool, a two-level uncertainty management was implemented. The first level, simple error propagation, immediately provides a rough overview of the effects of the set uncertainties in the form of expected uncertainty brackets. The second level, Monte Carlo simulation, takes additional calculation time before allowing detailed insight into the probabilistic cost distribution.

Gaussian error propagation
For this purpose, the first-order second-moment or Gaussian error propagation (GEP) method described in Equation (1) was used. For GEP to be applicable, the full model must be expressible as an analytical function of all input variables, and all input variables have to be assumed uncorrelated and normally distributed. This is acceptable, as in common practice these assumptions would be necessary anyway: for estimation problems it would be extremely difficult to obtain any information on the correlation between the inputs or to establish higher-order distributions [42,43].
Besides the mentioned limitations, GEP is an easy-to-implement method that requires little recurring calculation effort. Only the establishment of the local derivatives can be time-consuming. In ALPHA this was solved by calculating them once at implementation and storing them within the program. Only if an equation is altered or a new one is introduced to the system, e.g. when a new module is added, does ALPHA automatically calculate the newly needed local derivatives on its first execution and again save them for future use. This ensures that the time-consuming generation is only done when needed; in the meantime, GEP provides direct feedback without noticeable calculation time.
\[
\sigma_f = \sqrt{\sum_i \left(\frac{\partial y}{\partial x_i}\right)^{2} \operatorname{Var}(x_i)} \qquad (1)
\]
where \(\partial y/\partial x_i\) is the partial derivative of the function with respect to input \(x_i\), \(\sigma_f\) is the standard deviation of the result, \(\operatorname{Var}(x_i)\) is the variance of input \(x_i\), and \(x_i\) is the input parameter.
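A minimal numerical sketch of GEP follows, using central finite differences in place of ALPHA's stored analytic derivatives, applied to a hypothetical one-step cost model (all parameter values are invented for illustration):

```python
import math

def gep_std(f, x, sigma, h=1e-6):
    """First-order second-moment (Gaussian) error propagation.
    f: model function of a list of inputs; x: nominal input values;
    sigma: input standard deviations (inputs assumed uncorrelated, normal).
    Partial derivatives are taken numerically by central differences here,
    standing in for stored analytic derivatives."""
    var_f = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)  # central-difference partial derivative
        var_f += dfdx ** 2 * sigma[i] ** 2
    return math.sqrt(var_f)

# Hypothetical one-step cost model: labor rate * time + material cost.
cost = lambda p: p[0] * p[1] + p[2]          # [rate EUR/h, time h, material EUR]
sigma_f = gep_std(cost, [60.0, 2.0, 100.0], [5.0, 0.25, 10.0])
```

For this linear-in-each-input model the result matches the analytic Equation (1) value exactly, which illustrates why GEP imposes no noticeable calculation time.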
Although factor correlations are normally neglected, approaches to model them exist in the literature. For example, [3,44] propose the use of a Bayesian network, [31] suggests including correlations in the Monte Carlo simulation and [45] advises combining strongly correlated variables into one.
While the error propagation can be calculated directly during the estimation process, the more intensive sensitivity analysis needs to be conducted once the estimation is finished. For this second phase of the uncertainty management, a Monte Carlo approach with global sensitivity analysis was chosen [31,32,34,46].

Sigma normalized derivative
The sigma-normalized derivative is the simplest sensitivity index. It is generated by normalizing the derivative of the function y with respect to the input parameter x_i with the standard deviations of the input and output. This makes the indices independent of the magnitude of the original inputs and ensures that the quadratic sum of all \(S^{\sigma}_{x_i}\) equals 1 for a linear model, which is necessary for meaningful parameter ranking [39].
\[
S^{\sigma}_{x_i} = \frac{\sigma_{x_i}}{\sigma_y}\,\frac{\partial y}{\partial x_i}
\]
where \(S^{\sigma}_{x_i}\) is the sigma-normalized derivative for input \(x_i\), \(\sigma_y\) is the standard deviation of the function, \(\sigma_{x_i}\) is the standard deviation of input \(x_i\) and \(\partial y/\partial x_i\) is the partial derivative of the function with respect to input \(x_i\).
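For a hypothetical linear cost model (values invented for illustration), the sigma-normalized derivatives and their unit quadratic sum can be checked directly:

```python
import math

# Hypothetical linear cost model: cost = rate * time + material,
# evaluated at nominal values (all numbers illustrative).
rate, time_h, mat = 60.0, 2.0, 100.0
sig = {"rate": 5.0, "time": 0.25, "mat": 10.0}   # input standard deviations
d = {"rate": time_h, "time": rate, "mat": 1.0}   # analytic partial derivatives

# Output standard deviation via Gaussian error propagation.
sigma_y = math.sqrt(sum((d[k] * sig[k]) ** 2 for k in d))
# Sigma-normalized derivatives per input.
S = {k: sig[k] / sigma_y * d[k] for k in d}
quad_sum = sum(v ** 2 for v in S.values())       # equals 1 for a linear model
```

Here the required time carries the largest index, i.e. it would be ranked as the dominant parameter of this toy model.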

Monte Carlo simulation, scatter plots and linear regression
In its various variations, Monte Carlo simulation (MCS) is the most widely used method for risk and uncertainty measurement. It works on the premise that for all inputs a normal distribution with an expected mean value and standard deviation is known or determinable. Based on these input distributions a pseudo-random sample, called the Monte Carlo matrix (MCM), is created. It contains a row vector for every individual input parameter. Its size is specified by the number N, defining the amount of random numbers per parameter. The choice of N highly influences the quality of the MCS, but the calculation effort for the sensitivity indices also increases drastically, by a factor of 2·N or N², depending on the calculation method used [39,47].
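Constructing such an MCM and pushing it through a hypothetical cost model might look like this (stdlib only; the parameter distributions are illustrative, not the case-study values):

```python
import random
import statistics

random.seed(0)
N = 100_000  # Monte Carlo matrix size: number of draws per input parameter

# Normally distributed inputs (mean, std) of a hypothetical cost model.
params = {"rate": (60.0, 5.0), "time": (2.0, 0.25), "mat": (100.0, 10.0)}

# The MCM: one pseudo-random row vector per input parameter.
mcm = {k: [random.gauss(m, s) for _ in range(N)] for k, (m, s) in params.items()}

# Propagate every column of the MCM through the model.
costs = [mcm["rate"][i] * mcm["time"][i] + mcm["mat"][i] for i in range(N)]
mean_cost = statistics.fmean(costs)
std_cost = statistics.pstdev(costs)
```

The resulting `costs` sample is what the scatter plots, regression coefficients and distribution functions discussed below are computed from.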
Scatter plots, as explained in Figure 4, are a graphical method to identify the influence of input factors [39]. To aid visualization, the point clouds are often combined with regression fits. Based on the mostly linear nature of the model and the shape of the clouds, basic linear regression was chosen in this work.
Although the scatter plots provide very neat visual information, a single number per parameter to rate its influence is often desired. The most common way to achieve this is to fit a linear regression in the form of Equation (2) onto the data. The coefficients b_0 and b_{x_j} are determined by the method of least squares [39].
\[
y = b_0 + \sum_j b_{x_j} x_j \qquad (2)
\]
where \(y\) is the regression function, \(b_0\) is the intercept and \(b_{x_j}\) are the regression coefficients of the inputs \(x_j\).
Now the standardized regression coefficients (Equation (4)) can be calculated. They provide a reliable measure of the sensitivity to the individual inputs and of the linearity of the whole model, which can be assessed by the sum of squares of all \(\beta_{x_j}\): for a linear model it equals 1, and it is lower for any degree of non-linearity.
\[
\beta_{x_i} = b_{x_i}\,\frac{\sigma_{x_i}}{\sigma_y} \qquad (4)
\]
where \(\beta_{x_i}\) is the standardized regression coefficient, \(b_{x_i}\) is the regression coefficient of input \(x_i\), \(\sigma_{x_i}\) is the standard deviation of input \(x_i\) and \(\sigma_y\) is the standard deviation of the function. The biggest disadvantage of the previously introduced sigma-normalized derivative is that it only evaluates the model around the distribution midpoints. The linear regression coefficient and the following variance-based indices, on the other hand, are established using the full space of the input factors. Their fidelity, however, requires the Monte Carlo sample number to be large enough [39].
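A sketch of Equation (4) on a hypothetical Monte Carlo sample follows. Since the inputs here are sampled independently, each standardized coefficient reduces to the correlation between that input and the output; all model values are invented for illustration:

```python
import random
import statistics

random.seed(1)
N = 50_000
# Hypothetical Monte Carlo sample for cost = rate * time + material.
rate = [random.gauss(60.0, 5.0) for _ in range(N)]
time_h = [random.gauss(2.0, 0.25) for _ in range(N)]
mat = [random.gauss(100.0, 10.0) for _ in range(N)]
y = [r * t + m for r, t, m in zip(rate, time_h, mat)]

def src(x):
    """Standardized regression coefficient beta = b * sigma_x / sigma_y,
    with b the least-squares slope of y on x (inputs independent here)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b_ - my) for a, b_ in zip(x, y)) / (len(x) - 1)
    b = cov / statistics.variance(x)
    return b * statistics.stdev(x) / statistics.stdev(y)

betas = {"rate": src(rate), "time": src(time_h), "mat": src(mat)}
linearity = sum(v ** 2 for v in betas.values())  # near 1: model nearly linear
```

The squared sum of the coefficients lands close to 1, mirroring the linearity check applied to the case study later in the article.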

Variance-based methods (first- and total-order sensitivity index)
Another important class of sensitivity indices are the variance-based types, of which the first-order or main-effect index and the total-order sensitivity index are the most used. While very useful, all variance-based indices suffer from the drawback of relying on the variance to capture the total uncertainty behavior of the input variables [38].
The first-order sensitivity index, defined as shown in Equation (5), represents the direct contribution of the individual parameter to the output variance, which is why it is also referred to as the main effect. It is best suited for ranking the factors when factor interaction can be considered non-relevant [37][38][39].
\[
S_i = \frac{V_{x_i}\!\left(E_{x_{\sim i}}(y \mid x_i)\right)}{V(y)} \qquad (5)
\]
where \(S_i\) is the first-order sensitivity index of input \(x_i\), \(V\) is the variance, \(E\) is the expected value and \(x_{\sim i}\) denotes all input factors except the \(i\)-th.
In the scatter plots, uniform point distributions indicate low sensitivity, while emerging patterns indicate increasing sensitivity [38,39].
The first-order index captures only the direct influence of the factor. In order to determine interactions between parameters, higher-order indices are required. The second-order index can detect interactions between parameter pairs, the third order those of input triplets, and so on. From this principle Sobol [41] introduced the total-effect index, often referred to as the Sobol index [47,48]. It comprises the full effect of a factor, including its direct contribution plus all its interactions with the other factors, and is defined through Equations (6, 7) [36]. S_1 is the first-order index calculated as shown in Equation (5) for model parameter 1, while S_1j are the second- and higher-order indices for the interaction between factors 1 and j [37][38][39].
\[
S_{T1} = S_1 + \sum_{j \neq 1} S_{1j} + \ldots \qquad (6)
\]
where \(S_{T1}\) is the total-order sensitivity index of input \(x_1\), \(S_1\) is the main effect of input \(x_1\) and \(S_{1j}\) are the second- and higher-order effects of inputs \(x_1\) and \(x_j\). The Sobol indices are especially suitable for parameter screening: parameters whose total effect is zero can be considered non-influential. While their full calculation can be tedious for large models, suitable estimators for both first- and total-order indices exist for approximation [49].
\[
S_{Ti} = \frac{E_{x_{\sim i}}\!\left(V_{x_i}(y \mid x_{\sim i})\right)}{V(y)} = 1 - \frac{V_{x_{\sim i}}\!\left(E_{x_i}(y \mid x_{\sim i})\right)}{V(y)} \qquad (7)
\]
where \(S_{Ti}\) is the total-order sensitivity index of input \(x_i\), \(V\) is the variance, \(E\) is the expected value and \(x_{\sim i}\) denotes all input factors except the \(i\)-th. To calculate the main effect, the parameter is fixed at one of its variations before the model's variance is calculated from the remaining MCM. This procedure is repeated for every variation of the parameter existing in the original MCM. For calculating the total effect the procedure is similar, with one difference: all but the one parameter are fixed before the variance is calculated. The concept of these basic calculation principles for the two sensitivity indices is shown in Figure 5 [39].
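As a sketch of how such indices can be approximated in practice, the following uses the common two-matrix "pick-and-freeze" estimators (a centered Saltelli-style estimator for the first-order index, Jansen's estimator for the total order) rather than the brute-force fixing procedure described above. The model and its parameter distributions are hypothetical:

```python
import random
import statistics

random.seed(2)
N = 20_000
dists = [(60.0, 5.0), (2.0, 0.25), (100.0, 10.0)]  # (mean, std) per input
model = lambda x: x[0] * x[1] + x[2]               # hypothetical cost model

def sample_matrix():
    return [[random.gauss(m, s) for (m, s) in dists] for _ in range(N)]

A, B = sample_matrix(), sample_matrix()
fA = [model(r) for r in A]
fB = [model(r) for r in B]
V = statistics.pvariance(fA + fB)
mu = statistics.fmean(fA + fB)

S1, ST = [], []
for i in range(len(dists)):
    # A with column i taken from B ("pick and freeze").
    ABi = [row[:i] + [B[j][i]] + row[i + 1:] for j, row in enumerate(A)]
    fABi = [model(r) for r in ABi]
    # Centered Saltelli-style estimator for the first-order (main) effect.
    S1.append(sum((fB[j] - mu) * (fABi[j] - fA[j]) for j in range(N)) / (N * V))
    # Jansen estimator for the total-order effect.
    ST.append(sum((fA[j] - fABi[j]) ** 2 for j in range(N)) / (2 * N * V))
```

For this nearly linear toy model the total-order indices barely exceed the first-order ones, i.e. interactions are negligible, which is exactly the situation later observed in the case study.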
It is the scope of much research to find improved calculation algorithms for these and similar sensitivity indices in order to make sensitivity analysis applicable to ever more complex problems [39,47,50]. One of these approaches is the Fourier amplitude sensitivity test (FAST) for the calculation of the sensitivity indices [37].

Case study description
The sensitivity analysis was performed on the cost estimation of a resin transfer moulding (RTM) composite part manufactured to industrial standards. The estimated process chain shown in Figure 6 consists of the raw material, preparation of the tool and preform placement. The actual RTM process takes place in a press with a two-component injection unit used to infuse the preform with resin. After curing, the part is demolded and sent to machining for edge trimming before non-destructive inspection (NDI) for quality assurance is performed. The production is finished with some assembly and final quality inspection. In both quality steps scrap is detected, and its costs are assessed based on the production resources invested up to that point.
For every parameter listed in Figure 6, a most likely value was determined and used in the ALPHA cost estimation. Furthermore, plausible lower (LB) and upper (UB) boundaries for these estimation values were defined and integrated for the tool's error propagation. To fit the MCS, a normal distribution was assumed such that the mean value μ coincides with the most likely value, while the standard deviation σ was set to one quarter of the difference between UB and LB, resulting in 95.45% of the distribution's values lying within the UB/LB boundaries. All other parameters were set to fixed values, first to keep the evaluation effort manageable and second because it was assessed that these values can either be estimated very accurately or are given by the financing or controlling department.
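The mapping from estimate bounds to distribution parameters described above can be written directly; the labor-rate numbers below are invented for illustration:

```python
from statistics import NormalDist

def bounds_to_normal(most_likely, lb, ub):
    """mu = most likely value, sigma = (UB - LB) / 4, so that ~95.45%
    of the distribution lies inside [LB, UB] for a symmetric interval."""
    return most_likely, (ub - lb) / 4.0

# Hypothetical labor rate in EUR/h with its estimation bounds.
mu, sigma = bounds_to_normal(60.0, 50.0, 70.0)
coverage = NormalDist(mu, sigma).cdf(70.0) - NormalDist(mu, sigma).cdf(50.0)
```

The computed `coverage` reproduces the 95.45% figure, which is simply the ±2σ probability mass of a normal distribution.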
For the sensitivity simulation, the MCM was filled with pseudo-random numbers according to the statistical definitions in Figure 6. The convergence was tested for different matrix sizes. A larger matrix size provides a higher resolution, but calculation time and memory demand also increase dramatically with it.

Cost distribution
Performing the cost estimation in ALPHA provided two outcomes: first, the estimated product's manufacturing time and cost, and second, the uncertainty brackets for both values, obtained through Gaussian error propagation for the whole project and for each individual work step. Figure 7 shows the contribution of each estimated work step together with the expected uncertainty generated via uncertainty propagation. The high auxiliary costs are typical for composite parts, as in this case they cover tool preparation, demolding and preforming time. Material costs, curing and quality costs are, as expected, also significant contributors to composite production costs.
Besides the per-work-step distribution, a cost split into labor, equipment and material costs can also be seen in Figure 7, together with the estimated total project costs and the size of the Gaussian uncertainty brackets. It shows the characteristically high contribution of labor costs to the overall manufacturing costs of a composite part.

Monte Carlo analysis
By performing an MCS, additional information can be gained once the estimation is set up. In order to choose the right size or resolution of the MCM, a convergence study was conducted, whose results are displayed in Figure 8. The graphic shows the convergence of the mean value of the project costs over the size of the MCM, which is basically the number of considered pseudo-random values per input parameter. At a total of 100 different size points, logarithmically distributed between 1×10² and 1×10⁶, three iterations were created and analyzed, and their mean value was calculated. The calculation of three iterations with subsequent averaging was done to compensate for outliers at the low size range of the study. With larger matrix sizes the outliers start to be balanced out within the matrix itself. From the graphic it can be seen that the deviation quickly recedes and from 5×10⁴ on is negligibly small. Based on this finding, all further evaluations in this article were conducted using an MCM size of 1×10⁵ as a good compromise between resolution and computational effort [31,35].
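The convergence behavior can be reproduced in outline: run the Monte Carlo estimate at increasing matrix sizes and watch the mean settle. The model and its values are hypothetical stand-ins for the case-study parameters:

```python
import random
import statistics

random.seed(3)

def mc_mean(n):
    """Mean project cost from n Monte Carlo draws of a hypothetical model."""
    return statistics.fmean(
        random.gauss(60.0, 5.0) * random.gauss(2.0, 0.25)
        + random.gauss(100.0, 10.0)
        for _ in range(n))

sizes = [10 ** k for k in range(2, 6)]   # 1e2 ... 1e5 draws per parameter
means = [mc_mean(n) for n in sizes]      # fluctuation shrinks roughly as 1/sqrt(n)
```

The standard error of the mean falls with the square root of the sample size, which is why the curve in Figure 8 flattens out so quickly.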
The analysis allows the establishment of the cost distribution function shown in Figure 9. It can be seen that the directly estimated uncertainty agrees very closely with the standard deviation obtained from the MCS. The uncertainty brackets are only slightly wider, covering around 74% of the expected cases, while 68% lie within the standard deviation. But one must be aware that this agreement depends on the model's linearity; it only works this well here due to the linear character of the cost estimation.
The charm of the graphical evaluation in Figure 9, and of the MCS in general, is that it is simple to implement while providing a large amount of additional insight. In particular, it cannot only show an error range but also provides the distribution of the expected manufacturing cost.
Furthermore, the transformation into the cumulative probability function shown in Figure 10 is also pivotal. This visualization allows for graphical risk analysis and enables the cost estimator to trade off expected risk against possible profit. It directly shows the probability of the production cost remaining below a certain value, helping in the definition of offerings, especially by providing feedback on the risk involved with specific offering prices or discounts. For example, in this case the probability that the project costs remain below 800 € is about 80%, and the chance that the costs will not exceed 1000 € is virtually 100%. This means that being able to offer/sell the part for more than 1000 € nearly certainly provides a revenue (not considering overheads).
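The cumulative reading of Figure 10 corresponds to a simple empirical quantile computation on the Monte Carlo cost sample; the toy model and values below are hypothetical:

```python
import bisect
import random

random.seed(4)
# Hypothetical Monte Carlo cost sample, sorted for cumulative evaluation.
costs = sorted(
    random.gauss(60.0, 5.0) * random.gauss(2.0, 0.25)
    + random.gauss(100.0, 10.0)
    for _ in range(100_000))

def p_below(price):
    """Empirical probability that the production cost stays below `price`."""
    return bisect.bisect_right(costs, price) / len(costs)
```

Evaluating `p_below` at a candidate offering price gives exactly the kind of risk feedback described above: the chance that the part can be produced below that price.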

Scatter plots, sensitivity indices and parameter ranking
The information generated by the Monte Carlo analysis was further used to characterize the model's behavior in dependence on the individual input parameters in detail. For each of the 20 investigated input variables, scatter plots were generated, visualizing their influence on the outcome. From the scatter diagrams, the linear regression and the linear regression coefficients were calculated by the method of least squares. Furthermore, the data was used to establish the first- and total-order sensitivity indices and compare them to the sigma-normalized derivatives listed in Table 1 [47].
In general, the four most influential parameters shown in Figure 11 come as no surprise when looking at the cost distribution in Figure 7. With 42%, labor costs are by far the most dominant cost factor, primarily defined by the labor rate and secondly by the required times of labor-intensive work steps. The high influence of the auxiliary time is somewhat specific to this case study and derives from a production characteristic of this part: due to the sealing configuration, tool preparation requires an unusually long time, while the flat preform geometry requires no preforming step. In a more typical process, the auxiliary time would probably be replaced by the preforming time in the list of most influential parameters. And although the material costs in Figure 7 contain several additional positions, the raw material costs in Figure 11 are their main contributor. The cure time is also a well-known cost driver for composite parts with long curing cycles. But even though the identified cost drivers were not unexpected, being able to mathematically quantify and correctly rank them in all estimations is highly valuable information for every cost estimator. In a direct comparison of the different sensitivity indices (see Table 1), a change of ranks can be found for three parameter pairs. Cure time and scrap rate are one such pair: while the linear regression and the sigma-normalized derivatives consider cure time more important than scrap rate, the first-order and the total-effect sensitivity index both rank scrap rate before cure time. This is because the sensitivity indices, and especially the higher-order indices, are better suited for non-linear correlations, and the two scrap rates are the two non-linear factors in this case study. But generally one can see that the model is highly linear, as the Sobol (total-effect) index, the first-order (main-effect) sensitivity index and the linear regression coefficient are all in close agreement with each other [47].
Second, the sum of \(\beta_{x_i}^2\) (the squared sum of the standardized regression coefficients) equals 0.9839, indicating 98% linearity [39]. The two negative factors for the sigma-normalized derivative (X13 and X17) originate from the inverse proportionality of these two factors to the model output, causing the local derivatives to be negative for these two parameters.

Discussion
The emphasis of this article is on cost tools in general and the key factors that need to be considered in a cost tool in order to maximize its usefulness. The focus was not on the strength or capability of an individual cost tool, nor on any specific application of one, although one was presented as an example.
Composite production is in transition from 'garage shop manufacturing' to a full-grown automated industry, with the need to adopt typical standard management processes, two of which are cost estimation and uncertainty management. This article links, for the first time, composite production cost estimation with uncertainty propagation and risk management.

Requirements and benefits for cost tool
In order to achieve the desired benefit, a cost tool has to fulfil several often-contradicting requirements. But when correctly designed, the use of a standardized cost tool can provide heightened reproducibility, fidelity and homogenization of the performed cost estimations. It further allows the seamless incorporation of knowledge systems and databases and can be used for integrated multi-attribute cost optimizations. Figure 12 summarizes the key attributes considered by the authors for a good cost estimation tool. While being a relatively simple image, the novelty and charm of Figure 12 is that it shows at one glance all needed aspects of a cost tool, paired with the encountered trade-offs.
Transparency and uncertainty management are perceived as primary and non-contradicting factors. Transparency for the user is absolutely necessary in order to avoid a black-box feeling and to ensure the estimator's trust in the tool and its capability. It also decreases the risk of estimation errors caused by not fully understanding the software. Risk and error considerations are necessary simply because cost estimation can never be an exact procedure, and awareness of this intrinsic uncertainty is essential.
Based on the needs and purpose of the model, careful trade-offs might be necessary for the other six attributes. Tailoring the tool to a specific application makes it less complex and easier to use, but also reduces its flexibility and its ability to be applied to new manufacturing technologies or products. The same applies to the desired level of detail: the more detailed the model, the more complex it gets and the less user-friendly it can become.

Cost tool development: roadmap and guideline
The biggest benefit of a self-developed cost tool is that it can be designed specifically for the given requirements. This allows for the principle 'as complex as needed, as simple as possible' while still providing all desired information and required functionality. Commercial software, on the other hand, is often either not designed for the specific application or has far more extensive capabilities than needed, making it complex to operate. Additionally, such software should not be assumed to be fully operational out of the box; extensive training, establishment of datasets and adaptations are usually still necessary.
Commercial software therefore often has an adjustment slider to adapt the underlying equations and estimation principles to different applications and companies. But identifying the right setting of such an adjustment parameter can prove difficult and, in our experience, allows the estimation to take on virtually any value desired. Full estimation flexibility and transparency are often only achievable with a self-developed system. When choosing to develop custom-built cost estimation software, one first has to carefully consider the desired capabilities, the functionality and the planned phase of application, as described in the following and depicted in Figure 13.
At the beginning, the fundamental requirements of the tool need to be set by answering, among others, the following questions: For which phase of product development is the cost tool intended? Should it provide a quick and rough estimate at little detail during the early stages, or is a more detailed estimation required to balance small design details or production optimizations?
Then the product portfolio has to be analyzed thoroughly with respect to its diversity, the available case knowledge and the possibility of establishing such knowledge. Available knowledge bases or the implementation of knowledge systems can drastically reduce estimation times, but one has to be careful not to unknowingly extrapolate beyond the knowledge base's data range. For very diverse products it might be more difficult to find a common cost driver, but chances are that one can still be found and the model might just need additional input or correction factors.
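A minimal guard against such silent extrapolation could look like the following sketch. The parameter names and historic ranges are invented for illustration; a real knowledge base would supply them from its case data.

```python
# Hypothetical sketch: flagging cost-model inputs that lie outside the
# range covered by the historic knowledge base. Names and ranges are
# illustrative assumptions, not data from the ALPHA tool.
knowledge_base = {
    "part_area_m2": (0.2, 4.0),       # range covered by historic cases
    "ply_count": (4, 40),
    "fiber_price_eur_kg": (15.0, 80.0),
}

def check_extrapolation(inputs):
    """Return the parameters lying outside the knowledge base range."""
    return [name for name, value in inputs.items()
            if not (knowledge_base[name][0] <= value <= knowledge_base[name][1])]

new_case = {"part_area_m2": 6.5, "ply_count": 12, "fiber_price_eur_kg": 45.0}
flagged = check_extrapolation(new_case)
print(flagged)  # the oversized part area lies outside the historic range
```

Flagged parameters would then warn the estimator that the model is being used beyond its validated data range.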
Depending on the defined requirements and the character and availability of historic case knowledge, the most suitable estimation method, or a combination of methods, must be chosen as the foundation of the model. Analogous and parametric estimation are more suitable for the early design stages with few known details, while bottom-up estimation becomes more favorable in later stages, where it can provide more information. A general classification of the methods was given in Section 2.1 and Figure 1, and a detailed overview of concepts and models for the composite aerospace industry can be found in [4][5][6].
Without additional sensitivity or error handling, however, any estimation tool lacks the capability to take estimation uncertainty or risk into account. Typically, these would be uncertainties from estimating model input parameters, such as material usage or labor times, or external risks from, for example, possible price changes. Sensitivity analysis allows the statistical evaluation of the model and the identification of estimation-critical parameters. As discussed before, the authors consider this a crucial aspect of cost estimation and regard uncertainty awareness as absolutely necessary for every estimation process. The most basic technique is simple error propagation, such as the first-order second-moment (or Gaussian) method used in the ALPHA cost tool. Its main benefit, besides the easy implementation, is the extremely low required calculation power, which allows immediate feedback to the estimator even during model set-up. On the downside, it is only capable of returning simple error values that represent a probabilistic boundary for the eventual outcome, and its fidelity depends on the model's linearity. For mostly linear models, though, these error brackets correspond very well with the standard deviation obtained from the far more time-consuming MCS. The MCS is probably the most common method of sensitivity analysis for generating information on the model's sensitivity to changes in the individual input parameters. At the same time, it allows the establishment of a cost distribution function, which enables refined financial risk and potential assessment.
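The agreement between the two approaches can be sketched as follows: first-order second-moment (Gaussian) propagation combines the input standard deviations through the model's partial derivatives, and a Monte Carlo run serves as the reference. The toy cost function, its means and its standard deviations are assumptions for demonstration only.

```python
import numpy as np

def cost(p):
    # toy cost model: material with a fixed markup plus labor hours times rate
    material, hours, rate = p
    return material * 1.15 + hours * rate

mean = np.array([400.0, 12.0, 65.0])   # assumed means: material, hours, rate
sigma = np.array([40.0, 2.0, 5.0])     # assumed standard deviations

# FOSM / Gaussian propagation: sigma_y^2 = sum_i (d cost/d x_i)^2 * sigma_i^2,
# with partial derivatives taken by central finite differences
eps = 1e-5
grad = np.array([
    (cost(mean + eps * np.eye(3)[i]) - cost(mean - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
sigma_fosm = float(np.sqrt(np.sum((grad * sigma) ** 2)))

# Monte Carlo check with independent normally distributed inputs
rng = np.random.default_rng(1)
samples = rng.normal(mean, sigma, size=(100_000, 3))
sigma_mcs = float(np.std(cost(samples.T)))

# the two values agree closely for this near-linear model
print(round(sigma_fosm, 1), round(sigma_mcs, 1))
```

The FOSM value is available essentially for free at every model update, whereas the Monte Carlo run additionally yields the full cost distribution for risk assessment.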

Encountered limitations
During the establishment of the presented cost tool, and moreover during this work, the following limitations were encountered. Some of them are inevitable in cost estimation, while others simply had to be accepted for the time being. In the absence of better knowledge, any correlations between the individual input parameters were neglected and all parameters were considered independent. The underlying cost estimation is subject to typical simplifications and possible estimation errors.
All model values and their distributions were chosen to the best of the authors' knowledge and belief, but misjudgment in cost estimation can never be fully eliminated. For the sensitivity analysis, only a basic Monte Carlo method with brute-force calculation of the indices was implemented; more advanced techniques could lead to similar results while consuming significantly less calculation time. Due to missing real-life industry data, the intended parametrizations for the individual work steps could not be established, although the software is fully designed to incorporate and facilitate them. The simple error propagation and sensitivity indices (sigma-normalized derivative, linear regression coefficient and, to some degree, the first-order sensitivity index) only work reliably for linear models and quickly lose fidelity with increasing non-linearity. A detailed benchmark and method comparison, although highly interesting, was impossible, as it would have required additional comparable cost estimates that were unavailable to us.
Some of these limitations could be overcome with additional development time or in a different application context. Given an existing knowledge base, statistical tests could be applied to uncover potential correlations between inputs, which could then be included in the model to account for the dependencies. The transfer of estimation parametrizations between industries or even between companies will always be delicate and either needs careful checking and probably adaptation, or a completely new development becomes necessary. The Monte Carlo simulation used here is not only the most common method but is also often used as a benchmark to test improved calculation methods, which could be implemented in a different realization. Upon implementation, a model can easily be tested for its linearity to check its suitability for the simple uncertainty methods or its need for advanced methods. Generally, a cost tool should never be considered finished but should always be subject to continuous improvement through experience.
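Such a correlation screening could, for instance, be a plain sample-correlation test over the knowledge base, as in the following sketch. The data, parameter names and the 0.3 threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: screening a (hypothetical) knowledge base for
# correlated inputs before treating them as independent. Data, names
# and threshold are assumptions for demonstration.
rng = np.random.default_rng(2)
n = 500

labor_hours = rng.normal(10.0, 2.0, n)
# hypothetical dependency: energy cost loosely follows labor hours
energy_cost = 3.0 * labor_hours + rng.normal(0.0, 4.0, n)
material_price = rng.normal(50.0, 5.0, n)  # independent of the others

data = np.column_stack([labor_hours, energy_cost, material_price])
corr = np.corrcoef(data, rowvar=False)

# flag parameter pairs whose sample correlation exceeds the threshold
threshold = 0.3
flagged_pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)
                 if abs(corr[i, j]) > threshold]
print(corr.round(2))
print(flagged_pairs)  # only the labor/energy pair should be flagged
```

Any flagged pair would then be modeled with its dependency (e.g. via correlated sampling) instead of being treated as independent.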

Conclusion
In this article, one possible concept for a cost estimation tool capable of covering the full spectrum of aerospace composite part development was presented. The novelty of the ALPHA approach lies mainly in the specific combination of methods and the new application of these established methods to cost estimation in the composite manufacturing environment. Especially the Gaussian error propagation provides very good and instantly available uncertainty feedback; its limitation to linear systems is rather uncritical, as cost relations are mainly linear or close to linear in nature. It was further shown that the implementation of uncertainty and risk management into an existing cost estimation tool is rather simple, while afterwards proving to be a very important and helpful improvement to the final estimation. This work presented the basic steps of cost estimation, the aspects to consider in an estimation tool and simple measures for uncertainty control, together with a short example and a guideline for cost tool development. In conclusion, neglecting uncertainty largely reduces the value of any cost estimation, especially as a single output value promises a false sense of certainty that is contrary to the nature of any cost estimation process. Only the implementation of uncertainty measures leads to a cost tool capable of providing the essential information needed for successful business operation.