Fragility-based lot-sizing in veterinary pharmaceutical plants under demand uncertainty

We study a production lot-sizing problem inspired by a veterinary pharmaceutical plant in which demands are uncertain. First, we develop a deterministic capacitated lot-sizing model for the production of animal pesticides, performed in three machine-specific stages. Second, we propose a traditional robust optimisation formulation following the popular budget-of-uncertainty approach. Third, we derive a novel fragility-based approach that circumvents well-known issues with traditional robust optimisation approaches, such as the estimation of budgets of uncertainty, the over-conservatism of robust solutions and the sensitivity of solutions to the decision maker's risk attitude. The fragility-based approach is grounded in the idea of minimising violations, over the full uncertainty support, from a user-specified cost target. It avoids the estimation of budgets of uncertainty and produces less conservative solutions via explicit modelling of constraint violation. We demonstrate the effectiveness of our approach on instances built upon real data provided by our industrial partner, a major player in the Brazilian veterinary pharmaceutical sector. The results show that our fragility-based approach reduces average total costs across all instances and maintains greater model stability under different target estimations. It also preserves cost savings when bottlenecks are introduced in production and when inventory costs and capacities are varied.


Introduction
Lot-sizing is one of the most important pillars of production planning. It typically involves the optimal timing and sizing of production quantities in order to maximise profit or minimise cost while serving customer demands (Brahimi et al. 2017; Clark, Almada-Lobo, and Almeder 2011). In this paper, we investigate production lot-sizing in the process industry, inspired by the case of a real-world plant that produces more than 200 veterinary pharmaceutical items to fulfil demands of clients in more than 14 countries. Typically, production in the veterinary pharmaceutical industry, as in many other sectors such as food, steel, and paper, is resource-intensive and presents constraints associated with the specific operational limitations of the production facility (Mehrotra et al. 2011). In addition, accurate demand information is hard to come by, as products are seasonal and rather dependent on both weather conditions and the agricultural production cycle, which makes production planning particularly challenging (Alem, Oliveira, and Ruiz Peinado 2020; Marques et al. 2020).
Given the uncertainties related to demand forecasting that typically arise in the addressed context, we propose robust optimisation (RO) approaches, which assume that nominal and deviation values are known a priori and belong to an uncertainty set. RO has emerged as a powerful technique to deal with uncertainties due to its nonparametric nature and potential computational advantages over purely stochastic programming approaches. This means that, in contrast to stochastic programming, RO operates without the assumption of an input probability distribution and imposes minimal restrictions on the behaviour of the uncertainty. A classical RO model requires only the range of data variations, whereas a stochastic programming model needs an assumed probability distribution whose fitting on historical data typically entails maximum likelihood estimation of scale/location parameters. Even if the sample average approximation of a stochastic programming model is used, a probability distribution assumption remains necessary, with the additional challenge of solving over large numbers of scenarios to achieve convergence via the Law of Large Numbers, which is generally computationally cumbersome even for small combinatorial problems. The decisions prescribed by RO tend to be less sensitive to data variations, which is accomplished by optimising worst-case performance over data within an uncertainty set, whereas stochastic programming, on top of sensitivity to input data, has an added sensitivity to the chosen probability distribution.

Motivation
The seminal RO approach by Bertsimas and Thiele (2006) in lot-sizing under uncertainty achieved its pioneering success by avoiding probability assumptions and allowing protection against all realisations from an uncertainty set while maintaining high tractability. The adjustable RO method (Aharon, Boaz, and Shimrit 2009; Gorissen and Hertog 2013) further contributed to robust lot-sizing by allowing a subset of decisions to be made after uncertainty revelation (also called 'wait-and-see' decisions), while maintaining the same protection against uncertainty as the seminal RO approach. The method also avoids probability assumptions and ensures tractability by modelling wait-and-see decisions as affine functions of the uncertainty. More recent advances in the area of distributionally robust lot-sizing (Zhang, Shen, and Song 2016) have allowed the enrichment of uncertainty sets with descriptive statistics such as moments and covariances. The tractability depends on the descriptive statistics used.
For instance, covariances typically lead to conic robust counterparts when the deterministic lot-sizing model is linear.
The lot-sizing literature widely acknowledges that the seminal RO approach produces over-conservative solutions (Aurélie 2004; Bienstock and Özbay 2008; Bohle, Maturana, and Vera 2010; Chu, Huang, and Thiele 2019; Gorissen and Hertog 2013). Furthermore, both the RO and adjustable RO approaches rely on the budget of uncertainty, a parameter that controls the size of the uncertainty set and to which the conservatism of prescribed solutions is notably sensitive (Alem and Morabito 2012; Ardjmand et al. 2016; Curcio et al. 2018; Rocco and Morabito 2016). Estimating budgets of uncertainty is neither straightforward nor intuitive, as it requires the decision maker to quantitatively assess their aversion to more/less conservative solutions, which is fundamentally subjective and often impossible to attribute an accurate numerical value to. Furthermore, by restricting the size of the uncertainty set, the budget of uncertainty offers protection against only a limited range of uncertainty realisations, which translates to a lack of guarantee on out-of-sample performance: if data fall outside the uncertainty set, performance may suffer uncontrollably. Approaches to address this over-conservatism, such as the adversarial approach (Bienstock and Özbay 2008) and constraint aggregation (Bohle, Maturana, and Vera 2010), still rely on estimations of budgets of uncertainty.
In the lot-sizing context, over-conservatism translates to costly setups and production levels in order to cover high demands at minimal holding/backlogging cost. Evaluating the budget of uncertainty amounts to estimating a parameter that will rule out high cumulative demands in each time period. In addition to the lack of intuition surrounding this parameter value, the dynamic nature of lot-sizing requires this estimation to be carried out for every time period, which further complicates the task. Even if the decision maker were to have reasonable bases to estimate the budget of uncertainty, its impact on the conservatism of the resulting lot-sizing solution is unclear. We therefore ask the research question: can we develop a robust lot-sizing methodology that avoids the estimation of budgets of uncertainty, relies on (more) intuitive parameter inputs, and explicitly models the impact of uncertainty on the lot-sizing solution, without compromising the tractability of the robust counterpart?

Related papers
Table 1 gives an overview of lot-sizing problems tackled via robust optimisation, focusing on their main characteristics, such as type and production environment, robust optimisation method, and potential downsides of each approach. The reader interested in an up-to-date review of stochastic lot-sizing problems is referred to Alem, Oliveira, and Ruiz Peinado (2020); Bindewald, Dunke, and Nickel (2023) and references therein. The first line of Table 1 also shows our intended contribution as a matter of comparison. Most papers develop pure robust optimisation (RO) models with budgets of uncertainty à la Bertsimas and Thiele (2006) to primarily deal with demand uncertainty (and occasionally other parameters such as costs, returns, and production times) in a multitude of lot-sizing variants, including inventory-routing (Golsefidi and Jokar 2020; Solyali, Cordeau, and Laporte 2012); lot-sizing with remanufacturing (Attila et al. 2021; Wei, Li, and Cai 2011); lot-sizing with cutting-stock (Alem and Morabito 2012; Curcio et al. 2022); lot-sizing with supplier selection (Buhayenko and Hertog 2017; Thevenin, Ben-Ammar, and Brahimi 2022); lot-sizing with deterioration/perishability (Coniglio, Koster, and Spiekermann 2016, 2018; Santos, Agra, and Poss 2020); lot-sizing and scheduling (Alem et al. 2018; Curcio et al. 2018; Hu and Hu 2020); as well as other variants involving crop planning (Rocco and Morabito 2016), pricing (Ardjmand et al. 2016), order acceptance (Aouam et al. 2018), a make-to-order policy (Agra, Poss, and Santos 2018), and a multi-plant lot-sizing perspective (Jalal et al. 2022). Not surprisingly, only a few studies derive adjustable robust optimisation formulations (ARO) and/or distributionally robust optimisation approaches (DRO), most probably to avoid losing computational performance in already difficult combinatorial problems. In general, these approaches are applied to simpler and/or more standard lot-sizing problems, such as the uncapacitated case, a single item, or no setup. Notably, there is only a handful of studies whose models are tailored for a specific sector/industry, showing that there is definitely room for investigating lot-sizing in practical contexts and validating models using real data. Note also that, as mentioned above, estimating proper budgets of uncertainty and computational intractability are often regarded as potential drawbacks of existing robust lot-sizing approaches.
In particular, the few papers that derive approaches to circumvent typical RO issues such as over-conservatism (via ARO, DRO and other data-driven methods) end up with harder-to-solve formulations, often requiring specialised solution methods.

Research gaps
The methodology that comes closest to answering our research question in lot-sizing is that of Aharon, Boaz, and Shimrit (2009), who proposed an approach called the Globalised Robust Counterpart (GRC). The GRC allows uncertainty to vary over the entire support and omits the budget-of-uncertainty restriction. It minimises a target cost while allowing violations from that target, such that these violations follow a user-defined unit cost times a distance measure. This results in high tractability when the distance measure is polyhedral, meaning that the robust counterpart is a linear program if the deterministic model is also linear. The main issue with the GRC is that it needs a user-input unit cost of target violation, which is not intuitive to estimate.
All mainstream robust lot-sizing models, including the GRC, have cost minimisation objectives. In the GRC, the cost takes the form of a target, whereas the remaining literature focuses on worst-case holding and backlogging costs under polyhedral uncertainty sets constrained by budgets of uncertainty. A recent theoretical development in RO was proposed by Long, Sim, and Zhou (2023), called fragility-based robust optimisation, whose philosophy is to minimise the cost of violation of uncertainty-affected constraints from preset targets. Instead of budgets of uncertainty, the approach requires the decision maker to set a target for the overall cost, which is considerably less subjective and better grounded in business performance. For instance, the target could be a nominal price the business is willing to pay for all operations. This is considerably easier to conceive than a conservatism level, which depends on individual decision makers' personal traits, or a cost of constraint violation, which lacks intuitive bases. In addition, while current robust lot-sizing models immunise planning constraints against uncertainty, they do not model how uncertainty affects the constraints. The fragility-based approach allows an explicit modelling of constraint violations under uncertainty variations and concurrently gives the decision maker a sense of the extent of performance departures from preset targets as uncertainty varies. Moreover, as the model does not restrict variations within budgets of uncertainty, it yields better out-of-sample performance.
To the best of our knowledge, this paper is the first to apply the fragility-based approach to lot-sizing or production planning. Our approach is not to merely undertake a direct translation of the theoretical framework conceived by Long, Sim, and Zhou (2023), but also to tailor the method to the specificities of capacitated lot-sizing models, especially with regard to backlogging/holding costs, so as to yield better solutions while maintaining tractability. More precisely, while the direct translation would yield a nonlinear robust counterpart, we provide a reformulation that is linear and falls in the same problem class as the deterministic model, thereby avoiding a loss of tractability relative to traditional RO. Furthermore, we produce violations that follow the logical relationships between backlogging/holding and demand variations so as to strengthen the fragility-based formulation and maintain the intrinsic attributes of lot-sizing models.
The main contributions of this study are as follows:
• We develop a deterministic capacitated lot-sizing model to represent the production planning of animal pesticides, whose manufacturing process usually involves three stages: weighing, reactors, and filling. As far as we are aware, modelling the animal pesticide manufacturing process in this way has not been done before. In addition, multistage lot-sizing problems (and/or in a multi-machine setting) have not been tackled under the prism of robust optimisation, as evidenced in Table 1. Our model portrays production flow between stages, while enforcing time limits on weighing and volume restrictions on reactors, as well as allowing for reactor cleanup and for product-specific formulas to be created in appropriate reactors. This formulation is further extended to take into account demand uncertainty based on the popular robust optimisation with budgets of uncertainty.
• We propose a novel fragility-based optimisation approach for the lot-sizing problem of a veterinary pharmaceutical plant. The approach relies on setting a cost target and allowing violations from the target as demands depart from nominal. It is adapted from the recent developments by Long, Sim, and Zhou (2023), but we tailor it to the specificities of a capacitated lot-sizing problem with backlogging and holding cost considerations.
• In particular, specific modifications are proposed to the original idea of Long, Sim, and Zhou (2023) to ensure tractability of the final model, and constraint violations are redefined such that they follow backlogging/holding schedules. The approach avoids the estimation of budgets of uncertainty and offers less conservative production planning since it does not seek to optimise worst-case performance, thereby tackling the two major issues that inhibit the efficacy of traditional robust optimisation approaches.
The proposed approach, together with a deterministic formulation and the traditional RO, is applied to solve a real-world problem from a veterinary pharmaceutical company. The results show that our fragility-based approach reduces average total costs across all instances and maintains greater model stability under different target estimations. It also preserves cost savings when bottlenecks are introduced in the production plant and when inventory costs and capacities are varied.
The remainder of this paper is organised as follows. In Section 2, we introduce the studied veterinary pharmaceutical company's problem. Section 3 describes the models developed in this work, including a deterministic formulation, the traditional RO formulation, and the novel fragility-based formulation. Section 4 describes the computational results of all proposed formulations for the company. Section 6 summarises the managerial insights obtained from our computational experiments and discusses the limitations of the fragility-based approach. Finally, Section 7 provides concluding remarks and proposes some future research directions.

Problem description
Our investigation is inspired by a veterinary pharmaceutical plant located in the State of São Paulo, Brazil. This company is a major player in Latin America, being responsible for providing more than 200 distinct items to 14 different countries, with an annual revenue of over 100 million dollars. The main production facility of the company is a large-scale plant with approximately 1000 workers, where the majority of the items are produced. In general, veterinary pharmaceutical items are seasonal, being quite dependent on weather conditions and on the agricultural production cycle. As the majority of the items have their own seasonal demands, this sector requires relatively flexible machines to prevent equipment from going unused for most of the year. The case study is built upon the sector of animal pesticides, which are usually used for protection against parasites such as worms and ticks. We chose this sector for our case study for several reasons: (i) the availability of high-quality data about the production planning and processes of this sector, such as item demands, average utilisation of machines, and nominal capacities; (ii) the high amount of capital immobilised in stocks due to poor managerial decisions regarding demand forecasting and production planning; and (iii) the fact that the animal pesticides sector operates independently from the rest of the production plant. A total of 27 individual items are produced in this sector using 11 different formulas. We define an item as a packed unit ready to be sold, while a formula is the chemical compound within the item. Different packages (in terms of types and/or volumes), even with the same chemical compound, are considered different items by the company, e.g. cans and bottles of soda.
Figure 1 illustrates the company's production process of animal pesticides in three stages. In the first stage, the raw materials needed to make an item are collected from storage and weighed in order to prepare the correct recipe for the formula. In the second stage, the raw materials are assigned to the reactors, where they will be mixed to create the required formula. Although any reactor can be used to produce any formula, the assignment of formulas to reactors is usually made based on the volume (in litres) to be synthesised. The animal pesticides sector has four individual reactors, two with 5000 litres, one with 4000 litres, and another one with only 50 litres; all can operate in parallel. In the reactors, the formulas are processed in batches. A setup is required to clean up the machines whenever a new formula needs to be synthesised. Although the cleaning step is usually not required between batches of the same formula, sanitary guidelines followed by the company state that any reactor must be cleaned after five consecutive batches of the same formula. The formulas are then transferred to one of the filling lines in the third and last production stage, where they are bottled into different items. Each filling line operates with different volume ranges, and thus each item must be assigned to a specific line based on its volume and cannot be processed by a non-compatible machine. The filling lines are initially adjusted to produce a specific item in a given bottle size. At each item changeover, a setup related to cleaning and machine adjustment is required. After being filled with the formula, the recipient is labelled, sealed and packed into the final product, and sent to the customer or to storage. Even though the reactors are often regarded as the production process bottleneck, machines from stages one and three, especially filling lines II and IV, may present high levels of utilisation in some periods, therefore limiting the production of animal pesticides. Here, a machine's utilisation is defined as the fraction of time in which the machine is busy, considering setups and actual production. To represent the possible production bottlenecks arising from the high utilisation of some stages/lines, the proposed optimisation model takes into account the machine capacity of all three production stages, as well as constraints to ensure the conservation of production flow across the different stages. The corresponding mixed-integer optimisation model considers that the planning horizon is divided into T = 12 periods (months) in which production planning decisions must be updated. The problem consists of defining the optimal lot-size levels for each item and period in order to minimise the company's inventory and backlogging costs throughout the planning horizon.
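The cost trade-off the model minimises (setups versus inventory) can be seen in its simplest special case: single item, single stage, uncapacitated, no backlogging. The dynamic program below is the textbook Wagner-Whitin recursion, offered only as an illustrative sketch; the company's model is capacitated, multi-stage, and allows backlogging, so this is not the paper's formulation, and all names and numbers are ours.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-item uncapacitated lot-sizing (Wagner-Whitin) dynamic program.
    cost[t] = optimal setup + holding cost for serving periods 1..t, where
    the last setup happens in some period j and produces demands j..t."""
    T = len(demand)
    cost = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        if demand[t - 1] == 0:          # nothing new to serve this period
            cost[t] = cost[t - 1]
        for j in range(1, t + 1):       # last setup placed in period j
            holding = sum(holding_cost * (k - j) * demand[k - 1]
                          for k in range(j, t + 1))
            cost[t] = min(cost[t], cost[j - 1] + setup_cost + holding)
    return cost[T]
```

With demands (10, 10), a setup cost of 50 and a unit holding cost of 1, a single setup covering both periods costs 60, beating two setups at 100; with a setup cost of 5, producing twice (total 10) is cheaper than holding.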
Our formulation does not consider work in process; thus, we only need to check the balance of production between two consecutive stages. Unlike the filling lines, where it is possible to estimate the individual processing time for any item, the reactors produce the formulas in batches, which take roughly similar processing times independently of the volume produced. Furthermore, the reactors must follow technical requirements related to production volumes. A batch is not allowed to have a volume below a reactor's minimum because the reactor would be unable to mix the formula properly. Similarly, it is not possible for a batch to exceed the reactor's maximum volume. In addition, different items can be processed together in the same batch as long as they are composed of the same formula. Finally, the reactors not only need to be cleaned before any formula changeover, but they must also be cleaned after every five consecutive batches of the same formula. On the other hand, the weighing and filling lines do not operate in batches; thus, processing times can be individually estimated. They do not have minimum and maximum volume rules. Furthermore, a setup in machines from these stages is only required when an item changeover occurs.
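The reactor rules above (batch volumes between a reactor's minimum and maximum, plus a cleanup when the formula enters the reactor and after every five consecutive batches) can be sketched as a small feasibility check. The helper is purely illustrative: the function name, the assumption that the fewest feasible batches are always used, and the minimum-volume figure in the example are ours, not the paper's.

```python
import math

def batch_plan(volume, v_min, v_max, clean_after=5):
    """Return (batches, cleanups) for producing `volume` litres of one
    formula in a reactor with per-batch limits [v_min, v_max], or None
    if no batch count can respect the minimum mixing volume.
    Cleanups: one when the formula first enters the reactor, then one
    after every `clean_after` consecutive batches, i.e. ceil(n / clean_after)."""
    if volume <= 0:
        return 0, 0
    n = math.ceil(volume / v_max)   # fewest batches that can hold the volume
    if n * v_min > volume:          # batches would fall below the mixing minimum
        return None
    return n, math.ceil(n / clean_after)
```

For a 5000-litre reactor with a hypothetical 1000-litre minimum, 12 000 litres of one formula needs three batches and a single cleanup, while 26 000 litres needs six batches and therefore a second cleanup.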

Mathematical models
Let T be the set of time periods; I be the set of items; L be the set of formulas; K^w be the subset of machines in the weighing stage; K^r be the subset of machines composed exclusively of reactors; K^f be the subset of machines composed exclusively of filling lines; and L_k be the subset of formulas that can be produced in reactor k ∈ K^r; K is the set of all machines (K^w ∪ K^r ∪ K^f). The data and variables described below with superscript 'w' relate to the weighing stage, those with superscript 'r' relate to the reactors, and those with superscript 'f' refer to the filling lines. The parameters and decision variables are given as follows.

Decision variables

Parameters
Number of times a setup is needed to produce formula l in reactor k in period t.

Deterministic model
The optimisation model is now posed as follows, with the objective function followed by the constraints grouped by production stage (weighing, reactors, and filling lines). The objective function (1) minimises the overall production cost due to holding, backlogging and setup costs.
The weighing stage is represented by constraints (2)-(4). The set of constraints (2) ensures the production flow between the weighing stage and the reactors by enforcing that the total production of any item in this first stage must equal its production in the reactors in any period. Constraints (3) guarantee that a setup is performed at the weighing stage in any time period where production takes place. Finally, the weighing capacity is enforced by constraints (4). These constraints compute the expected time to execute the production plan in each period and impose a production capacity limit in time units.
The second stage, where items are passed on to the reactors, is portrayed by constraints (5)-(8). Similar to constraints (2), the first constraints in the reactor stage, (5), are used to enforce production flow between the current stage and the next one. In this way, during each period, the total production of any item in the reactors must be equal to its production in the filling lines. Constraints (6) ensure that the minimum and maximum volumes of the reactors are respected. They do so by enforcing that the total volume of any formula produced by any reactor k must always lie between multiples of its minimum and maximum volumes. The number of times each reactor must be cleaned for the production of formula l in each period is computed by constraints (7). This is achieved by enforcing that a cleanup is required whenever a formula is produced for the first time in the period in reactor k and after every five ($u^k_l$) consecutive batches of the same formula thereafter. Since parameter $u^k_l$ assumes a value of 0 if formula l cannot be produced in reactor k, constraints (6) together with (7) have the additional function of ensuring that formula l is produced in the correct reactor. Lastly, constraints (8) impose capacity restrictions for every reactor in every period by computing the total time it takes to produce the formulas in that period and comparing it with the machine production capacity in time units.
Finally, the filling stage is represented by constraints (9)-(12). Since the output of the machines from this stage consists of the final items that are ready to be sold, the formulation uses the total production of the filling lines to compute the inventory balance between periods in constraints (9) and (10). Constraints (9) represent the balance for the first period, whereas constraints (10) apply to the remaining periods. The setup requirement of the filling lines is imposed by constraints (11). The production capacity limit, in time units, of each machine in the filling stage is enforced by constraints (12). Finally, constraints (13) and (14) set the domain of the decision variables.

Baseline model with demand uncertainty based on Bertsimas and Thiele (2006)
One of the most popular approaches to incorporate demand uncertainty into lot-sizing problems is the robust optimisation methodology proposed by Bertsimas and Thiele (2006). This approach models demand variations within a polyhedral uncertainty set, in which the demand for an item i in a period t, $d_{it}$, is conceived as a symmetric and bounded uncertain variable inside the interval $[\bar{d}_{it} - \hat{d}_{it}, \bar{d}_{it} + \hat{d}_{it}]$, where $\bar{d}_{it}$ is the nominal demand value and $\hat{d}_{it}$ is its maximum deviation. In this modelling paradigm, decision-making conservatism is controlled by budgets of uncertainty $\Gamma_{it}$, such that $\Gamma_{i1} \le \Gamma_{i2} \le \cdots \le \Gamma_{i|T|}$ and $\Gamma_{it} \le \Gamma_{i(t-1)} + 1$, for all $i \in I$ and $1 < t \le |T|$. Their main role is to limit the total historical deviation, as shown in the following uncertainty set
$$\mathcal{Z}_t = \Big\{ z : d_{i\tau} = \bar{d}_{i\tau} + \hat{d}_{i\tau} z_{i\tau},\ |z_{i\tau}| \le 1,\ \sum_{\tau=1}^{t} |z_{i\tau}| \le \Gamma_{it},\ \forall i \in I,\ \tau \le t \Big\},$$
where $z_{i\tau}$ is an associated scaled deviation whose value ranges from $-1$ to $1$ and $(d_{i1}, \ldots, d_{it})$ is the history of demand realisations. The robust optimisation approach works by finding the production-setup plan that gives the best worst-case inventory/backlogging cost over the uncertainty set. This is achieved via a piecewise linear reformulation of the inventory balance constraints (9) and (10), where the constraints are replaced by (15) and (16), respectively, and a new objective function is defined in (17). From Bertsimas and Thiele (2006), constraints (15) and (16) can be reformulated as a set of linear inequalities, (18)-(21) (details of the derivations are shown in Appendix 3). Thus, our baseline robust lot-sizing counterpart, hereafter referred to as BRLS, consists of objective function (17) and constraints (2)-(8), (11)-(14) and (18)-(21).
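Under a budget of uncertainty, the worst-case cumulative demand deviation for an item has a simple closed form: spend the budget fully on the largest deviations and put the fractional remainder on the next largest. A minimal sketch (the function name is ours; the inputs play the role of the deviations $\hat{d}_{i\tau}$ up to period t and the budget $\Gamma_{it}$):

```python
def worst_case_deviation(deviations, gamma):
    """Maximise sum(d_hat[tau] * z[tau]) subject to |z[tau]| <= 1 and
    sum(|z[tau]|) <= gamma: set z = 1 on the floor(gamma) largest
    deviations and z = gamma - floor(gamma) on the next largest."""
    devs = sorted(deviations, reverse=True)
    full = min(int(gamma), len(devs))         # deviations taken at z = 1
    total = sum(devs[:full])
    if full < len(devs):
        total += (gamma - full) * devs[full]  # fractional remainder of the budget
    return total
```

With deviations (10, 5, 2) and a budget of 1.5, the worst case is 10 + 0.5·5 = 12.5; once the budget reaches the number of periods, the protection saturates at the full sum, 17.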

A novel fragility-based approach
A major issue with the previous approach is that, while it is known that violations of constraints (15) and (16) will happen when demand realisations fall outside the uncertainty set polyhedron, there is no explicit modelling of how these constraints will react to such violations, which means there is no conceptualised model behaviour outside the prescribed uncertainty set. In addition, identifying a value for $\Gamma_{it}$, the budget of uncertainty, for every time period and every product is particularly challenging, as it requires the decision maker not only to evaluate their level of conservatism for every product, but also to numerically assess how this level will change over time.
Last but not least, robust optimisation via budgets of uncertainty is often criticised for yielding overly conservative solutions and poor average performance. This happens because of the heavy focus on optimising the worst-case performance, which tends to lead to overprotected decision plans.
To circumvent these issues, we adopt an approach based on the fragility-based robust optimisation framework proposed in Long, Sim, and Zhou (2023). The principle of the fragility-based robust optimisation approach is: (1) instead of cost minimisation, a target is set for the cost; (2) target violation is allowed over the entire uncertainty support via a distance measure, such that as demands deviate from their nominal values, the allowable violation increases; and (3) a new objective is defined whereby the 'cost' of constraint violation is minimised, via a metric called the fragility measure.
The first step in formulating the fragility-based model relies on a specified cost target ν that represents the total (setup and backlogging/holding) cost that should not be exceeded under nominal demands. The total cost metric is then moved from the objective function to constraint (22). This immunises the overall production cost incurred, which includes the total setup cost, the total backlogging/holding cost and a cost of constraint violation (explained next), against uncertainty realisations that fall within the full box support of the uncertainty. It also sets a new objective function, min κ.
The variable κ is a measure of the fragility of the system and can be viewed as a cost of allowing the total setup and backlogging/holding cost to violate the pre-specified target. As κ decreases, smaller target violations are incurred for the same sequence of demand deviations (from nominal), indicating that the system is more resilient to uncertainty. The rationale behind the fragility-based formulation is that as the total demand deviates further from nominal, the decision maker accepts greater violations of the cost target, while aiming to keep the cost of these violations low. At nominal demand (z = 0), the total setup and backlogging/holding cost must not exceed the target, which can be easily inferred from inequality (22) when z = 0. The fragility-based approach avoids the need to estimate $\Gamma_{it}$, while explicitly modelling constraint violations. In addition, target estimations are invariably more intuitive than estimations of conservatism levels, since they are easier to conceive in the context of the application problem. The fragility-based model requires the estimation of a single target value, whereas the baseline approach of Bertsimas and Thiele (2006) needs estimations of conservatism levels, $\Gamma_{it}$, for every product in every time period. A straightforward way of setting the target is by using the formula (1 + α) times the optimal cost value of the nominal model, where the nominal model is simply the deterministic model with all demand values set to nominal.
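The interplay between the target and the fragility measure can be illustrated on a toy one-dimensional case. Given a cost that grows with the scaled demand deviation z, the smallest κ satisfying cost(z) ≤ ν + κ·|z| over the support is the worst ratio of target violation to distance from nominal. Everything below (the names, the linear toy cost, the grid over the support) is our illustration, not the paper's model:

```python
def fragility(cost_fn, target, z_grid):
    """Smallest kappa with cost_fn(z) <= target + kappa * |z| for every
    z in z_grid: the worst violation-to-distance ratio over the support."""
    kappa = 0.0
    for z in z_grid:
        if z != 0:
            kappa = max(kappa, (cost_fn(z) - target) / abs(z))
    return kappa

# Toy cost: nominal cost 100; each unit of |z| adds 20 of backlogging/holding.
cost = lambda z: 100 + 20 * abs(z)
target = 110.0                            # (1 + alpha) * nominal cost, alpha = 0.10
grid = [i / 10 for i in range(-10, 11)]   # box support |z| <= 1
```

Here the 10% target absorbs half of the worst-case cost growth and κ = 10 covers the rest; raising α shrinks κ, i.e. the plan becomes less fragile.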
While the fragility-based framework of Long, Sim, and Zhou (2023) offers multiple advantages over the traditional robust optimisation approach, it cannot be directly applied to our model and needs further reformulation to produce a solvable robust counterpart.A difficulty in moving H from the objective to the constraint ( 22) is that this invalidates the backlogging/holding cost linearisation represented by constraints ( 15) and ( 16).The original nonlinear formulation of the backlogging/holding cost constraints are where For ease of representation, we also define where We can therefore express constraint (22) as which can be rearranged to give an uncertainty-free lefthand side of and an uncertainty-dependent right-hand side of Since we require the constraint 'left-hand side ≤ righthand side', to be valid ∀z ∈ {|z it | ≤ 1, i ∈ I, t ∈ T }, the right-hand side becomes the auxiliary problem min The equivalence exists because the solution of ρ it will always occur at an extreme point of the feasible region, meaning that it will be either 0 or 1.Further improvement to the fragility-based approach can be made by considering the specifics of cost violation within the lotsizing context.For a product i at time t and a fixed production level, if demand is being backlogged, i.e. 
when cumulative demand is greater than the current production level, any demand increase will worsen the backlogging cost, whereas drops in demand will lower it. On the other hand, if the production level is greater than the cumulative demand, which means inventory is being held, any increase in demand will lower the holding cost, whereas drops in demand will worsen it. This indicates that we can achieve greater realism by making the fragility measure product- and time-dependent and connecting demand deviations with whether inventory is being held or demand is being backlogged, which is portrayed in the reformulation of model (23) to

min

and the definition of a new objective function min Σ_{i∈I} Σ_{t∈T} κ_it to replace min κ. When ρ_it = 1, the cost violation is κ_it d̂_it z_it, which means that when the production plant has existing backlogging, an increase in demand (z_it > 0) leads to an increase in cost violation, whereas a decrease in demand (z_it < 0) leads to a decrease in cost violation. However, when ρ_it = 0, the cost violation is −κ_it d̂_it z_it, which means that when the production plant has existing inventory being held, an increase in demand leads to a decrease in cost violation, whereas a decrease in demand leads to an increase in cost violation. This is in line with the principle that a plant with excess inventory benefits from increased demands, whereas one with an inventory shortage suffers from them. When fully expanded, the model is

which is bilinear and thus nonconvex. We apply McCormick envelopes to derive the following convex relaxation of the model:

Dualizing, we obtain the model

We note that FRLS relies on the convex relaxation of bilinear terms via McCormick envelopes. The original bilinear constraint is

Suppose that Z* is the value of the right-hand side model and Z^R* is the value of its relaxed linear model. We know that Z^R* ≤ Z*. This means that

which implies that the relaxed linear model is a tighter bound on Σ_{i∈I} Σ_{t∈T} w(v_it) − ν. The term Σ_{i∈I} Σ_{t∈T} w(v_it) represents the setup cost and ν represents the total-cost target, and since the total cost (and its target) is expected to be higher than the setup cost, we expect Σ_{i∈I} Σ_{t∈T} w(v_it) − ν ≤ 0. The relaxed model therefore forces Σ_{i∈I} Σ_{t∈T} w(v_it) − ν to be more negative, and hence the setup cost to be lower. Lower setup leads to lower production and therefore to less conservative solutions. General suboptimality gaps for McCormick relaxations are not available in the literature. However, there are proven relationships between the McCormick gap and the convex hull gap (Boland et al. 2017) for [0, 1] bilinear problems, namely

The user-specified target ν influences the value of the fragility measure κ: the higher the target, the lower the fragility measure. The following simple problem helps better understand the relationship between the target and the fragility measure.
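The McCormick relaxation used above can be illustrated on a single bilinear term. The sketch below is a minimal illustration, not the paper's implementation; the function name and the bound κ ∈ [0, κ_max] are our assumptions. It computes the envelope of w = κρ with κ ∈ [0, κ_max] and ρ relaxed from {0, 1} to [0, 1]:

```python
def mccormick_bounds(kappa, rho, kappa_max):
    """McCormick envelope of the bilinear term w = kappa * rho,
    for kappa in [0, kappa_max] and rho in [0, 1]."""
    # Lower envelope: w >= kL*rho + kappa*rL - kL*rL and
    #                 w >= kU*rho + kappa*rU - kU*rU, with kL = rL = 0
    lower = max(0.0, kappa_max * rho + kappa - kappa_max)
    # Upper envelope: w <= kU*rho + kappa*rL - kU*rL and
    #                 w <= kL*rho + kappa*rU - kL*rU
    upper = min(kappa_max * rho, kappa)
    return lower, upper
```

At the binary extremes ρ ∈ {0, 1} the envelope is tight (lower = upper = κρ), which is consistent with the observation that ρ_it takes extreme-point values at the optimum; only at fractional ρ does the relaxation introduce slack.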
Illustrative problem. Suppose we have the following single-period, single-item uncapacitated robust lot-sizing problem (the model and its fragility-based version are assumed to be feasible and bounded):

Its fragility-based version, as we can see from (24), is given as

We know that the optimal value of ρ is either 0 or 1. Suppose that the optimal value of κ is positive (i.e. κ* > 0).

Case ρ = 1: If production is backlogged, ρ = 1 and the model becomes

The first equivalence holds because validity over all −1 ≤ z ≤ 1 is required. The second equivalence follows because the coefficient of z is positive.
Case ρ = 0, κ ≤ c^h: When the item is held,

As in the case ρ = 1, the equivalence holds because validity for all −1 ≤ z ≤ 1 is required. Here, validity for all −1 ≤ z ≤ 1 is achieved at z = −1, since the coefficient of z is negative. We can summarise the bounds on κ via the inequalities

where the implication follows from the fact that the fragility-based approach aims to minimise κ and we assumed that the optimal κ is positive. We observe the negative relationship between the fragility measure and the target in the case where there exists a fragility measure lower than the holding cost: a higher target leads to a lower fragility measure. In addition, for the same setup and production, a unit increase in the target reduces the fragility measure by a factor of 1/d̂. This also shows that under higher uncertainty levels (higher d̂), the fragility measure becomes less sensitive to the target.
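The 1/d̂ sensitivity noted above can be made explicit with a schematic one-line derivation. This is a sketch under the assumption that the binding constraint takes the generic form below, with C collecting the z-independent setup/production cost terms:

```latex
% Schematic binding target constraint at the worst-case extreme point |z| = 1:
%   \kappa \hat{d} \;\ge\; C - \nu .
% The smallest feasible fragility measure and its sensitivity to the target are
\kappa^* = \max\!\left(0,\ \frac{C - \nu}{\hat{d}}\right),
\qquad
\frac{\partial \kappa^*}{\partial \nu} = -\frac{1}{\hat{d}}
\quad \text{whenever } \kappa^* > 0 .
```

This matches both observations in the text: a higher target ν lowers κ*, and a larger deviation d̂ flattens the dependence of κ* on the target.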

Computational experiments
The key objective of our computational study is to evaluate the performance of our fragility-based approach vis-à-vis the deterministic approach and the traditional/baseline RO approach under different conditions that represent alternative production planning environments. For this purpose, we construct 9 problem instances that include a base case (all original company data maintained) and cases with cost or capacity alterations that capture different economic realities. Our computational experiments on every model (deterministic/BRLS/FRLS) are conducted in a systematic manner. For every instance, we solve the model in question to obtain the optimal setup-and-production plan. Then, the backlogging/holding costs incurred by this plan are computed for 2000 randomly generated demand scenarios. These two steps are repeated for different uncertainty levels and different values of the tuning parameters, i.e. the conservatism level for BRLS and the cost target for FRLS. All computational experiments are executed on a computer with an Intel Core i7-8700 CPU @ 3.60 GHz and 16 GB of RAM, using the solver CPLEX v.20.1 via the GAMS 37.1 modelling language. We impose a time limit of 3600 seconds for the solution of the models.
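The evaluation protocol above (fix a plan, then score it on 2000 random demand draws) can be sketched as follows. This is a simplified single-item illustration with hypothetical function and variable names, not the paper's pipeline, which re-solves the deterministic model per scenario with setups and production fixed:

```python
import numpy as np

def simulate_plan_costs(production, d_nominal, d_hat, c_hold, c_back,
                        n_scenarios=2000, seed=0):
    """Score a fixed production plan over random demand scenarios.

    Demands are drawn uniformly in [d_nominal - d_hat, d_nominal + d_hat],
    i.e. [(1 - delta) * d_nominal, (1 + delta) * d_nominal] when
    d_hat = delta * d_nominal. Positive end-of-period net inventory is
    held, negative net inventory is backlogged."""
    rng = np.random.default_rng(seed)
    T = len(d_nominal)
    demands = d_nominal + d_hat * rng.uniform(-1.0, 1.0, size=(n_scenarios, T))
    inventory = np.cumsum(production - demands, axis=1)  # net inventory per period
    cost = (c_hold * np.clip(inventory, 0.0, None)
            + c_back * np.clip(-inventory, 0.0, None)).sum(axis=1)
    return cost.mean(), cost.std(), cost.max()           # Avg, Std Dev, Worst
```

The three returned statistics correspond to the Avg, Std Dev and Worst columns reported in the results tables.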

Problem instances
Table 2 summarises the 9 instances that were constructed and their respective characteristics. In line with the practice of our industrial partner, a major company in the Brazilian veterinary pharmaceutical sector, we use a planning horizon of one year, divided into 12 time periods for monthly decision-making. We consider the data of a manufacturing cell that produces 27 different items/products using 11 formulas and 9 heterogeneous machines, among which 1 is a weighing machine, 4 are reactors, and 4 are filling machines. Most of the data used in this study are taken directly from a spreadsheet provided by the company, which we have consolidated and supplemented with estimates of missing parameters, as shown in Table 3. The most important missing information is item-specific setup times and costs. Setup times are estimated based on the company's historical utilisation of individual machines, by assuming that any downtime in machine k is due to the machine undergoing a setup. We call this value the Estimated Setup Time for k, or EST_k. As this value is not item-specific, we multiply it by the proportion of the overall processing time × number of setups spent on item i. The rationale is that items with higher processing times, or that require more setups to start production, will consume a greater portion of the overall setup time. Following notation consistent with our mathematical programming formulation, let ŷ^w_ikt, n̂_lkt and ŷ^f_ikt be the company's historical monthly numbers of setups in the weighing, reactor and filling stages, respectively. Using the weighing machine for illustration, the setup time for item i is estimated as

The cost of setting up machine k to process item i is estimated as the company's lost profit when the machine is idle due to setups. To obtain this value, we compute the number of units of item i that could have been produced by machine k during its setup time and multiply that by the item's expected unit profit, denoted by
EUP_i, which was obtained from the company's annual report. For example, suppose that item i has EUP_i = 3 BRL (Brazilian Reais) and that this item requires a^w_ik = 2 minutes to be processed on weighing machine k, with a corresponding setup time of s^w_ik = 60 minutes. Then, the estimated setup cost incurred by the machine would be (s^w_ik / a^w_ik) × EUP_i = (60/2) × 3 = 90 BRL, since the weighing machine could have processed 30 items during the setup, each providing a 3 BRL profit. For reactors, the unit profit on formula l is considered, rather than the profit on item i, and it is denoted by EUP_l. Additional information about the parameter values is presented in Appendix 1.
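The lost-profit estimate can be written as a one-line helper; the sketch below reproduces the worked example from the text (the function name is ours):

```python
def estimated_setup_cost(setup_time, unit_process_time, unit_profit):
    """Setup cost as lost profit: the number of units that could have been
    produced during the setup (setup_time / unit_process_time), multiplied
    by the expected unit profit (EUP)."""
    return (setup_time / unit_process_time) * unit_profit

# Worked example: s_w = 60 min, a_w = 2 min/unit, EUP = 3 BRL -> 90.0 BRL
cost_brl = estimated_setup_cost(60, 2, 3)
```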

Base case results and discussion
We first study the base case instance (F1), where the original consolidated and supplemented company data is used, and compare the performance of FRLS (the fragility-based model) against BRLS (the baseline RO model) and the deterministic model over different uncertainty levels. Uncertainty levels are characterised via demand deviations, d̂_it, where for every i ∈ I, t ∈ T, we set d̂_it = δ d̄_it. Here, δ represents the percentage deviation from nominal, meaning that the uncertain demand falls within the range [(1 − δ) d̄_it, (1 + δ) d̄_it]. A higher δ translates to an allowance for larger demand deviations, which indicates a higher uncertainty level. All our experiments are conducted over δ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}, but we report the results for δ = 0.1, 0.3 and 0.5 for the sake of conciseness. The performance of the robust models BRLS and FRLS depends on their 'robustification/tuning' parameters, which for BRLS is the level of conservatism/budget of uncertainty and for FRLS is the cost target. From here on, whenever we use the term 'tuning', we refer to running BRLS over different budget-of-uncertainty values and running FRLS over different cost targets. At every uncertainty level, we analyse BRLS over Γ_it = Γ̄ ∈ {1, 3, 5, 7, 10, 12}, for all i ∈ I and t ∈ T, and FRLS over different cost target values, where the target is set to α × (optimal value of the deterministic model), and α is assumed to be greater than 1. In addition, five cost targets are chosen such that α ∈ {α_1, α_2, α_3, α_4, α_5}. The value of α_1 is the first α (> 1 and to 1 decimal place) for which FRLS is feasible. As an underlying principle of the fragility-based approach, we know that as the cost target increases, the objective value decreases, since smaller target violations are necessary across the support of the uncertainty set. We use this principle and choose α_5 to be the first value (> α_1) for which the optimal FRLS objective function value is 0 (incrementing α in steps of 0.1 whenever possible, or
0.01 whenever α = 1.2 gives an optimal objective function value of 0). In that case, α_5 × (optimal value of the deterministic model) represents the minimum target that remains non-violated across the uncertainty support. To obtain the other three α values, α_2, α_3 and α_4, such that all five values are equally spaced, we apply the formula

The resulting set of α values for each δ is shown in Table 4. The production planning results for instance F1 are summarised in Table A14 in Appendix 2, which presents the optimal setup costs, as well as the average (Avg), standard deviation (Std Dev) and worst-case values (Worst) for the holding/backlogging and total costs over 2000 randomly generated demand scenarios, considering the range [(1 − δ) d̄_it, (1 + δ) d̄_it]. We compute the holding/backlogging and total cost in each scenario by fixing the production and setup values as in the solution provided by the respective model, and then solving to optimality the deterministic model considering only the demand of that scenario. All cost results are given in Brazilian Reais (BRL). To better visualise the costs presented in Table A14, Figure 2 shows the breakdown of the average total cost into setup, holding and backlogging costs for each uncertainty level.
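The equally spaced tuning grid for the cost-target multiplier can be sketched as follows. This is a minimal illustration; the endpoints α_1 and α_5 are found by the feasibility and zero-objective searches described above, and the example values are hypothetical:

```python
def alpha_grid(alpha_1, alpha_5, n=5):
    """Equally spaced cost-target multipliers between the first feasible
    value (alpha_1) and the first value giving a zero optimal FRLS
    objective (alpha_5)."""
    step = (alpha_5 - alpha_1) / (n - 1)
    return [round(alpha_1 + i * step, 4) for i in range(n)]

# e.g. alpha_1 = 1.1, alpha_5 = 1.5 gives the grid 1.1, 1.2, 1.3, 1.4, 1.5
```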
The setup cost is considerably lower than the costs of inventory holding/backlogging (roughly an order of magnitude smaller). As the setup costs are relatively small, and difficult to see, we have placed a table with their values next to each graph. Across all robust models, as the uncertainty level increases, the relative contribution of the setup cost to the overall cost decreases further, as it is outpaced by surges in holding and backlogging costs. Higher uncertainty levels mean more extreme demands (higher maximum and lower minimum demands), which translates into larger differences between demands and production levels. As expected, both BRLS and FRLS outperform the deterministic model on the 2000 randomly generated scenarios, across all tuning parameters and uncertainty levels, giving lower average and worst-case total costs, as well as smaller standard deviations of total costs. This is because both BRLS and FRLS protect the production plan against uncertainty, whereas the deterministic model is not designed to hedge against uncertainty. Because unit backlogging costs are more expensive than unit holding costs, BRLS and FRLS achieve this protection by raising production levels, thereby holding more inventory and backlogging less. Amidst this overarching trend, we observe that FRLS maintains greater model stability (in terms of production planning) than BRLS during the tuning process. When BRLS is tuned to different levels of conservatism, there are significant changes in production levels, which is shown by marked differences in holding and backlogging costs. Specifically, at δ = 0.1, holding costs for BRLS vary from 2.37 × 10^7 at Γ̄ = 1 to 2.94 × 10^7 at Γ̄ = 12 (24% increase), whereas the holding costs for FRLS vary from 2.27 × 10^7 to 2.32 × 10^7 (2.2% increase). Accordingly, backlogging costs for BRLS decrease from 5.38 × 10^7 at Γ̄ = 1 to 4.73 × 10^7 at Γ̄ = 12 (12% decrease), whereas the backlogging costs for FRLS vary from 3.59 × 10^7 to 3.69 × 10^7 (3% decrease). This discrepancy in model stability becomes even more prominent at higher uncertainty levels, as we can see from the holding and backlogging cost results shown in Table A14 (Appendix 2) across different values of δ. This indicates that a decision maker who is unsure of the exact value of Γ̄ that represents their level of conservatism faces vastly different production planning outcomes, which is undesirable given the subjectivity involved in quantifying conservatism, let alone in doing so for every product and time period.
Despite the high sensitivity of optimal BRLS decisions to the decision maker's attitude towards uncertainty, its average total cost maintains stability comparable to FRLS. It varies in the range [7.96 × 10^7, 8.04 × 10^7], compared to [7.67 × 10^7, 7.77 × 10^7] for FRLS, with similar levels of stability preserved across uncertainty levels. However, for every tuning parameter value at every uncertainty level, the average total cost of FRLS is lower than that of BRLS, showing that the fragility-based approach outperforms BRLS across all base-case experiments in terms of average total costs. For δ = 0.1, 0.3 and 0.5, the average total costs of the best-tuned BRLS models are 7.89 × 10^7, 1.24 × 10^8 and 1.71 × 10^8, respectively. For FRLS, these values are 7.57 × 10^7, 1.18 × 10^8 and 1.67 × 10^8, respectively, which is 3% to 5% cheaper. From Figure 2, we recall the aforementioned trend that the robust models are 'backlogging averse' compared to the deterministic model, meaning that they tend to raise production so as to lower backlogging levels as much as possible and thus dampen the impact of the corresponding higher unit costs. The figure also clearly shows that the fragility-based approach is less backlogging averse than the baseline RO approach, but more so than the deterministic approach. FRLS is clearly a middle-ground approach that avoids the well-known over-conservatism of BRLS, while ensuring better protection against uncertainty than deterministic planning. For δ = 0.1, 0.3 and 0.5, the best-tuned FRLS models have backlogging costs 4.3%, 2.4% and 1.8% higher than BRLS, respectively, but lower holding costs by 11.5%, 16.0% and 8.3%, respectively, showing the restraint imposed by FRLS on over-production, and thus over-conservatism, in favour of heftier holding cost savings. This is because FRLS explicitly models cost violations and aims to build a production plan whereby changes in demands result in minimal cost violations, which leads to a system that is
neither over-sensitive to higher demands nor to lower demands, unlike BRLS, whose focus is optimising worst-case performance. The benefits of worst-case optimisation for worst-case costs are clearly shown in Figure 3, where BRLS outperforms FRLS by 1.6%, 7.6% and 7.1% for δ = 0.1, 0.3 and 0.5, respectively. For the same reason, BRLS tends to have lower standard deviations of total costs.

Solution times and comparisons with stochastic programming
For a comprehensive baseline comparison, we show the results of the sample average approximation (SAA) of the stochastic programming version of our model. The SAA formulation is built on constraints (8) and (11)-(14), where d_its is demand data generated uniformly in the range [(1 − δ) d̄_it, (1 + δ) d̄_it] for each scenario s ∈ S. We denote the cardinality of S by |S|. The SAA model has O(2|I||T||S|) additional constraints and O(|I||T||S|) additional decision variables compared to the deterministic model. For the company data of 27 items and 12 time periods, a 100-scenario model has almost 65,000 additional constraints and 33,000 additional decision variables. This compares to 4000 additional constraints and 4000 additional decision variables for our robust lot-sizing models. Not surprisingly, CPLEX struggles to solve the SAA model even when it is trained on small numbers of scenarios. Performance comparisons of our deterministic and robust models against the 100-scenario SAA are shown in Table 5. These comparisons are across 2000 randomly generated scenarios (these are out-of-sample for SAA, i.e. none of the 100 scenarios used to train SAA are part of the 2000 scenarios). At the lowest uncertainty level (δ = 0.1), SAA performs better than FRLS on average cost, although its standard deviation and worst-case costs are higher than those of FRLS. Its higher average backlogging cost is outweighed by savings on the average holding cost. SAA performs worse than FRLS at all other uncertainty levels: it has higher average total costs, higher standard deviations of total costs and higher worst-case costs than FRLS. The main reason is that the costlier backlogging incurred by SAA cannot be counterbalanced by savings in inventory holding when more extreme demands occur. We also note that solution times for SAA are much higher than for the other models. SAA also fails to find optimal solutions within 1 hour, with optimality gaps of around 1%.
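The model-size comparison above is easy to reproduce from the stated complexity orders. This is a back-of-envelope sketch; the |I||T|² count for the robust models is taken from the complexity discussion elsewhere in the paper:

```python
# |I| items, |T| periods, |S| training scenarios, from the company data
I, T, S = 27, 12, 100

saa_extra_constraints = 2 * I * T * S  # O(2|I||T||S|): 64,800 (~65,000)
saa_extra_variables = I * T * S        # O(|I||T||S|):  32,400 (~33,000)
robust_extra = I * T * T               # O(|I||T|^2):    3,888 (~4,000)
```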

Comparisons of best-tuned models across instances F2-F9
In the previous subsection, we showed that the fragility-based approach yields the cheapest production plans on average, irrespective of the choice of tuning parameter values, and is also less sensitive to the decision maker's risk attitude. We also showed full results of the tuning process, where model-specific parameters (Γ̄ for BRLS and α for FRLS) were varied and the performance of the resulting production plans was reported. This section compares model performance across the other instances, which consider different cost and capacity values, to show the power of the fragility-based approach when faced with alternative production planning settings. From now on, we compare best-tuned models, showing differences between the best-performing (best average total cost) BRLS and FRLS and the deterministic model across instances F2-F9. Instances F2, F3 and F4 portray situations with higher holding, backlogging and setup costs, respectively. Instances F5 and F6 conceive situations where the overall production capacity is reduced and increased, respectively. Instances F7, F8 and F9 create bottleneck production stages by enforcing significant capacity drops on the weighing machine, reactors, and filling machines, respectively.

Cost-based instances F2-F4
Full experimental results for the cost-based instances F2-F4 are shown in Table A15 in Appendix 2 and visually summarised in Figure 4. As in the base case, we also observe the inherent backlogging aversion that comes with planning under uncertainty via BRLS and FRLS. As expected, the aversion is more pronounced in instance F3, where backlogging costs are increased by 25%. Indeed, when backlogging costs are higher, the backlogging-averse BRLS/FRLS will aim to reduce backlogged inventory even further. Interestingly, F3 is also the instance where planning under uncertainty outperforms deterministic planning the most. For F3, average total cost savings of 12% and 14% are observed for BRLS and FRLS, respectively, with respect to the deterministic model. For F2, the cost savings are 6% and 9% for BRLS and FRLS, respectively, and for F4, they are 9% and 12%. We can also observe the mechanism via which FRLS reduces the conservatism of BRLS. FRLS has fewer setups (indicated by lower setup costs) than BRLS and therefore accepts slightly higher backlogging costs (5% on average) in order to achieve significant holding cost savings (15% on average), thus leading to improvements in average total costs across all instances. Compared to the base case, where the best-tuned BRLS and FRLS models make cost savings of 9% and 12%, respectively, over the deterministic model, planning under uncertainty maintains similar benefits on the cost-based instances (9% and 12% for BRLS and FRLS, on average, across instances F2-F4). This shows that taking uncertainty into account results in considerable cost savings even under economic situations that enforce higher setup, holding or backlogging costs. However, the over-protection prescribed by BRLS because of the worst-case approach is still evident. Across all cost-based instances, BRLS has total backlogging cost 28% lower, but total holding cost 37% higher, on average, than the deterministic model. In the base case, the figures are 30% and 14% for backlogging and
holding costs, respectively, showing that BRLS reacts to higher costs by becoming even more backlogging-averse. On the other hand, across all cost-based instances, FRLS lowers the over-production and incurs total backlogging cost 24% lower, but total holding cost only 18% higher, on average, than the deterministic model. Moreover, the base-case figures are 25% and 10% for backlogging and holding costs, respectively, showing the lower sensitivity of FRLS to cost-increasing economic circumstances. FRLS therefore offers model stability across the cost-based instances, in the sense that it not only maintains superior performance, but also keeps production plans more stable under higher input costs. The trade-off remains that worst-case performances for FRLS are more costly (8% on average) than for BRLS.

Production-capacity-based instances F5-F6
Table A16 in Appendix 2 details the experimental results for instances F5 and F6, where the overall capacity is reduced and increased by 25%, respectively. Figure 5 gives a visual summary of the results. A prominent observation is that overall capacity loss significantly impacts the performance of the prescribed production plans. The average total cost across all best-tuned models and all uncertainty levels for the base case is 1.28 × 10^8. For the cost-based instances, this average increases to 1.38 × 10^8 (8% rise), whereas for instance F5 (capacity decreased by 25%), it surges to 1.87 × 10^8 (46% rise with respect to the base case). Furthermore, the benefits of employing our fragility-based approach become even more pronounced: FRLS improves the BRLS average total cost by 4%, compared to 3.6% for the base case and 2.7% for the cost-based instances. We observe that in instance F5, FRLS shifts its holding-backlogging balance. When the overall production capacity is 25% lower, production levels are capped more tightly, leading to inflated backlogging quantities and lower levels of held inventory. Because of this, the fragility-based approach understandably shifts its strategy to become more backlogging averse than the baseline RO approach, in order to mitigate the effects of the inevitable rise in backlogged items and thus reduce the overall average cost. This is shown by the FRLS backlogging costs being lower than those of BRLS for instance F5, which is contrary to what has been observed so far in the other instances. Conversely, when the production capacity is 25% higher, FRLS reverts to its original strategy of reducing the over-aversion to backlogging caused by the BRLS worst-case approach. This is shown in the lessening of backlogging aversion so as to enable greater reductions in holding costs. Another important observation concerns the impact of planning under uncertainty. When the production capacity is reduced by 25%, taking uncertainty into account has less pronounced
impacts on the performance of the production plan, although it still produces more economical plans than deterministic planning (1% and 5% cost savings over deterministic planning for BRLS and FRLS, respectively). This is a consequence of the tight capacity narrowing the decision space of the models and therefore not leaving enough room for more significant changes to the production plan. In comparison, the average costs resulting from BRLS and FRLS for the increased-capacity instance, F6, are considerably lower than the deterministic solution, with 12% and 17% cost savings, on average, for BRLS and FRLS, respectively. When the production capacity is higher, robust models have more leeway to protect the system by producing higher quantities and thus reducing expensive backlogging.

Bottleneck instances F7-F9
Having seen in the previous subsection that capacity limitations have dramatic effects on model behaviour (a 46% rise in average costs when production capacity was reduced by 25%), we now deepen the capacity investigation by introducing bottlenecks in the production plant. This is done through three instances, F7, F8 and F9, where 50% capacity reductions are imposed on the weighing machine, reactors, and filling machines, respectively. The experimental results are summarised in Table A17 in Appendix 2 and visually illustrated in Figure 6. Across all instances, the fragility-based approach yields lower average total costs. We observe that the total costs are less impacted by a bottleneck weighing machine than by bottleneck reactors or filling machines. This is because every reactor can only produce a subset of the available formulas, and both the reactors and the filling machines have longer processing times per item than the weighing stage. When the weighing stage is made the bottleneck, FRLS remains less backlogging averse than BRLS (a rise of 5% on average in backlogging costs, in order to cut holding costs by 13% and overall costs by 2.6%). When reactors or filling machines are made bottlenecks, it is notable that BRLS only outperforms the deterministic model by a small margin (0.3% on average) compared to FRLS (3.3% on average). This is because the baseline RO approach aims to raise production levels and reduce backlogging, but in a way that is limited not only by machine capacities but also by the budget of uncertainty. FRLS, on the other hand, has no budget-of-uncertainty restrictions and therefore has more flexibility in its production planning strategies. While generally producing less backlogging-averse plans, FRLS can also produce plans that lower the backlogging levels of BRLS when capacities are tight and significant cost savings cannot be made on inventory holding. As a result, FRLS outperforms BRLS even when bottlenecks are introduced.

Summary of managerial insights and discussions on the limitations of the fragility-based approach
Our tests were run across nine different instances representing different production realities. In this section, we summarise the managerial insights gathered in the previous section so as to make them accessible to decision makers. We then discuss the limitations of the fragility-based approach. The main takeaways from our experiments are:

(a) Protecting against uncertainty saves cost. Across all our experiments, robust lot-sizing plans are less costly (on average, in the worst case, and in terms of standard deviation) than the deterministic plan. The robust models achieve this improvement by raising production levels so as to lower expensive backlogs.

(b) The traditional approach to robust lot-sizing is sensitive to the decision maker's risk aversion. Its performance varies greatly with the budget of uncertainty. This sensitivity is not unpredictable, however, and follows a pattern: from low to medium budgets of uncertainty, the total cost decreases, and from medium to high budgets of uncertainty, the total cost increases. Higher budgets of uncertainty lead to higher production levels, as the robust model protects the production plan against higher levels of demand variation. This translates to lower backlogging and higher holding. Beyond a certain budget of uncertainty, surges in holding costs begin to outweigh savings on backlogging costs.

(c) Fragility-based lot-sizing offers greater cost savings, on average, than traditional robust lot-sizing. Across all our instances, we observe lower average total costs from the fragility-based approach compared to the baseline robust approach. The fragility-based approach is less conservative than the baseline robust approach, and it achieves this lower conservatism via an adaptable production planning mechanism, which we detail below.

(d) The fragility-based approach is less backlogging-averse than the traditional robust approach when production capacity is not tight. It offers a middle ground between the overly conservative baseline
robust approach and the deterministic model.By explicitly modelling cost violations, the fragility-based approach leads to a production plan that is not over-sensitive to either high or low demands.The baseline approach, on the other hand, pushes for the best worst-case performance without heed for performances outside the worst case.(e) The fragility-based approach is more backloggingaverse than the traditional robust approach when production capacity is tight Tight production capacity inevitably leads to higher backlogs.The fragility-based approach shifts its strategy and becomes more backlogging-averse than the baseline robust approach in order to mitigate the impact of the inevitable rise in backlogged production.
Fragility-based lot-sizing has its limitations. It requires a pre-specified distance measure to portray cost variations with departures from nominal uncertainty, and the choice of distance measure impacts the tractability of the resulting model. Furthermore, the theoretical relationship between the distance measure and the optimal solution is still unknown. This contrasts with the baseline robust optimisation approach, where bounds on probabilities of constraint violation are known (although loose and difficult to use in practical problems) and linked to the budget of uncertainty. The fragility-based approach, understandably, has worse worst-case performance than the baseline robust approach: its worst-case costs range between 2% and 16% higher than the baseline approach, and on average across all instances, its worst-case cost is 8% higher. In situations where protection against worst-case performance is paramount, even at the expense of average performance, the over-conservatism of the baseline approach can be beneficial. Although the fragility-based approach is less sensitive to target estimation than the baseline approach is to the budget of uncertainty, this relationship is as yet empirical. As far as we are aware, there are no known theoretical results on the sensitivities of either approach. We therefore recognise that our observations regarding this issue are based on the veterinary pharmaceutical lot-sizing problem studied in this paper, and that care must be exercised before generalising to other problems.

Concluding remarks
In this paper, we develop a novel lot-sizing approach for the production planning problem typical of veterinary pharmaceutical companies, whose production stages involve weighing, mixing and filling steps. We propose a fragility-based optimisation approach, where the decision maker specifies a target cost and the model aims to minimise violations of this cost over the entire support of the uncertainty set. This differs from traditional robust optimisation approaches because it requires the choice of a cost target, instead of hard-to-estimate budgets of uncertainty, and explicitly models constraint violations. Computational experiments across nine different problem instances show that our fragility-based approach consistently reduces average total costs and maintains greater model stability under different target estimations. This contrasts with the budget-of-uncertainty approach, which has well-established high sensitivity to the budget of uncertainty and therefore to the risk attitude of the decision maker. Our fragility-based model also preserves cost savings when bottlenecks are introduced in the production plant and when inventory costs and capacities are varied. Cost savings are achieved by mitigating the over-conservatism of the budget-of-uncertainty approach and providing a beneficial balance of holding and backlogging costs so as to cut back overall costs. As a general rule, while the budget-of-uncertainty approach aims to optimise worst-case performance, our fragility-based approach aims to minimise the sensitivity of the production plan to demands that depart from nominal values, which makes the overall production plant less 'fragile' and more resilient to demand variations. There are interesting future research directions to explore with regard to the fragility-based lot-sizing approach. The first is the impact of using the Wasserstein distance within the fragility-based approach. The Wasserstein distance is known to possess interesting theoretical
features, one of which being that it offers a data-driven approach to modelling uncertainty, with asymptotic optimality guarantees on large datasets.The second research direction involves understanding the link between the distance measure, out-of-sample performances and the fragility measure.This research direction will lead to tailored distance measures for different lot-sizing problems, in such a way that less fragile systems are further encouraged.
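To make the fragility idea concrete, the sketch below evaluates a fragility measure for a fixed production plan: the average violation of a user-specified cost target over sampled demand realisations. The cost model (single item, linear production, holding and backlogging costs) and all numbers are hypothetical illustrations, not the paper's full FRLS formulation.

```python
# Toy illustration of the fragility measure: for a fixed production plan,
# fragility is the average violation of a cost target over demand scenarios.
# Cost parameters and data are hypothetical, chosen only for illustration.

def plan_cost(production, demand, unit_cost=2.0, holding=1.0, backlog=5.0):
    """Cost of a single-item, multi-period plan under one demand path."""
    inventory, cost = 0.0, 0.0
    for q, d in zip(production, demand):
        inventory += q - d                    # positive: stock; negative: backlog
        cost += unit_cost * q
        cost += holding * inventory if inventory > 0 else -backlog * inventory
    return cost

def fragility(production, demand_scenarios, target):
    """Average violation of the cost target over the demand scenarios."""
    violations = [max(0.0, plan_cost(production, d) - target)
                  for d in demand_scenarios]
    return sum(violations) / len(violations)

plan = [10, 10, 10]
scenarios = [[8, 12, 10], [11, 9, 10], [14, 10, 12]]
print(fragility(plan, scenarios, target=80.0))
```

In the full model the minimisation runs over the uncertainty support rather than a finite sample, and the production plan itself is the decision variable; this sketch only shows how a plan's fragility would be scored.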

Disclosure statement
No potential conflict of interest was reported by the author(s).
Table A4. Set of items that can be produced in each filling line. The remaining tables (Tables A5-A13) provide the values of all relevant parameters. Table A5 shows the capacity, in seconds, of each machine; note that this capacity does not change between periods. Table A6 presents the minimum and maximum volume, in litres, that each reactor can process in each batch, and Table A7 shows the volume of formula, in litres, required to produce each item. Table A8 gives the processing times, in seconds, for each item on stage I and III machines, and Table A9 presents the time, in seconds, needed to produce a batch of each formula on each reactor in stage II. Table A10 presents the setup costs and times, in seconds, for stage I and III machines, and Table A11 shows these values for stage II machines. Table A12 details the holding and backlogging costs for each item, and the initial inventory and backlog at the start of the instance's planning horizon; note that the holding, backlogging and setup costs do not change between periods. Finally, Table A13 shows the demand for each item in each period.
Table A7. Volume of formula required to produce each item in litres.

Figure 1. Production process of animal pesticide.
where N is the number of terms involved in bilinear multiplications. Although suboptimality gaps are not precisely known, the McCormick relaxation ensures that FRLS remains in the same model class as the deterministic model. Furthermore, the model has O(|I||T|²) additional constraints and variables compared to the deterministic model, where |I| and |T| denote the cardinality of the sets I and T, respectively. This matches BRLS, which also has O(|I||T|²) additional constraints and variables relative to the deterministic model. FRLS thus maintains the same model class as the deterministic model and shares similar complexity with the baseline approach.
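For reference, the standard McCormick envelope underlying this relaxation replaces each bilinear product w = xy, with x and y restricted to known bounds, by four linear inequalities (a textbook construction, stated here in generic notation rather than the paper's variables):

```latex
% McCormick envelope for w = xy with x \in [x^L, x^U], y \in [y^L, y^U]
\begin{align}
w &\ge x^L y + x y^L - x^L y^L, \\
w &\ge x^U y + x y^U - x^U y^U, \\
w &\le x^U y + x y^L - x^U y^L, \\
w &\le x^L y + x y^U - x^L y^U.
\end{align}
```

The relaxation is exact when x or y sits at a bound, and its tightness depends on the widths of the intervals, which is why suboptimality gaps are instance-dependent.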
of the total processing time × number of setups spent on item i in the weighing stage.

Figure 2. Cost breakdown of the solutions proposed by each model, considering each possible budget (Γ or α) and deviation (δ), for the base case.

Figure 3. Percentage change compared to the deterministic solution in the worst-case total cost and standard deviation, considering different budgets (Γ or α) and deviations (δ).

Figure 4. Cost breakdown for the cost-based instances.

Figure 5. Cost breakdown for the capacity-based instances.

Figure 6. Cost breakdown for the bottleneck instances.
Edinburgh Business School (Scotland), where he teaches operations research and mathematical programming. He also serves as director of the MSc in Data and Decision Analytics (online) at the same institution. Prior to joining the University of Edinburgh Business School, Dr Alem served as an Assistant Professor in Operations Research at the Federal University of São Carlos in Sorocaba. Dr Alem has published over 30 papers in SCI/SSCI/SCI-E-indexed journals with reputable publishers such as Taylor & Francis, Elsevier, INFORMS, Springer, and ACS. He has served as a reviewer for more than 15 journals. His research focuses on developing mathematical programming approaches to improve humanitarian supply chains and production planning settings.

Pedro Munari is an Associate Professor at the Production Engineering Department of the Federal University of São Carlos in São Paulo, Brazil. He holds an M.Sc. and a Ph.D. in Computer Science and Computational Mathematics from the University of São Paulo. His Ph.D. dissertation earned the prestigious Doctoral Prize for the Best Dissertation from the Brazilian Society of Applied and Computational Mathematics. Dr Munari has also held visiting scholar positions at the School of Mathematics of the University of Edinburgh (Scotland, UK) and at the School of Industrial and Systems Engineering of the Georgia Institute of Technology (Atlanta, USA). He has coordinated numerous successful research projects with grants from funding agencies and has developed applied projects with several companies in Brazil, with a focus on Operations Research and Logistics. His research interests include exact and heuristic methods, with emphasis on the column generation technique, branch-price-and-cut methods, and decomposition techniques for large-scale problems. He has also contributed formulations and solution methods for challenging deterministic, stochastic and robust combinatorial optimisation problems.

Table 1. Literature summary of lot-sizing problems via robust optimisation.

Table 2. Summary of the proposed instances.

Table 3. Summary of the parameters used in the model.

Table 4. Values of α for the base case instance (F1).

Table 5. Performance comparison with sample average approximation (optimality gaps for SAA are after 1 hour of solution time).
Note: All cost values, including standard deviations, are in millions.

Table A2. Set of items synthesised by each formula.

Table A3. Set of formulas that can be produced in each reactor.

Table A6. Minimum and maximum volume, in litres, of each reactor.

Table A8. Processing times, in seconds, in production stages I and III.

Table A9. Time, in seconds, to process a batch of each formula in production stage II.

Table A10. Setup costs (BRL) and times (seconds) for production stages I and III.

Table A11. Setup costs (BRL) and times (seconds) for production stage II.

Table A12. Holding and backlogging costs (BRL), as well as initial inventory and backlogging levels.

Table A13. Nominal demands in units.