Decentralised collaborative job reassignments in additive manufacturing

ABSTRACT Cloud Manufacturing (CMfg) is a promising approach that leverages the sharing economy to reduce costs and enhance supply chain flexibility. In particular, when utilised alongside Additive Manufacturing (AM), CMfg is considered a key enabler for collaborative production (CP) systems. However, there is still a lack of planning models that reduce entry barriers for CP. Therefore, we propose a decentralised CP planning framework for AM. In our approach, machines autonomously select jobs from an existing production plan and forward them to other suppliers that can produce these parts more efficiently. A CMfg platform facilitates job forwarding and creates promising part bundles, on which manufacturing machines autonomously place bids via a combinatorial second-price reverse auction. The costs of the reallocated bundles are shared through a Shapley value-based approach without the need to disclose critical information. We benchmark our proposed framework against a centralised planning approach and find that it achieves effectiveness comparable to the benchmark solution. We also show that this mechanism promotes individual rationality and that agents particularly benefit when participating in both offering and acquiring production jobs through the auction.


Introduction
In recent years, the focus in the production industry has shifted from mass production to mass customisation, and service expectations are growing while customers increasingly seek to purchase sustainable products (Alicke, Rexhausen, and Seyfert 2016; Crommentuijn-Marsh, Eckert, and Potter 2010; Hu 2013). Such an environment creates immense pressure on manufacturers to meet customers' requirements. One option to overcome this burden is to adopt the sharing economy (SE) paradigm.
Researchers found that the SE can reduce production costs while responding flexibly to customers' needs and expectations (Grondys 2019). This cost saving can be accomplished with collaborative production (CP), which is one of the main pillars of the SE (Probst, Frideres, and Pedersen 2015). Several quantitative in-depth studies have already demonstrated that costs can be decreased significantly with collaborative production (Gansterer, Födermayr, and Hartl 2021; Gansterer and Hartl 2020b; Zehetner and Gansterer 2022a). However, to overcome the barriers of CP, suitable technologies are needed. Cloud Manufacturing (CMfg) could be one of the key technologies for CP. Xu (2012) introduced a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable manufacturing resources (e.g. manufacturing software tools, manufacturing equipment, and manufacturing capabilities) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
In CMfg, three kinds of participants are interconnected: suppliers, who share their production resources; customers, who have parts requiring production; and the platform operator, responsible for managing the CMfg system. The operator's role includes finding, combining, controlling, and coordinating the services to meet customer demands (Adamson et al. 2017).
Early research that laid the groundwork for CMfg emphasised the significance of these platforms in promoting resource sharing and asserted that operators need to provide effective and unobstructed coordination mechanisms to market participants in order to generate value (He and Xu 2015; Ren et al. 2015; Wu et al. 2013; Zhang et al. 2014). Some studies explored incorporating traditional manufacturing technologies into CMfg and identified challenges in this fusion. Nevertheless, other researchers proposed using Additive Manufacturing (AM) as a more compatible substitute technology. This approach gained acceptance and has subsequently been adopted by several industry platforms (Haseltalab and Yaman 2019).
AM, also known as 3D printing, is a manufacturing technology that produces the needed product directly, layer by layer (Jiang, Kleer, and Piller 2017). In contrast to conventional technologies, it does not require costly tools, molds, or punches for production. AM is also associated with short lead times, which offer a high degree of flexibility and create the opportunity to sell part print files instead of selling physical parts. These characteristics make it an ideal technology for integration with CMfg (Haghnegahdar, Joshi, and Dahotre 2022).
Furthermore, the manufacturing production process is digitally streamlined and produces negligible production quality deviations. Hence, this technology makes it easy to share or exchange production jobs between plants or machines/agents (Baumers et al. 2016; Berman 2012).
This aspect is of particular importance as it addresses prevalent environmental challenges. By relocating production closer to customers, the potential for reducing CO2 emissions resulting from transportation becomes evident (Chaudhuri et al. 2021; Khorram Niaki and Nonino 2017). Moreover, environmental concerns can also be mitigated within the manufacturing realm. In AM, the aggregation of components into batches, referred to as 'dense batching', can significantly reduce overall manufacturing durations (Zehetner and Gansterer 2022a). Furthermore, the intrinsic nature of CMfg fosters increased collaboration, creating more opportunities for effective batching. Researchers have also demonstrated that optimising machine utilisation lowers energy consumption (Rinaldi et al. 2021). Notably, Simeone, Caggiano, and Zeng (2020) contend that denser batching can also minimise waste metal powder by producing more parts on a single print bed, thereby reducing material costs and enhancing sustainability. Overall, the adoption of AM has the potential to streamline global inventory, leading to the conservation of resources (Kunovjanek and Reiner 2020).
Nevertheless, to leverage the advantages of CP and AM, service matching and scheduling are essential in CMfg. In the past, researchers primarily focused on centralised AM planning methods. While these approaches can yield globally better solutions (Saharidis, Dallery, and Karaesmen 2006), they encounter challenges in efficiently generating schedules in dynamic CMfg environments.
Multi-agent technologies offer solutions for overcoming these challenges in CMfg. Autonomous agents can be modelled with objectives and preferences, enabling the creation of schedules through cooperation, coordination, and negotiation (Y. Liu et al. 2019). Halty et al. (2020) suggest that game theory approaches could be suitable tools when designing decentralised systems. However, in decentralised planning involving multiple agents, one needs to balance fairness and efficiency. This often results in a trade-off where fairer results lead to less efficient solutions. Furthermore, defining fairness within the system is a critical consideration, a question that lacks a universally applicable answer (Yilmaz and Yoon 2020; Yilmaz, Yoon, and Seok 2017).
In the past, researchers have increasingly applied auction mechanism designs to solve production and transport planning problems in a fair manner (Gansterer, Hartl, and Sörensen 2020; Pahwa, Starly, and Cohen 2018; Z. H. Liu, Wang, and Yang 2019). However, only a few studies on decentralised planning approaches have been conducted in this field, and no study considers all critical elements for CP in AM. Our study closes this research gap by proposing a decentralised production planning framework that not only incorporates the essential technical elements necessary for effective AM planning, but also facilitates an appropriate decentralised mechanism to achieve the individual objectives of agents within the system. Thus, we introduce an auction-based mechanism in which machines can select parts from their existing production plans to be forwarded to a CMfg platform. The platform creates effective bundles and redistributes them among agents if this increases cost efficiency. The participating agents then share the overall cost savings via a Shapley value-based approach.
Our study provides the following four contributions.
• We propose an enhanced decentralised framework for the collaborative batching problem in multi-site AM.
• We apply a novel, fair cost-sharing method that efficiently approximates the Shapley value. Additionally, our study demonstrates that active participation in the framework benefits each agent.
• In the computational studies, we thoroughly compare the newly proposed approach against the performance of a centralised benchmark solution framework. We demonstrate that significant savings can be reached within reasonable computational time.
• We also investigate the most effective collaboration policies for agents and derive valuable management insights.
In our prior research (Zehetner and Gansterer 2022c), we introduced the basic framework. Nevertheless, the current study constitutes an enhanced framework. Our work also provides a comprehensive literature review and an in-depth explanation of the framework. It further includes experiments that substantiate the framework's performance and presents larger instances for better real-world applicability. Additionally, we provide deeper insights for individual agents within the framework and draw valuable managerial conclusions.
The remainder of the paper is structured as follows. Section 2 provides a literature review. In Section 3, we summarise the problem description. Section 4 describes our proposed framework. The computational study can be found in Section 5, while Section 6 closes the study with a summary and a conclusion.

Related literature
Our literature review focuses on decentralised scheduling and resource allocation problems in AM and CMfg. However, we also cover the field of logistics, as it offers several suitable methods which can be adapted to production planning problems.
Only a few articles have studied the application of decentralised methods within the field of AM. Pahwa, Starly, and Cohen (2018) devised a reverse auction mechanism in which customers bid prices for requested services. A platform then finds a provider that produces the part under the stated price. Li et al. (2019) presented an approach for dynamic order acceptance and scheduling for on-demand AM production. In their study, service providers select orders and schedule them individually. In the study of de Antón et al. (2020), a method based on a combinatorial auction is used to solve an AM nesting problem in which printers bid on batches. S. Liu et al. (2021) proposed a non-cooperative game model to reduce completion time and cost and to improve service quality. Their model takes into account detailed production attributes such as the moving speed of the nozzle, model dimension, printing resolution, printing material, and pricing. It solves two sub-games using a Genetic Algorithm (GA) to obtain the Nash equilibrium (NE). Yang, Chen, and Kumara (2021) developed a bipartite matching framework for customers and manufacturers in SE operations. The authors applied the stable matching algorithm to optimise the matches between customers and suppliers.
Researchers have conducted numerous studies of decentralised planning problems in CMfg. Y. Liu et al. (2017) proposed a resource service sharing approach in CMfg based on the Gale-Shapley algorithm. Xiao et al. (2019) introduced a multi-task cloud manufacturing model based on game theory from the customer's perspective. They modelled the agents' utility as dependent on the reliability of the service, the total costs, and the completion time, and weighed the factors accordingly. The authors optimised the problem by finding the NE with a biogeography-based optimisation algorithm. Gansterer and Hartl (2020a) proposed a multi-level lot sizing problem. They present a 3-phase decentralised mechanism in which no critical information has to be shared. Gansterer, Födermayr, and Hartl (2021) investigated the capacitated multi-level lot sizing problem. They developed a myopic upstream lot-shifting heuristic whose repair operators fix feasibility violations. Huo et al. (2022) introduced a CMfg collaboration approach in which resources can be jointly utilised across participating factories. Their model optimises the matching degree by balancing supply and demand between factories. The study represents the surplus and demands of resources as vertex vectors on a bipartite graph and solves the problem using a Kuhn-Munkres algorithm.
To create production plans in a decentralised manner, scholars have found auctions to be particularly useful (Parente et al. 2020). Dewan and Joshi (2002) proposed an approach for the dynamic job shop scheduling problem in which time units of machines are sold in an auction. In this model, jobs act as bidders and the machines themselves as auctioneers. The problem was decomposed via Lagrangian relaxation. Their experiments revealed that the decentralised approach did not attain the same solution quality as a centralised approach. However, the auction-based method proved capable of solving larger problem sizes. Chang, Hsieh, and Chen (2007) presented a reverse-auction-based mechanism for assigning jobs from owners to bidders. They formulated two integer-programming formulations: one is a deterministic version with a mean value-based bidder, while the other is modelled stochastically to account for the bidder's uncertainties and risks in the decision-making process. They achieved a near-optimal solution using a Lagrangian relaxation-based scheduling method. Kang et al. (2020) proposed two multi-unit double auction mechanisms to solve a service allocation problem in an industrial park where manufacturers can share private idle manufacturing capabilities and resources with others. Y. Liu et al. (2022) investigated an iterative combinatorial auction mechanism for multi-agent parallel machine scheduling that does not violate information privacy. Their experiments demonstrated that the proposed decentralised mechanism delivers a high-quality solution with a small price of anarchy compared to a centralised planning approach.
Researchers have also recognised the advantages of truthful auctions and proposed such approaches. Z. H. Liu, Wang, and Yang (2019) introduced an iterative incentive-compatible double auction mechanism in which manufacturing resources are optimally allocated and scheduled to users. The algorithm takes into account multiple objectives, including cost, execution time, and logistics. Z. Liu and Wang (2020) proposed a truthful and fair resource bidding mechanism for an incomplete, intransparent information environment. They employed an iterative bidding approach until providers and demanders reached an agreement on the price.
Auctions are utilised to allocate resources not only in manufacturing but also in transportation problems. Berger and Bierwirth (2010) investigated a framework for exchanging transportation jobs that facilitates collaboration among logistics providers. The formulated objective is to maximise the total profit without reducing the individual surplus of the carriers. The authors proposed an auction-based five-step approach in which jobs are offered as bundles to bidders in order to reassign tasks. Subsequently, researchers have developed further enhancements of this model. For instance, Gansterer and Hartl (2016) developed and evaluated various request evaluation strategies for carriers participating in auction-based collaborations. Furthermore, Gansterer and Hartl (2018) introduced a method to create a set of attractive bundles, and Gansterer, Hartl, and Sörensen (2020) proposed efficient approximate bidding strategies.
Nevertheless, to the best of our knowledge, no study addresses the decentralised combined batching-scheduling problem within a collaborative environment.

Problem description
Figure 1 depicts the AM batching and scheduling problem we aim to solve. In this problem, customers and sites are allocated to regions. The customers order parts directly from a manufacturer (site). The parts can be made of different materials and need to be combined in so-called batches that only contain parts of the same material. These batches have geometrical boundaries derived from the print bed of the AM machine. In our problem, we model parts as 2D rectangles, and they need to fit the size of the print bed. Thus, parts 1, 2, 3, and 4 do not fit in one batch but require three batches. The batches are scheduled on the machines of the designated site. A part can only be produced at a site if the material is available at the location. Batches also need setup time. If there is no material change between the batches, the setup time is relatively short. This can be seen between batch 2 and batch 4 on Machine 2. If the material changes (e.g. batch 5 to batch 3), the setup time is high because the printing material must be completely cleaned from the machine. When parts are produced, they are stored on site until the shipping date, which is determined by the customer's due date of the part and the delivery time. The objective of the problem is to minimise the overall costs, consisting of four components: (i) production costs, which increase linearly with the production duration multiplied by site-specific cost factors, (ii) set-up costs, which are linear in the sum of the set-up times, (iii) transportation costs, which naturally differ between locations, and (iv) inventory costs occurring when parts are stored on site. The collaborative aspect of the problem is indicated by the red arrows in the figure. The dashed red arrow indicates a job forwarding from site 2 to site 1 and vice versa. By this job forwarding, costs can be significantly decreased because of (i) better batching opportunities, (ii) reduced setup time, and (iii) lower transport costs. In the given example, parts 4 and 6 are merged into one batch, which decreases the overall production duration of these two parts and, subsequently, the production costs. Part 5 is forwarded to site 1. Thus, this part does not cause a high setup time, as all other parts are made from the same material. Furthermore, region 3 may be in proximity to region 2; hence, transportation costs can be saved. The potential for reducing inventory costs is also worth mentioning. However, given the substantial other cost savings, this aspect becomes inconsequential in the discussion. The interested reader can find the mathematical formulation of the global problem in the study by Zehetner and Gansterer (2022a). In the following, we present the newly proposed auction framework, which is applicable when no fully informed decision maker exists.
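The four cost components above can be sketched numerically. The following fragment is a minimal illustration with a hypothetical data model: `Batch`, `total_cost`, and all rate parameters are our own illustrative names, not the paper's notation.

```python
from dataclasses import dataclass

# Hypothetical per-batch record for one site's schedule (illustrative only).
@dataclass
class Batch:
    duration: float        # production duration of the batch
    setup_time: float      # setup before this batch (material-dependent)
    transport_cost: float  # shipping cost of the finished parts
    storage_days: float    # days the parts wait on site before shipping

def total_cost(batches, prod_rate, setup_rate, inv_rate):
    """Sum the four cost components described in the problem statement:
    (i) production, (ii) set-up, (iii) transportation, (iv) inventory."""
    production = sum(b.duration for b in batches) * prod_rate
    setup = sum(b.setup_time for b in batches) * setup_rate
    transport = sum(b.transport_cost for b in batches)
    inventory = sum(b.storage_days for b in batches) * inv_rate
    return production + setup + transport + inventory

# Two batches on one machine: same material, so the second setup is short.
plan = [Batch(10.0, 2.0, 5.0, 1.0), Batch(8.0, 0.5, 5.0, 0.0)]
cost = total_cost(plan, prod_rate=3.0, setup_rate=4.0, inv_rate=0.2)
```

In this sketch, merging parts into denser batches reduces the summed durations and setup times, which is exactly the lever the collaborative forwarding exploits.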

Auction framework
To solve the problem, we propose a decentralised framework based on a combinatorial auction. An effectively designed auction has four desired properties: it is (i) incentive compatible (IC), such that the agents' dominant strategy is to report information truthfully, (ii) individually rational (IR), meaning agents participate only if they do not obtain a higher utility by not participating, (iii) budget balanced (BB), meaning that the auction neither wins nor loses money, and (iv) efficient, meaning it should maximise the total agents' value (Babaioff and Walsh 2005). It is impossible to fulfil all four requirements simultaneously (Myerson and Satterthwaite 1983). However, our approach achieves IR, BB, and IC while maintaining high efficiency. We solve the problem by applying the framework visualised by the swimlane process chart in Figure 2.
(1) Request selection: The process starts with each site's locally optimised production plan. The machines (agents) select parts (jobs), which are then forwarded to a CMfg platform (auctioneer).
(2) Bundling: The auctioneer creates promising bundles.
(3) Bidding: Agents bid for the bundles by computing marginal costs for each of them.
(4) Winner determination: The auctioneer allocates the bundles to agents such that the overall costs are minimised.
(5) Payment determination: Costs are shared by the machines that submitted the parts, and payments are forwarded to the producers of the bundles.
(6) Withdrawal option: After the payments and costs are determined, the auctioneer reports the results to the agents. If an agent is worse off, it can withdraw its parts. In this case, the framework is repeated from step 2 to step 5 without the parts of the withdrawn agent.
The algorithm terminates when all remaining agents are satisfied with the outcome. Parts are then reallocated to the winning agents.
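The six steps and the withdrawal loop can be sketched as a control flow. All step functions (`select`, `bundle`, `bid`, `determine_winners`, `settle`) are placeholders standing in for the mechanisms of Sections 4.1 to 4.5, and `outcome` is assumed to report each agent's net cost change versus its standalone plan; positive means worse off.

```python
# Minimal sketch of the auction loop with the withdrawal option (step 6).
def run_auction(agents, select, bundle, bid, determine_winners, settle):
    offered = {a: select(a) for a in agents}           # step 1: request selection
    active = {a for a, parts in offered.items() if parts}
    while active:
        parts = [p for a in active for p in offered[a]]
        bundles = bundle(parts)                        # step 2: bundling
        # Every agent bids, even those that submitted nothing (as M1 in Fig. 3).
        bids = {a: bid(a, bundles) for a in agents}    # step 3: bidding
        allocation = determine_winners(bundles, bids)  # step 4: winner determination
        outcome = settle(allocation, bids)             # step 5: payment determination
        # step 6: withdrawal option -- agents with a cost increase pull out
        losers = {a for a in active if outcome[a] > 0}
        if not losers:
            return allocation, outcome
        active -= losers
    return None, {}
```

The loop mirrors the text: withdrawal restarts from bundling without the withdrawn parts, and termination requires that no remaining agent is worse off.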
Figure 3 illustrates the auction system for the scenario depicted in Figure 1. This representation shows the initial allocation of production parts, where P1 and P2 are assigned to machine M1, P3 and P4 to M2, and P5, P6, and P7 to M3. The auction process unfolds in several steps. In step 1, the machines select suitable, costly parts, which are forwarded to the auctioneer. The figure represents the situation where M1 does not submit any part, M2 forwards all its parts, and M3 submits P5. In the second step, the auctioneer forms five bundles, with parts potentially belonging to multiple bundles (e.g. part P3 appears in bundles B1 and B5). In the next step, bidding, the bundles are offered to the machines, and each agent reports its marginal costs for each bundle. It is important to highlight that even though M1 does not submit any part, it remains engaged in the process and reports its bids. In step 4, the auctioneer solves the combinatorial optimisation problem, allocating bundles to machines such that overall production costs are minimised. In the last step, the payments are determined. Bundle B4, for example, involves two parts: one initially requested by M2 and the other by M3. Although M2 receives compensation for producing both parts, fair cost-sharing between M2 and M3 is necessary. Throughout this process, no machine withdraws its parts, as each has costs either lower than or equal to those prior to the process. Thus, the algorithm terminates at this point.
The next Sections 4.1 to 4.5 describe each of the five process steps in detail. For the framework, we use the notation provided in Table 1.

Step 1: request selection
Each machine evaluates the parts in terms of production efficiency. If a machine decides that one or several parts are too costly, it can submit them to the auction. We use a threshold strategy for request selection, meaning that part p is selected if its margin m_p is lower than a parameterised threshold margin ε. We determine the Shapley costs (Shapley 1953) and then calculate the margin using the revenue of the parts, as given in (5). A commonly used definition of the Shapley value is based on marginal vectors. When adapted to our problem, Π_i contains all possible orderings of the full part set P_i allocated to machine i. Let π = (π_1, π_2, ..., π_|P_i|) ∈ Π_i be one of these orderings. If part p is at position k, i.e. π_k = p, then its marginal costs are defined as m_p^π(v) = v({π_1, ..., π_k}) − v({π_1, ..., π_{k−1}}). The marginal value can also be written as m_p^π(v) = v(P_p^π ∪ {p}) − v(P_p^π), where P_p^π is the set of parts preceding part p in ordering π. The Shapley costs of part p can then be defined as the average marginal contribution over all possible orderings Π_i of machine i: φ_p(v) = (1/|Π_i|) Σ_{π∈Π_i} m_p^π(v). This is given in (1).
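For small part sets, the marginal-vector definition above can be computed exactly by enumerating all orderings. The sketch below assumes the coalition cost function v is available as a Python callable on frozensets; `shapley_costs` is our illustrative name, not the paper's.

```python
from itertools import permutations

def shapley_costs(parts, cost):
    """Exact Shapley costs: the average marginal cost of each part over
    all |P|! orderings, following the marginal-vector definition.
    `cost` maps a frozenset of parts to the coalition's cost v(S)."""
    phi = {p: 0.0 for p in parts}
    orderings = list(permutations(parts))
    for pi in orderings:
        for k, p in enumerate(pi):
            before = frozenset(pi[:k])          # parts preceding p in pi
            phi[p] += cost(before | {p}) - cost(before)
    return {p: phi[p] / len(orderings) for p in parts}
```

Because each ordering's marginal costs telescope to v(P), the resulting Shapley costs always sum to the grand coalition's cost, which is the budget-balance property the framework relies on.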
e_P = w_pow · Σ_{j∈J_P} max_{p∈P} F_jp · h_p · r + w_las · Σ_{p∈P} f_p · g_p · h_p · r   (3)
We estimate the marginal costs for each part by (2). This value considers three terms: setup, transport, and production costs. For the latter, we need to determine the production duration, which we do by (3). J_P represents the affected batches of the part set P, which is required to compute the production duration and set-up time; this is expressed by (4). Finally, the marginal costs of part p can be calculated by (5) using the Shapley costs φ_p(v) and the price of the part v_p. A part is selected to be submitted to the auction if m_p < ε. We apply this set of equations for each machine i to calculate the marginal costs for each part. Computing all marginal costs is time-consuming due to the NP-hardness of the Shapley value (Deng and Papadimitriou 1994). We use the sampling technique by Castro, Gómez, and Tejada (2009) to overcome this computational complexity. As it is a generic approach, it can be used to approximate Shapley values for various problems. The method is based on a randomised subset of all possible orderings and determines the Shapley value by averaging the marginal values over this subset. van Campen et al. (2018) enhanced this method by modifying the orderings such that each part p appears at each position of the sequences the same number of times. The authors provide solid results showing the effectiveness of their method. However, their approach does not necessarily lead to solutions that are BB. As this is an essential property of our framework, we enhance their approach to be BB. Within our novel method, we use a linear model to create orderings that place parts at the same positions the same number of times, while ensuring that each sequence appears only once. Hence, we can calculate the marginal values for each part of the ordering. The model for creating the sequences is explained in the following section.
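The ordering-sampling idea of Castro, Gómez, and Tejada (2009) can be sketched as follows: draw random orderings and average the telescoping marginal costs. Function names and the seed handling are illustrative, not taken from the original method description.

```python
import random

def shapley_sample(parts, cost, n_samples, seed=0):
    """Monte Carlo approximation of Shapley costs: average the marginal
    costs of each part over `n_samples` randomly drawn orderings."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in parts}
    for _ in range(n_samples):
        pi = list(parts)
        rng.shuffle(pi)                # one random ordering
        acc = frozenset()
        prev = cost(acc)
        for p in pi:
            acc = acc | {p}
            cur = cost(acc)
            phi[p] += cur - prev       # marginal cost of p in this ordering
            prev = cur
    return {p: phi[p] / n_samples for p in parts}
```

The accuracy grows with the number of sampled orderings; the structured variant described next constrains which orderings enter the sample.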

Structured ordering problem
Our Binary Linear Programming (BLP) ordering approach generates a subset of orderings in which each part appears at each position n times. Equation (6) ensures that each part p is allocated to one position u, while (7) establishes that each position u has a part p allocated, and (8) makes sure that each part appears at each position u a total of n times over all sequences. In (9), we assess whether a part p in ordering o is at the same position u as in ordering o′, while (10) ensures that each ordering only appears once. The BLP needs to be solved for each machine i. By solving the model, the ordering can be encoded by π_u = Σ_{p∈P} p · V_opu. We also want to point out that a constraint programming model could be a suitable tool for this problem. However, for the sake of consistency, we use the proposed BLP model. We provide a simple example of the approach in the following paragraph.

Example. Table 2 shows an example with 3 players (cf. van Campen et al. 2018). We can adapt the principle of this 3-player game to a cost-sharing problem with 3 parts. Row S represents all possible player combinations and row v(S) the corresponding costs.
Table 3 contains all possible orderings of the 3-player game and the corresponding marginal values. The exact Shapley value is given in the last row, m_v, for each player.
Table 4 provides the process for approximating the Shapley value with the newly proposed approach; it shows the marginal contributions of the structured orderings for the 3-player game of Table 2. Column Ordering represents the orderings for a group size n = 1. Each part must be at each position exactly once. Thus, we need to create three orderings.
These orderings are created using the BLP model (6)-(10). Columns m_v^π(1), m_v^π(2), and m_v^π(3) represent the marginal values for the orderings and the players. Row m_v provides the average values of the columns. When comparing the sum of the marginal values of this solution to the outcome of the grand coalition {1, 2, 3}, it becomes visible that the two are equal. Hence, the approach is BB.
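For n = 1, the position-balance property can also be obtained with simple cyclic shifts, which serve here as a convenient stand-in for the BLP when illustrating budget balance. The cost numbers below are invented for the illustration, not those of Table 2.

```python
def cyclic_orderings(parts):
    """Cyclic shifts: a simple family of orderings in which each part
    occupies each position exactly once (the paper builds such sets with
    a BLP; shifts merely satisfy the same constraints for n = 1)."""
    k = len(parts)
    return [[parts[(i + s) % k] for i in range(k)] for s in range(k)]

def approx_shapley(parts, cost, orderings):
    """Average the marginal costs over the given structured orderings."""
    phi = {p: 0.0 for p in parts}
    for pi in orderings:
        prev, acc = cost(frozenset()), frozenset()
        for p in pi:
            acc = acc | {p}
            cur = cost(acc)
            phi[p] += cur - prev
            prev = cur
    return {p: phi[p] / len(orderings) for p in parts}

# Illustrative 3-player cost game (invented values).
v = {frozenset(): 0, frozenset('1'): 3, frozenset('2'): 4, frozenset('3'): 5,
     frozenset('12'): 6, frozenset('13'): 7, frozenset('23'): 8,
     frozenset('123'): 9}
phi = approx_shapley(['1', '2', '3'], lambda S: v[frozenset(S)],
                     cyclic_orderings(['1', '2', '3']))
# Budget balance: the approximated costs sum to v({1, 2, 3}).
assert abs(sum(phi.values()) - 9) < 1e-9
```

Because every ordering's marginal costs telescope to the grand coalition's cost, averaging over full orderings keeps the allocation budget balanced, which is the property the structured sequences are designed to preserve.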

Step 2: bundling
This process step aims to build attractive bundles for the bidders. One option would be to create all possible part combinations. However, this approach would increase the number of bundles factorially with the number of parts and create considerable computational complexity in the subsequent three steps of this framework. Hence, the literature suggests reducing the number of bundles to a subset of attractive ones (Horner, Pazour, and Mitchell 2021; Mancini and Gansterer 2021). Gansterer and Hartl (2018) proposed an approach with so-called candidate solutions. A candidate solution is a set of non-overlapping bundles which includes all requests. The authors prove that such an approach reduces complexity while reaching good solution quality. We create candidate solutions by merging two bundles on each level. We start with single parts as bundles and then evaluate the fitness of each pair. As the auctioneer (i.e. the CMfg platform) does not have complete information about the agents, we can only estimate the attractiveness of the bundles. We do this by using the part information communicated by the agents. A MILP problem minimises the overall fitness of the next level.
Equation (11) is the objective function, which minimises the overall fitness of the parts to be merged. The fitness of bundle combinations is defined by (12), where each component of the fitness function refers to one cost component described in Section 3. We introduce parameters α, β, γ, and δ to weigh the components and adjust them to the actual costs. The variable M_bb indicates which bundles are merged together. Equations (13)-(15) ensure that only one merge per bundle is possible. As M_bb is symmetric, we only need to consider one side of it; this is done by (16). Equation (17) limits the number of merge operations to half the number of bundles with ζ = ⌊θ/2⌋, where θ is the number of bundles of the previous merge level. If there is an odd number of bundles, the single residual bundle is conveyed to the next level. This process is repeated until merge level ξ is attained, as illustrated in Figure 4.
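One merge level can be sketched as follows. Here, a greedy pairing by lowest pairwise fitness stands in for the MILP (11)-(17), and the material-based fitness is an invented surrogate for the weighted cost components of (12).

```python
def merge_level(bundles, fitness):
    """One merge level: greedily pair bundles by lowest pairwise fitness.
    The paper solves this matching with a MILP; greedy pairing is a
    simplified stand-in. An odd leftover bundle is carried over unchanged,
    so at most floor(len(bundles) / 2) merges happen, as in (17)."""
    pairs = sorted(
        (fitness(a, b), i, j)
        for i, a in enumerate(bundles)
        for j, b in enumerate(bundles) if i < j)
    used, merged = set(), []
    for _, i, j in pairs:
        if i not in used and j not in used:
            used.update((i, j))
            merged.append(bundles[i] | bundles[j])
    merged.extend(b for k, b in enumerate(bundles) if k not in used)
    return merged

# Example: parts with a material tag; same-material bundles fit best,
# mirroring the setup-cost component of the fitness function.
parts = {'p1': 'steel', 'p2': 'steel', 'p3': 'nylon', 'p4': 'nylon'}
fit = lambda a, b: len({parts[p] for p in a | b})  # fewer materials = better
level1 = merge_level([{'p1'}, {'p2'}, {'p3'}, {'p4'}], fit)
```

Repeating `merge_level` on its own output reproduces the level-by-level merging up to level ξ shown in Figure 4.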

Step 3: bidding
In our framework, agents can subcontract other agents to fulfil their production requests.We assume that the end customer pays the initial owner, which then pays the subcontracted manufacturer.This means that the subcontractor offers a takeover price to fulfil the task.
According to Berger and Bierwirth (2010), this setting can be organised as a reverse auction. For such an auction, the machines must evaluate the marginal costs for each bundle. To determine the marginal costs of bundle b, machine i has to solve a new production planning problem which includes the additional part set of the bundle. The machines determine the costs using a single-machine batch-scheduling AM model, which is formulated as follows. The marginal costs of the bundle to bid on are equal to the difference between the objective value z_b and the objective value of the initial production plan.
x_p + f_p ≤ f^pc ∀ p ∈ P (20)
y_p + g_p ≤ g^pc ∀ p ∈ P (21)
The objective function (18) minimises the costs. Equation (19) ensures that each part is allocated to exactly one batch. Equations (20) and (21) guarantee that the part is positioned within the geometric boundaries of the print chamber, while (22)-(26) avoid collisions with other parts. The model uses pre-configured batches, meaning that a batch j has a pre-allocated material m. Hence, E_jm is determined ex ante and is therefore an input parameter. This formulation guarantees the linear form of the model. Constraint (27) ensures that parts are allocated to batches j that require the same material. The height of batch j is equal to the maximum height of the parts p within the batch. This is computed by Equation (28), while (29) determines the production duration of batch j. The auxiliary variable z_p is needed to determine the inventory costs. It represents the finishing date of part p and is computed by (30). Equation (31) computes the latest possible pickup date of batch j, and Equation (32) ensures that items are allocated only to batches which are scheduled on machine i. We allow only one successor per batch j; this is formulated by (33). Constraints (34) ensure that batch j cannot be its own successor, while (35) guarantees that each allocated batch has a successor (excluding the last one in the production sequence). Equation (36) creates the last batch of the production sequence. We do not allow circular production sequences, which is guaranteed by Equation (37). If a batch j is not produced on a machine, constraints (38) set its production start time e_j to 0. The production start date of batch j needs to be before the latest possible start date of the batch, which is determined by d_j; this is done by (39). The production start u_j must be scheduled before the end date of the previous batch j, which is ensured by Equation (40).
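The geometric conditions (20)-(26) can be illustrated with a plain feasibility check on part rectangles. The function below is a toy stand-in for those constraints, not the MILP formulation itself.

```python
def fits_bed(placements, bed_w, bed_h):
    """Check the geometric batch constraints: every part rectangle
    (x, y, w, h) lies inside the print bed, and no two rectangles
    overlap. A standalone feasibility check only; the paper encodes
    the same conditions linearly inside the scheduling MILP."""
    rects = list(placements)
    for x, y, w, h in rects:
        if x < 0 or y < 0 or x + w > bed_w or y + h > bed_h:
            return False  # violates the chamber boundary, cf. (20)-(21)
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            x1, y1, w1, h1 = rects[i]
            x2, y2, w2, h2 = rects[j]
            # two rectangles are disjoint iff separated along x or y,
            # mirroring the collision-avoidance disjunction of (22)-(26)
            if not (x1 + w1 <= x2 or x2 + w2 <= x1 or
                    y1 + h1 <= y2 or y2 + h2 <= y1):
                return False
    return True
```

In the MILP, this either/or separation is linearised with binary variables; here it is stated directly for readability.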

Step 4: winner determination
One of the challenges of combinatorial auctions is their computational complexity. We can overcome this issue by solving the model algorithmically or by limiting biddable combinations (Pekeč and Rothkopf 2003). Our approach is to restrict combinations to attractive ones by bundling them (see Section 4.2). This allows us to solve the winner determination problem with the following linear model.
Equation (41) is the objective function that aims to minimise the overall costs. It consists of the winner determination matrix Q_bi, which indicates the winning bidders, and the bidding matrix C_bi. The latter consists of the reported marginal costs of step 3. Constraint (42) ensures that a maximum of one bundle b is allocated to a machine i. Also, the model only allows each bundle to be allocated to one machine i, which is guaranteed by (43).
With this restriction, we overcome the problem of reporting costs of bundle combinations. Thus, machines only need to determine the marginal costs of single bundles rather than of combinations of bundles, which decreases the computational complexity of the bidding phase (see Section 4.3). Equation (44) guarantees that each part is allocated to a machine i only once, while D_bp is the bundle-part configuration, which is determined in step 2.
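The winner determination problem can be sketched with a tiny brute-force search in place of the paper's linear model. The sketch enforces constraints (42)-(44): each machine wins at most one bundle, each bundle goes to at most one machine, and no part is covered twice. The assumption that every offered part must end up covered is mine, made so that the empty allocation is not trivially optimal.

```python
from itertools import permutations

# Hedged brute-force sketch of the WDP: minimise total reported cost subject
# to (42) at most one bundle per machine, (43) at most one machine per
# bundle, and (44) each part covered at most once (here: exactly once, an
# assumption for illustration). Replaces the MILP only for toy instances.

def solve_wdp(bundles, bids, all_parts):
    """bundles: {name: frozenset(parts)}; bids: {(bundle, machine): cost}."""
    machines = sorted({m for _, m in bids})
    names = list(bundles)
    best_cost, best_assign = float("inf"), None
    for r in range(1, min(len(names), len(machines)) + 1):
        for chosen in permutations(names, r):
            parts = [p for b in chosen for p in bundles[b]]
            # Reject overlapping bundles or uncovered parts.
            if len(parts) != len(set(parts)) or set(parts) != set(all_parts):
                continue
            for machs in permutations(machines, r):
                if any((b, m) not in bids for b, m in zip(chosen, machs)):
                    continue  # machine did not bid on this bundle
                cost = sum(bids[(b, m)] for b, m in zip(chosen, machs))
                if cost < best_cost:
                    best_cost, best_assign = cost, dict(zip(chosen, machs))
    return best_cost, best_assign

bundles = {"b1": frozenset({"p1", "p2"}), "b2": frozenset({"p3"}),
           "b3": frozenset({"p1", "p2", "p3"})}
bids = {("b1", "m1"): 50, ("b1", "m2"): 70, ("b2", "m2"): 30,
        ("b3", "m1"): 90, ("b3", "m2"): 85}
print(solve_wdp(bundles, bids, {"p1", "p2", "p3"}))
# -> (80, {'b1': 'm1', 'b2': 'm2'})
```

Because step 2 already restricted the biddable set to attractive bundles, the search space stays small; the production framework solves the same structure as a binary linear programme instead.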

Step 5: payment determination
An IC pricing mechanism that elicits the reporting of true costs is the Vickrey-Clarke-Groves (VCG) mechanism.
The VCG auction is based on the concept of second-price auctions (Krishna 2010). A suitable way to solve the VCG auction is to run the WDP once for the whole set of bidders and once with each bidder removed (Cramton, Shoham, and Steinberg 2006). The actual payments for each machine i are determined by (45). In this equation, we use the reported costs C_bi of the agent and deduct the difference between the objective value including i (Z*) and the objective value without this agent (Z^0_i). We calculate Z^0_i by excluding bidder i from the WDP formulated in (41)-(44).
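The payment rule can be sketched end to end on a toy instance. The mini-WDP below assumes non-overlapping bundles that must all be allocated (an assumption chosen to keep the example self-contained); each winner i is then paid C_bi − (Z* − Z^0_i), as in Equation (45).

```python
from itertools import permutations

# Hedged sketch of step 5: solve the WDP once with all bidders (Z*), then
# once per winner with that bidder removed (Z0_i). Winner i is paid
# C_bi - (Z* - Z0_i). The WDP here is a toy assignment of non-overlapping
# bundles to distinct machines, NOT the paper's full model.

def wdp(bids, bundles, machines):
    """Min-cost assignment of every bundle to a distinct machine."""
    best_cost, best_assign = float("inf"), None
    for machs in permutations(machines, len(bundles)):
        if all((b, m) in bids for b, m in zip(bundles, machs)):
            cost = sum(bids[(b, m)] for b, m in zip(bundles, machs))
            if cost < best_cost:
                best_cost, best_assign = cost, dict(zip(bundles, machs))
    return best_cost, best_assign

def vcg_payments(bids, bundles, machines):
    z_star, winners = wdp(bids, bundles, machines)
    payments = {}
    for b, i in winners.items():
        z0_i, _ = wdp(bids, bundles, [m for m in machines if m != i])
        payments[i] = bids[(b, i)] - (z_star - z0_i)
    return payments

bids = {("b1", "m1"): 40, ("b1", "m2"): 60, ("b1", "m3"): 55,
        ("b2", "m1"): 35, ("b2", "m2"): 30, ("b2", "m3"): 45}
print(vcg_payments(bids, ["b1", "b2"], ["m1", "m2", "m3"]))
# -> {'m1': 55, 'm2': 45}
```

Note that each winner is paid at least its reported cost (m1 bids 40 and receives 55): the surplus is exactly the harm its absence would cause, which is what makes truthful cost reporting a dominant strategy.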
The initial owners of the parts must share the costs of the bundles. According to Guajardo and Rönnqvist (2016), the Shapley value is well suited for cost-sharing methods. By applying such a method, we can also ensure that the auction mechanism is BB, which follows from the definition of the Shapley value (Shapley 1953). Our approach does not require buyers or sellers to reveal critical information but is based on an indicator that uses data from part attributes. The indicator v(P_i) is computed using Equation (46) and the Shapley value defined in (1). To approximate the Shapley value, we use the same approximation method as presented in Section 4.1. The Shapley value then determines the costs c^b_p for each part p of bundle b with (47). As we can directly allocate the transport costs χ^i_p to each part, we deduct this cost component from the Shapley value calculation and add it separately to the equation.
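The cost-sharing step can be sketched with an exact Shapley computation on a small part set. The characteristic function below is a hypothetical indicator built from part volumes, not the paper's Equation (46); transport costs are assumed to be deducted beforehand and charged per part directly, as described above.

```python
from itertools import permutations

# Hedged sketch of Shapley-value cost sharing among the initial owners of
# the parts in a bundle. The indicator v is a hypothetical attribute-based
# characteristic function (summed part volume), so no machine needs to
# disclose its internal cost structure.

def shapley(players, v):
    """Exact Shapley value: average marginal contribution over all orderings.
    Exponential in |players|; the paper uses a Monte-Carlo approximation."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            values[p] += v(coalition + [p]) - v(coalition)
            coalition.append(p)
    return {p: val / len(orderings) for p, val in values.items()}

# Hypothetical part volumes acting as the indicator v(P_i).
volume = {"p1": 10.0, "p2": 30.0, "p3": 60.0}
v = lambda coalition: sum(volume[p] for p in coalition)

shares = shapley(list(volume), v)
bundle_cost = 200.0  # hypothetical winning payment for the bundle
total = sum(shares.values())
cost_per_part = {p: bundle_cost * s / total for p, s in shares.items()}
print(cost_per_part)  # -> {'p1': 20.0, 'p2': 60.0, 'p3': 120.0}
```

With a purely additive indicator the Shapley value reduces to proportional sharing; the approximation of Section 4.1 becomes relevant for non-additive indicators and larger part sets, where enumerating all orderings is infeasible.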

Computational study
In this section, we present two distinct studies: a performance study and an experimental study. The performance study evaluates the efficiency of the proposed framework, focussing on its solution time and quality. The latter is quantified in terms of costs. In contrast, the experimental study examines the impact of employing a decentralised framework as opposed to a traditional centralised planning approach. Specifically, our interest lies in understanding the potential cost savings across varying levels of collaboration. Additionally, we analyse the framework's overall efficacy and its performance on an individual level.

Performance study
We apply the proposed framework (AUCT) and compare the results to two centralised algorithms, both proposed by Zehetner and Gansterer (2022a).
The first algorithm solves the problem exactly with a quadratic model (CENT-QUAD) and a time limit parameterised as 36,000 seconds. The second solver is a hybrid GA-MILP algorithm (CENT-GA). This approach cannot guarantee optimal solutions. Still, the study shows that it delivers good results for the collaborative-batching problem. The GA is parameterised such that the ratio of the fitness factors approximately matches the ratio of the corresponding cost components. All experiments are conducted on an Intel Core i5-8365U CPU @ 1.60 GHz with 16.0 GB RAM on Windows 10. In the performance study, our aim is to evaluate the efficacy of the framework and to assess the solution quality, determined by the costs, as well as the computational time. The framework was coded in Python and utilises Gurobi v9.1 (Gurobi Optimization, LLC 2022) to solve the MILP models. For the MILP bidding models in step 3 of the process, we reduce the time limit to 180 seconds and use the best solution obtained. We used α = 3, β = 1, γ = 1000, and δ = 0.1 as values for the fitness parameters of Section 4.2.

Instances
For the performance study, we investigate 30 instances with an increasing number of parts and sites. The numbers of parts and sites are listed in columns Sites and Parts in Table 5. Regarding the cost factors of the sites, we use τ_k = 45, σ_k = 50, s_k^m = 4, and s_k^b = 1. The parameters of the machines are set to f_pc = 400 and g_pc = 350. The geometry of the parts follows the distribution f_p ∼ U[25, 250]; [10, 400]. Part prices are determined as v_p = costs/(1 − margin), where the costs assume that the parts are produced in a single batch. The transport costs are approximated by χ^i_p = v_p·ψ + ω·G^i_p, using ψ = 0.0035 and ω = 0.35.
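The pricing and transport formulas can be made concrete with a short numeric sketch. The production cost of 80 and the margin of 20% below are hypothetical inputs (the study does not state a margin value here); ψ and ω are the values given above.

```python
# Hedged numeric sketch of the instance parameterisation: part price from a
# cost-plus-margin rule, transport cost from price and delivery time.
# production_cost and margin are hypothetical illustration values.

def part_price(production_cost, margin=0.2):
    """v_p = costs / (1 - margin), costs assuming single-batch production."""
    return production_cost / (1.0 - margin)

def transport_cost(v_p, delivery_time, psi=0.0035, omega=0.35):
    """chi^i_p = v_p * psi + omega * G^i_p (study values: psi=0.0035, omega=0.35)."""
    return v_p * psi + omega * delivery_time

v_p = part_price(80.0)          # 80 / 0.8 = 100.0
chi = transport_cost(v_p, 4.0)  # 100*0.0035 + 0.35*4 = 1.75
print(v_p, chi)
```

The split mirrors the study design: the price-dependent term ψ models value-based handling or insurance, while the ω term scales with delivery time between regions.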

Results
In Table 5, we detail the costs obtained by each algorithm for each instance. Bold values highlight the best solution for the respective instance. Notably, in instances characterised by a comparatively small number of parts, specifically those with fewer than 10 parts, all algorithms exhibit performance levels nearly reaching optimality. CENT-QUAD corresponds to the solution of the commercial solver, which solves the quadratic model. It has a parameterised solution gap of 1%, so it is expected to produce near-optimal solutions. For numerical reasons, we observe that CENT-GA performs slightly better in several cases. CENT-QUAD manages to successfully solve only the first 11 instances, as it surpasses the time limit of 10 hours in subsequent cases. For all remaining instances, except for instance 14, CENT-GA consistently delivers superior results compared to AUCT. In Figure 5(a), we illustrate the disparity between the solutions of the algorithms and the top-performing algorithm for each instance. It demonstrates that the maximum gap of AUCT is 20.6%, at instance 22. For all subsequent larger instances, the disparity remains below 16%. Hence, we can conclude that despite AUCT being a decentralised solution method, it consistently delivers solution quality comparable to that of centralised methods.
Table 6 lists key figures regarding the solution durations for each instance and algorithm. Columns S1 to S5 denote the time spans for each individual step within the framework. Bold values show the shortest solution duration. From this, we can observe that AUCT outperforms the other two algorithms in all instances except for instances 25 and 27, where the GA requires less time.
In Figure 5(b), we depict the solving durations of the algorithms. To enhance the ease of comparison across instances, the vertical axis is represented on a log scale. It is evident that, for smaller instances, there is a significant gap between AUCT and the other algorithms. Notably, the disparity between GA and QUAD narrows in larger instances. However, on the whole, AUCT generally outpaces GA, even in these larger scenarios.
In Figure 6, we display a stacked line plot illustrating the ratios of each step's duration for every instance. It is evident that, for very small instances, step 5 significantly dominates the overall duration. For instance 1, step 5 accounts for a large share, representing 56.3% of the total duration. As the instance size increases, step 3 (bidding) emerges as the primary contributor to computational time, with the remaining steps contributing minimally; the duration of step 3 encompasses up to 99.9% of the overall duration. This observation emphasises the necessity for further optimisation of step 3 to enhance algorithmic efficiency.

Experimental study
In our experimental study, we investigate the cost savings of the proposed framework within a CMfg environment with different levels of collaboration. In contrast to the performance study, we are interested not only in the total savings but also in the behaviour and trends of the individual cost components of our model's objective.
Furthermore, we compare the costs to benchmark solutions created by a central authority to discuss the limitations of the decentralised approach. Additionally, our investigation delves into the bidding behaviour exhibited by agents and compares it to their individual cost savings. The central authority solution (CENT) is created by the hybrid GA-MILP algorithm used in Section 5.1.
Our study starts with an optimal production plan per site. We obtain this plan with an MILP model of the single-site problem, solved to optimality by a general-purpose solver. The parameters of the algorithm are the same as in the previous section.

Instances
We conduct six scenarios with 50 instances each, wherein each site owns three machines. In the design of these scenarios, scenario 1 permits no collaboration due to the presence of only a single site. As the scenario number increases, the aim is to explore scenarios that facilitate enhanced levels of collaboration. This expansion implies that an increasing number of sites, and consequently machines, can exchange their parts, thus offering greater prospects for cost savings. The primary characteristics of these instances are detailed in Table 7. Given that we are examining a CMfg setting, both suppliers and customers are dispersed geographically. Each machine retains identical cost factors, as we aim to evaluate cost savings due to collaboration rather than outsourcing to more efficient machines. We use the same uniform distributions and cost factors as in the performance study of Section 5.1. All instances are publicly available in Zehetner and Gansterer (2022b).

Results
Table 8 lists the mean costs and their corresponding ratios for each mode and cost component. INI refers to the cost of the initial production plan, CENT to the costs obtained by the centralised GA-based algorithm, and AUCT represents the results of the decentralised auction framework. We can observe that the predominant cost driver is the production cost (PROD), with a minimum ratio of 67.9% at scenario 1 and a slight increase for large instance sizes. The table further reveals that transport costs (TRANS) emerge as the second primary cost driver, followed by set-up costs (SET) and inventory costs (INV). Figure 7(a) illustrates the computed efficiency gain of both AUCT and CENT. It reveals that relative cost savings amplify with increasing levels of collaboration. When observing the median values for AUCT and CENT, it becomes apparent that CENT consistently outperforms AUCT across scenarios. Nonetheless, a trend emerges indicating the narrowing of this performance gap from smaller to larger instances. Moreover, the variability in efficiency gain, represented by the length of the boxes of the plots, decreases for both CENT and AUCT as instance size grows. Figure 7(b) depicts the efficiency gains for all scenarios for each cost component. Results reveal that cost savings occur particularly for PROD and TRANS, while SET costs occasionally increase for both AUCT and CENT. The centralised approach consistently manages to reduce INV at each instance. In contrast, the inventory costs of AUCT show a high variability. However, this hardly affects the quality of the solutions given the minimal contribution of INV to the total costs (average ratios of the cost components of the initial solution are PROD: 69.17%, TRANS: 21.80%, SET: 9.00%, and INV: 0.2%). Table 9 lists the numerical key metrics of the efficiency gains. The mean efficiency gains for AUCT across scenarios span from 7.47% to 21.0% (with scenario 1 excluded), while those for CENT range between 14.51% and 23.81%. The maximum efficiency gain is noted in scenario 6, reaching 26.03% for AUCT and 27.85% for CENT. The column titled Gap highlights the disparity between the mean efficiency gains of AUCT and CENT. These figures also indicate a considerable gap in instances with minimal collaboration, which narrows as collaborative opportunities expand (scenario 2: 48.51%, scenario 6: 11.61%). Column 'Std' reflects the standard deviation of the efficiency gain for each scenario. Here, too, we can observe that the range of the efficiency gain decreases at higher collaboration levels for both approaches, indicating more consistent gains. This consistency might enhance stakeholders' acceptance of collaborative settings, given the predictability in performance.
Given that the results do not adhere to a normal distribution, we employ the one-sided Mann-Whitney U test (Mann and Whitney 1947) to evaluate the statistical significance of the efficiency gain. Column 'Gap' shows that, not surprisingly, CENT outperforms AUCT in each scenario. Furthermore, we can observe that the corresponding p-value tends to increase with collaboration levels.
This trend suggests that the difference in efficiency gains becomes less statistically significant in high-collaboration scenarios. Nevertheless, we have to emphasise that no cost-sharing method is implemented in CENT, and IR cannot be guaranteed within this approach. Rows in COMP represent the efficiency gains of all scenarios, segmented by individual cost components. Results support the findings shown in Figure 7. Examining the 'Mean' columns, it is clear that the benchmark algorithm consistently surpasses the auction-based framework across all cost components. By analysing the 'Std' columns, we observe that AUCT tends to produce more stable solutions in comparison to CENT. The 'Gap' column reveals that the disparity between the two approaches is at its narrowest for production costs, which is the most significant cost driver. Figure 8 clusters the efficiency gains by agent behaviour. The first category, Parts requested only, covers machines which submitted parts but did not win a bid. The median efficiency gain within this group is 11.4%. The second category, Parts received only, includes agents which won a bundle but did not submit any parts to the auctioneer. The median efficiency gain in this group is 13.6%. The final category, Parts requested & received, covers all machines which submitted at least one part and won at least one part, reaching a median of 18.2%. We apply a Mann-Whitney U test (Mann and Whitney 1947) to evaluate statistical significance between these groups. In a two-sided test between groups 1 and 2, we could not find a statistically significant difference (p = 0.75). Thus, we conclude that neither supply nor demand would be over-incentivised in the proposed framework. Hence, this should lead to balanced participation levels on both sides. We also conducted one-sided tests between groups 1 and 3 as well as groups 2 and 3. For both, we observe statistical significance (p < 0.001) that group 3 has a higher efficiency gain than the other two. This finding suggests that agents should actively participate on both sides to achieve the highest cost savings.

Conclusion and future research
We introduced a collaborative, decentralised mechanism for a multi-site AM production planning problem. This approach is based on a reverse combinatorial auction in which agents report their marginal costs, which are to be minimised. We introduced a new method for Shapley-value approximation as well as a new bundle generation method that allows us to create candidate solutions, thus reducing complexity in the subsequent processing steps. We provided a computational study in which we investigated the framework's cost efficiency and computational performance. We also benchmarked the newly proposed method against results generated by a central, fully informed authority.

Framework performance
The performance study unveiled that the auction-based framework achieves substantial efficiency gains, although agents are not required to disclose sensitive information. The findings also demonstrated that the decentralised framework maintains its efficiency even when applied to large instances. It is worth noting, however, that further scaling up of the problem leads to considerably extended computational times, primarily due to the computational effort associated with step 3 of the framework.

Managerial implications
Our study shows that, although a decentralised planning method is utilised for the given problem, significant cost savings can be reached. In the most challenging scenario, we achieved average cost savings of 21.04%, which indicates that we were able to reach 88.39% of the benchmark solution. Our study further demonstrated that increased active participation increases efficiency for every individual agent. Thus, we show that our framework is IR, meaning that no agent needs to cover the costs of other agents. In essence, agents who were involved on both the supplier and forwarder fronts reached the most significant efficiency gains. We can conclude that the higher the participation, the higher the efficiency. This result should encourage industry leaders to adopt such mechanisms and to discourage free-riding behaviour within this framework.

Future research
Further research could evaluate alternative approaches for each process step. For the first step, different approximations of marginal costs might be worth investigating. Also, the bundle selection process might be improved by eliminating certain bundles in order to speed up the process. However, at the moment, the third step requires the most computational time. Hence, instead of solving a production planning problem for each bundle, alternative (possibly heuristic) approaches for determining the marginal costs could be developed.

Figure 1 .
Figure 1. AM batching and scheduling problem (adapted from the centralised problem presented in Zehetner and Gansterer 2022a). Red arrows indicate job forwarding between sites. The machine schedules represent the situation after the exchange.

Figure 3 .
Figure 3. Auction framework consisting of 5 steps. Diamonds represent machines, parts are depicted as circles, and bundles as rectangles. Parts can belong to different bundles (e.g. part P5 is included in bundles 3, 4, and 5). Black arrows represent bids; green and red arrows visualise payments.

Figure 4 .
Figure 4. Example of the bundling phase, where 4 non-overlapping bundles are created from 5 parts. Arrows indicate a merge process. On each level, 2 bundles of the previous level are combined into one.

Figure 5 .
Figure 5. Performance of the algorithms compared for each instance; instance size increases along the horizontal axis. Circles represent solutions solved with the quadratic model. Diamonds represent the centralised GA algorithm, while squares denote the decentralised auction-based framework. (a) illustrates the relative costs determined by each solver in comparison to the best solution of each instance. (b) depicts the absolute computational time in seconds for each solver and instance.

Figure 6 .
Figure 6. Ratio of the duration of each framework step for each instance of the computational study.

Figure 7 .
Figure 7. Relative efficiency gains, where the costs of the initial machine allocation are used as a reference. AUCT represents the efficiency gains of the auction-based framework. CENT represents the efficiency gains of the centralised benchmark solver. (a): Box plot of overall efficiency gains (sample size n = 50, whiskers factor = 1.5 IQR); (b): Box plot of efficiency gains for each cost component of all instances (sample size n = 300, whiskers factor = 1.5 IQR).

Figure 8.
Figure 8. Distribution of efficiency gains clustered by agent behaviour. The data is clustered in three groups: (a) Parts requested only (n = 479): machines which submitted parts but did not win a bid. (b) Parts received only (n = 628): agents which won bids but did not submit parts to the auctioneer. (c) Parts requested & received (n = 2300): machines which submitted and received parts from the CMfg platform. Whiskers factor for box plots = 1.5 IQR.

Table 1 .
Notation of the framework.
b ∈ B: set of bundles
i ∈ I: set of machines/agents
j ∈ J: set of batches (J ⊂ N)
j' ∈ J* = J ∪ {0}: set of batches' successors
l ∈ L: set of regions
m ∈ M: set of materials (M ⊂ N)
p ∈ P: set of parts
P_b: set of parts within bundle b
P_i: set of parts allocated to machine i
P_bb' = P_b ∪ P_b'
P^π_p: set of parts which precede part p in order π
π ∈ Π_i: set of orderings of machine i
α ∈ R+_0: height weighing factor for fitness
β ∈ R+_0: transport weighing factor for fitness
γ ∈ R+_0: materials weighing factor for fitness
δ ∈ R+_0: inventory weighing factor for fitness
ε ∈ R+_0: threshold margin
ζ ∈ N_0: number of merges for level
θ ∈ N_0: number of bundles of previous level
λ_bb' ∈ R+_0: fitness of combined bundles b and b'
μ_bb' ∈ N_0: number of materials in bundles b and b'
μ_P ∈ N_0: number of materials of parts in set P
ν_bb' ∈ N_0: number of parts in bundles b and b'
… ∈ R+_0: setup time for material change
t_p ∈ R+_0: delivery date of part p
v_p ∈ R+_0: selling price of part p
w_las ∈ R+_0: speed of laser melting per volume
w_pow ∈ R+_0: duration of powder process per layer
x_p ∈ R+_0: x position of part p
y_p ∈ R+_0: y position of part p
z_p ∈ R+: end production date of part p
w3_pp' ∈ B: variable for collision detection
w4_pp' ∈ B: variable for collision detection
A_mp ∈ B: unity if part p requires material m
C_bi ∈ R+_0: bid of machine i for bundle b
D_bp ∈ B: unity if part p is in bundle b
E_jm ∈ B: unity if batch j requires material m
F_jp ∈ B: unity if part p is in batch j
G_ll' ∈ R+_0: delivery time from region l to region l'
G^i_p ∈ R+_0: delivery time of part p from machine i
M_bb' ∈ B: unity if bundle b is merged with bundle b'
N_lp ∈ B: unity if part p is shipped to region l
Q_bi ∈ B: unity if machine i wins bundle b
S_jj' ∈ B: unity if material differs from batch j to j'
V_opu ∈ B: unity if part p is on position u in o
W^oo'_pu ∈ B: unity if position u of part p differs from sequence o to sequence o'
Y_jj' ∈ B: unity if batch j is produced after j'

Table 2 .
An example of a 3-player game.

Table 3 .
Exact solution of the Shapley value for a 3-player game.

Table 5 .
Determined costs for each instance for each algorithm. Sites and Parts list the key figures for each instance. Bold values mark the solutions with the lowest costs for each row. Column GAP indicates the deviation from the best result of the respective instance.

Table 6 .
Solution time in seconds for all algorithms. Notes: S1, S2, S3, S4, S5 represent the maximum time taken by an agent during the specific step of the framework. Column SUM gives the overall duration of the auction-based framework. Bold values indicate the best result for each instance.

Table 7 .
Key figures of instances.

Table 8 .
Mean costs and ratio for each algorithm and for each cost component.
Notes: INI: costs of initial production plan without collaboration; CENT: costs of plan solved by centralised GA-based algorithm; AUCT: costs of decentralised auction framework; PROD: production costs; TRANS: transport costs; SET: setup costs; INV : inventory costs.

Table 9 .
Key figures of efficiency gains of auction-based (AUCT) and centralised algorithm (CENT) in percentage values.Mean lists the mean value of the scenario per solution approach, while Std lists the standard deviation of the efficiency gains.Gap represents the mean efficiency gap between AUCT and CENT.p-value indicates the statistical significance for a one-sided two-sample Mann-Whitney U test.Rows SCEN list the results of scenarios 1-6, while COMP represents the efficiency gain for each component of all scenarios combined (PROD: production costs, TRANS: transport costs, SET: setup costs, INV : inventory costs).