Exergetic Port-Hamiltonian Systems: Modelling Basics

Port-Hamiltonian systems theory provides a structured approach to modelling, optimization and control of multiphysical systems. Yet, its relationship to thermodynamics remains unclear. The Hamiltonian is traditionally thought of as energy, although its proper interpretation is exergy. This insight yields benefits: 1. Links to the GENERIC structure are identified, making it relatively easy to borrow ideas from a popular framework for nonequilibrium thermodynamics. 2. The port-Hamiltonian structure combined with a suitable bond-graph syntax is expected to become a main ingredient in thermodynamic optimization methods akin to exergy analysis and beyond. The intuitive nature of exergy and the diagrammatic language facilitate the interdisciplinary communication that is necessary for implementing sustainable energy systems and processes. Port-Hamiltonian systems are cyclo-passive, meaning that a power balance equation immediately follows from their definition. For exergetic port-Hamiltonian systems, cyclo-passivity is synonymous with degradation of energy and follows from the first and the second law of thermodynamics being encoded as structural properties.


Energy versus exergy
Energy is the most famous conserved quantity and serves as a lingua franca throughout physics and beyond. However, analysis of technical systems based on the first law of thermodynamics alone (energy analysis) is often unhelpful or even misleading because the quality of the energy that is exchanged between components or subsystems is not taken into account [1]. For instance, 100 W of heating power can be obtained from 100 W of electric power, but the same heating power cannot be used to again generate 100 W of electric power, even if an ideal engine without losses is assumed. This is due to the second law of thermodynamics, which states that entropy (microscopic disorder) can only be produced, never destroyed. Because of this irreversible degradation, energy should not be regarded as a resource, if a resource is understood as something with the potential to cause change that is observable at the macroscopic level.
In contrast, exergy [2] (also referred to as available energy (of body and medium) [3][4][5] and availability [6]) takes energy quality into account. For instance, 100 W of electric power can be fully utilized to do work, assuming an ideal engine. Thus, electric power can be understood as an energy rate (energetic power) as well as an exergy rate (exergetic power). In contrast to that, the amount of work which can be obtained from a 100 W heat source, again assuming an ideal engine, is bounded by the Carnot efficiency and depends not only on the source temperature but also on the environment temperature. If the source has a temperature of 375 K and the environment has a temperature of 300 K, then the Carnot efficiency is (375 K − 300 K) / 375 K = 0.2. Consequently, the energetic power of 100 W amounts to 20 W of exergetic power, meaning that no more than 20 W worth of work can be generated from it in the given environment. If this heat source were an electric heater, then its exergy destruction rate would consequently be 80 W.
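The arithmetic above can be checked directly; a minimal sketch with the numbers from the text (function and variable names are ours):

```python
# Worked example (ours): a 100 W heat source at 375 K in a 300 K environment.
def carnot_efficiency(t_source, t_env):
    # maximum fraction of heat convertible to work by an ideal engine
    return (t_source - t_env) / t_source

def exergetic_power(heat_rate, t_source, t_env):
    # exergy rate: the part of the heat rate that could ideally be turned into work
    return heat_rate * carnot_efficiency(t_source, t_env)

eta = carnot_efficiency(375.0, 300.0)        # 0.2
ex = exergetic_power(100.0, 375.0, 300.0)    # 20.0 W
destroyed = 100.0 - ex                       # 80.0 W for an electric heater
print(eta, ex, destroyed)
```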

Exergy analysis and thermodynamic optimization
Exergy destruction rates (also called irreversibility rates) are instrumental for thermodynamic optimization [1,7,8,9]. Loss of exergy is proportional to production of entropy, with the proportionality factor being the environment temperature. Hence, the Exergy Analysis method (in its original form) compares the system under study to its reversible counterpart. In particular for heat engines, operation in the reversible limit implies infinitesimal heat transfer rates and thus zero power. Consequently, the Carnot efficiency is a limit which is never attained in applications.
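This proportionality is often called the Gouy-Stodola relation: the exergy destruction rate equals the environment temperature times the entropy production rate. A minimal sketch for steady heat flow between two reservoirs (names and numbers are ours, reusing the temperatures from the previous section):

```python
# Sketch (ours) of the Gouy-Stodola relation.
def entropy_production_rate(q_dot, t_hot, t_cold):
    # steady heat flow q_dot [W] from t_hot to t_cold [K] produces entropy at this rate [W/K]
    return q_dot * (1.0 / t_cold - 1.0 / t_hot)

def exergy_destruction_rate(q_dot, t_hot, t_cold, t_env):
    # exergy destruction = environment temperature times entropy production
    return t_env * entropy_production_rate(q_dot, t_hot, t_cold)

s_gen = entropy_production_rate(100.0, 375.0, 300.0)
ex_d = exergy_destruction_rate(100.0, 375.0, 300.0, 300.0)
print(s_gen, ex_d)  # dumping 100 W of 375 K heat into the 300 K environment destroys 20 W of exergy
```

Note that the 20 W destroyed here coincide with the exergetic power of the source from the previous section: letting the heat relax directly into the environment wastes all of it.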
This mismatch between practice and the well-established quasistatic reversible theory prompted the emergence of Finite-Time Thermodynamics (FTT) in the mid-1970s, see [10] for a review. The central goal of FTT is to establish more realistic performance limits for the operation of systems featuring irreversible processes under finite-time constraints. In this case, different objective functions generally lead to different optima. However, all interesting objectives make some trade-off between minimizing exergetic losses (reversible limit) and maximizing power ('free-fuel limit'). In the big picture, the latter extreme pays off only when utilizing exergy that otherwise would be lost (e.g. solar energy and low-temperature heat). Regarding consumption of carbon-based resources, ecological criteria are of utmost relevance. The FTT literature is almost completely confined to simple (endoreversible) models [11,12] that consider only very few irreversible processes. In this way, performance bounds are computed for different ecological and economic objectives using variational methods and in particular (averaged) optimal control theory. This has led to general insight and principles [13] but a considerable gap between theory and applications still remains.

Purpose of this research
Human energy systems and industrial processes urgently need to achieve higher total efficiencies while shifting dependencies to renewable resources. This requires integration of different energy domains and a high level of interconnection (with prosumers). Ultimately, we need to deal with quite complex networks whose nonlinear transient dynamics are crucial. In fact, this is the case for many applications, from sustainable heat engine technologies to large-scale district heating networks. Thus, the development of a practical framework to support engineering efforts in these directions is of great importance. In particular due to their compositional nature, exergetic port-Hamiltonian systems provide a solid foundation for optimization- and control-oriented modelling of energy systems and processes. Their diagrammatic language helps to formulate, understand and communicate models and optimization goals in interdisciplinary environments. In [14], the use of 'bond graph type of diagrams' has been suggested as an alternative to Grassmann diagrams which are commonly used as a visualization tool for (steady-state) exergy analysis. Linking diagrammatic expressions directly to the physical and mathematical structure of the underlying models makes exergetic port-Hamiltonian systems a powerful tool for (transient) exergy analysis and related thermodynamic optimization methods. Multi-energy systems, regenerative thermal engines and heat pumps, as well as buildings are some interesting application areas which could benefit from this research.

Port-Hamiltonian systems and thermodynamics
In traditional systems theory, building blocks interact by exchanging arbitrary signals. In contrast to that, the essence of port-Hamiltonian systems theory is to endow models of physical systems with a geometric structure, called Dirac structure [15][16][17][18][19], that expresses the exchange of power among system components and possibly across system boundaries. The central structural property of a port-Hamiltonian (sub)system is a power balance equation which relates the stored power, the dissipated power and the supplied power. By definition, the dissipated power is always non-negative, and consequently, the stored power is always less than or equal to the supplied power. This property is referred to as cyclo-passivity or (in a more general context) as cyclo-dissipativity. In the present context, both mean the same. If the storage function (Hamiltonian) is bounded from below (implying finite storage capacity), the system is said to be passive (or dissipative). If the dissipated power is always zero, the system is said to be (cyclo-)lossless, see [20][21][22].
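The power balance can be illustrated with a toy example of our own, a forced mass-spring-damper: the stored power dH/dt equals the supplied power u·v minus the non-negative dissipated power d·v², so the stored energy never grows faster than the supply.

```python
import math

# Cyclo-passivity check for a forced mass-spring-damper (our own toy example):
# dH/dt = u*v - d*v**2, so H(T) - H(0) <= integral of the supplied power u*v.
m, k, d = 1.0, 4.0, 0.5   # mass, stiffness, damping (illustrative values)

def H(q, p):              # Hamiltonian: potential plus kinetic energy
    return 0.5 * k * q**2 + p**2 / (2 * m)

dt, n = 1e-4, 50000
q, p = 1.0, 0.0
supplied = 0.0
H0 = H(q, p)
for i in range(n):
    u = math.sin(0.6 * math.pi * i * dt)   # arbitrary external force
    v = p / m
    supplied += u * v * dt                 # integral of supplied power
    q, p = q + dt * v, p + dt * (-k * q - d * v + u)
stored = H(q, p) - H0
print(stored <= supplied)   # dissipation turns the balance into an inequality
```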
As in classical Hamiltonian mechanics, the storage function of a port-Hamiltonian system is traditionally thought of as an 'energy' function. Hence, the power balance equation should be of energetic type and 'dissipated power' should refer to the rate at which 'energy' is lost due to phenomena such as mechanical damping or electrical resistance. Such use of language is clearly at odds with the first law of thermodynamics.
Many popular applications of port-Hamiltonian systems are confined to the electromechanical realm, where internal energy (a macroscopic abstraction of mechanical energy at the microscopic level) does not affect the dynamics of interest. In [23] it is stated that for isothermal systems the Hamiltonian represents 'free energy' which can be lost. Indeed, in the isothermal case, the three concepts Helmholtz free energy, Gibbs free energy and exergy are closely related, see Section 4.1.
Following inspiration from bond-graph modelling, previous attempts to include thermal phenomena in the port-Hamiltonian framework relied on non-linear power-continuous transformers. In [24], a port-Hamiltonian model of thermal conduction in a solid is presented. Entropy is used as a state variable and the internal energy is accounted for in the energetic Hamiltonian. Thermal conduction is understood as a power-continuous energy transformation process which produces entropy. According to the first law of thermodynamics, this must lead to lossless systems. In this case, the port-Hamiltonian structure does not encode that the dynamics is severely constrained by the second law of thermodynamics. In other words, degradation of energy and its implication on stability does not manifest in the port-Hamiltonian structure.
A source of possible confusion is that the 'Hamiltonian' of a dissipative port-Hamiltonian system not only generates a Hamiltonian dynamics but also a dissipative gradient dynamics. In thermodynamics, dissipation is synonymous with entropy production. Therefore, it is not surprising that entropy appears next to energy in the exergetic Hamiltonian. For exergetic port-Hamiltonian systems, the systems-theoretic meaning of dissipativity [20] agrees with the thermodynamic meaning.

Related frameworks
Later attempts to properly unify port-Hamiltonian systems with thermodynamics diverged into three distinct frameworks: Firstly, just like exergetic port-Hamiltonian systems, Irreversible Port-Hamiltonian Systems [25] use the extensive thermodynamic variables as state variables but their structure is significantly different. The modification is necessary to encode not only the first but also the second law of thermodynamics while sticking with the total energy as the Hamiltonian function. Secondly, contact geometry is a natural setting for thinking about Legendre transformations, which have been used in equilibrium thermodynamics since [26]. The contact-geometric approach has been extended to nonequilibrium thermodynamics and open systems, see e.g. [27]. The core idea is to enlarge the state space such that it also includes the intensive variables. The dynamics are then restricted to a Lagrangian submanifold which is generated by a thermodynamic potential and thus expresses material properties. For one and the same thermodynamic system, there are two contact-geometric descriptions, namely one where energy (or a Legendre transformation of it) and one where entropy (or a Legendre transformation of it) is used as the generating function of the Legendre submanifold. Thirdly, Port-Thermodynamic Systems [28] are based on a symplectization of the contact-geometric description. By adding one more dimension to the state space, energetic and entropic representations can be expressed simultaneously as projectivizations. A comparison of the advantages and (current) limitations of the different frameworks is missing in the literature and is also beyond the scope of the present article. Yet, the order in which we listed the three frameworks reflects a trend of adding more geometric structure and in the two latter cases also more redundant state variables. While this may be advantageous for certain purposes, it has drawbacks as well.
Successful application of a modelling framework critically depends on how easily it can be picked up by practitioners. Exergetic port-Hamiltonian systems shine because of their relative simplicity and their readily available diagrammatic language. This fits one of our main research goals, namely to develop a framework which can form an adequate basis for various near-term engineering efforts to tackle the sustainability crisis.

The GENERIC framework
In the 1980s, some researchers started to combine reversible Hamiltonian dynamics with dissipative gradient dynamics [29][30][31][32]. The resulting framework for nonequilibrium thermodynamics has later been termed GENERIC, an acronym for General Equation for Non-Equilibrium Reversible-Irreversible Coupling [33,34]. After the appearance of many articles and two monographs [35,36], its active development continues.
Thermodynamic systems consist of an extremely large number of constituents and therefore can be seen at multiple scales. At the microscopic scale, their governing equations are widely believed to be invariant under time-reversal transformation [37], and the Hamiltonian formalism is a natural choice to express the reversible energy exchange between kinetic and potential energy domains of the numerous constituents. Despite this microscopic reversibility, the dynamics turn out to be biased at a more macroscopic scale: an isolated system relaxes and thereby approaches its equilibrium state, which maximizes entropy. In some sense, entropy arises because of the uncertainty (incomplete information) regarding the microscopic state. Entropy production can be seen from the information perspective as a dynamic maximally-unbiased (MaxEnt) estimate given only knowledge about mesoscopic/macroscopic quantities [38,39].
The relaxation processes can be modelled directly at a more macroscopic scale as (generalized) gradient dynamics. This requires three ingredients: 1. An adequate choice of state variables to describe the system at the desired scale. 2. An entropy function which tends to its constrained maximum during the approach to equilibrium. 3. A dissipation potential which yields the constitutive relations describing the relaxation processes, see e.g. [40]. Gradient dynamics uses quadratic potentials, whereas generalized gradient dynamics uses non-quadratic potentials, see [41] for a statistical motivation. Gradient dynamics is essentially equivalent to Linear Irreversible Thermodynamics (LIT) [41]. In LIT, thermodynamic fluxes (such as heat flux) depend linearly on thermodynamic forces (such as temperature differences). However, the linear relations (such as Fourier's law) may depend arbitrarily on the state (as in the case of temperature-dependent thermal conductivity). A large class of relaxation phenomena (including irreversible transport phenomena) can be modelled using gradient dynamics/LIT, see for instance [42]. Exceptions are thermodynamic systems which are so far from equilibrium that the concept of temperature loses its meaning, or in other words, systems for which a local equilibrium assumption cannot be made. Some of them can be modelled by the Boltzmann equation. In this case, the GENERIC formulation hinges on a non-quadratic dissipation potential [33]. Alternatively, constitutive relations of irreversible processes may be stated in the even more general quasi-linear form. This amounts to the choice of a symmetric, positive semidefinite linear operator (called dissipation operator) which may depend on the system's state and on the differential of the entropy function with respect to the state variables, see [43]. The quasi-linear form is equivalent to (generalized) gradient dynamics if the dissipation operator fulfils an integrability condition [40].
If the operator does not depend on the differential of the entropy function, this condition is trivially satisfied and the resulting relations are essentially equivalent to LIT [35]. Since we are going to use internal energy as a thermodynamic potential and consequently entropy as a state variable (see in particular Example 5.2), the total entropy function is the sum of the entropy state variables and therefore its differential is constant. Thus, we may consider gradient dynamics, quasi-linear relations, and LIT as essentially equivalent.
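As a minimal illustration of gradient dynamics in the LIT regime, consider a toy model of our own: heat conduction between two lumped heat capacities via a Fourier-type linear flux. Energy is conserved while total entropy can only grow.

```python
import math

# Toy model (ours) of gradient dynamics / LIT: two lumped heat capacities
# exchanging heat via the linear flux q = kappa * (T1 - T2).
# First law: energy conserved. Second law: total entropy increases.
C, kappa, dt = 1.0, 0.3, 1e-3   # illustrative heat capacity, conductance, time step
T1, T2 = 375.0, 300.0

def total_entropy(T1, T2):
    # lumped ideal bodies: S_i = C * ln(T_i) up to an additive constant
    return C * math.log(T1) + C * math.log(T2)

E0, S0 = C * (T1 + T2), total_entropy(T1, T2)
for _ in range(20000):
    q = kappa * (T1 - T2)   # heat flows from hot to cold
    T1 -= dt * q / C
    T2 += dt * q / C
E1, S1 = C * (T1 + T2), total_entropy(T1, T2)
print(abs(E1 - E0), S1 - S0)   # ~0 and > 0, respectively
```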
According to [37], the GENERIC fixes a splitting: The Hamiltonian dynamics have to be invariant under time-reversal transformation, and they must conserve entropy. The (generalized) gradient dynamics may not be time-reversal invariant, they must conserve energy, and they must be dissipative. Both contributions have to conserve mass and volume. The GENERIC framework guides the modelling process and asserts thermodynamic consistency of evolution equations. Some progress has been made to derive structure-preserving integration methods [44] and to extend the framework to open systems using ideas from port-Hamiltonian theory [45,46].

Port-Hamiltonian systems and exergy
It was realized in [47] that exergy can be used as a storage function for passivity-based control. This idea has been picked up several times in the literature on port-Hamiltonian systems, see e.g. [48,49]. However, exergy was not used as the Hamiltonian generating the dynamics, but as an additional quantity used for control design.
In [50] it was shown how the GENERIC formulation of a compressible fluid can be rewritten as a port-Hamiltonian system by using an exergy-like Hamiltonian and by factorizing the dissipation operator. The approach was used in [51] for modelling of district heating networks. Since both the GENERIC and the port-Hamiltonian framework combine Hamiltonian and gradient dynamics, it is not too surprising that such a reformulation is possible.

Contribution
The result in [50] suggests that the port-Hamiltonian framework may be linked to the GENERIC by using exergy as a Hamiltonian function. We continue to investigate this idea more deeply. In doing so, we arrive at a physically sound interpretation of dissipativity in the context of classical (i.e. isothermal) port-Hamiltonian systems. Furthermore, we start with the development of a thermodynamic modelling framework: Exergetic port-Hamiltonian systems borrow from the rich thermodynamic theory of the GENERIC framework and combine it with the port-Hamiltonian structure that is well suited for interconnection, optimization and control. In contrast to the result in [50], the framework does not rely on the factorization of the dissipation operator in the GENERIC. Instead, it is based on a refined definition of resistive structure that is in agreement with thermodynamic theory. Throughout, we showcase the diagrammatic representation of exergetic port-Hamiltonian systems based on a bond-graph syntax.

Assumptions and current limitations
The framework is inherently limited to systems for which the local equilibrium assumption can be made. It thus seems adequate to limit ourselves to quadratic dissipation potentials and the perspective of Linear Irreversible Thermodynamics.
In this article, we restrict ourselves to the finite-dimensional (lumped-parameter) setting. Further, the examples in this work do not include systems with mass transfer or chemical reactions. Despite making extensive use of a bond-graph syntax, we defer its precise definition to later.

Outline
In Section 2 we state the relevant definitions. In Section 3 we elaborate on the physical meaning of exergy. In Section 4 we show that the present framework seamlessly extends classical port-Hamiltonian theory, which (implicitly) assumes equilibrium with an isothermal environment. In Section 5 we concern ourselves with the modelling of nonisothermal systems. In Section 6 we state our conclusions.

Terminology and notation
We always use the word 'energy' in the thermodynamic sense. We use Latin letters for extensive quantities and lowercase Greek letters for intensive quantities. In particular, we use u for internal energy, s for entropy, θ for temperature, v for volume, π for pressure, m for mass, and μ for chemical potential. Uppercase U, S, etc. denote corresponding potential functions. We use N for total mass because M is used for the dissipation operator in the GENERIC. A system is called closed if mass (of every type of atom) is constant. It is called isolated if no exchange of energy and mass is possible across its boundaries. A system or process is called isochoric/isothermal/isobaric if volume/temperature/pressure is constant.
For tensorial quantities, we use (abstract) index notation with Einstein's convention: indices of contravariant slots are written as superscripts and indices of covariant slots as subscripts. Repeated indices (up-down pairs) imply contraction. With X a smooth manifold, TX denotes the tangent bundle and T*X the cotangent bundle over X. We write A → X for a general vector bundle with total space A and base space X. When the latter is clear from the context, we just write A. For vector bundles A → X and B → X, A ⊕ B is the vector bundle over X where ∀x ∈ X: (A ⊕ B)_x = A_x ⊕ B_x. The set of all sections of A is denoted by Γ(A). Given a contravariant 2-tensor field L ∈ Γ(TX ⊗ TX), the sharp map L^♯ : T*X → TX is the (curried) function defined by ∀α, β ∈ Γ(T*X): (L^♯(α))(β) = L(α, β). Dually, the flat map ω^♭ : TX → T*X corresponding to a covariant 2-tensor field ω ∈ Γ(T*X ⊗ T*X) is a bundle map from the tangent to the cotangent bundle. Its name derives from the fact that, in index notation, it lowers the up-index of a tangent vector X^j into the down-index of the covector α_i = ω_{ij} X^j.
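In coordinates, the sharp and flat maps are plain contractions with the tensor components; a small numerical sketch of ours on R^3 (all matrices are illustrative):

```python
import numpy as np

# Coordinate illustration (ours) of the sharp and flat maps on R^3:
# L^# sends a covector to a vector, alpha_j -> L^{ij} alpha_j,
# and omega^b lowers an index, X^j -> omega_{ij} X^j.
L = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])   # antisymmetric 2-tensor
omega = np.diag([2., 3., 5.])                               # symmetric 2-tensor

def sharp(L, alpha):   # L^#: T*X -> TX
    return L @ alpha

def flat(omega, X):    # omega^b: TX -> T*X
    return omega @ X

alpha = np.array([1., 0., 2.])
X = np.array([1., 1., 1.])
print(sharp(L, alpha), flat(omega, X))
```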

Fundamental definitions
To streamline the following presentation, in this section we state suitable definitions of GENERIC and port-Hamiltonian systems and their underlying geometric structures.

Definitions related to the GENERIC framework
Symplectic structures are quintessential in Hamiltonian mechanics. Poisson structures are more general, allowing a type of degeneracy that encodes conserved quantities other than energy. For details we refer to [52,53].

Definition 2.1 (Poisson structure). Let X be a state manifold and let f, g, h ∈ C^∞(X) be arbitrary smooth functions (observables) on X. A Poisson structure on X is a bilinear and antisymmetric map {·, ·} : C^∞(X) × C^∞(X) → C^∞(X), called Poisson bracket, which fulfils the Jacobi identity {{f, g}, h} = {{f, h}, g} + {f, {g, h}} and the Leibniz rule {fg, h} = f{g, h} + g{f, h}.
A Poisson structure on X makes the R-vector space of smooth functions C^∞(X) into an R-algebra. This so-called Poisson algebra is an abstract Lie algebra. For some fixed h ∈ C^∞(X), the bracket defines the map X_h := {·, h} : C^∞(X) → C^∞(X). Vector fields are (isomorphic to) derivations on the commutative R-algebra of smooth functions C^∞(X) with pointwise multiplication. The Leibniz rule says that for some fixed h we have X_h(f · g) = f · X_h(g) + X_h(f) · g and thus it asserts that X_h is a vector field.
The Leibniz rule also implies that for some f, g ∈ C^∞(X), their bracket {f, g} depends only on the differentials df, dg ∈ Γ(T*X). It follows that the Poisson bracket can be defined in terms of an antisymmetric contravariant 2-tensor field L ∈ Γ(TX ∧ TX) like so: {f, g}|_x = (∂f/∂x^i) L^{ij}(x) (∂g/∂x^j). In terms of the Poisson bivector (field) L, the Jacobi identity can be expressed as L^{il} ∂L^{jk}/∂x^l + L^{jl} ∂L^{ki}/∂x^l + L^{kl} ∂L^{ij}/∂x^l = 0. The Jacobi identity also implies that the map C^∞(X) → Γ(TX), h ↦ X_h is a Lie algebra antihomomorphism from the Poisson algebra (of generating functions) to the Jacobi-Lie algebra of (Hamiltonian) vector fields, i.e. {f, h} ↦ [X_h, X_f].
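The coordinate formula for the bracket can be probed numerically. The following sketch of ours checks antisymmetry and the Leibniz rule for the canonical constant bivector on R^2 using finite-difference gradients (all functions and points are arbitrary choices):

```python
import numpy as np

# Numerical sketch (ours) of {f,g}|_x = (df/dx^i) L^{ij} (dg/dx^j) on R^2
# with the canonical constant Poisson bivector.
L = np.array([[0., 1.], [-1., 0.]])

def grad(f, x, h=1e-6):
    # central finite differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def bracket(f, g, x):
    return grad(f, x) @ L @ grad(g, x)

f = lambda x: x[0]**2          # q^2
g = lambda x: x[1]             # p
h_ = lambda x: x[0] * x[1]     # q*p
fg = lambda x: f(x) * g(x)
x = np.array([1.5, -0.5])

# antisymmetry and the Leibniz rule, checked numerically
print(bracket(f, g, x) + bracket(g, f, x))                                         # ~0
print(bracket(fg, h_, x) - (f(x) * bracket(g, h_, x) + g(x) * bracket(f, h_, x)))  # ~0
```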

Definition 2.3 (Hamiltonian system). A Hamiltonian system is a triple (X, {·, ·}, H) where X is the state space, {·, ·} defines the Poisson structure on X, and H ∈ C^∞(X) is the Hamiltonian. The state evolves according to ẋ^i = L^{ij}(x) ∂H/∂x^j.

Due to antisymmetry, Ḣ = {H, H} = 0, i.e. the Hamiltonian is conserved. If L^♯ : T*X → TX is degenerate, there exist distinguished observables C^k ∈ C^∞(X) called Casimir functions such that for all x ∈ X we have L^{ij}(x) ∂C^k/∂x^j = 0. Any function of Casimirs is a (dependent) Casimir. In particular, this holds for the Poisson bracket of two Casimirs. Locally, the number of independent Casimirs is equal to the dimension of the kernel of L^♯(x). Since Ċ^k = {C^k, H} = 0 for any generating function H, these conserved quantities are referred to as structural invariants.
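A classic illustration of a degenerate Poisson structure (our choice, not taken from the text) is the rigid-body bracket on R^3, whose bivector L^{ij}(m) = −ε^{ijk} m_k has the Casimir C(m) = |m|²/2:

```python
import numpy as np

# The rigid-body Poisson bivector on R^3 (a standard example, ours to illustrate
# Casimirs): L^{ij}(m) = -eps^{ijk} m_k, with Casimir C(m) = |m|^2 / 2.
def L(m):
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

m = np.array([0.3, -1.2, 0.7])
dC = m   # differential of C(m) = |m|^2 / 2
# L^#(dC) = 0: the Casimir is conserved under the dynamics of *any* Hamiltonian
print(L(m) @ dC)
```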

Example 2.4 (Harmonic oscillator). Let X = R² with canonical position and momentum coordinates x = (q, p) and let the Poisson bivector be L = ∂/∂q ∧ ∂/∂p. In these coordinates, the bivector defines a constant Poisson structure on X for which the Jacobi identity is trivially satisfied. The Hamiltonian H : X → R, H(x) = q²/(2c) + p²/(2m) represents the system's energy. The constant c is the compliance of the spring and m is the mass.
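A short simulation of ours confirms conservation of the Hamiltonian; the parameter values are arbitrary:

```python
# Quick numerical check (ours) that H = q^2/(2c) + p^2/(2m) is conserved
# along the canonical dynamics qdot = p/m, pdot = -q/c.
c, m = 0.25, 1.0

def H(q, p):
    return q**2 / (2 * c) + p**2 / (2 * m)

dt = 1e-3
q, p = 1.0, 0.0
H0 = H(q, p)
for _ in range(100000):
    # leapfrog (symplectic), so the energy error stays bounded instead of drifting
    p -= 0.5 * dt * q / c
    q += dt * p / m
    p -= 0.5 * dt * q / c
print(abs(H(q, p) - H0) / H0)   # tiny relative deviation
```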
To define reversibility and irreversibility, we consider a transformation applied to the equations of motion, which primarily reverses time but also flips the sign of state quantities having odd parity. The concept of even and odd parities is purely axiomatic. It reflects our expectation that certain quantities, like velocities and momenta, flip their sign/direction (odd parity, −1) when a recording is suddenly played backwards, whereas most other quantities, like energy, entropy, configuration, pressure and temperature, momentarily stay the same (even parity, +1).

Definition 2.5 (Time-reversal transformation). Time-reversal transformation TRT is defined as TRT(y)(t) = P(y) y(−t), where P(y) ∈ {−1, +1} denotes the constant parity of the quantity y.
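Assuming the parity-wise definition above, TRT can be illustrated numerically with the oscillator of Example 2.4 (a sketch of ours): applying TRT to a solution yields again a solution, so integrating from the time-reversed end state recovers the time-reversed initial state.

```python
# Numerical illustration (ours) of time reversal for the harmonic oscillator:
# q has even parity (+1), p has odd parity (-1).
c, m, dt, n = 0.25, 1.0, 1e-3, 20000   # illustrative parameters

def integrate(q, p, steps):
    # leapfrog is a time-symmetric integrator, so this discrete check
    # holds exactly up to floating-point roundoff
    for _ in range(steps):
        p -= 0.5 * dt * q / c
        q += dt * p / m
        p -= 0.5 * dt * q / c
    return q, p

q0, p0 = 1.0, 0.3
qT, pT = integrate(q0, p0, n)
qb, pb = integrate(qT, -pT, n)       # TRT: keep q, flip the odd-parity p
print(abs(qb - q0), abs(pb + p0))    # both ~0: the dynamics is reversible
```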
Theorem 2.6 (Reversibility of Hamiltonian dynamics). Evolution equations of a Hamiltonian system ẋ^i = L^{ij}(x) ∂H/∂x^j with a TRT-invariant Hamiltonian are reversible (i.e. TRT-invariant) if and only if P(L^{ij}) = −P(x^i) P(x^j).

Proof. The condition follows from requiring equality of the original evolution equation and the transformed one. For arbitrarily fixed indices i, j, applying TRT to the left-hand side yields −P(x^i) ẋ^i(−t), since the parities are constant, and applying TRT to the right-hand side yields P(L^{ij}) P(x^j) L^{ij}(x(−t)) ∂H/∂x^j(x(−t)). Thus, the transformed equations are equal to the original equations if and only if P(L^{ij}) P(x^j) = −P(x^i), i.e. P(L^{ij}) = −P(x^i) P(x^j). Regarding Example 2.4, the condition holds because for a constant L we have even parities P(L^{ij}) = 1 and P(q) = 1, P(p) = −1. For more details we refer to [37].

Definition 2.7 (Gradient structure). Let X be a state manifold. A gradient structure on X is a dissipation potential Ψ : T*X → R of the form Ψ(x, x*) = (1/2) x*_i M^{ij}(x) x*_j, where the dissipation operator M ∈ Γ(TX ⊗ TX) is a symmetric, positive semidefinite contravariant 2-tensor field. Thus, the dissipation potential Ψ is quadratic and convex in x* ∈ T*_x X.
Definition 2.8 (Gradient system). A gradient system is a triple (X, Ψ, S) where X is the state space, Ψ defines the gradient structure on X, and S ∈ C^∞(X) is the entropy function. The state evolves according to ẋ^i = M^{ij}(x) ∂S/∂x^j. Due to positive semidefiniteness of M, Ṡ ≥ 0, i.e. the dynamics is dissipative.
If the dissipation operator M^♯ : T*X → TX is degenerate, there exist conserved quantities called 'metric Casimirs' [40]. For details regarding (generalized) gradient dynamics we refer to [36,41] and references therein.

Theorem 2.9 (Irreversibility of gradient dynamics). Evolution equations of a gradient system ẋ^i = M^{ij}(x) ∂S/∂x^j with a TRT-invariant entropy function are not TRT-invariant (i.e. irreversible) if and only if P(M^{ij}) ≠ −P(x^i) P(x^j).
Proof. The proof proceeds along the same lines as the proof of Theorem 2.6, except that inequality of the original and the transformed evolution equations is required.

Definition 2.10 (GENERIC system). A GENERIC system is a 5-tuple (X, {·, ·}, Ψ, E, S) where X is the state space, {·, ·} defines the Poisson structure, Ψ defines the gradient structure, E ∈ C^∞(X) is the energy function, and S ∈ C^∞(X) is the entropy function. The Hamiltonian system (X, {·, ·}, E) is required to be reversible according to Theorem 2.6 and the gradient system (X, Ψ, S) is required to be irreversible according to Theorem 2.9. The entropy function must be a symplectic Casimir, i.e. L^{ij} ∂S/∂x^j = 0, making the Hamiltonian dynamics non-dissipative. The energy function must be a metric Casimir, i.e. M^{ij} ∂E/∂x^j = 0, making the gradient dynamics energy-conserving. Further, total mass N and volume V must be both symplectic and metric Casimirs. The state evolves according to ẋ^i = L^{ij}(x) ∂E/∂x^j + M^{ij}(x) ∂S/∂x^j.
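A minimal GENERIC toy model (our own construction, not from the cited references) is a damped oscillator whose dissipated kinetic energy heats an internal-energy reservoir. One can check numerically that energy is conserved while entropy grows:

```python
import math

# Toy GENERIC system (our construction): state x = (q, p, s),
# E(x) = q^2/(2c) + p^2/(2m) + u(s), S(x) = s, with the ansatz u(s) = cv*exp(s/cv),
# so that theta = u'(s) is the temperature.
# Reversible part: canonical Poisson bivector applied to dE (note L dS = 0).
# Irreversible part: M = (gamma/theta) v v^T with v = (0, theta, -p/m), which is
# symmetric, positive semidefinite and satisfies M dE = 0 (energy-conserving).
c, m, gamma, cv = 0.25, 1.0, 0.4, 1.0   # illustrative parameters

def theta(s):
    return math.exp(s / cv)

def E(q, p, s):
    return q**2 / (2 * c) + p**2 / (2 * m) + cv * math.exp(s / cv)

def rhs(q, p, s):
    th, v = theta(s), p / m
    dq = v                        # reversible: (L dE)_q
    dp = -q / c - gamma * v       # reversible force plus irreversible friction (M dS)_p
    ds = gamma * v * v / th       # entropy production (M dS)_s >= 0
    return dq, dp, ds

q, p, s = 1.0, 0.0, 0.0
E0, S0, dt = E(q, p, s), s, 1e-4
for _ in range(100000):
    dq, dp, ds = rhs(q, p, s)
    q, p, s = q + dt * dq, p + dt * dp, s + dt * ds
print(abs(E(q, p, s) - E0), s - S0)
```

In continuous time, dE/dt = 0 holds exactly because the frictional power γv² re-enters as θ·ṡ; the small residual printed above is only the explicit Euler discretization error.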

Definitions related to port-Hamiltonian systems
In the case of a Poisson bivector L ∈ Γ(TX ∧ TX), degeneracy of L^♯ : T*X → TX is related to conserved quantities, whereas in the case of a presymplectic form ω ∈ Γ(T*X ∧ T*X), degeneracy of ω^♭ : TX → T*X is related to algebraic constraints. Dirac structures combine both directions/features, see [15,54]. Further, their definition may involve vector bundles more general than TX ⊕ T*X, allowing for interconnection of systems in the port-Hamiltonian framework, see [17,19,55,56]. Instead of referring to (components of) tangent and cotangent vectors, one speaks more generally of flows and efforts. Dirac structures admit various representations. We base the following definition on a particular one, namely the hybrid input-output representation [57,58].
Definition 2.11 (Dirac structure). Let X be a manifold. Let F → X be a vector bundle which may have (a subbundle of) TX as a subbundle and let E be the dual bundle of F. A Dirac structure D on F → X is a subbundle of F ⊕ E admitting the following representation (after partitioning the components of F and correspondingly E into index sets A and B): for every x ∈ X, (f, e) ∈ D(x) if and only if (f_A, e_B) = J(x) (e_A, f_B), with J(x) a skew-symmetric linear map.
Compared to the kernel representation, this representation is biased in the sense that flows f ∈ F and efforts e ∈ E are partitioned into 'inputs' e_A, f_B and 'outputs' f_A, e_B. This makes it suitable for encoding computational causality, see Remark 4.3. Dirac structures model a power-conserving interconnection of system components since the net power e_i f^i vanishes due to skew-symmetry of J(x). Integrability of Dirac structures is discussed in [15,17,18,54].

Definition 2.12 (Resistive structure). Let X be a manifold. Let F → X be a vector bundle and let E be the dual bundle of F. A resistive structure R on F → X is a subbundle of F ⊕ E admitting the following representation: for every x ∈ X, (f, e) ∈ R(x) if and only if f^i = R^{ij}(x) e_j, with R a contravariant symmetric positive semidefinite 2-tensor (field).
Consequently, the dissipated power e_i f^i is always non-negative. The definition could be generalized by using a hybrid input-output or kernel representation, but this is not necessary for our purposes.
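Both power properties reduce to simple linear algebra. A sketch of ours with random matrices, collapsing the hybrid partition of the Dirac structure to the simplest case f = J(x) e:

```python
import numpy as np

# Sketch (ours): a skew-symmetric J conserves power (e_i f^i = 0), while a
# symmetric positive semidefinite R dissipates power (e_i f^i >= 0 for f = R e).
rng = np.random.default_rng(0)

J = rng.standard_normal((4, 4))
J = J - J.T                      # skew-symmetric
A = rng.standard_normal((3, 3))
R = A @ A.T                      # symmetric positive semidefinite

e = rng.standard_normal(4)
f = J @ e                        # outputs from inputs via J
print(e @ f)                     # net power e^T J e = 0 by skew-symmetry

e_r = rng.standard_normal(3)
f_r = R @ e_r                    # resistive relation
print(e_r @ f_r)                 # dissipated power is non-negative
```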

Definition 2.13 (Port-Hamiltonian system). A port-Hamiltonian system is a 6-tuple (X, F_R, F_B, D, R, H) where X is the state space, F_R → X is the bundle of resistive flows, F_B → X is the bundle of boundary flows, D is the Dirac structure on TX ⊕ F_R ⊕ F_B, R is the resistive structure on F_R, and H ∈ C^∞(X) is the Hamiltonian. For an isolated system, F_B = X × {0} is a zero vector bundle. For now, we skip a definition of the composition of port-Hamiltonian systems (see [19,55,56]). We also skip a precise definition of the bond-graph syntax that we introduce and use in the following sections.

Exergy and its physical meaning
Work can universally and fully be turned into heat. However, the same does not hold for the reverse direction. For biological life and engineering, production of work is central. Work is a form of energy exchange which has 100% exergy content because all work can do work, according to Newton's third law, Kirchhoff's circuit laws, etc. In a certain sense, work is energy which is under our control, because it is exchanged at 'our' mesoscale. The mental and computational models behind our engineered devices are able to resolve all relevant degrees of freedom which are involved in exchange of work. The first (widely known) study of physical laws which limit production of work was by Carnot [59]. He observed that the passage of 'caloric' (heat) from a high to a low temperature level allows the production of work. His theory was developed further by Thomson (Kelvin). Carnot's theory allowed Thomson to introduce the concept of absolute temperature [60] which in turn allowed him and Joule to give a concrete expression for the amount of work which can be produced by Carnot's ideal engine [61]. Soon after, Clausius formulated the first law of thermodynamics as we know it today [62]. At roughly the same time, Thomson also introduced the concept of available energy [63]. Some years later, Clausius introduced the concept of entropy to efficiently express the second law of thermodynamics [64]. Seventeen years later, Gibbs was able to give a more concrete expression to Thomson's concept of available energy [3]. The concept was further developed by Keenan (who called it availability) [6]. Rant gave the concept the name exergy [2]. Since then, there has been active development of thermodynamic design and optimization methods based on and related to exergy (analysis), see the engineering monographs [1,7,8,65]. With hindsight, we could say that the idea which Carnot had about 'caloric' matured to eventually become the exergy concept. 
For him, 'caloric' was always conserved because he conducted his study by imagining an ideal engine.
As a main takeaway, it is crucial to distinguish between reversible/nondissipative and irreversible/dissipative processes. Exergy is a thermodynamic quantity which is conserved by the former and destroyed by the latter. We can divide exergy components into two kinds, namely pure exergy components (kinetic, potential, magnetic, electric), which can be exchanged purely as work, and so-called physical exergy components (corresponding to internal energy), which are involved in irreversible processes.
The upshot of Thomson's 1852 paper [63] introducing the available energy concept was that eventually all available energy will be destroyed. However, the boundary conditions and expansion rate of the universe are not well known to mankind, and therefore we must not conclude that the physical universe approaches thermodynamic equilibrium [66], which in this context is often referred to as the dead state (of the system) or heat death (of the universe). This uncertainty and the pessimism about life carried by this terminology speak against its use.
Every engine has an underlying operational design which allows it to extract part of its exergy input and turn it into work, according to the intent of its designers. Similarly, biological life can be understood as the interaction of open thermodynamic systems which function based on an 'operational design' (DNA, etc.). This is in stark contrast to other (unmanaged) thermodynamic processes in nature which merely happen spontaneously [67]. Real machines and beings destroy exergy, meaning that they cannot operate completely reversibly. Some level of exergy destruction is required to meet robustness and performance requirements.
The first and second law of thermodynamics are in principle not required to model and simulate physical systems. However, they serve important purposes: On the one hand, they limit the set of physically meaningful governing equations, which provides structure and guidance in the modelling process. Indeed, all models expressible in the GENERIC and the introduced framework are coherent with the two laws. On the other hand, they are of utmost relevance for developing the operational design, which is by no means less important than the underlying physical laws, at least for engineers. The proposed framework informs the design process by clearly indicating how the theoretically available work is lost or used by the system.
In the GENERIC and the present framework, the distinction between reversible and irreversible processes manifests itself in two types of relations, namely Poisson/Dirac and gradient/resistive structures. In the GENERIC, each type has an associated generating function: The reversible dynamics is generated by the (total) energy function and the irreversible dynamics is generated by the entropy function. For exergetic port-Hamiltonian systems, reversible and irreversible aspects are combined into a single storage function. This exergetic Hamiltonian is understood as a multiphysical and systems-theoretical generalization of what Gibbs called the 'available energy of body and medium' [3]. The environment (medium) is an infinite reservoir. It serves as a reference for assessing the potential of the system (body) to do work as it relaxes to equilibrium (with itself and the environment). Since the environment is always in equilibrium (with itself), its exergy content is zero by definition. Thus, we can always consider the environment as part of the system (without changing its Hamiltonian). More concretely, the environment can be understood as an atmosphere whose temperature and pressure remain constant. For many energy systems such as regenerative heat engines or district heating networks, a typical weather condition or ground temperature serves as a natural reference. In summary, the exergy content of a system (including its environment) is the amount of work that can be extracted in the reversible limit before the system reaches its equilibrium state where its exergy content becomes zero.

Reversible heat exchange and the Carnot engine
The maximum amount of work which can be extracted from heat was studied by Carnot [59] using the concept of an ideal heat engine, see e.g. [68] (p. 118). Since the engine executes a fully reversible cycle, it generates no entropy and achieves the highest efficiency which is possible for any heat engine operating between the two given thermal reservoirs. Figure 1 depicts a Carnot engine extracting work from a hot reservoir with constant temperature $\theta_h$. The reversible heat intake per cycle $Q_{in} > 0$ is associated with an entropy intake $s_{in} = Q_{in}/\theta_h > 0$. Cyclic operation implies that entropy cannot accumulate in the engine. Reversible operation implies that no entropy is generated in the engine.
Consequently, the intake of entropy $s_{in}$ is balanced by a discharge of entropy $s_{out} = s_{in}$. This is necessarily associated with a reversible discharge of heat $Q_{out} = \theta_0 s_{out} > 0$ to the ambient reservoir at constant temperature $\theta_0$.
The entropy balance equation $s_{out} = s_{in}$ can thus be written as
$$\frac{Q_{out}}{\theta_0} = \frac{Q_{in}}{\theta_h}.$$
This results in the energy balance equation
$$W_{out} = Q_{in} - Q_{out} = \left(1 - \frac{\theta_0}{\theta_h}\right) Q_{in}.$$
The ratio
$$\eta_{Carnot} = \frac{W_{out}}{Q_{in}} = 1 - \frac{\theta_0}{\theta_h} \tag{9}$$
is called the Carnot efficiency and expresses how much work $W_{out}$ can be obtained for a given heat intake $Q_{in}$ in the reversible limit. Consequently, we call $\dot{W} = \eta_{Carnot}\, \dot{Q}$ the exergetic power of the heat source and note that it is defined relative to a fixed reference temperature $\theta_0$. Heat exchanged at a higher temperature carries more exergy.
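To make the Carnot efficiency, Equation (9), concrete, here is a minimal numeric sketch (the function names and values are our own, purely illustrative) of the efficiency and the resulting exergetic power of a heat source:

```python
def carnot_efficiency(theta_h: float, theta_0: float) -> float:
    """Carnot efficiency of a reversible engine between theta_h and theta_0 (kelvin)."""
    assert 0.0 < theta_0 <= theta_h, "absolute temperatures with theta_h >= theta_0"
    return 1.0 - theta_0 / theta_h

def exergetic_power(q_dot: float, theta_h: float, theta_0: float) -> float:
    """Exergy content (maximum extractable work rate) of the heat flow q_dot at theta_h."""
    return carnot_efficiency(theta_h, theta_0) * q_dot

# 100 W of heat supplied at 600 K, relative to a 300 K environment,
# carries 50 W of exergy; the other 50 W must be rejected as waste heat.
eta = carnot_efficiency(600.0, 300.0)
w_dot = exergetic_power(100.0, 600.0, 300.0)
```

Note that heat exchanged at the environment temperature itself carries no exergy at all, in line with the remark that energy and resource must not be conflated.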
As an aside, when irreversibility of thermal conduction is considered, the efficiency in the opposite extreme (maximum-work or 'free-fuel' limit) was studied via the Curzon-Ahlborn engine [69,70]. Surprisingly, it turns out to be of a similar form, namely $\eta_{CA} = 1 - \sqrt{\theta_0/\theta_h}$. For exergetic port-Hamiltonian systems, the Carnot efficiency is immediately relevant, since Dirac structures model reversible exchange of power, generalizing ideal wires from circuit theory.

Exergetic storage function
Let us consider a closed and isochoric system with entropy $s$ and internal energy $u$ given according to its fundamental equation $u = U(s)$. We assume that the system is in equilibrium with itself but not necessarily with its environment. The system's exergy $A$ is
$$A(s) = U(s) - U(s_0) - \theta_0 (s - s_0) \tag{10}$$
Figure 1. A Carnot engine operating between two isothermal reservoirs. In each cycle, the fictitious engine consumes the heat $Q_{in}$ at the temperature level $\theta_h$ and rejects the heat $Q_{out}$ at the temperature level $\theta_0$ to perform the work $W_{out}$. Entropy is conserved.
where the constant $s_0$ is the system's entropy once the overall system has reached its equilibrium state. Thus, an infinitesimal change of exergy is written as $dA = (\theta(s) - \theta_0)\, ds$ with $\theta(s) = \frac{\partial U(s)}{\partial s}$. Reversible exchange of heat power $\dot{Q}$ at the temperature level $\theta$ is associated with an entropy exchange rate $\dot{s} = \frac{1}{\theta} \dot{Q}$. Consequently, the corresponding exergetic power is given as
$$\dot{A} = (\theta - \theta_0)\, \dot{s} = \left(1 - \frac{\theta_0}{\theta}\right) \dot{Q}. \tag{11}$$
The exergetic power $\dot{A}$ is equal to the (energetic) thermal power $\dot{Q}$ multiplied with the Carnot efficiency which is defined by the temperature $\theta$ at which the thermal power is exchanged and the environment temperature $\theta_0$, see Equation (9). Now, let us consider a closed system with entropy $s$, volume $v$, and internal energy $u$ given according to its fundamental equation $u = U(s, v)$. The system further has potential energy $E_{pot}$ depending on its configuration $q$ and kinetic energy $E_{kin}$ depending on its momentum $p$. Its exergy $A$ is
$$\begin{aligned} A \;=\; & U(s, v) - U(s_0, v_0) - \theta_0 (s - s_0) + \pi_0 (v - v_0) \\ & + E_{kin}(p) + E_{pot}(q) - E_{pot}(q_0). \end{aligned} \tag{12}$$
The exergy function is obtained from the energy function by adding linear terms which determine the equilibrium and constant terms which make the exergy zero at equilibrium. The physical meaning of the term linear in $s$ has been explained above using the Carnot engine as a theoretical device. The meaning of the term linear in $v$ can be explained as follows: The infinitesimal change of exergy caused by an infinitesimal change of volume is $-(\pi(s, v) - \pi_0)\, dv$ with pressure $\pi(s, v) = -\frac{\partial U(s, v)}{\partial v}$. If the system expands at the rate $\dot{v}$, it has to displace the atmosphere which is assumed to have a fixed pressure $\pi_0$. This requires the mechanical power $\pi_0 \dot{v}$. Hence, only $(\pi - \pi_0)\, \dot{v}$ remains as exergetic power. The first line in Equation (12) represents the physical exergy, while the second line represents the (macroscopic) mechanical energy/exergy. The constant $-E_{pot}(q_0)$, corresponding to the configuration with least potential energy, makes the total exergy zero in the equilibrium state.
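The fact that the exergy defined by Equation (10) is nonnegative and vanishes exactly at equilibrium can be checked numerically. The following sketch assumes a hypothetical material with constant heat capacity $C$, i.e. the fundamental equation $U(s) = C\,\theta_0\, e^{(s - s_0)/C}$, which satisfies $\theta(s_0) = \theta_0$; all parameter values are illustrative:

```python
import math

C = 10.0        # heat capacity (J/K), assumed constant -- a hypothetical material
theta0 = 300.0  # environment temperature (K)
s0 = 5.0        # entropy at equilibrium with the environment (J/K)

def U(s):
    """Fundamental equation u = U(s); its temperature is theta(s) = theta0*exp((s - s0)/C)."""
    return C * theta0 * math.exp((s - s0) / C)

def A(s):
    """Exergy per Equation (10): A(s) = U(s) - U(s0) - theta0*(s - s0)."""
    return U(s) - U(s0) - theta0 * (s - s0)

# Exergy is nonnegative (the system can do work whether it is hotter or
# colder than the environment) and vanishes exactly at equilibrium:
for s in [s0 - 3.0, s0, s0 + 3.0]:
    assert A(s) >= 0.0
assert A(s0) == 0.0
```

In closed form, $A(s) = C\theta_0\,(e^x - 1 - x)$ with $x = (s - s_0)/C$, which is manifestly nonnegative with a unique zero at $x = 0$.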
In the sequel, we usually omit the constant terms in the exergy function to save space and because in practice these constants are often not known a priori. In general, we consider potentials differing only by an additive constant to be equivalent.

Two similar viewpoints
According to Definition 2.10, the state of a GENERIC system evolves according to $\dot{x}^i = L^{ij}(x) \frac{\partial E}{\partial x^j} + M^{ij}(x) \frac{\partial S}{\partial x^j}$. Using the degeneracy conditions, we can rewrite this as
$$\dot{x}^i = \left(L^{ij}(x) - \frac{1}{\theta_0} M^{ij}(x)\right) \frac{\partial \Phi}{\partial x^j} \quad \text{with} \quad \Phi(x) = E(x) - \theta_0 S(x) + \pi_0 V(x) - \mu_0 N(x), \tag{13a}$$
where $\theta_0, \pi_0, \mu_0 \in \mathbb{R}_+$ are some constant multipliers. The GENERIC models the approach to the equilibrium state $x_0$ and $\Phi$ is a Lyapunov function for the stability of $x_0$ [33,36]. Adding some constant shifts, $\Phi$ can equivalently be defined as
$$\Phi(x) = -\theta_0 \left( S(x) - S(x_0) - \tfrac{1}{\theta_0}\big(E(x) - E(x_0)\big) - \tfrac{\pi_0}{\theta_0}\big(V(x) - V(x_0)\big) + \tfrac{\mu_0}{\theta_0}\big(N(x) - N(x_0)\big) \right). \tag{13b}$$
Equation (13b) shows the relationship between GENERIC and the maximum entropy principle: The equilibrium state $x_0$ maximizes the entropy $S$ under the constraints of constant total energy $E$, volume $V$, and mass $N$. Thus, $\frac{1}{\theta_0}$, $\frac{\pi_0}{\theta_0}$ and $\frac{\mu_0}{\theta_0}$ can be seen as Lagrange multipliers for constraints stemming from fundamental conservation laws that hold for isolated systems. The multipliers are thus determined by the values of the intensive variables at equilibrium, as suggested by the notation.
We can also write the GENERIC in the unconventional, yet equivalent form
$$\dot{x}^i = \left(L^{ij}(x) - \frac{1}{\theta_0} M^{ij}(x)\right) \frac{\partial A}{\partial x^j} \tag{14a}$$
with generating function
$$A(x) = E(x) - E(x_0) - \theta_0 \big(S(x) - S(x_0)\big) + \pi_0 \big(V(x) - V(x_0)\big) - \mu_0 \big(N(x) - N(x_0)\big). \tag{14b}$$
While Equation (13) corresponds to maximizing entropy subject to constraints which must be satisfied for an isolated system, Equation (14) corresponds to minimizing the exergy function defined by Equation (14b). If we consider a system consisting of a subsystem (body) together with an infinitely large environment (medium), Equation (14b) may be seen as a Lagrangian for the maximum amount of work $E(x) - E(x_0)$ that can be extracted from the overall system while keeping its total entropy $S$, the volume $V$ and the mass $N$ constant. Since the medium is infinitely large, its intensive variables are equal to the intensive variables of the overall system at equilibrium. These variables are the Lagrange multipliers $\theta_0, \pi_0, \mu_0$. Of course, Equation (14b) is also a Lyapunov function for thermodynamic equilibrium. This makes immediate sense since, again, exergy is the amount of work which can be extracted from the system until it reaches thermodynamic equilibrium where no (spontaneous) changes can occur [6]. We conclude that we are dealing with two similar viewpoints, namely one which is biased towards entropy and relaxation and another one which is biased towards energy and its degradation. The former is more natural for the GENERIC framework, while the latter is taken by exergetic port-Hamiltonian systems.
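The Lyapunov property of the exergy can be verified in one line. Assuming, as in the GENERIC (Definition 2.10), that $L$ is antisymmetric and $M$ is positive semidefinite, differentiating $A$ along the exergy-generated form of the dynamics gives (a sketch in the notation used above):

```latex
\dot{A}
= \frac{\partial A}{\partial x^i}\,\dot{x}^i
= \underbrace{\frac{\partial A}{\partial x^i}\, L^{ij}(x)\, \frac{\partial A}{\partial x^j}}_{=\,0
  \text{ (antisymmetry of } L\text{)}}
\;-\; \frac{1}{\theta_0}\, \frac{\partial A}{\partial x^i}\, M^{ij}(x)\, \frac{\partial A}{\partial x^j}
\;\le\; 0 .
```

Hence exergy never increases along trajectories, with equality exactly where the gradient dynamics is inactive, mirroring the statement that exergy is conserved by reversible and destroyed by irreversible processes.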

Isothermal systems
In this section, we introduce classical dissipative port-Hamiltonian systems by means of the simplest example, namely the damped harmonic oscillator. After a discussion of the physical meaning of the Hamiltonian function, we will see how a subtle modification of the oscillator model leads us to an isothermal exergetic port-Hamiltonian system. We conclude that the present framework is a straightforward extension of the classical theory. The extension provides a thermodynamic structure to port-Hamiltonian systems which can be understood as a compositional version of the GENERIC structure.

Classical port-Hamiltonian systems
In the classical port-Hamiltonian framework, physical systems are ultimately comprised of energy storage components, energy routing components, and (free) energy dissipating components, as the following example shows: Example 4.1 (Damped harmonic oscillator). The damped harmonic oscillator shown in Figure 2(a) is commonly modelled via the differential equation
$$m\, \ddot{q} + d\, \dot{q} + \frac{1}{c}\, q = 0,$$
where $q$ is the displacement, $m$ is the mass, $d$ is the damping coefficient, and $c$ is the spring compliance.
The port-Hamiltonian formulation uses separate state variables for each storage component, namely the extension of the spring $q$ and the momentum of the mass $p$. Figure 2(b) shows a bond-graph expression of the port-Hamiltonian system. Regarding the syntax, blue boxes represent storage components, green boxes represent the Dirac structure, and red boxes represent the resistive structure. The sum of the terms annotated inside the blue boxes represents the Hamiltonian $H$. The term annotated inside the red box represents the dissipated power. An arrowhead on every bond indicates the direction in which power flows when the respective pairing $e_i f_i$ (for some fixed $i$) takes a positive value. We fix this direction such that positive values correspond to stored power, dissipated power, and power supplied to other systems. For now, we also annotate each bond with a causal stroke: A transversely-oriented bar is placed on that end which assigns the value of the flow variable, see Remark 4.3 at the end of the section. In this article, we additionally annotate each bond with formulaic expressions of its associated flow and effort variables.
The state $x = (q, p) \in \mathcal{X} = \mathbb{R}^2$ evolves according to
$$\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & -d \end{bmatrix} \begin{bmatrix} \partial H / \partial q \\ \partial H / \partial p \end{bmatrix} \quad \text{with} \quad H(q, p) = \frac{q^2}{2c} + \frac{p^2}{2m}.$$
Since $H$ is bounded from below, the system is passive. The damper represents an irreversible process that conserves energy as it turns work into heat. However, Figure 2(b) shows the damper as a one-port component and the power $e_3 f_3$ going into it disappears, meaning that it is not balanced by an outgoing power of equal amount. Thus, there is an obvious discrepancy between the port-Hamiltonian structure and the first law of thermodynamics, at least as long as we believe that the Hamiltonian represents energy in the thermodynamic sense.
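As a sanity check on cyclo-passivity, the following sketch (parameter values are illustrative, not taken from the paper) integrates the oscillator in its port-Hamiltonian form with an explicit Euler scheme and verifies the power balance: the decrease of the Hamiltonian equals the time integral of the power $d\,\upsilon^2$ absorbed by the damper:

```python
m, d, c = 1.0, 0.3, 2.0  # illustrative mass, damping coefficient, spring compliance

def H(q, p):
    """Hamiltonian: spring energy plus kinetic energy."""
    return q * q / (2 * c) + p * p / (2 * m)

# Explicit-Euler simulation of dx/dt = (J - R) grad H
# with J = [[0, 1], [-1, 0]] and R = diag(0, d).
q, p = 1.0, 0.0
dt, dissipated = 1e-4, 0.0
H0 = H(q, p)
for _ in range(200_000):  # 20 seconds of simulated time
    v = p / m
    q, p = q + dt * v, p + dt * (-q / c - d * v)
    dissipated += d * v * v * dt  # power e3*f3 = d*v^2 leaving via the damper

# Power balance: the stored exergy lost equals the dissipated energy,
# up to the integration error of the crude scheme.
assert abs(H0 - H(q, p) - dissipated) < 0.01 * H0
```

A structure-preserving integrator would reproduce the balance exactly; the point here is only that dissipation is the sole mechanism by which $H$ decreases.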
For classical port-Hamiltonian systems we can argue from a thermodynamics viewpoint as follows: A reversible interaction with a thermal reservoir of constant temperature is (implicitly) assumed in the modelling process. The interaction maintains thermal equilibrium of the system and the (waste heat) reservoir (i.e. environment), thereby making the overall system isothermal.
In [23] (p. 25), it is stated that the physical meaning of the Hamiltonian (of a classical dissipative port-Hamiltonian system) is 'free energy', rather than energy. In equilibrium thermodynamics, the Helmholtz free energy is a thermodynamic potential obtained from the internal energy via a Legendre transformation with respect to entropy. A potential contains all thermodynamic information about the behaviour of a material at equilibrium, see e.g. [36] (p. 10). The thermodynamic potential named after Helmholtz is called a free energy because the maximum entropy principle applied to an isothermal and isochoric nonequilibrium system implies the minimization of its free energy. The difference between its free energy in the initial state and its free energy in the equilibrium state corresponds to the maximum (reversible) work production which can occur as the system passes from the initial to the equilibrium state while interacting with the isothermal reservoir at the same temperature, see e.g. [68] (ch. 6). The statement that the Hamiltonian is a 'free energy' can thus be explained as follows: An (irrelevant) additive constant in the Hamiltonian can be identified with the combined Helmholtz free energy corresponding to all (neglected) internal energy storage of the overall system. Hence, the term 'free energy', as used in [23], additively combines electro-mechanical energy components and constant Helmholtz free energy components corresponding to internal energy storage in the isothermal system and environment. The electro-mechanical energy components have no entropy content since all related degrees of freedom are resolved by the model. Therefore, they are not Legendre-transformed quantities. In this (perhaps not obvious) sense, the Hamiltonian is a (Helmholtz) free energy. The GENERIC literature also mentions that Helmholtz free energy can be used as a single generator for isothermal systems [36] (p. 136).
The idea of summing different electro-mechanical and 'free energy' components is more commonly understood within the more general exergy concept. Indeed, for isothermal and isochoric systems, the total Helmholtz free energy essentially coincides with exergy. Similarly, for isothermal and isobaric systems, the total Gibbs free energy essentially coincides with exergy, see [6].
Inspired by bond-graph modelling, the wish to model non-isothermal systems and to accurately express the first law within the port-Hamiltonian framework first led to the use of lossless systems, see e.g. [23]. For Example 4.1, this means that a thermal port is added to the damper, making it a power-conserving component. Obviously, the passivity property of being lossless relates only to the first law of thermodynamics and ignores the second law of thermodynamics and all its implications.

Isothermal exergetic port-Hamiltonian systems
In contrast, exergetic port-Hamiltonian systems are coherent with both the first and the second law of thermodynamics and link passivity to degradation of energy. We consider the oscillator from Example 4.1 as an exergetic port-Hamiltonian system: Example 4.2 (Exergetic model of the damped harmonic oscillator). Let us explicitly assume that the system is isothermal because it is in thermal equilibrium with its environment having constant temperature $\theta_0$. We consider the environment as an (infinitely large) thermal reservoir. We need not consider for instance its volume, mass or chemical composition because at this point only thermal interaction with the environment is relevant. From the isothermal condition
$$\theta_0 \overset{!}{=} \theta(s) = \frac{\partial U(s)}{\partial s}, \tag{16}$$
it follows that $u = U(s) = \theta_0 s$ is a fundamental equation for the environment. Its exergy content with respect to itself is $U(s) - \theta_0 s = 0$, see Equation (10). Thus, the Hamiltonian already is (or at least can be interpreted as) an exergetic storage function.
Since the damper remains at $\theta_0$, its exergetic heating power is zero. Therefore, it might seem reasonable to omit its thermal port, cf. [23] (p. 25). However, for exergetic port-Hamiltonian systems, this is made (or kept) explicit, as shown in Figure 3: The environment is like a storage component containing zero exergy. As already said in Section 3, we consider it as a part of the system. This is in line with how Gibbs thought of body and medium as an isolated system. The outer box framing the bond-graph expression in Figure 3 should be seen as the system boundary. According to the assumption, the damper is held at the environment temperature ($e_4 = \theta_0 - \theta_0 = 0$). Consequently, the exergetic power $e_4 f_4 = e_5 f_5$ vanishes. The net power at the damper $e_3 f_3 + e_4 f_4 = d\, \upsilon^2$ is its exergy destruction rate (or dissipated power).
The state $x = (q, p, s_e) \in \mathcal{X} = \mathbb{R}^3$ evolves according to Equation (17), where $\upsilon = \frac{1}{m} p$. Equation (17a) defines the storage port. The mass and the spring are mechanical components containing pure exergy. Therefore, no shift appears in their contribution to the Hamiltonian. The last component of the storage port corresponds to the environment. Equation (17b) defines the $D_m$ component (m for mechanical) and Equation (17c) defines the $D_t$ component (t for thermal) of the Dirac structure. Finally, Equation (17d) defines the resistive structure.
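The structural content of the exergetic model can be checked numerically: with the environment entropy $s_e$ as an additional state absorbing the heat $d\,\upsilon^2$ at temperature $\theta_0$, total energy is conserved (first law) while entropy is produced and exergy destroyed (second law). A minimal sketch with illustrative parameter values:

```python
# Exergetic model of the damped oscillator: the environment entropy s_e
# absorbs the dissipated heat, so energy is conserved while exergy decays.
m, d, c, theta0 = 1.0, 0.3, 2.0, 300.0  # illustrative values

def exergy(q, p):
    # H = q^2/(2c) + p^2/(2m) + (U(s_e) - theta0*s_e); since U(s_e) = theta0*s_e
    # for the isothermal environment, its exergy contribution vanishes.
    return q * q / (2 * c) + p * p / (2 * m)

def energy(q, p, s_e):
    return q * q / (2 * c) + p * p / (2 * m) + theta0 * s_e

q, p, s_e = 1.0, 0.0, 0.0
E0, dt = energy(q, p, s_e), 1e-4
for _ in range(200_000):
    v = p / m
    q, p, s_e = q + dt * v, p + dt * (-q / c - d * v), s_e + dt * d * v * v / theta0

# First law: energy is conserved (up to integration error).
assert abs(energy(q, p, s_e) - E0) < 1e-3 * max(E0, 1.0)
# Second law: entropy was produced, so exergy was destroyed.
assert s_e > 0.0 and exergy(q, p) < exergy(1.0, 0.0)
```

This is the numeric counterpart of the statement that cyclo-passivity of the exergetic model encodes degradation of energy rather than loss of energy.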
While the diagrammatic expression for the mathematical model structure might seem unnecessarily complicated at first, the $D_t$ component expresses that the interaction of the damping process and the environment is a reversible one. At $D_t$, local conservation of entropy holds ($f_5 = -f_4$).
Regarding the syntax, it is not meaningful to directly connect a blue component (exposing a storage port) with a red component (defining resistive structure). The green components (defining reversible interconnection) mediate all exergy exchange.
By removing the shifts written in blue in Figure 3, we obtain a lossless energetic port-Hamiltonian system, asserting that the damper conserves energy, see Remark 4.5. Equivalently, we can assert that $(e_3, e_4) = (\upsilon, \theta_0)$ lies in the kernel of the positive semidefinite matrix in Equation (17d).
Equation (17) reduces to
$$\dot{q} = \frac{1}{m}\, p, \qquad \dot{p} = -\frac{1}{c}\, q - \frac{d}{m}\, p, \qquad \dot{s}_e = \frac{d}{\theta_0} \left(\frac{p}{m}\right)^2. \tag{18}$$
However, for the seemingly simpler system in Equation (18), checking thermodynamic consistency is much harder, especially for a computer. For more complex, practical examples, a structured and compositional modelling framework with a diagrammatic syntax is clearly superior. We arrive at the following conclusion: If we assume that a classical dissipative port-Hamiltonian system is in thermal equilibrium with its isothermal (and isobaric) environment, then its Hamiltonian represents the exergy of the overall system. The suggested framework thus is a straightforward extension of the classical theory.

Remark 4.3 (Causal strokes). Causal strokes do not indicate physical causality but mark how information propagates when using an explicit time-integration scheme for simulation. Hence, at storage components, flows must be inputs. If a consistent computational causality assignment is not possible for all storage components, the model yields an implicit system of differential-algebraic equations (DAE). For instance, placing two capacitors directly in parallel results in an algebraic constraint demanding equality of voltages. In this case, inconsistent initial conditions for the DAE system are related to the 'two capacitor paradox'. The degeneracy could be avoided by taking into account the resistance of the wire that connects the capacitors.
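The 'two capacitor paradox' mentioned above can be checked with a few lines of arithmetic (component values are purely illustrative): the interconnection conserves charge, yet the electrostatic energy after connecting two ideal capacitors at different voltages is strictly smaller, no matter how small the wire resistance is:

```python
# Connecting two ideal capacitors in parallel conserves charge but not
# electrostatic energy -- the deficit is dissipated in the (idealized-away) wire.
C1, C2 = 1e-6, 1e-6   # capacitances in farads (illustrative)
V1, V2 = 10.0, 0.0    # initial voltages (inconsistent with the parallel constraint)

q_total = C1 * V1 + C2 * V2
V_final = q_total / (C1 + C2)          # common voltage after connection

E_initial = 0.5 * C1 * V1**2 + 0.5 * C2 * V2**2
E_final = 0.5 * (C1 + C2) * V_final**2

assert abs(V_final - 5.0) < 1e-12      # charge conservation fixes the final state
assert E_final < E_initial             # for equal capacitances, half the energy is lost
```

This illustrates why equal initial voltages are exactly the consistent initial conditions of the idealized DAE, and why modelling the wire resistance resolves the paradox.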
According to Definition 2.11, this defines a Dirac structure on $T\mathcal{X} \oplus \mathcal{F}_R$, where $\mathcal{F}_R = \mathcal{X} \times \mathbb{R}$ is the trivial vector bundle on which the resistive structure is defined. We focus on the class of input-output systems where the bottom-right block is zero. The top-left block defines a Poisson structure on $\mathcal{X}$ and thus a Dirac structure $\mathcal{D}_{T\mathcal{X}}$ on $T\mathcal{X}$. The top-right block defines a vector bundle map $-g \colon \mathcal{F}_R \to T\mathcal{X}$ covering the identity on $\mathcal{X}$. The bottom-left block defines its negative dual $g^*$. According to Definition 4.2 in [19], this is an open forward input-output structure. Our impression is that [19] introduces a theoretically very appealing and possibly also practically useful framework for expressing the interconnection of port-Hamiltonian systems. However, Dirac structures which can directly connect a resistive and a boundary port or two different boundary ports (feed-through) are quite important in practice.
Remark 4.5 (Classical bond graphs with thermal port). Let us assume for a moment that the annotated bond-graph expressions shown in this work are merely figures, rather than (yet to be defined) mathematical objects in their own right. Then, we could say that removing all shifts in the annotated components of the storage function and in the annotated expressions for the effort variables manipulates such a figure into a classical energetic bond graph corresponding to a lossless port-Hamiltonian system whose passivity property is tantamount to the first law of thermodynamics only. However, there is no corresponding straightforward modification of the equations defining the resistive structure. This is no surprise since lossless systems have no resistive structure. The red boxes must turn into power-preserving transformers. Of course, the given annotated expressions for the flows at the red boxes can straightforwardly be manipulated into the equations defining these transformers by writing them in terms of the efforts (without shifts) at the red boxes. It is important to note that the resulting equations do not follow from a structured representation enjoying certain properties, as is the case for resistive structures in the exergetic port-Hamiltonian framework.

Non-isothermal systems
We now come to physical modelling of non-isothermal systems. The following example shows that the Carnot efficiency naturally appears in the pairing of effort and flow variables of a bond representing (reversible) exchange of heat.
Example 5.1 (Carnot engine). Figure 4 shows a bond-graph expression of a Carnot engine operating between a thermal reservoir at temperature $\theta_h$ and the environment at temperature $\theta_0$. The environment serves as a reference to define the exergetic storage function $H = H_h + H_e = H_h$. Again, by ignoring the shifts, we obtain the energetic power balance $\dot{Q}_{in} = \dot{Q}_{out} + \dot{W}_{out}$. Due to reversible operation, the engine conserves exergy. The incoming thermal exergetic power $-e_1 f_1$ is thus fully converted into mechanical power $e_3 f_3$ which is supplied to another system via the boundary port:
$$e_3 f_3 = -e_1 f_1 = \left(1 - \frac{\theta_0}{\theta_h}\right) \dot{Q}_{in}.$$
According to the first law of thermodynamics, all physical systems are cyclo-lossless if the storage function represents energy. In contrast, only perfectly reversible systems, like the Carnot engine, are cyclo-lossless if the storage function represents exergy.
To define the exergetic storage function of a thermodynamic system, we need to know its internal energy as a function of its entropy. For instance, the thermal reservoir in Example 5.1 has a linear and thus unbounded internal energy function, reflecting infinite capacity. Alternatively, we could use entropy as a function of internal energy. The next example compares the two choices: Example 5.2 (Gas-filled compartment). Figure 5 shows two alternative bond-graph expressions for a gas-filled compartment with boundary ports for the exchange of heat (i.e. thermal exergetic power) $e_h f_h$ and pressure-volume work (i.e. mechanical power) $e_w f_w$. Here, $f_h$ is the rate of entropy leaving the system, $f_w$ is its rate of compression, $e_h$ is its temperature, and $e_w$ is its negative pressure, both relative to the environment.
The expression in Figure 5(a) uses entropy and volume as state variables. Since exchange of heat/work does not change volume/entropy, $D_t$ and $D_m$ have the same trivial form as $D_t$ in Figure 3 and Equation (17c).
The expression in Figure 5(b) uses internal energy and volume as state variables. Exchange of work changes both variables, leading to coupling. Since entropy is used as a potential, $D$ is modulated according to the partial derivatives of $S$. We will henceforth use internal energy as a (local) thermodynamic potential, since this yields simpler models. Next, we consider again the damped harmonic oscillator but this time without assuming thermal equilibrium with the environment. Example 5.3 (Non-isothermal damped harmonic oscillator). In contrast to Example 4.2, we model the damper with a thermal capacity characterized by an energy function $U$, allowing it to heat up. Further, we model heat transfer characterized by a coefficient $\alpha$, allowing the damper to cool down again. Figure 6 shows the model. Equations defining the Dirac structure according to Definition 2.11 and the resistive structure according to Definition 2.12 are annotated.
The damper consumes the mechanical power $e_3 f_3 = d\, \upsilon^2$ and produces the thermal exergetic power $-e_5 f_5 = \frac{\theta_d - \theta_0}{\theta_d}\, d\, \upsilon^2$, which is its heat release rate $d\, \upsilon^2$ multiplied by the efficiency of a Carnot engine operating between the temperature level of the damper $\theta_d$ (given by the derivative of its energy function $U$) and the environment temperature $\theta_0$. The difference $e_3 f_3 + e_5 f_5 = \frac{\theta_0}{\theta_d}\, d\, \upsilon^2$ is its exergy destruction rate, which is its entropy production rate $\frac{d\, \upsilon^2}{\theta_d}$ multiplied by the environment temperature.
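The exergy bookkeeping at the damper can be verified with a few lines of arithmetic (the parameter values are illustrative): the consumed mechanical power splits exactly into the thermal exergetic power delivered at $\theta_d$ and the exergy destroyed by entropy production.

```python
# Exergy bookkeeping at the non-isothermal damper (illustrative values).
d, v = 0.3, 2.0               # damping coefficient, velocity
theta_d, theta_0 = 350.0, 300.0

heat_release = d * v * v                                      # e3*f3, mechanical power in
thermal_exergy_out = (1 - theta_0 / theta_d) * heat_release   # -e5*f5, Carnot-weighted heat
entropy_production = heat_release / theta_d
exergy_destruction = theta_0 * entropy_production

# First law at the red box: all consumed work reappears as heat;
# second law: part of its exergy content is irreversibly destroyed.
assert abs(thermal_exergy_out + exergy_destruction - heat_release) < 1e-12
```

As $\theta_d \to \theta_0$, the thermal exergetic power vanishes and the entire consumed work is destroyed, recovering the isothermal case of Example 4.2.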
Once $\theta_d > \theta_0$, heat is irreversibly transferred from the damper's thermal capacity to the environment, destroying exergy at the rate $e_6 f_6 + e_8 f_8$. We have $e_8 f_8 = 0$ because heat at the environment temperature cannot be used to do work.
In summary, the green components define the Dirac structure which encodes reversible (lossless) exchange of exergy. The net exergetic power at every green component is zero. Energy and entropy are conserved. The red components define the resistive structure which encodes irreversible processes (relaxation). At every red component, the net energetic power is zero. At the same time, entropy is produced (or conserved), implying a loss (or conservation) of exergy. Cyclo-passivity consequently corresponds to the fact that exergy is either conserved or destroyed.
We now model an isolated cylinder-piston device both as an exergetic port-Hamiltonian system and as a GENERIC system. Example 5.4 (Isolated cylinder-piston device). Figures 7 and 8 show a system comprising two compartments filled with a fixed amount of ideal gas which are separated by a piston. Its state is denoted $x \in \mathcal{X}$. Given that $U_1$, $U_2$, and $U_3$ are expressions for the internal energy of the two compartments and the piston, we define functions $E, S, V, H \in C^\infty(\mathcal{X})$ for total energy, entropy, volume and exergy, with
$$H(x) = E(x) - \theta_0 S(x) + \pi_0 V(x) + \text{const}. \tag{23d}$$
Figure 7. Isolated cylinder of cross-sectional area $A$ containing a piston with mass $m$ and momentum $p_3$. The gas in both compartments exchanges heat (coefficient $\alpha$) with the piston. When the piston moves, heat is added to it due to friction (coefficient $d$).
The Dirac structure $D_m$ is defined as annotated in Figure 8. The degeneracy of the top-left block corresponds to the (symplectic) Casimir $v_1 + v_2$. The Dirac structures $D_{t1}$, $D_{t2}$ and $D_{t3}$ are defined basically as in Example 5.3. The resistive structure corresponding to the damping admits a factorization $R(x) = C(x)\, D(x)\, C^T(x)$, which shows positive semidefiniteness. Energy is conserved since $C^T(x)\, [\upsilon_3 \;\; \theta_3]^T = 0$. The remaining term $-\theta_0$ in $e_{12}$ comes from $-\theta_0 S(x)$ in $H$, reflecting that the entropy function generates the gradient dynamics. Regarding the resistive structure corresponding to thermal conduction between the left compartment and the piston: if $A$ is the area orthogonal to the direction of conduction and $l$ is a length in that direction, the Hodge star $\star$ essentially becomes $\frac{A}{l}$, and $\frac{A}{l}\, \kappa$ indeed corresponds to $\alpha$. In the lumped-parameter setting, the term $\theta^2$ is replaced by the squared geometric mean $\theta_1 \theta_3$ of the two known temperatures.
In the final example, we consider an electrically-heated cylinder-piston device which exchanges heat and work with the (isothermal and isobaric) environment. Example 5.6 (Heated cylinder-piston device). Figure 9 depicts the device, while the bond-graph expression in Figure 10 conveys the structure of the corresponding physical model. Since the system is open, the expression features an outer box exposing a boundary port. To determine a dynamics, we have to interconnect the electric boundary port $(f_6, e_6)$ with another system having the same environment such that the composite forms an isolated system. Beyond the bond-graph expression, we merely state some details which differ from previous examples.
The Dirac structure $D_m$ is defined as annotated in Figure 10. The degeneracy of the top-left block corresponds to the (symplectic) Casimirs $v_c + v_0$ and $v_c / A - q_s$. When the gas expands, it has to displace the isobaric environment. The corresponding mechanical work cannot be extracted and hence $e_4 f_4 = 0$. Only the exergetic power $e_1 f_1$ could possibly be extracted. The resistive structure corresponding to the resistor is defined analogously to the case of friction.
Figure 9. An open cylinder with cross-sectional area $A$ and isolating walls contains an electric heater with resistance $R_h$. A piston with mass $m$ and momentum $p_p$ closes the cylinder. Attached to it is a spring with compliance $c$ and extension $q_s$. The gas in the compartment and the environment both exchange heat (coefficient $\alpha$) with the piston. When the piston moves, heat is added to it due to friction (coefficient $d$).
In the port-Hamiltonian systems literature, 'energy exchange with the environment' is used interchangeably with 'interaction with other systems'. Within the exergetic port-Hamiltonian framework, the word 'environment' is reserved and carries a distinct meaning. It has singleton semantics, implying that different environment components in a system automatically represent the same reference environment. Since semantics must be preserved by composition, interconnection of systems requires a common reference environment.

Conclusions
The compositional nature of port-Hamiltonian systems makes them attractive for modelling of interconnected and controlled physical systems. The present framework enhances classical port-Hamiltonian theory with a physically sound interpretation of dissipativity and a refined structure that ensures thermodynamic consistency. Future work must address the question of how the interconnection of exergetic port-Hamiltonian systems can be formalized and implemented such that thermodynamic consistency of constituent systems implies the consistency of composite systems.
At the heart of this work lies the goal of developing a practical framework to support the design and operation of sustainable energy systems. Teams are becoming more interdisciplinary, not least as a consequence of the increasing demand for sustainable technology. This is one important reason why intuitive abstractions with computational meaning are of central importance for the future of engineering. Exergetic port-Hamiltonian systems admit a diagrammatic syntax that is inspired by bond-graph modelling. Future work must thus address the question of how expressions in this syntax can be formalized as mathematical objects on which computations can be performed. Once this foundation is established, exergetic port-Hamiltonian systems can become a valuable tool for thermodynamic design and optimization. Their diagrammatic syntax will help humans to think and communicate, while their underlying mathematical framework, together with modern compiler technology, will efficiently handle the tedious parts of computational procedures such as modular and hierarchical composition, model transformations, simulation, optimization, and control design. Since many, and eventually possibly all, such procedures will provably preserve the compositional and the thermodynamic structure, a lot of effort previously spent on arranging parts, bookkeeping and verification can then be spent on sustainable design.