Logarithmic penalty function method for invex multi-objective fractional programming problems

In this paper, a new logarithmic penalty function method is used for solving nonlinear multi-objective fractional programming problems (MOFPP) involving objective and constraint functions that are invex with respect to the same function η. The approach first modifies the fractional objective functions into α_i-invex functions, so that no parameterization of the multi-objective fractional programming problem is required. The constrained multi-objective FPP is then converted into a sequence of unconstrained optimization problems by adding a new logarithmic penalty function to each objective function.


Introduction
The fractional programming problem (FPP) has received much interest from researchers in recent decades, particularly in its multi-objective form. Many practical problems in decision theory, game theory, economics, and other fields require optimizing ratios of linear or nonlinear functions.
The nonlinear multi-objective fractional programming problem studied in this article is of the following form:

(P) Minimize ( f_1(x)/g_1(x), . . . , f_n(x)/g_n(x) ) subject to h_j(x) ≤ 0, j ∈ J = {1, . . . , m},

where f_i, g_i : X → R, i ∈ I = {1, . . . , n}, with g_i(x) > 0 on X, and h_j : X → R, j ∈ J, are differentiable functions on a nonempty open set X ⊂ R^n. Generally, the multi-objective fractional programming problem is either parameterized or transformed into another suitable form, so that an equivalent multi-objective programming problem can be obtained.
Bernard and Ferland [1] outlined the basic approaches and main types of available algorithms for fractional programming problems and also reviewed their convergence analysis.
Kuk, Lee, and Tanino [2] established generalized Kuhn-Tucker necessary and sufficient optimality conditions and derived duality theorems for a class of nonsmooth multi-objective fractional programming problems involving V-ρ-invex functions.
Other researchers have extended these ideas to optimality and duality for nonsmooth multi-objective fractional programming problems (see, for example, [2][3][4]).
Santos, Osuna-Gómez and Rojas-Medar [5] used a notion of generalized convexity, called KT-invexity, to study a class of nonconvex and nondifferentiable MOFPP; they also defined a dual problem and established some duality results. The idea of invexity was first introduced by Hanson [6] and named by Craven [7].
Several methods are available for solving the considered optimization problem (P). In recent years, most of the powerful algorithms have been designed explicitly for unconstrained optimization problems, and this motivated the penalty function approach, which enables researchers to solve constrained problems with unconstrained techniques.
The penalty function method is one of the most important approaches for solving constrained optimization problems. The idea is to incorporate the constraints into the objective function by adding a penalty term that vanishes at feasible points and penalizes constraint violation, so that minimizers of the penalized function are driven toward the feasible region.
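As a toy illustration of this idea (our own example, not from the paper): consider minimizing f(x) = x^2 subject to the inequality constraint 1 − x ≤ 0, whose constrained optimum is x* = 1. The sketch below, assuming a simple quadratic (Courant-Beltrami-type) penalty term, shows that the minimizer of the penalized function approaches x* as the penalty parameter c grows.

```python
# Toy illustration: minimize f(x) = x**2 subject to h(x) = 1 - x <= 0.
# Penalized function: F_c(x) = x**2 + c * max(0, 1 - x)**2.
# Unconstrained minimum of f is x = 0; the constrained optimum is x* = 1.

def penalized_minimizer(c, lo=0.0, hi=2.0, n=200001):
    """Grid search for the minimizer of F_c on [lo, hi]."""
    best_x, best_val = lo, float("inf")
    for k in range(n):
        x = lo + (hi - lo) * k / (n - 1)
        val = x * x + c * max(0.0, 1.0 - x) ** 2
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Setting F_c'(x) = 0 for x < 1 gives x_c = c / (1 + c), which tends to 1.
for c in (1.0, 10.0, 1000.0):
    print(c, penalized_minimizer(c))
```

Note how the minimizer is infeasible for every finite c but converges to the constrained optimum; exact penalty functions, discussed later, avoid this asymptotic behaviour.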
The concept of the penalty function approach was first introduced by Zangwill [8], who presented an algorithm for the penalized optimization problem constructed from a nondifferentiable exact penalty function; the method appears to be most useful in the concave case.
The notion was extended by Eremin [9] via the exact penalty function method to solve nonlinear optimization problems with convex functions. The assumption of convexity plays a vital role in most of the exact penalized optimization approaches in the literature. Antczak [10] established some characterizations of the l_1 exact penalty method and used the technique to solve a new class of nonconvex optimization problems with inequality constraints. Since some problems in operations research may be expressed in terms of ratios of linear or nonlinear functions, Antczak [11] presented an equivalent modified multi-objective function whose Pareto optimal solutions coincide with those of the original optimization problem. Recently, Jayswal and Choudhury [12] extended the exponential penalty function method for multi-objective programming problems, initially introduced by Liu and Feng [13], to deal with multi-objective fractional programming problems. Hassan and Baharum [14] proposed a new logarithmic penalty function (LPF) approach for nonlinear programming problems; the new LPF can deal with some problems with irregular features, but it was designed explicitly for nonlinear optimization problems involving equality constraints only.
Motivated by the work of Antczak [10], Jayswal and Choudhury [12], and Hassan and Baharum [14], we combine the notion of the modified objective function involving invex differentiable functions with a new logarithmic penalty function method to solve multi-objective fractional programming problems with inequality constraints. This approach does not require any parameterization of the original fractional programming problem.
This paper is organized as follows. In Section 2, some definitions and theorems are presented. In Section 3, the Karush-Kuhn-Tucker multiplier is derived for the logarithmic penalty function. In Section 4, an exact penalty method for invex optimization is discussed. Finally, in Section 5, some examples are solved, showing that the Pareto optimal solutions of both methods coincide.

Preliminary definitions
Definition 1 [6]: Let X be a nonempty open subset of R^n and f : X → R be a differentiable function defined on X. Then f is said to be (strictly) invex at u ∈ X on X with respect to η if there exists a vector-valued function η : X × X → R^n such that, for all x ∈ X (x ≠ u),

f(x) − f(u) ≥ (>) η(x, u)^T ∇f(u).   (1)

If (1) is satisfied for any u ∈ X, then f is an invex function on X with respect to η.
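To make Definition 1 concrete (an illustrative aside, not part of the paper): every differentiable convex function is invex with respect to η(x, u) = x − u, since inequality (1) then reduces to the gradient inequality characterizing convexity. The sketch below checks this numerically for the convex function f(x) = x^4.

```python
# Numerical check of the invexity inequality (1) with eta(x, u) = x - u
# for the convex function f(x) = x**4, whose derivative is f'(u) = 4*u**3.

def invexity_gap(x, u):
    """f(x) - f(u) - eta(x, u) * f'(u); nonnegative iff (1) holds at (x, u)."""
    f = lambda t: t ** 4
    df = lambda t: 4 * t ** 3
    return f(x) - f(u) - (x - u) * df(u)

grid = [i / 10.0 for i in range(-30, 31)]          # sample points in [-3, 3]
worst = min(invexity_gap(x, u) for x in grid for u in grid)
print(worst >= -1e-9)   # True: f is invex w.r.t. eta(x, u) = x - u
```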

Definition 2 [15]: Let X be a nonempty open subset of R^n and f : X → R be a differentiable function defined on X. Then f is said to be (strictly) incave at u ∈ X on X with respect to η if there exists a vector-valued function η : X × X → R^n such that, for all x ∈ X (x ≠ u),

f(x) − f(u) ≤ (<) η(x, u)^T ∇f(u).   (2)

If (2) is satisfied for any u ∈ X, then f is an incave function on X with respect to η.

Definition 3 [11]: Let X ⊂ R^n be a nonempty open set and ϕ : X → R be a differentiable function defined on X. Then ϕ is said to be (strictly) α_i-invex at u ∈ X on X with respect to η if there exist a vector-valued function η : X × X → R^n and a function α_i : X × X → R_+ \ {0} such that, for all x ∈ X (x ≠ u),

ϕ(x) − ϕ(u) ≥ (>) α_i(x, u) η(x, u)^T ∇ϕ(u).   (3)

If (3) is satisfied for any u ∈ X, then ϕ is an α_i-invex function on X with respect to η.

Definition 4 [11]: A point x̄ ∈ D is said to be efficient (Pareto optimal) for the fractional programming problem (P) if and only if there exists no x ∈ D such that

f_i(x)/g_i(x) ≤ f_i(x̄)/g_i(x̄) for all i ∈ I, and f_s(x)/g_s(x) < f_s(x̄)/g_s(x̄) for some s ∈ {1, 2, . . . , n}.

Definition 5 [11]: A point x̄ ∈ D is said to be weakly efficient (weakly Pareto optimal) for the multi-objective fractional programming problem if and only if there exists no x ∈ D such that

f_i(x)/g_i(x) < f_i(x̄)/g_i(x̄) for all i ∈ I.
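Definitions 4 and 5 can be operationalized for a finite set of candidate points. The following sketch (illustrative only; the function name and the objective values are ours) marks a point as efficient exactly when no other point dominates it in the sense of Definition 4.

```python
# Efficiency test over a finite list of objective vectors (Definition 4):
# point k is efficient iff no other point j satisfies
# objs[j] <= objs[k] componentwise with strict inequality in some component.

def is_efficient(k, objs):
    for j, v in enumerate(objs):
        if j == k:
            continue
        if all(a <= b for a, b in zip(v, objs[k])) and \
           any(a < b for a, b in zip(v, objs[k])):
            return False
    return True

objs = [(1, 3), (2, 2), (3, 1), (2, 3)]   # hypothetical objective values
efficient = [k for k in range(len(objs)) if is_efficient(k, objs)]
print(efficient)   # (2, 3) is dominated by (1, 3); the rest are efficient
```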

Definition 6 [8]: A continuous function p : R^n → R is said to be a penalty function for the constrained optimization problem if it satisfies the following conditions: p(x) = 0 if h_j(x) ≤ 0 for all j ∈ J, and p(x) > 0 otherwise.
Conventionally, the penalty function approach introduced by Zangwill [8] for both equality and inequality constraints is popularly known as the absolute value penalty function; for the inequality-constrained problem it takes the form

p(x) = Σ_{j=1}^{m} max{0, h_j(x)}.

Theorem 1: Let f_i, i ∈ I = {1, . . . , n}, be invex and g_i, i ∈ I, be incave with respect to the same function η : X × X → R^n at u on X, with f_i(x) ≥ 0 and g_i(x) > 0 for all x ∈ X. Then the fractional function f_i/g_i is α_i-invex with respect to the same function η at u on X and with respect to the function given by

α_i(x, u) = g_i(u)/g_i(x).

Proof: From the hypothesis, by Definitions 1 and 2 and inequalities (1) and (2), we have, for all x ∈ X,

f_i(x) − f_i(u) ≥ η(x, u)^T ∇f_i(u),    g_i(x) − g_i(u) ≤ η(x, u)^T ∇g_i(u).

By differential calculus (the quotient rule),

∇(f_i/g_i)(u) = [g_i(u) ∇f_i(u) − f_i(u) ∇g_i(u)] / g_i(u)^2.

Hence

f_i(x)/g_i(x) − f_i(u)/g_i(u) = [g_i(u)(f_i(x) − f_i(u)) − f_i(u)(g_i(x) − g_i(u))] / (g_i(x) g_i(u)) ≥ [g_i(u)/g_i(x)] η(x, u)^T ∇(f_i/g_i)(u).

Therefore, by Definition 3, f_i/g_i is α_i-invex with respect to the function η at u on X and with respect to the function α_i(x, u) = g_i(u)/g_i(x). For simplicity, we write ϕ_i = f_i/g_i; since α_i(x, u) > 0 for all i ∈ I, we may treat each ϕ_i as an invex function. We thus obtain the modified multi-objective programming problem

(P) Minimize (ϕ_1(x), . . . , ϕ_n(x)) subject to h_j(x) ≤ 0, j ∈ J,

where ϕ_i : X → R, i ∈ I, and h_j : X → R, j ∈ J, are differentiable functions on a nonempty open set X ⊂ R^n.

Kuhn-Tucker multiplier for logarithmic penalty function
For any nonlinear optimization problem, the first-order necessary conditions for a point to be optimal are the Karush-Kuhn-Tucker (KKT) conditions, provided that some constraint qualification is satisfied. However, the Courant-Beltrami penalty function may not be differentiable at a point where h_j(x) = 0 for some j ∈ J: for the constrained optimization problem, both the objective function and the constraints may be partially differentiable on R^n while the penalized problem is not, since differentiability is not among the properties of max{0, h_j(x)}. Therefore, an additional hypothesis may be imposed on the constraint functions h_j(x): if h_j(x) has continuous first-order partial derivatives on R^n, then [h_j^+(x)]^2, where h_j^+(x) = max{0, h_j(x)}, admits the same, with

∂/∂x_r [h_j^+(x)]^2 = 2 h_j^+(x) ∂h_j(x)/∂x_r,

where r indexes the variables. Considering Equation (14), if p(x) : R^n → R is a logarithmic penalty function and the constraints h_j(x) have continuous first-order partial derivatives on R^n, then (15) follows. From (15), we can define the Kuhn-Tucker multiplier as follows:

Theorem 2 [16]: Let x̄ be an optimal solution of problem (P) and assume that a suitable constraint qualification from [17] is satisfied at x̄. Then there exists a Lagrange multiplier μ̄ ∈ R^m such that
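The Lagrange multiplier of Theorem 2 can be recovered numerically as the limit of scaled penalty terms. As an illustrative aside (a toy problem of our own choosing): for min x^2 subject to 1 − x ≤ 0, the KKT conditions give x̄ = 1 and μ̄ = 2, and with the squared-hinge penalty the quantity 2c·max{0, 1 − x_c} evaluated at the penalized minimizer x_c converges to μ̄ as c grows.

```python
# Toy problem: minimize x**2 subject to h(x) = 1 - x <= 0.
# KKT at xbar = 1: 2*xbar - mu = 0, so the true multiplier is mu = 2.
# Penalized: F_c(x) = x**2 + c * max(0, 1 - x)**2, minimized at x_c = c/(1+c);
# the multiplier estimate is mu_c = 2*c*max(0, 1 - x_c) = 2c/(1+c) -> 2.

def multiplier_estimate(c):
    x_c = c / (1.0 + c)                  # closed-form penalized minimizer
    return 2.0 * c * max(0.0, 1.0 - x_c)

for c in (10.0, 100.0, 10000.0):
    print(c, multiplier_estimate(c))
```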

The logarithmic penalty method for invex optimization problems
A constrained optimization problem can be transformed into a single unconstrained problem, in the case of single-objective mathematical programming, or into a sequence of unconstrained problems, in the case of multi-objective optimization, by employing the penalty function. Starting from the new logarithmic penalty function introduced by Hassan and Baharum [14] for equality constraints, we modify the Courant-Beltrami penalty function for inequality constraints; the modified Courant-Beltrami penalty is constructed as

p(x) = Σ_{j=1}^{m} (max{0, h_j(x)})^2.

This leads to the logarithmic penalized optimization problem P_c(x) for the multi-objective fractional programming problem (P). We can now completely characterize the solutions of problem (P) in terms of the minimizers of the logarithmic penalized problem whenever the penalty parameter exceeds some suitable threshold: for a sufficiently large value of c, under suitable invexity assumptions imposed on the functions in problem (P), a KKT point minimizes the auxiliary function P_c(x) if and only if it minimizes the optimization problem (P).
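The exact form of the penalized problem P_c(x) is given in the displayed equations; the sketch below is an illustration only, using a hypothetical logarithmic composition c·log(1 + p(x)) of the modified Courant-Beltrami term p(x) on a toy problem of our own choosing, to show the sequential-unconstrained scheme in action: as c increases, the unconstrained minimizers approach the constrained optimum.

```python
# Illustrative sequential unconstrained minimization with a hypothetical
# logarithmic penalty F_c(x) = f(x) + c * log(1 + p(x)), where
# p(x) = max(0, h(x))**2 is the modified Courant-Beltrami term.
# Toy problem: f(x) = x**2, h(x) = 1 - x <= 0 (constrained optimum x* = 1).
import math

def solve(c, lo=0.0, hi=2.0, n=20001):
    """Grid search for the minimizer of F_c on [lo, hi]."""
    best_x, best_val = lo, float("inf")
    for k in range(n):
        x = lo + (hi - lo) * k / (n - 1)
        val = x * x + c * math.log(1.0 + max(0.0, 1.0 - x) ** 2)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

xs = [solve(c) for c in (1.0, 10.0, 100.0, 1000.0)]
print(xs)   # minimizers increase toward 1 as c grows
```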
We now show that a KKT point of the optimization problem yields a minimizer of the logarithmic penalty function in the associated penalized optimization problem.

Theorem 3 [15]: Let x̄ be a feasible solution of the mathematical programming problem (P) at which the KKT necessary optimality conditions hold with Lagrange multipliers μ̄_j, j ∈ J. Furthermore, assume that the objective function ϕ is invex at x̄ on X with respect to η and that the function −Σ_{j=1}^{m} h_j(x) is incave at x̄ on X with respect to the same function η. If c is sufficiently large (it suffices to set c > max{μ̄_j, j ∈ J}, where μ̄_j, j ∈ J, are the Lagrange multipliers associated with the constraints h_j, respectively), then x̄ is also a minimizer of the associated penalized optimization problem P_c(x) with the logarithmic penalty function.

Proof:
To prove that x̄ is optimal for the associated penalized optimization problem P_c(x), we proceed by contradiction. Suppose, contrary to the claim, that x̄ is not an optimal solution of the associated penalized optimization problem P_c(x) with the logarithmic penalty function. Then there exists x̃ ∈ X such that P_c(x̃) < P_c(x̄). By (16), we obtain (17). Since x̄ is a feasible solution of the mathematical programming problem (P), (18) holds. Moreover, by Equation (18), Equation (17) becomes (19). Again, by the hypothesis c > max{μ̄_j, j ∈ J}, where μ̄_j, j ∈ J, are the Lagrange multipliers associated with the constraints h_j, respectively, Equation (19) can be transformed, for each x̃ ∈ X, into the following form. Since the KKT necessary optimality conditions are fulfilled and x̄ is a feasible point of the mathematical programming problem (P), it follows that μ̄_j h_j(x̄) = 0 for all j ∈ J. By assumption, the objective function ϕ is invex at x̄ on X with respect to η; therefore, by Definitions 1 and 2, respectively, we may rewrite the above inequality in terms of η(x̃, x̄)^T ∇ϕ(x̄). The resulting strict inequality contradicts the KKT necessary optimality condition (i). Therefore, the conclusion of the theorem is established.

Numerical examples
Example 1 [11]: Let us now consider the following nonlinear multi-objective fractional programming problem with a single constraint, whose feasible set is D = {x ∈ R : x ≥ 0}; x̄ = 0 is a Pareto optimal solution of the considered multi-objective fractional programming problem.
Using the modified objective functions (11), ϕ_i, i = 1, 2, for the considered nonlinear multi-objective fractional programming problem, the following associated optimization problem can be constructed. Equation (20) is a simple constrained optimization problem, which can be transformed into an unconstrained one as in (16).
Substituting the functions into (17), we minimize P_c(x). Clearly, x̄ = 0 is a Pareto optimal solution of the above penalized optimization problem, irrespective of the value of the penalty parameter c. This shows that applying the penalty function method to the transformed multi-objective fractional programming problem yields the same Pareto optimal solution as the original problem.

Example 2:
Consider the following multi-objective fractional programming problem with more than one variable, whose set of all feasible solutions is denoted by D. We now construct the unconstrained multi-objective fractional programming problem based on the logarithmic penalized optimization problem as in Equation (14), with one of the fractional objectives of the form (x_1^2 + 2x_2 + 6)/(x_1^2 + 2). Therefore, we are to find the Pareto optimal solution for the following unconstrained objective functions: min P_c(x) = 4x_1^2 + 2x_2 + 5. Considering the set of feasible solutions D, and sampling both variables x_1 and x_2 at intervals of 1 unit, Table 1 summarizes the feasible points and their corresponding multi-objective values obtained via the logarithmic penalty function method for the multi-objective fractional programming problem.
Obviously, from Table 1, x̄ = (0, −1) is a Pareto optimal solution of the above penalized optimization problem, irrespective of the value of the penalty parameter c.

Conclusion
In this paper, a new logarithmic penalty function method has been applied to a modified multi-objective fractional programming problem. If the functions constituting the original mathematical programming problem are invex/incave and differentiable with respect to the same function η, then the modified objective functions are also invex and differentiable. Transforming the constrained problem into an unconstrained one via the penalty function yields the same Pareto optimal solutions for the original optimization problem and its associated penalized optimization problem. The results obtained show how crucial the logarithmic penalty function method is.
Future work will focus mainly on solving practical applications via metaheuristic algorithms.