Performance enhancement of multivariable model reference optimal adaptive motor speed controller using error-dependent hyperbolic gain functions

The main contribution of this paper is to formulate a robust, adaptive, and stable state-space speed control strategy for DC motors. The linear-quadratic-integral (LQI) controller is utilized as the baseline controller for optimal speed-regulation, accurate reference-tracking, and elimination of steady-state fluctuations in the motor's response. To reject the influence of modelling errors, the LQI controller is augmented with a Lyapunov-based model reference adaptation system (MRAS) that adaptively modulates the controller gains while maintaining the asymptotic stability of the controller. To further enhance the system's robustness against parametric uncertainties, the adaptation gains of the MRAS online gain-adjustment law are dynamically adjusted, after every sampling interval, using smooth hyperbolic functions of the motor's speed-error. This modification significantly improves the system's response-speed and damping against oscillations, while ensuring its stability under all operating conditions. It dynamically re-configures the control-input trajectory to enhance the system's immunity against the detrimental effects of random faults occurring in practical motorized systems, such as bounded impulsive disturbances, modelling errors, and abrupt load-torque variations. The efficacy of the proposed control strategy is validated by conducting credible hardware-in-the-loop experiments on the QNET 2.0 DC Motor Board. The experimental results successfully validate the superior tracking accuracy and disturbance-rejection capability of the proposed control strategy as compared to the other controller variants benchmarked in this article.


Introduction
Owing to their size, cost-effectiveness, ease of control, and variable torque handling capability, permanent magnet direct current (PMDC) motors are widely favoured in industrial conveyor systems, hybrid electric vehicles, unmanned aircraft, rolling mills, numerically controlled machine tools, robotic systems, disk-drives, etc. [1,2]. However, there is a dire need for optimal and noise-tolerant closed-loop controllers for speed tracking in industrial servo applications requiring high accuracy [3]. A plethora of speed control mechanisms have been devised and discussed in the scientific literature. The proportional-integral (PI) controllers are widely regarded as an industrial standard for motor speed control applications. They are preferred due to their simple structure and reliable control yield. Despite these benefits, integer-order PI controllers lack the robustness to handle higher-order systems [4,5]. Furthermore, constituting a well-tuned PI controller is an ill-posed problem. The fractional-order controllers have also been used to attain a flexible speed control effort [6,7]. However, it is quite difficult to select a suitable set of weighting factors that satisfies the design constraints [8]. The sliding mode controllers, despite their robustness, inevitably inject high-frequency chattering into the actuator response [9,10]. The neural and fuzzy schemes generally require large training datasets or elaborate empirically defined rule-bases, respectively, to realize a robust control system [11,12]. The model-based linear-quadratic-integral (LQI) controllers are also preferred for speed control applications because of their optimal control yield [13]. The LQI controller introduces a weighted integral control term that eliminates steady-state errors and enables the system to track time-varying trajectories [14].
However, unlike the conventional optimal regulators, the integral damping introduced by the LQI controller slows down the system's response, which deteriorates its asymptotic convergence rate and disturbance-attenuation capability for the same set of state- and control-weighting matrices [15]. The numerical integration causes the accumulation of noise, which leads to actuator saturation or wind-up. Moreover, the performance of state-space controllers is degraded by modelling errors [16,17]. The self-tuning adaptive linear controllers tend to improve the robustness of dynamical systems against the problems mentioned above [18]. These problems have been addressed in the literature by utilizing state-dependent Riccati equation controllers [19], collaborative controllers [20], hybrid self-tuning controllers [21], and H∞ controllers [22], etc. However, apart from being computationally expensive, these techniques require perfectly identified system models to yield promising results. Another robust approach to address the aforementioned problems is the utilization of a model reference adaptive system (MRAS) that alters the system's performance by dynamically adjusting the parameters of the closed-loop speed controller after every sampling interval [23,24]. The MRAS accomplishes this task by minimizing the error between the outputs of the reference model and the actual system in real-time [25,26].
This article synthesizes a robust multi-variable model reference adaptive control scheme for DC motor speed control, wherein the LQI controller is used as a reference model that provides the desired input-output characteristics of the system. An asymptotically stable parameter adjustment law is derived using the Lyapunov theory [27]. The fixed gains of the parameter adjustment law lack the robustness to compensate for exogenous disturbances and parametric variations. A possible approach to solve this problem has been proposed in [28]. In this technique, two different adaptation gain matrices are defined for the parameter adjustment law, such that one matrix addresses the transient behaviour and the other compensates the steady-state response. The parameter adjustment law transits between the two matrices based upon a pre-defined switching criterion. The abrupt gain transition caused by the binary switching phenomenon injects inevitable chattering into the response.
The novel contribution of this article is the augmentation of the MRAS's parameter adjustment law with an optimized self-tuning mechanism for the dynamic adjustment of its adaptation gains. Each adaptation gain of the parameter adjustment law is updated, after every sampling interval, with the aid of an individual nonlinear hyperbolic function that depends on the instantaneous error in speed, e_ω. Each hyperbolic function is pre-optimized offline using a well-defined performance criterion associated with its respective state-variable. Consequently, the error-dependent self-tuning of adaptation gains speeds up the transient recovery while continually suppressing the oscillations, overshoots, and steady-state fluctuations. This setup avoids making a trade-off between the system's transient and steady-state performance. Moreover, the hyperbolic function offers a smooth transition between the varying adaptation gains under unprecedented dynamic state variations, which effectively inhibits chattering in the response. The main motivation for choosing the aforementioned adaptive control architecture is to strengthen the controller's immunity against random faults encountered by practical DC motor systems, such as un-modelled bounded impulsive disturbances, random modelling errors, and abrupt load-torque variations. The performance of the proposed adaptive controller is compared with a well-tuned PI controller as well as a conventional adaptive-LQI controller with fixed adaptation gains. The hyper-parameters of the above-mentioned controller variants are pre-calibrated via the adaptive particle swarm optimization (APSO) algorithm [29]. Credible hardware experiments are conducted on the QNET 2.0 DC Motor Board to validate the superior speed-regulation, tracking accuracy, disturbance-rejection, and modelling-error compensation capability of the proposed adaptive controller.
The proposed scheme cohesively amalgamates the characteristics such as robustness, stability, time-optimality and computational simplicity in a single framework. This idea has not been explicitly attempted previously in open literature.
The remaining paper is organized as follows. The system description is presented in Section 2. The formulation of the standard PI controller is discussed in Section 3. The theoretical background of the model reference adaptive LQI controller is discussed in Section 4. The synthesis of the proposed error-dependent nonlinear adaptive LQI controller is presented in Section 5. The optimization methodology of the controller parameters is discussed in Section 6. The experimental execution and corresponding results are summarized in Section 7. The article is concluded in Section 8.

System description
The state-space model of a linear dynamical system is generally given by (1) and (2).
where x(t) is the state-vector, y(t) is the output-vector, u(t) is the control-input signal, A is the system-matrix, B is the input-matrix, C is the output-matrix, and D is the feed-forward matrix. The state-variable representing the integral-of-error, ε, is also included in the conventional state-space model of the DC motor. This augmentation aids in effectively eliminating the steady-state fluctuations, inhibiting the overshoots, damping the oscillations, and enabling the system to track time-varying reference trajectories. The error-integral term is given by (3).
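Since the typeset equations are not reproduced in this text, a standard reconstruction of (1)-(3), consistent with the surrounding definitions, is:

```latex
\dot{x}(t) = A\,x(t) + B\,u(t) \qquad (1)

y(t) = C\,x(t) + D\,u(t) \qquad (2)

\varepsilon(t) = \int_{0}^{t}\big(\omega_{\mathrm{ref}} - \omega(\tau)\big)\,d\tau \qquad (3)
```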
where ω ref is the reference speed of the motor, and ω is the actual speed of the motor. The state-vector is given by (5).
where i(t) is the armature current of the motor. The motor voltage, V m (t), acts as the control-input signal of the motor. With the introduction of the auxiliary integral state-variable, the overall state-space model of the DC motor for this research is given by (6), [13].
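The augmented model described above can be assembled programmatically. The sketch below uses illustrative motor parameters (not the identified values of Table 1) and verifies that the augmented pair (A, B) remains controllable, which is a prerequisite for the LQI design that follows:

```python
import numpy as np

# Illustrative (not the paper's identified) PMDC motor parameters.
R_a, L_a = 2.0, 0.5      # armature resistance [ohm] and inductance [H]
K_t = K_b = 0.1          # torque and back-EMF constants
J, b = 0.02, 0.2         # rotor inertia and viscous friction

# States: x = [i, omega, eps], with eps = integral of (omega_ref - omega).
# d(i)/dt     = (-R_a*i - K_b*omega + V_m)/L_a
# d(omega)/dt = ( K_t*i - b*omega)/J
# d(eps)/dt   = omega_ref - omega   (only the -omega part enters A)
A = np.array([[-R_a/L_a, -K_b/L_a, 0.0],
              [ K_t/J,   -b/J,     0.0],
              [ 0.0,     -1.0,     0.0]])
B = np.array([[1.0/L_a], [0.0], [0.0]])   # V_m drives the armature circuit

# The augmented pair (A, B) must be controllable for LQI design.
ctrb = np.hstack([B, A @ B, A @ A @ B])
print("controllability matrix rank:", np.linalg.matrix_rank(ctrb))
```

The rank-3 result confirms that the added integral state does not break controllability for this parameterization.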
The motor parameters are identified in Table 1, [7].

Fixed-gain PI control scheme
The ubiquitous PI controller is a model-free control mechanism that is generally employed as the standard speed control scheme in the process control industry, owing to its simple structure, resilience, and reliability [30]. The control decisions are derived by computing the weighted linear combination of the instantaneous measurements of error and integral-of-error in the controlled variable. The PI control law constituted for this research is given by (7).
where k P and k I represent the proportional gain and integral-gain associated with the PI control law, respectively. The proportional control term is responsible for tracking the deviations of the response from the reference. It acts on the instantaneous value of error to ensure convergence of the response to the set-point value. The integral control term acts on the accumulated value of error. It improves the damping of the system by introducing a closed-loop pole at the origin.
This modification effectively attenuates the peak overshoots (or undershoots) and mitigates the steady-state fluctuations by manipulating the magnitude and duration of error. A well-postulated PI controller offers reasonable tracking accuracy and damping against oscillations. Hence, in this research, the PI gains are optimally selected using the APSO algorithm (discussed in Section 6).
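A minimal discrete-time sketch of the PI law in (7) is given below. The plant model and the gains are hypothetical stand-ins (the paper's APSO-tuned gains are not reproduced here); the example only illustrates the structure of the control loop:

```python
import numpy as np

# Discrete PI law, cf. (7): V_m = kP*e + kI*integral(e).
# Plant and gains below are illustrative, not the paper's tuned values.
k_P, k_I = 2.0, 10.0
T_s = 0.001                      # sampling interval [s]
omega_ref = 100.0                # reference speed [rad/s]

omega, integ = 0.0, 0.0          # motor speed and error accumulator
for _ in range(3000):            # 3 s of simulated time
    e = omega_ref - omega        # instantaneous speed error
    integ += e * T_s             # integral-of-error (anti-windup omitted)
    u = k_P * e + k_I * integ    # PI control voltage
    # hypothetical first-order speed dynamics: d(omega)/dt = -5*omega + 2*u
    omega += T_s * (-5.0 * omega + 2.0 * u)

print(f"speed after 3 s: {omega:.2f} rad/s")
```

With these illustrative numbers the closed-loop poles sit at -4 and -5, so the response converges to the reference without overshoot, and the integral term removes the steady-state offset.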

Adaptive optimal control architecture
This section provides the theoretical background of the conventional LQI control schemes and its transformation into an adaptive-optimal controller by augmenting it with MRAS.

Fixed-gain LQI controller
The state-feedback controller for a linear dynamical system is implemented by minimizing an energy-like quadratic performance index in order to generate optimal control decisions [31]. As compared to the conventional PI controllers, the LQI scheme includes the auxiliary information regarding the state of the motor's armature-current, which re-configures the control procedure to deliver a flexible correctional effort. The state-feedback gain vector of the LQI controller, denoted as K lqi , is calculated by using the expression in (8).
The gain vector relocates the closed-loop poles of the system to synthesize an optimal control trajectory. The matrix, P, is a symmetric positive definite matrix that is evaluated offline by solving the Algebraic Riccati equation (ARE) [32], given by (9).
The quadratic cost function is expressed in (10).
where Q and R are the state and control penalty matrices, respectively. They are chosen such that; Q = Q T ≥ 0 and R = R T > 0. The penalty matrices used in this work are given by (11).
Relatively larger weighting factors are selected for the error-integral and control-input variable in order to prevent the integral wind-up and the actuator saturation, respectively. The optimal fixed-gain linear control law is given by (12).
The fixed state-feedback gain vector evaluated by using the system description, given in the previous section, is given in (13).
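The computation of K_lqi in (8)-(11) can be sketched as follows. Since the paper's identified model and tuned weights are not reproduced here, the example reuses the illustrative motor model from above and solves the ARE (9) by integrating the Riccati ODE to steady state with plain numpy (in practice a dedicated solver such as scipy.linalg.solve_continuous_are would be used):

```python
import numpy as np

# Illustrative motor model (same hypothetical parameters as before).
A = np.array([[-4.0, -0.2, 0.0],
              [ 5.0, -10.0, 0.0],
              [ 0.0,  -1.0, 0.0]])
B = np.array([[2.0], [0.0], [0.0]])

# Penalty matrices, cf. (11): heavier weights on the error-integral state
# and the control input (values are illustrative, not the paper's).
Q = np.diag([1.0, 1.0, 100.0])
R = np.array([[10.0]])
R_inv = np.linalg.inv(R)

# Solve the ARE (9) by integrating the Riccati ODE to its fixed point;
# this converges to the stabilizing solution for a controllable pair.
P = Q.copy()
dt = 1e-3
for _ in range(100000):
    P_dot = A.T @ P + P @ A - P @ B @ R_inv @ B.T @ P + Q
    P = P + dt * P_dot
    if np.abs(P_dot).max() < 1e-9:
        break

K_lqi = R_inv @ B.T @ P          # state-feedback gain vector, cf. (8)
print("K_lqi =", np.round(K_lqi, 3))
print("closed-loop poles:", np.round(np.linalg.eigvals(A - B @ K_lqi), 3))
```

The printed closed-loop poles all lie in the open left half-plane, confirming that the gain vector relocates the poles as described above.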

Model reference adaptive LQI controller
The fixed-gain LQI controller lacks robustness against the influences of modelling errors caused by faulty identification, un-modelled intrinsic nonlinearities, or environmental indeterminacies. Thus, it is retrofitted with a stable online indirect MRAS [33]. The MRAS adaptively modulates the state-feedback gains, as a function of the gradient of error between the actual closed-loop system and reference system, after every sampling interval [34]. The adaptive self-tuning of state-feedback gains enhances the robustness of the system. It renders a significant improvement in the system's error-convergence rate while eliminating fluctuations and overshoots (or undershoots), even in the presence of bounded exogenous disturbances and parametric uncertainties. The reference model is implemented in a 64-bit computer. It functions concurrently with the actual system and generates control decisions based on the actual state-feedback. The derivation of the proposed model-reference Adaptive LQI (ALQI) controller is presented as follows [28]. Consider the linear system described by (14).
The objective is to construct a stable control law such that the response of the controlled system imitates that of the reference system, given by (15).
The proposed linear adaptive control law is given by (16).
where K c is the adaptive state-feedback gain vector that is adjusted online with respect to variations in the state-trajectories. The closed-loop representation of the actual as well as the reference system can be expressed according to (17) and (18).
where K̄ c denotes the corresponding state-feedback gain vector of the reference model. The difference between the actual states and the reference states is given by (19).
The error equation presented in (19) aids in deciding the convergence rate of the adaptation mechanism. The time-derivative of the error equation yields the expression in (20).
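Since the typeset equations are omitted from this text, equations (19)-(21) can be reconstructed in their standard model-following form (assuming the control law u(t) = -K_c x(t) and the reference dynamics ẋ_ref = A_ref x_ref):

```latex
e(t) = x(t) - x_{\mathrm{ref}}(t) \qquad (19)

\dot{e}(t) = (A - BK_c)\,x(t) - A_{\mathrm{ref}}\,x_{\mathrm{ref}}(t) \qquad (20)

\dot{e}(t) = A_{\mathrm{ref}}\,e(t) + \big(A - BK_c - A_{\mathrm{ref}}\big)\,x(t) \qquad (21)
```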
The error-derivative equation is simplified by the simultaneous addition and subtraction of the term A ref x(t) on the right-hand side of equation (20). The simplified expression is shown in (21). The simplified error-derivative expression in (21) assumes that the conditions required for accurate model-tracking are perfectly satisfied. The next step is to derive a stable online adaptive adjustment law for the state-feedback gain vector, K c . For this purpose, a quadratic Lyapunov function, given by (23), is utilized.
where β and P are both positive-definite matrices. The function G(e, K c ) is also positive semi-definite. The matrix β is denoted as the adaptation-gain matrix. It determines how quickly the error, e(t), converges to zero for the given Lyapunov adaptation mechanism. The matrix P is evaluated using the equation given by (24).
The Q matrix is already identified in (11). If A ref is stable, then there always exists a pair of positive-definite matrices, P and Q, satisfying the aforementioned property. To verify the validity of the selected Lyapunov function, its time-derivative is computed. The expression is given by (25).
If the state-feedback gain-adjustment law is chosen as the expression in (26), then the time-derivative of the Lyapunov function, Ġ, is always negative-definite. This assertion implies that the error, e(t), will eventually converge to zero. Substituting the expression for ϕ in the gain-adjustment law provides the expression in (27).
All the states of the system, x(t), are measurable. Integrating both sides of expression (27) provides an iterative adaptation mechanism for the dynamic adjustment of the state-feedback gain vector, given by (28).
where T s is the sampling interval of the system. In this research, the closed-loop system obtained by applying the LQI controller on the mathematical model of the DC motor is used as the reference model, in order to implement the proposed adaptive controller. The closed-loop reference model is shown in (29).
The vector K lqi , identified in (13), is used as the initial state-feedback gain vector of the gain-adjustment law in (28). The formulated ALQI controller alters the state-feedback gains after every sampling interval, based on the real-time state-variations and error-dynamics of the closed-loop system. The standard ALQI control law is given by (30).
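The mechanism of the iterative adaptation law (28) can be illustrated with a minimal model-reference adaptation loop. The sketch below assumes a two-state plant, a scalar adaptation gain, and the common Lyapunov-derived update K_c ← K_c + T_s·β·(BᵀPe)·xᵀ with u = −K_c x + r; the structure and signs are an assumed standard form, since the paper's (26)-(28) are not reproduced here, and all numerical values are illustrative:

```python
import numpy as np

# Minimal sketch of a Lyapunov gain-adjustment law of the form (28):
#   K_c <- K_c + T_s * beta * (B^T P e) * x^T,   u = -K_c x + r
A_ref = np.array([[-6.0, -0.8], [5.0, -10.0]])   # stable reference model
B = np.array([[2.0], [0.0]])
K_lqi = np.array([[1.0, 0.3]])                   # initial feedback gains
# plant with a parametric mismatch in the actuated channel
A_plant = A_ref + B @ K_lqi + B @ np.array([[-0.5, 0.25]])

# P solves the Lyapunov equation A_ref^T P + P A_ref = -Q, cf. (24).
Q = np.eye(2)
M = np.kron(np.eye(2), A_ref.T) + np.kron(A_ref.T, np.eye(2))
P = np.linalg.solve(M, -Q.flatten()).reshape(2, 2)

T_s, beta = 1e-3, 200.0                          # illustrative values
x = np.zeros((2, 1)); x_ref = np.zeros((2, 1))
K_c = K_lqi.copy()
r = 1.0                                          # constant reference input
for _ in range(20000):                           # 20 s of simulated time
    e = x - x_ref                                # model-following error
    s = (B.T @ P @ e).item()                     # scalar adaptation signal
    K_c = K_c + T_s * beta * s * x.T             # gain-adjustment step
    u = (-K_c @ x).item() + r
    x = x + T_s * (A_plant @ x + B * u)          # plant step (Euler)
    x_ref = x_ref + T_s * (A_ref @ x_ref + B * r)  # reference-model step

print("final model-following error norm:",
      float(np.linalg.norm(x - x_ref)))
```

Despite the deliberately mismatched plant, the adapted gains drive the model-following error to zero, whereas freezing K_c at K_lqi would leave a persistent steady-state offset between the plant and the reference model.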
The quantitative information of all the parameters in the expression of adaptive gain vector, K c (t), is available except for the adaptation-gain matrix, β. The matrix β has to be carefully selected by the designer based on some performance-criterion. As discussed earlier, β is a diagonal matrix as shown in (31).
The gains β i , β ω , and β ε directly affect the flexibility and the error convergence rate of the parameter adjustment law given in (28). A suitable set of adaptation gains is meta-heuristically selected via the APSO algorithm in this research. The selection methodology of β is presented in Section 6.

Nonlinear self-tuning model-reference adaptive LQI controller
The performance of practical dynamical systems is prone to degradation under the influence of parametric uncertainties and external disturbances. Thus, using fixed adaptation gains in the parameter adjustment law is impractical. During transient conditions, the adaptation gains are required to adapt quickly in order to track and compensate the abrupt deviations occurring in the state-variables in real-time. Thus, the variation-rates of the speed and current gains are inflated and the integral-damping response is depressed in order to stiffen the control effort during transient conditions. On the contrary, the state-feedback gains are required to change very gently during steady-state conditions in order to avoid any overshoots or oscillations in the response. Hence, in steady-state, smaller adaptation gains are preferable for the speed and current gains to render a softer control effort, while the sensitivity of the integral gain is enhanced to strengthen the damping phenomenon. The aforementioned characteristics offer rapid transits in the response with enhanced damping against oscillations while minimizing the control energy expenditure, even in the presence of bounded exogenous disturbances.
This rationale can be used to devise simple pre-defined analytical rules that capture the variations in the e ω signal to dynamically adjust the adaptation gains after every sampling interval. In this research, a practicable and computationally efficient solution is presented in the form of nonlinear scaling functions of e ω for the online modification of adaptation gains. A plethora of nonlinear scaling functions have been proposed in the literature, each offering its distinct attributes [35,36]. In this research, the Hyperbolic Secant Functions (HSFs) of e ω are used to adaptively modulate the weighting factors of β. The waveform of the HSF is smooth, bounded, symmetrical, and differentiable [20]. Correspondingly, the smooth transition of adaptation gains with respect to the variations in e ω renders superior damping and negligible oscillations in the response under rapidly changing operating conditions [14]. Moreover, the control effort can be further shaped by appropriately selecting the nonlinearity-index of each function. This feature enhances the flexibility of the Lyapunov parameter adjustment law, which enables the controller to exhibit a relatively faster error-convergence rate and stronger damping against exogenous disturbances. The HSFs employed for the nonlinear self-tuning of the three adaptation gains are provided in (32)-(34).
β_i(e_ω) = β_i,max − (β_i,max − β_i,min) · sech(α_i · e_ω(t)/ω_ref)   (32)

β_ω(e_ω) = β_ω,max − (β_ω,max − β_ω,min) · sech(α_ω · e_ω(t)/ω_ref)   (33)

β_ε(e_ω) = β_ε,min + (β_ε,max − β_ε,min) · sech(α_ε · e_ω(t)/ω_ref)   (34)

where α_i, α_ω, and α_ε are the nonlinearity-indices of the HSFs of β_i, β_ω, and β_ε, respectively. The upper and lower bounds of each function are set by β_x,max and β_x,min, respectively. The block diagram of the proposed adaptive control scheme is shown in Figure 1. The upper bounds, lower bounds, and nonlinearity-indices are meta-heuristically selected using the APSO algorithm. The HSFs of β_i and β_ω increase nonlinearly with respect to the error. The HSF of β_ε decreases nonlinearly with the error, which allows for a stronger integral control action during the steady-state, and vice versa. Consequently, the response exhibits minimum-time transient recovery with stronger damping against oscillations. The updated matrix of error-dependent nonlinear self-tuning adaptation gains is represented according to (35).
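The error-dependent gain functions described above can be sketched directly. The bounds and nonlinearity-indices below are illustrative placeholders, not the APSO-tuned values of Table 2; the example only demonstrates the intended monotonic behaviour of the speed gain and the integral gain:

```python
import numpy as np

def sech(z):
    # hyperbolic secant: 1/cosh
    return 1.0 / np.cosh(z)

# Error-dependent adaptation gains, cf. (32)-(34). Bounds and
# nonlinearity-indices are illustrative, not the APSO-tuned values.
omega_ref = 100.0

def beta_omega(e, b_min=1.0, b_max=10.0, alpha=25.0):
    # grows from b_min (zero error) toward b_max (large error)
    return b_max - (b_max - b_min) * sech(alpha * e / omega_ref)

def beta_eps(e, b_min=0.5, b_max=5.0, alpha=25.0):
    # decays from b_max (zero error) toward b_min (large error)
    return b_min + (b_max - b_min) * sech(alpha * e / omega_ref)

for e in (0.0, 5.0, 50.0):
    print(f"e={e:5.1f}  beta_omega={beta_omega(e):6.3f}  "
          f"beta_eps={beta_eps(e):6.3f}")
```

At zero speed-error the speed gain sits at its lower bound while the integral gain sits at its upper bound; as the error grows, the two move smoothly in opposite directions, which is exactly the transient/steady-state trade-off argued for above.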
The updated parameter adjustment law is given by (36).
The vector K lqi , identified in (13), is used as the initial state-feedback gain of the gain-adjustment law in (36). Hence, the Nonlinear Self-tuning Adaptive LQI (NALQI) control law is given by (37).

Parameter optimization
The PSO algorithm is a stochastic parameter optimization technique that initializes with a random population of potential candidate solutions, known as the "particles" [37]. It searches the population to acquire the global best solution. Each particle has a position and a velocity. The mathematical expressions of the velocity (Y k ) and position (X k ) of the kth particle for the jth iteration are given in (38) and (39), respectively [38].
where c 1 and c 2 are the cognitive-coefficients having values 2.08 and 2.06, respectively, r 1 and r 2 are random real-numbers between 0 and 1, and w is the inertia-weight. The cognitive coefficients are selected via trial-and-error to ensure that their sum is greater than 4 [29]. The values of both r 1 and r 2 are randomly selected as 0.19. The fitness of each particle is evaluated and compared with the existing best-fit particle, also known as the "local-best" (P k ). The particle with the highest fitness-value recorded so far is chosen as the new value of P k . The particle with the best fitness value among all the particles in the population is chosen as the "global-best" (P g ). The function used to vary w in this research is given by (40), [29].
where w o is the initial value of inertia weight and is selected as 1.4 in this research [29]. As the optimization process progresses, the values of P g and P k converge to a similar value. If the particles are farther away from the global-best solution, P k is much greater than P g and hence, the ratio of P g /P k is less than one. Consequently, w retains a large value and supports global-searching. As the particles get closer to P g , the ratio continues to increase and w continues to decrease, enhancing the local-searching and thus, yielding a relatively faster convergence rate. The value of j max in this research is 100. The quadratic cost function, shown in (41), is used to optimally select the gains of the conventional PI controller so it may offer the best control effort in transient as well as steady-state conditions.
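The update equations (38)-(39) and the adaptive inertia behaviour described for (40) can be sketched as a minimal PSO loop. Since neither (40) nor the cost function (41) is reproduced in this text, the example assumes a plausible realization of the described behaviour, w = w_o·(1 − P_g/P_k), and uses a simple sphere function as a stand-in cost:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                       # stand-in cost (sphere function)
    return float(np.sum(x ** 2))

n, dim, iters = 30, 2, 100
w_o, c1, c2 = 1.4, 2.08, 2.06         # inertia and cognitive coefficients

X = rng.uniform(-5.0, 5.0, (n, dim))  # particle positions
Y = np.zeros((n, dim))                # particle velocities
pbest = X.copy()
pbest_f = np.array([fitness(x) for x in X])
g = int(np.argmin(pbest_f))
gbest, gbest_f = pbest[g].copy(), pbest_f[g]
f0 = gbest_f                          # initial global-best fitness

for _ in range(iters):
    for k in range(n):
        # adaptive inertia: large when particle k is far from the global
        # best (P_g/P_k small), small near convergence; assumed form of (40)
        w = w_o * (1.0 - gbest_f / (pbest_f[k] + 1e-12))
        r1, r2 = rng.random(dim), rng.random(dim)
        Y[k] = w * Y[k] + c1 * r1 * (pbest[k] - X[k]) + c2 * r2 * (gbest - X[k])
        X[k] = X[k] + Y[k]            # position update, cf. (39)
        f = fitness(X[k])
        if f < pbest_f[k]:            # update local-best
            pbest[k], pbest_f[k] = X[k].copy(), f
            if f < gbest_f:           # update global-best
                gbest, gbest_f = X[k].copy(), f

print(f"initial best {f0:.4f} -> final best {gbest_f:.6f}")
```

Because the local- and global-best records are only ever overwritten by strictly better candidates, the global-best fitness is monotonically non-increasing over the iterations.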
The fixed weighting factors of β (for the ALQI controller) as well as the nonlinearity-indices of the HSFs (for the NALQI controller) are also optimized using the cost function given in (41), since they compensate the nonlinear characteristics of the response in transient as well as steady-state conditions. The system undergoes abrupt state-variations during transient conditions; therefore, the objective for the response is to quickly converge to the reference [39]. Hence, the following cost function is minimized to optimize the related bounds (β i,max , β ω,max , and β ε,min ) in the adaptation-gain HSFs of the NALQI controller.
where t r is the time taken by the system to reach within ±10% of the reference speed and t s is the time taken by the system to settle within ±5% of the reference speed. The cost function in (42) yields minimum-time transient recovery. During steady-state conditions, the system dynamics change gently. The objective is to attenuate the overshoots, undershoots, oscillations, and steady-state fluctuations [39]. Hence, the following cost function is minimized to optimize the related bounds (β i,min , β ω,min , and β ε,max ) in the adaptation-gain HSFs of the NALQI controller.
where M p is the peak-magnitude (overshoot or undershoot) incurred in the system's response during start-up or upon the application of an external disturbance. An initial population of 100 particles is chosen for the optimal selection of each parameter. The selected values of the parameters, the range of the search-space, and the number of iterations required to converge them to the P g value are provided in Table 2. The iterative parameter optimization history and convergence pattern of the PI, ALQI, and NALQI controllers are graphically illustrated in Figures 2-4, respectively. Based on the optimized values of the nonlinearity-indices, the waveforms of the nonlinear adaptation-gain functions are illustrated in Figure 5.

Experimental evaluation
This section presents experimental test-cases and their corresponding results to validate the efficacy of the proposed control scheme. The hardware-in-the-loop experiments are conducted on the QNET 2.0 DC Motor Board [40].

Experimental setup
The QNET 2.0 DC Motor Control Board, shown in Figure 6, is used to experimentally test the proposed controller's performance in real-time [41]. The QNET 2.0 DC Motor Board consists of a permanent magnet DC motor that is equipped with a tachometer and a current sensor to measure the real-time variations in the rotational-speed and the armature-current of the motor, respectively. The motor is actuated via a dedicated on-board bidirectional pulse-width-modulated motor driver circuit. The motor shaft is coupled to an inertial disc of mass 0.15 kg. The experimental setup is integrated with a LabVIEW based graphical user interface (GUI) that aids in recording and visualizing the motor's speed response [42]. The NI-ELVIS II board acts as a serial-communication bridge between the motor and the GUI [43]. The GUI and the control software are implemented on a 64-bit embedded computer. The control software serially acquires the sensor measurements and serially transmits the appropriate voltage control commands to the motor driver circuit at 9600 bps. The acquired data is sampled at 1000 Hz.

Tests and results
To better appraise the benefits of the proposed NALQI controller, its performance is benchmarked against the conventional ALQI controller and a well-tuned PI controller. Five unique test-cases are used to analyse the reference-tracking capability, impulsive disturbance-rejection capability, and robustness against modelling errors of each controller; the corresponding tracking responses are illustrated in Figure 8. (C) Impulsive-disturbance rejection: The immunity of the control scheme against exogenous disturbances is tested by externally injecting an additive and subtractive bounded impulsive signal, also known as the "dither" signal, directly into the control-input signal. The dither signal has a magnitude of ±5.0 V and a duration of 8.0 ms. The application of the dither signal perturbs the steady-state response of the system by introducing transients. The resulting disturbance-rejection response of each controller is illustrated in Figure 9.

(D) Variable load-torque compensation:
The robustness of the control scheme against changing load-torques is studied by coupling the primary DC motor with another identical motor (mini-generator), as shown in Figures 10 and 11. At t = 2.5 s, a 100 Ω resistor is connected across the output terminals of the coupled generator in order to abruptly increase the load-torque on the primary motor, which perturbs the motor's steady-state response. The load-torque compensation capability of each controller is graphically illustrated in Figure 12. (E) Modelling-error attenuation: The controller's capability to compensate for the influence of modelling errors is analysed by introducing a step-increment in the armature resistance, R, of the motor. The motor is started normally. At t = 2.5 s, a 1.0 Ω resistor is connected in series with one of the primary motor's terminals, thereby incrementing R. This is done by moving the switch from position A to position B, as shown in Figure 13. The modelling-error attenuation response of each controller is illustrated in Figure 14.

Quantitative analysis
A comprehensive comparative performance assessment of the experimental results is summarized in Table 3.
The performance is analysed in terms of t r , t s , the absolute value of M p , the transient-recovery time (t rec ), and the root-mean-squared value of steady-state fluctuations (E ss ) in the time-domain response. The consolidated quantitative performance analysis derived from the numerical data recorded in Table 3 is as follows. The percentage-improvement contributed by the NALQI controller in the time-domain performance parameters is calculated by considering the corresponding parameters of the PI controller as the reference.
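The time-domain metrics above can be extracted from a sampled response in a few lines. The sketch below applies the band definitions of Section 6 (±10% for rise, ±5% for settling) to a synthetic first-order response, which stands in for the logged experimental data:

```python
import numpy as np

T_s, ref = 1e-3, 100.0
t = np.arange(0.0, 3.0, T_s)
w = ref * (1.0 - np.exp(-5.0 * t))       # synthetic speed response [rad/s]

# t_r: first time the response reaches within +/-10% of the reference
t_r = t[np.argmax(np.abs(w - ref) <= 0.10 * ref)]

# t_s: time after which the response stays within +/-5% of the reference
outside = np.abs(w - ref) > 0.05 * ref
t_s = t[np.max(np.nonzero(outside))] + T_s if outside.any() else 0.0

# M_p: peak overshoot above the reference (zero for this response)
M_p = max(0.0, float(w.max() - ref))

# E_ss: RMS of the steady-state error after settling
e_ss = w[t >= t_s] - ref
E_ss = float(np.sqrt(np.mean(e_ss ** 2)))

print(f"t_r={t_r:.3f} s  t_s={t_s:.3f} s  M_p={M_p:.2f}  E_ss={E_ss:.3f}")
```

For this synthetic response the rise and settling times land at roughly 0.46 s and 0.60 s, matching the analytical values ln(10)/5 and ln(20)/5 of the underlying exponential.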
In test A, the PI controller converges slowly to the reference after the initial start-up and manifests substantial fluctuations after convergence. The ALQI controller converges relatively quickly to the reference while suppressing the steady-state fluctuations. However, it shows an overshoot of 13.5 rad/s. The NALQI controller demonstrates the fastest transient-response without rendering any oscillations or overshoots in the  response. It shows 70.6% improvement in t s and 43.2% reduction in E ss as compared to PI controller.
In test B, the conventional PI controller lags the reference trajectory by 0.10 s. The ALQI controller shows relatively faster convergence rate. The NALQI controller transits to the reference in minimum time and tracks it with relatively higher accuracy. It shows 68.4% improvement in t s and 59.2% reduction in E ss as compared to PI controller.
In test C, the PI controller converges very slowly to the reference after recovering from the disturbance and contributes large peaks of 43.6 rad/s in the response. The ALQI controller converges in a relatively shorter time-span with a smaller M p . The NALQI controller again delivers the fastest recovery with the smallest peak magnitude. In test D, the PI controller recovers slowly from the large peak-undershoot induced by the abrupt load-torque variation. The ALQI controller converges in relatively less time while slightly depressing the peak-undershoot. The NALQI controller shows the most time-optimal effort. It significantly attenuates the magnitude of the undershoot and manifests minimum-time transient recovery. It shows 46.7% improvement in t rec and 40.7% reduction in M p as compared to the PI controller.
In test E, the PI controller takes a significantly long time and incurs a large peak-undershoot to compensate for the effect of the modelling error. The ALQI controller shows considerable improvement but induces oscillations in the response. The NALQI controller takes the minimum time to reject and recover from the increased electrical damping. It shows 58.2% improvement in t rec and 25.0% reduction in M p as compared to the PI controller.
In all of the test-cases discussed earlier, the conventional fixed-gain PI controller demonstrates poor transient and steady-state response. The ALQI controller shows mediocre improvement in the time-domain performance. The superior speed-regulation, tracking accuracy, and disturbance-rejection capability of the NALQI controller clearly validate its efficacy for motor control applications. The enhanced robustness of the NALQI controller is attributed to the nonlinear-scaling of the adaptation gains, which improves the efficiency of the online gain-adjustment law to quickly respond to unprecedented parametric variations.

Conclusion
This paper presents a methodical approach to synthesize a robust and optimal multivariable model-reference adaptive control scheme to enhance the speed control performance of a PMDC motor, even under the influence of bounded exogenous disturbances and parametric variations. Lyapunov's stability theory is used to derive a stable parameter adaptation law for the online adjustment of the state-feedback gains of the LQI controller. The adaptation gains of the Lyapunov parameter adaptation law are dynamically adjusted with the aid of secant-hyperbolic functions that directly capture the variations in e ω . The online nonlinear scaling of the adaptation gains serves to adaptively manipulate the control-input profile, which enhances the flexibility of the controller to efficiently respond to real-time variations in the state-dynamics. This modification speeds up the transient response, eliminates the steady-state fluctuations, and significantly improves the system's immunity against parametric uncertainties, while maintaining the asymptotic stability of the controller under every operating condition. The efficacy of the proposed adaptive controller is justified via credible hardware-in-the-loop experiments. The experimental results of the proposed controller exhibit rapid transits with improved damping and minimal steady-state error in the system's response, even under the influence of bounded exogenous disturbances. These observations clearly validate the superior robustness and time-optimality of the proposed controller. The NALQI controller does not put a recursive computational burden on the embedded computer, which makes it practical for real-time motor speed control applications. In the future, intelligent adaptation mechanisms can be investigated for flexible and efficient online adjustment of the adaptation gains.