A multimodal integrated deep neural network for the prediction of cardiovascular disease in type-2 diabetic males

Heart disease is a leading cause of mortality and illness worldwide. Early identification and prediction of heart disease may considerably improve patient outcomes. We use deep neural networks (DNNs) and heart rate variability (HRV) data to construct a deep learning strategy for diagnosing cardiovascular abnormalities in diabetic men. The non-invasive HRV test shows how the autonomic nervous system affects heart function and shows promise for diagnosing cardiac dysfunction. DNNs, noted for their ability to interpret complex data patterns, are useful for prediction and diagnosis. Our system, DNHRV (Deep Neural Network with HRV Features), integrates two networks: a DNN and a deep convolutional neural network (DCNN). The DNN analyses clinical risk variables using a powerful deep learning architecture, while the DCNN learns from the imaging and HRV signal data. The proposed technique combines HRV signals, medical images, and other clinical parameters with the computing power of DNNs. This multimodal technique gives a complete picture of each patient's cardiovascular health by utilising both physiological and imaging-based indicators. Our DNHRV model outperformed earlier models in accuracy, precision, F1-score, and other metrics. The prediction model was evaluated on SHAREEDB, demonstrating its accuracy and stability. According to extensive tests on the SHAREEDB dataset, the DNHRV model exceeds state-of-the-art CVD prediction methods by a large margin, with 98.8% accuracy. By highlighting the data points that drive CVD prediction, the proposed technique improved both interpretability and accuracy.


Introduction
Cardiovascular diseases (CVDs), such as heart disease, stroke, and hypertension, are highly prevalent worldwide and constitute a significant contributor to mortality and disability rates. Improving patient outcomes, lowering mortality rates, and making the most efficient use of healthcare resources can all be accomplished by the early diagnosis and precise prediction of cardiac problems. Traditional diagnostic approaches frequently rely on invasive procedures or expensive imaging methods, which limits their accessibility and general deployment. In recent years, deep learning approaches combined with physiological data have shown significant potential for predicting cardiac problems in a non-invasive and cost-effective manner [1].
The term "cardiac abnormalities" refers to a wide variety of disorders, including arrhythmias, heart failure, coronary artery disease, and myocardial infarction. These disorders are frequently distinguished by distinct shifts in the way the autonomic nervous system regulates the heart, which ultimately results in modifications to heart rate dynamics. Heart rate variability (HRV) is a valuable indicator that assesses cardiac autonomic regulation by analysing the fluctuations in the temporal gaps between successive heartbeats. HRV serves as an informative indicator of cardiovascular function, shedding light on the dynamic balance between the sympathetic and parasympathetic branches of the autonomic nervous system and thus providing insights into overall cardiovascular well-being [2].
Diabetes is linked to a higher likelihood of experiencing cardiovascular conditions such as cardiac autonomic neuropathy, myocardial ischaemia, and arrhythmias. It is widely established that type 2 diabetes mellitus (T2DM) is linked to higher cardiovascular morbidity and mortality. Individuals diagnosed with T2DM are at a heightened risk of developing coronary heart disease and ischaemic stroke and of experiencing mortality; this risk is 1.5 to 3.6 times higher than in individuals without T2DM. Additionally, T2DM is linked to an elevated likelihood of death from conditions such as congestive heart failure, insufficient blood supply to the extremities, and microvascular complications. Patients with diabetes are predicted to die four to eight years earlier than the general population. Early detection and risk stratification of cardiac abnormalities in diabetic individuals can facilitate timely interventions and improve clinical outcomes. HRV analysis examines the fluctuation in time durations between successive heartbeats and provides insights into autonomic regulation of the cardiovascular system [3].
According to the 2016 World Health Organization (WHO) Global Report on Diabetes, the prevalence of diabetes among adults has risen significantly: in 1980 it stood at 4.7%, but by 2014 it had increased to 8.5%. This increase was particularly prominent in middle- and low-income countries. At this growth rate, the WHO's previous prediction of 439 million adults with diabetes by 2030 may be exceeded. Men face a higher risk of experiencing a new myocardial infarction (heart attack) than women, with an adjusted hazard ratio (HR) of 2.56 (95% CI 2.53-2.60). Among diabetes patients, however, the disparity between genders is considerably smaller than in the overall population, with an HR of 1.22 (95% CI 1.18-1.25). Consequently, gender alone makes a markedly smaller difference to the likelihood of an acute myocardial infarction in diabetic patients than it does in individuals without diabetes. This research explores the relationship between HRV parameters and cardiac abnormalities in type-2 diabetic males [4].
Traditionally, HRV analysis has involved extracting various temporal, frequency-domain, and nonlinear measures from electrocardiogram (ECG) signals. These features capture different aspects of the underlying physiological processes, such as sympathetic and parasympathetic modulation, sympathovagal balance, and overall cardiac autonomic regulation. HRV analysis has been extensively studied and has demonstrated associations with various cardiac abnormalities and overall cardiovascular risk. Figure 1 represents the HRV signal.
The emergence of deep neural networks (DNNs) and machine learning techniques has revolutionized medical research and healthcare by providing powerful tools for complex pattern recognition and prediction [5]. By leveraging the rich information contained in HRV signals, coupled with the capabilities of DNNs, it is possible to develop accurate predictive models for cardiac abnormalities. Such models have the potential to facilitate early identification, risk categorization, and prompt intervention, ultimately resulting in better outcomes for patients and decreased healthcare expenses [2].
The significance of predicting cardiac abnormalities using HRV and DNNs lies in several key aspects: Non-Invasive and Cost-Effective: HRV analysis represents a non-intrusive and readily available approach, as it requires only a standard ECG recording. By incorporating HRV features into predictive models, clinicians can leverage existing diagnostic data without the need for additional invasive procedures or expensive imaging tests. This approach makes cardiac abnormality prediction more cost-effective and widely applicable, particularly in resource-constrained settings [6].
Early Detection and Prevention: Predictive models based on HRV and DNNs can identify subtle patterns and deviations from normal cardiac autonomic function, enabling early detection of cardiac abnormalities even before symptoms manifest. Early identification facilitates timely interventions, such as lifestyle modifications, medication management, or referrals for further investigations, potentially preventing the progression of cardiovascular diseases and reducing associated complications [7].
Personalized Medicine: The application of HRV and DNNs in cardiac abnormality prediction allows for personalized medicine approaches. By considering individual variations in HRV patterns, predictive models can tailor risk stratification and treatment strategies to specific patient profiles. This personalized approach enhances the precision and effectiveness of interventions, optimizing patient care and outcomes [8].
Healthcare Resource Optimization: Accurate prediction of cardiac abnormalities can optimize healthcare resources by directing diagnostic and therapeutic interventions to individuals at higher risk. By focusing resources on those who need them most, healthcare providers can improve efficiency, reduce unnecessary procedures or tests, and allocate resources more effectively, benefiting both patients and healthcare systems [9]. The detailed representation of ECG signals is provided in Figure 2.
The proposed approach makes use of the capabilities of multimodal integrated deep neural networks (DNNs) to improve CVD prediction in males with type-2 diabetes. This novel method combines information from several sources, such as heart rate variability (HRV) signals, medical imaging data, and clinical factors, to provide a more complete picture of a person's cardiovascular health. The smooth fusion of the multimodal input enables the DNN model to capture complex patterns, nuanced connections, and non-linear correlations. The proposed model correctly classifies 27 cardiac abnormalities and predicts 8 classes of cardiac abnormality: tachycardia, hypoglycaemia, low QRS complex, normal sinus rhythm, pacing rhythm, sinus arrhythmia, sinus bradycardia, and sinus tachycardia.

Literature review
Ramdas Kapila et al. [10] provided an ensemble Quine-McCluskey Binary Classifier (QMBC) approach to distinguish patients with and without a history of heart disease. When applied to binary-class datasets, the QMBC model's seven-model ensemble, composed of logistic regression, decision tree, random forest, K-nearest neighbour, naive Bayes, support vector machine, and multilayer perceptron, demonstrated good performance. The prediction process was sped up with the help of feature selection and feature extraction methods. To build a smaller dataset, the authors used Chi-Square and ANOVA methods to choose the top 10 features. Then, principal component analysis was used to reduce the subset to nine principal components. To achieve the minimum Boolean expression for the target feature, an ensemble of all seven models and the Quine-McCluskey method were applied. The outputs of the seven models (x0, x1, x2, . . ., x6) were treated as independent features, whereas the target characteristic was the dependent feature. The seven machine learning models' predictions were combined with the target feature to form a new dataset. Using the Quine-McCluskey minimum Boolean equation and an 80:20 split for training and testing, the ensemble model was fitted to the dataset [10].
Arudra Sandeep Kumar et al. [11] focused on developing an efficient feature vector for classifying cardiovascular diseases using a two-level recurrent neural network (RNN). The authors proposed a method to extract associated features from raw data related to cardiovascular diseases. These associated features were then used to construct an efficient feature vector representation. The RNN architecture was employed to capture temporal dependencies and patterns in the data, allowing for a more accurate classification of cardiovascular diseases. The researchers sought to enhance the classification model's performance by implementing a two-tiered RNN structure. The first level of the RNN captures local temporal dependencies within short time intervals, while the second level captures longer-term dependencies over larger time intervals. This hierarchical approach helped in extracting relevant information and improving the classification accuracy. The study emphasized the significance of efficient feature representation and the use of recurrent neural networks for cardiovascular disease classification [11].
Hakseung Kim et al. [12] analysed the relationship between haemodynamic instability, cardiovascular events, and outcome prediction after traumatic brain injury (TBI). The researchers utilized a deep belief network approach to forecast the prognosis of patients with traumatic brain injury while tackling data distortions. They investigated how haemodynamic instability and cardiovascular events, which can occur following TBI, can be indicative of the patient's prognosis. A deep belief network, a specialized form of deep learning architecture, was proposed for examining physiological data and making predictions about potential outcomes. The study highlighted the importance of addressing artefacts in the physiological data collected from TBI patients. By effectively removing these artefacts, the researchers aimed to improve the accuracy of outcome prediction using the deep belief network analysis [12].
Swapna et al. [13] described how to use deep learning architectures to distinguish between HRV signals associated with diabetes and those of healthy individuals. To capture complex temporal patterns in the HRV data, the researchers employed a combination of long short-term memory (LSTM), convolutional neural network (CNN), and hybrid techniques. To classify the data, the extracted features were fed into a support vector machine (SVM). The authors found that utilizing an SVM resulted in a 0.03% improvement in CNN performance and a 0.06% improvement for the CNN-LSTM architecture [13].
Paolo Melillo et al. [14] focused on the utilization of HRV analysis for the automatic prediction of cardiovascular and cerebrovascular events. The authors proposed an automated approach that leverages HRV analysis to forecast the likelihood of cardiovascular and cerebrovascular events. This involved collecting and analyzing HRV data from individuals, identifying relevant HRV parameters, and utilizing machine learning algorithms to build prediction models. By applying machine learning techniques to HRV data, the researchers aimed to develop a reliable and automated system capable of identifying individuals at risk of cardiovascular and cerebrovascular events. The predictive models created through this approach potentially enabled early detection and intervention, improving patient outcomes [14].
Several major research gaps were visible in the current literature on cardiovascular disease (CVD) prediction in males with type-2 diabetes. First, the potential advantages of combining different types of data for CVD prediction, such as heart rate variability (HRV) signals, medical imaging, and clinical factors, have only been partially explored. Most research used just one type of data, missing out on the richness of the complementary information that other data types might provide.
Second, deep learning techniques have not been fully exploited in this area. Although deep neural networks (DNNs) have shown potential in healthcare analytics, further research is needed into how DNNs may be integrated with multimodal data to predict CVD in males with type 2 diabetes. There was room for investigation of DNNs and their potential benefits, because many current studies depend on conventional machine learning algorithms.
Third, multimodal data presents difficulties for feature engineering, an essential part of healthcare data analysis. It is possible that the complex linkages within such data will elude current feature engineering methodologies. Therefore, there is a need for novel feature selection and engineering approaches targeted at healthcare data integration, yet this area of study has received insufficient attention.
Finally, type-2 diabetic men as a distinct subpopulation are typically overlooked in the research. While many studies extrapolate their results to a larger population, this particular subgroup's characteristics and risk factors merit more in-depth investigation. By filling these knowledge gaps, we aim to create more accurate CVD prediction models and support better care for men with type 2 diabetes.

Data and processing
This study used the publicly accessible PhysioNet database [15] to select male individuals with diabetes who had both electrocardiogram (ECG) recordings and clinical information available. The dataset consisted of HRV measurements from a diverse population, including both healthy individuals and patients diagnosed with cardiac abnormalities. The dataset covered a wide range of demographic and clinical characteristics as well as time-domain, frequency-domain, and non-linear measures. The database provides insight into a subset of the Southeastern American population. There are 10,344 12-lead ECGs in this training set, 5,551 from males and 4,793 from females, all of them 10 s long and sampled at 500 Hz.
On the ECG signal, a total of five nodes are utilized in the derivation of various properties. These nodes are labelled P, Q, R, S, and T. The individual components of the signal are the P wave, the QRS complex, the T wave, and the U wave. One of the derived characteristics is the RR interval, which refers to the amount of time that passes between two R nodes of an ECG. This brief span of time reflects a single heartbeat, and a person's heart rate can be ascertained by measuring the temporal gap between two consecutive R waves.
The ECG signal can be used to determine several features, one of which is the heart rate. This was accomplished by examining the RR intervals and their placement. In the representation of the heart rate array, each R-wave peak denotes a node, and the instantaneous heart rate is derived from the interval of time between consecutive R-wave peaks. Since spending the entire day with 12 electrodes attached to the body can be difficult, technology is also available that can track the heart rate without the use of an ECG signal.
The electrocardiogram (ECG) signals provided by the PhysioNet dataset were gathered in laboratories and, as a result, may contain a significant amount of noise. This noise must be eliminated before the data can be used by deep learning models. In order to acquire high-quality ECG signals for the models to analyse, it was critical to first clean the data and remove any noise that may have been present. Figure 3 illustrates the comprehensive structure of the integrated model.

Removal of noise and other distractions
Both high- and low-frequency waves can introduce unwanted noise or interference into an electrocardiogram recording [16]. Possible sources of noise include electrode interference, interference from muscle motion, channel interference, fluctuations in the baseline signal, and interference from power lines. Therefore, the following filters were used to clean the ECG signals:
• IIR Notch Filters
• FIR Filters

Infinite impulse response
IIR (infinite impulse response) notch filters are digital filters used in signal processing to attenuate or eliminate a specific frequency or narrow band of frequencies, known as the notch frequency. These filters are distinguished by possessing an infinite impulse response, indicating that the output of the filter is influenced not just by the present and previous inputs, but also by the previous outputs [17].
Notch filters are commonly used to remove unwanted noise or interference from a signal while preserving the rest of the signal's frequency spectrum. They are particularly useful in applications where a specific frequency component needs to be removed, such as audio processing, telecommunications, biomedical signal analysis, and power-line noise cancellation. The basic principle behind an IIR notch filter involves the use of poles and zeros in the filter transfer function. The placement of zeros at the notch frequency aims to cancel out the undesired component, while the poles determine the shape and characteristics of the filter response.
One popular implementation of an IIR notch filter is the second-order notch filter, which has a transfer function of the form:

H(z) = (1 - 2cos(ω0)z^(-1) + z^(-2)) / (1 - 2r·cos(ω0)z^(-1) + r^2·z^(-2))

where ω0 is the normalized angular frequency of the notch and r is the filter's damping factor. The location and sharpness of the notch in the frequency spectrum are controlled by adjusting these parameters [18].
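To make the notch behaviour concrete, the sketch below implements a second-order notch directly from its difference equation in plain Python and removes a synthetic 50 Hz power-line tone from a 5 Hz test signal. The values are illustrative only: the damping factor r = 0.95 and the test frequencies are assumptions for the demonstration, not parameters taken from this study.

```python
import math

def notch_filter(x, f_notch, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at the notch
    frequency cancel it; poles at radius r control the notch width."""
    w0 = 2 * math.pi * f_notch / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]        # numerator coefficients (zeros)
    a = [1.0, -2.0 * r * math.cos(w0), r * r]  # denominator coefficients (poles)
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

def tone_amp(sig, f, fs):
    """Amplitude of one frequency component (single-bin DFT estimate)."""
    s = sum(v * math.sin(2 * math.pi * f * k / fs) for k, v in enumerate(sig))
    c = sum(v * math.cos(2 * math.pi * f * k / fs) for k, v in enumerate(sig))
    return 2 * math.hypot(s, c) / len(sig)

fs = 500  # Hz, matching the dataset's sampling rate
t = [n / fs for n in range(2 * fs)]
clean = [math.sin(2 * math.pi * 5 * s) for s in t]  # 5 Hz "ECG-like" test tone
noisy = [cv + 0.5 * math.sin(2 * math.pi * 50 * s) for cv, s in zip(clean, t)]
filtered = notch_filter(noisy, f_notch=50, fs=fs)
```

After the filter's transient dies out, the 50 Hz interference is cancelled almost exactly (the zeros sit on the unit circle at ω0) while the 5 Hz component passes through nearly unchanged.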

Finite impulse response
Finite impulse response (FIR) filters are employed in signal processing to alter or extract specific frequency elements from a signal. In contrast to infinite impulse response (IIR) filters, FIR filters possess a finite impulse response, implying that the filter's output relies solely on the present and previous inputs [19].
FIR filters are characterized by their response to an impulse input, known as the impulse response. The filter coefficients determine the filter's frequency response and define the desired filtering characteristics [20]. The output of an FIR filter is obtained by performing a convolution operation between the input signal and the filter taps. This operation involves multiplying each input sample by the corresponding filter coefficient and summing the products. To implement this process, a delay line and a set of multipliers are utilized. Figure 4 represents the ECG signals after denoising.
The mathematical representation of an FIR filter's transfer function is:

H(z) = p0 + p1·z^(-1) + p2·z^(-2) + . . . + pN·z^(-N)

where p0, p1, p2, . . ., pN are the filter coefficients and N is the filter order. The filter order determines the length of the impulse response and affects the filter's frequency response characteristics.
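The convolution sum above can be sketched in a few lines of Python. The 5-tap moving average used here is a toy low-pass choice for illustration, not the actual coefficients used in this study:

```python
def fir_filter(x, coeffs):
    """y[n] = p0*x[n] + p1*x[n-1] + ... + pN*x[n-N]
    (samples before the start of the signal are treated as zero)."""
    return [sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(x))]

taps = [0.2] * 5                  # 5-tap moving average (filter order N = 4)
y = fir_filter([10.0] * 8, taps)  # step input
```

Once the delay line "fills up" (n >= N), the moving average settles at the input level, illustrating the unity DC gain of these taps.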
Here, the filters were configured so that the retained ECG readings lay in the 1-100 Hz band.

R-peaks detection
The heart muscles generate electrical signals that are recorded in an electrocardiogram (ECG) reading. The ECG recording comprises three primary elements: the P wave, the QRS complex, and the T wave. Within the QRS complex, it is possible to identify three discernible waves: the Q-wave, R-wave, and S-wave [21]. Several approaches are available for R-peak detection, including the Hamilton method, the Christov method, the Engelse and Zeelenberg method, the Pan and Tompkins method, the Stationary Wavelet Transform, and the Two Moving Average method. The R-peak, as defined in reference [22], corresponds to the time interval starting from the onset of the QRS complex and extending to the highest point of the R-wave. Among these methods, the approach developed by Pan and Tompkins yielded the most accurate results for detecting R-peaks, as depicted in Figures 5 and 6.
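The full Pan-Tompkins pipeline (band-pass filtering, differentiation, squaring, and moving-window integration) is beyond a short example, but its final peak-picking stage can be sketched as a thresholded local-maximum search with a refractory distance. The threshold and distance values below are illustrative assumptions:

```python
def detect_r_peaks(sig, threshold, min_distance):
    """Local maxima above `threshold`, at least `min_distance` samples apart
    (the distance constraint mimics the physiological refractory period
    between heartbeats, suppressing duplicate detections of one QRS)."""
    peaks = []
    for n in range(1, len(sig) - 1):
        if sig[n] > threshold and sig[n] >= sig[n - 1] and sig[n] > sig[n + 1]:
            if not peaks or n - peaks[-1] >= min_distance:
                peaks.append(n)
    return peaks

# toy "ECG": three unit spikes on a flat baseline, sampled at 500 Hz
sig = [0.0] * 1000
for i in (100, 450, 800):
    sig[i] = 1.0
```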

Calculation of RR intervals
HRV, a measure of the variability between successive R-peaks, is calculated from the RR intervals. The first step in calculating the RR interval was to acquire an ECG signal. We utilized ECG data obtained from the publicly accessible PhysioNet database to perform the calculations. Subsequently, signal processing methods were applied to identify and locate the R-peaks in the ECG record. The R-peaks in the ECG waveform represent the initiation of ventricular depolarization. After identifying the R-peaks, we determined the time intervals between consecutive peaks by measuring the duration from one R-peak to the next. The RR interval was measured in milliseconds (ms) or seconds (s). The calculated RR interval values were then saved for further processing [23], which involved creating a time series of RR intervals. The representation of the RR intervals is given in Figures 7 and 8.
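With the R-peak sample indices in hand, conversion to RR intervals in milliseconds is a first difference scaled by the sampling rate. The peak indices below are illustrative, using the dataset's 500 Hz rate:

```python
def rr_intervals_ms(peak_indices, fs):
    """Time between consecutive R-peaks, converted from samples to milliseconds."""
    return [(b - a) * 1000.0 / fs
            for a, b in zip(peak_indices, peak_indices[1:])]

# four hypothetical R-peak positions at fs = 500 Hz
rr = rr_intervals_ms([100, 450, 800, 1175], fs=500)
```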

Outliers removal
Outlier removal in the RR interval data was performed to eliminate abnormal or erroneous values that might distort the analysis of HRV or other related parameters. Once the RR intervals were found, the interquartile range (IQR) method was used to remove the outliers. The IQR (the difference between the 75th and 25th percentiles) was calculated, and RR intervals outside a certain multiple of the IQR were considered outliers [24].
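A sketch of the IQR rule, using the common fence of 1.5 × IQR beyond the quartiles. The multiplier k = 1.5 is an assumption, since the text only says "a certain range of the IQR":

```python
def remove_outliers_iqr(rr, k=1.5):
    """Drop RR intervals outside [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(rr)
    def percentile(p):
        # linear interpolation between the two nearest order statistics
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    q1, q3 = percentile(0.25), percentile(0.75)
    iqr = q3 - q1
    return [v for v in rr if q1 - k * iqr <= v <= q3 + k * iqr]

# a 2000 ms interval (e.g. a missed beat) is dropped; normal beats survive
cleaned = remove_outliers_iqr([800, 810, 790, 805, 795, 2000])
```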

Extract HRV features
The time-domain features were obtained by calculating the Mean RR Interval, which represents the average duration of RR intervals and provides information about the average heart rate. The standard deviation of RR intervals (SDNN) was used to assess the overall variability of RR intervals, indicating the overall heart rate variability [25].
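These two time-domain measures can be computed directly from the RR series. A minimal sketch (SDNN is taken here as the population standard deviation, an implementation assumption):

```python
import math

def time_domain_features(rr):
    """Mean RR and SDNN (standard deviation of RR intervals), both in ms."""
    mean_rr = sum(rr) / len(rr)
    sdnn = math.sqrt(sum((v - mean_rr) ** 2 for v in rr) / len(rr))
    return {"mean_rr": mean_rr, "sdnn": sdnn}

feats = time_domain_features([700.0, 700.0, 750.0, 750.0])
```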
Frequency-domain characteristics were obtained by analyzing the power spectral density of the HRV signals. This involved converting the signals into their frequency components using techniques such as the Fast Fourier Transform (FFT). The primary frequency ranges considered were the very low frequency (VLF) range, which includes frequencies below 0.04 Hz and is connected to hormonal and thermoregulatory influences; the low-frequency (LF) range, covering 0.04 to 0.15 Hz, which correlates with both sympathetic and parasympathetic nervous system activity; and the high-frequency (HF) range, spanning 0.15 to 0.4 Hz, which mainly represents parasympathetic (vagal) activity. The LF/HF ratio, obtained by dividing the power of LF by the power of HF, was employed as an index of the equilibrium between sympathetic and parasympathetic activity.
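Once a power spectral density has been estimated (e.g. via the FFT), the band powers and the LF/HF ratio reduce to sums over the frequency bins. A sketch using the band edges quoted above, with illustrative bin values:

```python
def band_powers(freqs, psd):
    """Sum PSD bins falling in the VLF (<0.04 Hz), LF (0.04-0.15 Hz) and
    HF (0.15-0.4 Hz) bands, then form the LF/HF sympathovagal index."""
    bands = {"vlf": (0.0, 0.04), "lf": (0.04, 0.15), "hf": (0.15, 0.4)}
    power = {name: sum(p for f, p in zip(freqs, psd) if lo <= f < hi)
             for name, (lo, hi) in bands.items()}
    power["lf_hf"] = power["lf"] / power["hf"]
    return power

# hypothetical PSD: five bins with their centre frequencies in Hz
result = band_powers([0.01, 0.05, 0.10, 0.20, 0.30],
                     [1.0, 2.0, 3.0, 4.0, 1.0])
```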
Non-linear features were extracted using Sample Entropy (SampEn), which measures the complexity and irregularity of the HRV signal, reflecting the presence of non-linear dynamics. Statistical features were extracted as the minimum and maximum RR intervals, in which the shortest and longest RR intervals were recorded. Finally, the HRV features were computed by applying the appropriate mathematical methods. A total of 26 HRV features were calculated in this study, and all of them proved useful.
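Sample Entropy compares how often templates of length m match (within a tolerance r, under the Chebyshev distance) against templates of length m+1. A compact reference implementation is sketched below; the parameters m = 2 and r = 0.2 are common defaults and an assumption here, as the study does not state its settings:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of length-m templates within
    Chebyshev distance r; A counts the same for length m+1."""
    n_templates = len(x) - m  # same template count for both lengths
    def matches(length):
        count = 0
        for i in range(n_templates):
            for j in range(i + 1, n_templates):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    B = matches(m)
    A = matches(m + 1)
    return float("inf") if A == 0 or B == 0 else -math.log(A / B)
```

A perfectly regular rhythm yields zero entropy (every length-m match extends to a length-(m+1) match), whereas irregular series yield larger values.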

Normalization and data splitting
Normalization and data splitting were crucial steps in preparing the dataset for predicting cardiac abnormalities using heart rate variability (HRV) and deep neural networks (DNNs). Normalization ensured that the HRV features were on a consistent scale, avoiding biases caused by differences in measurement units or scales. The normalization technique applied in this research work was Min-Max Scaling [26]. Min-max scaling rescales the values of HRV features to a specified range, such as [0, 1].
The min-max scaling equation can be expressed as follows:

S = (O - min) / (max - min)

In this formula, the variable "S" denotes the rescaled value, while "O" refers to the original value undergoing scaling. The terms "min" and "max" correspond to the smallest and largest values found in the dataset. Employing this method ensures that the proportional relationships among the data points are preserved during the scaling process.
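The scaling step, applied feature-wise, can be sketched as:

```python
def min_max_scale(values):
    """Rescale to [0, 1]: S = (O - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# three hypothetical RR-interval values in ms
scaled = min_max_scale([700.0, 750.0, 800.0])
```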

Data splitting strategies
The partitioning of the dataset into training, validation, and testing sets is crucial for evaluating the performance of a predictive model and addressing concerns related to overfitting. In this research paper, a technique referred to as the Train-Validation-Test Split method was utilized. This approach entailed the division of the dataset into three distinct subsets: the training set, the validation set, and the testing set.
Traditionally, datasets are commonly split into approximately 70-80% for training, 10-15% for validation, and the remaining 10-15% for testing. Throughout the training process, the model acquired knowledge about patterns and relationships within the data through exposure to the training set. The validation dataset was then employed to fine-tune the model's hyperparameters and determine the most effective variant of the model. Finally, the testing dataset provided an unbiased assessment of the model's performance on unseen data. By applying cross-validation techniques, the validation process ensured that the model's performance was rigorously assessed and that hyperparameters were optimized while guarding against overfitting. This approach enhanced the model's ability to generalize well to new data and make accurate predictions for CVD in type-2 diabetic males.
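The three-way split can be sketched as a shuffle followed by two slice points. An 80/10/10 split is shown; the fixed seed is illustrative, used only to make the shuffle reproducible:

```python
import random

def train_val_test_split(data, train=0.8, val=0.1, seed=42):
    """Shuffle indices, then slice into train/validation/test subsets."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n_train = int(train * len(data))
    n_val = int(val * len(data))
    tr = [data[i] for i in idx[:n_train]]
    va = [data[i] for i in idx[n_train:n_train + n_val]]
    te = [data[i] for i in idx[n_train + n_val:]]
    return tr, va, te

data = list(range(100))           # 100 hypothetical patient records
tr, va, te = train_val_test_split(data)
```

Every record lands in exactly one subset, so the test set remains unseen during both training and hyperparameter tuning.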

Integrated deep neural network with HRV features
This study utilized a combined deep learning method that incorporates multiple types of data to predict cardiovascular disease (CVD). The effectiveness of this approach was assessed alongside conventional clinical risk factors for CVD in the validation datasets [3, 27-30]. DNNs are a type of artificial neural network consisting of interconnected layers of neurons. They enable the creation of intricate hierarchical representations and the ability to learn from data. ResNet-50, which achieved high performance with fewer parameters than competing CNN architectures, was employed for this study's analysis of the images [31]. ResNet-50 contains 64 convolution layers spread across five convolution stages and four dense blocks. Additionally, we concatenated it with fully connected layers. We used unsupervised learning to pretrain the convolutional neural network, which enhanced its classification performance. The multimodal network included the pretrained ResNet-50 architecture. The DNN was utilized to understand the complex relationships among the clinical risk factors. The DNN consisted of five interconnected layers, each followed by a rectified linear unit, and its weights were initialized randomly. The two individual networks were trained independently and then combined into a single structure with four fully connected layers. In order to mitigate overfitting, a dropout rate of 0.2 was applied. Subsequently, a final layer incorporating a sigmoid function was added to classify diseases. The model's parameters were updated using an optimization algorithm, which determines how the gradients are used to update the parameters. The Adam optimizer was used for optimization, and a grid search technique was used for hyperparameter tuning. Adam is an optimization algorithm that combines the benefits of AdaGrad and RMSprop. It dynamically adjusts the learning rate for each parameter based on its past gradients, which results in faster convergence and improved generalization. To prevent overfitting, early stopping was employed to monitor the performance on a separate validation set during training; the training procedure was terminated early if the model's performance on the validation set began to decline. During the 50-epoch training process, the model was trained using the Adam optimizer with an initial learning rate of 0.001. Every 10 epochs, the learning rate was reduced by applying a decay factor of 0.9. A batch size of 10 was employed, and the objective was to minimize the binary cross-entropy loss. The number of epochs determines how many times the entire training dataset is passed forward and backward through the neural network during training; fixing the number of epochs is often done to control training time and prevent overfitting. Refer to Figure 7 for the architecture of the integrated Multimodal Deep Learning Neural Network with HRV features (DNHRV). Because of its ability to learn key information autonomously, represent complex non-linear relationships, scale to huge datasets, and efficiently integrate multimodal data, a neural network has the potential to improve CVD prediction models for type-2 diabetic males. These capabilities are well suited to the particular demands and potential rewards of healthcare data analytics (Figure 9).
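The Adam update described above, which rescales each parameter's step using running estimates of the first and second moments of its gradient, can be sketched in a few lines. It is shown here minimizing f(x) = x², purely as a toy objective, with the study's reported learning rate of 0.001:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]          # 1st moment
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]      # 2nd moment
    m_hat = [mi / (1 - b1 ** t) for mi in m]                        # bias correction
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

theta, m, v = [1.0], [0.0], [0.0]
for t in range(1, 2001):
    grad = [2.0 * theta[0]]      # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because the step is normalized by the second-moment estimate, the parameter marches toward the minimum at roughly the learning rate per step regardless of the gradient's raw scale.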

Model evaluation
The integrated DNN model's effectiveness was assessed using various evaluation metrics, including accuracy, precision, recall, F1-score, and AUC-ROC. Furthermore, a confusion matrix was employed to evaluate the model's performance. To enhance our comprehension of the model's abilities and pinpoint areas that could be improved, we employed appropriate plots and charts to visualize the evaluation outcomes. We then compared the model's performance against baseline models and existing approaches to assess its effectiveness.

Evaluation metrics and performance evaluation
Evaluation metrics are essential in assessing how effectively a DNN model predicts cardiac abnormalities from HRV. A variety of performance metrics were utilized: accuracy, precision, recall (sensitivity), F1-score, negative predictive value (NPV), area under the receiver operating characteristic curve (AUC-ROC), and a confusion matrix. These measures provided valuable insights into the model's effectiveness, enabling well-informed assessments of its performance. This battery of indicators measures the predictive model's efficacy in detecting cases of cardiovascular disease (CVD) while keeping false positives and false negatives low; such evaluation is essential if the model is to be useful in healthcare decision-making. Figures 10 and 11 present the comparison of performance metrics.
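The threshold-based metrics listed above all derive from the confusion matrix. A minimal sketch of their computation is shown below on toy labels (the labels are illustrative, not SHAREEDB data; AUC-ROC is omitted because it requires predicted probabilities rather than hard labels).

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, NPV and F1 from the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "confusion": ((tn, fp), (fn, tp)),              # rows: actual 0, 1
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,   # sensitivity
        "npv": tn / (tn + fn) if tn + fn else 0.0,
        "f1": 2 * tp / (2 * tp + fp + fn) if tp else 0.0,
    }

# Toy labels (1 = CVD, 0 = no CVD); real evaluation used SHAREEDB recordings.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
m = binary_metrics(y_true, y_pred)
print(m["accuracy"], m["f1"])  # 0.75 0.75 for these toy labels
```

In practice a library routine (e.g. scikit-learn's metric functions) would be used, but the formulas above are what those routines compute.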

Results and discussion
We compared the proposed integrated model with VGG16, Inception, LeNet-5, AlexNet and LSTM. Compared with these models, the proposed model performed well in both evaluation and prediction.
Refer to Table 1 for the performance metric evaluation on the database. This section presents the findings and analysis of a study focused on using HRV and a DNN to predict cardiac abnormalities. The DNN model's performance was evaluated using a range of assessment metrics and then contrasted with both baseline models and established techniques. The findings provide significant insights into the effectiveness of the suggested approach (Figure 12).
Model architecture, data dimensionality (combining HRV, medical imaging, and clinical variables), hyperparameters such as batch size, and hardware all contribute to the computational complexity of our proposed DNHRV model for predicting CVD in males with type-2 diabetes, and the length of each training session varies accordingly. The model was trained on a high-performance GPU, which greatly accelerated the process compared with training on a CPU. With a dataset of reasonable size, the model converges in a manageable amount of time. Overall, the results demonstrate the promise of using HRV and DNNs for predicting cardiac abnormalities. The findings support the potential clinical utility of this approach and pave the way for further advances in cardiac risk assessment and early intervention strategies. The graph (Figure 8) shows the progression of metric evaluations during the model's training phase.

Performance metrics
To evaluate the effectiveness of the suggested approach, the PhysioNet SHAREEDB was used for validation. The SHAREE database comprises nominal 24-h electrocardiographic (ECG) Holter recordings from 139 hypertensive individuals. Figure 13 represents the evaluation of the metrics during training. These results pave the way for the predictive model to be used in everyday clinical settings. The tool can serve as a supplementary diagnostic resource in a real-world healthcare setting: for a patient with type-2 diabetes seeking medical attention, the programme can analyse HRV data, medical imaging, and clinical history to produce a cardiovascular disease risk estimate. This information aids clinical decision-making, allowing earlier interventions and more individualized treatment strategies.

Conclusion
In this research, we have presented a series of deep-learning methods that exhibit high accuracy in predicting cardiac disorders or events, offering promising applications in the field of medical forecasting. Implemented in real time, these models could enable personalized, continuous monitoring of patients' heart health, and could also be used to monitor individuals working in stressful environments or for extended durations. Our novel model, DNHRV, has shown exceptional performance in comparison to previous models, exhibiting higher accuracy, precision, F1-score, and other performance metrics. To validate our prediction model, we conducted experiments using the PhysioNet SHAREEDB, which confirmed the model's excellent accuracy and stability: the DNHRV model achieved an accuracy of 98.8%.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Figure 3.
Figure 3. The overall architecture of the integrated approach.

Figure 9.
Figure 9. Architecture of the integrated multimodal deep learning neural network.

Figure 10.
Figure 10. Comparison of accuracy with different models.

Figure 11.
Figure 11. Comparison of performance metrics with various models.

Table 1.
Performance evaluation metrics of the proposed model compared with other DNNs.