Brain Tumor Classification Based on Hybrid Optimized Multi-features Analysis Using Magnetic Resonance Imaging Dataset

ABSTRACT Brain tumors are deadly, and delayed, inefficient diagnosis makes them deadlier. Large variation among tumor types adds further complexity. Machine vision brain tumor diagnosis addresses these problems. This research's objective was to develop a brain tumor classification model based on machine vision techniques using brain Magnetic Resonance Imaging (MRI). For this purpose, a novel hybrid-brain-tumor-classification (HBTC) framework was designed and evaluated for the classification of cystic (cyst), glioma, meningioma (menin), and metastatic (meta) brain tumors. The proposed framework lessens the inherent complexities and boosts the performance of the brain tumor diagnosis process. The brain MRI dataset was input to the HBTC framework, pre-processed, and segmented to localize the tumor region. From the segmented dataset, co-occurrence matrix (COM), run-length matrix (RLM), and gradient features were extracted. After the application of hybrid multi-features optimization, the nine most optimized features were selected and input to the framework's classifiers, namely multilayer perceptron (MLP), J48, meta bagging (MB), and random tree (RT), to classify cyst, glioma, menin, and meta tumors. The maximum brain tumor classification performance achieved by the HBTC framework was 98.8%. The components and performance of the proposed framework show that it is a novel and robust classification framework.


Introduction
In the twenty-first century, no doubt, people's living standards, technology paradigms, healthcare facilities, and infrastructure have improved remarkably, with computing and information technology pervading all other fields (Patel, Patel, and Scholar 2016). Despite these developments, humankind still faces several problems such as poverty, hunger, terrorism, water scarcity, disease, climate change, and environmental pollution. Many fatal diseases harm people, including cancer, hepatitis, diabetes, heart disease, tuberculosis, and Alzheimer's (Bloom and Cadarette 2019). Early knowledge of the site and status of a brain tumor can steer effective treatment, including chemotherapy or surgery, depending upon the condition (Chourmouzi et al. 2014; Perkins and Liu 2016; J Strong and Garces 2016).
Manual diagnosis of brain tumors is tedious, delayed, qualitative, and imprecise; in contrast, we need an early and accurate quantitative diagnosis to save lives. Additionally, doctors also require the precise quantification of the tumor region for specific treatment (Perkins and Liu 2016;Wahid, Fayaz, and Salam Shah 2016).
Nowadays, large arrays of MV techniques are employed for data mining and classification in diverse fields, including bioinformatics, agroinformatics, social sciences, and robotics (Batchelor 2012a). Many MV techniques have recently evolved to develop automated brain tumor classification systems that assist radiologists and doctors with early and accurate diagnosis. The two primary objectives of these techniques are to segment and classify brain tumors. This research focused on developing a novel and robust machine vision system for automatic diagnosis and classification of brain tumors, incorporating hybrid optimized multi-features analysis and machine vision classifiers to classify four brain tumor types using brain MRI scans (El-Dahshan et al. 2014; Gonzalez 2018; Wahid, Fayaz, and Salam Shah 2016).
Major contributions in this research study are the acquisition of MRI dataset from the RD-BVH, hybrid multi-features optimization and implementation of HBTC framework.

Literature Review
From the near past to the present, many researchers have devoted themselves to segmenting and classifying brain MRIs. This section provides a quick overview of some previous and state-of-the-art approaches.
Arunkumar, with his research fellows, developed an outstanding brain tumor classification model based on classic machine vision approaches, including Fourier-transform image enhancement, fully automated trainable segmentation, histogram-of-oriented-gradients (HOG) feature extraction, and an ANN-based classification model. Non-ROI brain components are filtered using size, circularity, and gray-scale average. The developed model classified normal and abnormal brain slices with an overall 92.14% classification accuracy using the k-fold cross-validation method (Arunkumar et al. 2020).
Sarah and fellow researchers made a comparative analysis of two brain tumor segmentation algorithms, namely active-contour and Otsu-threshold. The Multimodal-Brain-Tumor-Image-Segmentation (BRATS) benchmark brain MRI dataset was used in this comparative analysis. Both algorithms were implemented using MATLAB, and their similarity coefficients were evaluated with the Dice, BFScore, and Jaccard measures. Results showed that the similarity index of active-contour was higher than that of Otsu-threshold (Husham et al. 2021).
Mallikarjan, with his research fellows, proposed a brain tumor classification system to classify benign and malignant tumors. Region-growing was used for segmentation, center-symmetric local binary pattern (CSLBP) and gray-level-run-length-matrix (GLRLM) features were fused, and the system gained noteworthy classification accuracy (Mudda, Manjunath, and Krishnamurthy 2020a).
Santhosh and his fellows presented a classification model to classify normal and abnormal brain tissues. The system was based on threshold and watershed segmentation. SVM gave an overall classification accuracy of up to 85.32% (Seere and Karibasappa 2020).
Hafeez Ullah and research fellows proposed a brain tumor classification model based on brain MRIs acquired from RD-BVH. Intensity, shape, and texture features were extracted from the brain MRI slices, and the proposed methodology gained an overall 97% classification accuracy (Ullah, Batool, and Gilanie 2018).
Rafael, with other researchers, suggested a system to classify glioblastoma and metastatic tumors. First- and second-order statistical features were extracted, and the intraclass-correlation-coefficient (ICC) was applied for feature reduction. A support vector machine (SVM) gave an 89.6% area-under-the-curve (AUC) rate (Ortiz-Ramón et al. 2020).
Gupta and Sasidhar described a brain tumor classification model to classify low-grade and high-grade brain tumors. Otsu-thresholding was used for segmentation, and 18 Segmentation-Based Fractal Texture Analysis (SFTA) features were input to an SVM, which gave 87% accuracy (Gupta and Sasidhar 2020).
In another research study, Gilani, with his research fellows, acquired brain MRI datasets from RD-BVH and Harvard Medical School (HMS) and suggested a cross-validation train-test brain tumor classification model based on multiple texture parameters. The model achieved classification accuracies ranging from 86% to 92% for different categories.
Zacharaki, with coauthors, proposed a brain tumor classification and grading system using machine learning techniques. Gliomas, meningioma, glioblastoma, and metastases were classified in a binary manner. Statistical features were extracted and optimized using rank-based criteria. Classification accuracies were notable using 3-fold cross-validation (Zacharaki et al. 2009).
Marco, with his fellows, proposed a model to classify benign and malignant brain tumors. Brain images were segmented using adaptive thresholding. Fast Fourier transform (FFT) features were extracted and then optimized by minimal-redundancy-maximal-relevance (MRMR). Finally, SVM was applied to classify brain images into normal and abnormal (Alfonse and Salem 2016).
Mohsin, with his companion researchers, suggested a hybrid machine learning model for brain tumor identification. Segmentation was based on a feedback pulse-coupled neural network, and wavelet features were extracted. Feedforward backpropagation neural networks remarkably classified the brain images (Mohsen, El-Dahshan, and Salem 2012).
With fellow researchers, Selvaraj proposed a least-square-support-vector-machine (LS-SVM) based classification model to classify normal and abnormal brain MRI scans. Different statistical features, including GLCM features, were extracted. LS-SVM gave the highest results compared with K-nearest neighbor, MLP, and radial-basis-function (RBF) classifiers (Selvaraj et al. 2007).
Tiwari, with his coauthors, presented a model to classify meningioma and astrocytoma. Features including GLCM and GLRLM were extracted from ROIs. Firstly, 263 features were extracted, then 108 Laws texture energy measures (LTEM) features were added. Multilayered ANN gave results between 78.10% and 92.43% (Tiwari et al. 2017).
Salama and research fellows introduced a novel Multiple Features Evaluations Approach (MFEA) to improve the Parkinson's diagnosis process based on the classification of voice variations. MFEA used five feature selection agents, each giving its dominant features; the features obtained by all agents are then combined to form the optimal feature set. Next, multiple classifiers are evaluated on the original features and on the filtered optimal feature set. A neural network gave the maximum performance on the original features, whereas Random Forest gave the maximum classification accuracy of 99.49% using 10-fold cross-validation (Mostafa et al. 2019).
Anter and Aboul described a liver tumor classification system to classify benign and malignant tumors. The feature vector comprised GLCM, LBP, SFTA, first-order statistics (FOS), and fused features (FF). SVM, random forest (RF), artificial neural network (ANN), and KNN were applied using a cross-validation method to classify the tumors (Anter and Hassenian 2018). M. A. Mohammed, with colleague researchers, proposed a breast cancer classification model based on the classical segmentation-based fractal texture analysis (SFTA) feature extraction method and an ANN. Multi-fractal dimension feature sets were created for 72 normal and 112 abnormal images; the method applied two-threshold-binary-decomposition (TTBD) and marginal boundaries to compute 12 fractal dimension sets. Next, the ANN gave noteworthy classification accuracy (Mohammed et al. 2018).
Dilliraj, with his coauthors, demonstrated different brain tumor segmentation approaches. Advanced Fuzzy C-Means, self-organizing map, and k-means algorithms were applied to compute the tumor region (Dilliraj, Vadivu, and Anbarasi 2014).
George and Manuel classified four grades of astrocytoma. For preprocessing, a pulse-coupled neural network and a median filter were applied, and fuzzy c-means (FCM) was used for segmentation. First-order and second-order statistical features were reduced to 14 optimized features, which were then input to a deep neural network (DNN) that achieved 91% accuracy (George and Manuel 2019).
A deep learning scheme was designed by Heba and her research team to classify four brain tumor classes, namely normal, glioblastoma, sarcoma, and metastatic bronchogenic carcinoma. Feature extraction combined the discrete wavelet transform (DWT) with principal components analysis (PCA). Classification was performed by a deep neural network with seven hidden layers using 7-fold cross-validation, and the model gained an overall 96.97% classification rate (Mohsen et al. 2018). Bhanumathi and Sangeetha proposed a model to classify glioma, acoustic neuroma, and meningioma. In their study, 20 normal and 30 infected images were taken. Convolutional neural network (CNN) models were used for classification, among which GoogLeNet gave the best accuracy (Bhanumathi and Sangeetha 2019).
Gumaei, with his companion researchers, proposed a classification model using a regularized extreme learning machine (RELM) to discriminate between benign and malignant brain tumors. MRIs of meningioma, glioma, and pituitary tumors were acquired and preprocessed. Feature selection was made using GIST, normalized GIST (NGIST), and PCA-NGIST. Using fivefold cross-validation, RELM gave an overall accuracy of 92.6144% (Gumaei et al. 2019).
Many of the described approaches are based only on two-class classification (normal vs. abnormal), some lack a localized tumor dataset, and many need improved precision. Deep learning systems can boost precision, but they carry a large processing overhead for training. Thus, we have introduced a novel hybrid-brain-tumor-classification (HBTC) framework based on hybrid feature optimization and multi-features analysis for brain tumor classification.

Material and Method
In this research, four brain tumor types, namely cyst, menin, glioma, and meta, were classified using the proposed HBTC framework. This section describes the methodology of the proposed HBTC framework.

Methodology
The methodology of the proposed HBTC framework mainly comprises dataset acquisition, pre-processing, segmentation, feature extraction, feature optimization, classification, and evaluation steps. Algorithm 1, shown in Table 1, presents the procedural steps of the proposed HBTC framework. The whole MRI dataset of the four brain tumor types is input in the first step. In the second step, the process enters a loop: every image is enhanced by applying a hybrid Kernel plus Sobel plus Low-pass (K-S-L) filter preprocessing scheme; next, the tumor region is segmented and ROIs are created; then COM, RLM, and gradient features are extracted from each segmented ROI to form a feature vector. This process continues until no unprocessed MRIs remain. In the third step, a hybrid feature optimization technique is applied to obtain the most relevant properties of the images for multi-features analysis. In the fourth step, machine vision classifiers are applied with 10-fold cross-validation on the hybrid optimized features to classify brain tumors. In the fifth step, framework performance is evaluated.
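The procedural steps above can be sketched as a short Python pipeline. This is only an illustrative skeleton, not the authors' implementation (the framework itself was run with MaZda and Weka); the function bodies are hypothetical stand-ins for the K-S-L preprocessing, TACS segmentation, and 245-feature extraction stages.

```python
import numpy as np

def preprocess(img):
    # Stand-in for the hybrid K-S-L scheme: normalize to [0, 255] gray levels.
    img = img.astype(float)
    rng = img.max() - img.min()
    return (img - img.min()) / rng * 255 if rng else img

def segment_roi(img, threshold=128):
    # Stand-in for TACS: foreground mask above a background threshold.
    return img > threshold

def extract_features(img, mask):
    # Stand-in for the 220 COM + 20 RLM + 5 gradient features.
    roi = img[mask]
    return np.array([roi.mean(), roi.std(), roi.size], dtype=float)

def hbtc_feature_matrix(images):
    rows = []
    for img in images:                 # Step 2: loop over every MRI
        pre = preprocess(img)
        mask = segment_roi(pre)
        rows.append(extract_features(pre, mask))
    return np.vstack(rows)             # one feature vector per image

# Steps 3-5 (hybrid feature optimization, 10-fold classification,
# evaluation) would then operate on this feature matrix.
```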
The HBTC framework was run using MaZda 4.6 (Strzelecki et al. 2013) and the Waikato Environment for Knowledge Analysis (Weka 3.8) (Witten et al. 2016), on an Intel(R) Core(TM) i7-8550U CPU with 16.0 GB of memory and a 64-bit Microsoft Windows 10 operating system. The following sections explain each step in detail.

Dataset Acquisition
The prime initiative in implementing the HBTC framework was the collection of a brain tumor MRI dataset; this section describes the dataset acquisition. MRI works well on soft tissues such as the liver, brain, and lungs (Seere and Karibasappa 2020). For this study, a T2-weighted MRI dataset of the four brain tumor types was acquired from an MRI machine (Optima MR450w, 70 cm) (Phal et al. 2008) installed in the Radiology Department of Bahawal Victoria Hospital (RD-BVH) (Attique et al. 2012; Gilanie et al. 2019; Iqbal 2009; Ullah, Batool, and Gilanie 2018). Sample MRIs of the brain tumors are shown in Figure 1. The dataset comprised 250 patients for each tumor type, giving a total dataset of 1000 (250 × 4) MRIs. An expert radiologist examined and marked all the collected brain MRIs to establish the ground truth.

Pre-Processing
Medical images contain inherent inhomogeneity, poor quality, and noise; thus, the quality of the collected brain MR images needed to be enhanced. This section describes the details of the preprocessing federated into the HBTC framework. Preprocessing comprised re-sizing, gray-level conversion, cropping, normalization, and image enhancement. In the first step, all the collected MRI datasets were converted into a standard 8-bit gray-scale format (.bmp) and normalized using histogram equalization. In the second step, image noise was removed by applying the K-S-L image enhancement (Gonzalez and Rafael 2018; Lakshmi Devasena and Hemalatha 2011). Sobel gradient masks of size 3 × 3 were applied on the x-axis (Grx) and on the y-axis (Gry), and the gradient magnitude (Gr), with its approximation, was also computed. The effects of applying the filters are shown in Figure 2. The gradient magnitude is given in Equation 1.

$Gr = \sqrt{Grx^{2} + Gry^{2}} \approx |Grx| + |Gry| \quad (1)$
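The Sobel gradient step can be illustrated with a small numpy sketch. This is an illustration only, assuming standard 3 × 3 Sobel masks; the paper applies these filters inside its K-S-L preprocessing scheme rather than with this code.

```python
import numpy as np

# Standard 3x3 Sobel masks for the x and y directions.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    # Minimal 'valid' sliding-window correlation, enough for illustration.
    h, w = kernel.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def sobel_gradient(img):
    grx = convolve2d(img, SOBEL_X)          # gradient on the x-axis (Grx)
    gry = convolve2d(img, SOBEL_Y)          # gradient on the y-axis (Gry)
    gr = np.sqrt(grx**2 + gry**2)           # gradient magnitude (Gr)
    gr_approx = np.abs(grx) + np.abs(gry)   # cheaper approximation of Gr
    return grx, gry, gr, gr_approx
```

On a vertical step edge, Grx responds strongly while Gry stays zero, so the magnitude localizes the edge.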

Segmentation
Image segmentation is a phase of image processing during which an image is split into various sub-groups according to its properties and features; it reduces the image's complexity to simplify further processing or analysis. This section discusses the segmentation carried out in the HBTC framework. Image segmentation falls into three categories: manual, automated, and semi-automated. Manual segmentation is tedious and error-prone because of human observational variability; nevertheless, it is used as the gold standard. The semi-automated method solves some problems of manual segmentation by using algorithms but still has limitations. There are diverse forms of semi-automated segmentation that reduce some observational variability, but not all of it (Sachdeva et al. 2013). Automated schemes do not involve users interactively. They fall into two classes: learning-based algorithms and non-learning-based algorithms. Learning approaches rely on training and testing phases, whereas non-learning strategies depend upon image and disease characteristics (Kevin Zhou, Fichtinger, and Rueckert 2019).
In the HBTC framework, a threshold and clustering-based segmentation (TACS) scheme was applied and segmented ROIs were created (Ortiz-Ramón et al. 2020). TACS's procedural steps are given in Table 2, and its overall model is expressed in Figure 3. In the first step, background pixels (B_P) were identified using a specific threshold value, and the image background was treated as one complete cluster. In the next step, a base pixel was selected, and its value was used to weigh all of its neighbors; the whole image was traversed with the same approach. If a pixel's gray intensity value (P_G) was greater than the B_P value, it was treated as a foreground pixel (F_P) and grown into a complete cluster, forming its region of interest (R_O_I), commonly known as the foreground region.
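One minimal reading of the TACS procedure (background thresholding followed by growing each foreground cluster into an ROI) can be sketched as follows. This is our interpretation of the description in Table 2, not the authors' code; the function name and 4-neighborhood choice are assumptions.

```python
import numpy as np
from collections import deque

def tacs_segment(img, threshold):
    """Mark pixels above the background threshold as foreground (F_P),
    then grow each connected foreground cluster into its own ROI label."""
    fg = img > threshold                  # B_P vs F_P split by the threshold
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(fg)):
        if labels[seed]:
            continue                      # already absorbed into a cluster
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:                      # grow the cluster from the base pixel
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and fg[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels
```

Each positive label in the returned array corresponds to one grown foreground ROI.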

Feature Extraction
This step intends to extract specific properties of the segmented ROIs to discriminate the patterns of input images (Radhakrishnan and Kuttiannan 2012). This section addresses the feature extraction phase of the HBTC framework. The relevant properties are collected into a feature vector to be processed in the next stage. For texture analysis, we extracted co-occurrence matrix (COM), run-length matrix (RLM), and gradient features from the segmented ROIs of the brain (Ortiz-Ramón et al. 2020; Tiwari et al. 2017). For this purpose, ROIs of sizes 10 × 10, 15 × 15, and 20 × 20 were created, and 220 COM, 20 RLM, and 5 gradient features were extracted from each ROI (Anter and Hassenian 2018; Gonzalez and Rafael 2018; Seere and Karibasappa 2020). Thus, three datasets were obtained for the experiments. The total feature vector volume (FVV) for each ROI dataset was 490,000 (2000 × 245). Below, all the extracted features are described precisely.

Run Length Matrix (RLM)
Galloway proposed this method, known as run-length analysis. It computes gray- or color-level runs of various lengths, where a run is a set of contiguous, collinear pixels having the same gray or color level. Runs are counted along four directions (0°, 45°, 90°, and 135°). In our study, we extracted 20 RLM features for each image, and run-length matrices were formulated for each specified angle θ. Let p(m, n|θ) denote the number of runs of gray level m and length n in direction θ, let g_k be the number of gray levels, g_r the maximum run length, n_r(θ) the total number of runs, and n_p the number of pixels.
G.L.N computes the similarity between the gray-level intensities of the given image; a lower G.L.N value indicates more similar intensities. Its equation is shown below in Equation 4.

$GLN = \frac{1}{n_r(\theta)} \sum_{m=1}^{g_k} \Big[ \sum_{n=1}^{g_r} p(m, n \mid \theta) \Big]^{2} \quad (4)$

G.L.N.N is a normalized version of G.L.N with significant quality improvement; it also measures the similarity between gray-level intensities, where a lower value indicates more similar intensities. Its equation is shown in Equation 5.

$GLNN = \frac{1}{n_r(\theta)^{2}} \sum_{m=1}^{g_k} \Big[ \sum_{n=1}^{g_r} p(m, n \mid \theta) \Big]^{2} \quad (5)$

R.L.N computes the similarity between the run lengths of the whole image; a lower R.L.N value indicates a higher homogeneity factor. Its equation is shown in Equation 6.

$RLN = \frac{1}{n_r(\theta)} \sum_{n=1}^{g_r} \Big[ \sum_{m=1}^{g_k} p(m, n \mid \theta) \Big]^{2} \quad (6)$

R.L.N.N is a normalized version of R.L.N with significant quality improvement; it also measures the similarity between run lengths, where a lower R.L.N.N value indicates a higher homogeneity factor. Its equation is given below in Equation 7.

$RLNN = \frac{1}{n_r(\theta)^{2}} \sum_{n=1}^{g_r} \Big[ \sum_{m=1}^{g_k} p(m, n \mid \theta) \Big]^{2} \quad (7)$

The coarseness of the underlying texture is computed by R.P, given below in Equation 8.

$RP = \frac{n_r(\theta)}{n_p} \quad (8)$
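The horizontal (0°) run-length matrix and the GLN, RLN, and RP statistics can be sketched in a few lines of numpy. This is a minimal illustration of the definitions above, not the MaZda implementation the paper used.

```python
import numpy as np

def rlm_0deg(img, levels):
    """Run-length matrix p[m, n-1] for the 0-degree (horizontal) direction:
    counts of runs of gray level m with length n."""
    H, W = img.shape
    p = np.zeros((levels, W), dtype=float)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1            # extend the current run
            else:
                p[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        p[run_val, run_len - 1] += 1    # close the last run of the row
    return p

def rlm_stats(p, n_pixels):
    n_r = p.sum()                                   # total number of runs
    gln = (p.sum(axis=1) ** 2).sum() / n_r          # gray-level non-uniformity
    rln = (p.sum(axis=0) ** 2).sum() / n_r          # run-length non-uniformity
    rp = n_r / n_pixels                             # run percentage (coarseness)
    return gln, rln, rp
```

The other three directions (45°, 90°, 135°) follow the same pattern with a different traversal of the pixel grid.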

Co-occurence Matrix (COM)
Co-occurrence matrix (COM) features are also called second-order statistical features, widely used for texture analysis (Haralick, Shanmugam, and Its'Hak 1973). Co-occurrence features measure the dependency and relationship between intensities of neighboring pixels by considering their distances and angles. This method is widely used to discriminate the texture of an underlying image.
Correlation determines the similarity of pixels at some pixel distance in the input image, given in Equation 10, where p(i, j) is the normalized co-occurrence matrix and μ_x, μ_y, σ_x, σ_y are its marginal means and standard deviations.

$Correlation = \frac{\sum_{i} \sum_{j} (i - \mu_x)(j - \mu_y)\, p(i, j)}{\sigma_x \sigma_y} \quad (10)$

Entropy measures the whole content of an image and the neighborhood variability of voxels, and its equation is given in Equation 11.

$Entropy = -\sum_{i} \sum_{j} p(i, j) \log_2 p(i, j) \quad (11)$

To measure homogeneity at the local level in an image, the inverse difference is computed, given in Equation 12.

$InverseDifference = \sum_{i} \sum_{j} \frac{p(i, j)}{1 + |i - j|} \quad (12)$

To obtain the contrast factor of an image, the inertia value is quantified; its equation is given in Equation 13.

$Inertia = \sum_{i} \sum_{j} (i - j)^{2}\, p(i, j) \quad (13)$
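The co-occurrence statistics above can be illustrated with a small numpy sketch that builds a symmetric, normalized COM at distance 1 and angle 0° and evaluates the four measures. This is an illustrative sketch only; the paper extracts its 220 COM features with MaZda over several distances and angles.

```python
import numpy as np

def glcm(img, levels):
    """Symmetric, normalized co-occurrence matrix at distance 1, angle 0."""
    P = np.zeros((levels, levels), dtype=float)
    left, right = img[:, :-1], img[:, 1:]
    for a, b in zip(left.ravel(), right.ravel()):
        P[a, b] += 1
        P[b, a] += 1                    # count both directions: symmetric COM
    return P / P.sum()

def glcm_stats(P):
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()       # marginal means
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())     # marginal std devs
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
    entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    inverse_diff = (P / (1 + np.abs(i - j))).sum()
    inertia = (((i - j) ** 2) * P).sum()            # a.k.a. contrast
    return correlation, entropy, inverse_diff, inertia
```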
The gradient variance describes the similarity between the intensities of pixels within a given ROI; its equation is given in Equation 14.

$GrVariance = \frac{1}{N} \sum_{k=1}^{N} \big( Gr_k - \overline{Gr} \big)^{2} \quad (14)$

The kurtosis of the gradient is calculated to measure the flatness of the distribution of pixel intensities; its equation is given in Equation 15, where Gr_k are the gradient magnitudes of the N pixels in the ROI, \overline{Gr} is their mean, and σ_Gr is their standard deviation.

$GrKurtosis = \frac{1}{N \sigma_{Gr}^{4}} \sum_{k=1}^{N} \big( Gr_k - \overline{Gr} \big)^{4} - 3 \quad (15)$

Feature Optimization
Reducing the number of input properties for a predictive model is known as feature optimization. It reduces the computational cost of modeling and enhances performance. This section discusses the feature optimization incorporated in the HBTC framework. After feature extraction, the most significant part of our proposed machine vision HBTC framework was feature optimization. The main objective of this task was to retain the most dominant features and discard irrelevant ones. We observed that not all extracted features of the underlying MRI dataset were significant for brain tumor classification. The extracted feature vector volume (FVV) comprised a large number of features, 490,000 (2000 × 245). Such a large FVV was not suitable for tumor classification; moreover, time and memory were additional issues in dealing with such a large dataset. The best features for texture analysis cannot be determined in advance (Chandrashekar and Sahin 2014; Sachdeva et al. 2013). Thus, the feature optimization phase plays a vital role in improving the quality of the mining and analysis process in image processing, particularly in medical image analysis, by reducing the curse-of-dimensionality (COD) problem (Pereira et al. 2016). There are many feature optimization methods; principal component analysis (PCA) is a well-known approach, but it has some barriers (El-Dahshan et al. 2014). PCA does not operate well on datasets that are large and linearly inseparable. Furthermore, it is unsupervised, whereas our dataset was labeled. Therefore, we adopted a novel hybrid feature optimization technique based on (F+MI+PA) + CFS. At first, F+MI+PA reduced the FVV to 30 optimized features. Still, such a large FVV (2000 × 30 = 60,000) was insufficient for rich texture analysis.
Thus, CFS further reduced the wide-ranged FVV to nine optimized features with a sufficiently decreased volume (2000 × 9 = 18,000). The mathematical formulations and descriptions of all the mentioned approaches follow.

Fisher Coefficient (F)
A feature reduction technique should select the most discriminative features and discard the others. If V is a feature vector {f1, f2, . . ., fn}, the Fisher index measures the discrimination provided by each fi (i = 1 to n), and it applies between classes in the same manner. Dominant features have a high Fisher index, while those with a lower Fisher index are considered weak. This method uses the Fisher coefficient for feature reduction, described as the ratio of between-class variance to within-class variance (Saqlain et al. 2019).
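The between-class over within-class variance ratio can be sketched per feature as follows; this is a minimal numpy illustration of the Fisher index idea, not the exact coefficient used by MaZda.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance divided by
    within-class variance. X: (samples, features); y: class labels."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        # Weighted squared deviation of the class mean from the overall mean.
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        # Scatter of the samples around their own class mean.
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / within
```

A feature whose values cluster tightly within each class but differ across classes receives a high score; one that ignores the class structure scores near zero.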

Probability of Error Plus Average Correlation Coefficient (POE + ACC)
POE describes the ratio of improperly classified samples to the total number of samples analyzed in the underlying dataset. The correlation coefficient computes the absolute correlation between previously chosen features and newly selected features; its extended average sum is called the average correlation coefficient (ACC). This study combined both approaches in the feature selection process by adding weighted values in the formula. Our hybrid approach selected the features with the lowest value of POE + ACC (Chandrashekar and Sahin 2014; Shehzad et al. 2020).

Mutual Information (MI)
MI is a rank-based method to determine the dependency between two random variables; their probability density functions are required to compute it. Here the two random variables represent a texture feature and the classification decision, and a large MI value marks a feature that discriminates well between classes. This method gives up to 10 optimized features with the largest mutual information coefficients (Chandrashekar and Sahin 2014; Shehzad et al. 2020).
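For a discretized feature, the mutual information with the class decision reduces to a sum over the joint probability table. The sketch below illustrates this; in practice a continuous texture feature would first be binned, a step this illustration omits.

```python
import numpy as np

def mutual_information(f, c):
    """I(F; C) in bits between a discrete feature f and class labels c."""
    mi = 0.0
    for fv in np.unique(f):
        for cv in np.unique(c):
            p_fc = np.mean((f == fv) & (c == cv))   # joint probability
            if p_fc > 0:
                p_f = np.mean(f == fv)              # marginal of the feature
                p_c = np.mean(c == cv)              # marginal of the class
                mi += p_fc * np.log2(p_fc / (p_f * p_c))
    return mi
```

A feature identical to the class labels carries maximal information; an independent one carries none.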

Correlation-based-Feature-Selection (CFS)
CFS is a supervised feature selection technique. In this research, CFS combined with F+MI+PA gave the nine selected optimal features shown in Table 3. The mathematical formulation of CFS is given below.
$Merit_S = \frac{k\, \overline{r_{cf}}}{\sqrt{k + k(k-1)\, \overline{r_{ff}}}}$

where k is the number of features in the selected subset S, \overline{r_{cf}} is the mean feature-class correlation, and \overline{r_{ff}} is the mean feature-feature intercorrelation.

Classification
Classification is a technique in which input samples are assigned to an analogous group of classes. The selection of suitable classifiers involves many factors, including performance, accuracy, and computational resources (Anter and Hassenian 2018; El-Dahshan et al. 2014). This section describes the classification phase of the HBTC framework.
After selecting the optimal set of features, the next step in the HBTC framework was to predict and assemble the dataset into the four tumor classes. In this experiment, four MV classifiers, namely MLP, J48, MB, and RT, were deployed using the k-fold cross-validation method, with k set to 10. The MV classifiers were deployed on the nine selected features to classify the four brain tumors (Batchelor 2012; Witten et al. 2016). RT builds a tree in which attributes are selected randomly at every node with no pruning, and it also provides an option to compute class probabilities based on backfitting (Witten 2017). MB builds random subsets of the primary dataset, forms an aggregate prediction from the outputs of its base classifiers, and minimizes variance to overcome over-fitting problems. J48 is an advanced decision tree with tree pruning to increase result accuracy (Witten 2017). MLP is a layered, supervised-learning neural network model for data classification, trained by backpropagation; its non-linear activation makes it work well on non-linear datasets (Alfonse and Salem 2016; Shehzad et al. 2020; Witten 2017). The complete framework of HBTC is given in Figure 4.
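The 10-fold cross-validation protocol used to deploy these classifiers can be sketched in plain Python: the samples are shuffled once, partitioned into ten folds, and each fold serves as the test set exactly once while the rest train the model. This illustrates the evaluation protocol only; the actual classifiers were run in Weka.

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]  # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Every sample is tested exactly once across the k rounds, so the reported accuracy uses the whole dataset without train/test leakage.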

Evaluation
Performance evaluation is an integral part of an analytical model: it measures the statistical score of the model and assesses the significance of the generated results. In this section, we present the performance evaluation of the HBTC framework. We evaluated the framework using performance parameters such as the kappa statistic, true-positive rate (T_P), false-positive rate (F_P), receiver-operating-characteristic area (R_O_C), time in seconds (Time Sec), and overall accuracy (Hajian-Tilaki 2013).
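Several of these measures can be derived directly from a multi-class confusion matrix. The numpy sketch below (illustrative, not the Weka implementation) computes per-class T_P and F_P rates, overall accuracy, and the kappa statistic.

```python
import numpy as np

def evaluate(cm):
    """cm: confusion matrix with rows = actual class, columns = predicted."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    tp_rate = tp / cm.sum(axis=1)              # per-class recall (T_P rate)
    fp = cm.sum(axis=0) - tp                   # samples wrongly predicted as class
    fp_rate = fp / (total - cm.sum(axis=1))    # F_P rate over true negatives
    accuracy = tp.sum() / total
    # Chance agreement from the row/column marginals, then Cohen's kappa.
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (accuracy - expected) / (1 - expected)
    return tp_rate, fp_rate, accuracy, kappa
```

Kappa corrects the raw accuracy for the agreement expected by chance, which is why a balanced, near-diagonal confusion matrix scores close to 1.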

Results and Discussion
This section provides the details of the three experiments performed during the classification phase of the HBTC framework. In each experiment, the four MV classifiers, namely MLP, J48, MB, and RT, were deployed on the finally selected hybrid optimized multi-features dataset to classify cyst, glioma, menin, and meta brain tumors. All the classifiers performed well, but MLP outperformed the others. MLP is a good classifier for low-quality, massive, and noisy datasets, as is the case with medical imaging datasets. A mathematical formulation of MLP is given in the following equations (Witten 2017).
$y = \sum_{n=1}^{I} \mu_n c_n + \sigma_n$

where I denotes the number of input neurons, σ_n represents the bias, c_n denotes the input, and μ_n c_n determines the weighted input. The sigmoid activation function is given below.

$f(y) = \frac{1}{1 + e^{-y}}$

The neuronal output of the MLP is presented by the equation

$O = f\Big( \sum_{n=1}^{I} \mu_n c_n + \sigma_n \Big)$

Parameters for MLP are shown in Table 4.
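A single forward pass through this formulation (weighted sum of inputs plus bias, then the sigmoid activation) can be sketched in numpy. The layer shapes and function names here are illustrative assumptions, not the Weka MLP configuration of Table 4.

```python
import numpy as np

def sigmoid(y):
    # The activation f(y) = 1 / (1 + e^(-y)) from the formulation above.
    return 1.0 / (1.0 + np.exp(-y))

def mlp_forward(c, weights, biases):
    """c: input vector; weights/biases: one (W, b) pair per layer.
    Each layer computes f(W @ a + b), i.e. weighted sum plus bias,
    passed through the sigmoid."""
    a = np.asarray(c, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a
```

With all weights and biases at zero, every neuron outputs sigmoid(0) = 0.5, which is a handy sanity check.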

Experiment 1
In the first experiment, the multi-features dataset of ROIs of size 10 × 10 was input to the four MV classifiers. MLP gave an overall accuracy of 64.8% in classifying the four brain tumors. Results of the four MV classifiers for the classification of cyst, menin, glioma, and meta tumors are shown in Table 5. The confusion matrix of MLP is shown in Table 6.

Experiment 2
Results of the first experiment were not satisfactory and remained below 70%; thus, in the second experiment, a dataset of ROIs of size 15 × 15 was created and input to the MV classifiers. Classification accuracies improved in this experiment: J48 and MLP gave overall classification accuracies of 89.5% and 88.9%, respectively, while MB and RT gave 87.8% and 81.4%, respectively. Table 7 shows the significant parameters of this experiment, and the confusion matrix values are shown in Table 8.

Experiment 3
When we increased the ROI size, the results improved in the second experiment; thus, we started the third experiment by increasing the ROI size again. This experiment created ROIs of size 20 × 20 and formed a new multi-features dataset from the same MRIs, which was then input to the four MV classifiers. In this experiment, MLP outperformed the others and gave an overall accuracy of 98.3%. The other classifiers also improved on this dataset: J48, MB, and RT gave 96.8%, 95.8%, and 94.8% accuracy, respectively. These results are presented in Table 9, and the confusion matrix values are shown in Table 10:

          cyst  glioma  menin  meta  Total
cyst       239       5      3     3    250
glioma       5     245      0     0    250
menin        3      77    167     3    250
meta         4       0      2   244    250

A performance comparison graph of MLP for the four brain tumor types on the datasets of ROIs of sizes 10 × 10, 15 × 15, and 20 × 20 is shown in Figure 5. MLP gave the best classification results when applied to the ROI dataset of size 20 × 20. The overall comparison graph of the classification results of all four classifiers is shown in Figure 6; it shows that MLP outperformed the others in classifying the four tumor types, namely cyst, glioma, menin, and meta. We now summarize our discussion with the following highlights. This study introduced a novel HBTC framework based on machine vision approaches to classify brain tumors. We successfully designed, implemented, and evaluated all the components of the proposed HBTC framework. In the initial phase, we acquired MR images of the four brain tumor types, and after preprocessing, segmentation, and hybrid feature optimization, we evaluated the four classifiers on the resulting dataset. MB gave a better result of 95.8%, as it reduces variance and uses a strong aggregate prediction scheme, but it still does not handle dataset bias properly (Witten 2017). Since J48 applies pruning on the target dataset, non-critical sections of the generated tree are removed and the over-fitting problem is reduced (Kevin Zhou, Fichtinger, and Rueckert 2019); for this reason, J48 produced a performance of 96.8%.
Finally, MLP outmatched the others because it provides a strong layered neural network train-test model. It also uses non-linear activations and performs well on datasets that are not linearly separable (Asha Kiranmai and Jaya Laxmi 2018; Gonzalez and Rafael 2018; Mohan and Subashini 2018). It is concluded that the proposed HBTC framework excelled in brain tumor classification. A comparison between our proposed framework and other streamed classification techniques is given in Table 11.

Conclusion
The main objective of this research study was to classify four brain tumor types using a novel machine vision-based HBTC framework. The proposed framework takes MRIs as input, applies histogram equalization to normalize the brain images, and reduces image noise with the hybrid K-S-L scheme. To segment the tumor region, we carried out the TACS scheme. Following that, multiple feature extraction approaches were used to extract the texture characteristics of brain tumors; the multi-features dataset included COM, RLM, and gradient texture features. Next, our framework applied a hybrid multi-features optimization method to the feature vector, which produced a fully optimized feature dataset. In the end, the MV classifiers, namely RT, MB, J48, and MLP, were evaluated on the dataset. All classifiers provided splendid results, but MLP showed an outstanding accuracy of 98.3% in classifying the four brain tumors. The accuracies of J48, MB, and RT were 96.8%, 95.8%, and 94.8%, respectively. The framework will help radiologists and doctors diagnose brain tumors correctly; it is robust and can help minimize human error in diagnosing brain tumors.