Anti-cancer and immunomodulatory evaluation of new nicotinamide derivatives as potential VEGFR-2 inhibitors and apoptosis inducers: in vitro and in silico studies

Abstract New nicotinamide derivatives 6, 7, 10, and 11 were designed and synthesised based on the essential features of VEGFR-2 inhibitors. Compound 10 revealed the highest anti-proliferative activities, with IC50 values of 15.4 and 9.8 µM against HCT-116 and HepG2 cells, respectively, compared to sorafenib (IC50 = 9.30 and 7.40 µM). Compound 7 showed promising cytotoxic activities, with IC50 values of 15.7 and 15.5 µM against the same cell lines, respectively. Subsequently, the VEGFR-2 inhibitory activities of the title compounds were assessed; they exhibited VEGFR-2 inhibition with sub-micromolar IC50 values. Moreover, compound 7 induced cell cycle arrest at the G2-M and G0-G1 phases and induced apoptosis in HCT-116 cells. Compounds 7 and 10 reduced the levels of TNF-α by 81.6 and 84.5% as well as IL-6 by 88.4 and 60.9%, respectively, compared to dexamethasone (82.4 and 93.1%). In silico docking, molecular dynamics simulations, ADMET, and toxicity studies were also carried out.


Chemistry and material
All melting points were determined by the open capillary method on a Gallenkamp melting point apparatus. Infrared spectra were recorded on a Pye Unicam SP 1000 IR spectrophotometer using the potassium bromide disc technique. Proton nuclear magnetic resonance (1H NMR) spectra were recorded on a Bruker 400 MHz NMR spectrometer. Carbon-13 nuclear magnetic resonance (13C NMR) spectra were recorded on a Bruker 100 MHz NMR spectrometer. Tetramethylsilane (TMS) was used as the internal standard, and chemical shifts were measured on the δ scale in parts per million (ppm). Elemental analyses of all compounds were within ±0.4% of the theoretical values. Reactions were monitored by thin-layer chromatography (TLC) on sheets precoated with UV-fluorescent silica gel (Merck 60 F254), visualized under a UV lamp, with different solvent systems as mobile phases.

Docking studies
Final target compounds 6, 7, 10, and 11 were studied by molecular docking against the immunomodulatory proteins TNF-α (PDB ID: 2AZ5) and IL-6 (PDB ID: 1ALU) to investigate their interaction patterns with these targets. To prepare the target proteins, water molecules were removed and atom valences were corrected by protonating the whole structure. Energy minimization was then carried out by applying the CHARMM and MMFF94 force fields. After that, the active binding site was defined and prepared for docking. The docking protocol was validated by redocking the co-crystallized ligand. The designed compounds were drawn using ChemBioDraw Ultra 14.0 and saved in MDL SD format.
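Redocking validation of this kind is typically judged by the RMSD between the redocked pose and the co-crystallized ligand. A minimal sketch, assuming the two poses are given as matched N×3 coordinate arrays and using an illustrative 2.0 Å acceptance threshold (the coordinates below are hypothetical, not taken from 2AZ5 or 1ALU):

```python
import numpy as np

def pose_rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square deviation between two matched (N, 3) coordinate sets."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Hypothetical coordinates (in angstroms) for a 3-atom fragment of the
# co-crystallized ligand and its redocked pose.
crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
redocked = np.array([[0.1, 0.0, 0.0], [1.6, 0.1, 0.0], [1.4, 1.5, 0.1]])

rmsd = pose_rmsd(crystal, redocked)
print(f"RMSD = {rmsd:.3f} A")  # values below ~2.0 A are usually taken as a valid redocking
```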
The sketched compounds were built from fragment libraries in the MOE program, protonated, energy-minimized, and then prepared for docking. Docking was carried out using the Triangle Matcher placement method with London dG as the scoring function. Ten conformers (poses) for each molecule were generated using genetic algorithm searches. The free energies and binding modes of the designed molecules against VEGFR-2 were determined, and the most suitable pose for each compound was selected according to its binding free energy and its binding mode with the target.
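The final pose-selection step can be sketched as follows. The pose labels and London dG scores below are hypothetical; more negative scores (in kcal/mol) indicate tighter predicted binding:

```python
# Minimal sketch of selecting the top-scoring pose from a docking run.
# Hypothetical (pose, London dG score) pairs for one compound.
poses = [
    ("pose_1", -6.8),
    ("pose_2", -8.3),
    ("pose_3", -7.1),
]

# The most negative score is the best predicted binder.
best_pose, best_score = min(poses, key=lambda p: p[1])
print(best_pose, best_score)  # pose_2 -8.3
```

In practice the best-scoring pose is also inspected visually to confirm that its binding mode (hydrogen bonds, hydrophobic contacts) is chemically sensible, as done here against VEGFR-2.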

ADMET studies
ADMET descriptors (absorption, distribution, metabolism, excretion, and toxicity) of the synthesized compounds were determined using Discovery Studio 4.0. First, the CHARMM force field was applied, and the compounds were prepared and minimized according to the small-molecule preparation protocol. The ADMET descriptors protocol was then applied to carry out these studies.
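ADMET profiles of this kind are commonly summarized against simple drug-likeness limits. A minimal sketch of a Lipinski rule-of-five check, using hypothetical descriptor values (not the values computed for these compounds):

```python
def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count Lipinski rule-of-five violations for one descriptor set."""
    rules = [
        mw > 500,         # molecular weight above 500 Da
        logp > 5,         # calculated logP above 5
        h_donors > 5,     # more than 5 hydrogen-bond donors
        h_acceptors > 10, # more than 10 hydrogen-bond acceptors
    ]
    return sum(rules)

# Hypothetical descriptors for one derivative.
violations = lipinski_violations(mw=432.5, logp=3.8, h_donors=2, h_acceptors=6)
print(violations)  # 0
```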

Toxicity studies
The toxicity parameters of the synthesized compounds were calculated using Discovery Studio 4.0, with sorafenib as a reference drug. First, the CHARMM force field was applied, and the compounds were prepared and minimized according to the small-molecule preparation protocol. The different parameters were then calculated using the toxicity prediction (extensible) protocol.

Molecular dynamics simulation & MM/PBSA
The system was prepared using the web-based CHARMM-GUI interface [1-3] with the CHARMM36 force field [4]. All simulations were performed with the NAMD 2.13 package [5]. The TIP3P explicit solvation model was used [6], and periodic boundary conditions were set with dimensions of 82.65 Å, 82.36 Å, and 82.64 Å in x, y, and z, respectively. Parameters for the top docking results were generated using the CHARMM General Force Field [7]. Afterward, the system was neutralized with Cl-/Na+ counterions. The MD protocol involved minimization, equilibration, and production. A 2 fs integration time step was chosen for all MD simulations; equilibration was carried out in the canonical (NVT) ensemble, while the isothermal-isobaric (NPT) ensemble was used for production. Throughout the 100 ns of MD production, the pressure was set at 1 atm using the Nosé-Hoover Langevin piston barostat [8,9] with a Langevin piston decay of 0.05 ps and a period of 0.1 ps. The temperature was set at 298.15 K using the Langevin thermostat [10]. A distance cutoff of 12.0 Å was applied to short-range non-bonded interactions with a pair-list distance of 16 Å, and Lennard-Jones interactions were smoothly truncated starting at 8.0 Å. Long-range electrostatic interactions were treated using the particle-mesh Ewald (PME) method [11,12] with a grid spacing of 1.0 Å for all simulation cells. All covalent bonds involving hydrogen atoms were constrained using the SHAKE algorithm [13]. For consistency, the same protocol was applied to all MD simulations.
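The Langevin thermostat used here for temperature control can be illustrated with a minimal one-dimensional model. This is a generic sketch of the thermostat principle only, not NAMD's actual integrator; the harmonic well, mass, friction, and kT values are arbitrary illustrative choices:

```python
import math
import random

def langevin_step(x, v, force, dt=0.002, mass=1.0, gamma=1.0, kT=1.0, rng=random):
    """One step of Langevin dynamics for a 1-D particle.

    Friction (gamma) drains kinetic energy while the random Gaussian kick
    re-injects it, so long trajectories sample the target temperature kT.
    """
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)
    v = v + 0.5 * dt * force(x) / mass      # half kick from the force
    x = x + 0.5 * dt * v                    # half drift
    v = c1 * v + c2 * rng.gauss(0.0, 1.0)   # friction + thermal noise
    x = x + 0.5 * dt * v                    # half drift
    v = v + 0.5 * dt * force(x) / mass      # half kick
    return x, v

# In a harmonic well, the average kinetic energy should approach kT/2 in 1-D.
rng = random.Random(0)
x, v = 0.0, 0.0
ke, n = 0.0, 50000
for _ in range(n):
    x, v = langevin_step(x, v, force=lambda q: -q, rng=rng)
    ke += 0.5 * v * v
print(ke / n)  # close to 0.5 (= kT/2)
```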

Binding Energy Calculations
The one-average molecular mechanics generalized Born surface area (MM/GBSA) approach [14,15] implemented in the MOLAICAL code [16] was used for the relative binding energy calculations, in which the ligand (L) binds to the protein (P) to form the complex (PL):

ΔG_bind = <G_PL> - <G_P> - <G_L>

with all three averages taken over snapshots of the complex trajectory (the "one-average" scheme).

Toxicity prediction output
The toxicity prediction reports list the following fields:
Prediction: positive if the Bayesian score is above the best cutoff value, estimated by minimizing the false-positive and false-negative rates.
Probability: the estimated probability that the sample is in the positive category. This assumes that the Bayesian score follows a normal distribution, and it may differ from the cutoff-based prediction.
Enrichment: an estimate of enrichment, i.e. the increased likelihood (versus random) of the sample being in the category.
Bayesian score: the standard Laplacian-modified Bayesian score.
Mahalanobis distance: the Mahalanobis distance (MD) is a generalization of the Euclidean distance that accounts for correlations among the X properties, calculated as the distance to the center of the training data. The larger the MD, the less trustworthy the prediction.
Mahalanobis distance p-value: the fraction of training data with an MD greater than or equal to that of the given sample, assuming normally distributed data. The smaller the p-value, the less trustworthy the prediction. For highly non-normal X properties (e.g. fingerprints), the MD p-value is wildly inaccurate.
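The one-average MM/GBSA decomposition can be sketched as follows. The per-snapshot free energy components below are hypothetical values in kcal/mol, not results from this study:

```python
def mmgbsa_binding_energy(g_complex, g_protein, g_ligand):
    """Relative binding free energy: dG_bind = <G_PL> - <G_P> - <G_L>.

    In the one-average scheme all three terms come from snapshots of the
    complex trajectory, so internal-energy terms largely cancel.
    """
    avg = lambda xs: sum(xs) / len(xs)
    return avg(g_complex) - avg(g_protein) - avg(g_ligand)

# Hypothetical per-snapshot free energies (kcal/mol).
g_pl = [-12050.2, -12048.7, -12051.1]  # protein-ligand complex
g_p = [-11980.4, -11979.9, -11981.0]   # protein alone
g_l = [-30.5, -30.1, -30.3]            # ligand alone

dg = mmgbsa_binding_energy(g_pl, g_p, g_l)
print(f"dG_bind = {dg:.2f} kcal/mol")
```

A more negative dG_bind indicates stronger predicted binding; in practice hundreds of snapshots are averaged rather than the three shown here.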

Model Applicability
Unknown features are fingerprint features present in the query molecule that are not found, or appear too infrequently, in the training set.

1. All properties and OPS components are within expected ranges.

Feature Contribution
Top features for positive contribution.
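The Mahalanobis-distance applicability check described above can be sketched numerically. The 2-D descriptor training set below is synthetic, purely for illustration:

```python
import numpy as np

def mahalanobis(x, training):
    """Mahalanobis distance of sample x to the centre of the training data."""
    mu = training.mean(axis=0)
    cov = np.cov(training, rowvar=False)  # accounts for correlated descriptors
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Synthetic 2-D descriptor training set (200 samples).
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))

near = mahalanobis(np.array([0.0, 0.0]), train)  # near the centre: trustworthy
far = mahalanobis(np.array([8.0, 8.0]), train)   # far outside: less trustworthy
print(near < far)  # True
```

A large distance means the query molecule sits outside the model's training domain, which is exactly why the report flags such predictions as less trustworthy.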

Model Prediction
Prediction: 29.4 (mg/kg body weight/day); Mahalanobis distance: 12.5; Mahalanobis distance p-value: 6.85e-05.

Model Applicability
Unknown features are fingerprint features that occur in the query molecule but are not found, or appear too infrequently, in the training set.


Model Prediction
Prediction: 0.0774 (g/kg body weight); Mahalanobis distance: 29; Mahalanobis distance p-value: 6.55e-23.

Model Applicability
Unknown features are fingerprint features that occur in the query molecule but are not found, or appear too infrequently, in the training set.
