Discovery of new nicotinamides as apoptotic VEGFR-2 inhibitors: virtual screening, synthesis, anti-proliferative, immunomodulatory, ADMET, toxicity, and molecular dynamics simulation studies

Abstract A library of modified nicotinamides was designed as VEGFR-2 inhibitors. Virtual screening of the hypothetical library was conducted using in silico docking, ADMET, and toxicity studies. Four compounds exhibited high in silico affinity for VEGFR-2 and acceptable drug-likeness. These compounds were synthesised and subjected to an in vitro cytotoxicity assay against two cancer cell lines, in addition to determination of VEGFR-2 inhibition. Compound D-1 showed cytotoxic activity against HCT-116 cells almost double that of sorafenib. Compounds A-1, C-6, and D-1 showed good IC50 values against VEGFR-2. Compound D-1 markedly increased the levels of caspase-8 and BAX expression and decreased the level of anti-apoptotic Bcl-2. Additionally, compound D-1 caused cell cycle arrest at the pre-G1 and G2-M phases in HCT-116 cells and induced apoptosis at both the early and late apoptotic stages. Compound D-1 also decreased the levels of TNF-α and IL-6. MD simulation studies were performed over 100 ns.

Prediction: Positive if the Bayesian score is above the best cutoff value, estimated by minimising the false positive and false negative rates.
Probability: The estimated probability that the sample is in the positive category. This assumes that the Bayesian score follows a normal distribution, and it may differ from the cutoff-based prediction.
Enrichment: An estimate of enrichment, that is, the increased likelihood (versus random) of this sample being in the category.
Bayesian Score: The standard Laplacian-modified Bayesian score.
Mahalanobis Distance: The Mahalanobis distance (MD) is the distance to the centre of the training data. The larger the MD, the less trustworthy the prediction.
Mahalanobis Distance p-value: The fraction of training data with an MD greater than or equal to that of the given sample, assuming normally distributed data. The smaller the p-value, the less trustworthy the prediction. For highly non-normal X properties (e.g., fingerprints), the MD p-value is wildly inaccurate.
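The two trustworthiness measures above can be illustrated with a small sketch. The empirical MD p-value follows the definition given (fraction of training samples at least as far from the centre as the query); the Laplacian-corrected score shown is one common form of such an estimator and is an assumption, not necessarily the exact formula the tool uses. All data here are synthetic.

```python
import math

import numpy as np

rng = np.random.default_rng(0)

# Illustrative training set: 500 samples with 3 numeric descriptors.
X = rng.normal(size=(500, 3))
centre = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(x):
    """Distance from a query x to the centre of the training data."""
    d = x - centre
    return float(np.sqrt(d @ cov_inv @ d))

# Pre-compute the MD of every training sample.
train_md = np.array([mahalanobis(row) for row in X])

def md_p_value(x):
    """Fraction of training samples whose MD is >= the query's MD.
    The smaller the value, the farther the query lies from the
    training data and the less trustworthy the prediction."""
    return float(np.mean(train_md >= mahalanobis(x)))

def laplacian_score(feature_counts, p_base):
    """A common Laplacian-corrected naive Bayes score (assumed form):
    for each fingerprint feature f present in the query, add
    log(P_corr / p_base), where P_corr = (A_f + 1) / (N_f + 1/p_base),
    A_f = positive training samples containing f, and
    N_f = all training samples containing f."""
    return sum(
        math.log(((a_f + 1.0) / (n_f + 1.0 / p_base)) / p_base)
        for a_f, n_f in feature_counts
    )

print(md_p_value(centre))            # near the centre: close to 1
print(md_p_value(np.full(3, 5.0)))   # far outside the data: close to 0
print(laplacian_score([(0, 0)], 0.5))  # an unseen feature contributes 0.0
```

Note how the Laplace correction pulls the per-feature estimate towards the baseline rate, so a feature never seen in training contributes nothing to the score rather than an extreme value.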

Model Applicability
Unknown features are fingerprint features in the query molecule, but not found or appearing too infrequently in the training set.

1. All properties and OPS components are within expected ranges.

Feature Contribution
Top features for positive contribution
