## ABSTRACT

### Introduction

Modern drug discovery incorporates a wide range of tools and data, heralding the beginning of the data-driven drug design (DD) era. Understanding and effectively using the distributions of the chemical and physical data that feed Artificial Intelligence (AI)/Machine Learning (ML) models and drive DD has thus become highly important.

### Areas covered

The authors perform a comprehensive exploration of the statistical distributions driving the data-intensive era of drug discovery, including Benford’s Law in AI/ML-based DD.

### Expert opinion

As the relevance of data-driven discovery escalates, we anticipate meticulous scrutiny of datasets utilizing principles like Benford’s Law to enhance data integrity and guide efficient resource allocation and experimental planning. In this data-driven era of the pharmaceutical and medical industries, addressing critical aspects such as bias mitigation, algorithm effectiveness, data stewardship, and fraud prevention is essential. Harnessing Benford’s Law and other distributions and statistical tests in DD provides a potent strategy to detect data anomalies, fill data gaps, and enhance dataset quality. Benford’s Law offers a fast check of the integrity and quality of datasets, the backbone of AI/ML and other modeling approaches, and proves very useful in the design process.

## 1. Introduction

As we mark the beginning of the data-driven drug design era, understanding and effectively using the statistical distributions that form the core of AI/ML methodologies and drug discovery tools have garnered high importance. This work aims to elucidate the impact of these distributions, first with a section on methods, and then a section on distributions, with a keen focus on the application of Benford’s Law in DD. This approach provides a comprehensive discussion of statistical distributions used in the past and recently. It sheds light on how Benford’s Law can make a significant difference in AI/ML-based DD models, an area that has yet to be extensively explored in the literature.

### 1.1. Data-driven discovery

The slow and expensive drug discovery process can take approximately 15 years and $2 billion to develop a small-molecule drug. It may be sped up by advances in structural biology (cryo-EM, structure prediction) and the development of vast virtual libraries of drug-like small molecules, along with the availability of abundant computing resources, physics-based methods, artificial intelligence/machine learning (AI/ML), and the screening of gigascale chemical spaces. This computer-driven drug discovery [Citation1,Citation2] may provide many initial hits in libraries of $10^{10}$ structures [Citation3]. However, an essential factor for distribution-based optimization is generating and accessing distributions that include negative or inactive data. Structure-Inactivity Relationships (SIRs) in drug discovery [Citation4]** may significantly help address this gap and emphasize that machine and deep learning techniques can benefit from negative results. Currently, there is a gap in the literature regarding inactivity data that limits the use of in silico methods, as authors are more inclined to publish novel datasets rather than exhaustive ones. Negative data may be published as preprints as a first step [Citation4]** or in bespoke databases.

In some cases, positives can be labeled as ‘spies’, i.e. negatives that help identify a decision boundary, such as peptides in a neural network (NN) classifier giving similar results to real negatives [Citation5]. Experimentally confirmed hit rates for DD are still around 10–30%, and free energy predictions still carry errors of circa 1 kcal/mol [Citation3]. In addition, de-risking drug development necessitates properly handling therapeutically relevant data from translatable animal or computational models, including accelerating the toxicological characterization of compounds.

Distributions for chemical bioavailability, lead-likeness, and fragment-likeness are well known [Citation6]. Lately, AI models have gained importance, making use of the relative strengths of machine vision in AI/ML-enabled medical devices [Citation7], as well as being considered by regulatory bodies for replacing experiments in drug approvals [Citation8]. New data are becoming available from automated labs, organs-on-a-chip or functional organoids, multiparameter optimization, and consideration of polypharmacology. New tools for data science and AI/ML make finding patterns among multivariate and synthetic data faster and provide relevant information.

A vital consideration in this drug design transformation is the use of different statistical distributions that play a pivotal role in optimizing the process and outcomes. This work discusses ideas and perspectives for these distributions, describing data science and drug design techniques where these distributions are used, some new advances, and how they are employed. These include machine learning, Bayesian methods, quantum computing, multi-objective optimization, personalized medicine, and a section on statistical distributions used in cutting-edge methodologies in DD.

## 2. Machine learning and AI in drug design

The advent of Machine Learning and AI in drug design is a game-changer, with statistical distributions playing a pivotal role in optimizing outcomes. This section delves deeper into the specifics of how ML and AI utilize distributions such as Normal, Uniform, and Xavier/Glorot in drug design.

The initialization of weights in deep learning and NNs often uses the Normal or Uniform distribution. More recently, the Xavier/Glorot distribution, a specific type of Normal distribution, has been used to keep the variance of activations and back-propagated gradients stable across layers [Citation9]. The recent development of the AutoInit algorithm has advanced the initialization of weights in NNs by adapting to different network architectures and analytically tracking the mean and variance of signals, thus improving performance across various network settings [Citation10].
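
As a minimal sketch (assuming the normal variant of Xavier/Glorot initialization; the layer sizes below are purely illustrative), the variance-preserving draw can be written as:

```python
import math
import random

def glorot_normal(n_in, n_out, rng):
    # Xavier/Glorot initialization (normal variant): weights are drawn from
    # N(0, 2/(n_in + n_out)), a scale chosen so that the variance of
    # activations and of back-propagated gradients stays stable across layers.
    std = math.sqrt(2.0 / (n_in + n_out))
    return [[rng.gauss(0.0, std) for _ in range(n_out)] for _ in range(n_in)]

weights = glorot_normal(512, 256, random.Random(0))
```

Deep learning frameworks provide this initializer out of the box; the sketch only makes the underlying distribution explicit.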

With the rise of deep learning, deep generative models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have found use in de novo drug design. These models learn latent representations of molecular structures and use these to generate novel drug candidates [Citation11]**. The DeepTarget model has been proposed as an end-to-end deep learning model for generating novel drug candidates based solely on the amino acid sequence of the target protein, reducing the heavy reliance on prior knowledge [Citation12]. In a low-data regime, a single potent Nurr1 agonist served as the template for fragment-augmented fine-tuning of a chemical language model (CLM) on SMILES, with sampling frequency used for design prioritization; the resulting novel Nurr1 agonists demonstrate the usefulness of these methods in hit and lead generation [Citation13].

In drug design, reinforcement learning (RL) can optimize molecular properties and target binding through iterative optimization. The probability distributions in RL algorithms, such as policy gradients or Q-learning, help guide the search for promising drug candidates. The Softmax distribution is often used to convert the output of a NN into a probability distribution over actions. These probability distributions are advantageous in multi-armed bandit problems, where the agent must balance exploration and exploitation [Citation14]. The integration of RL with drug-target interaction for drug design has led to the development of a model that uses a recurrent NN for molecular modeling and drug-target affinity as the reward function for optimal molecular generation, thus improving the efficiency of drug design [Citation15].
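
A minimal sketch of the Softmax conversion (the temperature parameter is an illustrative addition often used to tune the exploration–exploitation balance):

```python
import math

def softmax(action_values, temperature=1.0):
    # Convert raw action values into a probability distribution over actions.
    # Subtracting the maximum before exponentiating avoids overflow and does
    # not change the result.
    scaled = [v / temperature for v in action_values]
    m = max(scaled)
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])  # highest action value gets the highest probability
```

Lower temperatures sharpen the distribution toward exploitation; higher temperatures flatten it toward exploration.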

Sudden jumps in properties such as bioactivities of closely related chemical compounds, called activity cliffs, are common in drug design and represent hard-to-model distributions since the changes in structure can be subtle, such as a change in one atom. Indeed, ML using molecular descriptors performed better than deep learning on activity cliffs in datasets [Citation16].

Bayesian approaches have gained popularity in drug design due to their capability to incorporate prior knowledge and uncertainty into the model. These methods leverage a variety of distributions to represent prior beliefs about the parameters being estimated. Here, the use of these distributions by Bayesian methods in the drug design process is explored.

Bayesian models often use various probability distributions, such as Normal, Gamma, or Beta distributions, to represent prior beliefs about the parameters being estimated [Citation17]. In clinical trials, Bayesian statistical methods incorporate prior data into trial design, analysis, and decision-making. They are increasingly recognized for their potential to reduce the time and cost of bringing innovative medicines to patients [Citation18].
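
As a hedged illustration of such prior updating, consider the textbook conjugate case of a Normal prior on an unknown mean with known observation noise (all numbers are illustrative):

```python
import math

def normal_posterior(prior_mean, prior_sd, data, noise_sd):
    # Conjugate update: a Normal prior N(prior_mean, prior_sd^2) on an unknown
    # mean, combined with Normally distributed observations of known standard
    # deviation noise_sd, yields a Normal posterior. Precisions (inverse
    # variances) add, and the posterior mean is a precision-weighted average.
    n = len(data)
    post_precision = 1.0 / prior_sd**2 + n / noise_sd**2
    post_mean = (prior_mean / prior_sd**2 + sum(data) / noise_sd**2) / post_precision
    return post_mean, math.sqrt(1.0 / post_precision)

# A vague prior pulled toward four consistent observations:
mean, sd = normal_posterior(0.0, 1.0, [1.0, 1.0, 1.0, 1.0], 1.0)
```

The same logic, with Gamma or Beta priors, applies to rates and proportions in trial design.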

Gaussian processes are also used as priors in Bayesian optimization. A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution, allowing one to model uncertainty about the function being optimized [Citation19]. The choice of distribution in a Bayesian network depends on the nature of the variables. For binary variables, a Bernoulli distribution might be used. For positive integer counts, a Poisson distribution might be used instead. For continuous variables, a Normal distribution can be used [Citation20].

Quantum computing has emerged as a promising tool for describing chemical processes in drug design. Quantum states are described using complex probability amplitudes, a generalization of classical probabilities. The Born rule converts these amplitudes into probabilities, creating a unique distribution for each quantum state [Citation21]. Quantum computing has been proposed for molecular simulation algorithms [Citation22] and understanding reaction mechanisms [Citation23].

Drug design involves optimizing multiple, often opposing, physicochemical and pharmacological properties. The Pareto distribution is often used to model the trade-off between different objectives in multi-objective optimization. The Pareto front represents the set of non-dominated solutions, where a solution ‘dominates’ another if it is better in at least one objective and no worse in the others. In molecular (inverse) design, graph-based, non-dominated sorting genetic algorithms (NSGA-II, NSGA-III) have been proposed for molecular multi-objective optimization [Citation24].
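
A minimal sketch of extracting the Pareto front (the non-dominated set) from candidate solutions, assuming all objectives are to be maximized and using illustrative two-objective scores:

```python
def dominates(a, b):
    # a dominates b if a is at least as good in every objective
    # and strictly better in at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (potency, solubility) scores for four candidate molecules:
front = pareto_front([(1.0, 2.0), (2.0, 1.0), (0.5, 0.5), (1.0, 1.0)])
```

NSGA-II-style algorithms extend this non-dominated sorting with crowding distance and evolutionary operators.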

Also crucial in drug design is the response to medication. Personalized medicine tailors medical treatment to the individual characteristics of each patient and aims to improve drug design for populations and individuals, with statistical distributions playing a crucial role in modeling individual differences in treatment response. The random effects in a mixed-effects model are often assumed to follow a Normal distribution, which allows modeling of individual differences in response to treatment [Citation25]. The Cox proportional hazards model is commonly used in survival analysis. It assumes that the hazard function, which describes the risk of the event as a function of time, is the product of a baseline hazard function and an exponential function of the covariates. The baseline hazard function can take any form and is not tied to a specific distribution.

## 3. Distribution-centric design

In addition to the previously mentioned distributions, several others, such as the Gaussian, Boltzmann, Poisson, Log-normal, Weibull, Gamma, Beta, and Gumbel distributions, have also found application in drug design. Each of these distributions has strengths, limitations, and a unique role in different aspects of drug design, which are discussed in this section. Some of these distributions have an established role, e.g., in following patient data, and others have received renewed interest. The choice of distribution depends on the specific context, the problem, and the available data.

The Gaussian distribution (Equation 1) has a long history in modeling. It is heavily used in quantitative structure-activity relationship (QSAR) modeling of (normally distributed) descriptors to understand the relationship between molecular properties and biological activity, which aids in drug design [Citation3].

$$f\left(x\right) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{\left(x - \mu\right)^{2}}{2\sigma^{2}}}$$

The Gaussian distribution is bell-shaped and symmetric about the mean $\mathit{\mu}$. It is fully characterized by its mean and standard deviation $\mathit{\sigma}$.

The generalized Boltzmann distribution (Equation 2) has been applied in statistical thermodynamics and molecular dynamics simulations for the past few decades to sample various molecular conformations, helping researchers understand the thermodynamics and kinetics of drug-target interactions [Citation6].

$$p\left(E\right) = e^{\left(F - E\right)/kT}$$

The Boltzmann distribution is a decreasing exponential function of energy, where *E* is the energy of a microstate, $\mathit{T}$ is the temperature, $\mathit{k}$ is the Boltzmann constant, and *F* is the free energy (Helmholtz free energy).
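
A minimal sketch of Boltzmann-weighting a set of conformer energies (the energy values are illustrative, in kcal/mol, with the gas constant standing in for k on a per-mole basis):

```python
import math

K_B = 0.0019872041  # gas constant in kcal/(mol*K)

def boltzmann_weights(energies, temperature=298.15):
    # p_i is proportional to exp(-E_i / kT); shifting by the minimum energy
    # leaves the normalized weights unchanged but avoids numerical underflow.
    e_min = min(energies)
    raw = [math.exp(-(e - e_min) / (K_B * temperature)) for e in energies]
    total = sum(raw)
    return [w / total for w in raw]

populations = boltzmann_weights([0.0, 0.5, 2.0])  # lowest-energy conformer dominates
```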

The Poisson distribution (Equation 3) is used in cheminformatics to model rare events, such as the occurrence of specific chemical patterns or motifs within a large chemical library [Citation26].

$$P\left(x\right) = \frac{\lambda^{x}e^{-\lambda}}{x!}$$

The Poisson distribution is discrete and represents the probability of a given number of events occurring in a fixed interval of time or space. Its shape depends on the parameter $\mathit{\lambda}$ (the average rate or expected value of events) and the number of occurrences, *x*.
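
For illustration, the Poisson probability mass can be evaluated directly (the rate of 1.5 expected motifs is an assumed value):

```python
import math

def poisson_pmf(x, lam):
    # P(X = x) = lam**x * exp(-lam) / x!
    return lam**x * math.exp(-lam) / math.factorial(x)

# Probability of seeing exactly 0, 1, or 2 rare motifs when 1.5 are expected:
p = [poisson_pmf(x, 1.5) for x in range(3)]
```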

The Log-normal distribution (Equation 4) is used in pharmacokinetics and pharmacodynamics to model the distribution of various drug-related parameters, such as drug clearance, half-life, and volume of distribution [Citation27].

$$f\left(x\right) = \frac{1}{x\sigma\sqrt{2\pi}}\,e^{-\frac{\left(\ln x - \mu\right)^{2}}{2\sigma^{2}}}$$

The log-normal distribution arises from the multiplicative product of many independent positive random variables. It is positively skewed and characterized by its parameters $\mathit{\mu}$ and $\mathit{\sigma}$; the variable $\mathit{x}$ is always positive.

The Weibull distribution (Equation 5) is frequently utilized in survival analysis, which plays a pivotal role in clinical trials. An instance of this would be the time until a patient experiences a specific side effect, which may conform to a Weibull distribution. These times to side effects can aid in comprehending the safety profile of a drug [Citation28].

$$f\left(x\right) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k - 1}e^{-\left(x/\lambda\right)^{k}}, \quad x \geq 0$$

The Weibull distribution can take on a variety of shapes depending on its shape parameter $\mathit{k}$ and scale parameter $\mathit{\lambda}$, both positive numbers.
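
A sketch of drawing illustrative times-to-event from a Weibull distribution by inverse-transform sampling (the shape and scale values are arbitrary):

```python
import math
import random

def weibull_sample(shape_k, scale_lam, rng):
    # Inverse-CDF sampling: if U ~ Uniform(0, 1), then
    # t = lam * (-ln(1 - U))**(1/k) follows a Weibull(k, lam) distribution.
    u = rng.random()
    return scale_lam * (-math.log(1.0 - u)) ** (1.0 / shape_k)

rng = random.Random(42)
times = [weibull_sample(1.5, 2.0, rng) for _ in range(20000)]
```

A survival curve estimated from such samples can then be compared against observed time-to-side-effect data.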

The Gamma distribution (Equation 6) is employed in modeling waiting times between events. In drug discovery, it may be used to model the time until a given reaction transpires in a biochemical process, which can be critical in comprehending the mechanism of action of a drug [Citation29].

$$f\left(x\right) = \frac{\beta^{\alpha}x^{\alpha - 1}e^{-\beta x}}{\Gamma\left(\alpha\right)}$$

The shape of the Gamma distribution depends on the shape parameter $\mathit{\alpha}$ and the rate parameter $\mathit{\beta}$.

The Beta distribution (Equation 7) is utilized in Bayesian statistics, which is gaining popularity in drug discovery. It can model the prior knowledge concerning the efficacy of a drug, which is subsequently updated with fresh data to furnish a posterior distribution for the drug’s efficacy [Citation30].

$$f\left(x\right) = \frac{x^{\alpha - 1}\left(1 - x\right)^{\beta - 1}}{B\left(\alpha,\beta\right)}$$

The Beta distribution is defined on the interval [0, 1] and can take a variety of shapes depending on its shape parameters $\mathit{\alpha}$ and $\mathit{\beta}$.
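
The Bayesian updating mentioned above has a simple conjugate form: a Beta(α, β) prior on a response rate combined with binomial trial outcomes yields a Beta posterior (the numbers below are illustrative):

```python
def beta_update(alpha, beta, successes, failures):
    # Conjugate update: Beta(alpha, beta) prior + binomial data
    # -> Beta(alpha + successes, beta + failures) posterior.
    return alpha + successes, beta + failures

# Weak prior Beta(2, 2), then 7 responders and 3 non-responders observed:
a, b = beta_update(2, 2, successes=7, failures=3)
posterior_mean = a / (a + b)  # updated estimate of the response rate
```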

The Gumbel distribution (Equation 8) is usually used in extreme value theory. However, in drug discovery, the Gumbel distribution could be utilized to model the drug-drug effects (efficacy or toxicity) that drugs can induce, which is significant in understanding a drug’s therapeutic profile [Citation35].

$$f\left(x\right) = \frac{1}{\beta}\,e^{-\left(z + e^{-z}\right)}, \quad \text{where } z = \left(x - \mu\right)/\beta$$

The Gumbel distribution is asymmetric and skewed to the right. It can model the distribution of the maximum (or the minimum) of several samples of various distributions.

A particular distribution that can be further explored in dataset design is Benford’s Law. Also called the First-Digit Law, it provides a pattern in the leading digits of some natural datasets. It has potential for application in drug design, particularly in identifying anomalies or inconsistencies in large chemical databases. In this section, we focus on the potential of Benford’s Law in drug design, and how it can enhance the quality of datasets, which is crucial for AI/ML and modeling.

Benford’s Law is a statistical principle that posits that the leading digit (initial non-zero digit) is often small in numerous (though not all) naturally occurring datasets. The likelihood *P* of observing a leading digit *d* (*d* ∈ {1, 2, …, 9}) is precisely described by Equation 9:

$$P\left(d\right) = \log_{10}\left(1 + \frac{1}{d}\right)$$

Despite being extensively utilized in various fields, including accounting, finance, and data analysis for anomaly and fraud detection, Benford’s Law has not yet been widely applied in drug design. Nevertheless, there is potential for applying Benford’s Law in drug design, particularly in identifying anomalies and inconsistencies in large chemical databases or datasets. For instance, scientists could employ Benford’s Law to recognize potential data entry errors, inconsistencies in experimental measurements, or biases in the reported data. Identifying and correcting such issues could enhance the quality of datasets utilized in various drug design tasks such as QSAR modeling, molecular docking, or virtual screening.

The Chi-squared (${\chi}^{2}$) test is then helpful in checking statistical significance against Benford’s distribution: if the *p*-value is smaller than a defined significance level (say 5%), the null hypothesis that the observed distribution of first digits fits the one expected under Benford’s Law can be rejected, indicating that the observed distribution deviates significantly from Benford’s Law.
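
A stdlib-only sketch of this check: extract first significant digits, form the χ² statistic against Benford’s expected counts, and compare it with the 5% critical value for 8 degrees of freedom (≈15.51). Powers of two are used here as a standard example of Benford-conforming data:

```python
import math

def first_digit(value):
    # First significant digit via scientific notation, e.g. 0.00345 -> 3.
    return int(f"{abs(value):e}"[0])

def benford_chi2(values):
    # Chi-squared statistic of observed first-digit counts vs Benford's Law.
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    stat = 0.0
    for d in range(1, 10):
        observed = digits.count(d)
        expected = n * math.log10(1.0 + 1.0 / d)
        stat += (observed - expected) ** 2 / expected
    return stat

CRITICAL_5PCT_DF8 = 15.507  # reject conformance at the 5% level above this
stat = benford_chi2([2.0**k for k in range(200)])  # powers of 2 follow Benford
```

In practice, the statistic (or its *p*-value) is computed for each dataset column under scrutiny and flagged when it exceeds the chosen critical value.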

Benford’s Law has been proposed as a check for manipulation of data in QSAR/QSPR models, since such datasets are usually subject to strong selection to attain high correlation values [Citation31]*. It can also be applied to the distribution of mRNA transcription data from a large number of organisms, as well as to solubility and activity data, with several datasets available from ChEMBL [Citation32] and NCBI [Citation33] following Benford’s Law distribution [Citation31]*.

Burgeoning data-centric technologies are pivotal in enhancing pattern detection while accentuating the influence of data quality and potential bias on processes like ML and modeling. One approach uses the distribution of the initial significant digits of critical parameters in medicinal chemistry (specifically log*P*, log*S*, and pKa, both predicted and observed) to evaluate their compliance with Benford’s Law, an underlying pattern discernible in numerous natural phenomena [Citation34]*. Data quality is heavily contingent upon datasets’ dimensions, diversity, and scale. The logarithm of the octanol/water partition coefficient (log*P*) estimates the predicted ability of a chemical to traverse a biological membrane. The solubility of a compound (measured as *Solubility* in mg/L or as the logarithm log*S*) has a crucial impact on the ability to deliver a chemical to its site of action in a sufficient dose and for a sufficiently long time. The negative logarithm of the acid dissociation constant, pKa, indicates the possible ionization states of compounds at physiological conditions, which also affect a compound’s solubility, bioavailability, and permeability. Distributions of experimentally determined values of these parameters for drug compounds follow Benford’s Law with statistical significance, but not as well as larger datasets of experimental or computationally obtained values, as also seen in their *p*-values.

LogPexp, pKa_exp, and Solubility_exp distributions show (non-statistically significant) deviations from Benford’s Law, especially for the first few digits. These deviations could indicate that these datasets have inherent biases, errors, or anomalies. LogP_ALOGPS, LogP_JCHEM, pKa_JCHEM, and LogS_ALOGPS show distributions with a closer fit to Benford’s Law for the first few digits, suggesting that the data may be more uniformly distributed across different orders of magnitude. However, deviations for later digits indicate that there may still be some anomalies or biases in the data. The distribution of LogS_exp shows deviations from Benford’s Law, particularly for the first few digits, but aligns more closely for the later digits, suggesting that specific ranges of values are overrepresented in the dataset. For Solubility_ALOGPS and LogPexp_NCI, there are deviations from Benford’s Law across all digits, suggesting that the data may be skewed or biased in some way. The distributions that deviate the most often come from smaller datasets. This could be explained by the ‘Law of Large Numbers’, which states that as a sample grows, its average result gets closer to the expected value for the whole population. In other words, smaller datasets have a higher chance of exhibiting deviations from expected patterns, such as Benford’s Law.

Drug-centric profiling may be overly restrictive or undersized, since there is a relatively small number of approved drug compounds (a few thousand); hence, deploying more extensive collections of predicted or experimentally verified values can recover the distribution typically observed in other natural phenomena. This approach may be instrumental in refining, profiling, ML, comprehensive dataset analysis, and other data-driven methodologies, thereby improving automatic data generation and compound design processes.

Another application is in data manipulation and fraud detection in chemical processes, reporting, and regulatory filing, since checking compliance with Benford’s Law can quickly assess whether the underlying numerical distributions are likely to have been sampled from a non-manipulated distribution.

ML and similar technologies are profoundly reliant on data quality. Benford’s Law is a rapid statistical tool to determine whether specific types of unbounded data that traverse multiple orders of magnitude are likely to align with natural phenomena. This methodology is particularly effective for large datasets, and the first significant digit distributions of experimental and predicted values of log*P*, pKa, and solubility can significantly impact drug design campaigns and processes.

These methods are just a few examples of the many probability distributions and approaches used in drug design. As the field continues to evolve, newer methods and algorithms may be developed and implemented to improve the drug discovery process.

Data integrity is key to ML monitoring, and fundamental procedures such as addressing missing values, range violations, feature analysis and engineering, and type mismatches must be performed prior to using data for training a model. In addition, subject domain knowledge is irreplaceable for finding and interpreting any anomaly or manipulation in the data presented. ML-based models are generally driven by pipelines with complex features and automated workflows that apply multiple transformations to the data before the model is trained. Studying the distributions in the data can thus provide at least preliminary observations of the underlying data and complement other integrity tests.
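
A hedged, minimal sketch of such pre-training checks on a toy record set (the field name, bounds, and records are purely illustrative):

```python
def integrity_report(records, field, lower, upper):
    # Count basic integrity violations for one numeric field:
    # missing values, type mismatches, and range violations.
    missing = sum(1 for r in records if r.get(field) is None)
    wrong_type = sum(
        1 for r in records
        if r.get(field) is not None and not isinstance(r[field], (int, float))
    )
    out_of_range = sum(
        1 for r in records
        if isinstance(r.get(field), (int, float)) and not (lower <= r[field] <= upper)
    )
    return {"missing": missing, "wrong_type": wrong_type, "out_of_range": out_of_range}

records = [
    {"logP": 2.1},    # plausible
    {"logP": None},   # missing value
    {"logP": "n/a"},  # type mismatch
    {"logP": 42.0},   # range violation for an assumed logP window
]
report = integrity_report(records, "logP", -10.0, 10.0)
```

Dedicated data-validation libraries generalize these checks; the sketch only illustrates the categories named above.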

## 4. Conclusion

In conclusion, the effective use of statistical distributions, particularly in AI/ML models, is central to the future of drug design. Substantial datasets are required to fully utilize AI models, which have increasingly gained prominence in this field. They can be drawn from automated labs, organs-on-a-chip, or functional organoids, among other sources. These models also rely on a variety of statistical distributions, including Normal, Uniform, Xavier/Glorot, Gaussian processes, Bernoulli, and others, each having its strengths and limitations. For instance, deep learning employs various distributions for weight initialization in neural networks and generating novel drug candidates. Reinforcement learning uses probability distributions for optimizing molecular properties, and Bayesian methods incorporate prior knowledge and uncertainty into models. Quantum computing and multi-objective optimization also utilize specific distributions, and personalized medicine relies on models like the Cox proportional hazards model.

Further, Benford’s Law, which outlines a pattern in the leading digits of natural datasets, may serve as a valuable tool for anomaly detection in large chemical databases. Utilizing Benford’s Law could enhance data quality, which is critical for AI/ML and modeling. For instance, essential parameters in medicinal chemistry, such as log*P*, log*S*, and pKa, can be evaluated for compliance with Benford’s Law, leading to improvements in profiling and data-driven methods.

The effective use of these technologies, distributions, and methodologies is central to the future of drug design, with the potential to vastly improve the efficiency and cost-effectiveness of the drug discovery process. Our contribution to the literature is emphasizing the potential of Benford’s Law in enhancing the quality of large and small datasets, ultimately improving the drug discovery process.

## 5. Expert opinion

While Benford’s Law has not yet been a prominent method in drug design, it could be employed to detect anomalies in data and improve the quality of the datasets used in the drug discovery process.

Benford’s Law can also identify data gaps in distributions and thus help better design experiments, improving the quality and representativeness of datasets and planning and guiding optimization processes while controlling the use of resources. Datasets of more compounds than the few thousand existing approved drugs are needed to better represent the full phenomena of bioactive (and bioinactive) compounds.

Data-driven discovery will become ever more present in several fields, including the pharmaceutical and medical ones. As such, it is envisioned that more and better data, along with methods to appropriately deal with data bias, algorithm bias, and their effects, as well as data ownership, tools, access, and fair use, will be crucial aspects to focus on.

Issues such as accountability, fraud, data manipulation, consent, representativeness, and reliability of assumptions will be central for further research and development.

It will also be necessary for regulators to consider the effects of AI/ML and distribution-based devices and methods, for example through preregistered trials and research, open-source practices, and ethical supervision of data collection, processing, storage, featurisation, model building, model deployment, and inference, as well as effects, including indirect ones, on different populations and individuals.

Machine vision is consolidating, and initial successes are already plentiful. Though slower so far, we expect successes to also come from other AI/ML areas such as signal processing, time series, automated monitoring, and predictive analysis, and from tools to process, in a secure and quick manner, the vast amounts of curated data presently stored in pharmaceutical companies.

## Article highlights

As the relevance of AI/ML and data-driven discovery escalates in fields like pharmaceuticals and medical devices, meticulous scrutiny of datasets utilizing principles like Benford’s Law is anticipated to enhance data integrity and guide efficient resource allocation, experimental planning, and strategy.

In the era of data-driven discovery, addressing critical aspects including bias mitigation, algorithm effectiveness, data stewardship, and fraud prevention is essential.

Harnessing Benford’s Law and other distributions in drug design provides a potent and fast strategy to detect data anomalies, fill data gaps, and enhance dataset quality.

For a more comprehensive and accurate portrayal of bioactive and bioinactive compounds and their phenomena, data sets need to encompass a broader spectrum than the mere few thousand presently approved drugs.

These considerations, coupled with the ability to generate distribution-based, automated experimental data and securely and swiftly process vast volumes of proprietary, curated data, could revolutionize areas for drug design.

Advances in other areas, such as AI for signal processing and for sparse, limited, and noisy data, can follow the successes seen in the machine vision field.

## Declaration of interest

The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

## Reviewer disclosures

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

## Additional information

### Funding

## References

- Frye L, Bhat S, Akinsanya K, et al. From computer-aided drug discovery to computer-driven drug discovery. Drug Discovery Today. 2021;39:111–117. ISSN 1740-6749. doi: 10.1016/j.ddtec.2021.08.001
- Peña-Guerrero J, Nguewa PA, García-Sosa AT. Machine learning, artificial intelligence, and data science breaking into drug design and neglected diseases. Wiley Interdiscip Rev Comput Mol Sci. 2021;11(5):e1513. doi: 10.1002/wcms.1513
- Sadybekov AV, Katritch V. Computational approaches streamlining drug discovery. Nature. 2023;616(7958):673–685. doi: 10.1038/s41586-023-05905-z
- López-López E, Fernández-de Gortari E, Medina-Franco JL. Yes SIR! On the structure–inactivity relationships in drug discovery. Drug Discovery Today. 2022;27(8):2353–2362. ISSN 1359-6446. doi: 10.1016/j.drudis.2022.05.005
- Ansari A, White AD. Learning peptide properties with positive examples only. bioRxiv. 2023:2023.06.01.543289. doi: 10.1101/2023.06.01.543289
- García-Sosa AT, Maran U, Hetenyi C. Molecular property filters describing pharmacokinetics and drug binding. Curr Med Chem. 2012;19(11):1646–1662. PMID: 22376034. doi: 10.2174/092986712799945021
- Food and Drug Administration. 2023 Sep 06. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- Science.org. 2023 Sep 06. FDA no longer needs to require animal tests for human drug trials. https://www.science.org/content/article/fda-no-longer-needs-require-animal-tests-human-drug-trials
- Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In Appearing in Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS) 2010; Chia Laguna Resort, Sardinia, Italy, (Vol 9). JMLR, W&CP; 2010. p. 249–256.
- Bingham G, Miikkulainen R. AutoInit: analytic signal-preserving weight initialization for neural networks. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence; 2023. arXiv:2021.08958
- Gómez-Bombarelli R, Wei JN, Duvenaud D, et al. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Sci. 2018;4(2):268–276. doi: 10.1021/acscentsci.7b00572
- Chen Y, Wang Z, Wang L, et al. Deep generative model for drug design from protein target sequence. J Cheminform. 2023 Mar 28;15(1):38. PMID: 36978179; PMCID: PMC10052801. doi: 10.1186/s13321-023-00702-2
- Ballarotto M, Willems S, Stiller T, et al. De novo design of nurr1 agonists via fragment-augmented generative deep learning in low-data regime. J Med Chem. 2023;66(12):8170–8177. doi: 10.1021/acs.jmedchem.3c00485
- Zhou Z, Kearnes S, Li L, et al. Optimization of molecules via deep reinforcement learning. Sci Rep. 2019;9:10752. doi: 10.1038/s41598-019-47148-x
- Zhang Y, Li S, Xing M, et al. Universal approach to de novo drug design for target proteins using deep reinforcement learning. ACS Omega. 2023 Feb 6;8(6):5464–5474. PMID: 36816653; PMCID: PMC9933084. doi: 10.1021/acsomega.2c06653
- van Tilborg D, Alenicheva A, Grisoni F. Exposing the limitations of molecular machine learning with activity cliffs. J Chem Inf Model. 2022;62(23):5938–5951. doi: 10.1021/acs.jcim.2c01073
- Gelman A, Carlin JB, Stern HS, et al. Bayesian data analysis. 3rd ed. CRC press; 2021. http://www.stat.columbia.edu/~gelman/book/BDA3.pdf
- Ruberg SJ, Beckers F, Hemmings R, et al. Application of Bayesian approaches in drug development: starting a virtuous cycle. Nat Rev Drug Discov. 2023;22(3):235–250. doi: 10.1038/s41573-023-00638-0
- Shahriari B, Swersky K, Wang Z, et al. Taking the human out of the loop: a review of Bayesian optimization. Proc IEEE. 2016;104(1):148–175. doi: 10.1109/JPROC.2015.2494218
- Koller D, Friedman N. Probabilistic graphical models: principles and techniques. MIT press; 2009. https://mitpress.mit.edu/9780262013192/probabilistic-graphical-models/
- Zurek WH. Decoherence, einselection, and the quantum origins of the classical. Rev Mod Phys. 2003;75(3):715. doi: 10.1103/RevModPhys.75.715
- Cao Y, Romero J, Olson JP, et al. Quantum chemistry in the age of Quantum computing. Chem Rev. 2019;119(19):10856–10915. doi: 10.1021/acs.chemrev.8b00803
- Reiher M, Wiebe N, Svore KM, et al. Elucidating reaction mechanisms on quantum computers. Proc Nat Acad Sci. 2017;114(29):7555–7560. doi: 10.1073/pnas.1619152114
- Verhellen J. Graph-based molecular Pareto optimisation. Chem Sci. 2022;13(25):7526–7535. doi: 10.1039/D2SC00821A
- Verbeke G, Molenberghs G. Linear mixed models for longitudinal data. Springer Science & Business Media; 2000. https://link.springer.com/book/10.1007/978-1-4419-0300-6
- Kuai L, O’Keeffe T, Arico-Muendel C, et al. Randomness in DNA encoded library selection data can be modeled for more reliable enrichment calculation. SLAS Discovery. 2018;23(5):405–416. doi: 10.1177/2472555218757718
- Zhang CL, Popp FA. Log-normal distribution of physiological parameters and the coherence of biological systems. Med Hypotheses. 1994;43(1):11–16. doi: 10.1016/0306-9877(94)90042-6
- Carroll KJ. On the use and utility of the Weibull model in the analysis of survival data. Control Clin Trials. 2003;24(6):682–701. doi: 10.1016/S0197-2456(03)00072-2
- Sun Y, Jusko WJ. Transit compartments versus gamma distribution function to model signal transduction processes in pharmacodynamics. J Pharm Sci. 1998;87(6):732–737. doi: 10.1021/js970414z
- Wu Y, Shih WJ, Moore DF, et al. Elicitation of a beta prior for Bayesian inference in clinical trials. Biom J. 2008;50(2):212–223. doi: 10.1002/bimj.200710390
- Orita M, Moritomo A, Niimi T, et al. Use of Benford’s law in drug discovery data. Drug Discov Today. 2010;15(9–10):328–331. Epub 2010 Mar 16. PMID: 20298800. doi: 10.1016/j.drudis.2010.03.003
- European Bioinformatics Institute. 2023 Sep 06. ChEMBL. https://www.ebi.ac.uk/chembl
- National Center for Biotechnology Information. 2023 Sep 06. NCBI. https://www.ncbi.nlm.nih.gov
- García-Sosa AT. Benford’s law in medicinal chemistry: implications for drug design. Future Med Chem. 2019;11(17):2247–2253. PMID: 31581910. doi: 10.4155/fmc-2019-0006
- Dai Y, Guo C, Guo W, et al. Drug–drug interaction prediction with Wasserstein Adversarial Autoencoder-based knowledge graph embeddings. Brief Bioinform. 2021;22(4). doi: 10.1093/bib/bbaa256