Introduction to Predictive Toxicology Tools and Methods
Laura Suter-Dick and Friedlieb Pfannkuch
1.1 Computational Tools and Bioinformatics
1.1.1 In Silico Prediction Tools
Computational tools are used in many life sciences research areas, including toxicity prediction. They take advantage of complex mathematical models to predict the effects caused by a given compound on an organism. Due to the complexity of the possible interactions between a treatment and a patient, and the diversity of possible outcomes, models are applied to well-defined, specific questions, such as DNA-damaging potential, estimation of the dose needed to elicit an effect in a patient, or identification of relevant gene expression changes.
In silico tools make use of information on chemical structures and of the immense legacy of existing data, which allows interactions between chemical structures, physicochemical properties, and biological processes to be inferred. These methods are the farthest removed from traditional animal studies, since they rely on existing databases rather than on newly generated experimental animal data.
Due to the complexity of this task, only a fairly small number of endpoints can be predicted with acceptable accuracy by commonly employed in silico tools such as DEREK, VITIC, and M-Case. To improve the current models and to expand to additional prediction algorithms, further validation and extension of the underlying databases are ongoing.
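The knowledge-based approach behind tools such as DEREK can be illustrated with a deliberately minimal sketch: a library of structural alerts is matched against a compound's structure. The alert list, SMILES strings, and substring matching below are simplified illustrations only; real tools use curated expert rules and proper substructure (e.g., SMARTS) matching.

```python
# Toy illustration of rule-based structural-alert screening.
# Alerts here are naive SMILES substrings paired with an invented
# toxicity concern; real systems use full substructure matching.

STRUCTURAL_ALERTS = {
    "N(=O)=O": "aromatic nitro group (mutagenicity concern)",
    "N=N": "azo group (potential mutagenicity)",
    "C(=O)Cl": "acyl chloride (reactivity concern)",
}

def screen(smiles: str) -> list[str]:
    """Return the alerts whose pattern occurs in the SMILES string."""
    return [concern for pattern, concern in STRUCTURAL_ALERTS.items()
            if pattern in smiles]

# Usage: a nitrobenzene-like structure triggers one alert
hits = screen("c1ccccc1N(=O)=O")
print(hits)  # → ['aromatic nitro group (mutagenicity concern)']
```

The point of the sketch is only the principle: prediction is derived from previously codified structure-toxicity knowledge, not from a new experiment.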
Similarly, modeling and simulation (M&S) can generate mathematical models able to simulate and therefore predict how a compound will behave in humans before clinical data become available. In the field of nonclinical safety, complex models allow for a prediction of the effect of an organism on a compound (pharmacokinetic models) as well as, to some extent, pharmacodynamic extrapolations, based on data generated in animal models as well as in in vitro human systems.
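The simplest pharmacokinetic model of this kind, a one-compartment model with an intravenous bolus dose, can be written down directly; the parameter values below are invented for illustration and do not describe any real compound.

```python
import math

def concentration(dose_mg: float, vd_l: float, ke_per_h: float, t_h: float) -> float:
    """Plasma concentration (mg/L) at time t for a one-compartment IV bolus
    model: C(t) = (Dose / Vd) * exp(-ke * t)."""
    return (dose_mg / vd_l) * math.exp(-ke_per_h * t_h)

# Illustrative (assumed) parameters: 100 mg dose, 50 L volume of
# distribution, elimination rate constant 0.1 per hour.
dose, vd, ke = 100.0, 50.0, 0.1
half_life = math.log(2) / ke                   # ≈ 6.93 h

c0 = concentration(dose, vd, ke, 0.0)          # 2.0 mg/L at t = 0
c_half = concentration(dose, vd, ke, half_life)  # 1.0 mg/L after one half-life
```

More realistic models add absorption, multiple compartments, and interindividual variability, but the structure, predicting what the organism does to the compound from a small set of parameters, is the same.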
In addition to the in silico and modeling tools described above, the dramatically increasing amount of toxicologically relevant data needs to be appropriately collected and managed. All of the "new" technologies produce very high volumes of data, so bioinformatics tools that can gather data from diverse sources and mine them for relevant patterns of change are vital. For this purpose, large databases are necessary, along with bioinformatics tools that can handle diverse data types, multivariate analysis, and supervised and unsupervised discrimination algorithms. These tools combine advanced statistics with the large data sets stored in databases generated using technologies such as omics or high-content imaging.
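Unsupervised discrimination of the kind mentioned above can be sketched with a minimal k-means clustering over toy "profiles". Real omics data have thousands of features per sample; the two-feature vectors and seed centroids below are invented purely for illustration.

```python
# Minimal k-means (k = 2) in pure Python, as a sketch of unsupervised
# pattern detection on multivariate data.

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iterations=10):
    """Assign points to nearest centroid, then recompute centroids."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else cent
            for cl, cent in zip(clusters, centroids)
        ]
    return centroids, clusters

# Toy data: two "control-like" and two "treated-like" samples separate
# cleanly into two clusters.
samples = [(0.1, 0.2), (0.2, 0.1), (2.0, 2.1), (2.1, 1.9)]
centroids, clusters = kmeans(samples, [(0.0, 0.0), (1.0, 1.0)])
```

In practice such clustering is one step in a pipeline that also includes normalization, dimensionality reduction, and supervised classification against annotated reference data.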
1.2 Omics Technologies
The omics technologies arose with the advent of advanced molecular biology techniques able to determine changes in the whole transcriptome, proteome, or metabolome. These powerful techniques were considered the ultimate holistic approach to tackle many biological questions, among them toxicological assessment. Several companies have invested in these areas of toxicological research.
1.2.1 Toxicogenomics (Transcriptomics)
Toxicogenomics is the most widespread of the omics technologies. Predictive approaches are based on databases of compounds (toxic/nontoxic) generated by (pharmaceutical) companies as well as by commercial vendors in the 1990s. All share the same focus of investigation: target organ toxicity, primarily to the liver and the kidney.
In addition, gene expression data are often the basis for mechanistic understanding of biological processes in several fields, including toxicology, pharmacology, and disease phenotyping. Thus, transcriptomic data can be used as a merely predictive tool, as a mechanistic tool, or as a combination of both.
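The predictive use of such databases can be sketched as signature matching: a query expression profile is compared against reference profiles of compounds with known outcomes, and the best-correlating signature is reported. The gene values and labels below are invented for illustration; real signatures span thousands of genes and use validated classifiers.

```python
# Sketch of signature-based prediction from transcriptomic data using
# Pearson correlation against reference profiles (values are made up).

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

REFERENCES = {
    "hepatotoxic signature": [2.1, 1.8, -1.5, 0.2],
    "nontoxic signature":    [0.1, -0.2, 0.3, 0.0],
}

def predict(profile):
    """Return the reference label whose profile best correlates with the query."""
    return max(REFERENCES, key=lambda name: pearson(profile, REFERENCES[name]))

print(predict([1.9, 1.5, -1.2, 0.1]))  # → hepatotoxic signature
```

The same data can equally be read mechanistically, by asking which pathways the differentially expressed genes belong to, which is why transcriptomics serves both predictive and mechanistic roles.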