Deconvoluting gene and environment interactions to develop an “epigenetic score meter” of disease

Abstract

Human health is determined by both genetics (G) and environment (E). This is clearly illustrated by groups of individuals who, when exposed to the same environmental factor, show differential responses. A quantitative measure of gene-environment interaction (GxE) effects has not been developed, and in some instances a clear consensus on the concept has not even been reached; for example, whether cancer emerges predominantly from “bad luck” or “bad lifestyle” is still debated. In this article, we provide a panel of examples of GxE interactions as drivers of pathogenesis. We highlight how epigenetic regulation can represent a common connecting aspect of their molecular bases. Our argument converges on the concept that GxE interactions are recorded in the cellular epigenome, which might represent the key to deconvoluting these multidimensional, intricate layers of regulation. Developing a key to decode this epigenetic information would provide quantitative measures of disease risk. Analogously to the epigenetic clock introduced to estimate biological age, we provocatively propose the theoretical concept of an “epigenetic score-meter” to estimate disease risk.
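
As an illustration of the kind of quantitative read-out an “epigenetic score-meter” might provide, the minimal sketch below computes a weighted linear score from CpG methylation beta values, in the spirit of published epigenetic clocks. The CpG identifiers, weights, and intercept are placeholders for illustration only; they are not values proposed in the article.

```python
# Minimal sketch of an "epigenetic score-meter": a weighted linear combination
# of CpG methylation beta values (0..1), analogous in form to epigenetic clocks.
# CpG IDs, weights, and the intercept below are illustrative placeholders only.

PLACEHOLDER_WEIGHTS = {
    "cg00000001": 0.8,
    "cg00000002": -1.2,
    "cg00000003": 0.3,
}
INTERCEPT = 0.1


def epigenetic_score(beta_values: dict[str, float],
                     weights: dict[str, float] = PLACEHOLDER_WEIGHTS,
                     intercept: float = INTERCEPT) -> float:
    """Return a linear score from CpG beta values; missing CpGs contribute 0."""
    return intercept + sum(w * beta_values.get(cpg, 0.0) for cpg, w in weights.items())


if __name__ == "__main__":
    sample = {"cg00000001": 0.65, "cg00000002": 0.40, "cg00000003": 0.90}
    print(f"Illustrative epigenetic score: {epigenetic_score(sample):.3f}")
```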

REACH out-numbered! The future of REACH and animal numbers

Abstract

The EU’s REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Regulation requires animal testing only as a last resort. However, our study (Knight et al., 2023) in this issue reveals that approximately 2.9 million animals had been used for REACH testing for reproductive toxicity, developmental toxicity, and repeated-dose toxicity alone as of December 2022. Additional tests requiring about 1.3 million more animals are currently underway. As compliance checks continue, more animal tests are anticipated. According to the European Chemicals Agency (ECHA), 75% of read-across methods have been rejected during compliance checks. Here, we estimate that 0.6 to 3.2 million animals have been used for other endpoints, likely at the lower end of this range. The ongoing discussion about the grouping of 4,500 registered petrochemicals can still have a major impact on these numbers. The 2022 amendment of REACH is estimated to add 3.6 to 7.0 million animals. This information comes as the European Parliament is set to consider changes to REACH that could further increase animal testing. Two proposals currently under discussion would likely necessitate new animal testing: extending the requirement for a chemical safety assessment (CSA) to Annex VII substances could add 1.6 to 2.6 million animals, and the registration of polymers poses a challenge comparable to the petrochemical discussion. These findings highlight the importance of understanding the current state of REACH animal testing for the upcoming debate on REACH revisions, which is an opportunity to focus on reducing animal use.
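
For orientation, the figures quoted above can be combined arithmetically; the sketch below simply tallies the low and high ends of the estimates reported in the abstract (in millions of animals). It ignores possible overlaps between categories and mixes past use with projections, so it is only an illustration of how the numbers combine, not a new estimate.

```python
# Rough tally of the animal-use estimates quoted in the abstract (in millions).
# Point estimates are treated as (low, high) ranges with equal bounds; overlaps
# between categories are ignored. Illustration only, not a new estimate.

estimates = {
    "REACH testing to Dec 2022 (repro/dev/repeated-dose)": (2.9, 2.9),
    "Additional tests already underway": (1.3, 1.3),
    "Other endpoints": (0.6, 3.2),
    "2022 REACH amendment": (3.6, 7.0),
    "CSA extension to Annex VII substances (proposal)": (1.6, 2.6),
}

low = sum(lo for lo, hi in estimates.values())
high = sum(hi for lo, hi in estimates.values())
print(f"Combined range: {low:.1f} to {high:.1f} million animals")
```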

G × E interactions as a basis for toxicological uncertainty

Abstract

To transfer toxicological findings from model systems, e.g. animals, to humans, standardized safety factors are applied to account for intra-species and inter-species variabilities. An alternative approach would be to measure and model the actual compound-specific uncertainties. This biological concept assumes that all observed toxicities depend not only on the exposure situation (environment = E), but also on the genetic (G) background of the model (G × E). As a quantitative discipline, toxicology needs to move beyond merely qualitative G × E concepts. Research programs are required that determine the major biological variabilities affecting toxicity and categorize their relative weights and contributions. In a complementary approach, detailed case studies need to explore the role of genetic backgrounds in the adverse effects of defined chemicals. In addition, current understanding of the selection and propagation of adverse outcome pathways (AOP) in different biological environments is very limited. To improve understanding, a particular focus is required on modulatory and counter-regulatory steps. For quantitative approaches to address uncertainties, the concept of “genetic” influence needs a more precise definition. What is usually meant by this term in the context of G × E is the set of protein functions encoded by the genes. Besides the gene sequence, the regulation of gene expression and function should also be accounted for. This widened concept of past and present “gene expression” influences is summarized here as Ge. The concept of “environment” also needs some reconsideration in situations where exposure timing (Et) is pivotal: prolonged or repeated exposure to the insult (chemical, physical, lifestyle) affects Ge. This implies that it changes the model system. The interaction of Ge with Et might be denoted as Ge × Et. We provide here general explanations and specific examples for this concept and show how it could be applied in the context of New Approach Methodologies (NAM).
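
As a toy illustration of the Ge × Et notation (not a model from the article), the sketch below scores an adverse response as a linear function of dose, a gene-expression background term (Ge), and an exposure-timing term (Et), plus an explicit Ge × Et interaction term; all coefficients are arbitrary placeholders.

```python
# Toy illustration (not from the article) of a Ge x Et interaction: the same
# external dose yields different adverse-outcome scores depending on the
# gene-expression background (ge) and on exposure timing/duration (et), via an
# explicit interaction term. All coefficients are arbitrary placeholders.

def adverse_outcome_score(dose: float, ge: float, et: float,
                          b_dose: float = 1.0, b_ge: float = 0.5,
                          b_et: float = 0.3, b_interaction: float = 0.8) -> float:
    """Linear model with a Ge x Et interaction term on top of the dose effect."""
    return b_dose * dose + b_ge * ge + b_et * et + b_interaction * ge * et


# Same dose, different Ge/Et combinations -> different predicted responses.
for ge, et in [(0.0, 1.0), (1.0, 1.0), (1.0, 3.0)]:
    print(f"Ge={ge}, Et={et}: score={adverse_outcome_score(dose=1.0, ge=ge, et=et):.2f}")
```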

Making in silico predictive models for toxicology FAIR

Abstract

In silico predictive models for toxicology include quantitative structure-activity relationship (QSAR) and physiologically based kinetic (PBK) approaches to predict physico-chemical and ADME properties, toxicological effects, and internal exposure. Such models are used to fill data gaps as part of chemical risk assessment. There is a growing need to ensure that in silico predictive models for toxicology are available for use and are reproducible. This paper describes how the FAIR (Findable, Accessible, Interoperable, Reusable) principles, developed for data sharing, have been applied to in silico predictive models. In particular, this investigation has focussed on how the FAIR principles could be applied to improve regulatory acceptance of predictions from such models. Eighteen principles have been developed that cover all aspects of FAIR. It is intended that FAIRification of in silico predictive models for toxicology will increase their use and acceptance.
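
One practical way the FAIR principles can be translated to models is through machine-readable metadata. The sketch below shows a hypothetical minimal metadata record for a QSAR model; the field names and values are illustrative assumptions and do not reproduce the eighteen principles developed in the paper.

```python
# Hypothetical minimal machine-readable metadata record for an in silico model,
# illustrating the kind of information FAIRification implies (findable identifier,
# access route, interoperable formats, reuse conditions). Field names and values
# are illustrative placeholders, not the eighteen principles defined in the paper.

import json

model_metadata = {
    "identifier": "doi:10.xxxx/example-qsar-model",   # placeholder identifier
    "title": "Example QSAR model for acute aquatic toxicity",
    "model_type": "QSAR",
    "endpoint": "acute aquatic toxicity (fish LC50)",
    "algorithm": "random forest",
    "training_data": "https://example.org/dataset",   # placeholder URL
    "applicability_domain": "described in an accompanying QMRF-style report",
    "license": "CC-BY-4.0",
    "version": "1.0.0",
    "contact": "maintainer@example.org",               # placeholder contact
}

print(json.dumps(model_metadata, indent=2))
```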

Genomic and proteomic biomarker landscape in clinical trials

Abstract

The use of molecular biomarkers to support disease diagnosis, monitor disease progression, and guide drug treatment has gained traction in the last decades. While only a dozen biomarkers have been approved by the FDA for use in the clinic, many more are evaluated in the context of translational research and clinical trials. Furthermore, the information on which biomarkers are measured, for which purpose, and in relation to which conditions is not readily accessible: biomarkers used in clinical studies available through resources such as ClinicalTrials.gov are described as free text, posing significant challenges to finding, analyzing, and processing them by both humans and machines. We present a text mining strategy to identify proteomic and genomic biomarkers used in clinical trials and classify them according to the methodologies by which they are measured. We find more than 3,000 biomarkers used in the context of 2,600 diseases. By analyzing this dataset, we uncover patterns of biomarker use across therapeutic areas over time, including biomarker type and specificity. These data are made available at the Clinical Biomarker App at https://www.disgenet.org/biomarkers/, a new portal that enables the exploration of biomarkers extracted from the clinical studies available at ClinicalTrials.gov and enriched with information from the scientific literature. The App features several metrics that assess the specificity of the biomarkers, facilitating their selection and prioritization. Overall, the Clinical Biomarker App is a valuable and timely resource on clinical biomarkers that can accelerate biomarker discovery, development, and application.
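
The full text-mining pipeline is not reproduced here; as a minimal illustration of the general approach, the sketch below matches a small placeholder dictionary of gene/protein biomarkers in free-text outcome descriptions and tags a likely measurement methodology by keyword. Dictionary entries, keywords, and category labels are illustrative assumptions, not the resources used in the paper.

```python
# Minimal, illustrative sketch (not the pipeline described in the paper): match a
# small placeholder dictionary of gene/protein biomarkers in free-text clinical-trial
# outcome descriptions and tag a likely measurement methodology by keyword.

import re

BIOMARKERS = {"HER2", "EGFR", "BRCA1", "PD-L1"}          # placeholder dictionary
METHOD_KEYWORDS = {                                       # placeholder keywords
    "immunohistochemistry": "protein expression",
    "ihc": "protein expression",
    "sequencing": "genomic alteration",
    "pcr": "genomic alteration",
    "elisa": "protein quantification",
}


def annotate(text: str) -> dict:
    """Return biomarkers mentioned in the text and likely measurement methods."""
    found = {m for m in BIOMARKERS if re.search(rf"\b{re.escape(m)}\b", text, re.I)}
    methods = {label for kw, label in METHOD_KEYWORDS.items()
               if re.search(rf"\b{re.escape(kw)}\b", text, re.I)}
    return {"biomarkers": sorted(found), "methodologies": sorted(methods)}


print(annotate("HER2 status assessed by immunohistochemistry (IHC) and confirmed by PCR."))
```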

A continuous in silico learning strategy to identify safety liabilities in compounds used in the leather and textile industry

Abstract

There is a widely recognized need to reduce the impact of human activity on the environment. Many companies in the leather and textile industry (LTI), aware that they produce a significant amount of residues (Keßler et al., 2021; Liu et al., 2021), are adopting measures to reduce the environmental impact of their processes, starting with a more comprehensive characterization of the chemical risk associated with the substances commonly used in the LTI. The present work contributes to these efforts by compiling and toxicologically annotating the substances used in the LTI, supporting a continuous learning strategy for characterizing their chemical safety. This strategy combines data collection from public sources, experimental methods, and in silico predictions to characterize four different endpoints: carcinogenicity, mutagenicity, and reproductive toxicity (CMR), endocrine disruption (ED), persistence, bioaccumulation, and toxicity (PBT), and very persistent and very bioaccumulative (vPvB) properties. We present the results of a prospective validation exercise confirming that in silico methods can produce reasonably good hazard estimations and fill knowledge gaps in the LTI chemical space. The proposed protocol can speed up the process and optimize the use of resources, including the lives of experimental animals, contributing to the identification of potentially harmful substances and their possible replacement by safer alternatives, thus reducing the environmental footprint and the impact on human health.
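
The sketch below outlines the kind of decision logic such a continuous learning strategy implies; it is an assumption-laden illustration, not the authors’ actual protocol. For each substance and endpoint it prefers available experimental annotations, falls back to an in silico prediction otherwise, and flags low-confidence predictions for experimental follow-up.

```python
# Schematic decision logic for a continuous-learning hazard characterization loop
# (an illustration of the general idea, not the authors' actual protocol).
# Endpoints follow the abstract: CMR, ED, PBT, vPvB.

ENDPOINTS = ("CMR", "ED", "PBT", "vPvB")


def characterize(substance: dict) -> dict:
    """Prefer experimental annotations; otherwise use in silico predictions,
    flagging low-confidence predictions for experimental follow-up."""
    result = {}
    for ep in ENDPOINTS:
        experimental = substance.get("experimental", {}).get(ep)
        if experimental is not None:
            result[ep] = {"call": experimental, "source": "experimental"}
            continue
        prediction = substance.get("in_silico", {}).get(ep)  # (label, confidence)
        if prediction is None:
            result[ep] = {"call": None, "source": "data gap"}
        else:
            label, confidence = prediction
            result[ep] = {"call": label, "source": "in silico",
                          "needs_testing": confidence < 0.7}  # placeholder threshold
    return result


example = {"experimental": {"CMR": "negative"},
           "in_silico": {"ED": ("positive", 0.62), "PBT": ("negative", 0.91)}}
print(characterize(example))
```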