Refining Drug-Induced Cholestasis Prediction: An Explainable Consensus Model Integrating Chemical and Biological Fingerprints

Abstract

Effective drug safety assessment, guided by the 3R principle (Replacement, Reduction, Refinement) to minimize animal testing, is critical in early drug development. Drug-induced liver injury (DILI), particularly drug-induced cholestasis (DIC), remains a major challenge. This study introduces a computational method for predicting DIC by integrating PubChem substructure fingerprints with biological data from liver-expressed targets and pathways, alongside nine hepatic transporter inhibition models. To address class imbalance in the public cholestasis dataset, we employed undersampling, constructing a compact and robust consensus model from models trained on distinct balanced subsets of the data. The most effective baseline model, which combined PubChem substructure fingerprints, pathway data, and hepatic transporter inhibition predictions, achieved a Matthews correlation coefficient (MCC) of 0.29 and a sensitivity of 0.79, as validated through 10-fold cross-validation. Subsequently, target prediction using four publicly available tools was employed to enrich the sparse compound-target interaction matrix. Although this approach showed lower sensitivity than experimentally derived targets and pathways, it highlighted the value of incorporating specific systems-biology-related information. Feature importance analysis identified albumin as a potential target linked to cholestasis within our predictive model, suggesting a connection worth further investigation. By employing an expanded consensus model and applying probability range filtering, the refined method achieved an MCC of 0.38 and a sensitivity of 0.80, thereby enhancing decision-making confidence. This approach advances DIC prediction by integrating biological and chemical descriptors, offering a reliable and explainable model.
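The undersampling, consensus, and probability-range-filtering workflow described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trivial model interface, the 0.4-0.6 abstention band, and the helper names are assumptions for the sketch.

```python
import random
from math import sqrt

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def balanced_subsets(pos, neg, n_subsets, seed=0):
    """Undersample the majority (here: negative) class: each subset keeps
    all positives plus a random negative sample of equal size; one member
    model would be trained per subset."""
    rng = random.Random(seed)
    return [(list(pos), rng.sample(neg, len(pos))) for _ in range(n_subsets)]

def consensus_predict(models, x, lo=0.4, hi=0.6):
    """Average member probabilities; abstain (probability range filtering)
    when the mean falls in the low-confidence band (lo, hi)."""
    p = sum(m(x) for m in models) / len(models)
    if lo < p < hi:
        return None  # filtered out: prediction too uncertain to report
    return 1 if p >= hi else 0
```

Filtering out predictions in the ambiguous probability band trades coverage for confidence, which is how the refined model raises MCC while keeping sensitivity high on the predictions it does make.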

Causal, Predictive or Observational? Different Understandings of Key Event Relationships for Adverse Outcome Pathways and their implications on practice

Abstract

The Adverse Outcome Pathways (AOPs) framework is pivotal in toxicology, but the terminology describing Key Event Relationships (KERs) varies within AOP guidelines. This study examined the usage of causal, observational, and predictive terms in AOP documentation and their adaptation in AOP development. A literature search and text analysis of key AOP guidance documents revealed nuanced usage of these terms, with KERs often described as both causal and predictive. The adaptation of terminology varies across AOP development stages. Evaluation of KER causality often relies on targeted blocking experiments and weight-of-evidence assessments in the putative and qualitative stages. Our findings highlight a potential mismatch between terminology in guidelines and methodologies in practice, particularly in inferring causality from predictive models. We argue for careful consideration of terms like causal and essential to facilitate interdisciplinary communication. Furthermore, integrating known causality into quantitative AOP models remains a challenge.

The long way from raw data to NAM-based information: Overview on data layers and processing steps

Abstract

Toxicological test methods generate raw data and provide instructions on how to use these to determine a final outcome such as a classification of test compounds as hits or non-hits. The data processing pipeline provided in the test method description is often highly complex. Usually, multiple layers of data, ranging from a machine-generated output to the final hit definition, are considered. Transition between each of these layers often requires several data processing steps. As changes in any of these processing steps can impact the final output of new approach methods (NAMs), the processing pipeline is an essential part of a NAM description and should be included in reporting templates such as the ToxTemp. The same raw data, processed in different ways, may result in different final outcomes that may affect the readiness status and regulatory acceptance of the NAM, as an altered output can affect robustness, performance, and relevance. Data management, processing, and interpretation are therefore important elements of a comprehensive NAM definition. We aim to give an overview of the most important data levels to be considered during the development and application of a NAM. In addition, we illustrate data processing and evaluation steps between these data levels. As NAMs are increasingly standard components of the spectrum of toxicological test methods used for risk assessment, awareness of the significance of data processing steps in NAMs is crucial for building trust, ensuring acceptance, and fostering the reproducibility of NAM outcomes.

Animal-free Safety Assessment of Chemicals: Project Cluster for Implementation of Novel Strategies (ASPIS) definition of new approach methodologies

Abstract

Since the release of the U.S. National Academy’s report calling for toxicology to evolve from an observation-based to a mechanism-based science (National Research Council, 2007), scientific advances have shown that mechanistic approaches provide a deeper understanding of hazards associated with chemical exposures. New approach methodologies (NAMs) have emerged to assess the hazards and risks associated with exposure to anthropogenic and/or nonanthropogenic stressors within the context of reduce, refine, and replace (the 3Rs). Replacement refers to achieving a research goal without using animals. Reduction means applying methods that allow an investigator to obtain comparable information and precision using fewer animals. Refinement refers to changes in procedures that decrease or eliminate the animals’ pain, stress, and discomfort both during experimental procedures and in their daily social and physical environments (Russell & Burch, 1959). The development, acceptance, and implementation of NAMs have become an international priority for human health and that of wildlife and ecosystems. The global commitment to nonanimal research is driven by societal values on animal welfare and the uncertainty of mammalian model species as reliable human surrogates. In addition, NAM-based information can potentially unite the different branches of toxicology by its relevance in protecting human health, wildlife, and ecosystems, thereby contributing to public safety, ecological resilience, and sustainability.

Transcriptomic changes and mitochondrial toxicity in response to acute and repeat dose treatment with brequinar in human liver and kidney in vitro models

Abstract

The potent dihydroorotate dehydrogenase (DHODH) inhibitor brequinar has been investigated as an anticancer, immunosuppressive, and antiviral pharmaceutical agent. However, its toxicity is still poorly understood. We investigated the cellular responses of primary human hepatocytes (PHH) and telomerase-immortalised human renal proximal tubular epithelial cells (RPTEC/TERT1) after a single 24-h exposure to up to 100 μM brequinar. Additionally, RPTEC/TERT1 cells underwent repeated daily exposure for five consecutive days at 0.3, 3, and 20 μM. Transcriptomic analysis revealed that PHH were less sensitive to brequinar treatment than RPTEC/TERT1 cells. Upregulation of various phase I and II drug-metabolising enzymes, particularly cytochrome P450 (CYP) 1A and 3A enzymes, in PHH suggests potential detoxification. Furthermore, brequinar exposure led to a significant upregulation of several stress response pathways in PHH and RPTEC/TERT1 cells, including the unfolded protein response, Nrf2, p53, and inflammatory responses. RPTEC/TERT1 cells exhibited greater sensitivity to brequinar at 0.3 μM with repeated exposure compared to a single exposure. Furthermore, brequinar could impair the mitochondrial respiration of RPTEC/TERT1 cells after 24 h. This study provides new insights into the differential responses of PHH and RPTEC/TERT1 cells to brequinar exposure and highlights the biological relevance of implementing repeated dosing regimens in in vitro studies.

Chemical and Biological Mechanisms Relevant to the Rescue of MG-132-Treated Neurons by Cysteine

Abstract

Proteasome dysfunctions are observed in many human pathologies. To study their role and potential treatment strategies, models of proteasome inhibition are widely used in biomedical research. One frequently used tool is the proteasome inhibitor MG-132. It triggers the degeneration of human neurons, and several studies show protection from pathological events by glutathione or its precursors. It has therefore been concluded that glutathione protects cells from proteasome dysfunction. However, an alternative explanation is that MG-132, which is a peptide aldehyde, is chemically inactivated by thiols, and the apparent protection by glutathione from proteasome dysfunction is an artefact. To clarify this issue, we examined the chemical inactivation of MG-132 by thiols and the role of such reactions in neuroprotection. Using mass spectrometry and nuclear magnetic resonance spectroscopy, we found that MG-132 reacted with L-cysteine to form a stable end product and with glutathione to form an unstable intermediate. Using a cell-free proteasome inhibition assay, we found that high concentrations of L-cysteine can scavenge a substantial fraction of MG-132 and thus reduce proteasome inhibition. Glutathione (or N-acetyl-cysteine) did not alter proteasome inhibition (even at high concentrations). In a final step, we studied human neuronal cultures. We exposed them to MG-132, supplemented the culture medium with various thiols, and assessed intracellular L-cysteine concentrations. The transcriptome response pattern also indicated an inhibition of the proteasome by MG-132 in the presence of L-cysteine. We conclude that thiol concentrations that can be reached in cells do not inactivate MG-132 in pathological models. Rather, they act in a cytoprotective way as antioxidants.