Definition of the Neurotoxicity-Associated Metabolic Signature Triggered by Berberine and Other Respiratory Chain Inhibitors

Abstract

To characterize the hits from a phenotypic neurotoxicity screen, we obtained transcriptomics data for valinomycin, diethylstilbestrol, colchicine, rotenone, 1-methyl-4-phenylpyridinium (MPP), carbaryl and berberine (Ber). For all compounds, the concentration triggering neurite degeneration correlated with the onset of gene expression changes. The mechanistically diverse toxicants caused similar patterns of gene regulation: the responses were dominated by cell de-differentiation and activation of canonical stress response pathways driven by ATF4 and NRF2. To obtain more detailed and specific information on the modes of action (MoA), the effects on energy metabolism (respiration and glycolysis) were measured. Ber, rotenone and MPP inhibited the mitochondrial respiratory chain, with complex I as their shared target. This group of toxicants was further evaluated by metabolomics under experimental conditions that did not deplete ATP. Ber (204 changed metabolites) showed effects similar to those of MPP and rotenone. The overall metabolic situation was characterized by oxidative stress, an over-abundance of NADH (>1000% increase) and a re-routing of metabolism to dispose of the nitrogen resulting from increased amino acid turnover. This unique overall pattern led to the accumulation of metabolites known as biomarkers of neurodegeneration (saccharopine, aminoadipate and branched-chain ketoacids). These findings suggest that the neurotoxicity of mitochondrial inhibitors may result from an ensemble of metabolic changes rather than from simple ATP depletion. The combi-omics approach used here provided richer and more specific MoA data than the more common transcriptomics analysis alone. As Ber, a human drug and food supplement, closely mimicked the mode of action of known neurotoxicants, its potential hazard requires further investigation.

The integrated stress response-related expression of CHOP due to mitochondrial toxicity is a warning sign for DILI liability

Abstract

Background and aims: Drug-induced liver injury (DILI) is one of the most frequent reasons for failure of drugs in clinical trials or market withdrawal. Early assessment of DILI risk remains a major challenge during drug development. Here, we present a mechanism-based weight-of-evidence approach able to identify candidate compounds with DILI liabilities due to mitochondrial toxicity.

Methods: A total of 1587 FDA-approved drugs and 378 kinase inhibitors were screened for activation of cellular stress responses associated with DILI, using an imaging-based HepG2 BAC-GFP reporter platform covering the integrated stress response (CHOP), the DNA damage response (P21) and the oxidative stress response (SRXN1).

Results: In total, 389, 219 and 104 drugs induced CHOP-GFP, P21-GFP and SRXN1-GFP expression at 50 μM, respectively. Concentration-response analysis identified 154 FDA-approved drugs as critical CHOP-GFP inducers. Based on the predicted and observed (pre-)clinical DILI liabilities of these drugs, nine antimycotic drugs (e.g. butoconazole, miconazole, tioconazole) and 13 central nervous system (CNS) agents (e.g. duloxetine, fluoxetine) were selected for transcriptomic evaluation using whole-genome RNA-sequencing of primary human hepatocytes. Gene network analysis identified mitochondrial processes, NRF2 signalling and xenobiotic metabolism as the processes most affected by the antimycotic drugs and CNS agents. Both the selected antimycotics and the CNS agents impaired mitochondrial oxygen consumption in HepG2 cells and in primary human hepatocytes.

Conclusions: Together, the results suggest that early pre-clinical screening for CHOP expression could indicate liability of mitochondrial toxicity in the context of DILI, and, therefore, could serve as an important warning signal to consider during decision-making in drug development.

Keywords: CHOP; drug-induced liver injury; high-throughput screening; integrated stress response; mitochondrial toxicity.
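
To illustrate what the concentration-response analysis described in the Results above can look like in practice, the sketch below fits a four-parameter Hill model to hypothetical reporter data and applies a simple potency-based flag. This is not the screening pipeline used in the study; all data values, thresholds and variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): fit a four-parameter Hill curve
# to GFP reporter fold-changes and derive a potency estimate that could be
# used to rank concentration-response "inducers". All values are made up.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter log-logistic (Hill) concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical CHOP-GFP fold-change data over a concentration series (uM)
conc = np.array([0.8, 1.6, 3.1, 6.3, 12.5, 25.0, 50.0])
gfp  = np.array([1.0, 1.1, 1.3, 1.9, 3.2, 4.8, 5.4])

params, _ = curve_fit(
    hill, conc, gfp,
    p0=[1.0, 5.0, 10.0, 1.0],                      # rough starting values
    bounds=([0, 0, 0.01, 0.1], [10, 20, 1000, 10]),
)
bottom, top, ec50, slope = params
print(f"EC50 ~ {ec50:.1f} uM, max induction ~ {top:.1f}-fold")

# Illustrative decision rule: flag as a 'critical' inducer if the reporter is
# clearly induced (>2-fold) with an EC50 below the top test concentration.
is_critical = top > 2.0 and ec50 < 50.0
print("critical CHOP-GFP inducer:", is_critical)
```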

Protectiveness of NAM-based hazard assessment – which testing scope is required?

Abstract

Hazard assessment (HA) requires toxicity tests that allow deriving protective points of departure (PoDs) for risk assessment, irrespective of a compound’s mode of action (MoA). The scope of in vitro test batteries (ivTB) required for this purpose in systemic toxicity is still unclear. We explored the protectiveness, regarding systemic toxicity, of an ivTB whose scope was guided by previous findings from rodent studies, in which examining six main targets, including liver and kidney, was sufficient to predict the guideline-scope-based PoD with high probability. The ivTB comprises human in vitro models representing liver, kidney, lung and the neuronal system, covering the transcriptome, mitochondrial dysfunction and neuronal outgrowth. Additionally, 32 CALUX®- and 10 HepG2 BAC-GFP reporters cover a broad range of disturbance mechanisms. Eight compounds were chosen because they cause adverse effects in vivo, such as immunotoxicity or anemia, that are not directly covered by assays in the ivTB. PoDs derived from the ivTB and from oral repeated-dose studies in rodents were extrapolated to maximum unbound plasma concentrations for comparison. The ivTB-based PoDs were one to five orders of magnitude lower than the in vivo PoDs for six of eight compounds, implying that they were protective. The extent of the in vitro response varied across test compounds. Especially for hematotoxic substances, the ivTB showed either no response or only cytotoxicity. Assays better capturing this type of hazard would be needed to complement the ivTB. This study highlights the potentially broad applicability of ivTBs for deriving protective PoDs for compounds with unknown MoA.
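
As a purely illustrative aside, once both the in vitro and the in vivo PoD have been expressed as maximum unbound plasma concentrations, the protectiveness comparison reduces to a simple ratio. The sketch below shows that arithmetic with made-up compound names and values; it does not reproduce the PBK-based extrapolation used in the study.

```python
# Minimal sketch, not the study's workflow: compare ivTB-based and in vivo
# PoDs after both have been expressed as unbound Cmax values (the PBK
# extrapolation itself is not shown). All names and numbers are made up.
import math

pods_unbound_uM = {
    # compound: (ivTB-based PoD, in vivo PoD), both as unbound Cmax in uM
    "compound_A": (0.02, 5.0),
    "compound_B": (0.50, 0.8),
}

for name, (pod_ivtb, pod_invivo) in pods_unbound_uM.items():
    margin = math.log10(pod_invivo / pod_ivtb)   # orders of magnitude apart
    protective = pod_ivtb <= pod_invivo          # in vitro PoD is the lower one
    print(f"{name}: log10 margin = {margin:+.1f} "
          f"({'protective' if protective else 'not protective'})")
```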

Acceptance criteria for new approach methods in toxicology and human health-relevant life science research – part I

Abstract

Every test procedure, scientific or non-scientific, has inherent uncertainties, even when performed according to a standard operating procedure (SOP). In addition, it is prone to errors, defects, and mistakes introduced by operators, laboratory equipment, or the materials used. Adherence to an SOP and comprehensive validation of the test method cannot guarantee that each test run produces data within the acceptable range of variability and with the precision and accuracy determined during method validation. We illustrate here (part I) why controlling the validity of each test run is an important element of experimental design. The definition and application of acceptance criteria (AC) for the validity of test runs are important for the setup and use of test methods, particularly for the use of new approach methods (NAM) in toxicity testing. AC can be used for decision rules on how to handle data, e.g., to accept the data for further use (AC fulfilled) or to reject the data (AC not fulfilled). Adherence to AC has important requirements and consequences that may seem surprising at first sight: (i) AC depend on a test method’s objectives, e.g., on the types/concentrations of chemicals tested, the regulatory context, and the desired throughput; (ii) AC are applied and documented at each test run, whereas validation of a method (including the definition of AC) is performed only once; (iii) if AC are altered, the set of data produced by a method can change. Missing AC are the blind spot of quality assurance: test results may not be reliable and comparable. The establishment and use of AC will be further detailed in part II of this series.
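
To make the accept/reject decision rule concrete, the following minimal sketch encodes hypothetical AC for a single test run. The criteria and thresholds are illustrative assumptions, not the AC defined for any specific method; real AC are fixed per test method during its validation.

```python
# Minimal sketch of an acceptance-criteria (AC) decision rule for a test run.
# All criteria and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RunSummary:
    positive_control_effect: float    # % response of the positive control
    vehicle_control_cv: float         # % coefficient of variation of vehicle controls
    vehicle_control_viability: float  # % viability of vehicle controls

def run_is_valid(run: RunSummary) -> bool:
    """Accept the run only if all AC are fulfilled; otherwise reject it."""
    checks = [
        run.positive_control_effect >= 30.0,    # positive control must respond
        run.vehicle_control_cv <= 20.0,         # controls must be reproducible
        run.vehicle_control_viability >= 80.0,  # untreated cells must be healthy
    ]
    return all(checks)

run = RunSummary(positive_control_effect=45.0,
                 vehicle_control_cv=12.0,
                 vehicle_control_viability=91.0)
print("accept data for further use" if run_is_valid(run) else "reject data")
```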

Usage of model combination in computational toxicology

Abstract

New Approach Methodologies (NAMs) have ushered in a new era in the field of toxicology, aiming to replace animal testing. However, despite these advancements, they are not exempt from the inherent complexities associated with the endpoint under study. In this review, we identified three major groups of complexities: mechanistic, chemical space, and methodological. Mechanistic complexity arises from interconnected biological processes within a network that are challenging to model in a single step. Chemical space complexity arises when the compounds in the training and test series are significantly dissimilar. The third group encompasses algorithmic and molecular descriptor limitations as well as typical class imbalance problems. To address these complexities, this work provides a guide to the use of combinations of predictive Quantitative Structure-Activity Relationship (QSAR) models, known as metamodels. Combining low-level models (LLMs) enables a more precise approach to the problem by focusing on different sub-mechanisms or sub-processes. For mechanistic complexity, multiple Molecular Initiating Events (MIEs) or levels of information are combined to form a mechanism-based metamodel. Regarding the complexity arising from chemical space, two types of approaches to constructing a fragment-based chemical space metamodel were reviewed: those with and those without structure sharing. Metamodels with structure sharing use unsupervised strategies to identify data patterns and build low-level models for each cluster, which are then combined. For situations without structure sharing, owing to pharmaceutical industry intellectual property, prediction-sharing and federated learning approaches were reviewed. Lastly, to tackle methodological complexity, various algorithms are combined to overcome their individual limitations, diverse descriptors are employed to enhance problem definition, and balanced dataset combinations are used to address class imbalance (methodology-based metamodels). Remarkably, metamodels consistently outperformed classical QSAR models across all cases, highlighting the value of such alternatives when faced with these complexities.
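
As an illustration of the basic idea of a metamodel, the sketch below stacks two low-level classifiers with a higher-level learner on surrogate descriptor data. It is a generic scikit-learn example under stated assumptions, not one of the specific metamodelling strategies reviewed; in practice each low-level model would be trained on its own sub-mechanism, cluster or descriptor set.

```python
# Minimal sketch of a metamodel: several low-level QSAR models (LLMs) are
# combined by a higher-level learner (stacking). The random data stand in
# for molecular descriptors; all parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Surrogate descriptor matrix with a class imbalance (70/30)
X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Low-level models: different algorithms compensate for each other's limits
low_level_models = [
    ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                  random_state=0)),
    ("svm", SVC(probability=True, class_weight="balanced", random_state=0)),
]

# Metamodel: a logistic regression combines the LLM predictions
metamodel = StackingClassifier(estimators=low_level_models,
                               final_estimator=LogisticRegression(),
                               cv=5)
metamodel.fit(X_train, y_train)
print("metamodel accuracy:", round(metamodel.score(X_test, y_test), 3))
```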

Analysis of health concerns not addressed by REACH for low tonnage chemicals and opportunities for new approach methodology

Abstract

Under the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation, the criterion for deciding which studies must be performed is the annual tonnage of the chemical manufactured in, or imported into, the EU. The annual tonnage may be considered a surrogate for the level of human exposure, but it does not take into account the physico-chemical properties and use patterns that determine exposure. Chemicals are classified using data from REACH under areas of health concern covering effects on the skin and eye; sensitisation; acute, repeated and prolonged systemic exposure; effects on genetic material; carcinogenicity; and reproduction and development. We analysed the mandated study lists under REACH for each annual tonnage band in terms of the information they provide on each of the areas of health concern. Using the European Chemicals Agency (ECHA) REACH registration database of over 20,000 registered substances, we found that only 19% of registered substances have datasets covering all areas of health concern. For 62%, the available information is limited to acute exposure, sensitisation and genotoxicity. The analysis highlighted the shortfall of information mandated for substances in the lower tonnage bands. Deploying New Approach Methodologies (NAMs) in these lower tonnage bands to assess health concerns that are currently not covered by REACH, such as repeated and extended exposure and carcinogenicity, would provide additional information and would be a way for registrants and regulators to gain experience in the use of NAMs. Several projects in Europe are currently aiming to develop NAM-based assessment frameworks, and these could find their first use in assessing low-tonnage chemicals once confidence has been gained through their evaluation with data-rich chemicals.
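
For readers who want to reproduce this type of coverage analysis on their own data extracts, the sketch below computes the share of substances with data in all areas of health concern from a toy table. The real ECHA database has a different structure; the column names and values shown are illustrative assumptions only.

```python
# Minimal sketch of a dataset-coverage analysis on a made-up table.
# Column names, substances and values are illustrative, not ECHA data.
import pandas as pd

areas = ["skin_eye", "sensitisation", "acute", "repeated_dose",
         "genotoxicity", "carcinogenicity", "repro_dev"]

# One row per registered substance; True = at least one study for that area
df = pd.DataFrame({
    "substance": ["S1", "S2", "S3", "S4"],
    "skin_eye":        [True, True,  True,  True],
    "sensitisation":   [True, True,  True,  True],
    "acute":           [True, True,  True,  True],
    "repeated_dose":   [True, False, False, True],
    "genotoxicity":    [True, True,  True,  True],
    "carcinogenicity": [True, False, False, False],
    "repro_dev":       [True, False, True,  False],
})

full_coverage = df[areas].all(axis=1)
print(f"substances with data on all areas: {100 * full_coverage.mean():.0f}%")
```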