Evaluating the Impact of Explainable AI Models on Transparency in Scientific Research Applications

Authors

  • Marcelo K. Henrique, Research Software Engineer, Brazil

Keywords

Explainable AI, Scientific Transparency, Research Ethics, Interpretability, Reproducibility, AI in Science, Trust in AI

Abstract

Purpose: This paper investigates how the adoption of Explainable Artificial Intelligence (XAI) models influences transparency within scientific research workflows, particularly in domains where AI systems play a critical role in data analysis, hypothesis generation, and decision-making processes.

Methodology: A mixed-methods approach was used, combining a systematic literature review, case analyses of selected research projects that employ XAI tools, and survey data from 150 academic researchers across multiple disciplines. Qualitative and quantitative data were synthesized to assess perceived improvements in transparency and interpretability attributable to XAI models.

Findings: Results indicate a significant increase in perceived research transparency, reproducibility, and trust in AI-driven results when explainable models are employed. Researchers emphasized the value of post-hoc interpretability tools and inherently interpretable models in facilitating peer review and collaborative validation.
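As a purely illustrative sketch of what such post-hoc interpretability tools do in practice, the Python snippet below fits a tree-ensemble model on synthetic data and computes SHAP attributions (Lundberg and Lee, 2017, listed in the references). The model, synthetic dataset, and library choices are assumptions made for illustration; they are not drawn from the study's materials.

    # Minimal, hypothetical sketch of a post-hoc interpretability workflow:
    # fit a tree-ensemble model on synthetic data, then compute SHAP
    # attributions so each prediction can be traced to feature contributions.
    # Model, data, and library choices are illustrative, not from the study.
    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic stand-in for an experimental dataset.
    X, y = make_regression(n_samples=500, n_features=8, noise=0.1,
                           random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Post-hoc explanation: per-feature contribution to each prediction.
    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X[:10])  # shape: (10 samples, 8 features)

    # Features with large absolute attributions drove the first prediction
    # most; such per-prediction traces are what collaborators and reviewers
    # can inspect during validation.
    print(np.round(attributions[0], 3))

Feature-level traces of this kind are one way peer reviewers and collaborators can check whether an AI-driven result rests on scientifically plausible inputs rather than spurious correlations.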

Practical implications: The study provides empirical support for the integration of XAI systems in research workflows, especially where regulatory compliance and reproducibility are essential. These findings are particularly relevant for funding bodies and institutions prioritizing ethical AI usage in research settings.

Originality: This paper contributes a novel synthesis of XAI’s role in fostering epistemic transparency in science, offering both practical insights and a conceptual framework for evaluating AI interpretability in knowledge-producing contexts.

References

Doshi-Velez, Finale, and Been Kim. "Towards a rigorous science of interpretable machine learning." arXiv preprint arXiv:1702.08608, 2017.

Gil, Yolanda, et al. "Explainable Artificial Intelligence for Scientific Discovery." Communications of the ACM, vol. 63, no. 11, 2020, pp. 58–66.

Lipton, Zachary C. "The Mythos of Model Interpretability." arXiv preprint arXiv:1606.03490, 2016.

Rudin, Cynthia. "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead." Nature Machine Intelligence, vol. 1, no. 5, 2019, pp. 206–215.

Stiglic, Gregor, et al. "Interpretability of machine learning-based prediction models in healthcare." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, no. 5, 2020.

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence Program." AI Magazine, vol. 40, no. 2, 2019, pp. 44–58.

Miller, Tim. "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence, vol. 267, 2019, pp. 1–38.

Molnar, Christoph. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Leanpub, 2022.

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you? Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.

Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems, vol. 30, 2017, pp. 4765–4774.

Tonekaboni, Sana, et al. "What clinicians want: contextualizing explainable machine learning for clinical end use." Proceedings of the Machine Learning for Healthcare Conference, 2019, pp. 359–380.

Weller, Adrian. "Transparency: Motivations and challenges." Proceedings of the 1st Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML), 2017.

Published

2026-01-08

How to Cite

Marcelo K. Henrique. (2026). Evaluating the Impact of Explainable AI Models on Transparency in Scientific Research Applications. INTERNATIONAL JOURNAL OF ENGINEERING AND TECHNOLOGY RESEARCH & DEVELOPMENT, 7(1), 1–5. https://ijetrd.com/index.php/ijetrd/article/view/IJETRD.07.01.002