Explainable Information Retrieval Techniques in Academic Search Engines
DOI: https://doi.org/10.51983/ijiss-2025.IJISS.15.3.44

Keywords: Explainable AI (XAI), Trustworthy AI, Interfaces for Trust, Information Retrieval, Academic Search Engines, User Trust, Transparency

Abstract
Due to the rapid global increase in scholarly publications, researchers rely heavily on specialized academic search engines to gather pertinent information. However, the ranking algorithms in many of these systems operate as black boxes, undermining transparency and significantly eroding user trust and interpretability. Explainable Information Retrieval (XIR) addresses this problem by providing users with easily understood rationales for why documents were retrieved and how they were ranked. This work examines the XIR techniques integrated into academic search tools, assesses how they are applied, and analyzes how effectively they enhance users' understanding, satisfaction, decision-making, and information processing. The paper also formulates a central proposal incorporating key elements of XIR and highlights remaining problems that require deeper analysis.
License
Copyright (c) 2025 The Research Publication

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.