Enhancing Drug Safety Documentation Search Capabilities with Large Language Models: A User-Centric Approach


Abstract:

Integrating Large Language Models (LLMs) to enhance retrieval of complex business documents is part of an emerging field known as retrieval-augmented generation (RAG). In highly regulated domains such as drug safety (pharmacovigilance), its application has remained largely unexplored. The technology brings numerous advantages, including faster staff onboarding, enhanced comprehension of contextual queries, and swift information retrieval through natural-language inquiries, surpassing conventional keyword searches. This study examines several operational tasks, such as locating regulatory process guidance, navigating intricate scenarios for advice, and ensuring the LLM recognizes uncertainty so as to prevent misinformation. LLMs empower users to engage with documentation in natural language, markedly improving search efficiency. The case study underscores the LLM's effectiveness in delivering prompt guidance for pharmacovigilance and adverse event processing and reporting, offering a user-centric solution that streamlines the search of intricate business documentation.
Date of Conference: 13-15 December 2023
Date Added to IEEE Xplore: 19 July 2024
Conference Location: Las Vegas, NV, USA

I. Introduction

Large language models (LLMs) have captured significant attention due to their versatile applications, particularly within the field of pharmacovigilance (PV). Pharmacovigilance involves the systematic evaluation of medication and vaccine safety in routine healthcare delivery [1]. Despite their extensive training on public knowledge, LLMs remain confined to publicly available information. Consequently, they often lack awareness of data held behind corporate firewalls, in private sources, or in specific organizational contexts.
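This gap is what the retrieval-augmented approach addresses: internal guidance documents are indexed, the passages most relevant to a natural-language question are retrieved, and those passages are supplied to the LLM as grounding context. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the toy corpus, the token-overlap scorer, and the build_prompt helper are hypothetical stand-ins, whereas a production pharmacovigilance system would typically use dense embeddings and a private LLM endpoint behind the corporate firewall.

```python
# Minimal retrieval-augmented generation (RAG) sketch for internal PV guidance.
# Hypothetical example: corpus, scorer, and prompt template are illustrative only.
from collections import Counter

# Toy corpus of internal pharmacovigilance guidance passages (hypothetical).
DOCUMENTS = [
    "Serious adverse events must be reported to the regulatory authority within 15 calendar days.",
    "Non-serious adverse events from solicited sources are submitted in periodic safety reports.",
    "If case seriousness cannot be determined, escalate to the qualified person for pharmacovigilance.",
]

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation used for simple overlap scoring."""
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest token overlap with the query."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: sum((tokenize(d) & q).values()), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the LLM in retrieved passages and instruct it to admit uncertainty."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the guidance below. "
        "If the guidance does not cover the question, say you are unsure.\n"
        f"Guidance:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "How quickly must a serious adverse event be reported?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)  # In practice this prompt would be sent to an LLM hosted behind the firewall.
```

Grounding the prompt in retrieved internal passages, and explicitly instructing the model to say when the guidance does not cover the question, reflects the paper's emphasis on recognizing uncertainty to prevent misinformation.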
