QUESTION-ANSWER SYSTEM ON MEDICAL DOMAIN WITH LLMS USING VARIOUS FINE-TUNING METHODS
- P. P. Savani University, Kosamba, Surat, Gujarat, India.
- Abstract: The challenge of developing artificial intelligence (AI) that can comprehend and produce human language has persisted since the 1950s, when the Turing Test was first proposed. Language modelling techniques have advanced from statistical models to neural models, with recent work focusing on pre-trained language models (PLMs) built on the Transformer architecture. These PLMs, trained on vast datasets, excel at a wide range of natural language processing (NLP) tasks. Researchers have found that increasing the size of these models enhances their capabilities and even imparts emergent abilities such as in-context learning and human-like reasoning. These larger variants are referred to as large language models (LLMs). This report examines recent LLM advances, encompassing pre-training, adaptation tuning, utilization, and capacity evaluation, applied specifically to the medical domain with not-so-large language models. It also works with PEFT libraries, using techniques such as LoRA and QLoRA, to fit LLMs on a single GPU.
- Keywords: Pre-trained language models (PLMs), ChatGPT, Large language models (LLMs), Fine-tuning, Prompt engineering, Reinforcement learning with human feedback, Chain-of-Thought.
- Cite This Article as: [Misha Patel, Mansi Kotadiya and Urvashi Solanki (2025); QUESTION-ANSWER SYSTEM ON MEDICAL DOMAIN WITH LLMS USING VARIOUS FINE-TUNING METHODS. Int. J. of Adv. Res. (May), 1571-1585] (ISSN 2320-5407). www.journalijar.com
- Corresponding Author
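The abstract's mention of PEFT with LoRA and QLoRA to fit LLMs on a single GPU can be made concrete with a minimal sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The base model name, LoRA rank, and target modules below are illustrative assumptions, not the configuration reported in this paper.

```python
# Minimal QLoRA setup sketch using Hugging Face transformers + peft.
# The base model, rank, and target modules are assumptions for illustration,
# not the paper's actual fine-tuning configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model

# 4-bit quantization (the "Q" in QLoRA) keeps the frozen base weights
# small enough to fit on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# LoRA injects small trainable rank-decomposition matrices into the
# attention projections; only these adapters are updated during training.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

With this setup, only the lightweight adapter weights are trained and saved, which is what allows a medical question-answer model to be fine-tuned on a single GPU rather than a multi-GPU cluster.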