National Repository of Grey Literature
Utilising Large Pretrained Language Models for Configuration and Support of a Clinical Information System
Sova, Michal ; Burget, Radek (referee) ; Rychlý, Marek (advisor)
The aim of this work is to become familiar with the principles and use of large pre-trained language models, with the configuration options of the FONS Enterprise clinical information system, and with the possibilities of adapting it to customers' specific environments. The work first introduces large pre-trained language models and the FONS Enterprise clinical information system. It then examines the possibilities of training models and of implementing retrieval-augmented generation (RAG) on data from the clinical system. The RAG architecture is implemented with the LangChain and LlamaIndex tools. The results show that the RAG method with the Gemma model and the bge-m3 embedding model provides the most relevant answers to basic questions, but it struggles to understand more complex ones. The approach of pre-training the model does not produce the expected results, even after adjusting the training parameters.
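The RAG pipeline the abstract describes can be sketched in miniature: embed the documents and the query, retrieve the most similar passage, and prepend it as context for the generator model. The sketch below is purely illustrative and is not taken from the thesis; it uses a toy bag-of-words similarity in place of a real embedding model such as bge-m3, and omits the generation step (in the thesis, the Gemma model), which a real system would run via LangChain or LlamaIndex.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call a dense
    # embedding model (e.g. bge-m3) and get back a float vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The retrieved passages become the context the generator answers from.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In a production setup, the final prompt would be passed to the language model; the retrieval step is what grounds the answer in the clinical system's own documents.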
