National Repository of Grey Literature: 2 records found
Utilising Large Pretrained Language Models for Configuration and Support of a Clinical Information System
Sova, Michal; Burget, Radek (referee); Rychlý, Marek (advisor)
The aim of this work is to become familiar with the principles and use of large pre-trained language models, with the configuration options of the FONS Enterprise clinical information system, and with the possibilities of adapting it to customers' specific environments. The work first introduces large pre-trained language models and the FONS Enterprise clinical information system, then examines options for training models and for implementing retrieval-augmented generation (RAG) over data from the clinical system. The RAG architecture is implemented with the LangChain and LlamaIndex tools. The results show that RAG with the Gemma model and the bge-m3 embedding model provides the most relevant answers to basic questions but struggles with more complex ones. Further pre-training of the model does not produce the expected results, even after the training parameters are adjusted.
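The abstract gives no code, but the RAG pipeline it describes can be sketched roughly as follows, assuming llama-index >= 0.10 with its HuggingFace embedding and Ollama integration packages installed; the ./fons_docs directory, the Ollama-served Gemma model, and the example question are illustrative assumptions, not details taken from the thesis.

    # Minimal RAG sketch with LlamaIndex; paths and model names are assumptions.
    from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.llms.ollama import Ollama

    # bge-m3 embeds both the document chunks and the query into one vector space.
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-m3")
    # A locally served Gemma model generates the final answer.
    Settings.llm = Ollama(model="gemma", request_timeout=120.0)

    # Index documentation exported from the clinical system (hypothetical path).
    documents = SimpleDirectoryReader("./fons_docs").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # At query time the top-k retrieved chunks are injected into the prompt
    # before generation, which is what grounds the model's answers.
    query_engine = index.as_query_engine(similarity_top_k=3)
    print(query_engine.query("How do I configure user roles in FONS Enterprise?"))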
Denoise Pre-Training For Segmentation Neural Networks
Kolarik, Martin
This paper proposes a method for pre-training segmentation neural networks on small datasets using unlabelled training data with added noise. The pre-training gives the network a better weight initialization for the subsequent training and also augments the training set when labelled data are scarce, as is common in medical imaging. An experiment comparing pre-trained and non-pre-trained networks on an MRI brain segmentation task shows that denoise pre-training leads to faster training convergence without overfitting and to better results in all compared metrics, even on very small datasets.
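As a rough illustration of the idea (not the paper's actual code), denoise pre-training can be sketched in PyTorch: the network is first trained to reconstruct clean images from noisy copies, so only unlabelled data are needed. The tiny architecture, noise level, and optimizer settings below are assumptions made for the sketch.

    import torch
    import torch.nn as nn

    # Tiny encoder/head stand-in for a real segmentation network (assumption).
    class SegNet(nn.Module):
        def __init__(self, out_channels: int = 1):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv2d(32, out_channels, 1)

        def forward(self, x):
            return self.head(self.encoder(x))

    def denoise_pretrain(model, loader, epochs=5, noise_std=0.1, lr=1e-3):
        """Pre-train on unlabelled images by reconstructing each clean image
        from a copy corrupted with Gaussian noise."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for clean in loader:  # loader yields batches of unlabelled images
                noisy = clean + noise_std * torch.randn_like(clean)
                opt.zero_grad()
                mse(model(noisy), clean).backward()
                opt.step()
        return model

After pre-training, the head would be re-initialized with one output channel per segmentation class and the whole network fine-tuned on the small labelled set, while the encoder keeps the weights learned from denoising.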
