Národní úložiště šedé literatury
Evaluation of Image Quality and Camera Setup
Ondris, Ladislav ; Fučík, Otto (reviewer) ; Zemčík, Pavel (supervisor)
This work aims to develop a foundation for a general-purpose camera capable of updating its parameters based on the observed scene. The approach combines image quality assessment metrics with scene recognition. A set of metrics was collected, including metrics for assessing contrast and sharpness. In addition, a scene recognition machine learning model was developed to identify the scene, which serves as the basis for selecting camera parameters tailored to that scene. The work demonstrates how the metrics can be used in practice to optimize selected Image Signal Processor (ISP) parameters and to detect optical aberrations.
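The abstract mentions metrics for assessing contrast and sharpness without naming them. The following Python sketch shows two common no-reference metrics of that kind, RMS contrast and variance of the Laplacian; both formulas are illustrative assumptions, not necessarily the metrics collected in the thesis.

# A minimal sketch of two generic image quality metrics (contrast, sharpness).
# The specific formulas here are assumptions for illustration only.
import cv2
import numpy as np

def rms_contrast(gray: np.ndarray) -> float:
    # RMS contrast: standard deviation of intensities normalized to [0, 1].
    normalized = gray.astype(np.float64) / 255.0
    return float(normalized.std())

def laplacian_sharpness(gray: np.ndarray) -> float:
    # Sharpness score: variance of the Laplacian (higher means sharper edges).
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

if __name__ == "__main__":
    # "scene.png" is a placeholder path for any captured frame.
    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    print("contrast:", rms_contrast(image))
    print("sharpness:", laplacian_sharpness(image))

Scores like these could then feed a rule or model that adjusts ISP parameters, as the abstract describes, but that control logic is not shown here.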
Depth-Based Determination of a 3D Hand Position
Ondris, Ladislav ; Tinka, Jan (reviewer) ; Drahanský, Martin (supervisor)
This work aims to provide a real-time, depth-based gesture recognition system built on the hand's skeletal information. The Tiny YOLOv3 neural network detects the hand in the depth image. The background is then removed from the detected hand region, which is passed to the JGR-P2O neural network to estimate the hand's skeleton as 21 key points. Furthermore, a novel gesture recognition technique is proposed that compares the input skeleton with user-defined gestures. A dataset of four thousand images was captured to evaluate the system.
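The abstract states that the input 21-key-point skeleton is compared with user-defined gestures, but not how. The Python sketch below illustrates one plausible nearest-template comparison; the wrist-centered normalization, Euclidean distance, and acceptance threshold are illustrative assumptions, not the thesis's actual technique.

# A minimal sketch of matching a 21-key-point hand skeleton against
# user-defined gesture templates. Details are assumptions for illustration.
from typing import Dict, Optional
import numpy as np

def normalize(keypoints: np.ndarray) -> np.ndarray:
    # Center the (21, 3) skeleton on the wrist (assumed to be key point 0)
    # and scale it to unit size so templates are comparable.
    centered = keypoints - keypoints[0]
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / (scale + 1e-8)

def recognize(skeleton: np.ndarray,
              gestures: Dict[str, np.ndarray],
              threshold: float = 0.25) -> Optional[str]:
    # Return the name of the closest user-defined gesture, or None
    # if no template is within the distance threshold.
    query = normalize(skeleton)
    best_name, best_dist = None, np.inf
    for name, template in gestures.items():
        dist = float(np.linalg.norm(query - normalize(template)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

In this sketch the user would register a gesture simply by storing a captured skeleton under a name in the gestures dictionary; recognition then reduces to a nearest-neighbour lookup over those templates.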