National Repository of Grey Literature: 30 records found (showing 11 - 20)
Lossy video signal compression – quantisation
Balada, Radek ; Kratochvíl, Tomáš (referee) ; Frýza, Tomáš (advisor)
The aim of my bachelor’s thesis is lossy video signal compression. Because sequences of digital video frames typically require vast amounts of memory for storage and occupy considerable bandwidth during transmission, video signals must be compressed. Several video compression standards use the two-dimensional discrete cosine transform. My task is to extend it by one more dimension (time) and to test the three-dimensional discrete cosine transform, focusing on video quality relative to the compression ratio. The thesis also describes a technique for generating quantisation values for the three-dimensional DCT coefficients.
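The idea of extending the 2-D DCT with a temporal dimension can be sketched in a few lines. This is a minimal illustration only, not the thesis's actual pipeline: the 8×8×8 block size and the uniform quantisation step are assumptions chosen for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative 8x8x8 spatio-temporal block (time, height, width);
# in a real codec this would be cut from eight consecutive frames.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8, 8)).astype(float)

# Forward 3-D DCT: the third (temporal) axis extends the usual 2-D case.
coeffs = dctn(block, norm="ortho")

# Lossy step: uniform quantisation with an assumed step size.
step = 20.0
quantised = np.round(coeffs / step)

# Decoder side: dequantise and invert the transform.
reconstructed = idctn(quantised * step, norm="ortho")
print(np.max(np.abs(reconstructed - block)))
```

Coarser quantisation steps discard more coefficient precision, trading reconstruction quality for compression ratio, which is the trade-off the thesis measures.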
JPEG 2000 Implementation
Zlatohlávek, Adam ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
The aim of this thesis is to propose image compression methods for JPEG 2000. It describes the techniques used in the base-level standard and analyses the options available in the encoding process. The goal is to build the complete compression chain for an input image, from preprocessing to the final output format. The thesis concludes with results of the author's implementation in terms of memory consumption and encoding performance.
Compression of ECG signals recorded using mobile ECG device
Had, Filip ; Vítek, Martin (referee) ; Němcová, Andrea (advisor)
Signal compression is a necessary part of ECG monitoring because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission, it is necessary to minimise the amount of data as much as possible, using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master’s thesis also presents a bank of ECG signals with concurrently recorded accelerometer data. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and implemented.
Analysis of JPEG 2000 Settings
Mitaš, Matěj ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
This thesis analyses the settings of the JPEG 2000 format, aiming to find appropriate values for each setting. The main criteria are compression performance, elapsed time, and memory usage. Two libraries implementing JPEG 2000 are used: Kakadu and OpenJPEG. Testing was carried out by a framework implemented in Python. The results of this thesis enable users to use JPEG 2000 more effectively and with a deeper understanding of its inner workings.
JAVA-based effective implementation of an image compression tool
Průša, Zdeněk ; Rajmic, Pavel (referee) ; Malý, Jan (advisor)
This diploma thesis deals with lossy compression of digital images. Lossy compression in general introduces some distortion into the resulting image; the distortion should not be disturbing, or ideally not even noticeable. Image analysis uses a process called transformation, and the selection of relevant coefficients a process called coding. Image quality can be evaluated by objective or subjective methods. An encoder is introduced and implemented in this work. It utilises the two-dimensional wavelet transform and the SPIHT algorithm for coefficient coding, and employs an accelerated computation of the wavelet transform based on the lifting scheme. The coder can process colour images using a modified version of the original SPIHT algorithm. The implementation is written in Java; object-oriented design principles were followed, so the program is easy to extend. Demonstration pictures show the effectiveness and the characteristic distortion of the proposed coder at high compression ratios.
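The lifting-scheme acceleration mentioned above can be illustrated with the simplest wavelet, the Haar transform: instead of filtering by convolution, the signal is split into even and odd samples and transformed by two cheap in-place steps. This 1-D sketch is only illustrative; the thesis's actual encoder uses a 2-D transform and Java.

```python
import numpy as np

def haar_lift(signal):
    """One level of the Haar wavelet via lifting: predict, then update.

    Lifting replaces convolution with cheap in-place arithmetic,
    which is the speed-up the lifting scheme offers.
    """
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even         # predict odd samples from even neighbours
    approx = even + detail / 2  # update so approx keeps the pair averages
    return approx, detail

def haar_unlift(approx, detail):
    """Exact inverse: undo the update, undo the predict, interleave."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_lift(x)
print(a)  # pairwise averages (coarse approximation)
print(d)  # pairwise differences (detail coefficients)
```

A coder such as SPIHT then orders and transmits the detail coefficients; because lifting is exactly invertible, all loss comes from the coding stage, not the transform.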
Image quality analysis using Fourier transform
Tkadlecová, Markéta ; Druckmüller, Miloslav (referee) ; Hoderová, Jana (advisor)
This thesis deals with the two-dimensional Fourier transform and its use in digital image quality assessment. An algorithm based on amplitude spectra is introduced and tested on different sets of images, and its possible uses and disadvantages are described. The fundamental mathematical theory needed to understand the algorithm is included, mainly the properties of amplitude spectra. Next, the theory of digital images and the characteristics that can affect their quality is described. A demonstration program was developed for testing image quality.
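The general idea behind amplitude-spectrum quality metrics is that blur suppresses high spatial frequencies. A hypothetical sketch of such a measure follows; the thesis's actual algorithm is not specified here, and the `cutoff` radius is an illustrative assumption.

```python
import numpy as np

def high_freq_ratio(image, cutoff=0.25):
    """Share of amplitude-spectrum energy outside a central low-frequency
    disc; blurred images score lower. `cutoff` (a fraction of the smaller
    image dimension) is an illustrative choice, not a value from the thesis.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w)
    low = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return spectrum[~low].sum() / spectrum.sum()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
# Crude circular 3x3 box blur as a stand-in for a defocused capture.
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in range(3) for j in range(3)) / 9
print(high_freq_ratio(sharp), high_freq_ratio(blurred))
```

The sharper image retains a larger share of its spectral energy at high frequencies, so the ratio separates sharp from blurred captures.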
Compression and Quality Assessment of ECG Signals
Němcová, Andrea ; Tkacz, Ewaryst (referee) ; Kudrna, Petr (referee) ; Vítek, Martin (advisor)
Lossy compression of ECG signals is a useful and still-developing field, and new compression algorithms continue to appear. What the field lacks, however, are standards for evaluating signal quality after compression. Many different compression algorithms therefore exist, but they can be compared objectively either not at all or only roughly. Moreover, the compression literature nowhere describes whether, and how, pathologies affect the performance of compression algorithms. This dissertation provides an overview of all the methods found for evaluating the quality of ECG signals after compression; in addition, 10 new methods were created. All these methods were analysed, and based on the results, 12 methods suitable for evaluating ECG signal quality after compression were recommended. A new compression algorithm, Single-Cycle Fractal-Based (SCyF), is also introduced. The SCyF algorithm is inspired by a fractal-based method and uses a single cycle of the ECG signal as the domain. It was tested on four different databases, and the quality of the signals after compression was evaluated with the 12 recommended methods. The results were compared with the very popular wavelet-based compression algorithm using Set Partitioning in Hierarchical Trees (SPIHT). The testing procedure also serves as an example of what a standard for evaluating the performance of compression algorithms should look like. Finally, it was statistically proven that there is a difference between the compression of physiological and pathological signals: pathological signals were compressed with lower efficiency and quality than physiological ones.
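Among the quality measures commonly used for ECG signals after lossy compression is the percentage root-mean-square difference (PRD), usually reported alongside the compression ratio. The sketch below shows these two standard measures on a toy signal; the dissertation's 12 recommended methods are not specified here, and the signal values are purely illustrative.

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference: a widely used measure of
    ECG distortion after lossy compression (lower is better)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(
        np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
    )

def compression_ratio(original_bits, compressed_bits):
    """CR: how many times smaller the compressed stream is."""
    return original_bits / compressed_bits

# Toy waveform standing in for one second of ECG; values are illustrative.
t = np.linspace(0, 1, 500)
ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
# Small synthetic distortion standing in for compression loss.
reconstruction = ecg + 0.02 * np.sin(2 * np.pi * 120 * t)
print(prd(ecg, reconstruction))
```

Comparing algorithms fairly requires reporting quality (e.g. PRD) at matched compression ratios, which is exactly the kind of standardised procedure the dissertation argues is missing.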
Lossy Light Field Compression
Dlabaja, Drahomír ; Milet, Tomáš (referee) ; Bařina, David (advisor)
The aim of this paper is to propose, implement and evaluate a new lossy compression method for light field images. The proposed method extends the JPEG method to four dimensions and brings new ideas which lead to better compression performance. Correlation between light field views is exploited in both dimensions by a four-dimensional discrete cosine transform. The lossy part of the encoding is performed by quantisation, similarly to the JPEG method. The proposed method is implemented as a program library in C++. This paper compares the proposed method to the JPEG, JPEG 2000 and HEVC intra image compression methods and to the HEVC video compression method. The results show that the proposed method outperforms the reference methods on images with a higher number of views. The HEVC video method is better for images with fewer views or for very low bitrates.
Error Resilience Analysis for JPEG 2000
Kovalčík, Marek ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
The aim of this thesis is to analyze the error resilience of the modern JPEG 2000 image compression format. It analyzes the effect of error-resilience mechanisms on image compression under different settings, examining the impact of embedding markers that help repair damaged images and of compression modes that improve error resilience. Quality is evaluated with the PSNR metric, which measures the similarity of the compressed and reference files. Adding certain markers to the data stream, or using certain compression modes, should help protect a JPEG 2000 file against damage during image reconstruction. To test this hypothesis, a model was created that randomly corrupts the compressed file and evaluates the decompressed images. The Kakadu library, which provides efficient work with the JPEG 2000 format, is used. The experimental data set consists of various photographs in uncompressed PPM format, in both smaller and higher resolutions. The goal of this work is to determine which compression settings make compression efficient and most resilient for each group of images. The end of the thesis is devoted to comparing the error resilience of JPEG 2000 and CCSDS 122.0.
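The PSNR metric used for the evaluation can be sketched in a few lines. This is the standard textbook definition, assuming 8-bit samples (peak value 255); the thesis's exact evaluation harness is not reproduced here.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the decompressed
    image is closer to the reference. `peak` assumes 8-bit samples."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
# Mild synthetic degradation standing in for compression/transmission loss.
degraded = np.clip(ref + rng.normal(0, 4, ref.shape), 0, 255)
print(psnr(ref, degraded))
```

Because PSNR depends only on the mean squared error, it gives a single comparable number across different error-resilience settings, which is what makes it convenient for this kind of batch evaluation.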
