National Repository of Grey Literature: 32 records found (records 11-20)
Web interface for audio feature visualization
Putz, Viliam ; Ištvánek, Matěj (referee) ; Miklánek, Štěpán (advisor)
This thesis deals with methods of extracting audio features from audio files, visualizing these features, and implementing a web interface that provides the visualization. The introduction describes the field of Music Information Retrieval, to which this thesis is closely related, together with the current state of applications for audio feature extraction, and lists the most common audio feature extraction libraries across programming languages. The second chapter lists and describes the audio features that can be extracted from an audio file. The third chapter describes the implementation process, the technologies used, the function diagram of the web interface, and the user interface and its functions.
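As a rough illustration of the kind of feature extraction such an interface builds on, the following sketch uses librosa, one of the commonly used Python libraries mentioned in the abstract; the file path and the particular selection of features are assumptions for the example, not the thesis code.

# Minimal sketch of extracting a few common audio features for visualization.
import librosa
import numpy as np

y, sr = librosa.load("example.wav", sr=22050)  # assumed input file

features = {
    "chroma": librosa.feature.chroma_stft(y=y, sr=sr),          # pitch-class energy
    "mfcc": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),         # timbre summary
    "spectral_centroid": librosa.feature.spectral_centroid(y=y, sr=sr),
    "rms": librosa.feature.rms(y=y),                             # loudness proxy
}

for name, mat in features.items():
    print(f"{name}: shape {mat.shape}")  # frames along the second axis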
System for finding duplicate recordings based on audio information
Švejcar, Michael ; Miklánek, Štěpán (referee) ; Ištvánek, Matěj (advisor)
This diploma thesis discusses different methods of detecting duplicates in a music file database. The problem at hand is that files containing the same recording may differ in sound quality, applause at the end of a performance, and other such parameters. The aim of this thesis is to design and implement a system that identifies duplicate recordings and provides an output file for the comparison. The system must not be affected by the parameters mentioned above, yet must be precise enough to avoid matching non-identical recordings. The system is realized using the Python programming language, freely available libraries for computing chroma features, the image hashing technique, and multiple variants of the dynamic time warping algorithm. Three comparison methods were implemented in the system, differing in precision and computational complexity. The methods were then tested on a prepared dataset, and four preset precision options were created. The final system appears to be very precise and is not prone to flagging recordings that are very similar but not identical, for example different interpretations of the same musical piece, as duplicates.
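A hedged sketch of one comparison strategy the abstract names, chroma features combined with dynamic time warping; the file names, hop size, and similarity threshold are illustrative assumptions rather than the thesis implementation.

# Compare two recordings by the normalised DTW cost between their chroma features.
import librosa
import numpy as np

def dtw_distance(path_a, path_b, sr=22050, hop=4096):
    ya, _ = librosa.load(path_a, sr=sr)
    yb, _ = librosa.load(path_b, sr=sr)
    ca = librosa.feature.chroma_cens(y=ya, sr=sr, hop_length=hop)
    cb = librosa.feature.chroma_cens(y=yb, sr=sr, hop_length=hop)
    D, wp = librosa.sequence.dtw(X=ca, Y=cb, metric="cosine")
    # Normalise the accumulated cost by the warping-path length.
    return D[-1, -1] / len(wp)

score = dtw_distance("recording_a.wav", "recording_b.wav")
print("possible duplicate" if score < 0.1 else "probably different", score)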
Beat Tracking: Is 44.1 kHz Really Needed?
Ištvánek, Matěj ; Miklánek, Štěpán
Beat tracking is essential in music information retrieval, with applications ranging from music analysis and automatic playlist generation to beat-synchronized effects. In recent years, deep learning methods, usually inspired by well-known architectures, outperformed other beat tracking algorithms. The current state-of-the-art offline beat tracking systems utilize temporal convolutional and recurrent networks. Most systems use an input sampling rate of 44.1 kHz. In this paper, we retrain multiple versions of state-of-the-art temporal convolutional networks with different input sampling rates while keeping the time resolution by changing the frame size parameter. Furthermore, we evaluate all models using standard metrics. As the main contribution, we show that decreasing the input audio recording sampling frequency down to 5 kHz preserves most of the accuracy and, in some cases, even slightly outperforms the standard approach.
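The core resampling idea can be illustrated with a small sketch: keep the analysis frame rate (and therefore the network's time resolution) constant while lowering the input sampling rate, by scaling the hop and frame sizes with it. The 100 fps target and the specific rates below are assumptions for the sketch, not the paper's exact configuration.

# Keep the frame rate fixed while the sampling rate changes.
TARGET_FPS = 100  # frames per second fed to the beat-tracking network (assumed)

for sr in (44100, 22050, 11025, 5000):
    hop = sr // TARGET_FPS            # hop size in samples
    frame = 2 * hop                   # e.g. 50 % overlap
    print(f"sr={sr:>6} Hz  hop={hop:>4} samples  frame={frame:>4} samples "
          f"-> {sr / hop:.1f} fps")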
Piano chord analyzer
Poloček, Dominik ; Miklánek, Štěpán (referee) ; Ištvánek, Matěj (advisor)
The presented thesis deals with the analysis of chords by determining the frequencies of their components. The aim of the thesis is to outline methods for determining the fundamental frequencies of single and multiple notes and to implement a system that can determine chords using these methods. The implemented method (the spectral peak method), written in Python, uses the fast Fourier transform to represent the signal in the frequency domain and then searches for spectral maxima, which it evaluates as fundamental frequencies after proper checking. The spectral peak method was compared with the harmonic component modulus summation method and with a state-of-the-art system for transcribing recordings to MIDI (PianoTranscription) by running tests on the dataset created for this thesis (530 chord and note recordings). The best results were achieved by PianoTranscription (F = 0.74, E_tot = 0.23), the second best performing method is the spectral peak method with a known number of tones (F = 0.55, E_tot = 0.29), followed by the same method with an unknown number of tones (F = 0.52, E_tot = 0.38), and finally the harmonic component modulus summation method (F = 0.26, E_tot = 0.81). The limitations of the implemented system are the inability to determine the number of tones (it must be specified by the user) and the frequency minimum (138.59 Hz) below which the estimates are erroneous, which is probably due to the design of the piano and the winding of its strings.
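A rough sketch of a spectral-peak approach of the kind described above: take the FFT of a chord recording, pick prominent spectral peaks, and keep the lowest ones as fundamental-frequency candidates. The input file, thresholds, and the simple harmonic check are assumptions for illustration, not the thesis algorithm.

# Estimate fundamental-frequency candidates from spectral peaks.
import numpy as np
from scipy.signal import find_peaks
from scipy.io import wavfile

sr, x = wavfile.read("chord.wav")            # assumed input recording
x = x.astype(float)
if x.ndim > 1:
    x = x.mean(axis=1)                       # mix to mono

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)

peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1, distance=50)
candidates = freqs[peaks]
# Crude check: drop peaks that sit near an integer multiple of a lower peak (likely harmonics).
fundamentals = [f for f in candidates
                if not any(abs(f / g - round(f / g)) < 0.03 and round(f / g) > 1
                           for g in candidates if g < f)]
print("estimated fundamentals [Hz]:", np.round(fundamentals, 2))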
Chord structure detection in music recordings
Kučera, Ondřej ; Miklánek, Štěpán (referee) ; Ištvánek, Matěj (advisor)
This thesis deals with music information retrieval, namely automatic chord recognition in audio recordings. The thesis defines the concepts of chord and chroma features and describes the methods of converting the signal from the time domain to the frequency domain. The thesis explores methods for automatic chord detection; the state-of-the-art methods are based on deep learning. The thesis includes a system implemented in Python that allows chord detection from audio recordings. Individual recordings and associated chord labels can be visualized. The system offers a choice of methods for chord recognition – a method based on chord templates, a method using deep chroma vectors, and a method based on a convolutional neural network. The results of the methods are evaluated on a multi-genre dataset compiled from freely available annotations and recordings.
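The chord-template method mentioned in the abstract can be sketched as correlating each chroma frame with binary triad templates; only major and minor triads are built here, and the input file is an assumption, whereas the system described in the thesis covers further chord types and methods.

# Frame-wise chord labelling by matching chroma vectors against triad templates.
import numpy as np
import librosa

def triad_template(root, minor=False):
    t = np.zeros(12)
    t[[root, (root + (3 if minor else 4)) % 12, (root + 7) % 12]] = 1.0
    return t / np.linalg.norm(t)

names, templates = [], []
for root, name in enumerate("C C# D D# E F F# G G# A A# B".split()):
    names += [name, name + "m"]
    templates += [triad_template(root), triad_template(root, minor=True)]
T = np.stack(templates)                              # (24 templates, 12 pitch classes)

y, sr = librosa.load("song.wav", sr=22050)           # assumed input recording
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
chroma = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)

labels = [names[i] for i in np.argmax(T @ chroma, axis=0)]  # best template per frame
print(labels[:20])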
Analysis of automatic parameter extraction on piano recordings
Kaplan, Josef ; Miklánek, Štěpán (referee) ; Ištvánek, Matěj (advisor)
This bachelor thesis deals with the analysis of the accuracy of automatic parameter extraction, mainly from piano recordings. The issue is described from both a technical and a musical perspective. The thesis summarizes knowledge from the field of music theory and from the automatic detection of parameters that can be obtained from piano recordings, focusing on the detection of onsets, beats, downbeats, pitch, and tempo. The analysis of piano recordings is realized using the Python programming language. The output is a set of scripts that perform parameter detection using user-selected methods commonly used to compute these parameters. The accuracy of the individual methods is then tested against annotations from different datasets, focusing primarily on piano recordings. The final part contains an evaluation based on selected metrics with an objective comparison.
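For orientation, the kinds of detectors compared in such a thesis can be sketched with librosa alone (the thesis may also rely on other libraries); the input file below is an assumption.

# Onset, beat, and tempo estimation on a piano recording.
import librosa

y, sr = librosa.load("piano.wav", sr=22050)  # assumed input recording

onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr, units="time")

print(f"estimated tempo: {float(tempo):.1f} BPM")
print(f"first onsets [s]: {onsets[:5]}")
print(f"first beats  [s]: {beats[:5]}")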
Web application for visualization of music recording parameters
Klimeš, Martin ; Ištvánek, Matěj (referee) ; Miklánek, Štěpán (advisor)
This thesis focuses on the development of a web application for visualizing musical parameters. The goal is to provide users with an environment where they can easily visualize parameters of any music recording and compare these parameters across different interpretations of the same composition. The musical parameters visualized in the application are based on the field of Music Information Retrieval. For each of these visualizations, the application implements various settings that are saved to a database for the logged-in user, allowing them to adjust the visualization display according to their individual needs. The reactive Vue.js framework was used for the client side, the Flask framework for the server side, and the PostgreSQL relational database system for data storage.
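A minimal sketch of the server side of such an architecture: a Flask endpoint that returns a parameter of a stored recording as JSON for the client to plot. The route name, storage path, and feature choice are assumptions, not the application's actual API.

# Flask endpoint returning chroma data for client-side visualization.
from flask import Flask, jsonify
import librosa
import numpy as np

app = Flask(__name__)

@app.route("/api/recordings/<name>/chroma")
def chroma(name):
    y, sr = librosa.load(f"uploads/{name}.wav", sr=22050)   # assumed storage layout
    c = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=2048)
    times = librosa.frames_to_time(np.arange(c.shape[1]), sr=sr, hop_length=2048)
    return jsonify({"times": times.tolist(), "chroma": c.tolist()})

if __name__ == "__main__":
    app.run(debug=True)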
Deep learning modelling of reverberation effects
Bilkovič, Ondrej ; Schimmel, Jiří (referee) ; Miklánek, Štěpán (advisor)
This master’s thesis deals with the theory of reverberation and ways of artificially simulating it. It explains the basic workings of machine learning, categorizes neural networks, and discusses their use in audio signal processing. The result of the thesis is the implementation of multiple neural network architectures for modelling a reverberation effect and parametrizing its controls. The quality of these networks is judged by objective tests and a subjective listening test. The best performing model is capable of modelling a reverberation effect with relatively good quality and parametrizing its control for decay time.
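A very rough sketch, not the thesis architecture, of what conditioning an audio network on an effect parameter can look like: a normalised decay-time value is concatenated to each input frame of a small recurrent model. PyTorch, the layer sizes, and the conditioning scheme are all assumptions for illustration.

# Toy recurrent model conditioned on a decay-time control value.
import torch
import torch.nn as nn

class CondReverbNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, dry, decay):
        # dry: (batch, samples, 1); decay: (batch, 1), normalised decay-time control
        cond = decay.unsqueeze(1).expand(-1, dry.shape[1], -1)
        h, _ = self.rnn(torch.cat([dry, cond], dim=-1))
        return self.out(h)                     # predicted wet signal

net = CondReverbNet()
wet = net(torch.randn(4, 1024, 1), torch.rand(4, 1))
print(wet.shape)  # torch.Size([4, 1024, 1])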
Exploring the Possibilities of Automated Annotation of Classical Music with Abrupt Tempo Changes
Ištvánek, Matěj ; Miklánek, Štěpán
In this paper, we introduce options for automatic measure detection based on synchronization, beat detection, and downbeat detection strategies. We evaluate the proposed methods on two motifs from the dataset of Leos Janacek's string quartet music. We use specific user-driven metrics to capture annotation efficiency, simulating a scenario where a musicologist has to use the output of an automated system to create ground-truth annotations for given recordings. In the case of the first motif, synchronization outperformed the other methods by detecting most of the measure positions correctly. This procedure was also the most suitable for the second motif: although it achieved a low number of correct detections, the vast majority of transferred time positions fell within the outer tolerance window. Therefore, in most cases, only shifting operations were needed, resulting in higher annotation efficiency. The results suggest that state-of-the-art downbeat tracking is not yet efficient for expressive music.
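The user-driven evaluation described above can be sketched as classifying each reference measure position as correctly detected (inner tolerance window), merely needing a shift (outer window), or missed entirely; the window sizes below are illustrative assumptions, not the paper's values.

# Count detections per annotation-effort category.
import numpy as np

def annotation_effort(reference, detected, inner=0.07, outer=0.5):
    reference, detected = np.asarray(reference), np.asarray(detected)
    correct = shift = missed = 0
    for r in reference:
        d = np.abs(detected - r).min() if detected.size else np.inf
        if d <= inner:
            correct += 1          # usable as-is
        elif d <= outer:
            shift += 1            # only a shifting operation needed
        else:
            missed += 1           # must be annotated manually
    return correct, shift, missed

print(annotation_effort([1.0, 2.1, 3.3, 4.6], [1.02, 2.4, 4.58]))  # -> (2, 1, 1)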
Implementation of Waveshaper Audio Effect
Leitgeb, David ; Miklánek, Štěpán (referee) ; Schimmel, Jiří (advisor)
The aim of this thesis is the implementation of a non-linear audio effect called a waveshaper. This type of distortion effect contains the following building blocks: a user-defined transfer function, several types of filters, and an oversampling processor with multiple stages of oversampling. The first prototype of this audio effect was implemented using Matlab and its Audio Toolbox extension. Due to certain limitations of this prototype, the whole audio effect was later completely rewritten in C++. This new implementation uses the JUCE framework, which is mainly used for audio application development. The transition to this framework allowed real-time editing of the transfer function and a VST3 build of the effect. In addition to a brief introduction to the system types used, the motivation for oversampling, and the description of the implementation of both prototypes, this thesis also includes graphical examples demonstrating their correct functionality. Audio files related to these examples are included in the electronic attachment.
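The waveshaping-with-oversampling idea can be sketched briefly: upsample the signal, apply a nonlinear transfer function (tanh here as a stand-in for the user-defined curve), then downsample back to suppress aliasing. The 4x oversampling factor and drive value are assumptions, and this Python sketch is only an illustration of the principle, not the Matlab or C++/JUCE implementation.

# Waveshaper with simple polyphase oversampling.
import numpy as np
from scipy.signal import resample_poly

def waveshape(x, drive=4.0, oversample=4):
    up = resample_poly(x, oversample, 1)          # upsample by the oversampling factor
    shaped = np.tanh(drive * up)                  # nonlinear transfer function
    return resample_poly(shaped, 1, oversample)   # back to the original rate

sr = 44100
t = np.arange(sr) / sr
out = waveshape(np.sin(2 * np.pi * 440 * t))
print(out.shape, out.min(), out.max())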
