National Repository of Grey Literature: 54 records found (records 11 - 20)
Monte Carlo simulations of electron scattering in scanning transmission electron microscopy
Záchej, Samuel ; Hrubanová, Kamila (referee) ; Krzyžánek, Vladislav (advisor)
This thesis deals with electron scattering in STEM microscopy on objects of different shapes, such as a cuboid, a sphere and a hollow capsule. Monte Carlo simulations are used to describe multiple electron scattering. Besides the theoretical analysis of electron scattering and simulation methods, the thesis contains the design and implementation of an algorithm simulating electron scattering in the given objects. In addition, it proposes a robustness evaluation of the simulation, based on a comparison between the simulated results and known signals for a given object. The reliability of the algorithm was verified by experimental measurements of electron scattering on a carbon layer.
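The thesis's own algorithm is not reproduced in the record; purely as an illustration of the multiple-scattering Monte Carlo scheme the abstract describes, the following sketch tracks electrons through a homogeneous sphere. The mean free path, the screened-Rutherford angle sampling and the detector half-angles are simplified placeholder values, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_direction(d, alpha):
    """Rotate the unit direction d by a polar angle drawn from the screened
    Rutherford distribution (screening parameter alpha) and a uniform azimuth."""
    u = rng.random()
    cos_t = 1.0 - 2.0 * alpha * u / (1.0 + alpha - u)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    phi = 2.0 * np.pi * rng.random()
    # orthonormal basis (e1, e2) perpendicular to the current direction
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, a)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    return cos_t * d + sin_t * (np.cos(phi) * e1 + np.sin(phi) * e2)

def simulate(n_electrons=20000, radius=50.0, mfp=30.0, alpha=5e-3,
             bf_max=0.010, adf_min=0.050, adf_max=0.200):
    """Track electrons through a homogeneous sphere (lengths in nm, detector
    half-angles in rad) and return the fraction reaching each STEM detector."""
    counts = {"BF": 0, "ADF": 0, "other": 0}
    for _ in range(n_electrons):
        pos = np.array([0.0, 0.0, -radius])   # beam enters on the optic axis
        d = np.array([0.0, 0.0, 1.0])         # travelling along +z
        while True:
            pos = pos + rng.exponential(mfp) * d   # free flight
            if pos @ pos > radius ** 2:            # electron has left the sphere
                break
            d = scatter_direction(d, alpha)        # elastic scattering event
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))  # exit angle w.r.t. optic axis
        if theta < bf_max:
            counts["BF"] += 1
        elif adf_min <= theta < adf_max:
            counts["ADF"] += 1
        else:
            counts["other"] += 1
    return {k: v / n_electrons for k, v in counts.items()}

print(simulate())
```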
Statistical Method Selection Matters: Vanilla Methods in Regression May Yield Misleading Results
Kalina, Jan
The primary aim of this work is to illustrate the importance of choosing appropriate methods for the statistical analysis of economic data. Typically, several alternative versions of common statistical methods exist for every statistical modeling task, and the most habitually used (“vanilla”) versions may yield rather misleading results in nonstandard situations. Linear regression is considered here as the most fundamental econometric model. First, the analysis of a world tourism dataset is presented, where the number of international arrivals is modeled for 140 countries of the world as a response of 14 pillars (indicators) of the Travel and Tourism Competitiveness Index. Heteroscedasticity is clearly present in the dataset. However, the Aitken estimator, which would be the standard remedy in such a situation, turns out to be very inappropriate here; regression quantiles represent a much more suitable solution. The second illustration, with artificial data, reveals standard regression quantiles to be unsuitable for data contaminated by outlying values; their recently proposed robust version turns out to be much more appropriate. Both illustrations show that choosing suitable methods represents an important (and often difficult) part of the analysis of economic data.
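The tourism dataset is not part of the record, so the following is only a minimal sketch of the comparison the abstract describes, on synthetic heteroscedastic data: an ordinary ("vanilla") least squares fit, a weighted (Aitken-type) fit, and a median regression quantile, all via statsmodels. The data-generating process and the weights are assumptions made for the illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic heteroscedastic data standing in for the tourism dataset:
# the error standard deviation grows with x.
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2 + 0.3 * x, size=n)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                       # "vanilla" least squares
wls = sm.WLS(y, X, weights=1.0 / (0.2 + 0.3 * x) ** 2).fit()   # Aitken-type weighted fit
median = sm.QuantReg(y, X).fit(q=0.5)                          # 0.5 regression quantile

for name, res in [("OLS", ols), ("WLS", wls), ("median regression", median)]:
    print(f"{name:>18}: intercept={res.params[0]:.3f}, slope={res.params[1]:.3f}")
```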
Some Robust Approaches to Reducing the Complexity of Economic Data
Kalina, Jan
The recent advent of complex (and potentially big) data in economics requires modern and effective tools for their analysis, including tools for reducing the dimensionality (complexity) of the given data. This paper starts by recalling the importance of Big Data in economics and by characterizing the main categories of dimension reduction techniques. While numerous dimensionality reduction techniques are already available, this work focuses on methods that are robust to the presence of outlying measurements (outliers) in economic data. In particular, methods based on implicit weights assigned to individual observations are developed in this paper. As its main contribution, this paper proposes three novel robust methods of dimension reduction. One method performs dimension reduction within robust regularized linear regression, namely a sparse version of the least weighted squares estimator. The other two methods are robust versions of feature extraction methods popular in econometrics: robust principal component analysis and robust factor analysis.
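The paper's exact estimators are not reproduced here; the sketch below only illustrates the implicit-weighting idea behind the least weighted squares estimator, approximated by iterating weighted least squares with weights assigned according to the ranks of the squared residuals. The linearly decreasing weight function and the trimming constant are illustrative assumptions.

```python
import numpy as np

def least_weighted_squares(X, y, trim=0.75, n_iter=50):
    """Crude iterative approximation of the least weighted squares (LWS) idea:
    weights are assigned to observations according to the ranks of their squared
    residuals (here a linearly decreasing weight function that zeroes out the
    worst (1 - trim) share of observations), and weighted least squares is
    repeated until the coefficients stabilise."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS as a start
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2
        ranks = np.argsort(np.argsort(r2))           # 0 = smallest residual
        w = np.clip(1.0 - ranks / (trim * n), 0.0, 1.0)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.allclose(beta_new, beta):
            break
        beta = beta_new
    return beta

# Toy data with 10 % gross outliers in the response
rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=n)
y[:10] += 15.0
print("ordinary LS :", np.linalg.lstsq(X, y, rcond=None)[0].round(3))
print("approx. LWS :", least_weighted_squares(X, y).round(3))
```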
Robustness of Fuzzy Control
Hebelka, Marek
Hebelka M. Robustness of fuzzy control. Diploma thesis. Brno: Mendel University, 2023. The thesis assesses the robustness of a fuzzy controller when rejecting disturbances in the form of step changes of 10, 20, 30 and 40 % of the input signal and when the plant time constants are adjusted by 5, 10, 15 and 20 % of their original values. The thesis describes the basics of fuzzy control, simulation tools, classical controllers, and plants. The settings of the individual systems are then described, and the effects of the key parameters of the controlled systems are analyzed. The outcome of the work is an evaluation and assessment of the robustness of the controllers in the examined control systems.
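The fuzzy controller itself is not part of the record, so the sketch below only illustrates the kind of robustness sweep the abstract describes: a first-order plant is simulated under a generic discrete controller (a simple PI law standing in for the fuzzy controller, which is an assumption made here), with disturbance steps of 10-40 % of the input signal and time constants increased by 5-20 %.

```python
import numpy as np

def simulate(tau, k_plant, controller, d_step, t_end=50.0, dt=0.01, setpoint=1.0):
    """Simulate a first-order plant y' = (k*u - y)/tau under a discrete
    controller, with an additive input disturbance d_step switched on at
    half of the simulation horizon. Returns the output trajectory."""
    n = int(t_end / dt)
    y = np.zeros(n)
    state = {"integral": 0.0}
    for i in range(1, n):
        d = d_step if i * dt > t_end / 2 else 0.0           # disturbance step
        u = controller(setpoint - y[i - 1], state, dt) + d   # control + disturbance
        y[i] = y[i - 1] + dt * (k_plant * u - y[i - 1]) / tau
    return y

def pi_controller(error, state, dt, kp=2.0, ki=1.0):
    """Placeholder PI law standing in for the thesis's fuzzy controller."""
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

# Sweep the disturbance size and the plant time constant as in the abstract.
for d_pct in (10, 20, 30, 40):
    for tau_pct in (5, 10, 15, 20):
        tau = 2.0 * (1 + tau_pct / 100)                 # perturbed time constant
        y = simulate(tau, k_plant=1.0, controller=pi_controller, d_step=d_pct / 100)
        dev = np.max(np.abs(y[len(y) // 2:] - 1.0))     # worst deviation after the step
        print(f"disturbance {d_pct:>2}%  tau +{tau_pct:>2}%  max deviation {dev:.3f}")
```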
Safe and Secure High-Risk AI: Evaluation of Robustness
Binterová, Eliška ; Špelda, Petr (advisor) ; Střítecký, Vít (referee)
The aim of the thesis is to examine Invariant Risk Minimization (IRM) as an existing method for achieving model robustness and to assess whether it could serve as a means of conformity assessment in the emerging legislative framework of the European Artificial Intelligence Act. Research shows that many cases of erroneous performance in AI systems are caused by machine learning models lacking robustness to changes in data distributions and thus being unable to generalize properly to new environments. In order to achieve reliable performance, the models must exhibit a certain level of robustness to these changes. IRM is a relatively new method designed to achieve such outcomes, which is closely aligned with the objectives of the EU AI Act and its goal of trustworthy AI. The thesis therefore examines the congruence between the IRM method and the requirements of the EU AI Act and asks, through an analysis of existing empirical and theoretical results, whether IRM can serve as a universal method for ensuring safe and secure AI compliant with European legal requirements.
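As a minimal numerical illustration of IRM itself (not of the thesis's legal analysis), the sketch below applies the IRMv1 penalty to a linear predictor with squared loss on two synthetic environments; the toy data, the closed-form penalty gradient and the optimizer are assumptions made for the illustration, not the thesis's experiments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def make_env(n, spurious_coef):
    """Toy environment: y depends on x1 stably and on x2 spuriously,
    with the spurious strength varying across environments."""
    x1 = rng.normal(size=n)
    y = x1 + 0.1 * rng.normal(size=n)
    x2 = spurious_coef * y + rng.normal(size=n)
    return np.column_stack([x1, x2]), y

envs = [make_env(500, 2.0), make_env(500, -1.0)]

def irm_objective(beta, lam):
    """Per-environment squared-error risk plus the IRMv1 penalty. For a linear
    predictor and squared loss, the gradient of the risk with respect to the
    scalar dummy classifier w (at w = 1) is -2 * mean(residual * prediction),
    so the penalty is simply its square."""
    total = 0.0
    for X, y in envs:
        pred = X @ beta
        res = y - pred
        grad_w = -2.0 * np.mean(res * pred)
        total += np.mean(res ** 2) + lam * grad_w ** 2
    return total

erm = minimize(irm_objective, x0=np.zeros(2), args=(0.0,)).x   # plain pooled ERM
irm = minimize(irm_objective, x0=np.zeros(2), args=(1e4,)).x   # strong IRM penalty
print("ERM coefficients:", erm.round(3))   # typically puts weight on the spurious x2
print("IRM coefficients:", irm.round(3))   # should shift weight toward the stable x1
```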
Application of Implicitly Weighted Regression Quantiles: Analysis of the 2018 Czech Presidential Election
Kalina, Jan ; Vidnerová, Petra
Regression quantiles are popular tools for the complex modeling of a continuous response variable conditional on one or more given independent variables. Because they are vulnerable to leverage points in the regression model, however, an alternative approach denoted implicitly weighted regression quantiles has been proposed. The aim of the current work is to apply them to the results of the second round of the 2018 presidential election in the Czech Republic. The election results are modeled as a response of 4 demographic or economic predictors over the 77 Czech counties. The analysis represents the first application of the implicitly weighted regression quantiles to data with more than one regressor. The results reveal the implicitly weighted regression quantiles to be indeed more robust with respect to leverage points than standard regression quantiles. If the model does not contain leverage points, however, both versions of the regression quantiles yield very similar results. Thus, the election dataset serves here as an illustration of the usefulness of the implicitly weighted regression quantiles.
Asset-Liability Management: Application of Stochastic Programming with Endogenous Randomness and Contamination
Rusý, Tomáš ; Kopa, Miloš (advisor) ; Consigli, Giorgio (referee) ; Branda, Martin (referee)
Title: Asset-Liability Management: Application of Stochastic Programming with Endogenous Randomness and Contamination Author: RNDr. Tomáš Rusý Department: Department of Probability and Mathematical Statistics Supervisor: doc. RNDr. Ing. Miloš Kopa, PhD., Department of Probability and Mathematical Statistics Abstract: This thesis discusses a stochastic programming asset-liability management model that deals with decision-dependent randomness, followed by a contamination analysis. The main model focuses on a pricing problem and the connected asset-liability management problem describing the typical life of a consumer loan. The endogeneity stems from the possibility of the customer rejecting the loan, the possibility of the customer defaulting on the loan, and the possibility of prepayment, all of which are affected by the company's decision on the interest rate of the loan. Another important factor, which plays a major role for the liabilities, is the price of money in the market. There, we focus on the scenario generation procedure and develop a new calibration method for estimating the Hull-White model [Hull and White, 1990] under the real-world measure. We define the method for the general class of one-factor short-rate models and perform an extensive analysis to assess the estimation performance and...
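The thesis's calibration method under the real-world measure is not reproduced in the record; the sketch below only shows the basic scenario-generation step for the Hull-White short-rate model via an Euler scheme, with a constant theta standing in for the term-structure fit. All parameter values are illustrative assumptions.

```python
import numpy as np

def hull_white_paths(r0, a, sigma, theta, horizon, dt, n_paths, seed=4):
    """Euler discretisation of the Hull-White short-rate model
        dr_t = (theta(t) - a * r_t) dt + sigma dW_t.
    theta is taken constant here as a simplifying placeholder; in practice it
    would be fitted to the initial term structure (or, as in the thesis,
    calibrated under the real-world measure)."""
    rng = np.random.default_rng(seed)
    n_steps = int(horizon / dt)
    r = np.full((n_paths, n_steps + 1), r0)
    for t in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        r[:, t + 1] = r[:, t] + (theta - a * r[:, t]) * dt + sigma * dw
    return r

# Example: monthly scenarios over five years, as might feed an ALM model.
paths = hull_white_paths(r0=0.02, a=0.1, sigma=0.01, theta=0.004,
                         horizon=5.0, dt=1 / 12, n_paths=1000)
print("mean short rate at 5Y:", round(float(paths[:, -1].mean()), 4))
```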
Flexibility, Robustness and Discontinuities in Nonparametric Regression Approaches
Maciak, Matúš ; Hušková, Marie (advisor) ; Hlávka, Zdeněk (referee) ; Horová, Ivanka (referee)
Thesis title: Flexibility, Robustness and Discontinuity in Nonparametric Regression Approaches Author: Mgr. Matúš Maciak, M.Sc. Department: Department of Probability and Mathematical Statistics, Charles University in Prague Supervisor: Prof. RNDr. Marie Hušková, DrSc. huskova@karlin.mff.cuni.cz Abstract: In this thesis we focus on local polynomial estimation approaches to an unknown regression function, while also taking into account robustness issues such as the presence of outlying observations or heavy-tailed distributions of the random errors. We discuss the most common method used in such settings, the so-called local polynomial M-smoothers, and present the main statistical properties and asymptotic inference for this method. The M-smoother method is especially suitable for such cases because of its natural robust flavour, which can deal with outliers as well as heavy-tailed random errors. Another important issue we focus on in this thesis is discontinuity, where we allow for sudden changes (discontinuity points) in the unknown regression function or its derivatives, respectively. We propose a discontinuity model with different variability structures for both independent and dependent random errors, while the discontinuity points are treated in a...
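The thesis's asymptotic theory cannot be conveyed in a snippet, but as a minimal sketch of the estimator the abstract names, the following implements a local linear M-smoother with a Huber psi-function, fitted at each grid point by iteratively reweighted least squares. The Gaussian kernel, bandwidth, tuning constant and the toy data with a jump are illustrative assumptions.

```python
import numpy as np

def local_linear_m_smoother(x, y, grid, h=0.3, c=1.345, n_iter=20):
    """Local linear M-smoother: at each grid point x0, minimise
    sum_i rho_Huber((y_i - b0 - b1*(x_i - x0)) / s) * K((x_i - x0) / h)
    by iteratively reweighted least squares (Gaussian kernel K, MAD scale s)."""
    fit = np.empty(len(grid))
    for j, x0 in enumerate(grid):
        z = x - x0
        k = np.exp(-0.5 * (z / h) ** 2)                 # kernel weights
        b = np.array([np.average(y, weights=k), 0.0])   # start from a local mean
        for _ in range(n_iter):
            r = y - (b[0] + b[1] * z)
            s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
            u = np.abs(r / s)
            psi_w = np.where(u <= c, 1.0, c / u)        # Huber weights psi(u)/u
            w = k * psi_w
            X = np.column_stack([np.ones_like(z), z])
            WX = X * w[:, None]
            b_new = np.linalg.solve(X.T @ WX, WX.T @ y)
            if np.allclose(b_new, b, atol=1e-8):
                break
            b = b_new
        fit[j] = b[0]
    return fit

# Regression function with a jump and a few gross outliers
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 300))
m = np.sin(2 * np.pi * x) + (x > 0.6)                   # discontinuity at 0.6
y = m + 0.2 * rng.standard_normal(300)
y[rng.choice(300, 15, replace=False)] += 4.0            # 5 % outliers
grid = np.linspace(0.05, 0.95, 50)
print(local_linear_m_smoother(x, y, grid, h=0.05).round(2))
```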
