National Repository of Grey Literature
The 2022 Election in the United States: Reliability of a Linear Regression Model
Kalina, Jan ; Vidnerová, Petra ; Večeř, M.
In this paper, the 2022 United States election to the House of Representatives is analyzed by means of a linear regression model. After the election process is explained, the popular vote is modeled as a response of 8 predictors (demographic characteristics) at the state-wide level. The main focus is on verifying the reliability of the two obtained regression models, namely the full model with all predictors and the most relevant submodel found by hypothesis testing (with 4 relevant predictors). The individual topics related to assessing reliability used in this study include confidence intervals for predictions, multicollinearity, and outlier detection. While the predictions in the submodel that includes only the relevant predictors are very similar to those in the full model, it turns out that the submodel has better reliability properties than the full model, especially in terms of narrower confidence intervals for the values of the popular vote.
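The confidence intervals for predictions mentioned in the abstract can be sketched as follows. This is a minimal illustration on synthetic data with a single predictor, not the paper's actual 8-predictor election model: fit ordinary least squares, then build a t-based confidence interval for the mean response at a new observation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])   # intercept + one predictor
y = X @ np.array([2.0, 1.5]) + rng.standard_normal(n)      # synthetic response

beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # OLS fit
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])                      # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 5.0])                                  # new observation
y_hat = x0 @ beta
se = np.sqrt(s2 * (x0 @ XtX_inv @ x0))                     # s.e. of the mean response
t_crit = stats.t.ppf(0.975, df=n - X.shape[1])
ci = (y_hat - t_crit * se, y_hat + t_crit * se)            # 95% confidence interval
```

A submodel with fewer predictors typically reduces the quadratic form `x0 @ XtX_inv @ x0`, which is one way the narrower intervals reported in the abstract can arise.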
Application Of Implicitly Weighted Regression Quantiles: Analysis Of The 2018 Czech Presidential Election
Kalina, Jan ; Vidnerová, Petra
Regression quantiles can be characterized as popular tools for complex modeling of a continuous response variable conditional on one or more given independent variables. Because they are, however, vulnerable to leverage points in the regression model, an alternative approach denoted as implicitly weighted regression quantiles has been proposed. The aim of the current work is to apply them to the results of the second round of the 2018 presidential election in the Czech Republic. The election results are modeled as a response of 4 demographic or economic predictors over the 77 Czech counties. The analysis represents the first application of the implicitly weighted regression quantiles to data with more than one regressor. The results reveal the implicitly weighted regression quantiles to be indeed more robust with respect to leverage points than standard regression quantiles. If, however, the model does not contain leverage points, both versions of the regression quantiles yield very similar results. Thus, the election dataset serves here as an illustration of the usefulness of the implicitly weighted regression quantiles.
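Standard regression quantiles, which the abstract uses as the baseline, minimize the pinball (check) loss. A minimal sketch on synthetic data, assuming a single regressor and using a general-purpose optimizer rather than the specialized linear-programming routines of production software:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.standard_normal(100)   # synthetic linear relationship

def pinball_loss(beta, tau):
    """Check loss: residuals above the line cost tau, below cost (1 - tau)."""
    r = y - (beta[0] + beta[1] * x)
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

# median regression corresponds to tau = 0.5
fit = minimize(pinball_loss, x0=[0.0, 0.0], args=(0.5,), method="Nelder-Mead")
b0, b1 = fit.x
```

The implicitly weighted variant studied in the paper additionally downweights observations with large residuals, which is what protects it against leverage points.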
On kernel-based nonlinear regression estimation
Kalina, Jan ; Vidnerová, Petra
This paper is devoted to two important kernel-based tools of nonlinear regression: the Nadaraya-Watson estimator, which can be characterized as a successful statistical method in various econometric applications, and regularization networks, which represent machine learning tools very rarely used in econometric modeling. This paper recalls both approaches and describes their common features as well as differences. For the Nadaraya-Watson estimator, we explain its connection to the conditional expectation of the response variable. Our main contribution is a numerical analysis of suitable data with an economic motivation and a comparison of the two nonlinear regression tools. Our computations reveal some tools for the Nadaraya-Watson estimator in R software to be unreliable, and others not ready for routine use. On the other hand, regression modeling by means of regularization networks is much simpler and also turns out to be more reliable in our examples. These examples also bring unique evidence revealing the need for a careful choice of the parameters of regularization networks.
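The Nadaraya-Watson estimator discussed above is a locally weighted average of the response: the estimate of the conditional expectation at a point is a kernel-weighted mean of the observed responses. A minimal sketch with a Gaussian kernel on synthetic data:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Gaussian-kernel estimate of E[Y | X = x0] with bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)   # kernel weights centered at x0
    return np.sum(w * y) / np.sum(w)         # weighted average of responses

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = x ** 2 + 0.05 * rng.standard_normal(200)  # noisy quadratic signal
est = nadaraya_watson(0.5, x, y, h=0.05)      # should be close to 0.5**2 = 0.25
```

The bandwidth `h` plays the same role as the kernel width and regularization parameter in regularization networks, which is why the abstract stresses careful parameter choice.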
Evolutionary approaches to image representation and generation
Romanský, Patrik ; Neruda, Roman (advisor) ; Vidnerová, Petra (referee)
This thesis focuses on exploring different variants of evolutionary algorithms in the area of image data representation and generation. In contrast to the majority of similar works, this work takes a modular approach to the creation of evolutionary algorithms. The aim of this work is to create an extensible library for creating evolutionary algorithms and for comparing the algorithms on real image data. The compared types of evolutionary algorithms are the genetic algorithm, CMA-ES, and differential evolution. Based on experiments, we assessed the success rate of the individual evolutionary algorithms and proposed a parallelization of the CMA-ES method.
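Of the algorithms compared in the thesis, differential evolution is the most compact to sketch. Below is a minimal DE/rand/1/bin loop minimizing the sphere function; this is a generic illustration of the algorithm family, not the thesis's library or its image-data fitness functions:

```python
import numpy as np

def differential_evolution(f, dim, pop_size=20, F=0.5, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomially crossover, keep the trial if it is no worse."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)                     # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # ensure one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

sphere = lambda v: float(np.sum(v ** 2))
x_best, f_best = differential_evolution(sphere, dim=5)
```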
Least Weighted Absolute Value Estimator with an Application to Investment Data
Vidnerová, Petra ; Kalina, Jan
While linear regression represents the most fundamental model in current econometrics, the least squares (LS) estimator of its parameters is notoriously known to be vulnerable to the presence of outlying measurements (outliers) in the data. The class of M-estimators, thoroughly investigated since the groundbreaking work by Huber in the 1960s, belongs to the classical robust estimation methodology (Jurečková et al., 2019). M-estimators are nevertheless not robust with respect to leverage points, which are defined as values outlying on the horizontal axis (i.e. outlying in one or more regressors). The least trimmed squares estimator therefore seems a more suitable highly robust method, i.e. one with a high breakdown point (Rousseeuw & Leroy, 1987). Its version with weights implicitly assigned to individual observations, denoted as the least weighted squares estimator, was proposed and investigated in Víšek (2011). A trimmed estimator based on the L1-norm is available as the least trimmed absolute value estimator (Hawkins & Olive, 1999), which has, however, not acquired the attention of practical econometricians. Moreover, to the best of our knowledge, its version with weights implicitly assigned to individual observations seems to be still lacking.
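The trimming idea behind the estimators discussed above can be sketched as follows: minimize the sum of the h smallest absolute residuals, so that gross outliers are simply ignored. This is a rough approximation, assuming random elemental starts and least-squares refits on the kept subset in place of exact L1 fits, not the algorithm of Hawkins & Olive (1999):

```python
import numpy as np

def lta_fit(X, y, h, n_starts=50, n_csteps=10, seed=0):
    """Approximate trimmed fit: keep the h observations with the smallest
    absolute residuals and refit, restarting from random elemental sets."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)       # elemental start
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for _ in range(n_csteps):                        # concentration steps
            keep = np.argsort(np.abs(y - X @ beta))[:h]  # h smallest |residuals|
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort(np.abs(y - X @ beta))[:h].sum()    # trimmed L1 objective
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

rng = np.random.default_rng(7)
n = 100
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n)
y[:15] += 25.0                                           # gross vertical outliers
beta_rob = lta_fit(X, y, h=int(0.75 * n))
```

Because 25% of the observations are trimmed, the 15 contaminated points do not pull the fit away from the true slope, unlike plain least squares.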
Regression for High-Dimensional Data: From Regularization to Deep Learning
Kalina, Jan ; Vidnerová, Petra
Regression modeling is well known as a fundamental task in current econometrics. However, classical estimation tools for the linear regression model are not applicable to high-dimensional data. Although there is no agreement on a formal definition of high-dimensional data, these are usually understood either as data with the number of variables p exceeding (possibly largely) the number of observations n, or as data with a large p in the order of (at least) thousands. In both situations, which appear in various fields including econometrics, the analysis of the data is difficult due to the so-called curse of dimensionality (cf. Kalina (2013) for a discussion). Compared to linear regression, nonlinear regression modeling with an unknown shape of the relationship of the response to the regressors requires even more intricate methods.
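The p > n situation described above can be illustrated with ridge regularization, one of the standard remedies the title alludes to. A minimal sketch on synthetic data: with 200 variables and 50 observations, X'X is singular and ordinary least squares is ill-posed, but adding a ridge penalty makes the normal equations solvable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                          # far more variables than observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]   # sparse true signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# OLS would require inverting the singular matrix X'X; the ridge penalty
# lam * I restores invertibility at the price of some shrinkage bias.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
y_fit = X @ beta_ridge
```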
Implicitly weighted robust estimation of quantiles in linear regression
Kalina, Jan ; Vidnerová, Petra
Estimation of quantiles represents a very important task in econometric regression modeling, and the standard regression quantile machinery is well developed and popular, with a large number of econometric applications. Although regression quantiles are commonly known as robust tools, they are vulnerable to the presence of leverage points in the data. We propose here a novel approach for linear regression based on a specific version of the least weighted squares estimator, together with an additional estimator based only on observations between two different novel quantiles. The new methods are conceptually simple and comprehensible. Without the ambition to derive theoretical properties of the novel methods, numerical computations reveal them to perform comparably to standard regression quantiles if the data are not contaminated by outliers. Moreover, the new methods seem much more robust on a simulated dataset with severe leverage points.
Meta-Parameters of Kernel Methods and Their Optimization
Vidnerová, Petra ; Neruda, Roman
In this work we deal with the problem of metalearning for kernel-based methods. Among the kernel methods we focus on the support vector machine (SVM), which has become a method of choice in a wide range of practical applications, and on the regularization network (RN), which has a sound background in approximation theory. We discuss the role of the kernel function in learning, and we explain several search methods for kernel function optimization, including grid search, genetic search, and simulated annealing. The proposed methodology is demonstrated in experiments using benchmark data sets.
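The grid search mentioned above can be sketched for a regularization network: fit a kernel system for each candidate kernel width and regularization strength, and keep the pair with the lowest validation error. This is a minimal illustration on synthetic data, assuming a Gaussian kernel and a simple holdout split rather than the paper's benchmark setup:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(80)
x_tr, y_tr = x[::2], y[::2]             # training half
x_va, y_va = x[1::2], y[1::2]           # validation half

def gauss_kernel(a, b, width):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

def rn_fit_predict(width, lam):
    """Regularization network: solve (K + lam*I)c = y, predict via the kernel."""
    K = gauss_kernel(x_tr, x_tr, width)
    c = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    return gauss_kernel(x_va, x_tr, width) @ c

grid = [(w, l) for w in [0.01, 0.05, 0.1, 0.3] for l in [1e-4, 1e-2, 1.0]]
scores = {p: float(np.mean((rn_fit_predict(*p) - y_va) ** 2)) for p in grid}
best_params = min(scores, key=scores.get)
best_mse = scores[best_params]
```

Genetic search and simulated annealing replace the exhaustive loop over `grid` with a stochastic exploration of the same meta-parameter space.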
Behaviour Emergence of Robotic Agents: Neuroevolution
Vidnerová, Petra ; Slušný, Stanislav ; Neruda, Roman
This paper deals with the emergence of intelligent behaviour of mobile robotic agents using evolutionary learning. Evolutionary learning is demonstrated in several experiments, including different neural network architectures.
