National Repository of Grey Literature. 8 records found. Search took 0.01 seconds.
Sparse robust portfolio optimization via NLP regularizations
Branda, Martin ; Červinka, Michal ; Schwartz, A.
We deal with investment problems where we minimize a risk measure under a condition on the sparsity of the portfolio. Various risk measures are considered including Value-at-Risk and Conditional Value-at-Risk under normal distribution of returns and their robust counterparts are derived under moment conditions, all leading to nonconvex objective functions. We propose four solution approaches: a mixed-integer formulation, a relaxation of an alternative mixed-integer reformulation and two NLP regularizations. In a numerical study, we compare their computational performance on a large number of simulated instances taken from the literature.
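Under normality, Conditional Value-at-Risk has a closed form, which is what makes the objective explicit (and, once a sparsity constraint is added, nonconvex). A minimal sketch of the CVaR evaluation, assuming the standard closed form under normal returns; the function name and the two-asset data are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def cvar_normal(w, mu, Sigma, alpha=0.95):
    """Closed-form CVaR of the loss -w'r for returns r ~ N(mu, Sigma)."""
    z = norm.ppf(alpha)                 # standard normal VaR quantile
    sigma = np.sqrt(w @ Sigma @ w)      # portfolio volatility
    return -w @ mu + sigma * norm.pdf(z) / (1.0 - alpha)

# Illustrative two-asset example (data made up for the sketch)
mu = np.array([0.05, 0.03])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])
w = np.array([0.5, 0.5])
risk = cvar_normal(w, mu, Sigma)
var_level = -w @ mu + np.sqrt(w @ Sigma @ w) * norm.ppf(0.95)  # VaR, for comparison
```

A sparsity condition such as a bound on the number of nonzero weights on top of this objective is what leads to the mixed-integer and regularized NLP formulations compared in the paper.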
Diagnostics for Robust Regression: Linear Versus Nonlinear Model
Kalina, Jan
Robust statistical methods represent important tools for estimating parameters in linear as well as nonlinear econometric models. In contrast to least squares, they do not suffer from vulnerability to the presence of outlying measurements in the data. Nevertheless, they need to be accompanied by diagnostic tools for verifying their assumptions. In this paper, we propose an asymptotic Goldfeld-Quandt test for the regression median. It allows us to formulate a natural procedure for models with heteroscedastic disturbances, which is again based on the regression median. Further, we turn our attention to the nonlinear regression model. We focus on the nonlinear least weighted squares estimator, one of the recently proposed robust estimators of parameters in nonlinear regression. We study the residuals of this estimator and use a numerical simulation to reveal that they can be severely heteroscedastic even for data generated from a model with homoscedastic disturbances. Thus, we warn that the standard residuals of this robust nonlinear estimator may produce misleading results when used with standard diagnostic tools.
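The paper's test targets the regression median; as a reference point, the classical OLS-based Goldfeld-Quandt test compares residual variances of the first and last portions of a sample pre-ordered by the suspected variance-driving variable. A sketch assuming the textbook F-test form (the names and the one-third split are illustrative):

```python
import numpy as np
from scipy import stats

def goldfeld_quandt(X, y, split=1/3):
    """Classical Goldfeld-Quandt F-test: ratio of OLS residual variances
    of the last vs. first portion of a pre-ordered sample."""
    n, k = X.shape
    n1 = int(n * split)
    def rss(Xs, ys):
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        e = ys - Xs @ beta
        return e @ e
    F = (rss(X[-n1:], y[-n1:]) / (n1 - k)) / (rss(X[:n1], y[:n1]) / (n1 - k))
    return F, stats.f.sf(F, n1 - k, n1 - k)

# Illustrative data, already ordered by x; variance grows with x
rng = np.random.default_rng(0)
n = 300
x = np.sort(rng.uniform(0, 3, n))
X = np.column_stack([np.ones(n), x])
y_het = 1 + 2 * x + rng.normal(0, 1, n) * (0.2 + x)
F, p = goldfeld_quandt(X, y_het)
```

The asymptotic version proposed in the paper replaces the least squares fits with the regression median.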
On Exact Heteroscedasticity Testing for Robust Regression
Kalina, Jan ; Peštová, Barbora
The paper is devoted to the least weighted squares estimator, one of the highly robust estimators for the linear regression model. Novel permutation tests of heteroscedasticity are proposed. The asymptotic behavior of the permutation test statistics of the Goldfeld-Quandt and Breusch-Pagan tests is also investigated. A numerical experiment on real economic data is presented, which also shows how to construct a robust prediction model under heteroscedasticity.
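A permutation test of heteroscedasticity can be pictured as follows: under homoscedasticity the ordering of the residuals is exchangeable, so permuting them yields a null distribution for a variance-ratio statistic. This is an illustrative sketch only, not the paper's exact statistics:

```python
import numpy as np

def permutation_het_test(resid, n_perm=999, seed=None):
    """Permutation p-value for a variance-ratio statistic: the null
    distribution is obtained by randomly permuting the residuals."""
    rng = np.random.default_rng(seed)
    def stat(e):
        h = len(e) // 2
        return np.var(e[h:]) / np.var(e[:h])   # second-half vs first-half variance
    t0 = stat(resid)
    hits = sum(stat(rng.permutation(resid)) >= t0 for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)           # add-one correction

# Illustrative residuals whose spread grows along the ordering
rng = np.random.default_rng(1)
resid_het = rng.normal(size=200) * np.linspace(0.5, 3, 200)
p = permutation_het_test(resid_het, n_perm=199, seed=2)
```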
Some Robust Distances for Multivariate Data
Kalina, Jan ; Peštová, Barbora
Numerous methods of multivariate statistics and data mining suffer from the presence of outlying measurements in the data. This paper presents new distance measures suitable for continuous data. First, we consider a Mahalanobis distance suitable for high-dimensional data with the number of variables (largely) exceeding the number of observations. We propose its doubly regularized version, which combines a regularization of the covariance matrix with replacing the means of multivariate data by their regularized counterparts. We formulate explicit expressions for some versions of the regularization of the means, which can be interpreted as a denoised (i.e. robust) version of the standard means. Further, we propose a robust cosine similarity measure, which is based on implicit weighting of individual observations. We derive properties of the newly proposed robust cosine similarity, including a proof of its high robustness in terms of the breakdown point.
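The covariance half of such a doubly regularized distance can be illustrated on its own: shrinking the covariance matrix toward a scaled identity keeps the distance well defined when the number of variables exceeds the number of observations. A sketch under that assumption (the shrinkage form and the value of lam are illustrative, not the paper's):

```python
import numpy as np

def regularized_mahalanobis(x, X, lam=0.5):
    """Mahalanobis distance with the covariance shrunk toward a scaled
    identity, so the inverse exists even when p >> n."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                  # singular when p >= n
    p = X.shape[1]
    S_reg = (1 - lam) * S + lam * (np.trace(S) / p) * np.eye(p)
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(S_reg, d)))

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))   # n = 10 observations, p = 50 variables
x = rng.normal(size=50)
dist = regularized_mahalanobis(x, X)
```

The paper's doubly regularized version additionally replaces the plain mean vector with a regularized (denoised) one.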
Some Robust Estimation Tools for Multivariate Models
Kalina, Jan
Standard procedures of multivariate statistics and data mining for the analysis of multivariate data are known to be vulnerable to the presence of outlying and/or highly influential observations. This paper aims to propose and investigate specific approaches for two situations. First, we consider clustering of categorical data. While attention has been paid to the sensitivity of standard statistical and data mining methods for categorical data only recently, we aim at modifying standard distance measures between clusters of such data. This allows us to propose a hierarchical agglomerative cluster analysis for two-way contingency tables with a large number of categories, based on a regularized measure of distance between two contingency tables. This proposal improves the robustness to the presence of measurement errors in categorical data. As a second problem, we investigate the nonlinear version of the least weighted squares regression for data with a continuous response. Our aim is to propose an efficient algorithm for the least weighted squares estimator, which is formulated in a general way applicable to both linear and nonlinear regression. Our numerical study reveals the computational aspects of the algorithm and brings arguments in favor of its credibility.
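The idea behind the least weighted squares estimator — observations receive weights according to the ranks of their squared residuals — can be sketched for the linear case as a simple fixed-point iteration. The iteration scheme and the particular weight sequence below are illustrative; the paper's algorithm also covers the nonlinear case:

```python
import numpy as np

def lws_fit(X, y, weights, n_iter=20):
    """Fixed-point iteration for a least-weighted-squares-type fit:
    at each step the ranks of the squared residuals decide which
    observation receives which weight."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # ordinary LS start
    for _ in range(n_iter):
        e2 = (y - X @ beta) ** 2
        w = np.empty_like(e2)
        w[np.argsort(e2)] = weights                   # smallest residual -> weights[0]
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted LS step
    return beta

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(0, 0.5, n)
y[:10] += 50                                      # ten gross outliers
weights = np.where(np.arange(n) < 80, 1.0, 0.0)   # zero weight to 20 largest residuals
beta_lws = lws_fit(X, y, weights)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this zero-one weight sequence the iteration behaves like least trimmed squares; smoother decreasing weights give the general least weighted squares flavor.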
Robust Regularized Cluster Analysis for High-Dimensional Data
Kalina, Jan ; Vlčková, Katarína
This paper presents new approaches to the hierarchical agglomerative cluster analysis for high-dimensional data. First, we propose a regularized version of the hierarchical cluster analysis for categorical data with a large number of categories. It exploits a regularized version of various test statistics of homogeneity in contingency tables as the measure of distance between two clusters. Further, our aim is cluster analysis of continuous data with a large number of variables. Various regularization techniques tailor-made for high-dimensional data have been proposed, which, however, have turned out to suffer from a high sensitivity to the presence of outlying measurements in the data. As a robust solution, we recommend combining two newly proposed methods, namely a regularized version of robust principal component analysis and a regularized Mahalanobis distance, which is based on an asymptotically optimal regularization of the covariance matrix. We bring arguments in favor of the newly proposed methods.
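One way to picture a regularized homogeneity statistic as a distance between two contingency tables (here flattened to count vectors over the same categories) is a chi-square statistic with a pseudocount added to every cell. The pseudocount form is only an illustrative regularizer; the paper's regularization may differ:

```python
import numpy as np

def reg_chi2_distance(a, b, lam=0.5):
    """Chi-square homogeneity statistic between two count vectors, with
    a pseudocount lam added to every cell as a simple regularizer for
    sparse tables with many categories."""
    a = np.asarray(a, float) + lam
    b = np.asarray(b, float) + lam
    n_a, n_b = a.sum(), b.sum()
    p = (a + b) / (n_a + n_b)        # pooled category proportions
    ea, eb = n_a * p, n_b * p        # expected counts under homogeneity
    return float(((a - ea) ** 2 / ea).sum() + ((b - eb) ** 2 / eb).sum())
```

Identical tables are at distance zero, and the pseudocount keeps empty categories from producing divisions by zero.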
Autocorrelated residuals of robust regression
Kalina, Jan
The work is devoted to the Durbin-Watson test for robust linear regression methods. First, we explain the consequences of autocorrelation of residuals for estimating regression parameters. We propose an asymptotic version of the Durbin-Watson test for regression quantiles and trimmed least squares and derive an asymptotic approximation to the exact null distribution of the test statistic, exploiting the asymptotic representation for both regression estimators. Further, we consider the least weighted squares estimator, which is a highly robust estimator based on the idea of down-weighting less reliable observations. We compare various versions of the Durbin-Watson test for the least weighted squares estimator. The asymptotic test is derived using two versions of the asymptotic representation. Finally, we investigate a weighted Durbin-Watson test using the weights determined by the least weighted squares estimator. The exact test is described, and an asymptotic approximation to the distribution of the weighted statistic under the null hypothesis is also obtained.
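The Durbin-Watson statistic itself, together with one plausible weighted variant (residuals rescaled by the square roots of weights such as those from a least weighted squares fit), can be sketched as follows; the paper's exact weighted statistic may differ:

```python
import numpy as np

def durbin_watson(e, w=None):
    """Durbin-Watson statistic; values near 2 indicate no first-order
    autocorrelation. If weights w are supplied, residuals are rescaled
    by sqrt(w) first (an illustrative weighting choice)."""
    e = np.asarray(e, float)
    if w is not None:
        e = np.sqrt(np.asarray(w, float)) * e
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

rng = np.random.default_rng(0)
e_iid = rng.normal(size=1000)               # independent residuals: DW near 2
e_walk = np.cumsum(rng.normal(size=1000))   # strong positive autocorrelation: DW near 0
```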
Robustness Aspects of Knowledge Discovery
Kalina, Jan
The sensitivity of common knowledge discovery methods to the presence of outlying measurements in the observed data is discussed as their major drawback. Our work is devoted to robust methods for information extraction from data. First, we discuss neural networks for function approximation and their sensitivity to the presence of noise and outlying measurements in the data. We propose to fit neural networks in a robust way by means of a robust nonlinear regression. Second, we consider information extraction from categorical data, which commonly suffers from measurement errors. To improve its robustness properties, we propose a regularized version of the common test statistics, which may find applications, e.g., in pattern discovery from categorical data.
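Fitting a (tiny, one-neuron) network robustly can be sketched by replacing the squared-error criterion with a bounded-influence loss. The sketch below uses SciPy's Huber-type loss as a stand-in for the paper's robust nonlinear regression; the model and data are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, x):
    """A one-neuron 'network': f(x) = a * tanh(b*x + c)."""
    a, b, c = theta
    return a * np.tanh(b * x + c)

def fit_robust(x, y, theta0):
    """Nonlinear fit with a Huber-type loss: large residuals enter the
    criterion linearly, so outliers have bounded influence."""
    res = least_squares(lambda th: model(th, x) - y, theta0,
                        loss='huber', f_scale=1.0)
    return res.x

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
theta_true = np.array([2.0, 1.0, 0.0])
y = model(theta_true, x) + rng.normal(0, 0.1, 200)
y[:10] += 10.0                                   # gross outliers
theta_hat = fit_robust(x, y, np.array([1.0, 1.0, 0.1]))
```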
