Institute of Information Theory and Automation

1,573 records found, showing records 31 - 40.
Computing the Decomposable Entropy of Graphical Belief Function Models
Jiroušek, Radim ; Kratochvíl, Václav ; Shenoy, P. P.
In 2018, Jiroušek and Shenoy proposed a definition of entropy for Dempster-Shafer (D-S) belief functions called decomposable entropy. Here, we provide an algorithm for computing the decomposable entropy of directed graphical D-S belief function models. For undirected graphical belief function models, assuming that each belief function in the model is non-informative to the others, no algorithm is necessary: we compute the entropy of each belief function and sum these entropies to obtain the decomposable entropy of the model. Finally, the decomposable entropy generalizes Shannon’s entropy not only for the probability distribution of a single random variable but also for multivariate distributions expressed as directed acyclic graphical models called Bayesian networks.
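The additive structure in the Shannon special case can be sketched directly: for a Bayesian network, the joint entropy equals the sum of the local conditional entropies, which is the property that decomposable entropy carries over to belief function models. A minimal numpy sketch on a hypothetical two-variable network (an illustration of the Shannon case only, not the authors' D-S algorithm):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Toy Bayesian network X -> Y with binary variables (values are illustrative).
p_x = np.array([0.6, 0.4])                 # P(X)
p_y_given_x = np.array([[0.9, 0.1],        # P(Y | X=0)
                        [0.2, 0.8]])       # P(Y | X=1)

# Joint entropy computed directly from the joint distribution ...
joint = p_x[:, None] * p_y_given_x
h_joint = shannon_entropy(joint.ravel())

# ... equals the sum of the local entropies H(X) + H(Y|X),
# mirroring how decomposable entropy sums over a model's components.
h_x = shannon_entropy(p_x)
h_y_given_x = sum(p_x[i] * shannon_entropy(p_y_given_x[i]) for i in range(2))
```

The chain rule H(X, Y) = H(X) + H(Y|X) is what makes the component-wise summation in the undirected case work.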
Characterizing Uncertainty In Decision-Making Models For Maintenance In Industry 4.0
Ahmed, U. ; Carpitella, Silvia ; Certa, A.
Decision-making pervades our daily life at every level and entails uncertainty and the potential occurrence of risks of varied nature. When dealing with industrial engineering systems, effective decisions are fundamental for maintenance planning and implementation. Specifically, several forms of uncertainty may affect decision-making procedures, and adopting suitable techniques is a good strategy for attaining the main maintenance goals while taking into account system criticality along with decision-maker(s) opinions. A wide variety of factors contributes to uncertainty, some of them greatly important and others less significant. However, all of these factors in synergy can impact the functioning of systems in a positive, neutral, or negative way. The question is whether obtaining a complete picture of such uncertainty can improve decision-making capabilities and mitigate both through-life costs and unforeseen problems. The fundamental issues include dealing with ambiguity in the maintenance decision-making process by employing numerous evaluation criteria and handling real-world scenarios in the maintenance environment. In this study, the Multi-Criteria Decision-Making (MCDM) approach is analysed, with particular reference to the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (FTOPSIS), a technique capable of effectively ranking alternatives while dealing with uncertainty in maintenance decision-making. A final case study demonstrates the applicability of the method to maintenance in Industry 4.0. The proposed study may be useful in supporting intelligent and efficient decisions resulting in favorable maintenance outcomes.
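As an illustration of the TOPSIS machinery that FTOPSIS fuzzifies, the sketch below ranks alternatives with classical crisp TOPSIS (normalise, weight, measure distances to the ideal and anti-ideal solutions, rank by closeness). The maintenance strategies, criteria, and scores are hypothetical, and the crisp variant is a simplification of the fuzzy technique the paper uses:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical (crisp) TOPSIS.

    matrix  : alternatives x criteria score matrix
    weights : criterion weights summing to 1
    benefit : True where larger is better, False for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    # Vector normalisation of each criterion column, then weighting.
    v = weights * m / np.linalg.norm(m, axis=0)
    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness coefficient: higher = better

# Hypothetical example: 3 maintenance strategies, criteria = (cost, reliability).
scores = topsis([[100, 0.90], [80, 0.85], [120, 0.99]],
                weights=np.array([0.5, 0.5]),
                benefit=np.array([False, True]))
best = int(np.argmax(scores))
```

In the fuzzy variant, the crisp scores are replaced by fuzzy numbers and the distances by fuzzy distance measures, but the ranking logic is the same.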
How to find it in the data?
Zitová, Barbara ; Šorel, Michal
The lecture introduces the activities of the Image Processing Department of the Institute of Information Theory and Automation of the CAS in the field of Copernicus data analysis to the professional public. The department has long been involved in the development of digital image processing and deep learning methods. During the last two years, in cooperation with the MFF UK and FJFI CTU, several student demonstration projects using data from Sentinel satellites have been completed, such as crop type recognition from Sentinel-2 time-series images, automatic segmentation of areas by land use or surface type using machine learning methods, more accurate cloud detection in Sentinel-2 data, and, in collaboration with the Institute of Hydrodynamics of the CAS, procedures for estimating landscape surface moisture from Sentinel-2 data and increasing the resolution of Sentinel-3 thermal data using deep learning methods. The second part presents the application of the developed methods to other areas of remote sensing.
Recursive mixture estimation with univariate multimodal Poisson variable
Uglickich, Evženie ; Nagy, Ivan
Analysis of count variables described by the Poisson distribution is required in many application fields. Examples of count variables observed per time unit include the number of customers, passengers, road accidents, Internet traffic packet arrivals, bankruptcies, virus attacks, etc. If the behavior of such a variable exhibits a multimodal character, the problem of clustering and classification of incoming count data arises. This issue concerns, for instance, the detection of clusters of different driver behavior in traffic flow analysis, as well as that of cyclists or pedestrians. This work focuses on the model-based clustering of Poisson-distributed count data with the help of recursive Bayesian estimation of a mixture of Poisson components. The aim of the work is to explain the methodology in detail with an illustrative simple example, so the work is limited to the univariate case and a static pointer.
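The recursive flavour of such an estimator can be sketched with conjugate Gamma priors on the component rates: each incoming count updates the component statistics in proportion to its responsibilities. This is a minimal quasi-Bayes sketch under assumed priors and weighting, not the authors' exact pointer formulation:

```python
import math
import numpy as np

def poisson_pmf(k, lam):
    """Poisson probability mass, computed in log space for stability."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def recursive_poisson_mixture(data, a, b, w):
    """Quasi-Bayes recursive estimation of a univariate Poisson mixture.

    Each component i carries a conjugate Gamma(a[i], b[i]) prior on its
    rate; w holds unnormalised component weights.  For every new count,
    responsibilities are computed from the current point estimates and
    used to weight the conjugate sufficient-statistic updates.
    """
    a = np.array(a, dtype=float)
    b = np.array(b, dtype=float)
    w = np.array(w, dtype=float)
    for x in data:
        lam = a / b                                   # current rate estimates
        lik = np.array([poisson_pmf(x, l) for l in lam])
        resp = w * lik
        resp /= resp.sum()                            # component responsibilities
        a += resp * x                                 # weighted conjugate updates
        b += resp
        w += resp
    return a / b, w / w.sum()

# Simulated two-mode count data (rates 2 and 15), processed one by one.
rng = np.random.default_rng(0)
data = np.concatenate([rng.poisson(2, 300), rng.poisson(15, 300)])
rng.shuffle(data)
rates, weights = recursive_poisson_mixture(data, a=[1, 10], b=[1, 1], w=[1, 1])
```

Each observation is processed once and discarded, which is what makes the scheme recursive rather than batch.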
Media Treatment of Monetary Policy Surprises and Their Impact on Firms’ and Consumers’ Expectations
Pinter, J. ; Kočenda, Evžen
We empirically investigate whether monetary policy announcements affect firms’ and consumers’ expectations by taking into account media treatments of monetary policy announcements. To identify exogenous changes in monetary policy stances, we use the standard financial monetary policy surprise measures in the euro area. We then analyze how a general newspaper and a financial newspaper (Le Monde and The Financial Times) report on announcements. We find that 87% of monetary policy surprises are either not associated with the general newspaper reporting a change in the monetary policy stance to its readers or have a sign that is inconsistent with the media report of the announcement. When we use the raw monetary policy surprise variable as an independent variable in the link between monetary policy announcements and firms’/consumers’ expectations, we mostly do not find, in line with several previous studies, any statistically significant association. When we take only monetary policy surprises that are consistent with the general newspaper report, in almost all cases we find that monetary policy surprises about the immediate monetary policy stance do affect expectations. Surprises related to future policy inclination and information shocks usually do not appear to matter. The results appear to be in line with rational inattention theories and highlight the need for caution in the use of monetary policy surprise measures for macroeconomic investigations.
Does the Spillover Index Respond Significantly to Systemic Shocks? A Bootstrap-Based Probabilistic Analysis
Greenwood-Nimmo, M. ; Kočenda, Evžen ; Nguyen, V. H.
The spillover index developed by Diebold and Yilmaz (Economic Journal, 2009, vol. 119, pp. 158-171) is widely used to measure connectedness in economic and financial networks. Abrupt increases in the spillover index are typically thought to result from systemic events, but evidence of the statistical significance of this relationship is largely absent from the literature. We develop a new bootstrap-based technique to evaluate the probability that the value of the spillover index changes over an arbitrary time period following an exogenously defined event. We apply our framework to the original dataset studied by Diebold and Yilmaz and obtain qualified support for the notion that the spillover index increases in a timely and statistically significant manner in the wake of systemic shocks.
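The flavour of such a bootstrap test can be sketched on a generic statistic: resample the pre- and post-event windows with replacement and report the share of replications in which the statistic rises. The windows and the mean statistic below are illustrative stand-ins for the spillover index, not the paper's actual procedure:

```python
import numpy as np

def bootstrap_prob_increase(pre, post, stat=np.mean, n_boot=2000, seed=0):
    """Bootstrap probability that a statistic rises from `pre` to `post`.

    Resamples both windows with replacement and reports the share of
    bootstrap replications in which stat(post*) > stat(pre*), a toy
    analogue of asking whether an index responds significantly to an
    event separating the two windows.
    """
    rng = np.random.default_rng(seed)
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    hits = 0
    for _ in range(n_boot):
        pre_star = rng.choice(pre, size=pre.size, replace=True)
        post_star = rng.choice(post, size=post.size, replace=True)
        hits += stat(post_star) > stat(pre_star)
    return hits / n_boot

# Simulated pre-event window and post-event window with a level shift.
rng = np.random.default_rng(1)
calm = rng.normal(0.0, 1.0, 250)
crisis = rng.normal(0.8, 1.0, 250)
p = bootstrap_prob_increase(calm, crisis)
```

A probability close to 1 indicates that the post-event increase is unlikely to be a resampling artefact, which is the kind of evidence the paper seeks for the spillover index.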
Yield Curve Dynamics and Fiscal Policy Shocks
Kučera, A. ; Kočenda, Evžen ; Maršál, Aleš
We show that government spending plays a role in shaping the yield curve, which has important consequences for the cost of private and government financing. We combine government spending shock identification strategies from the fiscal macro literature with recent advancements in no-arbitrage affine term structure modeling, where we account for time-varying macroeconomic trends in inflation and the equilibrium real interest rate. Our empirical macro-finance framework stresses the importance of timing in the response of yields to government spending. We find that the yield curve responds positively but mildly to a surprise government spending shock, where the rise in risk-neutral yields is compensated by a drop in nominal term premia. A news shock in expectations about future expenditures decreases yields across all maturities. Complementarily, we also analyze the effect of fiscal policy uncertainty and find that higher fiscal uncertainty lowers yields.
On kernel-based nonlinear regression estimation
Kalina, Jan ; Vidnerová, P.
This paper is devoted to two important kernel-based tools of nonlinear regression: the Nadaraya-Watson estimator, which can be characterized as a successful statistical method in various econometric applications, and regularization networks, which represent machine learning tools very rarely used in econometric modeling. The paper recalls both approaches and describes their common features as well as their differences. For the Nadaraya-Watson estimator, we explain its connection to the conditional expectation of the response variable. Our main contribution is a numerical analysis of suitable data with an economic motivation and a comparison of the two nonlinear regression tools. Our computations reveal some implementations of the Nadaraya-Watson estimator in R to be unreliable and others not prepared for routine usage. On the other hand, regression modeling by means of regularization networks is much simpler and also turns out to be more reliable in our examples. The examples also bring unique evidence of the need for a careful choice of the parameters of regularization networks.
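For reference, the Nadaraya-Watson estimator itself is short enough to implement directly: it estimates the conditional expectation E[Y | X = x] as a kernel-weighted average of the observed responses. A self-contained numpy sketch with a Gaussian kernel (the data are simulated for illustration):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Returns sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i) at each
    query point, i.e. a locally weighted mean of the responses.
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    x_query = np.atleast_1d(np.asarray(x_query, dtype=float))
    # Gaussian kernel weights: one row of weights per query point.
    u = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Noisy sine data; the estimate near pi/2 should approach sin(pi/2) = 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 400)
y = np.sin(x) + rng.normal(0, 0.1, 400)
fit = nadaraya_watson(x, y, [np.pi / 2], bandwidth=0.3)
```

The bandwidth governs the bias-variance trade-off, which is the practical tuning issue the paper's comparison touches on.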
Application Of Implicitly Weighted Regression Quantiles: Analysis Of The 2018 Czech Presidential Election
Kalina, Jan ; Vidnerová, P.
Regression quantiles can be characterized as popular tools for complex modeling of a continuous response variable conditional on one or more given independent variables. Because they are, however, vulnerable to leverage points in the regression model, an alternative approach denoted as implicitly weighted regression quantiles has been proposed. The aim of the current work is to apply them to the results of the second round of the 2018 presidential election in the Czech Republic. The election results are modeled as a response of four demographic or economic predictors over the 77 Czech counties. The analysis represents the first application of the implicitly weighted regression quantiles to data with more than one regressor. The results reveal the implicitly weighted regression quantiles to be indeed more robust with respect to leverage points than standard regression quantiles. If, however, the model does not contain leverage points, both versions of the regression quantiles yield very similar results. The election dataset thus serves here as an illustration of the usefulness of the implicitly weighted regression quantiles.
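The objective that regression quantiles minimise is the pinball (check) loss. A sketch of the intercept-only special case, where the minimiser reduces to the empirical tau-quantile (the implicitly weighted variant studied in the paper is not reproduced here):

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Check (pinball) loss that regression quantiles minimise."""
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def constant_quantile_fit(y, tau):
    """Fit the best constant model under the tau-pinball loss.

    The minimiser is the empirical tau-quantile of y, i.e. the
    intercept-only special case of a regression quantile.
    """
    y = np.asarray(y, dtype=float)
    candidates = np.unique(y)
    losses = [pinball_loss(y - c, tau).mean() for c in candidates]
    return candidates[int(np.argmin(losses))]

# With tau = 0.5 the fit coincides with the sample median.
rng = np.random.default_rng(0)
y = rng.normal(0, 1, 501)
med = constant_quantile_fit(y, 0.5)
```

The full regression quantile replaces the constant by a linear predictor and minimises the same loss over its coefficients; the implicitly weighted version additionally downweights observations associated with leverage points.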
Ockham's Razor from a Fully Probabilistic Design Perspective
Hoffmann, A. ; Quinn, Anthony
This research report investigates an approach to the design of an Ockham prior penalising parametric complexity in the Hierarchical Fully Probabilistic Design (HFPD) [1] setting. We identify a term which penalises the introduction of an additional parameter in the Wold decomposition. We also derive the objective Ockham Parameter Prior (OPI) in this context, based on earlier work [2], and we show that the two are, in fact, closely related. This confers validity on the HFPD Ockham term.
