National Repository of Grey Literature: 29 records found
Unsupervised Verification of Fake News by Public Opinion
Grim, Jiří
In this paper we discuss a simple way to evaluate messages in social networks automatically, without any special content analysis or external intervention. We presume that a large number of social network participants are capable of a relatively reliable evaluation of materials presented in the network. Considering a simple binary evaluation scheme (like/dislike), we propose a transparent algorithm with the aim of increasing the voting power of reliable network members by means of weights. The algorithm reinforces the votes that correlate with the more reliable weighted majority and, in turn, the modified weights improve the quality of the weighted majority voting. In this sense the weighting is controlled only by the general coincidence of voting members, while the specific content of the messages is unimportant. The iterative optimization procedure is unsupervised and does not require any external intervention, with only one exception, as discussed in Sec. 5.2.

In simulation experiments the algorithm identifies the reliable members by means of weights nearly exactly. Using the reinforced weights we can compute, for a new message, the weighted sum of votes as a quantitative measure of its positive or negative nature. In this way any fake news can be recognized as negative and indicated as controversial. The accuracy of the resulting weighted decision making was substantially higher than that of simple majority voting and proved considerably robust against possible external manipulations.

The main motivation of the proposed algorithm is its application in a large social network. The content of the evaluated messages is unimportant; only the related decision making of participants is registered and compared with the weighted vote, with the aim of identifying the most reliable voters. A large number of participants and communicated messages should make it possible to design a reliable and robust weighted voting scheme.
Ideally the resulting weighted vote should provide generally acceptable emotional feedback for network participants and could be used to indicate positive or controversial news in a suitably chosen quantitative way. The optimization algorithm has to be simple, transparent and intuitive to make the weighted vote widely acceptable as a general evaluation tool.
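The iterative weighting described in this abstract can be sketched roughly as follows. All numbers, the simulated voter population, and the exact update rule are illustrative assumptions, not the paper's procedure: weights are repeatedly refitted to each voter's agreement with the current weighted majority.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: 50 voters, the first 15 "reliable" (they vote
# with the latent message quality with probability 0.9); the rest random.
n_voters, n_messages = 50, 200
truth = rng.choice([-1, 1], size=n_messages)            # latent quality
reliable = np.arange(n_voters) < 15
p_correct = np.where(reliable, 0.9, 0.5)
votes = np.where(rng.random((n_voters, n_messages)) < p_correct[:, None],
                 truth, -truth)                          # +1 like, -1 dislike

weights = np.ones(n_voters)
for _ in range(20):
    consensus = np.sign(weights @ votes)                 # weighted majority
    agreement = (votes * consensus).mean(axis=1)         # coincidence score
    weights = np.clip(agreement, 0.0, None)              # reinforce agreement
    weights /= weights.sum()

# After convergence the reliable voters hold the bulk of the voting weight,
# and (weights @ votes) scores a new message without inspecting its content.
```

Note that the update never looks at message content, only at the coincidence of each member's votes with the weighted majority, which is the unsupervised character the abstract emphasizes.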
Payment regulatory mechanism as a source of wage increases in healthcare
Grim, Jiří
The principle of health insurance presupposes that the patient will contact a doctor who will provide him or her with professional help, whereby the doctor, the medical expenses and any additional examinations are paid by the health insurance company. The result is the spontaneous increase in healthcare costs well known from the nineties. It is clear that there is no negative feedback in a system in which the healthcare provided is decided by doctors in contact with patients while its costs are covered by health insurance companies. As a result of this gross systemic error, there is continuing pressure to increase healthcare spending, and imminent insolvency forces the health insurers to introduce regulatory measures to curb the cost increase.
Feasibility Study of an Interactive Medical Diagnostic Wikipedia
Grim, Jiří
Considering different application possibilities of product distribution mixtures, we have proposed three formal tools in recent years that can be used to accumulate decision-making know-how from particular diagnostic cases. First, we have developed a structural mixture model to estimate multidimensional probability distributions from incomplete and possibly weighted data vectors. Second, we have shown that the estimated product mixture can be used as a knowledge base for a Probabilistic Expert System (PES) to infer conclusions from definite or even uncertain input information. Finally, we have shown that, by using product mixtures, we can exactly optimize sequential decision-making by means of the Shannon formula of conditional informativity. We combine the above statistical tools in the framework of an interactive open-access medical diagnostic system with automatic accumulation of decision-making knowledge.
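A key convenience of product mixtures for such an expert system, inference from incomplete input, can be illustrated with a minimal sketch. All parameters and the symptom encoding are hypothetical; the point is only that, because each component is a product, unobserved variables drop out of the likelihood exactly:

```python
import numpy as np

# Hypothetical product mixture over 4 binary symptoms: component weights w
# and independent Bernoulli parameters theta[m, i] = P(x_i = 1 | m).
w = np.array([0.5, 0.3, 0.2])
theta = np.array([[0.9, 0.8, 0.1, 0.2],
                  [0.1, 0.2, 0.9, 0.8],
                  [0.5, 0.5, 0.5, 0.5]])

def component_posterior(observed):
    """Posterior p(m | observed) from a partial observation.

    `observed` maps variable index -> value (0/1).  Unobserved variables
    contribute a marginal factor of 1 to each product component, so
    incomplete input is handled exactly, without imputation.
    """
    lik = np.ones(len(w))
    for i, x in observed.items():
        lik *= theta[:, i] if x == 1 else 1.0 - theta[:, i]
    post = w * lik
    return post / post.sum()

post = component_posterior({0: 1, 1: 1})   # only two symptoms observed
```

The same marginalization property is what makes the conditional-informativity criterion for choosing the next question computationally tractable.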
Approximating Probability Densities by Mixtures of Gaussian Dependence Trees
Grim, Jiří
Considering the probabilistic approach to practical problems, we are increasingly confronted with the need to estimate unknown multivariate probability density functions from large high-dimensional databases produced by electronic devices. The underlying densities are usually strongly multimodal, and therefore mixtures of unimodal density functions suggest themselves as a suitable approximation tool. In this respect product mixture models are preferable because they can be efficiently estimated from data by means of the EM algorithm and have some advantageous properties. However, in some cases the simplicity of product components may be too restrictive, and a natural idea is to use a more complex mixture of dependence-tree densities. Dependence-tree densities can explicitly describe the statistical relationships between pairs of variables at the level of individual components, and therefore the approximation power of the resulting mixture may increase substantially.
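The component density factorizes along the tree as p(x) = p(x_root) · Π_i p(x_i | x_parent(i)). A minimal sketch of evaluating such a mixture, with hypothetical linear-Gaussian conditionals on the tree edges (the parameters are illustrative, not estimated):

```python
import math

def normal_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def tree_density(x, params):
    """p(x) = p(x_root) * prod over edges of p(x_child | x_parent).

    params = (root_mean, root_std, edges), each edge being
    (child, parent, a, b, std) for the conditional N(a*x_parent + b, std^2).
    """
    root_mean, root_std, edges = params
    p = normal_pdf(x[0], root_mean, root_std)
    for child, parent, a, b, std in edges:
        p *= normal_pdf(x[child], a * x[parent] + b, std)
    return p

def mixture_density(x, components):
    # Mixture of dependence-tree densities: sum_m w_m * p_m(x).
    return sum(wm * tree_density(x, prm) for wm, prm in components)

# Two hypothetical components on 3 variables, tree edges 0->1 and 0->2.
components = [(0.6, (0.0, 1.0, [(1, 0, 0.5, 0.0, 1.0), (2, 0, -0.3, 1.0, 0.8)])),
              (0.4, (2.0, 1.5, [(1, 0, 0.0, 0.0, 1.0), (2, 0, 0.2, 0.0, 1.2)]))]
p = mixture_density([0.0, 0.0, 0.0], components)
```

Setting every edge coefficient a to zero recovers a product component, which shows how the dependence-tree mixture strictly generalizes the product mixture.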
Evaluation of Screening Mammograms by Local Structural Mixture Models
Grim, Jiří ; Lee, G. L.
We consider the recently proposed evaluation of screening mammograms by local statistical models. The model is defined as the joint probability density of the grey levels inside a suitably chosen search window. We approximate the model density by a mixture of Gaussian densities. Having estimated the mixture parameters, we calculate at all window positions the corresponding log-likelihood values, which can be displayed as grey levels at the respective window centers. The resulting log-likelihood image closely correlates with the original mammogram and emphasizes the structural details. In this paper we try to enhance the log-likelihood images by using a structural mixture model capable of suppressing the influence of noisy variables.
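The sliding-window construction of the log-likelihood image can be sketched as follows. The image, the window size and the mixture parameters are stand-ins (the real parameters would be estimated by EM from mammogram data); the sketch only shows how window vectors are scored under a diagonal-Gaussian mixture and written back at the window centers:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 32))        # stand-in for a mammogram
win = 5                             # search-window side; D = 25 grey levels

# Collect the inside grey levels of the window at every position.
H, W = image.shape
positions = [(r, c) for r in range(H - win + 1) for c in range(W - win + 1)]
X = np.stack([image[r:r + win, c:c + win].ravel() for r, c in positions])

# Mixture parameters assumed already estimated by EM (random stand-ins
# here); components are diagonal Gaussians.
M, D = 3, win * win
weights = np.full(M, 1.0 / M)
means = rng.random((M, D))
stds = np.full((M, D), 0.3)

def log_mixture(X, weights, means, stds):
    # log sum_m w_m N(x | mu_m, diag(sigma_m^2)), via log-sum-exp for stability
    z = (X[:, None, :] - means[None]) / stds[None]
    log_comp = (np.log(weights)[None]
                - 0.5 * (z ** 2).sum(-1)
                - np.log(stds).sum(-1)[None]
                - 0.5 * D * np.log(2.0 * np.pi))
    m = log_comp.max(axis=1, keepdims=True)
    return m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))

# Display the log-likelihood values at the respective window centers.
ll_image = np.full((H, W), np.nan)
for (r, c), ll in zip(positions, log_mixture(X, weights, means, stds)):
    ll_image[r + win // 2, c + win // 2] = ll
```

The structural model of the abstract would additionally down-weight noisy window variables; in this sketch all D variables contribute equally.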
Fast Dependency-Aware Feature Selection in Very-High-Dimensional Pattern Recognition Problems
Somol, Petr ; Grim, Jiří
The paper addresses the problem of making dependency-aware feature selection feasible in pattern recognition problems of very high dimensionality. The idea of individually best ranking is generalized to evaluate the contextual quality of each feature in a series of randomly generated feature subsets. Each random subset is evaluated by a criterion function of arbitrary choice (permitting functions of high complexity). Eventually, the novel dependency-aware feature rank is computed, expressing the average benefit of including a feature in feature subsets. The method is efficient and generalizes well, especially in very-high-dimensional problems where traditional context-aware feature selection methods fail due to prohibitive computational complexity or to over-fitting. The method is shown to be well capable of outperforming the commonly applied individual ranking, which ignores important contextual information contained in the data.
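One simple variant of the idea can be sketched as follows. The criterion function, subset sizes and the informative-feature setup are hypothetical; the rank here is the mean criterion value over the random subsets that included the feature, which is only an illustration of the averaging scheme, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_subsets, subset_size = 20, 500, 5

# Hypothetical criterion: rewards subsets containing features 0..4, with a
# small interaction bonus -- a stand-in for an arbitrary, possibly complex,
# subset-quality function.
def criterion(subset):
    hits = sum(1 for f in subset if f < 5)
    return hits + 0.5 * (hits >= 2)

sum_crit = np.zeros(n_features)
count = np.zeros(n_features)
for _ in range(n_subsets):
    S = rng.choice(n_features, size=subset_size, replace=False)
    J = criterion(S)
    sum_crit[S] += J      # credit the criterion value to every member of S
    count[S] += 1

# Dependency-aware rank: average quality of the subsets a feature joined.
rank = sum_crit / np.maximum(count, 1)
```

Because each feature is judged only through whole-subset evaluations, the rank reflects how a feature performs in context, yet the cost stays linear in the number of random subsets.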
