National Repository of Grey Literature
Line search in descent methods
Moravová, Adéla ; Tichý, Petr (advisor) ; Vlasák, Miloslav (referee)
In this thesis, we deal with descent methods for function minimization. We discuss three conditions for the choice of the step length (the Armijo, Goldstein, and Wolfe conditions) and four descent methods (the steepest descent method, Newton's method, the quasi-Newton BFGS method, and the conjugate gradient method). We discuss their convergence properties and their advantages and disadvantages. Finally, we test these methods numerically in the GNU Octave programming system on three different functions with different numbers of variables.
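For illustration, a minimal GNU Octave sketch of backtracking line search under the Armijo (sufficient decrease) condition follows; the function name, the constants c = 1e-4 and rho = 0.5, and the test function are illustrative assumptions, not taken from the thesis.

function alpha = armijo_backtracking(f, x, g, p, c, rho)
  % Shrink alpha until the Armijo condition holds:
  %   f(x + alpha*p) <= f(x) + c*alpha*g'*p
  % f: function handle, x: current point, g: gradient at x,
  % p: descent direction (g'*p < 0 assumed).
  if nargin < 5, c = 1e-4; end
  if nargin < 6, rho = 0.5; end
  alpha = 1;
  while f(x + alpha*p) > f(x) + c*alpha*(g'*p)
    alpha = rho*alpha;
  end
end

% Example use: one steepest descent step on a simple quadratic.
% f = @(x) x(1)^2 + 10*x(2)^2;  grad = @(x) [2*x(1); 20*x(2)];
% x0 = [1; 1];  g0 = grad(x0);  p0 = -g0;
% alpha = armijo_backtracking(f, x0, g0, p0);  x1 = x0 + alpha*p0;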
The choice of the step in trust region methods
Rapavý, Martin ; Tichý, Petr (advisor) ; Kučera, Václav (referee)
The main topic of this thesis is the choice of the step in trust region methods for finding a minimum of a given function. The step corresponds to the problem of finding a minimum of a model function on a trust region. We characterize a solution of this problem (the Moré-Sorensen theorem) and consider various techniques for approximating such a solution (the Cauchy point method, the dogleg method, and the conjugate gradient method). For the first two techniques we prove convergence of the optimization method. Finally, the above techniques are tested numerically in MATLAB on suitably chosen functions and initial data, and we comment on the advantages and disadvantages of the considered algorithms.
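As an illustration of one of these techniques, here is a minimal MATLAB/Octave sketch of the dogleg step. It assumes the model Hessian B is symmetric positive definite; the names (g for the gradient, Delta for the trust-region radius) are illustrative, not taken from the thesis.

function p = dogleg(g, B, Delta)
  % Dogleg approximation to the trust-region subproblem,
  % assuming B is symmetric positive definite.
  pB = -B \ g;                         % full (quasi-)Newton step
  if norm(pB) <= Delta
    p = pB;                            % Newton step already inside the region
    return;
  end
  pU = -((g'*g) / (g'*B*g)) * g;       % unconstrained steepest descent minimizer
  if norm(pU) >= Delta
    p = (Delta / norm(pU)) * pU;       % scaled-back Cauchy step
    return;
  end
  % Walk along the dogleg path: tau in [0,1] with norm(pU + tau*d) = Delta.
  d = pB - pU;
  a = d'*d;  b = 2*(pU'*d);  c = pU'*pU - Delta^2;
  tau = (-b + sqrt(b^2 - 4*a*c)) / (2*a);
  p = pU + tau*d;
end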
Limited-memory variable metric methods based on invariant matrices
Vlček, Jan ; Lukšan, Ladislav
A new class of limited-memory variable metric methods for unconstrained minimization is described. Approximations of the inverses of Hessian matrices are based on matrices which are invariant with respect to a linear transformation. As these matrices are singular, they are adjusted for the computation of direction vectors. The methods have the quadratic termination property, which means that they find a minimum of a strictly convex quadratic function, with an exact choice of the step length, after a finite number of steps. Numerical experiments show the efficiency of these methods.
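The invariant-matrix updates of the paper are not reproduced here; for background, the following MATLAB/Octave sketch shows the standard limited-memory BFGS (L-BFGS) two-loop recursion that limited-memory variable metric methods build on. S and Y hold the last m difference pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k as columns (most recent last); all names are illustrative.

function d = lbfgs_direction(g, S, Y)
  % Two-loop recursion: returns d = -H*g, where H implicitly
  % applies m BFGS updates to the scaled identity H0 = gamma*I.
  m = size(S, 2);
  q = g;  alpha = zeros(m, 1);  rho = zeros(m, 1);
  for i = m:-1:1
    rho(i) = 1 / (Y(:,i)' * S(:,i));
    alpha(i) = rho(i) * (S(:,i)' * q);
    q = q - alpha(i) * Y(:,i);
  end
  gamma = (S(:,m)' * Y(:,m)) / (Y(:,m)' * Y(:,m));  % standard initial scaling
  r = gamma * q;
  for i = 1:m
    beta = rho(i) * (Y(:,i)' * r);
    r = r + S(:,i) * (alpha(i) - beta);
  end
  d = -r;                              % descent direction
end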
On Lagrange multipliers in trust-region methods
Lukšan, Ladislav ; Matonoha, Ctirad ; Vlček, Jan
Trust-region methods are globally convergent techniques widely used, for example, in connection with Newton's method for unconstrained optimization. One of the most commonly used iterative approaches for solving the trust-region subproblems is the Steihaug-Toint method, which is based on conjugate gradient iterations and seeks a solution on Krylov subspaces. The paper contains new theoretical results concerning properties of Lagrange multipliers obtained on these subspaces.
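A minimal MATLAB/Octave sketch of the Steihaug-Toint method referred to above follows; the tolerance, iteration limit, and names are illustrative assumptions, and the Lagrange multiplier analysis of the paper is not reproduced.

function p = steihaug_toint(g, B, Delta, tol, maxit)
  % Truncated CG for the trust-region subproblem
  %   min_p  g'*p + 0.5*p'*B*p   subject to  norm(p) <= Delta.
  if nargin < 4, tol = 1e-8; end
  if nargin < 5, maxit = numel(g); end
  p = zeros(size(g));
  r = -g;                              % residual -(g + B*p) at p = 0
  d = r;
  if norm(r) < tol, return; end
  for k = 1:maxit
    Bd = B*d;
    dBd = d'*Bd;
    if dBd <= 0
      p = to_boundary(p, d, Delta);    % negative curvature: go to the boundary
      return;
    end
    alpha = (r'*r) / dBd;
    if norm(p + alpha*d) >= Delta
      p = to_boundary(p, d, Delta);    % step would leave the region
      return;
    end
    p = p + alpha*d;
    rnew = r - alpha*Bd;
    if norm(rnew) < tol, return; end
    beta = (rnew'*rnew) / (r'*r);
    d = rnew + beta*d;
    r = rnew;
  end
end

function p = to_boundary(p, d, Delta)
  % Positive tau with norm(p + tau*d) = Delta.
  a = d'*d;  b = 2*(p'*d);  c = p'*p - Delta^2;
  tau = (-b + sqrt(b^2 - 4*a*c)) / (2*a);
  p = p + tau*d;
end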
Interior point methods for generalized minimax optimization
Lukšan, Ladislav ; Matonoha, Ctirad ; Vlček, Jan
A new class of primal interior point methods for generalized minimax optimization is described. Besides a standard logarithmic barrier function, these methods also use barrier functions bounded from below, which have more favourable properties for the investigation of global convergence. These are descent direction methods, in which an approximation of the Hessian matrix is computed by gradient differences or quasi-Newton updates. Two-level optimization is used. A direction vector is computed by a Cholesky decomposition of a sparse matrix. Numerical experiments concerning two basic applications, minimization of a pointwise maximum and of a sum of absolute values of smooth functions, are presented.
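To make the construction concrete, here is a heavily simplified MATLAB/Octave sketch of a primal log-barrier scheme for min_x max_i f_i(x), rewritten as min over (x, z) of z subject to f_i(x) <= z. The inner solver is plain gradient descent with a fixed step, standing in for the paper's two-level scheme with quasi-Newton updates and sparse Cholesky factorization; all names and constants are illustrative assumptions.

function x = barrier_minimax(F, J, x, mu, outers)
  % Minimize the barrier B(x,z) = z - mu*sum(log(z - F(x))) for a
  % decreasing sequence of mu. F maps x to [f_1(x); ...; f_m(x)],
  % J maps x to the Jacobian of F. Gradient descent with a fixed
  % step is used only to keep the sketch short.
  z = max(F(x)) + 1;                   % strictly feasible start: z > max f_i(x)
  t = 1e-2;                            % fixed inner step length (illustrative)
  for outer = 1:outers
    for inner = 1:200
      s = z - F(x);                    % positive slacks
      gx = J(x)' * (mu ./ s);          % dB/dx
      gz = 1 - mu * sum(1 ./ s);       % dB/dz
      x = x - t * gx;
      z = z - t * gz;
      if any(z <= F(x))                % restore strict feasibility
        z = max(F(x)) + 1e-6;
      end
    end
    mu = mu / 10;                      % tighten the barrier
  end
end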
