
Numerical Optimization Methods for the Falsification of Hybrid Dynamical Systems
Kuřátko, Jan ; Ratschan, Stefan (advisor) ; Bergamaschi, Luca (referee) ; Lukšan, Ladislav (referee)
Title: Numerical Optimization Methods for the Falsification of Hybrid Dynamical Systems
Author: Jan Kuřátko
Department: Department of Numerical Mathematics
Supervisor: Stefan Ratschan, Institute of Computer Science, The Czech Academy of Sciences
Abstract: This thesis consists of three published papers that contribute to finding error trajectories of hybrid dynamical systems. A hybrid dynamical system is a dynamical system that has both discrete and continuous state. For example, one can use it as a model for a thermostat in a room: such a thermostat may have two discrete states, one where the heating is off and another where the heating is on; its continuous state is the temperature in the room. For such a model one may be interested in finding an error trajectory, that is, an evolution of the system that reaches an unsafe state that is to be avoided. Industry is in need of methods for automated testing and verification of safety conditions in order to identify flaws in the design of systems. The thesis contains several contributions to finding error trajectories that are based on numerical optimization.
Keywords: optimization, dynamical systems, saddle-point matrix


Two limited-memory optimization methods with minimum violation of the previous quasi-Newton equations
Vlček, Jan ; Lukšan, Ladislav
Limited-memory variable metric methods based on the well-known BFGS update are widely used for large-scale optimization. The block version of the BFGS update, derived by Schnabel (1983), Hu and Storey (1991), and Vlček and Lukšan (2019), satisfies the quasi-Newton equations with all used difference vectors and, for quadratic objective functions, gives the best improvement of convergence in some sense, but the corresponding direction vectors are generally not descent directions. To guarantee the descent property of the direction vectors while violating the quasi-Newton equations as little as possible in some sense, two methods based on the block BFGS update are proposed. They can be advantageously combined with methods based on vector corrections for conjugacy (Vlček and Lukšan, 2015). Global convergence of the proposed algorithm is established for convex and sufficiently smooth functions. Numerical experiments demonstrate the efficiency of the new methods.
Fulltext: PDF
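For context, the classical single-pair BFGS update that the block version generalizes can be sketched as follows. This is a minimal NumPy illustration of the standard update, not the paper's block method; the random test data are illustrative.

```python
import numpy as np

def bfgs_update(H, s, y):
    """One standard BFGS update of the inverse Hessian approximation H.

    Returns H+ satisfying the quasi-Newton equation H+ @ y == s,
    where s is the step and y is the gradient difference.
    """
    rho = 1.0 / (y @ s)                      # requires the curvature condition y @ s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Synthetic data with y @ s > 0 by construction.
rng = np.random.default_rng(0)
H = np.eye(3)
s = rng.standard_normal(3)
y = s + 0.1 * rng.standard_normal(3)
H_new = bfgs_update(H, s, y)
print(np.allclose(H_new @ y, s))             # the quasi-Newton equation holds
```

The block version discussed in the abstract enforces such equations for several difference vectors at once, which is what can destroy the descent property the paper restores.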


Hybrid Methods for Nonlinear Least Squares Problems
Lukšan, Ladislav ; Matonoha, Ctirad ; Vlček, Jan
This contribution contains a description and analysis of effective methods for minimization of the nonlinear least squares function F(x) = (1/2) f^T(x) f(x), where x ∈ R^n and f ∈ R^m, together with extensive computational tests and comparisons of the introduced methods. All hybrid methods are described in detail and their global convergence is proved in a unified way. Some proofs concerning trust region methods, which are difficult to find in the literature, are also added. In particular, the report contains an analysis of a new simple hybrid method with Jacobian corrections (Section 8) and an investigation of the simple hybrid method for sparse least squares problems proposed previously in [33] (Section 14).
Fulltext: PDF
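As a minimal sketch of the setting, the basic Gauss-Newton iteration for F(x) = (1/2) f^T(x) f(x) can be written as follows. The Rosenbrock residuals used here are a standard textbook test problem, not taken from the report, and the hybrid and trust-region safeguards the report analyzes are omitted.

```python
import numpy as np

def f(x):
    """Residual vector of the Rosenbrock problem in least squares form."""
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

def jac(x):
    """Jacobian of the residual vector f."""
    return np.array([[1.0, 0.0],
                     [-20.0 * x[0], 10.0]])

x = np.array([-1.2, 1.0])                    # classical starting point
for _ in range(20):
    J, r = jac(x), f(x)
    p = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step: minimize ||J p + r||
    x = x + p
print(np.round(x, 6))                        # prints [1. 1.], the minimizer
```

Plain Gauss-Newton can fail on large-residual or ill-conditioned problems, which is exactly where the hybrid methods of the report switch to (quasi-)Newton corrections.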


Application of the Infinitely Many Times Repeated BNS Update and Conjugate Directions to Limited-Memory Optimization Methods
Vlček, Jan ; Lukšan, Ladislav
To improve the performance of the L-BFGS method for large-scale unconstrained optimization, it was proposed to repeat some of the BFGS updates. Since this can be time consuming, the extra updates need to be selected carefully. We show that, under some conditions, groups of these updates can be repeated infinitely many times without a noticeable increase of the computational time. The limit update is a block BFGS update. It can be obtained by solving a Lyapunov matrix equation whose order can be decreased by applying vector corrections for conjugacy. Global convergence of the proposed algorithm is established for convex and sufficiently smooth functions. Numerical results indicate the efficiency of the new method.
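As background, the standard L-BFGS direction computation (the two-loop recursion) that the repeated updates build on can be sketched as follows. This is the generic textbook recursion with synthetic data, not the paper's repeated-update scheme.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns the search direction d = -H_k g,
    where H_k is the L-BFGS inverse Hessian approximation implied by
    the stored step/gradient-difference pairs (s_i, y_i)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                             # initial scaling H0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest pair first
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Synthetic memory pairs with y @ s > 0 by construction.
rng = np.random.default_rng(1)
s_list = [rng.standard_normal(4) for _ in range(3)]
y_list = [s + 0.1 * rng.standard_normal(4) for s in s_list]
g = rng.standard_normal(4)
d = lbfgs_direction(g, s_list, y_list)
print(g @ d < 0)   # True: the implied H_k is positive definite, so d is a descent direction
```

Repeating selected updates, as the abstract describes, amounts to applying some of these pairs more than once; the paper's limit of infinitely many repetitions collapses into a single block update.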
