Novel optimisation methods for data assimilation

Kaouri, M. H. (2021) Novel optimisation methods for data assimilation. PhD thesis, University of Reading.




To link to this item, use the DOI: 10.48683/1926.00105523


Data assimilation (DA) is a technique used to estimate the state of a dynamical system. In DA, a prior estimate (the background state) is combined with observations to estimate the initial state of a dynamical system over a given time-window. This estimate is known as the ‘analysis’ in DA. In variational data assimilation (VarDA), the DA problem is formulated as a nonlinear least-squares problem, usually solved using a variant of the classical Gauss-Newton (GN) optimisation method known as the incremental method. In the incremental method, the iterative minimisation of the nonlinear objective function and the linearised subproblem are referred to as the ‘outer loop’ and the ‘inner loop’, respectively. Within this thesis, we show how the convergence of GN can be improved through the use of safeguards that, unlike GN, guarantee convergence to the analysis from an arbitrary background state, while respecting the limited time and computational cost available in DA. In particular, we consider GN equipped with line search (LS) and GN equipped with quadratic regularisation (REG), both of which achieve global convergence by guaranteeing a reduction in the VarDA objective function at each outer-loop iteration. We prove global convergence of LS and REG and use idealised numerical experiments to show that these methods are able to improve the current estimate of the DA analysis even when the initial estimate of the solution is poor and a long assimilation time-window is used to include more observations. Furthermore, when GN performs poorly, a suitable choice of the initial regularisation parameter is critical to the performance of the REG method. We study the interaction between the REG parameter and the VarDA inner-loop problem and use numerical experiments to show that choosing the initial REG parameter according to components of the VarDA problem results in REG locating a more accurate DA analysis than that obtained by GN, LS or the standard REG method.
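To illustrate the safeguards described above, the following minimal Python sketch applies Gauss-Newton with a backtracking (Armijo) line search and an optional fixed quadratic regularisation parameter to a toy exponential-fitting least-squares problem. The function name, the toy problem, the parameter values and the stopping rule are all assumptions made for illustration; they are not the experimental setup of the thesis.

```python
import numpy as np

def gauss_newton(r, J, x0, n_iter=50, mu0=0.0, tol=1e-10):
    """Gauss-Newton on min 0.5*||r(x)||^2, safeguarded with a
    backtracking (Armijo) line search (LS) and, when mu0 > 0, a
    fixed quadratic regularisation term (REG). Illustrative only."""
    x, mu = x0.astype(float), mu0
    for _ in range(n_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                              # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        # "inner loop": solve the (regularised) linearised subproblem
        H = Jx.T @ Jx + mu * np.eye(x.size)
        s = np.linalg.solve(H, -g)
        # line search guarantees a decrease of the objective at each
        # "outer loop" iteration, which is what yields global convergence
        f, alpha = 0.5 * rx @ rx, 1.0
        while 0.5 * np.sum(r(x + alpha * s) ** 2) > f + 1e-4 * alpha * (g @ s):
            alpha *= 0.5
        x = x + alpha * s
    return x

# toy problem: fit y = exp(a*t) by least squares; true a = 0.5,
# started from the deliberately poor initial guess a = 5
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * t)
r = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a = gauss_newton(r, J, np.array([5.0]), mu0=1.0)
```

Even from a poor starting point the Armijo condition forces monotone descent, mimicking on a toy scale the role the LS and REG safeguards play for the VarDA objective.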
Various simplifications are made in practice to solve the variational problem within the available time and computational cost. One such simplification is the use of a reduced-resolution spatial grid within the inner loop. It is known that the accuracy with which the inner loop is solved affects the convergence of the outer loop. The condition number of the Hessian measures the sensitivity of the solution of the inner-loop problem to perturbations and also influences the speed of convergence of the VarDA inner-loop minimisations. We derive an upper bound on the condition number of the preconditioned VarDA Hessian that accounts for different inner-loop resolutions of the incremental method. This bound provides theoretical insight into how the level of resolution interacts with the various components of the incremental method to influence its convergence.
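To make the role of preconditioning concrete, the sketch below builds a small random inner-loop Hessian S = B^-1 + H^T R^-1 H (with B the background error covariance, H a linearised observation operator and R the observation error covariance) and its form under standard first-level preconditioning by B^(1/2), namely S_hat = I + B^(1/2) H^T R^-1 H B^(1/2), then compares their condition numbers. The matrix sizes, covariances and square-root construction are toy assumptions, not the configurations analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 8, 5                                   # toy state and observation dimensions

# background error covariance B: SPD with a wide eigenvalue spread,
# so that the unpreconditioned Hessian is ill-conditioned
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = Q @ np.diag(np.logspace(-2, 2, n)) @ Q.T

H = rng.standard_normal((p, n))               # linearised observation operator
Rinv = np.eye(p) / 0.01                       # R = 0.01 * I (observation errors)

# inner-loop Hessian and its first-level preconditioned form
S = np.linalg.inv(B) + H.T @ Rinv @ H
w, V = np.linalg.eigh(B)
Bhalf = V @ np.diag(np.sqrt(w)) @ V.T         # symmetric square root of B
Shat = np.eye(n) + Bhalf @ H.T @ Rinv @ H @ Bhalf

print("cond(S)     =", np.linalg.cond(S))
print("cond(S_hat) =", np.linalg.cond(Shat))
```

Because S_hat is the identity plus a positive semidefinite term, its eigenvalues are bounded below by one, so its condition number is simply its largest eigenvalue; bounds of the kind derived in the thesis control how that largest eigenvalue grows with the components of the incremental method.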

Item Type: Thesis (PhD)
Thesis Supervisor: Lawless, A., Nichols, N. and Cartis, C.
Thesis/Report Department: School of Mathematical, Physical & Computational Sciences
Divisions: Science > School of Mathematical, Physical and Computational Sciences
ID Code: 105523
Date on Title Page: March 2021



