Minisymposia Abstracts

Speaker: Jin Cheng
Fudan Univ. Shanghai, China
Title: Unique continuation on the analytic curve and its applications to inverse problems
Abstract: We consider an inverse source problem for the Helmholtz equation in domains with possibly unknown obstacles that are much larger than the wavelength. The inverse problem is to determine the number and locations of point sources in the domain from sparse measurements of the wave field. Our strategy relies on solving a local inverse scattering problem to obtain the incoming directions of waves passing through an observation location. We formulate the problem as an ℓ1-constrained minimization problem and solve it using the Bregman iterative procedure. The wave direction rays are then traced back, and the sources are uniquely determined at the intersection of the rays from several observation locations. We present examples in 2D; however, all of the formulas and methods used have direct analogues in 3D.
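The abstract does not specify the minimization scheme in detail. Purely as an illustration of the Bregman iterative procedure for an ℓ1-constrained problem min ||x||_1 subject to Ax = b, the following sketch uses the linearized Bregman iteration; the matrix A, the data b, and all parameter values are hypothetical:

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=20.0, delta=0.3, iters=3000):
    """Linearized Bregman iteration for min ||x||_1 subject to Ax = b."""
    v = np.zeros(A.shape[1])          # accumulated dual variable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)        # gradient step on the residual
        x = delta * shrink(v, mu)     # shrinkage keeps the iterate sparse
    return x

# Hypothetical 2x3 example whose minimum-l1 solution is (3, 0, 0).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = A @ np.array([3.0, 0.0, 0.0])
x = linearized_bregman(A, b)
```

The step size delta is kept below 1/||AA^T|| (here ||AA^T|| = 3) so that the iteration is stable, and mu is taken large enough that the limit coincides with the minimum-ℓ1 solution.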

Speaker: Markus Grasmair
Georgia Institute of Technology
Title: Convergence rates for positively homogeneous regularization functionals on Banach spaces
Abstract: Recently it has been shown that Tikhonov regularization with ℓ1 regularization terms can imply up to linear convergence rates in the setting where the true solution of the operator equation to be solved is assumed to have a sparse representation with respect to the given basis—that is, its support in ℓ1 is a finite set. In this talk it is shown that the same linear rates can also be derived for regularization with general positively homogeneous regularization functionals, provided that they allow for some meaningful notion of the support of the solution. Examples where this generalization applies include the settings of joint or group sparsity, but also that of discrete total variation regularization, which can be interpreted as sparse regularization of the discrete derivative. In all these cases, the error of the regularized solution, measured in terms of the regularization functional, is of the same order as the noise level.

Speaker: Bangti Jin
Texas A&M University
Title: A new approach to nonlinear constrained Tikhonov regularization
Abstract: In this talk we discuss a new approach to nonlinear constrained Tikhonov regularization from an optimization point of view. The second-order sufficient condition is suggested as a nonlinearity condition. The approach is exploited for several common parameter choice rules by deriving a priori and a posteriori convergence rate results. The idea is explored on a general class of nonlinear parameter identification problems, for which new source and nonlinearity conditions emerge naturally. The theory will be illustrated on concrete examples.

Speaker: Jijun Liu, Southeast University, China,
and Masahiro Yamamoto, The University of Tokyo, Japan
Title: A backward problem for the time-fractional diffusion equation
Abstract: We consider a backward problem in time for a time-fractional partial differential equation in the one-dimensional case. This models an anomalous diffusion process in porous media, and such a backward problem is of great practical importance because we often do not know the initial density of the substance, but we can observe the density at a positive time. The backward problem is ill-posed; we propose a regularizing scheme based on quasi-reversibility, give a full theoretical analysis, and test its numerical performance. Our solution is based on the eigenfunction expansion of the elliptic operator.
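The abstract leaves the scheme unspecified. Purely as an illustration of the eigenfunction-expansion idea, the sketch below treats the classical limit α = 1 on (0, 1), where the Mittag-Leffler factor E_α(−λ_n t^α) of the fractional case reduces to exp(−λ_n t); all numerical values are hypothetical, and the stabilization term ελ_n is only one possible quasi-reversibility variant:

```python
import numpy as np

# Eigenpairs of -d^2/dx^2 on (0, 1) with Dirichlet boundary conditions:
# eigenvalues lam_n = (n*pi)^2, eigenfunctions sin(n*pi*x).
N = 20
n = np.arange(1, N + 1)
lam = (n * np.pi) ** 2
T = 0.1

# Hypothetical initial density: coefficients of sin(pi x) + 0.5 sin(2 pi x).
a_true = np.zeros(N)
a_true[0], a_true[1] = 1.0, 0.5

# Forward problem: expansion coefficients of the density observed at time T.
# (For 0 < alpha < 1 the factor exp(-lam*T) becomes E_alpha(-lam*T**alpha).)
b = a_true * np.exp(-lam * T)

# The backward problem is ill-posed: dividing by exp(-lam*T) amplifies
# high-frequency noise. Quasi-reversibility stabilizes the division.
eps = 1e-8
a_rec = b / (np.exp(-lam * T) + eps * lam)
```

For noise-free data and small ε the reconstruction is close to the true initial coefficients; for noisy data, ε must be matched to the noise level.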

Speaker: Shuai Lu
Fudan Univ. Shanghai, China
Title: A note on the conditional stability of nonlinear ill-posed problems in Banach spaces
Abstract: In this talk, we revisit the conditional stability of ill-posed problems in Banach spaces. Both a priori and a posteriori choices of the regularization parameter are proposed. A convergence rate for the regularized solutions, measured in the Bregman distance, is also proved.

Speaker: Stefan Kindermann
Joh. Kepler Univ. Linz, Austria
Title: On the degree of ill-posedness
Abstract: The degree of ill-posedness of inverse problems with a compact operator is generally defined as the decay rate of the associated singular values. Nashed distinguishes between type I ill-posed problems (with a noncompact operator) and type II ill-posed problems (with a compact operator). In this talk, we study the difficulties in extending the usual definition of the degree of ill-posedness to type I (noncompact) ill-posed problems.

Speaker: Kamil Kazimierski
Univ. of Bremen, Germany
Title: Iterative regularization methods in Banach spaces
Abstract: In its general form, an inverse problem consists of recovering certain parameters of an object from noisy measured data, given the model mapping the parameters to the data. The main aim of regularization theory is then to construct and analyze algorithms which generate reconstructions of the true parameters with certain desired properties. In many applications such a desired property is a sparse structure of the reconstructions. In geophysical applications, for example, and in particular when prospecting the soil, a sparse structure is given by the layers of the ground. One is therefore interested in regularization methods which enforce sparsity.

A popular method in this setting is Tikhonov regularization with a sparsity-promoting penalty. This method is well studied, and many results concerning necessary parameter choices, source conditions, and convergence rates are known. However, from the practitioner's point of view this method is only feasible if there exists a global, exact minimization scheme for the related Tikhonov functional. In general no such scheme is available. Most minimization schemes (e.g., steepest descent) output only an approximate minimizer after a finite number of iterations. Further, every change of the regularization parameter requires a rerun of the minimization scheme, which makes the regularization computationally expensive.
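A concrete instance of such an approximate minimization scheme is iterative soft-thresholding (ISTA, due to Daubechies, Defrise, and De Mol) for the functional (1/2)||Ax − y||² + α||x||_1. The sketch below uses hypothetical data; as the paragraph above notes, it returns only an approximate minimizer after a finite number of iterations, and changing α forces a rerun:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1: componentwise shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, alpha, iters=2000):
    """Approximate minimizer of 0.5*||Ax - y||^2 + alpha*||x||_1."""
    L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)            # gradient of the quadratic part
        x = soft_threshold(x - grad / L, alpha / L)
    return x

# Hypothetical small problem.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
y = np.array([1.0, 0.2])
alpha = 0.1
x = ista(A, y, alpha)
```

A useful sanity check is the first-order optimality condition of the nonsmooth functional: at a minimizer, the gradient of the quadratic term lies in the subdifferential of −α||x||_1.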

In contrast, iterative regularization methods, like the Landweber method or the conjugate gradient method, are exact, i.e., the exact regularizing element is obtained after a finite number of iteration steps. Further, since the regularization parameter is the stopping index, one only needs to carry out the iteration scheme once; the iterates of a single run may be reused for different regularization parameters.

In the case where one enforces smoothness of the reconstruction, rather than sparsity, iterative methods have been well studied. However, only a few basic results known from the smoothness-generating setting can be extended to the sparsity-generating case. Therefore, there has recently been increasing interest in iterative regularization methods generating sparse reconstructions. In the seminal paper [4] the iterative scheme
x*_{n+1} := x*_n − μ_n A* J(Ax_n − y^δ),   x_{n+1} := J*(x*_{n+1})
was introduced, which together with an appropriate stopping criterion based on Morozov's discrepancy principle is a regularization method. (The mappings J, J* are duality mappings.)
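On sequence spaces the duality mappings act componentwise, which makes the scheme easy to sketch. The toy example below is hypothetical in every detail: it takes X = ℓ^r with r = 1.5, a Hilbert data space (so J is the identity on Y), exact data, and a fixed small step size in place of the step-size rules discussed in the talk:

```python
import numpy as np

def dual_map(v, s):
    """Componentwise duality mapping |v|^(s-1) * sign(v) on sequence spaces."""
    return np.sign(v) * np.abs(v) ** (s - 1.0)

r = 1.5                      # X = l^r; r near 1 promotes sparsity
r_conj = r / (r - 1.0)       # conjugate exponent; J* = dual_map(., r_conj)

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([3.0, 0.0])     # exact data (noise level delta = 0)

x_star = np.zeros(3)         # iterate in the dual space X*
x = np.zeros(3)
mu = 0.1                     # fixed step size, chosen ad hoc
for _ in range(5000):
    residual = A @ x - y
    x_star = x_star - mu * A.T @ residual   # J = identity on the Hilbert space Y
    x = dual_map(x_star, r_conj)            # back to the primal space via J*
```

With noisy data one would instead stop the loop as soon as ||Ax − y^δ|| falls below a multiple of the noise level, in the spirit of the discrepancy principle.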

The convergence speed of the above iteration, and therefore its practical viability, depends heavily on the choice of the step size μ_n. In the first part of the talk we introduce and analyze a substantially improved step-size choice, which was published by Hein and the author in [2].

In the second part of the talk we show that, under certain mild conditions, the above iteration may be extended to nonlinear operators. It is then given by
x*_{n+1} := x*_n − μ_n [F′(x_n)]* J(F(x_n) − y^δ),   x_{n+1} := J*(x*_{n+1}),
cf. [3]. We stress that, to the author's best knowledge, this is the first sparsity-promoting iterative regularization method for nonlinear operators.
Although the above methods work well for any noise level, they are particularly suitable for data with a high noise level. Therefore, in the final part of the talk we discuss an iterative method designed for data with a low noise level. It is given by
ψ*_n := A* J(Ax_n − y^δ) + β_n ψ*_{n−1},
x*_{n+1} := x*_n − μ_n ψ*_n,   x_{n+1} := J*(x*_{n+1}).

We remark that the above method is an extension of a variant of the conjugate gradient method, namely the minimal error conjugate gradient method, cf. [1].

The presented results are joint work with Torsten Hein, Matheon, Berlin.


  1. T. Hein and K.S. Kazimierski, A conjugate gradient type method for ill-posed problems in Banach spaces, in preparation.
  2. T. Hein and K.S. Kazimierski, Accelerated Landweber iteration in Banach spaces, Inverse Problems 26(5) Article ID 055002, (2010), 19 pp.
  3. T. Hein and K.S. Kazimierski, Modified Landweber iteration in Banach spaces — convergence and convergence rates, Numer. Func. Anal. Optim. 31(10) (2010), 1158–1184.
  4. F. Schöpfer, A.K. Louis, and T. Schuster, Nonlinear iterative methods for linear ill-posed problems in Banach spaces, Inverse Problems 22 (2006), 311–329.

Speaker: Torsten Hein
TU Berlin, Germany
Title: Iterative regularization of gradient-type in Banach spaces
Abstract: We consider the linear ill-posed operator equation
Ax = y   x ∈ X, y ∈ Y,
where A: X → Y denotes a bounded linear operator mapping between the Banach spaces X and Y. For simplicity we assume that A is injective and that there exists x ∈ X with Ax = y. For δ > 0 and given noisy data y^δ ∈ Y with known noise bound ||y^δ − y|| ≤ δ, we deal with the iteration approach
x^δ_0 = x_0 := G(x*_0),   x*_0 ∈ X*,
x*_{n+1} := x*_n − μ_n ψ*_n,
x^δ_{n+1} := G(x*_{n+1}),
together with the discrepancy principle as stopping criterion for the iteration process. Here, either
ψ*_n := A* J_p(Ax^δ_n − y^δ) ∈ X*
denotes the gradient of the objective functional x ↦ (1/p)||Ax − y^δ||^p at the element x^δ_n itself, or some modified version of it. In this context, p ∈ (1, ∞) and J_p denotes the duality mapping from the space Y into its dual space Y* with gauge function t ↦ t^{p−1}. Moreover, a motivation for specific choices of the (nonlinear) transport operator G: X* → X is discussed.
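On sequence spaces, the duality mapping with gauge t ↦ t^{p−1} acts componentwise, and it is inverted by the duality mapping of the dual space with the conjugate exponent p* = p/(p−1); this is one natural choice for the operator G above. A small numerical check of both properties, with hypothetical values:

```python
import numpy as np

def J(v, s):
    """Duality mapping on l^s with gauge t -> t^(s-1): acts componentwise."""
    return np.sign(v) * np.abs(v) ** (s - 1.0)

p = 1.5
p_conj = p / (p - 1.0)       # conjugate exponent, here p* = 3

x = np.array([1.0, -2.0, 0.5, 0.0])
x_dual = J(x, p)             # maps l^p into its dual l^{p*}
x_back = J(x_dual, p_conj)   # the conjugate duality mapping inverts J

# Defining property of the duality mapping: <J_p(x), x> = ||x||_p^p.
pairing = x_dual @ x
norm_p_p = np.sum(np.abs(x) ** p)
```

The componentwise formula is specific to sequence spaces; in other Banach spaces the duality mapping generally has no closed form.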

In order to achieve a tolerable decay rate for the error ||x^δ_n − x||, we have to choose the step-size parameter μ_n properly in each iteration step. Motivated by Hilbert space methods, and taking the noisy data into account, we suggest some choices for the parameter μ_n. Further, we present convergence and stability results for the method under consideration.

The theoretical results are illustrated by some numerical examples.

Please address administrative questions to the conference organizers. Scientific questions should be addressed to the chair of the Scientific Program Committee: rundell AT

Copyright © 2010, Texas A&M University, Department of Mathematics, All Rights Reserved.