Minisymposia Abstracts
Speaker:  Jin Cheng, Fudan University, Shanghai, China 
Title:  Unique continuation on the analytic curve and its applications to inverse problems 
Abstract:  We consider an inverse source problem for the Helmholtz equation in domains
with possibly unknown obstacles, which are much larger than the wavelength. The
inverse problem is to determine the number and locations of point sources in
the domain based on sparse measurements of the wave field. Our proposed
strategy relies on solving a local inverse scattering problem to obtain the
incoming directions of waves passing through an observation location. We
formulate the problem as an L^{1} constrained minimization problem and solve it
using the Bregman iterative procedure. The wave direction rays are then traced
back and sources are uniquely determined at the intersection of the rays from
several observation locations. We present examples in 2D; however, all of the
formulas and methods used have direct analogues in 3D.
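The L^{1}-constrained minimization step can be sketched numerically. The following is a minimal illustration of a linearized Bregman iteration for min ‖x‖_1 subject to Ax = b, not the authors' implementation: the matrix A, the parameters mu and n_iter, and the sparse vector standing in for point-source amplitudes are all assumptions made for this toy example.

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding (shrinkage) operator: sign(v) * max(|v| - mu, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, n_iter=3000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b (sketch only)."""
    _, n = A.shape
    delta = 0.9 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    v = np.zeros(n)                           # dual (Bregman) variable
    x = np.zeros(n)
    for _ in range(n_iter):
        v = v + A.T @ (b - A @ x)             # gradient step on the residual
        x = delta * shrink(v, mu)             # sparsity-enforcing shrinkage
    return x

# toy example: recover a sparse vector standing in for point-source amplitudes
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_rec = linearized_bregman(A, b)
```

The shrinkage step is what promotes sparsity: components of the dual variable below the threshold mu are mapped to zero, so only a few entries of x become active.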

Speaker:  Markus Grasmair, Georgia Institute of Technology 
Title:  Convergence rates for positively homogeneous regularization functionals on Banach spaces 
Abstract: 
Recently it has been shown that Tikhonov regularization with ℓ^{1} regularization
terms can imply up to linear convergence rates in the setting where the true
solution of the operator equation to be solved is assumed to have a sparse
representation with respect to the given basis—that is, its support with
respect to that basis is a finite set. In this talk it is shown that the same linear rates can also
be derived for regularization with general positively homogeneous regularization
functionals, provided that they allow for some meaningful notion of the support
of the solution. Examples where this generalization applies include the settings
of joint or group sparsity, but also that of discrete total variation regularization,
which can be interpreted as sparse regularization of the discrete derivative.
In all these cases, the error of the regularized solution, measured in terms
of the regularization functional, is of the same order as the noise level.
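The final claim can be written compactly as follows; the notation is mine, not the talk's (R is the positively homogeneous regularization functional, x^† the true solution, x_α^δ the regularized solution, δ the noise level):

```latex
% Compact restatement of the rate result claimed in the abstract:
R\bigl(x_\alpha^\delta - x^\dagger\bigr) = O(\delta)
\quad \text{as } \delta \to 0,
% i.e. the error, measured in terms of the regularization functional
% itself, is of the same order as the noise level.
```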

Speaker:  Bangti Jin, Texas A&M University 
Title:  A new approach to nonlinear constrained Tikhonov regularization 
Abstract: 
In this talk we discuss a new approach to nonlinear constrained
Tikhonov regularization from an optimization point of view. The
second-order sufficient condition is suggested as a nonlinearity
condition. The approach is exploited for several common parameter
choice rules by deriving a priori and a posteriori convergence rate
results. The idea
is explored on a general class of nonlinear parameter identification
problems, for which new source and nonlinearity conditions emerge
naturally. The theory will be illustrated on concrete examples.

Speaker:  Jijun Liu (speaker), Southeast University, China, and Masahiro Yamamoto, The University of Tokyo, Japan, and University of Göttingen, Germany 
Title:  A backward problem for the time-fractional diffusion equation 
Abstract:  We consider a backward problem in time for a time-fractional partial
differential equation in the one-dimensional case. This models an anomalous
diffusion process in porous media, and such a backward problem is of
great practical importance because we often do not know the initial
density of the substance, but we can observe the density at a positive time.
The backward problem is ill-posed, and we propose a regularizing scheme by
quasi-reversibility with a full theoretical analysis, and we test its
numerical performance. Our solution is based on the eigenfunction
expansion of the associated elliptic operator.
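A standard formulation of this backward problem reads as follows; the notation is illustrative and not taken verbatim from the talk (Caputo derivative of order α ∈ (0,1), Dirichlet eigenpairs of the elliptic operator):

```latex
% Model problem: \partial_t^\alpha is the Caputo derivative of order
% \alpha \in (0,1), and (\lambda_n, \varphi_n) are the Dirichlet
% eigenpairs of -\partial_x^2 on (0,1).
\partial_t^\alpha u(x,t) = \partial_x^2 u(x,t), \qquad u(x,0) = a(x),
\qquad u(0,t) = u(1,t) = 0 .
% The solution has the eigenfunction expansion in terms of the
% Mittag-Leffler function E_\alpha:
u(x,t) = \sum_{n=1}^{\infty} E_\alpha(-\lambda_n t^\alpha)\,
         (a,\varphi_n)\,\varphi_n(x) .
% Backward problem: given the profile u(\cdot,T) at some T > 0,
% recover the initial density a = u(\cdot,0).
```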

Speaker:  Shuai Lu, Fudan University, Shanghai, China 
Title:  A note on the conditional stability of nonlinear ill-posed problems in Banach spaces 
Abstract:  In this talk, we revisit conditional stability for ill-posed
problems in Banach spaces. Both a priori and a posteriori choices
of the regularization parameter are proposed. Convergence rates of
the regularized solutions, measured by the Bregman distance, are also proved.

Speaker:  Stefan Kindermann, Johannes Kepler University Linz, Austria 
Title:  On the degree of ill-posedness 
Abstract:  The degree of ill-posedness of inverse problems with a compact
operator is generally defined as the decay rate of the
associated singular values. Nashed distinguished between type I
ill-posed problems (with non-compact operator) and type II
ill-posed problems (with compact operator).
In this talk, we study the difficulties in extending the usual
definition of the degree of ill-posedness to type I
(non-compact) ill-posed problems.
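The decay-rate convention mentioned above can be stated as follows (my notation, for illustration):

```latex
% For a compact operator A with singular values \sigma_n(A) \searrow 0:
\sigma_n(A) \asymp n^{-a}
\quad\Longrightarrow\quad
Ax = y \ \text{is ill-posed of degree } a .
% For a non-compact (type I) operator the spectrum need not be discrete,
% so no such sequence of singular values is available -- one source of
% the difficulties in extending the definition.
```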

Speaker:  Kamil Kazimierski, University of Bremen, Germany 
Title:  Iterative regularization methods in Banach spaces 
Abstract: 
In its general form, an inverse problem consists of recovering certain parameters of an object from noise-corrupted measured data, given the model mapping the parameters to the data. The main aim of regularization theory is to construct and analyze algorithms which generate reconstructions of the true parameters with certain desired properties. In many applications such a desired property is a sparse structure of the reconstructions. For example, in geophysical applications, and in particular when prospecting the soil, the sparse structure is given by the layers of the ground. One is therefore interested in regularization methods which enforce sparsity.
A popular method in this setting is Tikhonov regularization with a sparsity-promoting penalty. This method is well studied, and many results concerning necessary parameter choices, source conditions, and convergence rates are known. However, from the practitioner's point of view this method is only feasible if there exists a global, exact minimization scheme for the related Tikhonov functional. In general no such scheme is available. Most minimization schemes (e.g. steepest descent) output only an approximate minimizer after a finite number of iterations. Further, every change of the regularization parameter requires a rerun of the minimization scheme, which makes the regularization computationally expensive.
In contrast, iterative regularization methods, like the Landweber method or the conjugate gradient method, are exact, i.e. the exact regularizing element is obtained after a finite number of iteration steps. Further, since the regularization parameter is the stopping index, the iteration scheme needs to be carried out only once; the iterates of a single run may be reused for different regularization parameters.
For a linear operator A, the iteration
x*_{n+1} := x*_{n} − μ_{n}A*J(Ax_{n} − y^{δ}),  x_{n+1} := J*(x*_{n+1})
was introduced, which together with an appropriate stopping criterion based on the discrepancy principle of Morozov is a regularization method. (The mappings J, J* are duality mappings.)
The convergence speed of the above iteration, and therefore its practical viability, depends heavily on the choice of the step size μ_{n}. In the first part of the talk we will introduce and analyze a substantially improved version of the step-size choice, which was published by Hein and the author in [2].
In the second part of the talk we consider the extension of the iteration to nonlinear operators F,
x*_{n+1} := x*_{n} − μ_{n}[F′(x_{n})]*J(F(x_{n}) − y^{δ}),  x_{n+1} := J*(x*_{n+1}),
cf. [3]. We stress that, to the author's best knowledge, this is the first sparsity-promoting iterative regularization method for nonlinear operators. Although the above methods work well for any noise level, they are particularly suitable for data with a high noise level. Finally, in the last part of the talk we will discuss an iterative method designed for data with a low noise level. It is given by
ψ*_{n} := A*J(Ax_{n} − y^{δ}) + β_{n}ψ*_{n−1}
x*_{n+1} := x*_{n} − μ_{n}ψ*_{n},  x_{n+1} := J*(x*_{n+1}).
We remark that the above method is an extension of a variant of the conjugate gradient method, namely the minimal-error conjugate gradient method, cf. [1].
The presented results are joint work with Torsten Hein, Matheon, Berlin.
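As a concrete aside (my own illustration, not from the talk): in the sequence space ℓ^p, realized here as R^n with the p-norm, the duality mapping with gauge t ↦ t^{p−1} acts componentwise, and the dual mapping with the conjugate exponent p* = p/(p−1) inverts it, since (p−1)(p*−1) = 1.

```python
import numpy as np

def duality_map(x, p):
    """Duality mapping J_p of l^p with gauge t^{p-1}: componentwise
    sign(x_i) * |x_i|^(p-1); satisfies <J_p(x), x> = ||x||_p^p."""
    return np.sign(x) * np.abs(x) ** (p - 1)

p = 1.5
p_star = p / (p - 1.0)             # conjugate exponent: J* lives on the dual l^{p*}

x = np.array([0.5, -2.0, 1.0, 0.0])
xs = duality_map(x, p)             # map into the dual space
x_back = duality_map(xs, p_star)   # J* inverts J because (p-1)(p*-1) = 1
```

This componentwise formula is what makes the dual-space iterations above computationally cheap: mapping back and forth between X and X* costs only pointwise powers.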

Speaker:  Torsten Hein, TU Berlin, Germany 
Title:  Iterative regularization of gradient-type in Banach spaces 
Abstract:  We consider the linear illposed operator equation
Ax = y,  x ∈ X, y ∈ Y,
where A: X → Y denotes a bounded linear operator mapping between the Banach
spaces X and Y. For simplicity we assume that A is injective and that there exists x^{†} ∈ X with Ax^{†} = y. For δ > 0 and given noisy data y^{δ} ∈ Y with known bound ‖y^{δ} − y‖ ≤ δ for the noise level, we deal with the iteration approach
x^{δ}_{0} = x_{0} := G(x*_{0}), x*_{0} ∈ X*
together with the discrepancy principle as stopping criterion for the iteration process. Here,
x*_{n+1} := x*_{n} − μ_{n}ψ*_{n},  x^{δ}_{n+1} := G(x*_{n+1}),
where ψ*_{n} is either the gradient
ψ*_{n} := A*J_{p}(Ax^{δ}_{n} − y^{δ}) ∈ X*
of the objective functional x ↦ (1/p)‖Ax − y^{δ}‖^{p} at the element x^{δ}_{n} itself, or some modified version of it. In this context, p ∈ (1,∞) and J_{p} denotes the duality mapping from the space Y into its dual space Y* with gauge function t ↦ t^{p−1}. Moreover, motivations for specific choices of the (nonlinear) transportation operator G: X* → X are discussed.
In order to achieve a tolerable decay rate for the error ‖x^{δ}_{n} − x^{†}‖ we have to apply a proper choice of the step size μ_{n} in each iteration step. Motivated by Hilbert space methods and taking the noisy data into account, we suggest several choices for the parameter μ_{n}. Further, we present convergence and stability results for the method under consideration. The theoretical results are illustrated by some numerical examples. 
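A minimal sketch of the scheme in its simplest setting (my own illustration, not the speaker's code): for X = Y = R^n with Euclidean norms and p = 2, the duality mapping J_2 and the transportation operator G reduce to the identity, and the iteration becomes the classical Landweber method stopped by the discrepancy principle. The matrix, noise level, and parameter tau below are assumptions for the toy example.

```python
import numpy as np

def landweber_discrepancy(A, y_delta, delta, tau=1.2, mu=None, max_iter=10000):
    """Gradient iteration x_{n+1} = x_n - mu * A^T (A x_n - y^delta),
    stopped at the first n with ||A x_n - y^delta|| <= tau * delta
    (discrepancy principle). Hilbert-space case: J_p and G are identities."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # constant step from spectral norm
    x = np.zeros(A.shape[1])
    for n in range(max_iter):
        r = A @ x - y_delta
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle reached
            return x, n
        x = x - mu * A.T @ r                   # gradient step on (1/2)||Ax - y||^2
    return x, max_iter

# mildly ill-conditioned toy problem with additive noise
rng = np.random.default_rng(1)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(1.0 / (1.0 + np.arange(n))) @ U.T   # decaying singular values
x_true = rng.standard_normal(n)
y = A @ x_true
noise = rng.standard_normal(n)
delta = 1e-3
y_delta = y + delta * noise / np.linalg.norm(noise)
x_rec, stop_n = landweber_discrepancy(A, y_delta, delta)
```

The stopping index plays the role of the regularization parameter: iterating further would start fitting the noise, which is exactly what the discrepancy principle prevents.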
Please address administrative questions to aipc@math.tamu.edu. Scientific questions should be addressed to the chair of the Scientific Program Committee: rundell AT math.tamu.edu