Registered Data

[00184] Recent advances in data-driven methods for inverse problems

  • Session Time & Room :
    • 00184 (1/3) : 4D (Aug.24, 15:30-17:10) @E811
    • 00184 (2/3) : 4E (Aug.24, 17:40-19:20) @E811
    • 00184 (3/3) : 5B (Aug.25, 10:40-12:20) @E811
  • Type : Proposal of Minisymposium
  • Abstract : The remarkable success of deep learning has led to a transformative impact on the research landscape of inverse problems in imaging. This mini-symposium aims to bring together researchers who have made exciting contributions to understanding the theoretical foundations and empirical performance of deep learning in various imaging applications. The talks will cover a wide range of topics such as deep regularization, Bayesian methods, microlocal analysis, learned optimization solvers, and robustness of reconstruction methods to distribution shift and adversarial attacks, making the sessions of sufficient interest to a broad audience, while encouraging an exchange of ideas to advance the state-of-the-art.
  • Organizer(s) : Subhadip Mukherjee, Carola-Bibiane Schönlieb, Martin Burger
  • Classification : 68T07, 65J22, deep learning, inverse problems in imaging
  • Minisymposium Program :
    • 00184 (1/3) : 4D @E811 [Chair: Carola-Bibiane Schönlieb]
      • [05418] Machine learned regularization for inverse problems - the dos and don’ts
        • Format : Online Talk on Zoom
        • Author(s) :
          • Carola-Bibiane Schönlieb (University of Cambridge)
        • Abstract : Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. They appear in a variety of places, from medical imaging, for instance MRI or CT, to remote sensing, for instance radar, to material sciences and molecular biology, for instance electron microscopy. Here, inverse problems are a tool for looking inside specimens, resolving structures beyond the scale visible to the naked eye, and quantifying them. They are a means for diagnosis, prediction and discovery. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Classically, such approaches are derived almost exclusively in a knowledge-driven manner, constituting handcrafted mathematical models. Examples include variational regularization methods with Tikhonov regularization, the total variation and several sparsity-promoting regularizers such as the L1 norm of wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically rigorous and computationally robust solutions to inverse problems, they are also limited by our ability to model solution properties accurately and to realise these approaches in a computationally efficient manner. Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives solutions to inverse problems in a data-driven way. Here, the inversion approach is not mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately selected training data. Current approaches that follow this new paradigm distinguish themselves through solution accuracies paired with computational efficiency that were previously inconceivable. In this talk I will give an introduction to this new data-driven paradigm for inverse problems. Presented methods include data-driven variational models and plug-and-play approaches, learned iterative schemes, a.k.a. learned unrolling, and learned post-processing. While presenting these methodologies, we will discuss their theoretical properties and provide numerical examples for image denoising, deconvolution and computed tomography reconstruction. The talk will finish with a discussion of open problems and future perspectives.
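As a concrete instance of the handcrafted baselines named in the abstract above, here is a minimal sketch of classical Tikhonov regularization for a generic linear forward operator; the operator, data and regularization weight are illustrative placeholders, not material from the talk.

```python
# Minimal sketch, assuming a generic linear forward operator A: classical
# Tikhonov regularization, one of the handcrafted baselines named in the
# abstract. All quantities below are illustrative placeholders.
import numpy as np

def tikhonov_reconstruction(A, y, alpha):
    """Solve min_x ||A x - y||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Toy usage: an under-determined (hence ill-posed) problem with noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
x_true = rng.standard_normal(80)
y = A @ x_true + 0.05 * rng.standard_normal(50)
x_rec = tikhonov_reconstruction(A, y, alpha=0.1)
```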
      • [04644] Data-driven Regularization based on Diagonal Frame Decomposition
        • Format : Talk at Waseda University
        • Author(s) :
          • Yunseok Lee (Ludwig Maximilian University Munich)
          • Samira Kabri (Deutsches Elektronen-Synchrotron (DESY) Hamburg)
          • Martin Burger (Deutsches Elektronen-Synchrotron (DESY) Hamburg and University of Hamburg)
          • Gitta Kutyniok (Ludwig Maximilian University Munich)
        • Abstract : In this talk, we propose a data-driven framework to design optimal filters for inverse problems using frame decompositions, which generalize classical spectral filters. Frames are sets of vectors that allow for stable and redundant representations of signals in a Hilbert space. Our framework works by learning a linear transformation that modifies the frame coefficients of a measured signal to enhance or suppress certain features. We achieve this by formulating the filter design as an optimization problem with a data-driven regularizer that incorporates prior knowledge from noise and ground truth data. Our approach comes with theoretical guarantees in terms of convergence as well as generalization to unseen data. We also illustrate its effectiveness in several numerical experiments using the Wavelet-Vaguelette decomposition as an example.
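A minimal sketch of the idea of learning a diagonal filter on decomposition coefficients, under stated assumptions: the SVD is used as a stand-in for a general diagonal frame decomposition, and the least-squares fitting rule is illustrative rather than the authors' regularized formulation.

```python
# Minimal sketch under stated assumptions: a diagonal filter on the coefficients
# of a decomposition of A is fitted from paired training data. The SVD is used
# here as a stand-in for a general diagonal frame decomposition, and the
# least-squares fitting rule is illustrative, not the authors' regularizer.
import numpy as np

def learn_diagonal_filter(A, X_train, Y_train):
    """Fit per-component filter weights w_i minimizing the empirical MSE of
    the filtered reconstruction sum_i w_i * (<y, u_i> / s_i) * v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    C_naive = (U.T @ Y_train.T) / s[:, None]   # unfiltered coefficients, one column per sample
    C_true = Vt @ X_train.T                    # ground-truth coefficients
    w = np.sum(C_naive * C_true, axis=1) / np.sum(C_naive ** 2, axis=1)
    return U, s, Vt, w

def filtered_reconstruction(U, s, Vt, w, y):
    return Vt.T @ (w * (U.T @ y) / s)
```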
      • [03944] Fourier Neural Operators for data-driven regularization
        • Format : Talk at Waseda University
        • Author(s) :
          • Samira Kabri (Friedrich-Alexander-Universität Erlangen-Nürnberg)
        • Abstract : In this talk we investigate the use of Fourier Neural Operators (FNOs) for image processing in comparison to standard Convolutional Neural Networks (CNNs). FNOs - which are so-called neural operators with a specific parametrization - have been applied successfully in the context of parametric PDEs. We derive the FNO architecture as an example for continuous and Fréchet-differentiable neural operators on Lebesgue spaces and show how CNNs can be converted into FNOs and vice versa. Based on these insights, we explore possibilities of incorporating the ideas of FNOs into the data-driven regularization of inverse problems in imaging.
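For orientation, a generic sketch of a single 1-D Fourier Neural Operator layer in the usual parametrization (a pointwise linear map plus a learned multiplication of the lowest Fourier modes); the layer and its hyperparameters are assumptions for illustration, not code from the talk.

```python
# Generic sketch of a single 1-D Fourier Neural Operator layer: a pointwise
# linear map plus a learned multiplication of the lowest Fourier modes. The
# layer and its hyperparameters are illustrative, not code from the talk.
import torch

class FNOLayer1d(torch.nn.Module):
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        self.pointwise = torch.nn.Conv1d(channels, channels, kernel_size=1)
        self.spectral_weight = torch.nn.Parameter(
            0.02 * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # forward FFT along the grid
        out_ft = torch.zeros_like(x_ft)
        k = min(self.n_modes, x_ft.shape[-1])
        out_ft[:, :, :k] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :k], self.spectral_weight[:, :, :k])
        spectral = torch.fft.irfft(out_ft, n=x.shape[-1])
        return torch.relu(self.pointwise(x) + spectral)
```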
      • [05027] Data-driven regularization theory of invertible ResNets for solving inverse problems
        • Format : Online Talk on Zoom
        • Author(s) :
          • Clemens Arndt (ZeTeM University of Bremen)
          • Alexander Denker (ZeTeM University of Bremen)
          • Sören Dittmer (ZeTeM University of Bremen)
          • Nick Heilenkötter (ZeTeM University of Bremen)
          • Meira Iske (ZeTeM University of Bremen)
          • Tobias Kluth (University of Bremen)
          • Judith Nickel (ZeTeM University of Bremen)
        • Abstract : Data-driven solution techniques for inverse problems, typically based on specific learning strategies, exhibit remarkable performance in image reconstruction tasks. These learning-based reconstruction strategies often follow a two-step scheme. First, one uses a given dataset to train the reconstruction scheme, which one often parametrizes via a neural network. Second, the reconstruction scheme is applied to a new measurement to obtain a reconstruction. We follow these steps but specifically parametrize the reconstruction scheme with invertible residual networks (iResNets). We demonstrate that the invertibility opens the door to new investigations into the influence of the training and the architecture on the resulting reconstruction scheme. To be more precise, we analyze the effect of different iResNet architectures, loss functions, and prior distributions on the trained network. The investigations reveal a formal link to the regularization theory of linear inverse problems for shallow network architectures. Moreover, we analytically optimize the parameters of specific classes of architectures in the context of Bayesian inversion, revealing the influence of the prior and noise distribution on the solution.
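A rough sketch of the kind of invertible residual block the abstract builds on: the map x -> x + f(x) is invertible whenever f is a contraction, and the inverse can be computed by fixed-point iteration. The Lipschitz control below is a crude, purely illustrative choice, not the authors' construction.

```python
# Rough sketch, with a crude and purely illustrative Lipschitz control: an
# invertible residual block x -> x + f(x), invertible whenever f is a
# contraction, with the inverse computed by Banach fixed-point iteration.
import torch

class InvertibleResidualBlock(torch.nn.Module):
    def __init__(self, dim, hidden=64, lip=0.9):
        super().__init__()
        self.lip = lip
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.ELU(), torch.nn.Linear(hidden, dim))

    def residual(self, x):
        # Rescale by the product of spectral norms so that Lip(f) <= lip < 1.
        scale = 1.0
        for layer in self.net:
            if isinstance(layer, torch.nn.Linear):
                scale = scale * torch.linalg.matrix_norm(layer.weight, ord=2)
        return self.lip / torch.clamp(scale, min=self.lip) * self.net(x)

    def forward(self, x):
        return x + self.residual(x)

    def inverse(self, y, n_iter=50):
        x = y.clone()
        for _ in range(n_iter):               # fixed-point iteration for x = y - f(x)
            x = y - self.residual(x)
        return x
```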
    • 00184 (2/3) : 4E @E811 [Chair: Martin Burger]
      • [05430] Are neural operators really neural operators?
        • Format : Online Talk on Zoom
        • Author(s) :
          • Rima Alaifari (ETH Zurich)
        • Abstract : In operator learning, it has been observed that proposed models may not behave as operators when implemented, questioning the very essence of what operator learning should be. We contend that some form of continuous-discrete equivalence is necessary for an architecture to genuinely learn the underlying operator, rather than just discretizations of it. Employing frames, we introduce the framework of Representation equivalent Neural Operator (ReNO) to ensure operations at the continuous and discrete level are equivalent.
      • [04722] Plug-and-Play Models for Large-Scale Computational Imaging
        • Format : Talk at Waseda University
        • Author(s) :
          • Ulugbek Kamilov (Washington University in St. Louis)
        • Abstract : Computational imaging is a rapidly growing area that seeks to enhance the capabilities of imaging instruments by viewing imaging as an inverse problem. Plug-and-Play Priors (PnP) is one of the most popular frameworks for solving computational imaging problems through integration of physical and learned models. PnP leverages high-fidelity physical sensor models and powerful machine learning methods to provide state-of-the-art imaging algorithms. PnP models alternate between minimizing a data-fidelity term to promote data consistency and imposing a learned image prior in the form of an “image denoising” deep neural network. This talk presents a principled discussion of PnP, its theoretical foundations, its implementations for large-scale imaging problems, and recent results on PnP for the recovery of continuously represented images. We present several applications of our theoretical and algorithmic insights in bio-microscopy, computerized tomography, and magnetic resonance imaging.
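The alternation described in the abstract above can be sketched as a Plug-and-Play proximal-gradient loop in which the proximal map of the prior is replaced by a pretrained denoiser; `forward_op`, `adjoint_op` and `denoiser` are hypothetical placeholders for the sensor model and the learned prior.

```python
# Illustrative sketch of the alternation described above: Plug-and-Play
# proximal gradient descent, with the proximal map of the prior replaced by a
# pretrained denoiser. `forward_op`, `adjoint_op` and `denoiser` are
# hypothetical placeholders for the sensor model and the learned prior.
import numpy as np

def pnp_pgd(y, forward_op, adjoint_op, denoiser, step, n_iter=100):
    x = adjoint_op(y)                          # simple back-projection initialization
    for _ in range(n_iter):
        grad = adjoint_op(forward_op(x) - y)   # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - step * grad)          # denoiser acts as the prior's prox
    return x
```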
      • [02097] Learned proximal operators meet unrolling for limited angle tomography
        • Format : Online Talk on Zoom
        • Author(s) :
          • Tatiana Alessandra Bubba (University of Bath)
          • Subhadip Mukherjee (University of Bath)
          • Luca Ratti (University of Genoa)
          • Andrea Sebastiani (University of Bologna)
        • Abstract : In recent years, limited angle tomography has become a challenging testing ground for several theoretical and numerical studies, where both variational regularisation and data-driven techniques have been investigated extensively. I will present a hybrid reconstruction framework where the proximal operator of an accelerated unrolled scheme is learned to ensure suitable theoretical guarantees. The recipe relies on the interplay between sparse regularisation, harmonic analysis, microlocal analysis and Plug-and-Play methods.
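A generic sketch of an unrolled accelerated proximal-gradient scheme whose proximal operator is a small learned network trained end-to-end; the architecture and operators are assumptions for illustration, not the framework presented in the talk.

```python
# Generic sketch (architecture and operators are assumptions, not the talk's
# framework): an unrolled accelerated proximal-gradient scheme whose proximal
# operator is a small learned network, trained end-to-end on paired data.
import torch

class UnrolledAcceleratedPG(torch.nn.Module):
    def __init__(self, forward_op, adjoint_op, n_layers=10, step=1e-2, channels=1):
        super().__init__()
        self.A, self.At, self.step = forward_op, adjoint_op, step
        self.prox = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Conv2d(channels, 32, 3, padding=1), torch.nn.ReLU(),
                torch.nn.Conv2d(32, channels, 3, padding=1))
            for _ in range(n_layers))

    def forward(self, y):
        x = z = self.At(y)
        t = 1.0
        for prox in self.prox:
            grad = self.At(self.A(z) - y)
            x_new = prox(z - self.step * grad)                 # learned proximal step
            t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # Nesterov-style momentum
            x, t = x_new, t_new
        return x
```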
      • [04626] Plug-and-Play sampling for inverse problems in imaging
        • Format : Talk at Waseda University
        • Author(s) :
          • Julie Delon (Université Paris Cité)
          • Rémi Laumont (Technical University of Denmark)
          • Marcelo Pereyra (Heriot-Watt University)
          • Andrés Almansa (Université Paris Cité)
          • Valentin De Bortoli (Ecole Normale Supérieure)
        • Abstract : In a Bayesian framework, image models are used as priors or regularisers and combined with explicit likelihood functions to define posterior distributions. These posterior distributions can be used to derive Maximum A Posteriori (MAP) estimators, leading to optimization problems that are generally well studied and understood. Sampling schemes can also be used to explore these posterior distributions more finely, derive other estimators, quantify uncertainties or perform other advanced inferences. In a manner akin to Plug-and-Play (PnP) methods in optimization, these sampling schemes can be combined with denoising neural networks approximating the gradient of a log-prior on images. In this talk, we will focus on these PnP sampling schemes, which raise important questions concerning the correct definition of the underlying Bayesian models or the computed estimators, as well as their regularity properties, which are necessary to ensure the stability of the numerical schemes.
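A minimal sketch of a Plug-and-Play unadjusted Langevin step of the kind discussed above, in which the gradient of the log-prior is approximated by a denoiser residual via Tweedie's identity; the operators and denoiser are hypothetical placeholders.

```python
# Minimal sketch, with hypothetical placeholder operators and denoiser: a
# Plug-and-Play unadjusted Langevin step in which the gradient of the log-prior
# is approximated by a denoiser residual (Tweedie's identity).
import numpy as np

def pnp_ula(y, forward_op, adjoint_op, denoiser, sigma2, eps, delta, n_iter=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = adjoint_op(y)
    samples = []
    for _ in range(n_iter):
        grad_lik = adjoint_op(y - forward_op(x)) / sigma2   # gradient of the Gaussian log-likelihood
        grad_prior = (denoiser(x) - x) / eps                # approximate score of the prior
        x = x + delta * (grad_lik + grad_prior) \
            + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return samples                                          # approximate posterior samples
```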
    • 00184 (3/3) : 5B @E811 [Chair: Subhadip Mukherjee]
      • [03828] Recent advances of diffusion models in inverse problems
        • Format : Talk at Waseda University
        • Author(s) :
          • Jong Chul YE (KAIST)
        • Abstract : Recently, diffusion models have been used to solve various inverse problems for medical imaging applications in an unsupervised manner. In this talk, we propose an additional correction term inspired by the manifold constraint, which can be used synergistically with previous solvers to keep the iterations close to the manifold.
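A heavily hedged sketch of a measurement-consistency correction of the kind alluded to above (an assumption about its general shape, not the speaker's exact algorithm): after an unconditional reverse-diffusion update, the iterate is nudged by the gradient of a data-fidelity term evaluated at the current denoised estimate.

```python
# Heavily hedged sketch (an assumption about the general shape of such a
# correction, not the speaker's exact algorithm): after an unconditional
# reverse-diffusion update, the iterate is nudged by the gradient of a
# data-fidelity term evaluated at the current denoised estimate.
import torch

def corrected_reverse_step(x_t, t, reverse_step, denoised_estimate, forward_op, y, zeta):
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoised_estimate(x_t, t)                       # e.g. Tweedie estimate from a score net
    residual = torch.linalg.vector_norm(y - forward_op(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]
    return reverse_step(x_t, t).detach() - zeta * grad       # diffusion update plus correction
```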
      • [04409] Conditional Image Generation with Score Based Models
        • Format : Talk at Waseda University
        • Author(s) :
          • Jan Pawel Stanczuk (University of Cambridge)
          • Georgios Batzolis (University of Cambridge)
        • Abstract : Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modelling. In this work we conduct a systematic comparison and theoretical analysis of different approaches to learning conditional probability distributions with score-based diffusion models. In particular, we prove results which provide a theoretical justification for one of the most successful estimators of the conditional score. Moreover, we introduce a multi-speed diffusion framework, which leads to a new estimator for the conditional score, performing on par with previous state-of-the-art approaches.
      • [01555] Data-Driven Convex Optimization via Mirror Descent
        • Format : Talk at Waseda University
        • Author(s) :
          • Hong Ye Tan (University of Cambridge)
          • Subhadip Mukherjee (University of Bath)
          • Junqi Tang (University of Cambridge)
          • Carola Bibiane Schoenlieb (University of Cambridge)
          • Andreas Hauptmann (University of Oulu)
        • Abstract : Learning-to-optimize is an emerging framework that seeks to speed up the solution of certain optimization problems by leveraging training data. We propose a provably approximately convergent learning-to-optimize scheme for convex optimization based on a functional parameterization of the classical mirror descent algorithm. In particular, we model the underlying convex function with an input-convex neural network and derive corresponding convergence rate bounds. We demonstrate improved convergence rates on various convex image processing examples.
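To fix notation, the sketch below shows plain mirror descent with a fixed, hand-chosen entropic mirror map on the probability simplex; the learned-to-optimize scheme in the talk replaces this mirror map with an input-convex neural network fitted to training data.

```python
# Sketch with a fixed, hand-chosen mirror map rather than the learned one from
# the talk: mirror descent on the probability simplex with the entropic mirror
# map grad psi(x) = log(x). The learned-to-optimize scheme replaces psi with an
# input-convex neural network fitted to training data.
import numpy as np

def mirror_descent_simplex(grad_f, x0, step, n_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        z = np.log(x) - step * grad_f(x)    # gradient step in the dual (mirror) space
        x = np.exp(z) / np.exp(z).sum()     # inverse mirror map, normalized to the simplex
    return x

# Toy usage: minimize a quadratic over the simplex.
Q = np.diag([1.0, 2.0, 3.0])
x_opt = mirror_descent_simplex(lambda x: Q @ x, x0=np.ones(3) / 3, step=0.5)
```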
      • [03784] Multi-Modal Hypergraph Diffusion Network with Dual Prior for Alzheimer Classification
        • Format : Talk at Waseda University
        • Author(s) :
          • Angelica Aviles-Rivero (University of Cambridge)
        • Abstract : The automatic early diagnosis of prodromal stages of Alzheimer’s disease is of great relevance for patient treatment to improve quality of life. We address this problem as a multi-modal classification task. Multi-modal data provides richer and complementary information. However, existing techniques only consider lower-order relations between the data and single/multi-modal imaging data. In this work, we introduce a novel semi-supervised hypergraph learning framework for Alzheimer’s disease diagnosis. Our framework allows for higher-order relations among multi-modal imaging and non-imaging data whilst requiring a tiny labelled set. Firstly, we introduce a dual embedding strategy for constructing a robust hypergraph that preserves the data semantics. We achieve this by enforcing perturbation invariance at the image and graph levels using a contrastive-based mechanism. Secondly, we present a dynamically adjusted hypergraph diffusion model, via a semi-explicit flow, to improve the predictive uncertainty.