Plenary Speakers
The talks of the plenary speakers are available from https://rc.signalprocessingsociety.org/.
Abstract: Modern machine learning systems are susceptible to adversarial examples:
inputs that preserve the characteristic semantics of a given class but whose classification is incorrect.
Current approaches to defense against adversarial attacks rely on modifications to the input (e.g. quantization)
or to the learned model parameters (e.g. via adversarial training), but are not always successful. This talk will:
1) Discuss some of the enablers of successful adversarial attacks via an empirical analysis of commonly used datasets.
2) Provide a survey of current attacks and defenses.
3) Propose a novel defense mechanism in which the model outputs are represented and decoded
in a fundamentally different way from current approaches. We demonstrate improved robustness via detailed testing on commonly used datasets.
The resulting architecture has several advantages: it yields meaningful probability estimates, it declares uncertainty when it should,
and it is fast during training and testing.
4) Discuss novel approaches to detection of adversarial examples using confidence metrics.
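As background for readers new to the area, the adversarial examples defined at the start of this abstract are commonly formalized as a small, norm-bounded perturbation that changes a classifier's decision. A minimal sketch of this standard formulation follows; the classifier f, true label y, loss L, norm p and budget ε are generic textbook notation, not the speaker's specific setting:

    \text{find } \delta \text{ with } \|\delta\|_p \le \varepsilon \text{ such that } f(x+\delta) \ne y,
    \quad \text{e.g. } \delta = \arg\max_{\|\delta\|_p \le \varepsilon} \mathcal{L}\big(f(x+\delta),\, y\big).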
Abstract: Sparse approximation is a well-established theory, with a
profound impact on the fields of signal and image processing. In this
talk we start by presenting this model and its features, and then turn
to describing two of its special cases – convolutional sparse coding
(CSC) and its multi-layered version (ML-CSC). Amazingly, as we will
carefully show, ML-CSC provides a solid theoretical foundation for
deep-learning architectures. Alongside this main message of bringing a
theoretical backbone to deep-learning, another central message that will
accompany us throughout the talk is this: Generative models for
describing data sources enable a systematic way to design algorithms,
while also providing a complete mechanism for a theoretical analysis of
these algorithms’ performance. This talk is meant for newcomers to this
field – no prior knowledge of sparse approximation is assumed.
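For orientation, the sparse approximation model this abstract builds on, and its multi-layered extension, can be sketched as follows; the dictionaries D_i, representations x_i and sparsity levels k_i are generic placeholders that may differ from the speaker's notation:

    y = D x + e, \quad \|x\|_0 \le k \qquad \text{(sparse approximation)}
    y = D_1 x_1,\; x_1 = D_2 x_2,\; \dots,\; x_{L-1} = D_L x_L, \quad \|x_i\|_0 \le k_i \qquad \text{(ML-CSC)}

In CSC and ML-CSC the dictionaries D_i are convolutional, and under this sketch the forward pass of a deep network can be read as a layered sparse-coding (pursuit) scheme, which is one way to understand the correspondence the abstract alludes to.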
Abstract: The temporal structure of macroscopic brain activity shows both oscillatory and scale-free dynamics.
While the functional implications of neural oscillations have been largely demonstrated, the observation of scale-free dynamics has raised numerous questions related to their nature and functional relevance.
To address such issues, we will first describe a rich conceptual framework for the characterization of scale-free temporal dynamics, combining self-similarity and multifractality, and second define robust wavelet-based assessment procedures.
These tools will be used to analyze brain activity, recorded via magnetoencephalography (MEG), in human subjects enrolled in a learning task.
Comparing brain activity both at rest and during task shows consistent infraslow (from 0.1 to 1.5 Hz) scale-free dynamics.
It also shows the existence of a fronto-occipital gradient in self-similarity, consistent with a hierarchy of temporal scales from sensory and associative to higher-order cortices. This gradient is further accentuated during task.
Additionally, while little multifractality is reported at rest, a significant increase is observed during task.
A negative correlation across individuals between the task-vs-rest variations in self-similarity and in multifractality is also observed, mostly in the regions involved in the task.
This concomitant decrease in self-similarity and increase in multifractality
reflects a significant change from globally well-structured temporal dynamics at rest to less globally structured activity during task, with significant transient and bursty non-Gaussian locally scale-free structures.
While the present results remain univariate in nature (the different source time series are analyzed independently),
the potential benefits of multivariate scale-free analysis will be discussed; such analyses are under current investigation as a way to study multivariate brain dynamics.
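For readers less familiar with these notions, self-similarity and multifractality as used above are typically quantified from the scaling of wavelet (leader) coefficients across time scales. A minimal sketch in standard notation follows; the symbols H, c_2, d_X(j,k) and ζ(q) are generic textbook conventions, not necessarily those used in the talk:

    \{X(at)\}_t \overset{d}{=} \{a^{H} X(t)\}_t \qquad \text{(self-similarity with Hurst exponent } H\text{)}
    \frac{1}{n_j} \sum_{k} |d_X(j,k)|^{q} \simeq C_q\, 2^{j\,\zeta(q)}, \qquad \zeta(q) \approx qH + c_2\, \frac{q^2}{2},

where c_2 = 0 for a purely self-similar (monofractal) process and c_2 < 0 signals multifractality, the quantity whose task-related increase is reported above.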
Abstract: Hyperspectral unmixing (HU) is a key topic in hyperspectral remote sensing. The
problem is to leverage the high spectral resolution of hyperspectral images to
identify the materials and their corresponding compositions in the scene. Early HU
research was based on smart intuitions from remote sensing, and recent contributions
from other fields—such as signal processing, optimization and machine
learning—have enriched the HU techniques substantially. In this talk we will use the
signal processing lens to reveal the fundamental insights of HU, namely, those arising
from convex geometry. We will see how such insights establish a unique branch of
provably powerful simplex-structured matrix factorization techniques. We will also
examine the connections between HU and a number of problems in blind source
separation, machine learning, data science, computer vision and biomedical imaging.
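For context, the convex-geometry viewpoint mentioned above usually starts from the linear mixing model, sketched below with generic notation (A for the endmember spectral signatures, s_n for the abundance vector of pixel n), which may differ from the speaker's:

    y_n = A s_n + e_n, \qquad s_n \ge 0, \quad \mathbf{1}^{\top} s_n = 1, \qquad n = 1, \dots, N.

Up to noise, every pixel y_n then lies in the simplex whose vertices are the columns of A; recovering those vertices from the data cloud is the simplex-structured matrix factorization problem referred to in the abstract.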
Abstract: The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning.
However, the mathematical reasons for this success remain elusive. For example, a key issue is that the neural network training problem is non-convex, hence optimization algorithms may not return a global minimum.
In addition, the regularization properties of algorithms such as dropout remain poorly understood. The first part of this talk will overview recent work on the theory of deep learning that aims
to understand how to design the network architecture, how to regularize the network weights, and how to guarantee global optimality. The second part of this talk will present sufficient conditions
to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minimum from any initialization. Such conditions apply to problems in matrix factorization,
tensor factorization and deep learning. The third part of this talk will present an analysis of the optimization and regularization properties of dropout in the case of matrix factorization.
Examples from neuroscience and computer vision will also be presented.
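To make the setting concrete, a prototypical instance of the non-convex problems to which such local-to-global optimality conditions apply is regularized matrix factorization, sketched below with generic placeholders (data Y, factors U and V, regularizer Θ and weight λ), not the speaker's exact formulation:

    \min_{U, V} \; \| Y - U V^{\top} \|_F^2 + \lambda\, \Theta(U, V).

Dropout applied to such a factorization can be studied as inducing an implicit regularizer of this general form, which appears to be the flavor of the regularization analysis announced in the third part of the talk.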