Maxime Poli
I’m a first-year PhD student in the Cognitive Machine Learning team of the LSCP at École Normale Supérieure, advised by Emmanuel Dupoux and Emmanuel Chemla.
I received an engineer’s degree from École des Ponts and a Master’s degree in Mathematics, Computer Vision and Machine Learning (MVA) from École Normale Supérieure Paris-Saclay.
News
- Nov 1, 2023: I’m starting my PhD at ENS!
Publications
Introducing topography in convolutional neural networks
Maxime Poli, Emmanuel Dupoux, and Rachid Riad.
ICASSP 2023
arXiv · doi · code
@inproceedings{poli2023introducing,
title = {Introducing {{Topography}} in {{Convolutional Neural Networks}}},
booktitle = {{{ICASSP}} 2023 - 2023 {{IEEE International Conference}} on {{Acoustics}}, {{Speech}} and {{Signal Processing}} ({{ICASSP}})},
author = {Poli, Maxime and Dupoux, Emmanuel and Riad, Rachid},
year = {2023},
doi = {10.1109/ICASSP49357.2023.10096671},
}
Parts of the brain that carry out sensory tasks are organized topographically:
nearby neurons are responsive to the same properties of input signals. Thus, in this work,
inspired by the neuroscience literature, we proposed a new topographic inductive bias in
Convolutional Neural Networks (CNNs). To achieve this, we introduced a new topographic loss
and an efficient implementation to topographically organize each convolutional layer of any CNN.
We benchmarked our new method on 4 datasets and 3 models in vision and audio tasks and showed
equivalent performance to all benchmarks. We also showcased the generalizability of our
topographic loss by applying it to different topographic organizations in CNNs.
Finally, we demonstrated that adding the topographic inductive bias made CNNs more resistant
to pruning. Our approach provides a new avenue to obtain models that are more memory efficient
while maintaining better accuracy.
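The abstract above describes the method at a high level only. As a rough illustration (not the paper's actual formulation or code), one way to express a topographic penalty on a convolutional layer in PyTorch is to assign each channel a position on a 2D grid and penalize dissimilar activations between channels that are close on that grid; the names below (topographic_loss, grid_positions, sigma) are illustrative assumptions.

import torch
import torch.nn.functional as F

def topographic_loss(activations, grid_positions, sigma=1.0):
    # Illustrative sketch, not the paper's implementation.
    # activations:    (batch, channels, height, width) output of a conv layer
    # grid_positions: (channels, 2) coordinates assigned to each channel
    # sigma:          width of the neighbourhood on the grid
    b, c, h, w = activations.shape
    flat = F.normalize(activations.reshape(b, c, h * w), dim=-1)
    # Pairwise cosine similarity between channels, averaged over the batch.
    channel_sim = torch.einsum('bcd,bkd->ck', flat, flat) / b
    # Gaussian proximity of channels on the assigned grid.
    dist2 = torch.cdist(grid_positions, grid_positions).pow(2)
    proximity = torch.exp(-dist2 / (2 * sigma ** 2))
    # Channels close on the grid are pushed towards similar activation patterns.
    return (proximity * (1.0 - channel_sim)).mean()

# Usage sketch: 64 channels laid out on an 8x8 grid, added as an auxiliary loss.
ys, xs = torch.meshgrid(torch.arange(8.0), torch.arange(8.0), indexing='ij')
grid = torch.stack([ys.flatten(), xs.flatten()], dim=1)
aux = topographic_loss(torch.randn(16, 64, 32, 32), grid)
# total_loss = task_loss + lambda_topo * aux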
Shennong: a Python toolbox for audio speech features extraction
Mathieu Bernard*, Maxime Poli*, Julien Karadayi, and Emmanuel Dupoux.
Behavior Research Methods
arXiv · doi · code
@article{bernard2023shennong,
title = {Shennong: {{A Python}} Toolbox for Audio Speech Features Extraction},
author = {Bernard, Mathieu and Poli, Maxime and Karadayi, Julien and Dupoux, Emmanuel},
year = {2023},
journal = {Behavior Research Methods},
doi = {10.3758/s13428-022-02029-6},
}
We introduce Shennong, a Python toolbox and command-line utility for audio speech
features extraction. It implements a wide range of well-established state-of-the-art
algorithms: spectro-temporal filters such as Mel-Frequency Cepstral Filterbank or Predictive
Linear Filters, pre-trained neural networks, pitch estimators, speaker normalization methods,
and post-processing algorithms. Shennong is an open source, reliable and extensible framework
built on top of the popular Kaldi speech processing library. The Python implementation makes
it easy to use for non-technical users and integrates with third-party speech modeling and machine
learning tools from the Python ecosystem. This paper describes the Shennong software architecture,
its core components, and implemented algorithms. Then, three applications illustrate its use.
We first present a benchmark of speech features extraction algorithms available in Shennong
on a phone discrimination task. We then analyze the performance of a speaker normalization
model as a function of the speech duration used for training. We finally compare pitch
estimation algorithms on speech under various noise conditions.
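To make the intended workflow concrete, here is a minimal usage sketch. It is not taken from the Shennong documentation; the module paths, class names, and attributes (shennong.audio.Audio, MfccProcessor, features.data) are assumptions and should be checked against the actual docs.

# Hypothetical Shennong-style workflow; names are assumptions, see the docs.
from shennong.audio import Audio
from shennong.processor.mfcc import MfccProcessor

# Load a waveform from disk (the path is a placeholder).
audio = Audio.load('utterance.wav')

# Configure an MFCC extractor for the audio's sample rate.
processor = MfccProcessor(sample_rate=audio.sample_rate)

# Run the extraction; the result is assumed to expose a frames-by-coefficients
# matrix together with frame timestamps.
features = processor.process(audio)
print(features.data.shape)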