Th. Beroud, P. Abry, Y. Malevergne, M. Senneret, G. Perrin, J. Macq
Generating time series with rich temporal dynamics using Deep Neural Networks remains a challenging issue, attracting much research effort.
Instead of inventing yet another high-performing architecture, the present work aims to show that, when classic Wasserstein Generative Adversarial Networks are used, the architecture can be modified to reduce the overall number of trainable parameters by four orders of magnitude (from roughly 10,000,000 to 1,000) without a decrease in synthesis quality. This work also shows that these architecture modifications have the side benefit of permitting the synthesis of time series longer than those used for training, without retraining the Deep Neural Network.
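For intuition, the sketch below illustrates why such a parameter reduction is compatible with variable-length synthesis: a 1-D fully convolutional generator (a hypothetical illustration, not the authors' exact architecture; the layer and channel choices here are assumptions) has a parameter count independent of the output length, so feeding it a longer noise sequence yields a longer series with the same trained weights.

```python
import torch
import torch.nn as nn

class FullyConvGenerator(nn.Module):
    """Hypothetical 1-D fully convolutional WGAN generator (illustration only).

    With no dense layer, the parameter count does not depend on the output
    length, so a longer input noise sequence yields a longer synthetic
    series from the same trained weights.
    """

    def __init__(self, noise_channels: int = 8, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(noise_channels, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, noise_channels, L) -> output: (batch, 1, 4 * L)
        return self.net(z)

gen = FullyConvGenerator()
print(sum(p.numel() for p in gen.parameters()))  # ~1,600 parameters, i.e. order 10**3
short = gen(torch.randn(1, 8, 64))    # output shape (1, 1, 256)
longer = gen(torch.randn(1, 8, 256))  # output shape (1, 1, 1024): 4x longer, same weights
```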
These outcomes are obtained using artificial time series whose complexity can be tuned and controlled, with scale-free, bursty, and non-time-reversible temporal dynamics.
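As an illustration of what tunable complexity can mean, the sketch below uses a classic textbook construction, Gaussian noise modulated by a log-normal multiplicative cascade, where the parameter sigma controls burstiness. This is a generic assumption about the kind of process involved, not necessarily the one used in the paper, and it does not by itself reproduce time irreversibility.

```python
import numpy as np

def cascade_noise(n_levels: int = 12, sigma: float = 0.3, seed: int = 0) -> np.ndarray:
    """Gaussian noise modulated by a log-normal multiplicative cascade.

    Generic illustration of a scale-free, bursty signal: sigma = 0 gives
    plain white noise, larger sigma gives stronger bursts (intermittency).
    """
    rng = np.random.default_rng(seed)
    w = np.ones(1)
    for _ in range(n_levels):
        w = np.repeat(w, 2)
        # mean-one log-normal multipliers at each dyadic refinement
        w *= rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=w.size)
    return w * rng.standard_normal(w.size)

x = cascade_noise()  # 2**12 = 4096 samples
print(x.shape, x.std())
```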
Synthesis quality is also assessed quantitatively by means of multiscale analyses.
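One common way to make such a multiscale assessment quantitative is to compare wavelet log-scale diagrams. The sketch below is an assumption about the kind of analysis meant, using the PyWavelets library rather than the authors' own tools: it computes the log2 variance of wavelet coefficients across scales, whose slope encodes the scaling behaviour that real and synthetic series should share.

```python
import numpy as np
import pywt

def wavelet_log_variances(x: np.ndarray, wavelet: str = "db3", levels: int = 8) -> np.ndarray:
    """log2 variance of wavelet detail coefficients, finest to coarsest scale."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:][::-1]  # reorder from finest (j=1) to coarsest (j=levels)
    return np.array([np.log2(np.mean(d**2)) for d in details])

# Compare the scaling behaviour of a real and a synthetic series
# (white noise stands in as hypothetical data here).
rng = np.random.default_rng(0)
real = rng.standard_normal(2**12)
synth = rng.standard_normal(2**12)
print(wavelet_log_variances(real) - wavelet_log_variances(synth))
```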
This work can thus be considered a contribution towards sustainable Artificial Intelligence.
This paper was published at ICASSP 2023, the 48th IEEE International Conference on Acoustics, Speech and Signal Processing.
> Download the research article
> Download presentation (slides)
> Download presentation (video)