Multiply-And-Max/min Neurons at the Edge: Pruned Autoencoder Implementation
Philippe Bich, Luciano Prono, Mauro Mangia, Fabio Pareschi, Riccardo Rovatti, G. Setti
2023 IEEE 66th ..., IEEE postprint (Author's Accepted Manuscript), DOI: 10.1109/...

Pruning is a necessary operation to achieve low-power and lightweight inference, which is fundamental for the implementation of neural networks on mobile and edge devices. The increased sparsification observed in MAM-based networks not only reduces memory requirements but can also lead to faster inference, as demonstrated in [12]. Moreover, most existing state-of-the-art pruning techniques can be used with MAM layers with little to no changes.
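
The snippets above do not spell out how a MAM layer computes its output. As a minimal sketch, assuming (as described in the MAM literature by the same authors) that each output neuron keeps only the maximum and the minimum of the elementwise input-weight products instead of summing all of them, a MAM layer and plain magnitude pruning applied to it could look as follows. MAMLinear and magnitude_prune_ are illustrative names, not the paper's code.

import torch
import torch.nn as nn


class MAMLinear(nn.Module):
    """Fully connected layer with Multiply-And-Max/min (MAM) aggregation.

    Sketch only: products are computed as in a standard linear layer, but
    each output is max(products) + min(products) + bias instead of the sum.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> products: (batch, out_features, in_features)
        prod = x.unsqueeze(1) * self.weight.unsqueeze(0)
        # Only the largest and the smallest product reach each output.
        return prod.max(dim=-1).values + prod.min(dim=-1).values + self.bias


def magnitude_prune_(layer: MAMLinear, sparsity: float) -> None:
    """Plain unstructured magnitude pruning, used unchanged on a MAM layer:
    zero out the requested fraction of smallest-magnitude weights."""
    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())
        if k > 0:
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())


# Usage example: prune 95% of the weights of a small MAM layer.
layer = MAMLinear(256, 64)
magnitude_prune_(layer, sparsity=0.95)
y = layer(torch.randn(8, 256))              # forward pass still works
print((layer.weight == 0).float().mean())   # roughly 0.95 of the weights are zero

The intuition reported in the MAM papers is that, since only two products per neuron contribute to the output, importance concentrates on few weights, which is what allows such aggressive pruning.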

In this work, we implement MAM on-device for the first time to demonstrate the feasibility of MAM-based DNNs at the Edge. In particular, as a case study, we show that the tail of a pruned MAM-based autoencoder fits on the targeted device while keeping a good reconstruction accuracy. Experimental results demonstrate the efficacy of the MAM-based approach in significantly sparsifying matrices through different pruning techniques.
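
The snippets do not report the actual layer sizes of the autoencoder tail or the memory budget of the targeted device. As a rough, purely illustrative sketch of the kind of check involved in "fitting on the device", assuming CSR-like storage for the pruned weights and hypothetical sizes and budget:

# Back-of-the-envelope check that a pruned layer fits in a memory budget.
# All numbers are illustrative placeholders, not the paper's autoencoder
# sizes or the target device's specifications.

def csr_footprint_bytes(rows: int, cols: int, density: float,
                        value_bytes: int = 4, index_bytes: int = 2) -> int:
    """Approximate size of a sparse weight matrix in CSR format:
    one value + one column index per non-zero, one row pointer per row."""
    nnz = int(rows * cols * density)
    return nnz * (value_bytes + index_bytes) + (rows + 1) * 4


dense_bytes = 256 * 64 * 4                       # dense float32 storage
sparse_bytes = csr_footprint_bytes(256, 64, density=0.05)

budget_bytes = 64 * 1024                         # hypothetical 64 KiB budget
print(f"dense:  {dense_bytes} B")
print(f"sparse: {sparse_bytes} B, fits: {sparse_bytes <= budget_bytes}")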