Reducing the spike rate of deep spiking neural networks based on time-encoding

Fontanini R., Pilotto A., Esseni D., Loghi M.
2024-01-01

Abstract

A primary objective of Spiking Neural Networks is highly energy-efficient computation. Given the event-driven nature of such computation, a low spike rate is clearly beneficial to reach this target. A network that processes information encoded in spike timing can, by its nature, exhibit such a sparse event rate, but as the network becomes deeper and larger, the spike rate tends to increase without any improvement in the final accuracy. If, on the other hand, a penalty on excess spikes is applied during training, the network may shift to a configuration where many neurons are silent, thus undermining the effectiveness of the training itself. In this paper, we present a learning strategy that keeps the final spike rate under control by changing the loss function to penalize the spikes that neurons generate after their first one. Moreover, we also propose a two-phase training strategy to avoid silent neurons during training, intended for benchmarks where this issue can switch off the network.
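
The sketch below illustrates one possible way to express the kind of "excess spike" penalty described in the abstract: counting, for each neuron, the spikes it emits after its first one and adding that count to the loss. This is only a minimal, hypothetical illustration in PyTorch-style Python, not the authors' actual formulation; the tensor layout, function name, and weighting factor are assumptions.

import torch

def excess_spike_penalty(spikes: torch.Tensor) -> torch.Tensor:
    """Penalize every spike a neuron emits after its first one.

    spikes: float tensor of shape (T, B, N) with values in {0, 1},
            where T is time steps, B batch size, N number of neurons
            (an assumed layout for this sketch).
    Returns a scalar: the mean number of excess spikes per neuron.
    """
    total_spikes = spikes.sum(dim=0)                  # (B, N) spike count per neuron
    fired_at_least_once = (total_spikes > 0).float()  # 1 if the neuron fired at all
    excess = total_spikes - fired_at_least_once       # spikes beyond the first
    return excess.mean()

# Hypothetical usage: add the penalty to the task loss with a weighting factor,
# e.g. loss = task_loss + lambda_excess * excess_spike_penalty(recorded_spikes)

In practice, gradients would reach the spike counts through whatever surrogate-gradient mechanism the SNN simulator uses; the subtraction of the "fired at least once" indicator only shifts the count so that a neuron's first spike is not penalized.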
Files in this record:

Fontanini_2024_Neuromorph._Comput._Eng._4_034004.pdf
Access: open access
Type: Published version (PDF)
License: Creative Commons
Size: 1.22 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1287104
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0