
Training deep resistive networks with equilibrium propagation

We present a mathematical framework for learning called "equilibrium propagation" (EP). The EP framework is compatible with gradient-descent optimization, the workhorse of deep learning, but in EP, inference and gradient computation are achieved using the same physical laws, and the learning rule for each weight (trainable parameter) is local, thus opening a path toward energy-efficient deep learning. We show that EP can be used to train electrical circuits composed of voltage sources, variable resistors and diodes, a class of networks that we dub "deep resistive networks" (DRNs). We show that DRNs are universal function approximators: they can implement or approximate arbitrary input-output functions. We then present a fast algorithm to simulate DRNs (on classical computers), as well as simulations of DRNs trained by EP on MNIST. We argue that DRNs are closely related to deep Hopfield networks (DHNs), and we present simulations of DHNs trained by EP on CIFAR-10, CIFAR-100 and ImageNet 32x32. Altogether, we contend that DRNs and EP can guide the development of efficient processors for AI.
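
To make the EP procedure concrete, here is a minimal sketch of the standard two-phase EP update on a tiny layered Hopfield network: a free relaxation to equilibrium, a second relaxation with the output weakly nudged toward the target, and a local contrastive weight update computed from the two equilibria. All names, layer sizes, the hard-sigmoid activation, and hyperparameters (learning rate, nudging strength, Euler relaxation steps) are illustrative assumptions; the talk's DRN simulator and exact training setup are not reproduced here.

```python
# Illustrative sketch of one equilibrium propagation (EP) training step on a
# small continuous Hopfield network. All names and hyperparameters are
# assumptions for the purpose of this example.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.1, (n_in, n_hid))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_hid, n_out))  # hidden -> output weights

rho = lambda s: np.clip(s, 0.0, 1.0)     # hard-sigmoid activation

def relax(x, h, y_state, beta=0.0, target=None, steps=50, dt=0.1):
    """Settle the state variables to an (approximate) equilibrium of the
    energy function by explicit Euler descent, with the input x clamped."""
    for _ in range(steps):
        # -dE/ds for a layered Hopfield energy; the -s term is the leak.
        dh = -h + rho(x) @ W1 + rho(y_state) @ W2.T
        dy = -y_state + rho(h) @ W2
        if beta != 0.0:                  # nudged phase: add -beta * dC/dy
            dy += beta * (target - y_state)
        h = h + dt * dh
        y_state = y_state + dt * dy
    return h, y_state

def ep_step(x, target, lr=0.05, beta=0.5):
    global W1, W2
    h0, y0 = np.zeros(n_hid), np.zeros(n_out)
    # Phase 1 (free): relax with the input clamped and no nudging.
    h_free, y_free = relax(x, h0, y0, beta=0.0)
    # Phase 2 (nudged): relax again, starting from the free equilibrium,
    # with the output weakly pushed toward the target.
    h_nudge, y_nudge = relax(x, h_free, y_free, beta=beta, target=target)
    # Local contrastive rule: each weight update involves only the two
    # equilibrium activities of the units that weight connects.
    W1 += lr / beta * (np.outer(rho(x), rho(h_nudge))
                       - np.outer(rho(x), rho(h_free)))
    W2 += lr / beta * (np.outer(rho(h_nudge), rho(y_nudge))
                       - np.outer(rho(h_free), rho(y_free)))

x = rng.random(n_in)
target = np.array([1.0, 0.0])
for _ in range(100):
    ep_step(x, target)
```

Note the locality the abstract emphasizes: each weight update depends only on the pre- and post-synaptic activities at the two equilibria, with no separate backward pass, which is what makes EP a candidate for implementation directly in physical hardware such as DRNs.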


  • Date: 12/02/2024, 01:00 PM
  • Location: Regent Court, Sheffield City Centre, Sheffield, UK