Physical networks can develop tuned responses, or functions, by design, by evolution, or by learning via local rules. In all of these cases, tunable degrees of freedom characterizing internal interactions are modified to lower a cost penalizing deviations from desired outputs. An important class of such networks follows dynamics that minimize a global physical quantity, or Lyapunov function, with respect to physical degrees of freedom. In such networks, learning is a "double optimization" process in which two quantities, one defined by the task and the other prescribed by physics, are minimized with respect to different but coupled sets of variables. Here, we show how this learning process couples the high-dimensional "cost landscape" to the "physical landscape," linking the physical and cost Hessian matrices. Physical responses of trained networks to random perturbations thus reveal the functions to which they were tuned. Our results, illustrated using electrical networks with adaptable resistors, are generic to networks that perform tasks in the linear response regime.
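To make the "double optimization" concrete, below is a minimal Python sketch of a toy linear resistor network. The inner solve minimizes the dissipated power, which serves as the Lyapunov function for such networks, over the free node voltages; the outer loop lowers a quadratic task cost on one node's voltage by adjusting the edge conductances. The network topology, the target, the helper names (`solve_physics`, `cost`), and the finite-difference gradient descent are all illustrative assumptions; the paper's learning proceeds via local rules, which this sketch does not reproduce.

```python
# Sketch of "double optimization" in a linear resistor network (assumptions
# throughout): the physics minimizes dissipated power over free voltages,
# while learning tunes conductances to lower a task cost. Gradient descent
# here stands in for the local learning rules discussed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative network: nodes 0..5, edges with tunable conductances k > 0.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
k = rng.uniform(0.5, 1.5, len(edges))            # learning degrees of freedom

sources = {0: 1.0, 5: 0.0}                       # clamped boundary voltages
free = [n for n in range(6) if n not in sources]
target_node, target_value = 3, 0.4               # desired output (assumption)

def solve_physics(k):
    """Voltages minimizing the power P = 1/2 * sum_e k_e (V_i - V_j)^2."""
    L = np.zeros((6, 6))                         # weighted graph Laplacian
    for (i, j), ke in zip(edges, k):
        L[i, i] += ke; L[j, j] += ke
        L[i, j] -= ke; L[j, i] -= ke
    V = np.zeros(6)
    for n, v in sources.items():
        V[n] = v
    # Stationarity of P w.r.t. free voltages: L_ff V_f = -L_fs V_s.
    Lff = L[np.ix_(free, free)]
    rhs = -L[np.ix_(free, list(sources))] @ np.array(list(sources.values()))
    V[free] = np.linalg.solve(Lff, rhs)
    return V

def cost(k):
    """Task cost: squared error of the output node's relaxed voltage."""
    V = solve_physics(k)
    return 0.5 * (V[target_node] - target_value) ** 2

# Outer loop: crude finite-difference gradient descent on the conductances.
eta, eps = 0.5, 1e-6
for step in range(200):
    base = cost(k)
    grad = np.array([(cost(k + eps * np.eye(len(k))[e]) - base) / eps
                     for e in range(len(k))])
    k = np.maximum(k - eta * grad, 1e-3)         # keep conductances positive

V = solve_physics(k)
print(f"trained output V[{target_node}] = {V[target_node]:.4f} "
      f"(target {target_value}), cost = {cost(k):.2e}")
```

Each evaluation of `cost` re-solves the physics first, so the two minimizations are coupled exactly as the abstract describes: the task cost is only ever evaluated on states that already minimize the physical Lyapunov function.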

Stern, M., Guzman, M., Martins, F., Liu, A., & Balasubramanian, V. (2025). Physical Networks Become What They Learn. Phys. Rev. Lett., 134(14), 147402. doi:10.1103/PhysRevLett.134.147402