Probing Neural Networks With Finite Automata

Fredrik Dahlqvist will present and discuss the paper Finite state automata and simple recurrent networks (Cleeremans et al., 1989).

Abstract

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
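The prediction setup described in the abstract is easy to state concretely: at each step the network sees the current symbol together with a copy of its own hidden state from the previous step, and is trained to output the next symbol. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation: the toy grammar, symbol names, network size, and learning rate are all assumptions made for the example, and the grammar here is a made-up stand-in for the one used in the paper.

```python
# Minimal sketch (not the authors' code) of an Elman-style simple recurrent
# network trained to predict the next symbol of strings from a toy
# finite-state grammar. Grammar, symbols, and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-state grammar: state -> list of (emitted symbol, next state).
# State -1 marks the end of a string; "B" is a begin marker, "E" an end marker.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("T", 2)],
    3: [("E", -1)],
}
SYMBOLS = ["B", "T", "P", "S", "X", "V", "E"]
IDX = {s: i for i, s in enumerate(SYMBOLS)}
V = len(SYMBOLS)

def sample_string():
    """Walk the grammar from state 0, emitting one symbol per transition."""
    seq, state = ["B"], 0
    while state != -1:
        sym, state = GRAMMAR[state][rng.integers(len(GRAMMAR[state]))]
        seq.append(sym)
    return seq

def one_hot(sym):
    v = np.zeros(V)
    v[IDX[sym]] = 1.0
    return v

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Elman SRN: h(t) = sigmoid(W_xh x(t) + W_hh h(t-1) + b_h)
#            y(t) = softmax(W_hy h(t) + b_y)  -- a prediction of symbol t+1
H = 8                      # small hidden layer, illustrative choice
W_xh = rng.normal(0, 0.5, (H, V))
W_hh = rng.normal(0, 0.5, (H, H))
W_hy = rng.normal(0, 0.5, (V, H))
b_h, b_y, lr = np.zeros(H), np.zeros(V), 0.1

for _ in range(2000):
    seq = sample_string()
    h = np.zeros(H)                      # context units start at zero
    for t in range(len(seq) - 1):
        x, target = one_hot(seq[t]), one_hot(seq[t + 1])
        h_prev = h
        h = sigmoid(W_xh @ x + W_hh @ h_prev + b_h)
        y = softmax(W_hy @ h + b_y)
        # One-step gradient only: the copied context h_prev is treated as a
        # fixed input, so no error is propagated back through time.
        dy = y - target                  # cross-entropy gradient at the output
        dh = (W_hy.T @ dy) * h * (1 - h)
        W_hy -= lr * np.outer(dy, h);       b_y -= lr * dy
        W_xh -= lr * np.outer(dh, x);       b_h -= lr * dh
        W_hh -= lr * np.outer(dh, h_prev)

# After training, most probability mass should sit on the symbols the grammar
# actually allows next at each point in a string.
seq = sample_string()
h = np.zeros(H)
for t in range(len(seq) - 1):
    h = sigmoid(W_xh @ one_hot(seq[t]) + W_hh @ h + b_h)
    y = softmax(W_hy @ h + b_y)
    print(seq[t], "->", {s: round(float(p), 2) for s, p in zip(SYMBOLS, y) if p > 0.1})
```

Running the trained network over strings and inspecting the hidden activation vectors is the kind of probe the abstract alludes to: with a minimal number of hidden units, those patterns come to correspond to the nodes of the grammar.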

References

  1. Finite state automata and simple recurrent networks
     Axel Cleeremans, David Servan-Schreiber, and James L. McClelland
     Neural Computation, 1989


