    Neural Computation
    September 01 1989

    Finite State Automata and Simple Recurrent Networks

    In Special Collection: CogNet
    Axel Cleeremans
    Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA 15213 USA
    David Servan-Schreiber
    Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213 USA
    James L. McClelland
    Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA 15213 USA
    Received: March 06 1989
    Accepted: April 20 1989
    Online ISSN: 1530-888X
    Print ISSN: 0899-7667
    © 1989 Massachusetts Institute of Technology
    Neural Computation (1989) 1 (3): 372–381.
    Citation

    Axel Cleeremans, David Servan-Schreiber, James L. McClelland; Finite State Automata and Simple Recurrent Networks. Neural Comput 1989; 1 (3): 372–381. doi: https://doi.org/10.1162/neco.1989.1.3.372


      Abstract

      We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
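      To make the architecture described above concrete, here is a minimal NumPy sketch of an Elman-style simple recurrent network trained to predict the next symbol of strings drawn from a small Reber-style finite-state grammar. The transition table, network sizes, learning rate, and one-step gradient truncation are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symbols of a Reber-style grammar, used here for illustration.
ALPHABET = ["B", "T", "S", "X", "P", "V", "E"]
K = len(ALPHABET)
H = 3  # a deliberately small hidden layer, echoing the paper's minimal networks

# Finite-state grammar: each state has two outgoing arcs (symbol, next_state).
# This transition table is an illustrative reconstruction, not necessarily
# the exact grammar used in the paper.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def sample_string():
    """Generate one grammatical string, from start symbol B to end symbol E."""
    s, node = "B", 0
    while node != 5:
        sym, node = GRAMMAR[node][rng.integers(2)]
        s += sym
    return s + "E"

def one_hot(sym):
    v = np.zeros(K)
    v[ALPHABET.index(sym)] = 1.0
    return v

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Weights: input -> hidden, context (previous hidden) -> hidden, hidden -> output.
W_xh = rng.normal(0.0, 0.5, (K, H))
W_hh = rng.normal(0.0, 0.5, (H, H))
W_hy = rng.normal(0.0, 0.5, (H, K))

def train_step(string, lr=0.1):
    """One pass over a string, predicting each successive element.
    Gradients are truncated at one step: the context layer is treated as
    a fixed extra input, as in Elman's original training scheme."""
    h_prev = np.zeros(H)  # context units start at rest
    loss = 0.0
    for t in range(len(string) - 1):
        x, target = one_hot(string[t]), one_hot(string[t + 1])
        h = sigmoid(x @ W_xh + h_prev @ W_hh)  # hidden state at time t
        y = softmax(h @ W_hy)                  # predicted distribution over element t+1
        loss -= np.log(y @ target + 1e-12)     # cross-entropy on the true next symbol
        dy = y - target                        # combined softmax + cross-entropy gradient
        dh = (W_hy @ dy) * h * (1.0 - h)
        W_hy -= lr * np.outer(h, dy)
        W_xh -= lr * np.outer(x, dh)
        W_hh -= lr * np.outer(h_prev, dh)
        h_prev = h
    return loss / (len(string) - 1)

corpus = [sample_string() for _ in range(200)]
for epoch in range(100):
    avg = np.mean([train_step(s) for s in corpus])
print(f"mean per-symbol cross-entropy after training: {avg:.3f}")
```

      Because every non-final grammar state has exactly two outgoing arcs, the best the network can do is assign probability to the legal successors at each point; inspecting the hidden activation patterns of a trained network of this kind is how one would probe the correspondence with grammar nodes that the abstract describes.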

      This content is only available as a PDF.

