Connectionism

First published Sun May 18, 1997; substantive revision Fri Aug 16, 2019

Connectionism is a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks (also known as “neural networks” or “neural nets”). Neural networks are simplified models of the brain composed of large numbers of units (the analogs of neurons) together with weights that measure the strength of connections between the units. These weights model the effects of the synapses that link one neuron to another. Experiments on models of this kind have demonstrated an ability to learn such skills as face recognition, reading, and the detection of simple grammatical structure.

Philosophers have become interested in connectionism because it promises to provide an alternative to the classical theory of the mind: the widely held view that the mind is something akin to a digital computer processing a symbolic language. Exactly how and to what extent the connectionist paradigm constitutes a challenge to classicism has been a matter of hot debate in recent years.

1. A Description of Neural Networks

A neural network consists of a large number of units joined together in a pattern of connections. Units in a net are usually segregated into three classes: input units, which receive information to be processed; output units, where the results of the processing are found; and units in between called hidden units. If a neural net were to model the whole human nervous system, the input units would be analogous to the sensory neurons, the output units to the motor neurons, and the hidden units to all other neurons.

Here is an illustration of a simple neural net:

[Figure: a three-layer net drawn as three columns of units, with seven input units, four hidden units, and three output units; each unit in one column is connected by a line to every unit in the next column.]

Each input unit has an activation value that represents some feature external to the net. An input unit sends its activation value to each of the hidden units to which it is connected. Each of these hidden units calculates its own activation value depending on the activation values it receives from the input units. This signal is then passed on to output units or to another layer of hidden units. Those hidden units compute their activation values in the same way, and send them along to their neighbors. Eventually the signal at the input units propagates all the way through the net to determine the activation values at all the output units.

The pattern of activation set up by a net is determined by the weights, or strength of connections between the units. Weights may be either positive or negative. A negative weight represents the inhibition of the receiving unit by the activity of a sending unit. The activation value for each receiving unit is calculated according to a simple activation function. Activation functions vary in detail, but they all conform to the same basic plan. The function sums together the contributions of all sending units, where the contribution of a unit is defined as the weight of the connection between the sending and receiving units times the sending unit’s activation value. This sum is usually modified further, for example, by adjusting the activation sum to a value between 0 and 1 and/or by setting the activation to zero unless a threshold level for the sum is reached. Connectionists presume that cognitive functioning can be explained by collections of units that operate in this way. Since it is assumed that all the units calculate pretty much the same simple activation function, human intellectual accomplishments must depend primarily on the settings of the weights between the units.
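
To make the basic plan concrete, here is a minimal sketch in Python of the computation performed by a single receiving unit. The logistic squashing function and the simple threshold are illustrative assumptions only; as noted above, activation functions vary in detail.

```python
import numpy as np

def unit_activation(sending_activations, weights, threshold=0.0):
    """Activation of one receiving unit: sum each sending unit's activation
    times the weight of its connection, then squash the sum into (0, 1)."""
    total = np.dot(weights, sending_activations)
    if total < threshold:                  # optional threshold step
        return 0.0
    return 1.0 / (1.0 + np.exp(-total))    # logistic squashing into (0, 1)

# A receiving unit with three sending units, one of them inhibitory (negative weight).
print(unit_activation(np.array([0.9, 0.2, 0.7]), np.array([0.5, -1.2, 0.3])))
```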

The kind of net illustrated above is called a feed forward net. Activation flows directly from inputs to hidden units and then on to the output units. More realistic models of the brain would include many layers of hidden units, and recurrent connections that send signals back from higher to lower levels. Such recurrence is necessary in order to explain such cognitive features as short-term memory. In a feed forward net, repeated presentations of the same input produce the same output every time, but even the simplest organisms habituate to (or learn to ignore) repeated presentation of the same stimulus. Connectionists tend to avoid recurrent connections because little is understood about the general problem of training recurrent nets. However, Elman (1991) and others have made some progress with simple recurrent nets, where the recurrence is tightly constrained.

2. Neural Network Learning and Backpropagation

Finding the right set of weights to accomplish a given task is the central goal in connectionist research. Luckily, learning algorithms have been devised that can calculate the right weights for carrying out many tasks (see Hinton 1992 for an accessible review). These fall into two broad categories: supervised and unsupervised learning. Hebbian learning is the best known unsupervised form. As each input is presented to the net, weights between nodes that are active together are increased, while those weights connecting nodes that are not active together are decreased. This form of training is especially useful for building nets that can classify the input into useful categories. The most widely used supervised algorithm is called backpropagation. To use this method, one needs a training set consisting of many examples of inputs and their desired outputs for a given task. This external set of examples “supervises” the training process. If, for example, the task is to distinguish male from female faces, the training set might contain pictures of faces together with an indication of the sex of the person depicted in each one. A net that can learn this task might have two output units (indicating the categories male and female) and many input units, one devoted to the brightness of each pixel (tiny area) in the picture. The weights of the net to be trained are initially set to random values, and then members of the training set are repeatedly exposed to the net. The values for the input of a member are placed on the input units and the output of the net is compared with the desired output for this member. Then all the weights in the net are adjusted slightly in the direction that would bring the net’s output values closer to the values for the desired output. For example, when a male face is presented to the input units the weights are adjusted so that the value of the male output unit is increased and the value of the female output unit is decreased. After many repetitions of this process the net may learn to produce the desired output for each input in the training set. If the training goes well, the net may also have learned to generalize to the desired behavior for inputs and outputs that were not in the training set. For example, it may do a good job of distinguishing males from females in pictures that were never presented to it before.
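
The sketch below illustrates the two kinds of learning just described, using a toy version of the face-classification example. The network sizes, learning rates, and the use of a logistic activation with a squared-error measure are illustrative assumptions rather than details of any particular published model, and the “pictures” are just random numbers standing in for pixel brightnesses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hebbian_update(weights, pre, post, lr=0.01):
    """Unsupervised Hebbian step: strengthen weights between co-active units."""
    return weights + lr * np.outer(post, pre)

# Supervised learning by backpropagation on a toy "face" task:
# 16 pixel-brightness inputs, 8 hidden units, 2 outputs (male, female).
n_in, n_hid, n_out = 16, 8, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))   # hidden -> output weights

# Stand-in training set: random pixel patterns with made-up one-hot labels.
pictures = rng.random((20, n_in))
labels = np.eye(n_out)[rng.integers(0, n_out, size=20)]

lr = 0.5
for _ in range(1000):                             # many repeated presentations
    for x, target in zip(pictures, labels):
        h = sigmoid(W1 @ x)                       # hidden activations
        y = sigmoid(W2 @ h)                       # output activations
        # Compare the output with the desired output and propagate the error back.
        delta_out = (y - target) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        # Nudge every weight slightly in the error-reducing direction.
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)
```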

Training nets to model aspects of human intelligence is a fine art. Success with backpropagation and other connectionist learning methods may depend on quite subtle adjustment of the algorithm and the training set. Training typically involves hundreds of thousands of rounds of weight adjustment. Given the limitations of computers in the past, training a net to perform an interesting task took days or even weeks. More recently, the use of massively parallel dedicated processors (GPUs) has helped relieve these heavy computational burdens. But even so, some limitations of connectionist theories of learning remain to be faced. Humans (and many less intelligent animals) display an ability to learn from single examples; for example, a child shown a novel two-wheeled vehicle and told that it is called a “Segway” knows right away what a Segway is (Lake, Zaremba et al. 2015). Connectionist learning techniques such as backpropagation are far from explaining this kind of “one shot” learning.

3. Samples of What Neural Networks Can Do

Connectionists have made significant progress in demonstrating the power of neural networks to master cognitive tasks. Here are three well-known experiments that have encouraged connectionists to believe that neural networks are good models of human intelligence. One of the most attractive of these efforts is Sejnowski and Rosenberg’s 1987 work on NETtalk, a net that can read English text. The training set for NETtalk was a large database consisting of English text coupled with its corresponding phonetic output, written in a code suitable for use with a speech synthesizer. Tapes of NETtalk’s performance at different stages of its training make for very interesting listening. At first the output is random noise. Later, the net sounds like it is babbling, and later still as though it is speaking English double-talk (speech that is formed of sounds that resemble English words). At the end of training, NETtalk does a fairly good job of pronouncing the text given to it. Furthermore, this ability generalizes fairly well to text that was not presented in the training set.

Another influential early connectionist model was a net trained by Rumelhart and McClelland (1986) to predict the past tense of English verbs. The task is interesting because although most of the verbs in English (the regular verbs) form the past tense by adding the suffix “-ed”, many of the most frequently used verbs are irregular (“is” / “was”, “come” / “came”, “go” / “went”). The net was first trained on a set containing a large number of irregular verbs, and later on a set of 460 verbs containing mostly regulars. The net learned the past tenses of the 460 verbs in about 200 rounds of training, and it generalized fairly well to verbs not in the training set. It even showed a good appreciation of “regularities” to be found among the irregular verbs (“send” / “sent”, “build” / “built”; “blow” / “blew”, “fly” / “flew”). During learning, as the system was exposed to the training set containing more regular verbs, it had a tendency to overregularize, i.e., to combine irregular and regular forms (“break” / “broked”, instead of “break” / “broke”). This was corrected with more training. It is interesting to note that children are known to exhibit the same tendency to overregularize during language learning. However, there is hot debate over whether Rumelhart and McClelland’s net is a good model of how humans actually learn and process verb endings. For example, Pinker and Prince (1988) point out that the model does a poor job of generalizing to some novel regular verbs. They believe that this is a sign of a basic failing in connectionist models. Nets may be good at making associations and matching patterns, but they have fundamental limitations in mastering general rules such as the formation of the regular past tense. These complaints raise an important issue for connectionist modelers, namely whether nets can generalize properly to master cognitive tasks involving rules. Despite Pinker and Prince’s objections, many connectionists believe that generalization of the right kind is still possible (Niklasson & van Gelder 1994).

Elman’s 1991 work on nets that can appreciate grammatical structure has important implications for the debate about whether neural networks can learn to master rules. Elman trained a simple recurrent network to predict the next word in a large corpus of English sentences. The sentences were formed from a simple vocabulary of 23 words using a subset of English grammar. The grammar, though simple, posed a hard test for linguistic awareness. It allowed unlimited formation of relative clauses while demanding agreement between the head noun and the verb. So for example, in the sentence

Any man that chases dogs that chase cats … runs.

the singular “man” must agree with the verb “runs” despite the intervening plural nouns (“dogs”, “cats”), which might cause the selection of “run”. One of the important features of Elman’s model is the use of recurrent connections. The values at the hidden units are saved in a set of so-called context units, to be sent back to the input level for the next round of processing. This looping back from hidden to input layers provides the net with a rudimentary form of memory of the sequence of words in the input sentence. Elman’s nets displayed an appreciation of the grammatical structure of sentences that were not in the training set. The net’s command of syntax was measured in the following way. Predicting the next word in an English sentence is, of course, an impossible task. However, these nets succeeded, at least by the following measure. At a given point in an input sentence, the output units for words that are grammatical continuations of the sentence at that point should be active and output units for all other words should be inactive. After intensive training, Elman was able to produce nets that displayed perfect performance on this measure, including on sentences not in the training set. The work of Christiansen and Chater (1999a) and Morris, Cottrell, and Elman (2000) extends this research to more complex grammars. For a broader view of progress in connectionist natural language processing see the summaries by Christiansen and Chater (1999b) and Rohde and Plaut (2003).
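
A minimal sketch of the recurrent step on which Elman’s architecture relies is given below. The choice of tanh and softmax nonlinearities, the absence of bias terms, and the particular vector sizes are simplifying assumptions; the point of the illustration is only the feedback of the hidden state through the context units.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_step(word_vec, context, W_in, W_ctx, W_out):
    """One step of a simple recurrent (Elman) network: the hidden state
    depends on the current word and on the context units, which hold a copy
    of the previous hidden state (the net's rudimentary memory)."""
    hidden = np.tanh(W_in @ word_vec + W_ctx @ context)
    next_word_scores = softmax(W_out @ hidden)   # activity over the vocabulary
    return next_word_scores, hidden              # hidden is copied to context

# Process a sentence word by word, feeding the hidden state back each time.
rng = np.random.default_rng(0)
vocab, n_hidden = 23, 10
W_in = rng.normal(size=(n_hidden, vocab))
W_ctx = rng.normal(size=(n_hidden, n_hidden))
W_out = rng.normal(size=(vocab, n_hidden))

context = np.zeros(n_hidden)
for word_index in [3, 17, 5]:                    # indices of words in a sentence
    word_vec = np.eye(vocab)[word_index]         # one-hot coding of the word
    scores, context = srn_step(word_vec, context, W_in, W_ctx, W_out)
```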

Although this performance is impressive, there is still a long way to go in training nets that can process a language like English. Furthermore, doubts have been raised about the significance of Elman’s results. For example, Marcus (1998, 2001) argues that Elman’s nets are not able to generalize this performance to sentences formed from a novel vocabulary. This, he claims, is a sign that connectionist models merely associate instances, and are unable to truly master abstract rules. On the other hand, Phillips (2002) argues that classical architectures are no better off in this respect. The purported inability of connectionist models to generalize performance in this way has become an important theme in the systematicity debate. (See Section 7 below.)

A somewhat different concern about the adequacy of connectionist language processing focuses on tasks that mimic infant learning of simple artificial grammars. Data on reaction time confirms that infants can learn to distinguish well-formed from ill-formed sentences in a novel language created by experimenters. Shultz and Bale (2001) report success in training neural nets on the same task. Vilcu and Hadley (2005) object that this work fails to demonstrate true acquisition of the grammar, but see Shultz and Bale (2006) for a detailed reply.

4. Strengths and Weaknesses of Neural Network Models

Philosophers are interested in neural networks because they may provide a new framework for understanding the nature of the mind and its relation to the brain (Rumelhart & McClelland 1986: Chapter 1). Connectionist models seem particularly well matched to what we know about neurology. The brain is indeed a neural net, formed from massively many units (neurons) and their connections (synapses). Furthermore, several properties of neural network models suggest that connectionism may offer an especially faithful picture of the nature of cognitive processing. Neural networks exhibit robust flexibility in the face of the challenges posed by the real world. Noisy input or destruction of units causes graceful degradation of function. The net’s response is still appropriate, though somewhat less accurate. In contrast, noise and loss of circuitry in classical computers typically result in catastrophic failure. Neural networks are also particularly well adapted for problems that require the resolution of many conflicting constraints in parallel. There is ample evidence from research in artificial intelligence that cognitive tasks such as object recognition, planning, and even coordinated motion present problems of this kind. Although classical systems are capable of multiple constraint satisfaction, connectionists argue that neural network models provide much more natural mechanisms for dealing with such problems.

Over the centuries, philosophers have struggled to understand how our concepts are defined. It is now widely acknowledged that trying to characterize ordinary notions with necessary and sufficient conditions is doomed to failure. Exceptions to almost any proposed definition are always waiting in the wings. For example, one might propose that a tiger is a large black and orange feline. But then what about albino tigers? Philosophers and cognitive psychologists have argued that categories are delimited in more flexible ways, for example via a notion of family resemblance or similarity to a prototype. Connectionist models seem especially well suited to accommodating graded notions of category membership of this kind. Nets can learn to appreciate subtle statistical patterns that would be very hard to express as hard and fast rules. Connectionism promises to explain flexibility and insight found in human intelligence using methods that cannot be easily expressed in the form of exception free principles (Horgan & Tienson 1989, 1990), thus avoiding the brittleness that arises from standard forms of symbolic representation.

Despite these intriguing features, there are some weaknesses in connectionist models that bear mentioning. First, most neural network research abstracts away from many interesting and possibly important features of the brain. For example, connectionists usually do not attempt to explicitly model the variety of different kinds of brain neurons, nor the effects of neurotransmitters and hormones. Furthermore, it is far from clear that the brain contains the kind of reverse connections that would be needed if the brain were to learn by a process like backpropagation, and the immense number of repetitions needed for such training methods seems far from realistic. Attention to these matters will probably be necessary if convincing connectionist models of human cognitive processing are to be constructed. A more serious objection must also be met. It is widely felt, especially among classicists, that neural networks are not particularly good at the kind of rule based processing that is thought to undergird language, reasoning, and higher forms of thought. (For a well known critique of this kind see Pinker and Prince 1988.) We will discuss the matter further when we turn to the systematicity debate.

There has been a cottage industry in developing more biologically plausible algorithms for error-driven training that can be shown to approximate the results of backpropagation without its implausible features. Prominent examples include O’Reilly’s Generalized Error Recirculation algorithm (O’Reilly 1996), using randomized error signals rather than error signals individually computed for each neuron (Lillicrap, Cownden, Tweed, & Akerman 2016), and modifying weights using spike-timing-dependent plasticity, the latter of which has been a favorite of prominent figures in deep learning research (Bengio et al. 2017). (For more on deep learning see section 11 below.)

5. The Shape of the Controversy between Connectionists and Classicists

The last forty years have been dominated by the classical view that (at least higher) human cognition is analogous to symbolic computation in digital computers. On the classical account, information is represented by strings of symbols, just as we represent data in computer memory or on pieces of paper. The connectionist claims, on the other hand, that information is stored non-symbolically in the weights, or connection strengths, between the units of a neural net. The classicist believes that cognition resembles digital processing, where strings are produced in sequence according to the instructions of a (symbolic) program. The connectionist views mental processing as the dynamic and graded evolution of activity in a neural net, each unit’s activation depending on the connection strengths and activity of its neighbors.

On the face of it, these views seem very different. However, many connectionists do not view their work as a challenge to classicism, and some overtly support the classical picture. So-called implementational connectionists seek an accommodation between the two paradigms. They hold that the brain’s net implements a symbolic processor. True, the mind is a neural net; but it is also a symbolic processor at a higher and more abstract level of description. So the role for connectionist research, according to the implementationalist, is to discover how the machinery needed for symbolic processing can be forged from neural network materials, so that classical processing can be reduced to the neural network account.

However, many connectionists resist the implementational point of view. Such radical connectionists claim that symbolic processing was a bad guess about how the mind works. They complain that classical theory does a poor job of explaining graceful degradation of function, holistic representation of data, spontaneous generalization, appreciation of context, and many other features of human intelligence which are captured in their models. The failure of classical programming to match the flexibility and efficiency of human cognition is by their lights a symptom of the need for a new paradigm in cognitive science. So radical connectionists would eliminate symbolic processing from cognitive science forever.

The controversy between radical and implementational connectionists is complicated by the invention of what are called hybrid connectionist architectures. Here elements of classical symbolic processing are included in neural nets (Wermter & Sun 2000). For example, Miikkulainen (1993) champions a complex collection of neural net modules that share data coded in activation patterns. Since one of the modules acts as a memory, the system taken as a whole resembles a classical processor with separate mechanisms for storing and operating on digital “words”. Smolensky (1990) is famous for inventing so-called tensor product methods for simulating the process of variable binding, where symbolic information is stored at and retrieved from known “locations”. More recently, Eliasmith (2013) has proposed complex and massive architectures that use what are called semantic pointers, which exhibit features of classical variable binding. Once hybrid architectures such as these are on the table, it becomes more difficult to classify a given connectionist model as radical or merely implementational. This opens the interesting prospect that whether symbolic processing is actually present in the human brain may turn out to be a matter of degree.

The disagreement concerning the degree to which human cognition involves symbolic processing is naturally embroiled with the innateness debate—whether higher level abilities such as language and reasoning are part of the human genetic endowment, or whether they are learned. The success of connectionist models at learning tasks starting from randomly chosen weights gives heart to empiricists, who would think that the infant brain is able to construct intelligence from perceptual input using a simple learning mechanism (Elman et al. 1996). On the other hand, nativists in the rationalist tradition argue that at least for grammar-based language, the poverty of perceptual stimulus (Chomsky 1965: 58) entails the existence of a genetically determined mechanism tailored to learning grammar. However, the alignment between connectionism and non-nativism is not so clear-cut. There is no reason that connectionist models cannot be interpreted from a nativist point of view, where the ongoing “learning” represents the process of evolutionary refinement from generation to generation of a species. The idea that the human brain has domain specific knowledge that is genetically determined can be accommodated in the connectionist paradigm by biasing the initial weights of the models to make that knowledge easy or trivial to learn. Connectionist research makes best contact with the innateness debate by providing a new strategy for disarming poverty of stimulus arguments. Nativists argue that association of ideas, the mechanism for learning proposed by the traditional empiricist, is too slender a reed to support the development of higher level cognitive abilities. They suppose that innate mechanisms are essential for learning (for example) a grammar of English from a child’s linguistic input, because the statistical regularities available to “mere association” massively underdetermine that grammar. Connectionism could support an empiricism here by providing a proof-of-concept that such structured knowledge can be learned from inputs available to humans using only learning mechanisms found in non-classical architectures. Of course it is too soon to tell whether this promise can be realized.

6. Connectionist Representation

Connectionist models provide a new paradigm for understanding how information might be represented in the brain. A seductive but naive idea is that single neurons (or tiny neural bundles) might be devoted to the representation of each thing the brain needs to record. For example, we may imagine that there is a grandmother neuron that fires when we think about our grandmother. However, such local representation is not likely. There is good evidence that thoughts of our grandmother involve complex patterns of activity distributed across relatively large parts of cortex.

It is interesting to note that distributed, rather than local, representations on the hidden units are the natural products of connectionist training methods. The activation patterns that appear on the hidden units while NETtalk processes text serve as an example. Analysis reveals that the net learned to represent such categories as consonants and vowels, not by creating one unit active for consonants and another for vowels, but rather by developing two different characteristic patterns of activity across all the hidden units.

Given the expectations formed from our experience with local representation on the printed page, distributed representation seems both novel and difficult to understand. But the technique exhibits important advantages. For example, distributed representations (unlike symbols stored in separate fixed memory locations) remain relatively well preserved when parts of the model are destroyed or overloaded. More importantly, since representations are coded in patterns rather than in the firings of individual units, relationships between representations are coded in the similarities and differences between these patterns. So the internal properties of the representation carry information on what it is about (Clark 1993: 19). In contrast, local representation is conventional. No intrinsic properties of the representation (a unit’s firing) determine its relationships to the other symbols. This self-reporting feature of distributed representations promises to resolve a philosophical conundrum about meaning. In a symbolic representational scheme, all representations are composed out of symbolic atoms (like words in a language). Meanings of complex symbol strings may be defined by the way they are built up out of their constituents, but what fixes the meanings of the atoms?

Connectionist representational schemes provide an end run around the puzzle by simply dispensing with atoms. Every distributed representation is a pattern of activity across all the units, so there is no principled way to distinguish between simple and complex representations. To be sure, representations are composed out of the activities of the individual units. But none of these “atoms” codes for any symbol. The representations are sub-symbolic in the sense that analysis into their components leaves the symbolic level behind.

The sub-symbolic nature of distributed representation provides a novel way to conceive of information processing in the brain. If we model the activity of each neuron with a number, then the activity of the whole brain can be given by a giant vector (or list) of numbers, one for each neuron. Both the brain’s input from sensory systems and its output to individual muscle neurons can also be treated as vectors of the same kind. So the brain amounts to a vector processor, and the problem of psychology is transformed into questions about which operations on vectors account for the different aspects of human cognition.

Sub-symbolic representation has interesting implications for the classical hypothesis that the brain must contain symbolic representations that are similar to sentences of a language. This idea, often referred to as the language of thought (or LOT) thesis, may be challenged by the nature of connectionist representations. It is not easy to say exactly what the LOT thesis amounts to, but van Gelder (1990) offers an influential and widely accepted benchmark for determining when the brain should be said to contain sentence-like representations. It is that when a representation is tokened one thereby tokens the constituents of that representation. For example, if I write “John loves Mary” I have thereby written the sentence’s constituents: “John”, “loves”, and “Mary”. Distributed representations for complex expressions like “John loves Mary” can be constructed that do not contain any explicit representation of their parts (Smolensky 1990). The information about the constituents can be extracted from the representations, but neural network models do not need to explicitly extract this information themselves in order to process it correctly (Chalmers 1990). This suggests that neural network models serve as counterexamples to the idea that the language of thought is a prerequisite for human cognition. However, the matter is still a topic of lively debate (Fodor 1997).
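
A toy version of Smolensky’s tensor product idea makes the point vivid. The sketch below assumes orthonormal role vectors for simplicity (Smolensky’s scheme is more general): the representation of “John loves Mary” is a single distributed pattern in which no part is itself a token of “John”, “loves”, or “Mary”, yet each constituent can be recovered by unbinding with the appropriate role vector.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50

# Filler vectors for words and orthonormal role vectors for syntactic positions.
fillers = {w: rng.normal(size=dim) for w in ["John", "loves", "Mary"]}
roles = {r: v for r, v in zip(["subject", "verb", "object"], np.eye(3))}

# Tensor-product representation of "John loves Mary": one distributed pattern
# in which no single component is itself a symbol for any constituent.
sentence = sum(np.outer(fillers[w], roles[r])
               for w, r in [("John", "subject"), ("loves", "verb"), ("Mary", "object")])

# The constituents are not tokened explicitly, but they can be extracted
# by "unbinding" the pattern with a role vector.
recovered_subject = sentence @ roles["subject"]
print(np.allclose(recovered_subject, fillers["John"]))   # True
```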

The novelty of distributed and superimposed connectionist information storage naturally causes one to wonder about the viability of classical notions of symbolic computation in describing the brain. Ramsey (1997) argues that though we may attribute symbolic representations to neural nets, those attributions do not figure in legitimate explanations of the model’s behavior. This claim is important because the classical account of cognitive processing (and folk intuition) presumes that representations play an explanatory role in understanding the mind. It has been widely thought that cognitive science requires, by its very nature, explanations that appeal to representations (Von Eckardt 2003). If Ramsey is right, the point may cut in two different ways. Some may use it to argue for a new and non-classical understanding of the mind, while others would use it to argue that connectionism is inadequate since it cannot explain what it must. However, Haybron (2000) argues against Ramsey that there is ample room for representations with explanatory role in radical connectionist architectures. Roth (2005) makes the interesting point that contrary to first impressions, it may also make perfect sense to explain a net’s behavior by reference to a computer program, even if there is no way to discriminate a sequence of steps of the computation through time.

The debate concerning the presence of classical representations and a language of thought has been clouded by lack of clarity in defining what should count as the representational “vehicles” in distributed neural models. Shea (2007) makes the point that the individuation of distributed representations should be defined by the way activation patterns on the hidden units cluster together. It is the relationships between clustering regions in the space of possible activation patterns that carry representational content, not the activations themselves, nor the collection of units responsible for the activation. On this understanding, prospects are improved for locating representational content in neural nets: content that can be compared across nets of different architectures, that is causally involved in processing, and that overcomes some objections to holistic accounts of meaning.

In a series of papers Horgan and Tienson (1989, 1990) have championed a view called representations without rules. According to this view classicists are right to think that human brains (and good connectionist models of them) contain explanatorily robust representations; but they are wrong to think that those representations enter into hard and fast rules like the steps of a computer program. The idea that connectionist systems may follow graded or approximate regularities (“soft laws” as Horgan and Tienson call them) is intuitive and appealing. However, Aizawa (1994) argues that given an arbitrary neural net with a representation level description, it is always possible to outfit it with hard and fast representation-level rules. Guarini (2001) responds that if we pay attention to notions of rule following that are useful to cognitive modeling, Aizawa’s constructions will seem beside the point.

7. The Systematicity Debate

The major points of controversy in the philosophical literature on connectionism have to do with whether connectionists provide a viable and novel paradigm for understanding the mind. One complaint is that connectionist models are only good at processing associations. But such tasks as language and reasoning cannot be accomplished by associative methods alone and so connectionists are unlikely to match the performance of classical models at explaining these higher-level cognitive abilities. However, it is a simple matter to prove that neural networks can do anything that symbolic processors can do, since nets can be constructed that mimic a computer’s circuits. So the objection cannot be that connectionist models are unable to account for higher cognition; it is rather that they can do so only if they implement the classicist’s symbolic processing tools. Implementational connectionism may succeed, but radical connectionists will never be able to account for the mind.

Fodor and Pylyshyn’s often-cited paper (1988) launches a debate of this kind. They identify a feature of human intelligence called systematicity which they feel connectionists cannot explain. The systematicity of language refers to the fact that the ability to produce/understand/think some sentences is intrinsically connected to the ability to produce/understand/think others of related structure. For example, no one with a command of English who understands “John loves Mary” can fail to understand “Mary loves John.” From the classical point of view, the connection between these two abilities can easily be explained by assuming that masters of English represent the constituents (“John”, “loves” and “Mary”) of “John loves Mary” and compute its meaning from the meanings of these constituents. If this is so, then understanding a novel sentence like “Mary loves John” can be accounted for as another instance of the same symbolic process. In a similar way, symbolic processing would account for the systematicity of reasoning, learning and thought. It would explain why there are no people who are capable of concluding P from P & (Q & R), but incapable of concluding P from P & Q, why there are no people capable of learning to prefer a red cube to a green square who cannot learn to prefer a green cube to a red square, and why there isn’t anyone who can think that John loves Mary who can’t also think that Mary loves John.

Fodor and McLaughlin (1990) argue in detail that connectionists do not account for systematicity. Although connectionist models can be trained to be systematic, they can also be trained, for example, to recognize “John loves Mary” without being able to recognize “Mary loves John.” Since connectionism does not guarantee systematicity, it does not explain why systematicity is found so pervasively in human cognition. Systematicity may exist in connectionist architectures, but where it exists, it is no more than a lucky accident. The classical solution is much better, because in classical models, pervasive systematicity comes for free.

The charge that connectionist nets are disadvantaged in explaining systematicity has generated a lot of interest. Chalmers (1993) points out that Fodor and Pylyshyn’s argument proves too much, for it entails that all neural nets, even those that implement a classical architecture, do not exhibit systematicity. Given the uncontroversial conclusion that the brain is a neural net, it would follow that systematicity is impossible in human thought. Another often mentioned point of rebuttal (Aizawa 1997b; Matthews 1997; Hadley 1997b) is that classical architectures do no better at explaining systematicity. There are also classical models that can be programmed to recognize “John loves Mary” without being able to recognize “Mary loves John,” for this depends on exactly which symbolic rules govern the classical processing. The point is that neither the use of connectionist architecture alone nor the use of classical architecture alone enforces a strong enough constraint to explain pervasive systematicity. In both architectures, further assumptions about the nature of the processing must be made to ensure that “Mary loves John” and “John loves Mary” are treated alike.

A discussion of this point should mention Fodor and McLaughlin’s requirement that systematicity be explained as a matter of nomic necessity, that is, as a matter of natural law. The complaint against connectionists is that while they may implement systems that exhibit systematicity, they will not have explained it unless it follows from their models as a nomic necessity. However, the demand for nomic necessity is a very strong one, and one that classical architectures clearly cannot meet either. So the only tactic for securing a telling objection to connectionists along these lines would be to weaken the requirement on the explanation of systematicity to one which classical architectures can and connectionists cannot meet. A convincing case of this kind has yet to be made.

As the systematicity debate has evolved, attention has been focused on defining the benchmarks that would answer Fodor and Pylyshyn’s challenge. Hadley (1994a, 1994b) distinguishes three brands of systematicity. Connectionists have clearly demonstrated the weakest of these by showing that neural nets can learn to correctly recognize novel sequences of words (e.g., “Mary loves John”) that were not in the training set. However, Hadley claims that a convincing rebuttal must demonstrate strong systematicity, or better, strong semantical systematicity. Strong systematicity would require (at least) that “Mary loves John” be recognized even if “Mary” never appears in the subject position in any sentence in the training set. Strong semantical systematicity would require as well that the net show abilities at correct semantical processing of the novel sentences rather than merely distinguishing grammatical from ungrammatical forms. Niklasson and van Gelder (1994) have claimed success at strong systematicity, though Hadley complains that this is at best a borderline case. Hadley and Hayward (1997) tackle strong semantical systematicity, but by Hadley’s own admission it is not clear that they have avoided the use of a classical architecture. Boden and Niklasson (2000) claim to have constructed a model that meets at least the spirit of strong semantical systematicity, but Hadley (2004) argues that even strong systematicity has not been demonstrated there. Whether one takes a positive or a negative view of these attempts, it is safe to say that no one has met the challenge of providing a neural net capable of learning complex semantical processing that generalizes to a full range of truly novel inputs.

Research on nets that clearly demonstrate strong systematicity has continued. Jansen and Watter (2012) provide a good summary of more recent efforts along these lines, and propose an interesting basis for solving the problem. They use a more complex architecture that combines unsupervised self-organizing maps with features of simple recurrent nets. However, the main innovation is to allow codes for the words being processed to represent sensory-motor features of what the words represent. Once trained, their nets displayed very good accuracy in distinguishing the grammatical features of sentences whose words never even appeared in the training set. This may appear to be cheating since the word codes might surreptitiously represent grammatical categories, or at least they may unfairly facilitate learning those categories. Jansen and Watter note, however, that the sensory-motor features of what a word represents are apparent to a child who has just acquired a new word, and so that information is not off-limits in a model of language learning. They make the interesting observation that a solution to the systematicity problem may require including sources of environmental information that have so far been ignored in theories of language learning. This work complicates the systematicity debate, since it opens a new worry about what information resources are legitimate in responding to the challenge. However, this reminds us that architecture alone (whether classical or connectionist) is not going to solve the systematicity problem in any case, so the interesting questions concern what sources of supplemental information are needed to make the learning of grammar possible.

Kent Johnson (2004) argues that the whole systematicity debate is misguided. Attempts at carefully defining the systematicity of language or thought leave us with either trivialities or falsehoods. Connectionists surely have explaining to do, but Johnson maintains that it is fruitless to view their burden under the rubric of systematicity. Aizawa (2014) also suggests the debate is no longer germane given the present climate in cognitive science. What is needed instead is the development of neurally plausible connectionist models capable of processing a language with a recursive syntax, which react immediately to the introduction of new items in the lexicon without introducing the features of classical architecture. The “systematicity” debate may already have moved in the direction Johnson advises, for Hadley’s demand for strong semantical systematicity may be thought of as the requirement that connectionists exhibit success in that direction.

Recent work (Loula, Baroni, & Lake 2018) sheds new light on the controversy. Here recurrent neural nets were trained to interpret complex commands in a simple language that includes primitives such as “jump”, “walk”, “left”, “right”, “opposite” and “around”. “Opposite” is interpreted as a request to perform a command twice, and “around” to do so four times. So “jump around left” requests a left jump four times. The authors report that their nets showed very accurate generalization at tasks that qualify for demonstrating strong semantic systematicity. The nets correctly parsed commands in the test set containing “jump around right” even though this phrase never appeared in the training set. Nevertheless, the nets’ failures at more challenging tasks point to limitations in their ability to generalize in ways that would demonstrate genuine systematicity. The nets exhibited very poor performance when commands in the test set were longer (or even shorter) than those presented in the training set. So they appeared unable to spontaneously compose the meaning of complex expressions from the meanings of their parts. New research is needed to understand the nature of these failures, whether they can be overcome in non-classical architectures, and the extent to which humans would exhibit similar mistakes under analogous circumstances.
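
To make the task concrete, here is a hypothetical reconstruction of the ground-truth semantics the nets had to learn, based only on the examples given above (the actual language used by Loula, Baroni, and Lake is richer than this). A net that is systematic in the relevant sense would have to compute something equivalent for held-out combinations such as “jump around right”.

```python
def interpret(command):
    """Compositional semantics for the toy commands described above:
    "opposite" repeats the action twice, "around" repeats it four times."""
    words = command.split()
    action, repeats, direction = words[0], 1, ""
    for w in words[1:]:
        if w == "opposite":
            repeats = 2
        elif w == "around":
            repeats = 4
        elif w in ("left", "right"):
            direction = w
    return [f"{direction} {action}".strip()] * repeats

print(interpret("jump around left"))   # four left jumps
print(interpret("jump around right"))  # the same rule applied to the held-out phrase
```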

It has been almost thirty years since the systematicity debate first began, with over 3,000 citations to Fodor and Pylyshyn’s original paper. So this brief account is necessarily incomplete. Aizawa (2003) provides an excellent view of the literature, and Calvo and Symons (2014) serves as another more recent resource.

8. Connectionism and Semantic Similarity

One of the attractions of distributed representations in connectionist models is that they suggest a solution to the problem of providing a theory of how brain states could have meaning. The idea is that the similarities and differences between activation patterns along different dimensions of neural activity record semantical information. So the similarity properties of neural activations provide intrinsic properties that determine meaning. However, when it comes to compositional linguistic representations, Fodor and Lepore (1992: Ch. 6) challenge similarity-based accounts on two fronts. The first problem is that human brains presumably vary significantly in the number of their neurons and in the connections between them. Although it is straightforward to define similarity measures on two nets that contain the same number of units, it is harder to see how this can be done when the basic architectures of two nets differ. The second problem Fodor and Lepore cite is that even if similarity measures for meanings can be successfully crafted, they are inadequate to the task of meeting the desiderata which a theory of meaning must satisfy.

Churchland (1998) shows that the first of these two objections can be met. Citing the work of Laakso and Cottrell (2000), he explains how similarity measures between activation patterns in nets with radically different structures can be defined. Not only that, Laakso and Cottrell show that nets of different structures trained on the same task develop activation patterns which are strongly similar according to the measures they recommend. This offers hope that empirically well-defined measures of similarity of concepts and thoughts across different individuals might be forged.
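
One natural way to compare nets with different numbers of hidden units, in the spirit of Laakso and Cottrell’s proposal (the sketch below illustrates the general idea rather than their exact procedure), is to correlate the pairwise distances between the activation patterns each net produces for the same set of stimuli:

```python
import numpy as np

def pairwise_distances(acts):
    """Distances between every pair of activation patterns (rows of acts)."""
    diffs = acts[:, None, :] - acts[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    return d[np.triu_indices(len(acts), k=1)]      # upper triangle as a vector

def second_order_similarity(acts_a, acts_b):
    """Correlate the two nets' distance structures over the same stimuli."""
    return np.corrcoef(pairwise_distances(acts_a), pairwise_distances(acts_b))[0, 1]

# Hidden-layer activations of two differently sized nets for 12 shared stimuli.
rng = np.random.default_rng(0)
stimuli = rng.random((12, 5))
acts_a = np.tanh(stimuli @ rng.normal(size=(5, 20)))   # net A: 20 hidden units
acts_b = np.tanh(stimuli @ rng.normal(size=(5, 7)))    # net B: 7 hidden units
print(second_order_similarity(acts_a, acts_b))
```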

On the other hand, the development of a traditional theory of meaning based on similarity faces severe obstacles (Fodor & Lepore 1999), for such a theory would be required to assign sentences truth conditions based on an analysis of the meaning of their parts, and it is not clear that similarity alone is up to such tasks as fixing denotation in the way a standard theory demands. However, most connectionists who promote similarity-based accounts of meaning reject many of the presuppositions of standard theories. They hope to craft a working alternative which either rejects or modifies those presuppositions while still being faithful to the data on human linguistic abilities.

Calvo Garzón (2003) complains that there are reasons to think that connectionists must fail, for Churchland’s response has no answer to the collateral information challenge. The problem is that the measured similarities between activation patterns for a concept (say: grandmother) in two human brains are guaranteed to be very low, because two people’s (collateral) information about their grandmothers (name, appearance, age, character) is going to be very different. If concepts are defined by everything we know, then the measures for activation patterns of our concepts are bound to be far apart. This is a truly deep problem for any theory that hopes to define meaning by functional relationships between brain states. Philosophers of many stripes must struggle with this problem. Given the lack of a successfully worked out theory of concepts in either the traditional or the connectionist paradigm, it is only fair to leave the question for future research.

9. Connectionism and the Elimination of Folk Psychology

Another important application of connectionist research to philosophical debate about the mind concerns the status of folk psychology. Folk psychology is the conceptual structure that we spontaneously apply to understanding and predicting human behavior. For example, knowing that John desires a beer and that he believes that there is one in the refrigerator allows us to explain why John just went into the kitchen. Such knowledge depends crucially on our ability to conceive of others as having desires and goals, plans for satisfying them, and beliefs to guide those plans. The idea that people have beliefs, plans and desires is a commonplace of ordinary life; but does it provide a faithful description of what is actually to be found in the brain?

Its defenders will argue that folk psychology is too good to be false (Fodor 1988: Ch. 1). What more can we ask for the truth of a theory than that it provides an indispensable framework for successful negotiations with others? On the other hand, eliminativists will respond that the usefulness and widespread application of a conceptual scheme does not argue for its truth (Churchland 1989: Ch. 1). Ancient astronomers found the notion of celestial spheres useful (even essential) to the conduct of their discipline, but now we know that there are no celestial spheres. From the eliminativists’ point of view, an allegiance to folk psychology, like allegiance to folk (Aristotelian) physics, stands in the way of scientific progress. A viable psychology may require as radical a revolution in its conceptual foundations as is found in quantum mechanics.

Eliminativists are interested in connectionism because it promises to provide a conceptual foundation that might replace folk psychology. For example, Ramsey, Stich, and Garon (1991) have argued that certain feed forward nets show that simple cognitive tasks can be performed without employing features that could correspond to beliefs, desires, and plans. Presuming that such nets are faithful to how the brain works, concepts of folk psychology fare no better than do celestial spheres. Whether connectionist models undermine folk psychology in this way is still controversial. There are two main lines of response to the claim that connectionist models support eliminativist conclusions. One objection is that the models used by Ramsey et al. are feed forward nets, which are too weak to explain some of the most basic features of cognition, such as short-term memory. Ramsey et al. have not shown that beliefs and desires must be absent in a class of nets adequate for human cognition. A second line of rebuttal challenges the claim that features corresponding to beliefs and desires are necessarily absent even in the feed forward nets at issue (Von Eckardt 2005).

The question is complicated further by disagreements about the nature of folk psychology. Many philosophers treat the beliefs and desires postulated by folk psychology as brain states with symbolic contents. For example, the belief that there is a beer in the refrigerator is thought to be a brain state that contains symbols corresponding to beer and a refrigerator. From this point of view, the fate of folk psychology is strongly tied to the symbolic processing hypothesis. So if connectionists can establish that brain processing is essentially non-symbolic, eliminativist conclusions will follow. On the other hand, some philosophers do not think folk psychology is essentially symbolic, and some would even challenge the idea that folk psychology is to be treated as a theory in the first place. Under this conception, it is much more difficult to forge links between results in connectionist research and the rejection of folk psychology.

10. Predictive Coding Models of Cognition

As connectionist research has matured from its “Golden Age” in the 1980s, the main paradigm has radiated into a number of distinct approaches. Two important trends worth mentioning are predictive coding and deep learning (which will be covered in the following section). Predictive coding is a well-established information processing tool with a wide range of applications. It is useful, for example, in compressing the size of data sets. Suppose you wish to transmit a picture of a landscape with a blue sky. Since most of the pixels in the top half of your image are roughly the same shade, it is very inefficient to record the color value (say Red: 46 Green: 78 Blue: FF in hexadecimal) over and over again for each pixel in the top half of the image. Since the value of one pixel strongly predicts the value of its neighbor, the efficient thing to do is to record, at each pixel location, the difference between the predicted value (an average of its neighbors) and the actual value for that pixel. (In the case of representing an evenly shaded sky, we would only need to record the blue value once, followed by lots of zeros.) This way, major coding resources are only needed to keep track of points in the image (such as edges) where there are large changes, that is, points of “surprise” or “unexpected” variation.
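
The arithmetic is simple enough to show in a few lines. The sketch below predicts each pixel from its immediate predecessor rather than from an average of its neighbors, an even simpler assumption than the scheme described above, but the principle is the same: only the deviations from the prediction need to be recorded.

```python
import numpy as np

# A row of sky pixels: nearly constant brightness values (0-255).
row = np.array([200, 200, 200, 201, 200, 200, 199, 200])

# Predictive coding: transmit the first value, then only each pixel's
# deviation from the prediction (here, simply the previous pixel's value).
residuals = np.diff(row)                      # mostly zeros and small numbers
encoded = np.concatenate(([row[0]], residuals))

# The receiver reconstructs the row exactly by accumulating the residuals.
decoded = np.cumsum(encoded)
print(np.array_equal(decoded, row))           # True
```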

It is well known that early visual processing in the brain involves taking differences between nearby values (for example, to identify visual boundaries). It is only natural, then, to explore how the brain might take advantage of predictive coding in perception, inference, or even action. (See Clark 2013 for an excellent summary and entry point to the literature.) There is wide variety in the models presented in the predictive coding paradigm, and they tend to be specified at a higher level of generality than the connectionist models discussed so far. Assume we have a neural net with input, hidden, and output levels that has been trained on a task (say face recognition) and so presumably has information about faces stored in the weights connecting the hidden level nodes. Three features would classify this net as a predictive coding (PC) model. First, the model will have downward connections from the higher levels that are able to predict the next input for that task. (The prediction might be a representation of a generic face.) Second, the data sent to the higher levels for a given input is not the value recorded at the input nodes, but the difference between the predicted values and the values actually present. (So in the example, the data provided tracks the differences between the face to be recognized and the generic face.) In this way the data being received by the net is already preprocessed for coding efficiency. Third, the model is trained by adjusting the weights in such a way that the error is minimized at the inputs. In other words, the trained net reduces as much as possible the “surprise” registered in the difference between the raw input and its prediction. In so doing, it comes to predict the face of the individual being recognized, thereby eliminating the error. Some advocates of predictive coding models suggest that this scheme provides a unified account of all cognitive phenomena, including perception, reasoning, planning, and motor control. By minimizing prediction error in interacting with the environment, the net is forced to develop the conceptual resources to model the causal structure of the external world, and so navigate that world more effectively.
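
Stripped to its essentials, the computation such a model performs can be sketched as follows. This is a minimal single-layer illustration under many simplifying assumptions (no priors, no precision weighting, only one level of prediction), not a reconstruction of any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden = 64, 16
W = rng.normal(scale=0.1, size=(n_input, n_hidden))  # generative (top-down) weights

def settle_and_learn(x, W, steps=50, lr_r=0.1, lr_W=0.01):
    """One presentation of an input x to a minimal predictive-coding layer.

    The higher level maintains a representation r whose top-down prediction
    W @ r is compared with the input; only the prediction error drives change,
    first to the representation (inference), then to the weights (learning)."""
    r = np.zeros(n_hidden)
    for _ in range(steps):
        error = x - W @ r                 # "surprise" registered at the input
        r += lr_r * (W.T @ error)         # adjust r to reduce the error
    W += lr_W * np.outer(x - W @ r, r)    # slowly adjust the weights the same way
    return r, W

x = rng.random(n_input)                   # an input pattern (e.g., an image patch)
r, W = settle_and_learn(x, W)
```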

The predictive coding (PC) paradigm has attracted a lot of attention. There is ample evidence that PC models capture essential details of visual function in the mammalian brain (Rao & Ballard 1999; Huang & Rao 2011). For example, when trained on typical visual input, PC models spontaneously develop functional areas for edge, orientation and motion detection known to exist in visual cortex. This work also raises the interesting point that the visual architecture may develop in response to the statistics of the scenes being encountered, so that organisms in different environments have visual systems specially tuned to their needs.

It must be admitted that there is still no convincing evidence that the essential features of PC models are directly implemented as anatomical structures in the brain. Although it is conjectured that superficial pyramidal cells may transmit prediction error, and deep pyramidal cells predictions, we do not know that that is how they actually function. On the other hand, PC models do appear more neurally plausible than backpropagation architectures, for there is no need for a separate process of training on an externally provided set of training samples. Instead, predictions replace the role of the training set, so that learning and interacting with the environment are two sides of a unified unsupervised process.

PC models also show promise for explaining higher-level cognitive phenomena. An often-cited example is binocular rivalry. When presented with entirely different images in two eyes, humans report an oscillation between the two images as each in turn comes into “focus”. The PC explanation is that the system succeeds in eliminating error by predicting the scene for one eye, but only to increase the error for the other eye. So the system is unstable, “hunting” from one prediction to the other. Predictive coding also has a natural explanation for why we are unaware of our blind spot, for the lack of input in that area amounts to a report of no error, with the result that one perceives “more of the same”.

PC accounts of attention have also been championed. For example, Hohwy (2012) notes that realistic PC models, which must tolerate noisy inputs, need to include parameters that track the desired precision to be used in reporting error. So PC models need to make predictions of the error precision relevant for a given situation. Hohwy explores the idea that mechanisms for optimizing precision expectations map onto those that account for attention, and argues that attentional phenomena such as change blindness can be explained within the PC paradigm.
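
As a toy illustration of the precision idea, assume that “precision” is simply an estimated inverse variance assigned to each input channel (the numbers below are arbitrary): channels expected to be noisy contribute less to the upward error signal, which is one way of modelling the allocation of attention.

```python
import numpy as np

# Illustration only: precision-weighted prediction error.  The 'precision'
# values are assumed inverse variances for each input channel; they are not
# drawn from any specific model in the literature.

def weighted_error(x, prediction, precision):
    """Scale each channel's raw error by how much that channel is trusted."""
    return precision * (x - prediction)

x          = np.array([1.0, 2.0, 3.0])
prediction = np.array([1.2, 2.5, 2.0])
precision  = np.array([10.0, 0.1, 1.0])   # high value = reliable / attended channel

print(weighted_error(x, prediction, precision))
# -> [-2.0, -0.05, 1.0]: the noisy middle channel barely registers.
```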

Predictive coding has interesting implications for themes in the philosophy of cognitive science. By integrating top-down prediction with bottom-up error detection, the PC account treats perception as intrinsically theory-laden: deployment of the conceptual categorization of the world embodied in the higher levels of the net is essential to the very process of gathering data about the world. This also underscores tight linkages between belief, imaginative abilities, and perception (Grush 2004). The PC paradigm tends to support situated or embodied conceptions of cognition as well, for it views action as a dynamic interaction between the organism’s effects on the environment, its predictions concerning those effects (its plans), and its continual monitoring of error, which provides feedback to help ensure success.

It is too early to evaluate the importance and scope of PC models in accounting for the various aspects of cognition. Providing a unified theory of brain function in general is, after all, an impossibly high standard. Clark’s target article (2013) provides a useful forum for airing complaints against PC models and some possible responses. One objection that is often heard is that an organism with a PC brain can be expected to curl up in a dark room and die, for this is the best way to minimize error at its sensory inputs. However, this objection may underestimate the sophistication of the predictions available to the organism. If it is to survive at all, its genetic endowment, coupled with what it can learn along the way, may very well equip it with the expectation that it go out and seek needed resources in the environment. Minimizing error for that prediction of its behavior will get it out of the dark room. Still, it remains to be seen whether a theory of biological urges is usefully recast in PC terminology in this way, or whether PC theory is better characterized as only part of the explanation.

Another complaint is that the top-down influence on our perception, coupled with the constraint that the brain receives error signals rather than raw data, would impose an unrealistic divide between a represented world of fantasy and the world as it really is. It is hard to evaluate whether this qualifies as a serious objection. Were PC models actually to provide an account of our phenomenological experience, and to characterize the relations between that experience and what we count as real, then the skeptical conclusions to be drawn would count as features of the view rather than objections to it.

A number of responders to Clark’s target article also worry that PC models are overly general: in trying to explain everything, they explain nothing. Without sufficient constraints on the architecture, it is too easy to pretend to explain cognitive phenomena by merely redescribing them in a story written in the vocabulary of prediction, comparison, error minimization, and optimized precision. The real proof of the pudding will come with the development of more complex and detailed computer models in the PC framework that are biologically plausible and able to demonstrate the defining features of cognition.

11. Deep Learning: Connectionism’s New Wave

Whereas connectionism’s ambitions seemed to mature and temper towards the end of its Golden Age from 1980 to 1995, neural network research has recently returned to the spotlight after a combination of technical achievements made it practical to train networks with many layers of nodes between input and output (Krizhevsky, Sutskever, & Hinton 2012; Goodfellow, Bengio, & Courville 2016). Amazon, Facebook, Google, Microsoft, and Uber have all since made substantial investments in these “deep learning” systems. Their many promising applications include recognition of objects and faces in photographs, natural language translation and text generation, prediction of protein folds, medical diagnosis and treatment, and control of autonomous vehicles. The success of the game-playing program AlphaZero (Silver et al. 2018) has brought intense publicity to deep learning in the popular press. What is especially telling about AlphaZero is that essentially the same algorithm was capable of learning to defeat human world champions and other top-performing artificial systems in three different rule-based games (chess, shogi, and Go) “without human knowledge” of strategy, that is, by using only information about the rules of these games and policies it learned from extensive self-play. Its ability to soundly defeat expert-knowledge-based programs at their forte has been touted as the death knell for the traditional symbolic paradigm in artificial intelligence.

However, the new capabilities of deep learning systems have brought with them new concerns. Deep networks typically learn from vastly more data than their predecessors (AlphaZero learned from over 100 million self-played Go games), and can extract much more subtle, structured patterns. While the analysis of AlphaZero’s unusual approach to strategy has created a mini-revolution in the study of chess and Go (Sadler & Regan 2019), it has also raised concerns that the solutions deep networks discover are alien and mysterious. It is natural, therefore, to have second thoughts about depending on deep learning technologies for tasks that must be responsive to human interests and goals.

The success of deep learning would not have been possible without specialized Graphics Processing Units (GPUs), massively-parallel processors optimized for the computational burden of training large nets. However, the crucial innovations behind deep learning’s successes lie in network architecture. Although the literature describes a bewildering set of variations in deep net design (Schmidhuber 2015), there are some common themes that help define the paradigm.

The most obvious feature is a substantial increase in the number of hidden layers. Whereas Golden Age networks typically had only one or two hidden layers, deep neural nets have anywhere from five to several hundred. It has been proven that additional depth can exponentially increase the representational and computational power of a neural network, compared to a shallower network with the same number of nodes (Bengio & Delalleau 2011; Montúfar et al. 2014; Raghu et al. 2017). The key is that the patterns detected at a given layer may be used by the subsequent layers to repeatedly create more and more complex discriminations.

The number of layers is not the only feature of deep nets that explains their superior abilities. An emerging consensus is that many tasks that are hard to learn are characterized by the presence of “nuisance parameters”, sources of variation in input signals that are not correlated with decision success. Examples of nuisance parameters in visual categorization tasks include pose, size, and position in the visual field; examples in auditory tasks include tone, pitch, and duration. Successful systems must learn to recognize the deeper similarities hiding beneath this variation in order to identify objects in images or words in audio data.

One of the most commonly-deployed deep architectures—deep convolutional networks—leverages a combination of strategies that are well-suited to overcoming nuisance variation. Golden Age nets used the same activation function for all units, and units in a layer were fully connected to units in adjacent layers. However, deep convolutional nets deploy several different activation functions, and connections to units in the next higher layer are restricted to small windows, such as a square tile of an image or a temporal snippet of a sound file.

A toy example of a deep convolutional net trained to recognize objects in images will help illustrate some of the details. The input to such a net consists of a digitized scene with red, green, and blue (RGB) values for the intensity of colors in each pixel. This input layer is fed to a layer of filter units, which are connected only to a small window of input pixels. Filter units detect specific, local features of the image using an operation called convolution. For example, they might find edges by noting where differences in the intensity of nearby pixels are the greatest. Outputs of these units are then passed to rectified linear units (or “ReLU” nodes), which only pass along activations from the filter nodes that exceed a certain threshold. ReLU units send their signals to a pooling layer, which collects data from many ReLU units and only passes along the most-activated features for each location. The result of this sandwich of convolution-ReLU-pooling layers is a “feature map”, which marks all and only the most salient features detected at each location across the whole image. This feature map can then be sent to a whole series of such sandwiches to detect larger and more abstract features. For example, one sandwich might build lines from edges, the next angles from lines, the next shapes from lines and angles, and the next objects from shapes. A final, fully-connected classification layer is then used to assign labels to the objects detected in the most abstract feature map delivered by the penultimate layer.
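
The sandwich just described can be written down compactly. The following sketch uses PyTorch purely for illustration; the layer sizes, the 32×32 input resolution, and the 10-way classifier are assumptions chosen for the example, not a model from the literature.

```python
import torch
import torch.nn as nn

# A toy convolution-ReLU-pooling "sandwich" of the kind described above.
# Layer sizes, the 32x32 input, and the 10-way classifier are arbitrary
# illustrative choices.

toy_convnet = nn.Sequential(
    # first sandwich: local filters over 3-channel (RGB) input
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),                      # pass along only positive filter responses
    nn.MaxPool2d(kernel_size=2),    # pooling: keep the strongest response per 2x2 window
    # second sandwich: filters over the feature map produced by the first
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    # final fully-connected classification layer over the most abstract feature map
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),      # assumes 32x32 input images and 10 labels
)

# One forward pass on a batch of four random 32x32 RGB "images".
images = torch.randn(4, 3, 32, 32)
scores = toy_convnet(images)        # shape: (4, 10) class scores
print(scores.shape)
```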

This division-of-labor is extremely efficient at overcoming nuisance variation, compared to shallow Golden Age networks. Furthermore, limiting the inputs of the filter nodes to a small window significantly lowers the number of weights that must be learned at each level, compared to a fully-connected network. If features usually depend only on local relations (in the sense that one normally does not need to look at someone’s feet to read their facial expression), then this gain comes at no cost to classification accuracy. In addition, pooling the outputs of several different filter nodes helps detect the same feature across small differences in nuisance variables like pose or location. There is special enthusiasm for this kind of neurocomputational division-of-labor in cognitive science, because it was originally inspired by anatomical studies of mammalian neocortex (Hubel & Wiesel 1965; Fukushima 1980). Other sources of empirical evidence have demonstrated the potential of such networks as models for perceptual similarity and object recognition judgments in primates (Khaligh-Razavi & Kriegeskorte 2014; Hong et al. 2016; Kubilius, Bracci, & Op de Beeck 2016; Lake, Zaremba et al. 2015; Yamins & DiCarlo 2016; and Guest & Love 2019 [Other Internet Resources, hereafter OIR]). These points also interface with the innateness controversy discussed in Section 6. For example, Buckner (2018) has recently argued that these activation functions combine to implement a form of cognitive abstraction which addresses problems facing traditional empiricist philosophy of mind, concerning the way that minds can efficiently discover abstract categorical knowledge in specific, idiosyncratic perceptions.

The increase in computational power that comes with deep net architecture brings with it additional dangers. In fact, the representational power of deep networks is so great that they can simply memorize the correct answer for every item in a large, complex data set, even if the “correct” labels were randomly assigned (Zhang et al. 2016 in OIR). The result is poor generalization of the task to be learned, with total failure to respond properly to inputs outside the training set. Effective deep nets thus employ an array of strategies to prevent them from merely memorizing training data, mostly by biasing the network against the learning of fine-grained idiosyncrasies. Popular options include dropout, which temporarily deactivates a randomly chosen subset of nodes on each training step, and weight decay rules, which cause weights to decrease in value if not constantly refreshed by different examples.
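
To make these strategies concrete, here is how dropout and weight decay typically appear in training code (a minimal sketch in PyTorch; the tiny network, the dropout rate of 0.2, and the decay coefficient are arbitrary illustrative choices).

```python
import torch
import torch.nn as nn

# Two regularization strategies mentioned above, shown in PyTorch.
# The tiny network and the particular rates (p=0.2, weight_decay=1e-4)
# are arbitrary illustrative assumptions.

net = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Dropout(p=0.2),      # dropout: randomly silence 20% of these units on each training step
    nn.Linear(50, 10),
)

# Weight decay: the optimizer shrinks every weight a little on each update,
# so only weights repeatedly refreshed by the data stay large.
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)
```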

While these general points may explain why deep convolutional nets tend to succeed on a wide variety of tasks, their complex structure makes it difficult to explain their decisions in specific cases. This concern interfaces with the XAI (explainable AI) movement, which aims to inspire the development of better tools to analyze the decisions of computer algorithms, especially so that AI systems can be certified to meet practical or legal requirements (Explainable Artificial Intelligence (XAI); B. Goodman & Flaxman 2017). Deep Visualization methods are important tools in addressing these goals for deep neural networks. One popular family of methods uses further machine learning to create an artificial image that maximizes the activation of some particular hidden layer unit (Yosinski et al. 2015). The image is intended to give one an impression of the kind of feature that unit detects when it fires. As expected, the images look more complex and more object-like as we ascend the level hierarchy (for examples and software, see http://yosinski.com/deepvis). Without additional processing, however, many of these visualizations appear chimerical and nonsensical, and it is not clear exactly how well this method reveals features that are genuinely important in the network’s processing. Another family of methods attempts to reveal the aspects of input images that are most salient for the nets’ decision-making. Relevance decomposition, for example, determines which nodes, if deactivated, would have had the greatest effect on some particular decision (Montavon, Samek, & Müller 2018). This can generate a “heatmap”, which shows the aspects of the input that were most influential in that decision. Further machine learning has also been used to build systems able to provide brief English phrases describing the features that lead to a net’s decisions (Hendricks et al. 2016 [OIR]; Ehsan et al. 2018). Despite these advances, the methodologies needed for an adequate explanation of a deep network’s behavior remain unclear and would benefit from further philosophical reflection (Lipton 2016 [OIR]; Zednik 2019 [OIR]).
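
The core of the activation-maximization idea can be sketched briefly, reusing the illustrative toy_convnet from above. Published Deep Visualization methods typically add regularizers (such as blurring and jitter) to keep the synthesized image from degenerating into noise; those refinements are omitted here.

```python
import torch

# Bare-bones activation maximization: adjust an input image by gradient
# ascent so that one chosen hidden unit fires as strongly as possible.
# 'toy_convnet' is the illustrative network defined earlier; channel 5 of
# its first convolutional layer is an arbitrary choice.

image = torch.zeros(1, 3, 32, 32, requires_grad=True)   # start from a blank image
optimizer = torch.optim.Adam([image], lr=0.1)

first_conv = toy_convnet[0]
for _ in range(200):
    optimizer.zero_grad()
    activation = first_conv(image)[0, 5].mean()   # channel 5, averaged over space
    (-activation).backward()                      # ascend by minimizing the negative
    optimizer.step()

# 'image' now approximates the kind of pattern that channel 5 responds to.
```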

The need for explainable deep nets is all the more pressing because of the discovery of so-called “adversarial examples” (Goodfellow et al. 2014; Nguyen, Yosinski, & Clune 2015). These come in at least two forms: “perturbed images” which are natural photographs modified very slightly in a way that causes dramatic changes in classification by deep nets even though the difference is imperceptible to humans, and “rubbish images”, which are purportedly meaningless to humans but are classified with high confidence scores by deep nets. Adversarial examples have led some to conclude that whatever understanding the net has of objects must be radically different than that of humans. Adversarial examples exhibit a number of surprising properties: though constructed from a particular training set, they are highly effective at fooling other nets trained on the same task, even nets with different training sets and different architectures. Furthermore, the search for effective countermeasures has led to frustrating failures. It has also been discovered, however, that perturbation methods can create images which fool humans (Elsayed et al. 2018), and human subjects can predict nets’ preferred labels for rubbish images with high accuracy (Z. Zhou & Firestone 2019). Others have noted that the features nets detect in adversarial examples lead to reliable classifications in naturally-occurring data, challenging the idea that the nets’ decisions should be counted as mistaken (Ilyas et al. 2019 [OIR]). These questions intersect with traditional issues about projectibility and induction, potentially offering new test cases for older philosophical conundrums in epistemology and philosophy of science (N. Goodman 1955; Quine 1969; Harman & Kulkarni 2007).
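
For concreteness, the simplest recipe for producing a perturbed image is the “fast gradient sign” method of Goodfellow, Shlens, & Szegedy (2015): nudge every pixel a small, fixed amount in whichever direction most increases the network’s classification loss. The sketch below again reuses the illustrative toy_convnet, and the step size epsilon is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

# A sketch of the fast gradient sign recipe for perturbed images.
# 'toy_convnet' is the earlier illustrative network; epsilon is an assumed
# small step size, and the label is an assumed correct class.

def fgsm_perturb(net, image, true_label, epsilon=0.01):
    """Return an image nudged in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(net(image), true_label)
    loss.backward()
    # one tiny step per pixel, in the sign of the gradient of the loss
    return (image + epsilon * image.grad.sign()).detach()

image = torch.rand(1, 3, 32, 32)           # stand-in for a natural photograph
label = torch.tensor([3])                  # its (assumed) correct class
adversarial = fgsm_perturb(toy_convnet, image, label)
print((adversarial - image).abs().max())   # change is at most epsilon per pixel
```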

Although deep learning has received an enormous amount of attention in computer science and from the popular press, there is surprisingly little published about it directly among philosophers (though this is beginning to change—Buckner 2018, 2019 [OIR]; Miracchi 2019; Shevlin & Halina 2019; and Zednik 2019 [OIR]). However, there are rich opportunities for philosophical research on deep learning. Examples of some relevant questions include:

  • What kinds of explanation or justification are needed to satisfy our worries about the reliability of deep neural networks in practical applications? What results in deep net research would be needed to assure us that the relevant explanations or justifications are at hand?
  • Can deep nets serve as explanatory models of biological cognition in cognitive neuroscience? If so, what kind of scientific explanations do they provide? Are they mechanistic, functional, or non-causal in nature?
  • What are the prospects for new breakthroughs in deep net natural language processing, and what would it take for these to throw new light on the systematicity controversy?
  • Does deep learning research change the terms of the conflict between radical connectionists and those who claim that symbolic processing models are required to explain higher level cognitive functioning?
  • Do deep nets like AlphaZero vindicate classical empiricism about higher reasoning? Or must they ultimately replicate more human biases and domain-specific knowledge to reason in the way that humans do?

Bibliography

  • Aizawa, Kenneth, 1994, “Representations without Rules, Connectionism and the Syntactic Argument”, Synthese, 101(3): 465–492. doi:10.1007/BF01063898
  • –––, 1997a, “Exhibiting versus Explaining Systematicity: A Reply to Hadley and Hayward”, Minds and Machines, 7(1): 39–55. doi:10.1023/A:1008203312152
  • –––, 1997b, “Explaining Systematicity”, Mind & Language, 12(2): 115–136. doi:10.1111/j.1468-0017.1997.tb00065.x
  • –––, 2003, The Systematicity Arguments, Dordrecht: Kluwer.
  • –––, 2014, “A Tough Time to be Talking Systematicity”, in Calvo and Symons 2014: 77–101.
  • Bechtel, William, 1987, “Connectionism and the Philosophy of Mind: An Overview”, The Southern Journal of Philosophy, 26(S1): 17–41. doi:10.1111/j.2041-6962.1988.tb00461.x
  • –––, 1988, “Connectionism and Rules and Representation Systems: Are They Compatible?”, Philosophical Psychology, 1(1): 5–16. doi:10.1080/09515088808572922
  • Bechtel, William and Adele Abrahamsen, 1990, Connectionism and the Mind: An Introduction to Parallel Processing in Networks, Cambridge, MA: Blackwell.
  • Bengio, Yoshua and Olivier Delalleau, 2011, “On the Expressive Power of Deep Architectures”, in International Conference on Algorithmic Learning Theory (ALT 2011), Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann (eds.) (Lecture Notes in Computer Science 6925), Berlin, Heidelberg: Springer Berlin Heidelberg, 18–36. doi:10.1007/978-3-642-24412-4_3
  • Bengio, Yoshua, Thomas Mesnard, Asja Fischer, Saizheng Zhang, and Yuhuai Wu, 2017, “STDP-Compatible Approximation of Backpropagation in an Energy-Based Model”, Neural Computation, 29(3): 555–577. doi:10.1162/NECO_a_00934
  • Bodén, Mikael and Lars Niklasson, 2000, “Semantic Systematicity and Context in Connectionist Networks”, Connection Science, 12(2): 111–142. doi:10.1080/09540090050129754
  • Buckner, Cameron, 2018, “Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks”, Synthese, 195(12): 5339–5372. doi:10.1007/s11229-018-01949-1
  • Butler, Keith, 1991, “Towards a Connectionist Cognitive Architecture”, Mind & Language, 6(3): 252–272. doi:10.1111/j.1468-0017.1991.tb00191.x
  • Calvo Garzón, Francisco, 2003, “Connectionist Semantics and the Collateral Information Challenge”, Mind & Language, 18(1): 77–94. doi:10.1111/1468-0017.00215
  • Calvo, Paco and John Symons, 2014, The Architecture of Cognition: Rethinking Fodor and Pylyshyn’s Systematicity Challenge, Cambridge: MIT Press.
  • Chalmers, David J., 1990, “Syntactic Transformations on Distributed Representations”, Connection Science, 2(1–2): 53–62. doi:10.1080/09540099008915662
  • –––, 1993, “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong”, Philosophical Psychology, 6(3): 305–319. doi:10.1080/09515089308573094
  • Chomsky, Noam, 1965, Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
  • Christiansen, Morten H. and Nick Chater, 1994, “Generalization and Connectionist Language Learning”, Mind & Language, 9(3): 273–287. doi:10.1111/j.1468-0017.1994.tb00226.x
  • –––, 1999a, “Toward a Connectionist Model of Recursion in Human Linguistic Performance”, Cognitive Science, 23(2): 157–205. doi:10.1207/s15516709cog2302_2
  • –––, 1999b, “Connectionist Natural Language Processing: The State of the Art”, Cognitive Science, 23(4): 417–437. doi:10.1207/s15516709cog2304_2
  • Churchland, Paul M., 1989, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
  • –––, 1995, The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain, Cambridge, MA: MIT Press.
  • –––, 1998, “Conceptual Similarity Across Sensory and Neural Diversity: The Fodor/Lepore Challenge Answered”, Journal of Philosophy, 95(1): 5–32. doi:10.5840/jphil19989514
  • Clark, Andy, 1989, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing, (Explorations in Cognitive Science), Cambridge, MA: MIT Press.
  • –––, 1990 [1995], “Connectionist Minds”, Proceedings of the Aristotelian Society, 90: 83–102. Reprinted in MacDonald and MacDonald 1995: 339–356. doi:10.1093/aristotelian/90.1.83
  • –––, 1993, Associative Engines: Connectionism, Concepts, and Representational Change, Cambridge, MA: MIT Press.
  • –––, 2013, “Whatever next? Predictive Brains, Situated Agents, and the Future of Cognitive Science”, Behavioral and Brain Sciences, 36(3): 181–204. doi:10.1017/S0140525X12000477
  • Clark, Andy and Rudi Lutz (eds.), 1992, Connectionism in Context, London: Springer London. doi:10.1007/978-1-4471-1923-4
  • Cottrell, G.W. and S.L. Small, 1983, “A Connectionist Scheme for Modeling Word Sense Disambiguation”, Cognition and Brain Theory, 6(1): 89–120.
  • Cummins, Robert, 1991, “The Role of Representation in Connectionist Explanations of Cognitive Capacities”, in Ramsey, Stich, and Rumelhart 1991: 91–114.
  • –––, 1996, “Systematicity”, Journal of Philosophy, 93(12): 591–614. doi:10.2307/2941118
  • Cummins, Robert and Georg Schwarz, 1991, “Connectionism, Computation, and Cognition”, in Horgan and Tienson 1991: 60–73. doi:10.1007/978-94-011-3524-5_3
  • Davies, Martin, 1989, “Connectionism, Modularity, and Tacit Knowledge”, The British Journal for the Philosophy of Science, 40(4): 541–555. doi:10.1093/bjps/40.4.541
  • –––, 1991, “Concepts, Connectionism and the Language of Thought”, in Ramsey, Stich, and Rumelhart 1991: 229–257.
  • Dinsmore, John (ed.), 1992, The Symbolic and Connectionist Paradigms: Closing the Gap, Hillsdale, NJ: Erlbaum.
  • Ehsan, Upol, Brent Harrison, Larry Chan, and Mark O. Riedl, 2018, “Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), New Orleans, LA: ACM Press, 81–87. doi:10.1145/3278721.3278736
  • Eliasmith, Chris, 2007, “How to Build a Brain: From Function to Implementation”, Synthese, 159(3): 373–388. doi:10.1007/s11229-007-9235-0
  • –––, 2013, How to Build a Brain: a Neural Architecture for Biological Cognition, New York: Oxford University Press.
  • Elman, Jeffrey L., 1991, “Distributed Representations, Simple Recurrent Networks, and Grammatical Structure”, in Touretzky 1991: 91–122. doi:10.1007/978-1-4615-4008-3_5
  • Elman, Jeffrey, Elizabeth Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett, 1996, Rethinking Innateness: A Connectionist Perspective on Development, Cambridge, MA: MIT Press.
  • Elsayed, Gamaleldin F., Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein, 2018, “Adversarial Examples That Fool Both Computer Vision and Time-Limited Humans”, in Proceedings of the 32nd International Conference on Neural Information Processing Systems, (NIPS’18), 31: 3914–3924.
  • Fodor, Jerry A., 1988, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press.
  • –––, 1997, “Connectionism and the Problem of Systematicity (Continued): Why Smolensky’s Solution Still Doesn’t Work”, Cognition, 62(1): 109–119. doi:10.1016/S0010-0277(96)00780-9
  • Fodor, Jerry and Ernest Lepore, 1992, Holism: A Shopper’s Guide, Cambridge: Blackwell.
  • Fodor, Jerry and Ernie Lepore, 1999, “All at Sea in Semantic Space: Churchland on Meaning Similarity”, Journal of Philosophy, 96(8): 381–403. doi:10.5840/jphil199996818
  • Fodor, Jerry and Brian P. McLaughlin, 1990, “Connectionism and the Problem of Systematicity: Why Smolensky’s Solution Doesn’t Work”, Cognition, 35(2): 183–204. doi:10.1016/0010-0277(90)90014-B
  • Fodor, Jerry A. and Zenon W. Pylyshyn, 1988, “Connectionism and Cognitive Architecture: A Critical Analysis”, Cognition, 28(1–2): 3–71. doi:10.1016/0010-0277(88)90031-5
  • Friston, Karl, 2005, “A Theory of Cortical Responses”, Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456): 815–836. doi:10.1098/rstb.2005.1622
  • Friston, Karl J. and Klaas E. Stephan, 2007, “Free-Energy and the Brain”, Synthese, 159(3): 417–458. doi:10.1007/s11229-007-9237-y
  • Fukushima, Kunihiko, 1980, “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position”, Biological Cybernetics, 36(4): 193–202. doi:10.1007/BF00344251
  • Garfield, Jay L., 1997, “Mentalese Not Spoken Here: Computation, Cognition and Causation”, Philosophical Psychology, 10(4): 413–435. doi:10.1080/09515089708573231
  • Garson, James W., 1991, “What Connectionists Cannot Do: The Threat to Classical AI”, in Horgan and Tienson 1991: 113–142. doi:10.1007/978-94-011-3524-5_6
  • –––, 1994, “Cognition without Classical Architecture”, Synthese, 100(2): 291–305. doi:10.1007/BF01063812
  • –––, 1997, “Syntax in a Dynamic Brain”, Synthese, 110(3): 343–355.
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge, MA: MIT Press.
  • Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy, 2015, “Explaining and Harnessing Adversarial Examples”, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, May 7–9, 2015, available online.
  • Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, 2014, “Generative Adversarial Nets”, in Proceedings of the 27th International Conference on Neural Information Processing Systems, (NIPS’14), Cambridge, MA: MIT Press, 2: 2672–2680.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goodman, Nelson, 1955, Fact, Fiction, and Forecast, Cambridge, MA: Harvard University Press.
  • Grush, Rick, 2004, “The Emulation Theory of Representation: Motor Control, Imagery, and Perception”, Behavioral and Brain Sciences, 27(3): 377–396. doi:10.1017/S0140525X04000093
  • Guarini, Marcello, 2001, “A Defence of Connectionism Against the ‘Syntactic’ Argument”, Synthese, 128(3): 287–317. doi:10.1023/A:1011905917986
  • Hadley, Robert F., 1994a, “Systematicity in Connectionist Language Learning”, Mind & Language, 9(3): 247–272. doi:10.1111/j.1468-0017.1994.tb00225.x
  • –––, 1994b, “Systematicity Revisited: Reply to Christiansen and Chater and Niklasson and van Gelder”, Mind & Language, 9(4): 431–444. doi:10.1111/j.1468-0017.1994.tb00317.x
  • –––, 1997a, “Explaining Systematicity: A Reply to Kenneth Aizawa”, Minds and Machines, 7(4): 571–579. doi:10.1023/A:1008252322227
  • –––, 1997b, “Cognition, Systematicity and Nomic Necessity”, Mind & Language, 12(2): 137–153. doi:10.1111/j.1468-0017.1997.tb00066.x
  • –––, 2004, “On The Proper Treatment of Semantic Systematicity”, Minds and Machines, 14(2): 145–172. doi:10.1023/B:MIND.0000021693.67203.46
  • Hadley, Robert F. and Michael B. Hayward, 1997, “Strong Semantic Systematicity from Hebbian Connectionist Learning”, Minds and Machines, 7(1): 1–37. doi:10.1023/A:1008252408222
  • Hanson, Stephen J. and Judy Kegl, 1987, “PARSNIP: A Connectionist Network that Learns Natural Language Grammar from Exposure to Natural Language Sentences”, Ninth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erlbaum, pp. 106–119.
  • Harman, Gilbert and Sanjeev Kulkarni, 2007, Reliable Reasoning: Induction and Statistical Learning Theory, Cambridge MA: MIT Press.
  • Hatfield, Gary, 1991a, “Representation in Perception and Cognition: Connectionist Affordances”, in Ramsey, Stich, and Rumelhart 1991: 163–195.
  • –––, 1991b, “Representation and Rule-Instantiation in Connectionist Systems”, in Horgan and Tienson 1991: 90–112. doi:10.1007/978-94-011-3524-5_5
  • Hawthorne, John, 1989, “On the Compatibility of Connectionist and Classical Models”, Philosophical Psychology, 2(1): 5–15. doi:10.1080/09515088908572956
  • Haybron, Daniel M., 2000, “The Causal and Explanatory Role of Information Stored in Connectionist Networks”, Minds and Machines, 10(3): 361–380. doi:10.1023/A:1026545231550
  • Hinton, Geoffrey E., 1990 [1991], “Mapping Part-Whole Hierarchies into Connectionist Networks”, Artificial Intelligence, 46(1–2): 47–75. Reprinted in Hinton 1991: 47–76. doi:10.1016/0004-3702(90)90004-J
  • ––– (ed.), 1991, Connectionist Symbol Processing, Cambridge, MA: MIT Press.
  • –––, 1992, “How Neural Networks Learn from Experience”, Scientific American, 267(3): 145–151.
  • –––, 2010, “Learning to Represent Visual Input”, Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1537): 177–184. doi:10.1098/rstb.2009.0200
  • Hinton, Geoffrey E., James L. McClelland, and David E. Rumelhart, 1986, “Distributed Representations”, Rumelhart, McClelland, and the PDP group 1986: chapter 3.
  • Hohwy, Jakob, 2012, “Attention and Conscious Perception in the Hypothesis Testing Brain”, Frontiers in Psychology, 3(96): 1–14. doi:10.3389/fpsyg.2012.00096
  • Hong, Ha, Daniel L K Yamins, Najib J Majaj, and James J DiCarlo, 2016, “Explicit Information for Category-Orthogonal Object Properties Increases along the Ventral Stream”, Nature Neuroscience, 19(4): 613–622. doi:10.1038/nn.4247
  • Horgan, Terence E. and John Tienson, 1989, “Representations without Rules”, Philosophical Topics, 17(1): 147–174.
  • –––, 1990, “Soft Laws”, Midwest Studies In Philosophy, 15: 256–279. doi:10.1111/j.1475-4975.1990.tb00217.x
  • ––– (eds.), 1991, Connectionism and the Philosophy of Mind, Dordrecht: Kluwer. doi:10.1007/978-94-011-3524-5
  • –––, 1996, Connectionism and the Philosophy of Psychology, Cambridge, MA: MIT Press.
  • Hosoya, Toshihiko, Stephen A. Baccus, and Markus Meister, 2005, “Dynamic Predictive Coding by the Retina”, Nature, 436(7047): 71–77. doi:10.1038/nature03689
  • Huang, Yanping and Rajesh P. N. Rao, 2011, “Predictive Coding”, Wiley Interdisciplinary Reviews: Cognitive Science, 2(5): 580–593. doi:10.1002/wcs.142
  • Hubel, David H. and Torsten N. Wiesel, 1965, “Receptive Fields and Functional Architecture in Two Nonstriate Visual Areas (18 and 19) of the Cat”, Journal of Neurophysiology, 28(2): 229–289. doi:10.1152/jn.1965.28.2.229
  • Jansen, Peter A. and Scott Watter, 2012, “Strong Systematicity through Sensorimotor Conceptual Grounding: An Unsupervised, Developmental Approach to Connectionist Sentence Processing”, Connection Science, 24(1): 25–55. doi:10.1080/09540091.2012.664121
  • Johnson, Kent, 2004, “On the Systematicity of Language and Thought”, Journal of Philosophy, 101(3): 111–139. doi:10.5840/jphil2004101321
  • Jones, Matt and Bradley C. Love, 2011, “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition”, Behavioral and Brain Sciences, 34(4): 169–188. doi:10.1017/S0140525X10003134
  • Khaligh-Razavi, Seyed-Mahdi and Nikolaus Kriegeskorte, 2014, “Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation”, PLoS Computational Biology, 10(11): e1003915. doi:10.1371/journal.pcbi.1003915
  • Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton, 2012, “Imagenet Classification with Deep Convolutional Neural Networks”, Advances in Neural Information Processing Systems, 25: 1097–1105.
  • Kubilius, Jonas, Stefania Bracci, and Hans P. Op de Beeck, 2016, “Deep Neural Networks as a Computational Model for Human Shape Sensitivity”, PLOS Computational Biology, 12(4): e1004896. doi:10.1371/journal.pcbi.1004896
  • Laakso, Aarre and Garrison Cottrell, 2000, “Content and Cluster Analysis: Assessing Representational Similarity in Neural Systems”, Philosophical Psychology, 13(1): 47–76. doi:10.1080/09515080050002726
  • Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum, 2015, “Human-Level Concept Learning through Probabilistic Program Induction”, Science, 350(6266): 1332–1338. doi:10.1126/science.aab3050
  • Lake, Brenden M., Wojciech Zaremba, Rob Fergus, and Todd M. Gureckis, 2015, “Deep Neural Networks Predict Category Typicality Ratings for Images”, Proceedings of the 37th Annual Cognitive Science Society, Pasadena, CA, 22–25 July 2015, available online.
  • Lillicrap, Timothy P., Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman, 2016, “Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning”, Nature Communications, 7(1): 13276. doi:10.1038/ncomms13276
  • Loula, João, Marco Baroni, and Brenden Lake, 2018, “Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks”, in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium: Association for Computational Linguistics, 108–114. doi:10.18653/v1/W18-5413
  • MacDonald, Cynthia and Graham MacDonald (eds), 1995, Connectionism, (Debates on Psychological Explanation, 2), Oxford: Blackwell.
  • Matthews, Robert J., 1997, “Can Connectionists Explain Systematicity?”, Mind & Language, 12(2): 154–177. doi:10.1111/j.1468-0017.1997.tb00067.x
  • Marcus, Gary F., 1998, “Rethinking Eliminative Connectionism”, Cognitive Psychology, 37(3): 243–282. doi:10.1006/cogp.1998.0694
  • –––, 2001, The Algebraic Mind: Integrating Connectionism and Cognitive Science, Cambridge, MA: MIT Press.
  • McClelland, James L. and Jeffrey L. Elman, 1986, “The TRACE Model of Speech Perception”, Cognitive Psychology, 18(1): 1–86. doi:10.1016/0010-0285(86)90015-0
  • McClelland, James L., David E. Rumelhart, and the PDP Research Group (eds), 1986, Parallel Distributed Processing, Volume II: Explorations in the Microstructure of Cognition: Psychological and Biological Models, Cambridge, MA: MIT Press.
  • McLaughlin, Brian P., 1993, “The Connectionism/Classicism Battle to Win Souls”, Philosophical Studies, 71(2): 163–190. doi:10.1007/BF00989855
  • Miikkulainen, Risto, 1993, Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory, Cambridge, MA: MIT Press.
  • Miikkulainen, Risto and Michael G. Dyer, 1991, “Natural Language Processing With Modular Pdp Networks and Distributed Lexicon”, Cognitive Science, 15(3): 343–399. doi:10.1207/s15516709cog1503_2
  • Miracchi, Lisa, 2019, “A Competence Framework for Artificial Intelligence Research”, Philosophical Psychology, 32(5): 588–633. doi:10.1080/09515089.2019.1607692
  • Montavon, Grégoire, Wojciech Samek, and Klaus-Robert Müller, 2018, “Methods for Interpreting and Understanding Deep Neural Networks”, Digital Signal Processing, 73: 1–15. doi:10.1016/j.dsp.2017.10.011
  • Montúfar, Guido, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio, 2014, “On the Number of Linear Regions of Deep Neural Networks”, in Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS’14), Cambridge, MA: MIT Press, 2: 2924–2932.
  • Morris, William C., Garrison W. Cottrell, and Jeffrey Elman, 2000, “A Connectionist Simulation of the Empirical Acquisition of Grammatical Relations”, in Wermter and Sun 2000: 1778:175–193. doi:10.1007/10719871_12
  • Nguyen, Anh, Jason Yosinski, and Jeff Clune, 2015, “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images”, Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 427–436, available online.
  • Niklasson, Lars F. and Tim van Gelder, 1994, “On Being Systematically Connectionist”, Mind & Language, 9(3): 288–302. doi:10.1111/j.1468-0017.1994.tb00227.x
  • O’Reilly, Randall C., 1996, “Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm”, Neural Computation, 8(5): 895–938. doi:10.1162/neco.1996.8.5.895
  • Phillips, Steven, 2002, “Does Classicism Explain Universality?”, Minds and Machines, 12(3): 423–434. doi:10.1023/A:1016160512967
  • Pinker, Steven and Jacques Mehler (eds.), 1988, Connections and Symbols, Cambridge, MA: MIT Press.
  • Pinker, Steven and Alan Prince, 1988, “On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition”, Cognition, 28(1–2): 73–193. doi:10.1016/0010-0277(88)90032-7
  • Pollack, Jordan B., 1989, “Implications of Recursive Distributed Representations”, in Touretzky 1989: 527–535, available online.
  • –––, 1990 [1991], “Recursive Distributed Representations”, Artificial Intelligence, 46(1–2): 77–105. Reprinted in Hinton 1991: 77–106. doi:10.1016/0004-3702(90)90005-K
  • –––, 1991, “Induction of Dynamical Recognizers”, in Touretzky 1991: 123–148. doi:10.1007/978-1-4615-4008-3_6
  • Port, Robert F., 1990, “Representation and Recognition of Temporal Patterns”, Connection Science, 2(1–2): 151–176. doi:10.1080/09540099008915667
  • Port, Robert F. and Timothy van Gelder, 1991, “Representing Aspects of Language”, Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Hillsdale, N.J.: Erlbaum, 487–492, available online.
  • Quine, W. V., 1969, “Natural Kinds”, in Essays in Honor of Carl G. Hempel, Nicholas Rescher (ed.), Dordrecht: Springer Netherlands, 5–23. doi:10.1007/978-94-017-1466-2_2
  • Raghu, Maithra, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein, 2017, “On the Expressive Power of Deep Neural Networks”, in Proceedings of the 34th International Conference on Machine Learning, 70: 2847–2854, available online.
  • Ramsey, William, 1997, “Do Connectionist Representations Earn Their Explanatory Keep?”, Mind & Language, 12(1): 34–66. doi:10.1111/j.1468-0017.1997.tb00061.x
  • Ramsey, William, Stephen P. Stich, and Joseph Garon, 1991, “Connectionism, Eliminativism, and the Future of Folk Psychology”, in Ramsey, Stich, and Rumelhart 1991: 199–228.
  • Ramsey, William, Stephen P. Stich, and David E. Rumelhart, 1991, Philosophy and Connectionist Theory, Hillsdale, N.J.: Erlbaum.
  • Rao, Rajesh P. N. and Dana H. Ballard, 1999, “Predictive Coding in the Visual Cortex: A Functional Interpretation of Some Extra-Classical Receptive-Field Effects”, Nature Neuroscience, 2(1): 79–87. doi:10.1038/4580
  • Rohde, Douglas L. T. and David C. Plaut, 2003, “Connectionist Models of Language Processing”, Cognitive Studies (Japan), 10(1): 10–28. doi:10.11225/jcss.10.10
  • Roth, Martin, 2005, “Program Execution in Connectionist Networks”, Mind & Language, 20(4): 448–467. doi:10.1111/j.0268-1064.2005.00295.x
  • Rumelhart, David E. and James L. McClelland, 1986, “On Learning the Past Tenses of English Verbs”, in McClelland, Rumelhart, and the PDP group 1986: 216–271.
  • Rumelhart, David E., James L. McClelland, and the PDP Research Group (eds), 1986, Parallel Distributed Processing, Volume 1: Explorations in the Microstructure of Cognition: Foundations, Cambridge, MA: MIT Press.
  • Sadler, Matthew and Natasha Regan, 2019, Game Changer: AlphaZero’s Groundbreaking Chess Strategies and the Promise of AI, Alkmaar: New in Chess.
  • Schmidhuber, Jürgen, 2015, “Deep Learning in Neural Networks: An Overview”, Neural Networks, 61: 85–117. doi:10.1016/j.neunet.2014.09.003
  • Schwarz, Georg, 1992, “Connectionism, Processing, Memory”, Connection Science, 4(3–4): 207–226. doi:10.1080/09540099208946616
  • Sejnowski, Terrence J. and Charles R. Rosenberg, 1987, “Parallel Networks that Learn to Pronounce English Text”, Complex Systems, 1(1): 145–168, available online.
  • Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland, 1991, “Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks”, in Touretzky 1991: 57–89. doi:10.1007/978-1-4615-4008-3_4
  • Shastri, Lokendra and Venkat Ajjanagadde, 1993, “From Simple Associations to Systematic Reasoning: A Connectionist Representation of Rules, Variables and Dynamic Bindings Using Temporal Synchrony”, Behavioral and Brain Sciences, 16(3): 417–451. doi:10.1017/S0140525X00030910
  • Shea, Nicholas, 2007, “Content and Its Vehicles in Connectionist Systems”, Mind & Language, 22(3): 246–269. doi:10.1111/j.1468-0017.2007.00308.x
  • Shevlin, Henry and Marta Halina, 2019, “Apply Rich Psychological Terms in AI with Care”, Nature Machine Intelligence, 1(4): 165–167. doi:10.1038/s42256-019-0039-y
  • Shultz, Thomas R. and Alan C. Bale, 2001, “Neural Network Simulation of Infant Familiarization to Artificial Sentences”, Infancy, 2(4): 501–536.
  • –––, 2006, “Neural Networks Discover a Near-Identity Relation to Distinguish Simple Syntactic Forms”, Minds and Machines, 16(2): 107–139. doi:10.1007/s11023-006-9029-z
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, et al., 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science, 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Smolensky, Paul, 1987, “The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn”, The Southern Journal of Philosophy, 26(S1): 137–161. doi:10.1111/j.2041-6962.1988.tb00470.x
  • –––, 1988, “On the Proper Treatment of Connectionism”, Behavioral and Brain Sciences, 11(1): 1–23. doi:10.1017/S0140525X00052432
  • –––, 1990 [1991], “Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems”, Artificial Intelligence, 46(1–2): 159–216. Reprinted in Hinton 1991: 159–216. doi:10.1016/0004-3702(90)90007-M
  • –––, 1995, “Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture”, in MacDonald and MacDonald 1995.
  • St. John, Mark F. and James L. McClelland, 1990 [1991], “Learning and Applying Contextual Constraints in Sentence Comprehension”, Artificial Intelligence, 46(1–2): 217–257. Reprinted in Hinton 1991: 217–257 doi:10.1016/0004-3702(90)90008-N
  • Tomberlin, James E. (ed.), 1995, Philosophical Perspectives 9: AI, Connectionism and Philosophical Psychology, Atascadero: Ridgeview Press.
  • Touretzky, David S. (ed.), 1989, Advances in Neural Information Processing Systems I, San Mateo, CA: Kaufmann, available online.
  • ––– (ed.), 1990, Advances in Neural Information Processing Systems II, San Mateo, CA: Kaufmann.
  • ––– (ed.), 1991, Connectionist Approaches to Language Learning, Boston, MA: Springer US. doi:10.1007/978-1-4615-4008-3
  • Touretzky, David S., Geoffrey E. Hinton, and Terrence Joseph Sejnowski (eds), 1988, Proceedings of the 1988 Connectionist Models Summer School, San Mateo, CA: Kaufmann.
  • Van Gelder, Tim, 1990, “Compositionality: A Connectionist Variation on a Classical Theme”, Cognitive Science, 14(3): 355–384. doi:10.1016/0364-0213(90)90017-Q
  • –––, 1991, “What is the ‘D’ in PDP?” in Ramsey, Stich, and Rumelhart 1991: 33–59.
  • Van Gelder, Timothy and Robert Port, 1993, “Beyond Symbolic: Prolegomena to a Kama-Sutra of Compositionality”, in Vasant G Honavar, Leonard Uhr (eds.), Symbol Processing and Connectionist Models in AI and Cognition: Steps Towards Integration, Boston: Academic Press.
  • Vilcu, Marius and Robert F. Hadley, 2005, “Two Apparent ‘Counterexamples’ to Marcus: A Closer Look”, Minds and Machines, 15(3–4): 359–382. doi:10.1007/s11023-005-9000-4
  • Von Eckardt, Barbara, 2003, “The Explanatory Need for Mental Representations in Cognitive Science”, Mind & Language, 18(4): 427–439. doi:10.1111/1468-0017.00235
  • –––, 2005, “Connectionism and the Propositional Attitudes”, in Christina Erneling and David Martel Johnson (eds.), The Mind as a Scientific Object: Between Brain and Culture, New York: Oxford University Press.
  • Waltz, David L. and Jordan B. Pollack, 1985, “Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation”, Cognitive Science, 9(1): 51–74. doi:10.1207/s15516709cog0901_4
  • Wermter, Stefan and Ron Sun (eds.), 2000, Hybrid Neural Systems, (Lecture Notes in Computer Science 1778), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/10719871
  • Yamins, Daniel L. K. and James J. DiCarlo, 2016, “Using Goal-Driven Deep Learning Models to Understand Sensory Cortex”, Nature Neuroscience, 19(3): 356–365. doi:10.1038/nn.4244
  • Yosinski, Jason, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson, 2015, “Understanding Neural Networks Through Deep Visualization”, Deep Learning Workshop, 31st International Conference on Machine Learning, Lille, France, available online.
  • Zhou, Zhenglong and Chaz Firestone, 2019, “Humans Can Decipher Adversarial Images”, Nature Communications, 10(1): 1334. doi:10.1038/s41467-019-08931-6

Other Internet Resources

Copyright © 2019 by
Cameron Buckner <cjbuckner@uh.edu>
James Garson <JGarson@uh.edu>
