
Introduction

One of the most interesting scientific developments during the next few decades will be the unraveling of the structure of computation in the nervous systems of living organisms. Since the information processing capabilities of living organisms are in many respects superior to those of our current artificial computing machinery, this is likely to have significant consequences for the way in which computers and robots will be designed in the year 2020. Learning from successful computational strategies of neural systems will be especially desirable for computer science for several reasons.

Traditionally, theoretical computer science has played the role of a scout that explores novel approaches towards computing well in advance of other sciences. This was also the case for neural computation. Von Neumann's book The Computer and the Brain [von Neumann, 1958] raised, over 40 years ago, several of the essential issues for understanding neural computation from the point of view of a theoretical computer scientist. Even earlier, the seminal paper A logical calculus of the ideas immanent in nervous activity [McCulloch and Pitts, 1943] provided an abstract circuit model[1] that reflects some essential aspects of computation in biological neural systems, but which is at the same time sufficiently simple and formally precise that it can be investigated theoretically.[2] The paper A Theory of the Learnable by Valiant [Valiant, 1984] initiated research in theoretical computer science on another essential ingredient of neural systems: their learning capabilities. This created the new area of computational learning theory within theoretical computer science. It has made a number of important contributions to applied machine learning, but has so far had little impact on the investigation of learning in neural systems[3]. Another mathematically rigorous approach towards understanding learning, reinforcement learning (see [Bertsekas and Tsitsiklis, 1996, Sutton and Barto, 1998]), has been more successful in this regard. But so far reinforcement learning has attracted little attention in theoretical computer science.
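To make the McCulloch-Pitts model concrete: a threshold gate outputs 1 exactly when a weighted sum of its Boolean inputs reaches its threshold. The following minimal Python sketch illustrates this (the function name, weights, and thresholds are chosen here purely for illustration and do not appear in the original papers):

    def threshold_gate(weights, threshold, inputs):
        """McCulloch-Pitts unit: fires (returns 1) iff the weighted
        sum of its 0/1 inputs reaches the threshold."""
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s >= threshold else 0

    # A 2-input gate with weights (1, 1) and threshold 2 computes AND;
    # lowering the threshold to 1 turns the same gate into OR.
    assert threshold_gate([1, 1], 2, [1, 1]) == 1
    assert threshold_gate([1, 1], 2, [1, 0]) == 0
    assert threshold_gate([1, 1], 1, [1, 0]) == 1

Circuits built from such gates can compute any Boolean function, which is what makes the model simple enough for rigorous analysis while still capturing the all-or-nothing firing of biological neurons.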

Altogether we would be very happy if we could discern a slow but steady development in which theoretical computer science gains increasing impact on the investigation of computing and learning in neural systems. Unfortunately there is little support for such optimism, and concerted efforts would be needed to change this situation. An inspection of any recent issue of the leading journals (e.g. Neural Computation[4], Network: Computation in Neural Systems) or conference proceedings in this area (e.g. those of the annual NIPS conference[5], whose proceedings are published by MIT Press under the title Advances in Neural Information Processing Systems) shows that there exists a large amount of interdisciplinary work on neural computation, with a fair number of theoretical contributions. But so far this theoretical work has been dominated by approaches from theoretical physics, information theory, and statistics. One might speculate that this results from the fact that theoretical computer science has become to a large extent ``method-driven'': we typically look for new problems that can be solved by variations and extensions of a body of fascinating mathematical tools that we have come to like, and that form the heart of current theoretical computer science. In contrast, to have a serious impact on research in neural computation, a theoretical researcher also has to become ``problem-driven'', i.e., to employ and develop those mathematical concepts and tools that are most adequate for the problem at hand.

One of the main obstacles for a theoretical computer scientist who is ready to tackle theoretical problems about computing and learning in biological neural systems is the diversity of models, and the diversity of opinions among leading neuroscientists regarding the right way to understand computations in the brain.[6] This concerns especially the very first questions that a theoretical computer scientist is likely to ask.

Yet another difficulty arises from the fact that neuroscientists employ different models for different levels of detail, starting from models for the whole brain, going down to neural columns and circuits, further down to neurons and synapses, and finally down to the molecular ``switches'', e.g. the diverse variety of ion channels and receptors that control the flow of charged particles in neurons and synapses. Unfortunately it is still not known which of these levels is the right one for understanding neural computation and learning. It is quite possible that none of these levels is ``the right one'', and that the interplay of several of these levels of modeling is essential. On the other hand, this family of models with different levels of temporal and spatial resolution provides a rich source of interesting research problems for theoretical computer scientists.[7]
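As an illustration of one of these intermediate levels, the level of single neurons, the following Python sketch simulates a leaky integrate-and-fire neuron, a standard simplified single-neuron model (all parameter values here are arbitrary illustrative choices, not taken from the text):

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron: the membrane potential v
        leaks toward v_rest while integrating input current; whenever
        v crosses v_thresh the neuron emits a spike and v is reset."""
        v = v_rest
        spike_times = []
        for t, i_t in enumerate(input_current):
            v += (dt / tau) * (v_rest - v) + i_t   # leaky integration
            if v >= v_thresh:
                spike_times.append(t)              # record the spike time
                v = v_reset
        return spike_times

    # A constant input current drives the neuron to fire at a regular rate.
    print(simulate_lif([0.12] * 200))

Coarser levels of modeling abstract away such membrane dynamics entirely, while finer levels would replace the single leak term with the kinetics of individual ion channels; choosing the resolution appropriate for a given question is precisely the kind of modeling decision discussed above.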


FOOTNOTES:
[1] This model is nowadays called a threshold circuit. The books [Siu et al., 1995] and [Roychowdhury et al., 1994] provide good surveys.
[2] Curiously enough, it is reported in [Hopcroft and Ullman, 1979] that one of the main concepts of theoretical computer science, the finite automaton, had been conceived by Kleene [Kleene, 1956] when he investigated the model proposed by McCulloch and Pitts.
[3] A program for applying results from this area towards understanding neural systems is outlined in Valiant's book Circuits of the Mind [Valiant, 1994].
[4] see http://neco.mitpress.org/
[5] see http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/NIPS.html
[6] This is a consequence of the fact that most of the basic questions about computing and learning in living organisms cannot be answered directly through suitable experiments. Usually one has to rely on indirect empirical evidence, which often depends on details of the experimental setup, the specific neural system and species that is studied, and the methods of data analysis that are employed.
[7] An example of such an investigation is [Maass and Natschläger, 2000].
