What and how the brain computes: introduction

The subject of this blog is how the brain works. In order to understand this, it is essential to know what is computed by different brain systems, and how the computations are performed. The aim of this blog is to elucidate what is computed in different brain systems, and to describe current computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. Potential applications of this understanding include the treatment of brain disease, and artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This blog is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems, and how it is computed.

To understand how our brains work, it is essential to know what is computed in each part of the brain. That can be addressed by utilising evidence relevant to computation from many areas of neuroscience. Knowledge of the connections between different brain areas is important, for this shows that the brain is organised as systems, with whole series of brain areas devoted, for example, to visual processing. That provides a foundation for examining the computation performed by each brain area, by comparing what is represented in a brain area with what is represented in the preceding and following brain areas, using techniques such as neurophysiology and functional neuroimaging. Neurophysiology at the single neuron level is needed because this is the level at which information is transmitted between the computing elements of the brain, the neurons. Evidence from the effects of brain damage, including that available from neuropsychology, is needed to help understand what different parts of the system do, and indeed what each part is necessary for. Functional neuroimaging is useful to indicate where in the human brain different processes take place, and to show which functions can be dissociated from each other. So for each brain system, evidence on what is computed at each stage, and what the system as a whole computes, is essential.

To understand how our brains work, it is also essential to know how each part of the brain computes. That requires a knowledge of what is represented and computed by the neurons in each part of the brain, but it also requires knowledge of the network properties of each brain region. This involves knowledge of the connectivity between the neurons in each part of the brain, and knowledge of the synaptic and biophysical properties of the neurons. It also requires knowledge of the theory of what can be computed by networks with defined connectivity.

There are at least three key goals of the approaches described here. One is to understand ourselves better, and how we work and think. A second is to be better able to treat the system when it has problems, for example in mental illnesses. Medical applications are a very important aim of the type of research described here. A third goal is to be able to emulate and learn from the operation of parts of our brains, which some in the field of artificial intelligence (AI) would like to do to produce more useful computers and machines. All of these goals require, and cannot get off the ground without, a firm foundation in what is computed by brain systems, and in theories and models of how it is computed.

Part of the enterprise here is to stimulate new theories and models of how parts of the brain work. The evidence on what is computed in different brain systems has advanced rapidly in the last 50 years, and provides a reasonable foundation for the enterprise, though there is much that remains to be learned. Theories of how the computation is performed are less advanced, but progress is being made, and current models are described in my blog for many brain systems, in the expectation that knowledge of the considerable current evidence on how the brain computes provides a useful starting point for further advances, especially as current theories do take into account the limitations that are likely to be imposed by the neural architectures present in our brains.

The simplest way to define brain computation is to examine what information is represented at each stage of processing, and how this is different from stage to stage. For example, in the primary visual cortex (V1), neurons respond to simple stimuli such as bars, edges, or gratings, and have small receptive fields. Little can be read off, for example about whose face is being viewed, from the firing rates of a small number of neurons in V1. On the other hand, after four or five stages of processing, in the inferior temporal cortex, information can be read from the firing rates of neurons about whose face is being viewed, and indeed there is remarkable invariance with respect to the position, size, contrast, and even in some cases the view of the face. That is a major computation, and indicates what can be achieved by neural computation.
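To make the idea of reading information from neuronal firing rates concrete, here is a minimal decoding sketch in Python, using simulated data rather than recordings. All of the numbers (50 neurons, 4 face identities, Gaussian trial-to-trial noise) are hypothetical, and the nearest-template decoder is only one simple choice among the many decoding methods used in the literature; the point is just that stimulus identity can be recovered from a population firing-rate vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical, not from recorded data):
n_neurons = 50        # size of the recorded population
n_faces = 4           # number of face identities to decode
n_trials = 20         # trials per face used to estimate templates
noise_sd = 2.0        # trial-to-trial variability of firing rates (spikes/s)

# Each face identity evokes a characteristic mean firing-rate profile
# across the population (a distributed representation).
mean_rates = rng.uniform(0.0, 20.0, size=(n_faces, n_neurons))

def simulate_trial(face):
    """Simulate one trial: a noisy firing-rate vector for a given face."""
    rates = mean_rates[face] + rng.normal(0.0, noise_sd, n_neurons)
    return np.clip(rates, 0.0, None)   # firing rates cannot be negative

# "Training" data: estimate each face's mean population response.
train = np.array([[simulate_trial(f) for _ in range(n_trials)]
                  for f in range(n_faces)])
templates = train.mean(axis=1)         # shape (n_faces, n_neurons)

def decode(rates):
    """Nearest-template decoder: which face's mean profile is closest?"""
    distances = np.linalg.norm(templates - rates, axis=1)
    return int(np.argmin(distances))

# Test: decode new trials and measure accuracy.
n_test = 100
correct = sum(decode(simulate_trial(f)) == f
              for f in range(n_faces) for _ in range(n_test))
print(f"Decoding accuracy: {correct / (n_faces * n_test):.2%}")
```

With these illustrative parameters the decoder is close to perfect; increasing the noise or shrinking the population degrades the accuracy, which is one simple way of quantifying how much information a population of neurons carries about the stimulus.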

These approaches can only be taken to understand brain function because there is considerable localization of function in the brain, quite unlike a digital computer. One fundamental reason for localization of function in the brain is that this minimizes the total length of the connections between neurons, and thus brain size. Another is that it simplifies the genetic information that has to be provided in order to build the brain, because the connectivity instructions can refer considerably to local connections. These points are developed in the book Cerebral Cortex: Principles of Operation (Rolls, 2016b).

That brings me to what is different about the present blog and Cerebral Cortex: Principles of Operation (Rolls, 2016b). The previous book took on the enormous task of making progress with understanding how the major part of our brains, the cerebral cortex, works, by understanding its principles of operation. The present blog builds on that approach, and uses it as background, but has the different aim of taking each of our brain systems, and describing what they compute, and then what is known about how each system computes. The issue of how they compute relies for many brain systems on how the cortex operates, so Cerebral Cortex: Principles of Operation provides an important complement to the present blog.

One of the distinctive properties of this blog is that it links the neural computation approach not only firmly to neuronal neurophysiology, which provides much of the primary data about how the brain operates, but also to psychophysical studies (for example of attention); to neuropsychological studies of patients with brain damage; and to functional magnetic resonance imaging (fMRI) (and other neuroimaging) approaches. The empirical evidence that is brought to bear is largely from non-human primates and from humans, because of the considerable similarity of their cortical systems, and because the overall aims are to understand the human brain, and the disorders that arise after brain damage.

In selecting the research findings on ‘what’ is computed in different brain systems and on ‘how’ it is computed, to include in this blog, I have selected pioneering research that has helped to identify key computational principles involved for different brain systems. Discoveries that have laid the foundation for our understanding as research has developed are emphasized. That has meant that much excellent neuroscience research could not be included here: but the aim instead is to identify computational principles of operation of brain systems, providing some of the key research discoveries that have helped to identify those principles. I hope that future research will extend this aim further.

Before the 1960s there were many and important discoveries about the phenomenology of the cortex, for example that damage in one part would affect vision, and in another part movement, with electrical stimulation often producing the opposite effect. The principles may help us to understand these phenomena, but the phenomena provide limited evidence about how the cortex works, apart from the very important principle of localization of function (see Rolls (2016b)), and the important principle of hierarchical organization (Hughlings Jackson, 1878; Swash, 1989) (see Rolls (2016b)), which has been supported by increasing evidence on the connections between different cortical areas, which is a fundamental building block for understanding brain computations.

In the 1960s David Hubel and Torsten Wiesel made important discoveries about the stimuli that activate primary visual cortex neurons, showing that they respond to bar-like or edge-like visual stimuli (Hubel and Wiesel, 1962, 1968, 1977), instead of the small circular receptive fields of lateral geniculate nucleus neurons. This led them to suggest the elements of a model of how this might come about, with cortical neurons that receive inputs from an elongated line of lateral geniculate neurons. This led to the concept that hierarchical organization over a series of cortical areas might at each stage form combinations of the features represented in the previous cortical area, in what might be termed feature combination neurons.
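The idea of feature combination neurons can be illustrated with a toy model along these lines (a sketch, not Hubel and Wiesel's actual circuit or parameters): an orientation-selective 'simple cell' is built by summing a horizontal row of centre-surround (difference-of-Gaussians) subunits, and as a result it responds much more strongly to a bar at its preferred orientation than to the orthogonal bar.

```python
import numpy as np

size = 21
ys, xs = np.mgrid[0:size, 0:size]
cy = cx = size // 2

def center_surround(y0, x0, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians: an LGN-like centre-surround receptive field."""
    d2 = (ys - y0) ** 2 + (xs - x0) ** 2
    centre = np.exp(-d2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-d2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return centre - surround

# An orientation-selective unit built by summing a horizontal row of
# centre-surround subunits: a combination of simpler features.
simple_cell = sum(center_surround(cy, cx + dx) for dx in range(-6, 7, 3))

def response(stimulus):
    """Linear summation followed by a threshold (firing rates are non-negative)."""
    return max(0.0, float(np.sum(simple_cell * stimulus)))

# Stimuli: a horizontal bar (the preferred orientation) and a vertical bar.
horizontal = np.zeros((size, size)); horizontal[cy, :] = 1.0
vertical = np.zeros((size, size)); vertical[:, cx] = 1.0

print("response to horizontal bar:", round(response(horizontal), 3))
print("response to vertical bar:  ", round(response(vertical), 3))
```

The same principle, applied repeatedly over a hierarchy of cortical areas, is one way in which combinations of combinations of features could come to represent whole objects or faces.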

However, before 1970 there were few ideas about how the cerebral cortex operates computationally.

David Marr was a pioneer who helped to open the way to an understanding of how the details of cortical anatomy and connectivity help to develop quantitative theories of how cortical areas may compute, including the cerebellar cortex (Marr, 1969), the neocortex (Marr, 1970), and the hippocampal cortex (Marr, 1971). Marr was hampered by the limited detail of the anatomical knowledge available at the time, and did not for example hypothesize that the hippocampal CA3 network was an autoassociation memory. He attempted to test his theory of the cerebellum with Sir John (Jack) Eccles by stimulating the climbing fibres in the cerebellum while providing an input from the parallel fibres to a Purkinje cell, but the experiment did not succeed, partly because of a lack of physiological knowledge about the firing rates of climbing fibres, which are low, rarely more than 10 spikes/s, whereas they had stimulated at much higher frequencies. Perhaps in part because David Marr was ahead of the experimental techniques available at the time to test his theories of network operations of cortical systems, he focussed in his later work on more conceptual rather than neural-network-based approaches, which he applied to understanding vision, with limited success at least in understanding invariant object recognition (Marr, 1982), again related to the lack of available experimental data (Rolls, 2011c).

Very stimulating advances in thinking about cortical function were made in books by Abeles (1991), Braitenberg and Schutz (1991) and Creutzfeldt (1995), but many advances have been made since those books (Rolls, 2016b).

Theories of operation are essential to understanding the brain, for example theories of collective computation in attractor networks, and of emergent properties. Understanding cannot be achieved just by molecular biology, though that provides useful tools, and potentially ways to ameliorate brain dysfunction.
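As an illustration of what collective computation in an attractor network means, here is a minimal Hopfield-style autoassociation network in Python, of the general kind that has been proposed for recurrent collateral connections such as those of hippocampal CA3. The network size, the number of stored patterns, and the binary +1/-1 coding are simplifying assumptions made for this sketch; the point is the emergent property that a whole stored pattern is retrieved from a degraded cue by the collective dynamics of the network.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 200      # size of the (hypothetical) recurrent network
n_patterns = 10      # number of random memories to store

# Random binary (+1/-1) patterns standing in for distributed firing patterns.
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

# Hebbian learning on the recurrent collateral synapses.
W = patterns.T @ patterns / n_neurons
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(cue, n_sweeps=20):
    """Iterate the network to a stable state (an attractor) from a cue."""
    state = cue.copy()
    for _ in range(n_sweeps):
        # Asynchronous updates in random order.
        for i in rng.permutation(n_neurons):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Degrade one stored pattern: flip 30% of its elements to make a partial cue.
target = patterns[0]
cue = target.copy()
flip = rng.choice(n_neurons, size=int(0.3 * n_neurons), replace=False)
cue[flip] *= -1

retrieved = recall(cue)
print(f"overlap of cue with stored pattern:      {(cue @ target) / n_neurons:.2f}")
print(f"overlap of retrieved state with pattern: {(retrieved @ target) / n_neurons:.2f}")
```

Completion of a whole memory from a partial cue in this way is exactly the kind of global, emergent property that cannot be seen at the level of any single synapse or molecule, which is the sense in which theories of network operation are needed.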

I emphasize that to understand brain function, including cortical function, and processes such as memory, perception, attention, and decision-making in the brain, we are dealing with large-scale computational systems with interactions between the parts, and that this understanding requires analysis at the computational and global level of the operation of many neurons together performing a useful function. Understanding at the molecular level is important for helping to understand how these large-scale computational processes are implemented in the brain, but will not by itself give any account of what computations are performed to implement these cognitive functions. Instead, understanding cognitive functions such as object recognition, memory recall, attention, and decision-making requires single neuron data to be closely linked to computational models of how the interactions between large numbers of neurons and many networks of neurons allow these cognitive problems to be solved. The single neuron level is important in this approach, for single neurons can be thought of as the computational units of the system, and this is the level at which information is exchanged, by spiking activity, between the computational elements of the brain. The single neuron level is therefore, because it is the level at which information is communicated between the computing elements of the brain, the fundamental level of information processing, and the level at which the information can be read out (by recording the spiking activity) in order to understand what information is being represented and processed in each cortical area.

Now let's look at how neurons are organized into a neural network in our brain.
