Limitations of Neural Engineering Framework (NEF)

All frameworks have limitations. Although I ask a lot of questions on this site regarding the advantages of the Neural Engineering Framework (NEF), there must also be significant limitations. What are they?


The limitations of the NEF fall into three main categories.

1. Simplistic Use of Neurotransmitters

The NEF uses neurotransmitters and their rate of propagation to set the time constant of an exponential low-pass filter applied to spikes, and to modulate learning rules (e.g., dopamine). This is currently the only use of neurotransmitters in the NEF, despite there being many other signalling and modulatory functions they could serve.
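For concreteness, here is a minimal NumPy sketch of the kind of synaptic filtering described above; the spike rate, time step, and time constant are illustrative values, not ones prescribed by the NEF.

    import numpy as np

    dt = 0.001   # simulation time step (s)
    tau = 0.005  # synaptic time constant (s); value illustrative
    steps = 1000

    # Illustrative ~50 Hz Poisson spike train; dividing by dt gives each
    # spike unit area so the filtered trace approximates the firing rate.
    rng = np.random.default_rng(0)
    spikes = (rng.random(steps) < 50 * dt).astype(float) / dt

    # Exponential low-pass filter: the discretized form of
    #   tau * dpsc/dt = -psc + spike(t)
    psc = np.zeros(steps)
    decay = np.exp(-dt / tau)
    for t in range(1, steps):
        psc[t] = decay * psc[t - 1] + (1 - decay) * spikes[t]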

2. Limitations When Using Complex Neuron Models

Although alternative filters are being investigated by Aaron Voelker, the vast majority of models, as mentioned in the previous point, still rely on a single synaptic time constant and on linear decoding. More complex neuron models, although easily included in a feed-forward connection, usually have multiple time constants and nonlinear synaptic effects, which makes it very difficult to capture their dynamics. This is covered in chapter 4 of Bryan Tripp's PhD thesis.
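To see why a single time constant falls short, compare the NEF's standard single-exponential kernel with a double-exponential kernel of the kind produced by more detailed synapse models; the time constants below are illustrative, and the point is that no single choice of tau matches the slower kernel at both early and late times.

    import numpy as np

    dt = 0.001
    t = np.arange(0, 0.1, dt)

    # Standard NEF filter: a single-exponential kernel.
    tau = 0.01
    h_single = np.exp(-t / tau) / tau

    # Double-exponential kernel with distinct rise and decay time
    # constants (illustrative values), normalized to unit area.
    tau_rise, tau_decay = 0.002, 0.05
    h_double = (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / (tau_decay - tau_rise)

    # The early rise and the long tail of h_double cannot both be
    # matched by h_single for any single choice of tau.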

3. Lack of Developmental Explanation

This last problem may be more a reflection of the state of neuroscience than an actual limitation of the NEF.

The NEF describes fully formed, adaptive, but non-developmental networks. Although the NEF can learn functions, it does not describe how structures such as the basal ganglia came to be formed. Nor does it describe how neurogenesis might occur within the framework, although it is easy to imagine how neurogenesis might be leveraged to increase the accuracy of certain populations.

Conclusion

Although the NEF is the only framework I know of that tries to base behaviour on a biologically plausible substrate, it is not the silver bullet of theoretical neuroscience. It is very much a zeroth-order guess at the computations the brain may perform.


Acknowledgments

We thank Abigail Morrison for her helpful comments on an earlier version of this manuscript. We thank Evgeniy Krysanov, an independent programmer, for bootstrapping the web application code base. Additionally, we thank Thomas Wachtler for supporting the project. We gratefully acknowledge hardware and funding from the German Neuroinformatics Node (G-Node; BMBF grant 01GQ0801). Partial funding by the German Federal Ministry of Education and Research (BMBF grant 01GQ0420 to BCCN Freiburg, 01GQ0830 to BFNT Freiburg/Tuebingen), the BrainLinks-BrainTools Cluster of Excellence funded by the German Research Foundation (DFG, grant EXC 1086), the Helmholtz Alliance on Systems Biology (Germany), and the Helmholtz Portfolio Theme “Supercomputing and Modeling for the Human Brain” is gratefully acknowledged. The article processing charge was funded by the German Research Foundation (DFG) and the Albert Ludwigs University Freiburg in the funding programme Open Access Publishing.


3 Neural Engineering Framework

Figure: Neural engineering framework (NEF). The NEF uses the same static nonlinearity f for the neural network dynamics and for encoding. Decoding is done via a linear readout, and feature dynamics are linear. Consistency is required only for x values of interest (depicted in blue).
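To make the caption's three ingredients concrete (a shared static nonlinearity f, a linear readout, and consistency only over the x values of interest), here is a minimal NumPy sketch using a rectified-linear rate neuron as f; the encoder, gain, and bias distributions are illustrative choices, not those of any particular published model.

    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = 50

    # Each neuron's tuning curve is the shared static nonlinearity f
    # applied to a gain- and bias-shifted projection of x onto its encoder.
    encoders = rng.choice([-1.0, 1.0], n_neurons)
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)

    def f(J):
        return np.maximum(J, 0.0)  # rectified-linear rate neuron

    # Consistency is only required over the x values of interest: [-1, 1].
    x = np.linspace(-1, 1, 200)
    A = f(gains * encoders * x[:, None] + biases)  # shape (200, n_neurons)

    # Linear readout: least-squares decoders d such that A @ d ~ x.
    d, *_ = np.linalg.lstsq(A, x, rcond=None)
    print("RMSE:", np.sqrt(np.mean((A @ d - x) ** 2)))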


VOLUME 1

Elliot S. Krames, …, Farag Aboelsaad, in Neuromodulation, 2009

Other Definitions and Terms

The term neuromodulation can be defined as a technology that impacts upon neural interfaces and is the science of how electrical, chemical, and mechanical interventions can modulate or change central and peripheral nervous system functioning. It is a form of therapy in which neurophysiological signals are initiated or influenced with the intention of achieving therapeutic effects by altering the function and performance of the nervous system. The term neuromodulation, in the opinion of these authors, should replace other terms that are relevant to the field and are being used, including neuroaugmentation, neurostimulation, neuroprosthetics, functional electrical stimulation, assistive technologies, and neural engineering (Sakas et al., 2007). These terms have much overlap and tend to confuse the uninitiated.

Neuroaugmentation is defined by the OnLine Medical Dictionary as the use of electrical stimulation to supplement the activity of the nervous system. Neurostimulation is the process or technology that applies electrical currents, in varying parameters, by means of implanted electrodes to achieve functional activation or inhibition of specific neuronal groups, pathways, or networks. Functional electrical stimulation, also known as FES, is defined as a technique that uses electrical currents to activate nerves innervating extremities affected by paralysis resulting from spinal cord injury (SCI), head injury, stroke, or other neurological disorders, restoring function in people with disabilities (Wikipedia: Functional Electrical Stimulation). FES is electrical stimulation of a muscle, under normal control, to produce a functionally useful contraction; therefore, electrical stimulation that produces only a sensory response would generally not be termed FES, and electrical stimulation that reduces pain is also not FES. Neuroprosthetics “is a discipline related to neuroscience and biomedical engineering concerned with developing neural prostheses, artificial devices to replace or improve the function of an impaired nervous system. The neuroprosthetic that has the most widespread use today is the cochlear implant with approximately 100 000 in worldwide use as of 2006” (Wikipedia: Neuroprosthetics). Neural engineering is an emerging interdisciplinary field of research that uses engineering techniques to investigate the function and manipulate the behavior of the central or peripheral nervous systems. The field draws heavily on the fields of computational neuroscience, experimental neuroscience, clinical neurobiology, electrical engineering and signal processing of living neural tissue, and encompasses elements from robotics, computer engineering, neural tissue engineering, materials science and nanotechnology (Answers.com).


Christopher Eliasmith

Chris Eliasmith is a Professor jointly appointed in the Systems Design Engineering and Philosophy departments, and cross-appointed to Computer Science, at the University of Waterloo. He is the Director of the Centre for Theoretical Neuroscience at Waterloo, as well as the Canada Research Chair in Theoretical Neuroscience. The Centre focuses on mathematical characterizations of a variety of neural systems, from individual ion channels to large-scale networks.

Professor Eliasmith heads the Computational Neuroscience Research Group (CNRG), which is developing a framework for modelling the function of complex neural systems (the Neural Engineering Framework or NEF). The NEF is grounded in the principles of signal processing, control theory, statistical inference, and good engineering design, while providing a rational and robust strategy for simulating and evaluating a variety of biological neural circuit functions. Members of the group apply the NEF to projects characterizing sensory processing, motor control, and cognitive function.

Work at the CNRG is divided into applications and theoretical development. Theoretical work includes extending the NEF to be more general (e.g., accounting for a wider range of single-cell dynamics), more biologically plausible (e.g., capturing network physiology and topology more precisely), and more adaptive (e.g., including better adaptive filtering, learning, etc.). The CNRG members are exploring general brain functions to explain not only how neural systems implement complex dynamics (the focus of the NEF), but also what neural systems are designed to do in general – i.e., what the basic functional principles of the brain are.

Regarding applications, the CNRG is building complex single-cell networks to test hypotheses about the functioning of a given neural system, as well as artificial intelligence and robotics applications. The simulation results are compared against available neural and behavioural data, and are then used to make novel predictions. CNRG members have constructed models of nonlinear adaptive arm control, working memory, locomotion, decision making, quadcopter control, the basal ganglia (implicated in Parkinson's disease), rodent navigation, and language use.

Professor Eliasmith’s research team built the world’s most complex functional simulation of the human brain, the Semantic Pointer Architecture Unified Network (SPAUN). SPAUN consists of 2.5 million simulated neurons, enabling it to perform eight tasks such as copy drawing, counting, answering questions, and fluid reasoning. It runs on a supercomputer and has a digital eye used for visual input and a robotic arm used for drawing its responses.

Professor Eliasmith is the author of “How to build a brain: A neural architecture for biological cognition” and co-author of “Neural Engineering: Computation, representation and dynamics in neurobiological systems”.


1. Introduction

Modeling the human brain is one of the greatest scientific challenges of our time. Computational neuroscience has made significant advancements, from simulating low-level biological parts in great detail to solving high-level problems that humans find difficult; however, we still lack a mathematical account of how biological components implement cognitive functions such as sensory processing, memory formation, reasoning, and motor control. Much work has been put into neural simulators that attempt to recreate neuroscientific data in precise detail, with the thought that cognition will emerge by connecting detailed neuron models according to the statistics of biological synapses (Markram, 2006). However, cognition has not yet emerged from data-driven large-scale models, and there are good reasons to think that cognition may never emerge (Eliasmith and Trujillo, 2013). At the other end of the spectrum, cognitive architectures (Anderson et al., 2004; Aisa et al., 2008) and machine learning approaches (Hinton and Salakhutdinov, 2006) have solved high-dimensional statistical problems, but do so without respecting biological constraints.

Nengo is a neural simulator based on a theoretical framework proposed by Eliasmith and Anderson (2003) called the Neural Engineering Framework (NEF). The NEF is a large-scale modeling approach that can leverage single neuron models to build neural networks with demonstrable cognitive abilities. Nengo and the NEF have been used to build increasingly sophisticated neural subsystems over the last decade [e.g., path integration (Conklin and Eliasmith, 2005), working memory (Singh and Eliasmith, 2006), list memory (Choo and Eliasmith, 2010), inductive reasoning (Rasmussen and Eliasmith, 2014), motor control (DeWolf and Eliasmith, 2011), decision making (Stewart et al., 2012)], culminating recently with Spaun, currently the world's largest functional brain model (Eliasmith et al., 2012). Spaun is a network of 2.5 million spiking neurons that can perform eight cognitive tasks including memorizing lists and inductive reasoning. It can perform any of these eight tasks at any time by being presented the appropriate series of images representing the task to be performed; for example, sequentially presenting images containing the characters A3[1234] instructs Spaun to memorize the list 1234. If asked to recall the memorized list, Spaun generates motor commands for a simulated arm, writing out the digits 1234. While the tasks that Spaun performs are diverse, all of the tasks use a common set of functional cortical and subcortical components. Each functional component corresponds to a brain area that has been hypothesized to perform those functions in the neuroscientific literature.

The NEF provides principles to guide the construction of a neural model that incorporates anatomical constraints, functional objectives, and dynamical systems or control theory. Constructing models from this starting point, rather than from single cell electrophysiology and connectivity statistics alone, produces simulated data that explains and predicts a wide variety of experimental results. Single cell activity (Stewart et al., 2012), response timing (Stewart and Eliasmith, 2009), behavioral errors (Choo and Eliasmith, 2010), and age-related cognitive decline (Rasmussen and Eliasmith, 2014) of NEF-designed models match physiological and psychological findings without being built specifically into the design. These results are a consequence of the need to satisfy functional objectives within anatomical and neurobiological constraints.

The transformation principle of the NEF proposes that the connection weight matrix between two neural populations can compute a non-linear function, and can be factored into two significantly smaller matrices. By using these factors instead of full connection weight matrices, NEF-designed models are more computationally efficient, which allows Nengo to run large-scale neural models on low-cost commodity hardware.
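A small sketch of this factorization (with random placeholders standing in for actual decoders and encoders): propagating activities through the two factor matrices gives the same result as the full weight matrix while storing and multiplying far fewer numbers.

    import numpy as np

    n_pre, n_post, dims = 1000, 1000, 1

    # Decoders approximate some function of the value represented by the
    # presynaptic population; encoders project it into the postsynaptic
    # population. Random placeholders are used here; the shapes are the point.
    decoders = np.random.randn(n_pre, dims)
    encoders = np.random.randn(n_post, dims)

    # Full weight matrix: n_post * n_pre = 1,000,000 entries.
    W = encoders @ decoders.T

    # Factored form: two matrices totalling 2,000 entries, same result.
    activities = np.random.rand(n_pre)
    assert np.allclose(W @ activities, encoders @ (decoders.T @ activities))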

In order to make Nengo simpler, more extensible, and faster, we have rewritten Nengo 2.0 from scratch in Python, leveraging NumPy (Oliphant, 2007) for manipulating large amounts of data. While NumPy is its only dependency, Nengo contains optional extensions for plotting if Matplotlib is available (Hunter, 2007) and for interactive exploration if IPython is available (Pérez and Granger, 2007). Since Nengo only depends on one third-party library, it is easy to integrate Nengo models in arbitrary CPython programs, opening up possibilities for using neurally implemented algorithms in web services, games, and other applications.
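As an illustration, a minimal script in the style of Nengo's front-end API is sketched below; the exact details reflect later Nengo 2.x releases and should be checked against the version in use.

    import numpy as np
    import nengo

    # A sine-wave input, an ensemble of 100 spiking neurons representing
    # it, and a second ensemble receiving its square via a decoded
    # connection.
    with nengo.Network() as model:
        stim = nengo.Node(output=np.sin)
        a = nengo.Ensemble(n_neurons=100, dimensions=1)
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=np.square)
        probe = nengo.Probe(b, synapse=0.01)

    # Model creation and simulation are decoupled: only this line changes
    # when another back end (e.g., the OpenCL simulator) is swapped in.
    with nengo.Simulator(model) as sim:
        sim.run(1.0)

    print(sim.data[probe].shape)  # (1000, 1) at the default 1 ms time step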

Nengo 2.0 has a simple object model, which makes it easy to document, test, and modify. Model creation and simulation are decoupled, allowing for models to be run with other simulators as drop-in replacements for Nengo 2.0's platform-independent reference simulator. To date, we have implemented one other simulator that uses PyOpenCL (Klöckner et al., 2012) to take advantage of a GPU or multicore CPU. The OpenCL simulator can simulate large models on the scale of Spaun at least 50 times faster than Nengo 1.4 using inexpensive commodity hardware.

In all, Nengo 2.0 provides a platform for simulating larger and more complex models than Spaun, and can therefore further test the NEF as a theory of neural computation.


References

Jitsev J, Morrison A, Tittgemeyer M: Learning from positive and negative rewards in a spiking neural network model of basal ganglia. The 2012 International Joint Conference on Neural Networks (IJCNN). IEEE. 2012.

Quigley M, et al.: ROS: an open-source Robot Operating System. ICRA Workshop on Open Source Software. 2009, 3 (3.2).

Djurfeldt M, et al.: Run-time interoperability between neuronal network simulators based on the MUSIC framework. Neuroinformatics. 2010, 8 (1): 43-60.

Eliasmith C, Anderson CH: Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press. 2003.

Braitenberg V: Vehicles: Experiments in Synthetic Psychology. MIT Press. 1986.


Author Biographies

Shruti R. Kulkarni is a postdoctoral research associate at Oak Ridge National Laboratory (ORNL). She received her Ph.D. in Electrical Engineering from the New Jersey Institute of Technology in 2019 under the supervision of Dr. Bipin Rajendran, where she completed her dissertation on bio-inspired learning and hardware acceleration with emerging memories. Her research at ORNL is in algorithms and applications of neuromorphic computing. She has 15 peer-reviewed journal and conference publications. Her research interests are in neuromorphic computing and applications, hardware design, and non-volatile memories.

Maryam Parsa has been a postdoctoral research associate at Oak Ridge National Laboratory (ORNL) since October 2020. She received her PhD from the Electrical and Computer Engineering department at Purdue University in 2020 with a prestigious four-year Intel/SRC PhD fellowship at the C-BRIC center under the supervision of Prof. Kaushik Roy. During her graduate studies she interned as a graduate researcher at Intel Corporation for two consecutive summers and at ORNL as an ASTRO fellow for 15 months. Her primary research interests are neural architecture search (NAS), hyperparameter optimization, and neuromorphic computing.


Modeling the Mental Lexicon with the NEF and SPA

To produce and understand words, humans access the mental lexicon. From a functional perspective, the long-term memory component of the mental lexicon comprises three levels: the concept level, the lemma level, and the phonological level. At each level, different kinds of word information are stored. Semantic as well as phonological cues can help to facilitate word access during a naming task, especially when neural dysfunctions are present. The processing corresponding to word access occurs in specific parts of working memory. Neural models for simulating speech processing help to uncover the complex relationships that exist between neural dysfunctions and corresponding behavioral patterns.

The Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) are used to develop a quantitative neural model of the mental lexicon and its access during speech processing. By simulating a picture-naming task (WWT 6-10), the influence of cues is investigated by introducing neural dysfunctions within the neural model at different levels of the mental lexicon.

First, the neural model is able to simulate the test behavior for normal children that exhibit no lexical dysfunction. Second, the model shows worse results in test performance as larger degrees of dysfunction are introduced. Third, if the severity of dysfunction is not too high, phonological and semantic cues are observed to lead to an increase in the number of correctly named words. Phonological cues are observed to be more effective than semantic cues.

Our simulation results are in line with human experimental data. Specifically, phonological cues seem not only to activate phonologically similar items within the phonological level, but also to support higher-level processing during access of the mental lexicon. Thus, the neural model introduced in this paper offers a promising approach to modeling the mental lexicon, and to incorporating the mental lexicon into a complex model of language processing.


4 Semantic pointers

In its most basic form, a semantic pointer can be thought of as a compressed representation that captures summary information about a particular domain. Typically, such representations derive from perceptual inputs. An image of an object in one's visual field, for instance, will initially be encoded as a pattern of activity in a very large population of neurons. Through transformations of the sort described above, however, further layers of neural populations produce increasingly abstract statistical summaries of the original visual input (see Fig. 1). Eventually, a highly compressed representation of the input can be produced. Such a characterization is consistent both with the decrease in the number of neurons found in later hierarchical layers of the visual cortex, and with the development of neurally inspired hierarchical statistical models for dimensionality reduction (Hinton & Salakhutdinov, 2006; Serre, Oliva, & Poggio, 2007). Analogous representations can be generated in other modalities such as audition and sensation.

Compressed representations of this sort are called semantic pointers because they retain semantic information about the states they represent, by virtue of being non-arbitrarily related to those states through the compression process. They are referred to as pointers because they can be used to “point to” or regenerate representations at lower levels in the compression network (Hinton & Salakhutdinov, 2006). Moreover, any given semantic pointer can be manipulated independently of the network that is used to generate it. A semantic pointer of a table percept, for example, could be used in cognitive tasks related to tables without necessarily prompting a reactivation of the richer perceptual representations at the bottom of the relevant compression network.

The computational power of semantic pointers lies in their ability to be bound together (using compression operations such as circular convolution) into highly structured representations containing lexical, perceptual, and motor information from a variety of sources. Importantly, such structured representations are themselves semantic pointers, because they can point to and regenerate the subordinate representations from which they are built. Consider again the toy example of a table. Using recursive binding of the sort already described, which is a compression operation, semantic pointers for visual and tactile images of tables could be combined, along with pointers for an auditory image of the sound “table” and a visual image of the letters “t-a-b-l-e.” Additionally, various structures corresponding to verbal information like “has a flat surface” or “used for eating meals” might also be bound together. These structures would themselves be built out of other semantic pointers, including compressed visual images of flat surfaces, meal settings, and so on. Overall, the result of these numerous binding operations is a single representation that captures relations among a wide range of table-related contents. And this single representation can be transformed in numerous ways to re-access images of tables, verbal information about tables, or motor commands commonly used to interact with tables.
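A minimal sketch of binding and unbinding with circular convolution, using random unit vectors as stand-ins for semantic pointers (the names visual and word are purely illustrative):

    import numpy as np

    def bind(a, b):
        # Circular convolution: compresses two vectors into one of the same size.
        return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

    def unbind(a, b):
        # Circular correlation: approximately inverts binding.
        return np.fft.irfft(np.fft.rfft(a).conj() * np.fft.rfft(b), n=len(a))

    rng = np.random.default_rng(0)
    dims = 512
    visual = rng.standard_normal(dims)
    visual /= np.linalg.norm(visual)
    word = rng.standard_normal(dims)
    word /= np.linalg.norm(word)

    table = bind(visual, word)         # still dims-dimensional
    recovered = unbind(visual, table)  # noisy approximation of word
    print(np.dot(recovered, word))     # well above chance for large dims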

At this point, it should be clear that semantic pointers are highly applicable to the explanation of conceptual phenomena. They can account for symbolic processes, perceptual simulations, and a host of other functions all centered upon a single object class. In other words, they can act as a summary representation of a category of things in the world, which is precisely what a concept is often taken to be. Successful neural simulations of conceptual tasks such as simple linguistic inference (Eliasmith, 2013), inductive reasoning (Rasmussen & Eliasmith, 2011), and rule-based problem solving (Stewart & Eliasmith, 2011) have all been produced using semantic pointers, along with a large-scale brain model capable of executing a variety of cognitive functions (Eliasmith et al., 2012). Based on these successful applications, we think that the notion of a semantic pointer provides an ideal foundation for accounting for a wide range of conceptual phenomena.

