Project Ideas (PRELIMINARY)
The class project is intended to provide in-depth experience in
reading the modeling and biological literature, as well as in the
construction of models. In order to make clear progress on both
fronts by the end of the semester, it is important to be very focused
in what one selects for the project. The modeling component should
be based on a core of about 3 papers (2 modeling, 1 experimental or 1
modeling, 2 experimental; one may be a paper that we have already
read). You may end up reviewing a few additional papers before
selecting these three. The model may be a re-implementation of an
existing model, but needs to involve some novel component. A number
of ideas are included in this document.
As with the homework assignments, students may work in pairs. Ideally,
these pairs will be cross-departmental and different from the homework
pairings.
- By October 16th: a 1/2 page description of the
area and the problem you think you would like to tackle for
your project. If possible, include references to papers that
are likely to become one of your core papers.
This should be written in conjunction with your
partner. Also be prepared on this day to describe your project to the
class in about 10 minutes.
Describe the problem that you will be tackling, why it is interesting,
what the basics are of the biological background, and a description
of the model (include ideas on what novel manipulations you may be
doing if you are implementing an existing model).
- November 8th: By this point in time, you should have decided on your
three papers and begun to implement your model. Submit an outline
of your final report with the relevant details filled in (at least
in bullet form). Also be prepared to describe your project in
approximately 10 minutes to the class.
- November 27th: You should have some preliminary results from your
model. Write these up and add them to your document. Hand this in.
- December 11th & 13th: 25 minute project reports in class.
- December 20th: final written project report due.
Models of MI Production of Arm Movements
All of the models proposed by Georgopoulos et al., Caminiti et al.,
Mussa-Ivaldi, Scott and Kalaska, and Ajemian et al. presume that MI
cells fire as some function of extrinsic and/or intrinsic variables.
However, none of these models is actually required to produce movements
of the correct direction and magnitude. What are the implications
when we instead formulate the problem in terms of producing the
correct movement? In other words, we do not make assumptions about
the behavior of MI neurons. Instead, we can place constraints on
the inputs (visual representation of the target) and the outputs (getting
the arm to the target), and apply some optimization technique to
adjust the connections between the neural units so as to satisfy
these constraints.
One possibility might be to utilize a backpropagation network,
although some experimentation will be necessary in order to decide on
the right types of non-linear units. It may also be necessary to make
adjustments to the error criterion (perhaps adding a term that prefers
neurons to be off if possible). Issues/questions/problems that are
relevant to this problem include:
- How to classify (hidden) neurons (e.g., into intrinsic and extrinsic
categories)?
- Design techniques for describing the behavior of cells without
placing them into discrete bins. Can we use this to compare the behavior
of two populations of cells?
- How much do these classifications vary with different random seeds?
With different choices of error criteria? With different numbers of cells?
Note that a 2DOF muscle geometry model is available in MATLAB.
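As a starting point, the optimization idea can be sketched as a small backpropagation network trained to reproduce target directions at its output. This is only an illustration: the "arm" here is an identity map rather than the 2DOF muscle model, and the network sizes, learning rate, and activity-penalty weight are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: inputs are 2-D target directions; outputs are 2-D movement
# vectors. The network must learn to reproduce the target at its output.
n_in, n_hid, n_out = 2, 30, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
lam = 0.01      # weight on the "prefer neurons off" activity penalty
eta = 0.1       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training set: unit vectors in 16 directions around the circle.
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)

for epoch in range(5000):
    H = sigmoid(X @ W1.T)          # hidden ("MI cell") activities
    Y = H @ W2.T                   # linear output = movement vector
    err = Y - X                    # movement error
    # Backprop, with an extra L2 penalty on hidden activity folded in.
    dW2 = err.T @ H / len(X)
    dH = err @ W2 + lam * H        # penalty gradient pushes activity down
    dW1 = (dH * H * (1 - H)).T @ X / len(X)
    W2 -= eta * dW2
    W1 -= eta * dW1

H = sigmoid(X @ W1.T)
print("mean movement error:", np.abs(H @ W2.T - X).mean())
```

After training, the hidden units can be probed across directions and classified, which is exactly where the issues listed above come in.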
Extensions of the Shah Model of MI Recruitment
The Shah model focuses only on the use of extrinsic-like cells in the
production of the correct muscle recruitment pattern. We would like
to account for the origin of the intrinsic cells without committing to
a particular way in which they are wired. One way to approach this
problem is to apply a learning algorithm (e.g., as above) to acquire a
mapping between the sensory inputs and a muscle activation pattern.
Similar questions apply about the selection of optimality criteria and how
to talk about the behavior of the population as a whole. Furthermore,
since the Kakei paper provides a nice description of the changes in preferred
direction as a function of movement conditions, these experimental
results can be compared to the behavior of the model (and it might
be possible for us to get our hands on the raw data).
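One way to describe a unit's behavior without discrete bins is to fit a cosine tuning curve and report its preferred direction and tuning depth; the same fit applied under different movement conditions would expose Kakei-style shifts in preferred direction. A sketch with synthetic data (the true preferred direction, firing rates, and noise level are all invented; with a trained model, `rates` would be the unit's activities):

```python
import numpy as np

# Estimate a cell's preferred direction (PD) by regressing its activity
# on cos/sin of movement direction -- a continuous description of the
# cell rather than a discrete category.
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
true_pd = 1.2                                   # radians (synthetic)
rng = np.random.default_rng(1)
rates = 5 + 3 * np.cos(angles - true_pd) + rng.normal(0, 0.1, 24)

# Least-squares fit of r = b0 + b1*cos(theta) + b2*sin(theta);
# then PD = atan2(b2, b1) and tuning depth = sqrt(b1^2 + b2^2).
A = np.stack([np.ones_like(angles), np.cos(angles), np.sin(angles)], axis=1)
b0, b1, b2 = np.linalg.lstsq(A, rates, rcond=None)[0]
pd = np.arctan2(b2, b1)
depth = np.hypot(b1, b2)
print(f"estimated PD = {pd:.2f} rad, depth = {depth:.2f}")
```

Comparing the distribution of (PD, depth) pairs across two conditions or two populations is then a matter of comparing the fitted parameters rather than bin counts.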
Differences Between Todorov and Shah
The Todorov (2001) paper claims to prove that muscles are optimally
recruited in a truncated cosine fashion. However, with the Shah model
(Fagg, Shah, and Barto) we demonstrate a clear counterexample
to this. What is different about these two approaches that results in
this apparent contradiction?
Basal Ganglia and Reinforcement Learning
The Basal Ganglia consist of a set of subcortical regions which are
heavily involved in (among other things) motor control and
reinforcement-based learning. Although the details of the learning
processes differ in subtle (but important) ways from the machine
learning notion of reinforcement learning, some of the parallels are
quite surprising. For example, neurons in part of the Striatum appear
to correlate with the monkey's expectation of future reward: if the
monkey performs a task properly, and as a result expects a reward, a
larger number of cells in this region will fire vigorously. Downstream
from the Striatum, cells in the Substantia Nigra
pars compacta (SNpc) respond to situations in which the monkey
suddenly moves from a state of not expecting a reward to one in which
a reward is expected. These cells produce the neurotransmitter
dopamine, which (in certain conditions) leads to synaptic changes in
the target cells. Target regions of these axons include the Striatum
and parts of cortex. In the machine learning terminology, we would
classify these dopaminergic signals as being something like the
"temporal difference" between two states.
One possible project would be to construct an explicit implementation of
the conceptual model presented by Graybiel (1998).
Other possible references:
- Wolfram Schultz, Paul Apicella, and Tomas Ljungberg (1993)
Responses of Monkey Dopamine Neurons to Reward and Conditioned Stimuli during Successive Steps of Learning a Delayed Response Task, Journal of Neuroscience, 13(3):900-913
- Wolfram Schultz (1992) Activity of dopamine neurons in the behaving primate, The Neurosciences 4:129-138
- Wolfram Schultz, Peter Dayan, P. Read Montague (1997) A Neural Substrate of Prediction and Reward, Science 275:1593-1599
- James C. Houk, James L. Adams, and Andrew G. Barto (1995) A
Model of How the Basal Ganglia Generate and Use Neural Signals That
Predict Reinforcement, Chapter 13 of Models of Information
Processing in the Basal Ganglia (James C. Houk, Joel L. Davis, and David G. Beiser, eds), pp.249-270
Development of Orientation-Selective Cells in Visual Cortex
Cells in the primary visual cortex are often selective for oriented
edges. Studies have shown that this behavior is not hardwired, but
instead is the result of a developmental process that requires the
occurrence of oriented edges in the visual stream. Models that attempt to
explain this process often rely on unsupervised learning techniques
in which cells compete with one another to represent different types
of visual input.
- David C. Somers, Sacha B. Nelson, and Mriganka Sur (1995)
An Emergent Model of Orientation Selectivity in Cat Visual Cortical Simple
Cells, Journal of Neuroscience, 15(8):5448-5465
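A minimal illustration of the competitive idea (not the Somers et al. model, which is a recurrent cortical circuit): winner-take-all units with normalized Hebbian updates, plus a frequency-sensitive "conscience" term so that no unit monopolizes the patterns. All sizes and constants here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_units = 8, 4
orientations = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])

def grating(theta):
    """A unit-norm 8x8 sinusoidal grating at orientation theta."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 4.0).ravel()
    return g / np.linalg.norm(g)

W = rng.normal(0, 1, (n_units, size * size))
W /= np.linalg.norm(W, axis=1, keepdims=True)
p = np.full(n_units, 1.0 / n_units)             # running win frequencies

for step in range(4000):
    x = grating(rng.choice(orientations))
    score = W @ x - 2.0 * (p - 1.0 / n_units)   # "conscience" penalty
    winner = np.argmax(score)
    p += 0.01 * ((np.arange(n_units) == winner) - p)
    W[winner] += 0.05 * x                       # Hebbian update, winner only
    W[winner] /= np.linalg.norm(W[winner])      # normalization bounds weights

for theta in orientations:
    resp = W @ grating(theta)
    print(f"theta={theta:.2f}: unit {np.argmax(resp)} responds {resp.max():.2f}")
```

After training, each orientation drives its own strongly tuned unit; the competition, not any orientation-specific wiring, produces the selectivity.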
Motor Pattern Generator Circuits
Motor pattern generators are circuits that produce some sequence of
movements. Often these are repeating sequences that control activities
such as walking or chewing. In some cases these generators act
in isolation; in others their behavior is influenced by sensory
(and other) inputs. Two possible directions of modeling include:
- Lobster chewing models. See the work of Eve Marder (e.g., Chapter 10
of Methods in Neuronal Modeling by Koch and Segev).
- Gait generation in insects or salamanders. See the work of
Randy Beer (note that his work stretches from the biological side
to the robotics side; you will need to stay toward the biology side
of this spectrum).
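The flavor of such circuits can be captured with a two-unit half-center sketch: a Matsuoka-style oscillator with mutual inhibition and slow adaptation. This is not any of the Marder or Beer models, and every parameter below is invented for illustration.

```python
import numpy as np

dt, T = 0.01, 30.0
tau, tau_a = 0.25, 2.0     # membrane and adaptation time constants
b, w, c = 2.5, 2.0, 1.0    # adaptation gain, inhibition weight, tonic drive
x = np.array([0.1, 0.0])   # membrane states (asymmetric start breaks the tie)
a = np.zeros(2)            # adaptation states
out = []
for _ in range(int(T / dt)):
    y = np.maximum(x, 0.0)                  # rectified firing rates
    dx = (-x - b * a - w * y[::-1] + c) / tau   # each unit inhibits the other
    da = (y - a) / tau_a                        # slow activity-dependent fatigue
    x += dt * dx
    a += dt * da
    out.append(y.copy())
out = np.array(out)

co_active = np.mean((out[1500:, 0] > 0.1) & (out[1500:, 1] > 0.1))
print("fraction of time both units active:", round(co_active, 3))
```

With these parameters neither unit can stay on forever (adaptation shuts it down) nor can both stay on together (mutual inhibition), so the units burst in alternation, like flexor/extensor phases of a gait.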
Hippocampus and Navigation
In the rat, the Hippocampus is heavily involved in the formation of
spatial maps. Many Hippocampal cells fire in response to the rat
being in a specific location, with the activation level dropping off
slowly as the rat moves away from the central "place" encoded by the
cell. By combining the current activity of many cells, it is possible
to estimate the rat's position. These cells can be activated by a
variety of sources, including visual cues, auditory cues,
proprioceptive/tactile cues (body position sense and touch sense), and
even motor efference copy (a copy of the motor signals that are
currently being sent to the muscles). Thus, the representation
appears to be truly a "cognitive map" that is independent of the
sensory information that was used to estimate the rat's current
location.
How are these representations constructed? How are they updated with new inputs? And how are they learned in the first place?
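The readout side of this story can be sketched as a center-of-mass estimate over a population of Gaussian place fields. The field centers, widths, and noise level below are invented; serious decoding work uses more careful (e.g., Bayesian) estimators.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 100
centers = rng.uniform(0, 1, (n_cells, 2))   # place-field centers in a 1x1 box
sigma = 0.15                                # place-field width

def rates(pos):
    """Noisy Gaussian place-cell responses to the rat being at `pos`."""
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)) + rng.normal(0, 0.02, n_cells)

true_pos = np.array([0.62, 0.31])
r = np.clip(rates(true_pos), 0, None)
# Center-of-mass readout: each cell "votes" for its field center,
# weighted by its current firing rate.
estimate = (r[:, None] * centers).sum(axis=0) / r.sum()
print("true:", true_pos, " decoded:", np.round(estimate, 2))
```

The same readout works regardless of which cue (visual, auditory, motor efference copy) drove the cells, which is one sense in which the map is cue-independent.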
- Neil Burgess, Michael Recce, and John O'Keefe (1995)
Hippocampus: Spatial Models, in
Handbook of Brain
Theory (Michael A. Arbib, ed.), MIT Press, pp. 468-472
- Touretzky, D. S. and Redish, A. D. (1996) Theory of
rodent navigation based on interacting representations of space.
- Redish, A.D. and Touretzky, D. S. (1997) Cognitive maps beyond
the Hippocampus. Hippocampus, 7(1):15-35.
- O'Keefe, J., and Burgess, N. (1996) Geometric
determinants of the place fields of hippocampal neurons. Nature.
Sliding Threshold Theory for Hebbian Learning (BCM Theory)
The Hebbian learning rule (if two neurons tend to fire together, then
increase the connection strength between them) has long been seen as a
possible mechanism for learning in the brain and has seen some use in
the artificial neural network (ANN) community. In order to build an
implementation of this learning rule, one must answer the question of
how to keep the connection strengths from growing in an unbounded
fashion. Typical approaches include capping each weight at a fixed
maximum, or normalizing the set of weights (e.g., by
keeping the sum of the weights at 1).
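A tiny numerical illustration of the unbounded-growth problem and the normalization fix (input statistics, rates, and sizes are arbitrary; the renormalized variant behaves like Oja's rule and converges toward the principal component of the input):

```python
import numpy as np

rng = np.random.default_rng(3)
# Anisotropic input: the first dimension has 9x the variance of the others.
X = rng.normal(0, 1, (1000, 5)) @ np.diag([3, 1, 1, 1, 1])

w_plain = rng.normal(0, 0.1, 5)
w_norm = w_plain.copy()
for x in X:
    y = w_plain @ x
    w_plain += 0.01 * y * x           # plain Hebbian: unbounded growth
    y = w_norm @ x
    w_norm += 0.01 * y * x
    w_norm /= np.linalg.norm(w_norm)  # renormalization bounds the weights

print("plain |w| =", round(np.linalg.norm(w_plain), 1))
print("normalized |w| =", round(np.linalg.norm(w_norm), 2))
```

The plain weight vector explodes, while the normalized one stays at unit length and ends up aligned with the high-variance input dimension.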
A more biological approach can be found in Sliding Threshold or BCM
Theory. In this approach, high short-term correlations between the
postsynaptic and presynaptic cells lead to an increase in connection
strength, whereas low (but still positive) correlations result in
decreases in strength. Furthermore, the dividing point between this
Hebbian and anti-Hebbian behavior is allowed to slide. Specifically,
when the average, long-term activity of the postsynaptic cell grows above a
certain level, the threshold increases (thus requiring higher levels
of correlation in order to induce Hebbian learning). The opposite
happens when the average activity drops below a critical level. Thus,
the rule is structured such that the postsynaptic cell attempts to
achieve a set of connections that lead to the right level of activity
(a sort of Goldilocks story).
How might these types of mechanisms be implemented biologically and what
might some of the computational implications be?
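A minimal sketch of the rule, using the common quadratic BCM form with a threshold that tracks a running average of the squared postsynaptic activity. The two orthogonal input patterns and all constants are invented, and this is not the Intrator & Cooper formulation in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])  # two alternating inputs
w = np.array([0.5, 0.5])       # initially unselective weights
theta = 0.25                   # sliding modification threshold
eta, tau = 0.02, 20.0          # learning rate; threshold time constant

for step in range(20000):
    x = patterns[rng.integers(2)]
    y = max(float(w @ x), 0.0)            # rectified postsynaptic activity
    w += eta * x * y * (y - theta)        # Hebbian above theta, anti- below
    w = np.clip(w, 0.0, None)
    theta += (y ** 2 - theta) / tau       # threshold slides with average y^2

resp = patterns @ w
print("responses to the two patterns:", np.round(resp, 2))
```

With equiprobable orthogonal patterns, the unselective state is unstable under this rule: the cell ends up responding strongly to one pattern and weakly to the other, with the threshold settling where the Hebbian and anti-Hebbian pressures balance.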
Other possible references:
- N. Intrator and L. N. Cooper (1992) Objective function formulation of
the BCM theory of visual cortical plasticity: statistical connections,
stability conditions, Neural Networks, 5(1):3-18
- Application of BCM to orientation selectivity learning in a
robot (note that this particular model may not be very biological).
- A bio paper on how BCM may be implemented.
- N. Intrator and Leon N. Cooper (1995)
BCM Theory of Visual Cortical Plasticity, in Handbook of Brain
Theory (Michael A. Arbib, ed.), MIT Press, pp. 153-157
- Thomas H. Brown and Sumantra Chattarji (1995)
Hebbian Synaptic Plasticity, in
Handbook of Brain
Theory (Michael A. Arbib, ed.), MIT Press, pp. 454-459
Last modified: Tue Oct 23 13:05:54 2001