Cognitive Representations of Space in Primate Posterior Parietal Cortex
Matt Chafee, Center for Cognitive Sciences, University of Minnesota.
Two types of representations: one dependent on sensorimotor information and one independent of it. "Thinking" requires the ability to separate from direct sensory input.
Most basic description of the brain: a sensorimotor interface. Every area of neocortex either represents sensory information, outputs motor commands, or connects between the two. How can cognition emerge in such an architecture?
Parietal cortex: important for spatial processing
Posterior parietal: spatial sensorimotor interface
----Visual spatial coordinate transformation
These are "logical attributes" of a sensorimotor interface. Seems to be a bottleneck where vision is funnelling into motor output--may be why parietal plays such a role in attention. "Transformation": visual information has to be transformed from one reference frame to another (not sure if I got that right).
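The reference-frame idea can be sketched with a toy calculation (my own illustration, not from the talk): a target's head-centered direction is its retina-centered position plus the current eye position, so a purely retinal code is ambiguous about where the target actually is.

```python
# Toy sketch of a spatial coordinate transformation (my illustration,
# not from the talk): converting a target's location from an
# eye-centered (retinal) reference frame to a head-centered one.

def retinal_to_head(target_retinal, eye_position):
    """Add the eye's orientation (deg) to the target's retinal
    coordinates (deg) to recover its head-centered direction."""
    rx, ry = target_retinal
    ex, ey = eye_position
    return (rx + ex, ry + ey)

# Same retinal input, different eye positions -> different
# head-centered locations (why a pure retinal code is ambiguous).
print(retinal_to_head((10, 0), (0, 0)))    # (10, 0)
print(retinal_to_head((10, 0), (-10, 5)))  # (0, 5)
```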
Direct projections from parietal to primary motor, superior colliculus: access to motor and visual.
Parietal neurons: particularly active when moving attention from one visual hemifield to the other. Maps very nicely onto hemispatial neglect.
Two basic forms of hemineglect:
--one centered in view-centered spatial coordinates (will only copy right side of scene)
--one object-centered (will copy right side of each object)
Neural correlates of coordinate transform:
--placed an object in the visual field while the monkey was looking somewhere else --> parietal neuron most strongly stimulated when attention was elsewhere.
How do you build cognition into these circuits?
1. Sensorimotor dependence: "neurons represent the location of a stimulus or the direction of a movement"
--attention == ?; working memory == a buffering operation; motor intention == an attempt to move the signal backwards; decision process == maximizing response?
--training a monkey to remember a target location: put a visual stimulus in the neuron's receptive field; it continues firing for 3 sec after the stimulus disappears.
2. Sensorimotor independence: neurons represent abstract spatial information generated by a cognitive operation.
Constructional apraxia: difficulty in computing spatial relationships
--took a test for apraxia and converted it into a form monkeys could perform. The monkey fixates the center of the display, then a configuration of shapes appears ("model"). Next, an "incomplete copy": the shape appears but is missing a piece. Important: all incomplete copies look identical regardless of which piece is missing. Each model is an inverted T composed of squares with an extra square attached somewhere, so the incomplete copy is always the bare inverted T. The monkey then gets a choice between squares to add to the shape to complete it appropriately, and must press a key at the right time (when the correct square is highlighted) to add it. The hope is to look at cognitive functions upstream of motor output.
--some cells: activity is correlated with the location of the missing element. The same input reaches the retina (the inverted T shape), but the cells fire differentially based on which square is missing from the shape.
--some cells: potentially firing in prediction that the inverted T will be presented to them. So they tested with an inverted T that had two extra squares, only one of which would disappear--confirming that the cell's firing correlates with the missing area (I'm not sure if I followed this right).
--controls to make sure this is not a motor output signal (e.g., one responsible for saccades): the cells show basically no activity in a saccade task, neither during the delay interval nor during the actual saccade.
--more than 40% of the cells measured by the array carry a signal involved in this: possibly training creates new cognitive functions (monkeys don't do this in the wild), maybe even developing new circuits during the course of training.
~85% predictive accuracy in determining the location of the missing square from the firing rates of these cells. Error trials: the monkey's erroneous square selection correlates with errors in firing rates.
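A decode like that could work something along these lines (entirely hypothetical data and a nearest-centroid classifier of my own choosing; the talk didn't specify the actual decoding method):

```python
# Hypothetical sketch of decoding the missing square's location from
# population firing rates (toy data and a nearest-centroid classifier
# of my own invention; not the talk's actual method).

def centroid(vectors):
    """Mean firing-rate vector across trials."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode(rates, centroids):
    """Return the location whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda loc: dist2(rates, centroids[loc]))

# Toy training trials: firing rates of 3 cells per trial, grouped by
# which square was missing from the model.
trials = {
    "left":  [[20, 5, 5], [18, 6, 4]],
    "right": [[5, 21, 6], [4, 19, 5]],
    "top":   [[6, 5, 22], [5, 4, 20]],
}
centroids = {loc: centroid(vs) for loc, vs in trials.items()}

print(decode([19, 5, 5], centroids))  # left
```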
Are the cells active whenever you direct attention to a specific location? No. When you just give the monkeys dots to look at in the same locations, there is little activity in the same population.
Neural activity during object construction:
--tried shifting the model L & R vs. shifting the incomplete object L & R: is the cell's activity tied to object-relative location, or to location relative to the screen or visual field? Cells consistently fire for the square in relation to the object, regardless of object location.
Then a bunch of stuff I didn't quite follow. Tired, need more coffee.
Because the object moved, they could check cell activity when attention (the missing square's location) fell in different parts of retinocentric space: activity changes based on where the missing square would be. So although the cells fire in relation to the object, not the visual field, there is some level at which location in the visual field affects firing strength.
--cells responding to the viewer-centered side precede cells responding to the object-centered side
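The object-vs-viewer distinction comes down to which coordinates you subtract (again a toy illustration of my own, not from the talk): an object-centered cell cares about the missing square's position relative to the object, which stays fixed when the object shifts across the screen.

```python
# Toy illustration (mine, not from the talk) of the two reference
# frames: shifting the object changes the missing square's
# viewer-centered (screen) location but not its object-centered one.

def object_centered(square_screen, object_screen):
    """Missing square's location relative to the object's center."""
    return (square_screen[0] - object_screen[0],
            square_screen[1] - object_screen[1])

# Object at two screen positions; missing square always one unit left.
for obj in [(0, 0), (5, 0)]:
    square = (obj[0] - 1, obj[1])                # viewer-centered: changes
    print(square, object_centered(square, obj))  # object-centered: (-1, 0)
```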
Conclusion: abstract representation
--encodes computed spatial information
--not spatial vision
--not motor planning
--not eye position
--predicts spatial choice
Q: The monkeys activated both object- and viewer-centered reference frames, then acted on one. This implies an ability to choose one over the other.
A: Discussion of future work
Q: These monkeys are very over-trained. Is it possible that you can push the training in one direction or the other--that you're training the monkeys to adopt one reference frame over another?