
Flexible coding of object motion in multiple reference frames by parietal cortex neurons

Abstract

Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. In this study, we examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.

Fig. 1: Schematic illustration of interactions between object motion and self-motion.
Fig. 2: Behavioral task design and predicted psychometric functions.
Fig. 3: Summary of behavioral performance for each task reference frame.
Fig. 4: Data from example neurons recorded from areas VIP and MSTl.
Fig. 5: Summary of single-unit results for VIP and MSTl.
Fig. 6: Summary of population decoding results.
Fig. 7: Time course of decoder performance.
Fig. 8: Time courses of classification accuracy using within-task versus cross-task decoding.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Code availability

Custom analysis code was written in MATLAB (R2018a). The MATLAB scripts used are available from the corresponding author upon reasonable request.

References

  1. Andersen, R. A., Essick, G. K. & Siegel, R. M. Encoding of spatial location by posterior parietal neurons. Science 230, 456–458 (1985).

  2. Avillac, M., Deneve, S., Olivier, E., Pouget, A. & Duhamel, J. R. Reference frames for representing visual and tactile locations in parietal cortex. Nat. Neurosci. 8, 941–949 (2005).

  3. Batista, A. P., Buneo, C. A., Snyder, L. H. & Andersen, R. A. Reach plans in eye-centered coordinates. Science 285, 257–260 (1999).

  4. Duhamel, J. R., Bremmer, F., Ben Hamed, S. & Graf, W. Spatial invariance of visual receptive fields in parietal cortex neurons. Nature 389, 845–848 (1997).

  5. Fetsch, C. R., Wang, S., Gu, Y., DeAngelis, G. C. & Angelaki, D. E. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. J. Neurosci. 27, 700–712 (2007).

  6. Galletti, C., Battaglini, P. P. & Fattori, P. Parietal neurons encoding spatial locations in craniotopic coordinates. Exp. Brain Res. 96, 221–229 (1993).

  7. Jay, M. F. & Sparks, D. L. Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature 309, 345–347 (1984).

  8. Lee, J. & Groh, J. M. Auditory signals evolve from hybrid- to eye-centered coordinates in the primate superior colliculus. J. Neurophysiol. 108, 227–242 (2012).

  9. Mullette-Gillman, O. A., Cohen, Y. E. & Groh, J. M. Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J. Neurophysiol. 94, 2331–2352 (2005).

  10. Mullette-Gillman, O. A., Cohen, Y. E. & Groh, J. M. Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cereb. Cortex 19, 1761–1775 (2009).

  11. Schlack, A., Sterbing-D’Angelo, S. J., Hartung, K., Hoffmann, K. P. & Bremmer, F. Multisensory space representations in the macaque ventral intraparietal area. J. Neurosci. 25, 4616–4625 (2005).

  12. Snyder, L. H., Grieve, K. L., Brotchie, P. & Andersen, R. A. Separate body- and world-referenced representations of visual space in parietal cortex. Nature 394, 887–891 (1998).

  13. Sajad, A. et al. Visual-motor transformations within frontal eye fields during head-unrestrained gaze shifts in the monkey. Cereb. Cortex 25, 3932–3952 (2015).

  14. Kiesel, A. et al. Control and interference in task switching-a review. Psychol. Bull. 136, 849–874 (2010).

  15. Ruge, H., Jamadar, S., Zimmermann, U. & Karayanidis, F. The many faces of preparatory control in task switching: reviewing a decade of fMRI research. Hum. Brain Mapp. 34, 12–35 (2013).

  16. Stoet, G. & Snyder, L. H. Neural correlates of executive control functions in the monkey. Trends Cogn. Sci. 13, 228–234 (2009).

  17. Stoet, G. & Snyder, L. H. Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron 42, 1003–1012 (2004).

  18. Kim, H. R., Pitkow, X., Angelaki, D. E. & DeAngelis, G. C. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J. Neurophysiol. 116, 1449–1467 (2016).

  19. Sasaki, R., Angelaki, D. E. & DeAngelis, G. C. Dissociation of self-motion and object motion by linear population decoding that approximates marginalization. J. Neurosci. 37, 11204–11219 (2017).

  20. Rushton, S. K. & Warren, P. A. Moving observers, relative retinal motion and the detection of object movement. Curr. Biol. 15, R542–R543 (2005).

  21. Warren, P. A. & Rushton, S. K. Optic flow processing for the assessment of object movement during ego movement. Curr. Biol. 19, 1555–1560 (2009).

  22. Royden, C. S. & Connors, E. M. The detection of moving objects by moving observers. Vision Res. 50, 1014–1024 (2010).

  23. Royden, C. S. & Holloway, M. A. Detecting moving objects in an optic flow field using direction- and speed-tuned operators. Vision Res. 98, 14–25 (2014).

  24. Fajen, B. R. & Matthis, J. S. Visual and non-visual contributions to the perception of object motion during self-motion. PLoS ONE 8, e55446 (2013).

  25. Dokka, K., MacNeilage, P. R., DeAngelis, G. C. & Angelaki, D. E. Multisensory self-motion compensation during object trajectory judgments. Cereb. Cortex 25, 619–630 (2015).

  26. MacNeilage, P. R., Zhang, Z., DeAngelis, G. C. & Angelaki, D. E. Vestibular facilitation of optic flow parsing. PLoS ONE 7, e40264 (2012).

  27. Eifuku, S. & Wurtz, R. H. Response to motion in extrastriate area MSTl: center-surround interactions. J. Neurophysiol. 80, 282–296 (1998).

  28. Tanaka, K., Sugita, Y., Moriya, M. & Saito, H. Analysis of object motion in the ventral part of the medial superior temporal area of the macaque visual cortex. J. Neurophysiol. 69, 128–142 (1993).

  29. Ilg, U. J., Schumann, S. & Thier, P. Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron 43, 145–151 (2004).

  30. Chen, X., DeAngelis, G. C. & Angelaki, D. E. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 80, 1310–1321 (2013).

  31. Chen, X., DeAngelis, G. C. & Angelaki, D. E. Eye-centered representation of optic flow tuning in the ventral intraparietal area. J. Neurosci. 33, 18574–18582 (2013).

  32. Berens, P. et al. A fast and simple population code for orientation in primate V1. J. Neurosci. 32, 10618–10626 (2012).

  33. Zaidel, A., DeAngelis, G. C. & Angelaki, D. E. Decoupled choice-driven and stimulus-related activity in parietal neurons may be misrepresented by choice probabilities. Nat. Commun. 8, 715 (2017).

  34. Britten, K. H., Newsome, W. T., Shadlen, M. N., Celebrini, S. & Movshon, J. A. A relationship between behavioral choice and the visual responses of neurons in macaque MT. Vis. Neurosci. 13, 87–100 (1996).

  35. Dokka, K., DeAngelis, G. C. & Angelaki, D. E. Multisensory integration of visual and vestibular signals improves heading discrimination in the presence of a moving object. J. Neurosci. 35, 13599–13607 (2015).

  36. Sasaki, R., Angelaki, D. E. & DeAngelis, G. C. Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques. J. Neurophysiol. 121, 1207–1221 (2019).

  37. Chen, A., DeAngelis, G. C. & Angelaki, D. E. Functional specializations of the ventral intraparietal area for multisensory heading discrimination. J. Neurosci. 33, 3567–3581 (2013).

  38. Gu, Y. et al. Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron 71, 750–761 (2011).

  39. Kohn, A., Coen-Cagli, R., Kanitscheider, I. & Pouget, A. Correlations and neuronal population information. Annu. Rev. Neurosci. 39, 237–256 (2016).

  40. Averbeck, B. B., Latham, P. E. & Pouget, A. Neural correlations, population coding and computation. Nat. Rev. Neurosci. 7, 358–366 (2006).

  41. Moreno-Bote, R. et al. Information-limiting correlations. Nat. Neurosci. 17, 1410–1417 (2014).

  42. Dokka, K., Park, H., Jansen, M., DeAngelis, G. C. & Angelaki, D. E. Causal inference accounts for heading perception in the presence of object motion. Proc. Natl Acad. Sci. USA 116, 9060–9065 (2019).

  43. Chen, X., DeAngelis, G. C. & Angelaki, D. E. Eye-centered visual receptive fields in the ventral intraparietal area. J. Neurophysiol. 112, 353–361 (2014).

  44. Chen, X., DeAngelis, G. C. & Angelaki, D. E. Flexible egocentric and allocentric representations of heading signals in parietal cortex. Proc. Natl Acad. Sci. USA 115, E3305–E3312 (2018).

  45. Crespi, S. et al. Spatiotopic coding of BOLD signal in human visual cortex depends on spatial attention. PLoS ONE 6, e21661 (2011).

  46. Merriam, E. P., Gardner, J. L., Movshon, J. A. & Heeger, D. J. Modulation of visual responses by gaze direction in human visual cortex. J. Neurosci. 33, 9879–9889 (2013).

  47. Bernier, P. M. & Grafton, S. T. Human posterior parietal cortex flexibly determines reference frames for reaching based on sensory context. Neuron 68, 776–788 (2010).

  48. Bremner, L. R. & Andersen, R. A. Temporal analysis of reference frames in parietal cortex area 5d during reach planning. J. Neurosci. 34, 5273–5284 (2014).

  49. Duncker, K. Über induzierte Bewegung. Psychologische Forschung 12, 180–259 (1929).

  50. Zivotofsky, A. Z. The Duncker illusion: intersubject variability, brief exposure, and the role of eye movements in its generation. Invest. Ophthalmol. Vis. Sci. 45, 2867–2872 (2004).

  51. Gu, Y., Watkins, P. V., Angelaki, D. E. & DeAngelis, G. C. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. J. Neurosci. 26, 73–85 (2006).

  52. Fetsch, C. R., Pouget, A., DeAngelis, G. C. & Angelaki, D. E. Neural correlates of reliability-based cue weighting during multisensory integration. Nat. Neurosci. 15, 146–154 (2012).

  53. Gu, Y., Angelaki, D. E. & DeAngelis, G. C. Neural correlates of multisensory cue integration in macaque MSTd. Nat. Neurosci. 11, 1201–1210 (2008).

  54. Chen, A., DeAngelis, G. C. & Angelaki, D. E. Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex. J. Neurosci. 31, 12036–12052 (2011).

  55. Chen, A., DeAngelis, G. C. & Angelaki, D. E. Macaque parieto-insular vestibular cortex: responses to self-motion and optic flow. J. Neurosci. 30, 3022–3042 (2010).

  56. Chen, A., Gu, Y., Takahashi, K., Angelaki, D. E. & DeAngelis, G. C. Clustering of self-motion selectivity and visual response properties in macaque area MSTd. J. Neurophysiol. 100, 2669–2683 (2008).

  57. Bishop, C. M. Pattern Recognition and Machine Learning (Springer, 2006).

Acknowledgements

This work was supported by National Institutes of Health grants EY016178 (to G.C.D.) and DC014678 (to D.E.A.), the Uehara Memorial Foundation (to R.S.), the Japan Society for the Promotion of Science (to R.S.) and an NEI CORE grant (EY001319). We thank D. Graf, S. Shimpi and E. Murphy for excellent technical support and J. Wen and A. Yung for programming support.

Author information

Contributions

R.S. and G.C.D. conceived and designed the research. R.S. performed experiments. R.S. analyzed data. A.A. built the recording system. R.S., A.A., D.E.A. and G.C.D. interpreted results of experiments. R.S. prepared the figures. R.S. and G.C.D. drafted the manuscript. R.S., A.A., D.E.A. and G.C.D. edited and revised the manuscript. R.S., A.A., D.E.A. and G.C.D. approved the final version of the manuscript.

Corresponding author

Correspondence to Ryo Sasaki.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Neuroscience thanks Alexander Huk, Shawn M. Willett and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Summary of psychophysical thresholds (inverse of sensitivity) across task conditions.

a, Average threshold for the Object Only condition (no self-motion) is plotted against average thresholds for the Object+Visual and Object+Combined conditions for the world (brown/magenta) and head (blue/cyan) coordinate tasks. Error bars represent 95% confidence intervals. Averages are taken over n = 185 sessions across the two animals. b, For each session, the threshold in the Object+Combined condition is plotted against the corresponding threshold in the Object+Visual condition. Black symbols show mean thresholds and error bars represent 95% confidence intervals. Data are from 128 sessions for Monkey N and 57 sessions for Monkey K.
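
To illustrate how thresholds of this kind are typically obtained, the following minimal MATLAB sketch (Statistics and Machine Learning Toolbox for normcdf) fits a cumulative Gaussian to choice data by maximum likelihood and takes the fitted standard deviation as the threshold. This is a common convention, not necessarily the authors' exact procedure (see the paper's Methods); the variables dirs, nR and nT are illustrative.

    % dirs: tested object directions (deg); nR: number of "rightward"
    % choices at each direction; nT: trials per direction.
    % p(1) is the PSE (mean); p(2) the threshold (standard deviation).
    pf  = @(p, x) normcdf(x, p(1), abs(p(2)));
    nll = @(p) -sum(nR .* log(pf(p, dirs) + eps) + ...
                    (nT - nR) .* log(1 - pf(p, dirs) + eps));
    pHat = fminsearch(nll, [0, 10]);   % initial guess: PSE = 0, threshold = 10 deg
    pse = pHat(1); threshold = abs(pHat(2));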

Extended Data Fig. 2 Summary of receptive field locations for populations of VIP (orange, N = 66) and MSTl (green, N = 44) neurons.

Cells are included here if they had significant structure in receptive field maps obtained by reverse correlation (17% of VIP and 13% of MSTl neurons) or if they had clear hand-mapped receptive fields for which good estimates of RF center and size were obtained (13% of VIP neurons and 12% of MSTl neurons). Significant structure in reverse correlation maps was assessed by a two-sided permutation test (p < 0.05), in which we scrambled the relationship between response amplitude and stimulus location within the RF, as described previously56. Ellipses approximate the RF dimensions and were derived either from a two-dimensional Gaussian fit (contour at half-maximal response) to receptive field maps obtained by reverse correlation (VIP: N = 38; MSTl: N = 23), or from hand mapping (VIP: N = 28; MSTl: N = 21). Coordinate (0, 0) represents the center of the visual display, where the fixation target was located. Yellow dashed lines represent the starting location of the moving object and the range of directions in head coordinates.
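
As a rough illustration of the permutation test described above, the MATLAB sketch below scrambles the pairing between response amplitude and stimulus location and compares a spatial-structure statistic against its permutation null. The statistic shown (variance of a locally smoothed map) is an illustrative choice and may differ from the one used in ref. 56.

    % rfMap: matrix of response amplitudes indexed by stimulus location.
    % The variance of the raw map is unchanged by permutation, so the
    % statistic must be sensitive to spatial arrangement; smoothing first
    % makes spatially clustered responses yield higher variance.
    smoothVar = @(m) var(reshape(conv2(m, ones(3)/9, 'same'), [], 1));
    obs   = smoothVar(rfMap);
    nPerm = 1000;
    null  = zeros(nPerm, 1);
    for k = 1:nPerm
        shuffled = reshape(rfMap(randperm(numel(rfMap))), size(rfMap));
        null(k)  = smoothVar(shuffled);
    end
    % two-sided p-value: fraction of null statistics at least as extreme
    p = mean(abs(null - mean(null)) >= abs(obs - mean(null)));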

Extended Data Fig. 3 Data from four additional VIP neurons, illustrating diversity of effects of self-motion on tuning curves.

Top: Object+Combined condition. Bottom: Object+Visual condition. Format as in Fig. 4. Error bars denote SEM (n = 10 stimulus repetitions per datum).

Extended Data Fig. 4 Summary of time courses of average firing rates and directional selectivity.

a, Average response across all 223 VIP and 177 MSTl neurons is shown for each stimulus condition for both the head and world coordinate task conditions. For each neuron, responses were taken from the object motion direction that elicited the maximum firing rate. Error bars represent SEM. Color coding as in Fig. 7. Results were nearly identical if the responses of neurons were normalized before averaging. b, Average direction discrimination index (DDI) for populations of VIP (n = 223) and MSTl (n = 177) neurons (see Methods, Eq. 2). DDI values were computed separately for leftward and rightward self-motion and then averaged for each neuron. Error bars represent 95% confidence intervals. For this figure, both average responses and DDI values were computed within a 300 ms sliding time window that was advanced across the stimulus epoch in steps of 50 ms.
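
A sliding-window DDI computation can be sketched as follows. Because Eq. 2 is defined in the paper's Methods and not reproduced here, this sketch uses the common form DDI = (Rmax − Rmin)/(Rmax − Rmin + 2·sqrt(SSE/(N − M))), where SSE is the sum of squared deviations of single-trial rates from their direction means, N the number of trials and M the number of directions; this may differ in detail from the authors' Eq. 2. The 1 s stimulus epoch is an assumption.

    % spikes: cell array of spike-time vectors (s), one per trial (column);
    % dirLab: object-motion direction per trial (column vector).
    win = 0.300; step = 0.050;          % 300 ms window, 50 ms steps (per legend)
    t0  = 0:step:(1.0 - win);           % assumed 1 s stimulus epoch
    dirs = unique(dirLab);
    ddi  = zeros(size(t0));
    for i = 1:numel(t0)
        % single-trial firing rates within the current window
        r  = cellfun(@(st) sum(st >= t0(i) & st < t0(i) + win), spikes) / win;
        mu = arrayfun(@(d) mean(r(dirLab == d)), dirs);
        sse = sum(arrayfun(@(j) sum((r(dirLab == dirs(j)) - mu(j)).^2), ...
                           1:numel(dirs)));
        tuningRange = max(mu) - min(mu);
        ddi(i) = tuningRange / ...
                 (tuningRange + 2 * sqrt(sse / (numel(r) - numel(dirs))));
    end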

Extended Data Fig. 5 Decoder results are robust to the type of classifier used.

Black data points represent results from the FLD classifier used in all main figures. Red data points show results from a logistic regression decoder. For this comparison, the same population responses were used to train and test each decoder. Results are nearly identical for the two classifiers, indicating that the findings do not depend on the choice of decoder. Error bars represent 95% confidence intervals (across n = 1000 bootstraps).
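
The comparison can be sketched in a few lines of MATLAB (Statistics and Machine Learning Toolbox). This is an illustrative reconstruction, not the authors' code; R is assumed to be a trials-by-neurons response matrix and y a vector of binary direction labels.

    % Hold out 20% of trials, train both decoders on the rest, and compare
    % accuracy on the held-out set. fitcdiscr with a linear discriminant is
    % closely related to the FLD for two classes; fitclinear fits the
    % logistic regression model.
    cv  = cvpartition(y, 'HoldOut', 0.2);
    fld = fitcdiscr(R(training(cv),:), y(training(cv)), 'DiscrimType', 'linear');
    lgt = fitclinear(R(training(cv),:), y(training(cv)), 'Learner', 'logistic');
    accFLD = mean(predict(fld, R(test(cv),:)) == y(test(cv)));
    accLGT = mean(predict(lgt, R(test(cv),:)) == y(test(cv)));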

Extended Data Fig. 6 Comparison of decoder results across animals.

a–d, Results for separate decoders trained to perform the world and head coordinate tasks. Format as in Fig. 6. Each row shows results separately for each animal. Pink and cyan dashed lines in panels b and d: expected ΔPSE for perfect performance in the world and head coordinate tasks, respectively. Error bars in panels b and d represent 95% confidence intervals (across n = 1000 bootstraps). e–h, Results for the single decoder, shown separately for each animal. Decoders were trained separately using responses from each animal, yet the main results are conserved across subjects. Error bars represent 95% confidence intervals (across n = 1000 bootstraps). Format as in panels a–d.
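
The 95% confidence intervals quoted throughout the decoding figures follow the standard bootstrap recipe, sketched below under the assumption of simple resampling of trials with replacement. The helper computeDeltaPSE is hypothetical, standing in for whatever statistic is being bootstrapped (here, the decoder's ΔPSE).

    % Resample trials with replacement, recompute the statistic each time,
    % and take the 2.5th and 97.5th percentiles as the 95% CI.
    nBoot = 1000;
    stat  = zeros(nBoot, 1);
    nTr   = numel(trialIdx);
    for b = 1:nBoot
        resamp  = trialIdx(randi(nTr, nTr, 1));
        stat(b) = computeDeltaPSE(resamp);   % hypothetical helper function
    end
    ci = prctile(stat, [2.5 97.5]);          % Statistics Toolbox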

Extended Data Fig. 7 Effect of partial cube frame on single-unit responses and population decoding.

a, d, Distributions of the cube effect index (CEI, see Methods) for areas VIP and MSTl, respectively, in the world coordinate task. Black and gray shading denotes neurons with CEI values that are significantly different from zero and non-significant, respectively (two-sided permutation test, p < 0.05). b, e, Distributions of CEI for VIP and MSTl, respectively, in the head coordinate task condition. c, f, Distributions of the difference in CEI (ΔCEI) between world and head task conditions for VIP and MSTl, respectively. Green and purple shading indicates a median split of the data based on the absolute value, |ΔCEI|. g, h, Comparison of decoder accuracy (proportion correct) for populations of neurons with above-median |ΔCEI| (abscissa) and below-median |ΔCEI| (ordinate) values, for areas VIP and MSTl, respectively. Error bars represent 95% confidence intervals (across n = 1000 bootstraps). Data in these panels come from decoders that were trained separately for the world and head coordinate task conditions. i, j, Same as panels g and h, except for a single decoder trained to perform the task across both reference frame conditions. Format as in g, h.

Extended Data Fig. 8 Summary of choice-related and task-related response modulations.

a, Scatter plot of task probability (TP) and choice probability (CP) values for VIP neurons (N = 223). Color of the symbol centers corresponds to significance of TP and CP values as follows: blue center, both TP and CP are significantly different from 0.5 (two-sided permutation test, p < 0.05); red center, only CP is significantly different from 0.5; gold center, only TP is significantly different from 0.5; white center, neither TP nor CP is significant. That TP and CP values are largely uncorrelated here is an empirical observation, not a constraint enforced by the analysis. b, Scatter plot of TP and CP values for MSTl neurons (N = 177). Symbol center color conventions as in panel a. c, Scatter plot of TP values for VIP neurons computed separately for right and left choices (N = 223). d, Same as panel c but for MSTl neurons (N = 177). e, Scatter plot comparing CP values from VIP for the world and head coordinate task conditions (N = 223). f, Same as panel e but for MSTl neurons (N = 177).
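
Choice probability is conventionally computed as the area under the ROC curve separating the firing-rate distributions for the two choices (the metric introduced in ref. 34); task probability can be obtained the same way using task labels in place of choices. A compact MATLAB sketch using the rank-sum identity AUC = U/(n1·n2) follows; it is a generic implementation, not the authors' code, and variable names are illustrative.

    % rates: firing rates across trials of one stimulus condition;
    % isRight: true on trials where the monkey chose "rightward".
    % tiedrank (Statistics Toolbox) handles ties; the Mann-Whitney U
    % statistic, normalized by n1*n2, equals the ROC area.
    rR = rates(isRight);  rL = rates(~isRight);
    n1 = numel(rR);       n2 = numel(rL);
    ranks = tiedrank([rR(:); rL(:)]);
    cp = (sum(ranks(1:n1)) - n1 * (n1 + 1) / 2) / (n1 * n2);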

Extended Data Fig. 9 Effects of selectively removing choice- or task-related response modulations.

a, Scatter plot of TP and CP values for VIP (N = 223) after selective removal of choice-related response modulations (see Methods for details). Format as in Extended Data Fig. 8a. b, Same as panel a except for MSTl (N = 177). Format as in Extended Data Fig. 8b. c, Scatter plot of TP and CP values for VIP after selective removal of task-related response modulations. d, Same as panel c, except for MSTl. e, Time course of decoder performance based on activity of 223 VIP neurons on response conflict trials, after removal of task-related response modulations. Data are shown for the case of separate decoders for world and head coordinate task conditions. Format as in Fig. 7b. Error bars represent 95% confidence intervals (across n = 100 bootstraps). f, Time course of VIP decoder performance, as in panel e, but after removal of choice-related response modulations.
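
The paper's exact removal procedure is described in its Methods. One common approach (in the spirit of ref. 33) recenters responses within each stimulus condition so that the two choice groups share a mean; the hedged MATLAB sketch below implements that generic idea, not necessarily the authors' procedure.

    % r: single-trial responses; stim: stimulus-condition label per trial;
    % choice: binary choice label per trial. After recentering, response
    % differences attributable to choice (at fixed stimulus) are removed
    % while stimulus-driven differences are preserved.
    for s = reshape(unique(stim), 1, [])
        base = mean(r(stim == s));
        for c = [0 1]
            sel = stim == s & choice == c;
            if any(sel)
                r(sel) = r(sel) - mean(r(sel)) + base;
            end
        end
    end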

Extended Data Fig. 10 Results from behavioral control sessions in which the depth of the partial cube was varied across trials.

a, Predicted ΔPSE values are shown as a function of the depth of the partial cube. Red and blue data points show predicted ΔPSE values and depths for the near and far edges of the cube. b, Dashed curves replot the predictions from panel a, where the horizontal axis is now depth relative to the origins for the near (red) and far (blue) cube edges (where the origins are the farthest depths for each edge). Data points represent behavioral ΔPSE values for the two monkeys (n = 7 sessions for each animal); magenta and brown data points show results for the Object+Combined and Object+Visual conditions. Error bars show 95% confidence intervals, and lines show regression fits. The slopes of the linear fits were not significantly different from zero for either animal or either self-motion condition (two-tailed t-test, p > 0.15 for all four cases).
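
The slope test in panel b amounts to an ordinary least-squares fit with a two-tailed t-test on the slope coefficient, for example in MATLAB (variable names illustrative):

    % depthRel: cube-edge depth relative to its origin; dPSE: behavioral
    % ΔPSE per session. fitlm (Statistics Toolbox) reports a two-tailed
    % t-test p-value for each coefficient.
    mdl    = fitlm(depthRel, dPSE);
    slope  = mdl.Coefficients.Estimate(2);
    pSlope = mdl.Coefficients.pValue(2);      % H0: slope = 0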

About this article

Cite this article

Sasaki, R., Anzai, A., Angelaki, D.E. et al. Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 23, 1004–1015 (2020). https://doi.org/10.1038/s41593-020-0656-0

