Farashahi S, Donahue CH, Khorsand P, Seo H, Lee D, and Soltani A (2017). Metaplasticity as a Neural Substrate for Adaptive Learning and Choice under Uncertainty. Neuron 94(2), 401-414.

View the Full Publication Here.

Abstract

Value-based decision making often involves integration of reward outcomes over time, but this becomes considerably more challenging if reward assignments on alternative options are probabilistic and non-stationary. Despite the existence of various models for optimally integrating reward under uncertainty, the underlying neural mechanisms are still unknown. Here we propose that reward-dependent metaplasticity (RDMP) can provide a plausible mechanism for both integration of reward under uncertainty and estimation of uncertainty itself. We show that a model based on RDMP can robustly perform the probabilistic reversal learning task via dynamic adjustment of learning based on reward feedback, while changes in its activity signal unexpected uncertainty. The model predicts time-dependent and choice-specific learning rates that strongly depend on reward history. Key predictions from this model were confirmed with behavioral data from non-human primates. Overall, our results suggest that metaplasticity can provide a neural substrate for adaptive learning and choice under uncertainty.
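
To make the mechanism concrete, the sketch below implements a generic metaplastic synapse of the kind the abstract describes: a binary weight whose meta-state deepens after outcomes consistent with its current value and resets when the weight flips, so its effective learning rate adapts to reward history. The number of meta-states and the transition probabilities are illustrative assumptions, not the parameters of the published RDMP model.

    import numpy as np

    rng = np.random.default_rng(0)

    M = 4                               # number of meta-states per weight value (assumption)
    p = 0.5 ** np.arange(1, M + 1)      # transition probabilities shrink with depth (assumption)

    def update(state, potentiate):
        """state = (weight, level); potentiate=True after a rewarded choice of this option."""
        w, m = state
        if (potentiate and w == 1) or (not potentiate and w == 0):
            # outcome consistent with the current weight: sink to a deeper, more stable level
            if m < M and rng.random() < p[m - 1]:
                m += 1
        else:
            # outcome inconsistent with the current weight: flip with a depth-dependent probability
            if rng.random() < p[m - 1]:
                w, m = 1 - w, 1
        return (w, m)

    # A repeatedly potentiated synapse becomes hard to depress, mimicking a small
    # learning rate in a stable environment; a reversal resets it to a labile state.
    s = (1, 1)
    for _ in range(10):
        s = update(s, potentiate=True)
    print(s)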

Merrikhi Y, Clark K, Albarran E, Mohammadbagher P, Zirnsak M, Moore T, Noudoost B (2017). Spatial working memory alters the efficacy of input to visual cortex. Nature Communications. doi:10.1038/ncomms15041.

View the Full Publication Here.

Abstract

Prefrontal cortex modulates sensory signals in extrastriate visual cortex, in part via its direct projections from the frontal eye field (FEF), an area involved in selective attention. We find that working memory-related activity is a dominant signal within FEF input to visual cortex. Although this signal alone does not evoke spiking responses in areas V4 and MT during memory, the gain of visual responses in these areas increases, and neuronal receptive fields expand and shift towards the remembered location, improving the stimulus representation by neuronal populations. These results provide a basis for enhancing the representation of working memory targets and implicate persistent FEF activity as a basis for the interdependence of working memory and selective attention.

Owen LW, Manning JR (2017). Towards Human Super EEG. bioRxiv: 121020.

View the Full Publication Here.

Abstract

Human Super EEG entails measuring ongoing activity from every cell in a living human brain at millisecond-scale temporal resolutions. Although direct cell-by-cell Super EEG recordings are impossible using existing methods, here we present a technique for inferring neural activity at arbitrarily high spatial resolutions using human intracranial electrophysiological recordings. Our approach, based on Gaussian process regression, relies on two assumptions. First, we assume that some of the correlational structure of people's brain activity is similar across individuals. Second, we resolve ambiguities in the data by assuming that neural activity from nearby sources will tend to be similar, all else being equal. One can then ask, for an arbitrary individual's brain: given what we know about the correlational structure of other people's brains, and given the recordings we made from electrodes implanted in this person's brain, how would those recordings most likely have looked at other locations throughout this person's brain?
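
The estimation step can be illustrated with a small Gaussian-process-regression sketch: given recordings at a handful of implanted electrode locations and a smooth spatial covariance function, predict activity at an unobserved location. The squared-exponential kernel, its length scale, and the synthetic coordinates below are assumptions for illustration; the full method also pools covariance estimates across individuals.

    import numpy as np

    rng = np.random.default_rng(1)

    def rbf(a, b, length=10.0):
        """Squared-exponential spatial covariance between sets of 3D coordinates (illustrative kernel)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(d ** 2) / (2 * length ** 2))

    # observed electrode locations (mm) and one sample of recorded activity at each
    elec = rng.uniform(0, 50, size=(8, 3))
    activity = rng.standard_normal(8)

    # locations where no electrode was implanted
    targets = rng.uniform(0, 50, size=(3, 3))

    # GP-regression-style prediction of activity at the unobserved locations:
    # mean = K_targets,elec (K_elec,elec + noise I)^-1 y
    K_xx = rbf(elec, elec) + 1e-3 * np.eye(len(elec))
    K_tx = rbf(targets, elec)
    predicted = K_tx @ np.linalg.solve(K_xx, activity)
    print(predicted)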

Heusser AC, Ziman K, Owen LW, Manning JR (2017). HyperTools: a Python toolbox for visualizing and manipulating high-dimensional data. arXiv: 1701.08290.

View the Full Publication Here.

Abstract

Data visualizations can reveal trends and patterns that are not otherwise obvious from the raw data or summary statistics. While visualizing low-dimensional data is relatively straightforward (for example, plotting the change in a variable over time as (x,y) coordinates on a graph), it is not always obvious how to visualize high-dimensional datasets in a similarly intuitive way. Here we present HyperTools, a Python toolbox for visualizing and manipulating large, high-dimensional datasets. Our primary approach is to use dimensionality reduction techniques (Pearson, 1901; Tipping & Bishop, 1999) to embed high-dimensional datasets in a lower-dimensional space, and plot the data using a simple (yet powerful) API with many options for data manipulation [e.g. hyperalignment (Haxby et al., 2011), clustering, normalizing, etc.] and plot styling. The toolbox is designed around the notion of data trajectories and point clouds. Just as the position of an object moving through space can be visualized as a 3D trajectory, HyperTools uses dimensionality reduction algorithms to create similar 2D and 3D trajectories for time series of high-dimensional observations. The trajectories may be plotted as interactive static plots or visualized as animations. These same dimensionality reduction and alignment algorithms can also reveal structure in static datasets (e.g. collections of observations or attributes). We present several examples showcasing how using our toolbox to explore data through trajectories and low-dimensional embeddings can reveal deep insights into datasets across a wide variety of domains.
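
A minimal usage sketch of the toolbox's plotting interface is shown below on synthetic data; the keyword arguments follow the toolbox's public API but may differ slightly across versions.

    import numpy as np
    import hypertools as hyp

    # two synthetic high-dimensional time series (100 timepoints x 50 features each)
    walks = [np.cumsum(np.random.randn(100, 50), axis=0) for _ in range(2)]

    # embed both datasets in a shared low-dimensional space and draw them as trajectories
    hyp.plot(walks)

    # the same data drawn as an animated trajectory
    hyp.plot(walks, animate=True)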

Manning JR, Zhu X, Willke T, Ranganath R, Stachenfeld K, Hasson U, Blei DM, Norman KA (2017). A probabilistic approach to discovering dynamic full-brain functional connectivity patterns. bioRxiv: 106690.

View the Full Publication Here.

Abstract

Recent work indicates that the covariance structure of functional magnetic resonance imaging (fMRI) data – commonly described as functional connectivity – can change as a function of the participant’s cognitive state (for review see [32]). Here we present a technique, termed hierarchical topographic factor analysis (HTFA), for efficiently discovering full-brain networks in large multi-subject neuroimaging datasets. HTFA approximates each subject’s network by first re-representing each brain image in terms of the activations of a set of localized nodes, and then computing the covariance of the activation time series of these nodes. The number of nodes, along with their locations, sizes, and activations (over time) are learned from the data. Because the number of nodes is typically substantially smaller than the number of fMRI voxels, HTFA can be orders of magnitude more efficient than traditional voxel-based functional connectivity approaches. In one case study, we show that HTFA recovers the known connectivity patterns underlying a synthetic dataset. In a second case study, we illustrate how HTFA may be used to discover dynamic full-brain activity and connectivity patterns in real fMRI data, collected as participants listened to a story. In a third case study, we carried out a similar series of analyses on fMRI data collected as participants viewed an episode of a television show. In these latter case studies, we found that both the HTFA-derived activity and connectivity patterns may be used to reliably decode which moments in the story or show the participants were experiencing. Further, we found that these two classes of patterns contained partially non-overlapping information, such that classifiers trained on combinations of activity-based and dynamic connectivity-based features performed better than classifiers trained on activity or connectivity patterns alone.
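
The re-representation step can be sketched in a few lines: approximate each image as a weighted sum of a small set of radial basis function nodes (here fit by least squares with fixed node locations and widths, whereas HTFA learns the number, placement, and size of the nodes), then treat the covariance of the node weight time series as the connectivity estimate.

    import numpy as np

    rng = np.random.default_rng(2)

    # toy "fMRI" data: 200 timepoints x 1000 voxels on a 1D grid of coordinates
    n_time, n_vox = 200, 1000
    coords = np.linspace(0, 100, n_vox)
    data = rng.standard_normal((n_time, n_vox))

    # K radial basis function nodes with fixed centers and widths (illustration only)
    K = 10
    centers = np.linspace(5, 95, K)
    width = 8.0
    basis = np.exp(-((coords[None, :] - centers[:, None]) ** 2) / (2 * width ** 2))  # K x n_vox

    # node activation time series via least squares: data ~= node_timeseries @ basis
    weights, *_ = np.linalg.lstsq(basis.T, data.T, rcond=None)   # K x n_time
    node_timeseries = weights.T                                   # n_time x K

    # "functional connectivity" among nodes: a K x K correlation matrix
    connectivity = np.corrcoef(node_timeseries, rowvar=False)
    print(connectivity.shape)   # (10, 10)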

Dehaqani MR, Vahabie AH, Parsa M, Noudoost B, Soltani A (2016). Enhanced representation of space by prefrontal neuronal ensembles and its dependence on cognitive states. bioRxiv, 065581.

View the Full Publication Here.

Abstract

Although individual neurons can be highly selective to particular stimuli and certain upcoming actions, they can provide a complex representation of stimuli and actions at the level of population. The ability to dynamically allocate neural resources is crucial for cognitive flexibility. However, it is unclear whether cognitive flexibility emerges from changes in activity at the level of individual neurons, population, or both. By applying a combination of decoding and encoding methods to simultaneously recorded neural data, we show that while maintaining their stimulus selectivity, neurons in prefrontal cortex alter their correlated activity during various cognitive states, resulting in an enhanced representation of visual space. During a task with various cognitive states, individual prefrontal neurons maintained their limited spatial sensitivity between visual encoding and saccadic target selection whereas the population selectively improved its encoding of spatial locations far from the neurons’ preferred locations. This ‘encoding expansion’ relied on high-dimensional neural representations and was accompanied by selective reductions in noise correlation for non-preferred locations. Our results demonstrate that through recruitment of less-informative neurons and reductions of noise correlation in their activity, the representation of space by neuronal ensembles can be dynamically enhanced, and suggest that cognitive flexibility is mainly achieved by changes in neural representation at the level of population of prefrontal neurons rather than individual neurons.
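
The noise-correlation measure referred to above can be sketched for a single pair of neurons: remove each neuron's mean response to every stimulus, then correlate the residual trial-to-trial fluctuations. The synthetic spike counts and tuning values below are arbitrary and only illustrate the computation.

    import numpy as np

    rng = np.random.default_rng(6)

    n_stimuli, n_trials = 8, 40
    tuning_a = rng.uniform(5, 30, n_stimuli)      # neuron A's mean rate per stimulus (arbitrary)
    tuning_b = rng.uniform(5, 30, n_stimuli)      # neuron B's mean rate per stimulus (arbitrary)

    # a shared trial-by-trial fluctuation induces a positive noise correlation
    shared = rng.standard_normal((n_stimuli, n_trials))
    counts_a = tuning_a[:, None] + 2 * shared + rng.standard_normal((n_stimuli, n_trials))
    counts_b = tuning_b[:, None] + 2 * shared + rng.standard_normal((n_stimuli, n_trials))

    # z-score within each stimulus to remove stimulus-driven (signal) correlation,
    # then correlate the residual fluctuations pooled across all trials
    za = (counts_a - counts_a.mean(1, keepdims=True)) / counts_a.std(1, keepdims=True)
    zb = (counts_b - counts_b.mean(1, keepdims=True)) / counts_b.std(1, keepdims=True)
    noise_corr = np.corrcoef(za.ravel(), zb.ravel())[0, 1]
    print(noise_corr)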

Desrochers TM, Burk DC, Badre D, Sheinberg DL (2016). The monitoring and control of task sequences in human and non-human primates. Frontiers in Systems Neuroscience, 9.

View the Full Publication Here.

Abstract

Our ability to plan and execute a series of tasks leading to a desired goal requires remarkable coordination between sensory, motor, and decision-related systems. Prefrontal cortex (PFC) is thought to play a central role in this coordination, especially when actions must be assembled extemporaneously and cannot be programmed as a rote series of movements. A central component of this flexible behavior is the moment-by-moment allocation of working memory and attention. The ubiquity of sequence planning in our everyday lives belies the neural complexity that supports this capacity, and little is known about how frontal cortical regions orchestrate the monitoring and control of sequential behaviors. For example, it remains unclear if and how sensory cortical areas, which provide essential driving inputs for behavior, are modulated by the frontal cortex during these tasks. Here, we review what is known about moment-to-moment monitoring as it relates to visually guided, rule-driven behaviors that change over time. We highlight recent human work that shows how the rostrolateral prefrontal cortex (RLPFC) participates in monitoring during task sequences. Neurophysiological data from monkeys suggests that monitoring may be accomplished by neurons that respond to items within the sequence and may in turn influence the tuning properties of neurons in posterior sensory areas. Understanding the interplay between proceduralized or habitual acts and supervised control of sequences is key to our understanding of sequential task execution. A crucial bridge will be the use of experimental protocols that allow for the examination of the functional homology between monkeys and humans. We illustrate how task sequences may be parceled into components and examined experimentally, thereby opening future avenues of investigation into the neural basis of sequential monitoring and control.

Frank SM, Sun L, Forster L, Tse PU, Greenlee MW (2016). Crossmodal attention effects in vestibular cortex during attentive tracking of moving objects. J Neuroscience, pii: 2480-16.

View the Full Publication Here.

Abstract

The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking.

Manning JR, Hulbert JC, Williams J, Piloto L, Sahakyan L, Norman KA (2016). A neural signature of contextually mediated intentional forgetting. Psychonomic Bulletin & Review.

View the Full Publication Here.

Abstract

The mental context in which we experience an event plays a fundamental role in how we organize our memories of an event (e.g. in relation to other events) and, in turn, how we retrieve those memories later. Because we use contextual representations to retrieve information pertaining to our past, processes that alter our representations of context can enhance or diminish our capacity to retrieve particular memories. We designed a functional magnetic resonance imaging (fMRI) experiment to test the hypothesis that people can intentionally forget previously experienced events by changing their mental representations of contextual information associated with those events. We had human participants study two lists of words, manipulating whether they were told to forget (or remember) the first list prior to studying the second list. We used pattern classifiers to track neural patterns that reflected contextual information associated with the first list and found that, consistent with the notion of contextual change, the activation of the first-list contextual representation was lower following a forget instruction than a remember instruction. Further, the magnitude of this neural signature of contextual change was negatively correlated with participants’ abilities to later recall items from the first list.
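
The classifier logic can be sketched generically: train a classifier to recognize patterns associated with the first list's context, then read out its graded evidence for that context on later brain volumes. The synthetic data and logistic-regression classifier below are illustrative assumptions, not the study's actual pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    # synthetic "fMRI patterns": 100 training volumes x 50 features, labeled by context
    X_train = rng.standard_normal((100, 50))
    y_train = rng.integers(0, 2, size=100)        # 1 = first-list context, 0 = other context

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # later volumes (e.g., collected while studying the second list); the classifier's
    # probability for class 1 is a graded index of how active the first-list context remains
    X_later = rng.standard_normal((20, 50))
    context_evidence = clf.predict_proba(X_later)[:, 1]
    print(context_evidence.mean())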

Schlegel A, Konuthula D, Alexander P, Blackwood E, Tse PU (2016). Fundamentally distributed information processing integrates the motor network into the mental workspace during mental rotation. J Cognitive Neuroscience, 28(8):1139-51.

View the Full Publication Here.

Abstract

The manipulation of mental representations in the human brain appears to share similarities with the physical manipulation of real-world objects. In particular, some neuroimaging studies have found increased activity in motor regions during mental rotation, suggesting that mental and physical operations may involve overlapping neural populations. Does the motor network contribute information processing to mental rotation? If so, does it play a similar computational role in both mental and manual rotation, and how does it communicate with the wider network of areas involved in the mental workspace? Here we used multivariate methods and fMRI to study 24 participants as they mentally rotated 3-D objects or manually rotated their hands in one of four directions. We find that information processing related to mental rotations is distributed widely among many cortical and subcortical regions, that the motor network becomes tightly integrated into a wider mental workspace network during mental rotation, and that motor network activity during mental rotation only partially resembles that involved in manual rotation. Additionally, these findings provide evidence that the mental workspace is organized as a distributed core network that dynamically recruits specialized subnetworks for specific tasks as needed.

Soltani A, Khorsand P, Guo CZ, Farashahi S, Liu J (2016). Neural substrates of cognitive biases during probabilistic inference. Nature Communications, 7:11393.

View the Full Publication Here.

Abstract

Decision making often requires simultaneously learning about and combining evidence from various sources of information. However, when making inferences from these sources, humans show systematic biases that are often attributed to heuristics or limitations in cognitive processes. Here we use a combination of experimental and modelling approaches to reveal neural substrates of probabilistic inference and corresponding biases. We find systematic deviations from normative accounts of inference when alternative options are not equally rewarding; subjects' choice behaviour is biased towards the more rewarding option, whereas their inferences about individual cues show the opposite bias. Moreover, inference bias about combinations of cues depends on the number of cues. Using a biophysically plausible model, we link these biases to synaptic plasticity mechanisms modulated by reward expectation and attention. We demonstrate that inference relies on direct estimation of posteriors, not on combination of likelihoods and prior. Our work reveals novel mechanisms underlying cognitive biases and contributions of interactions between reward-dependent learning, decision making and attention to high-level reasoning.

Dotson NM, Salazar RF, Goodell AB, Hoffman SJ, Gray CM (2015). Methods, caveats, and the future of large-scale microelectrode recordings in the non-human primate. Frontiers in Systems Neuroscience, 9, 149.

View the Full Publication Here.

Abstract

Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field.

Khorsand P, Moore T, Soltani A (2015). Combined contribution of feedforward and feedback inputs to bottom-up attention. Frontiers in Psychology, 6(155):1-11.

View the Full Publication Here.

Abstract

In order to deal with a large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of the visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for better understanding of bottom-up attention.

Brooks DI, Sigurdardottir HM, Sheinberg DL (2014). The neurophysiology of attention and object recognition in visual scenes. In K Kveraga & M Bar (eds.) Scene Vision. Cambridge MA: MIT Press, 85-104.

View the Full Publication Here.

Abstract

This chapter examines some neural processes that a scene image undergoes as it moves through the visual system. It focuses on two opposite yet highly interactive neural systems, the frontoparietal network and the ventral visual stream. Visual recognition mechanisms in the ventral stream lean toward certain objects in visual scenes because they occupy a space that has already been allotted for a high priority by the lateral intraparietal area and the frontal eye fields. While the ventral visual system processes and determines the objects in that environment, the frontoparietal network allocates and points visual attention to important features of the environment. This division of labor by the two systems is supported by the view that spatial selection and target identification are separable parts of finding objects in visual scenes.

Dai J, Brooks DI, Sheinberg DL (2014). Optogenetic and electrical stimulation systematically bias visuospatial choice in primates. Current Biology, 24, 63-69.

View the Full Publication Here.

Abstract

Optogenetics is a recently developed method in which neurons are genetically modified to express membrane proteins sensitive to light, enabling precisely targeted control of neural activity [1, 2, 3]. The temporal and spatial precision afforded by neural stimulation by light holds promise as a powerful alternative to current methods of neural control, which rely predominantly on electrical and pharmacological methods, in both research and clinical settings [4, 5]. Although the optogenetic approach has been widely used in rodent and other small animal models to study neural circuitry [6, 7, 8], its functional application in primate models has proven more difficult. In contrast to the relatively large literature on the effects of cortical electrical microstimulation in perceptual and decision-making tasks [9, 10, 11, 12, 13], previous studies of optogenetic stimulation in primates have not demonstrated its utility in similar paradigms [14, 15, 16, 17, 18]. In this study, we directly compare the effects of optogenetic activation and electrical microstimulation in the lateral intraparietal area during a visuospatial discrimination task. We observed significant and predictable biases in visual attention in response to both forms of stimulation that are consistent with the experimental modulation of a visual salience map. Our results demonstrate the power of optogenetics as a viable alternative to electrical microstimulation for the precise dissection of the cortical pathways of high-level processes in the primate brain.

Dotson NM, Salazar RF, Gray CM (2014). Fronto-parietal correlation dynamics reveal interplay between integration and segregation during visual working memory. J Neuroscience, 34(41):13600-13.

View the Full Publication Here.

Abstract

Working memory requires large-scale cooperation among widespread cortical and subcortical brain regions. Importantly, these processes must achieve an appropriate balance between functional integration and segregation, which are thought to be mediated by task-dependent spatiotemporal patterns of correlated activity. Here, we used cross-correlation analysis to estimate the incidence, magnitude, and relative phase angle of temporally correlated activity from simultaneous local field potential recordings in a network of prefrontal and posterior parietal cortical areas in monkeys performing an oculomotor, delayed match-to-sample task. We found long-range intraparietal and frontoparietal correlations that display a bimodal distribution of relative phase values, centered near 0° and 180°, suggesting a possible basis for functional segregation among distributed networks. Both short- and long-range correlations display striking task-dependent transitions in strength and relative phase, indicating that cognitive events are accompanied by robust changes in the pattern of temporal coordination across the frontoparietal network.
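
The cross-correlation measure can be sketched for one pair of channels: correlate two mean-centered signals across lags, find the peak, and convert the peak lag into a relative phase at the oscillation frequency. The 20 Hz synthetic signals, sampling rate, and one-cycle search window below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    fs = 1000.0                     # sampling rate (Hz), illustrative
    f = 20.0                        # assumed oscillation frequency (Hz)
    t = np.arange(0, 1.0, 1 / fs)

    lfp_a = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
    lfp_b = np.sin(2 * np.pi * f * t - np.pi / 3) + 0.5 * rng.standard_normal(t.size)

    # normalized cross-correlation as a function of lag
    a = (lfp_a - lfp_a.mean()) / lfp_a.std()
    b = (lfp_b - lfp_b.mean()) / lfp_b.std()
    xcorr = np.correlate(a, b, mode='full') / a.size
    lags = np.arange(-(a.size - 1), a.size) / fs

    # restrict to lags within one cycle, take the peak, and express the lag in degrees
    mask = np.abs(lags) <= 1 / f
    peak = np.argmax(xcorr[mask])
    peak_lag = lags[mask][peak]
    relative_phase = (peak_lag * f * 360) % 360
    print(xcorr[mask][peak], relative_phase)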

Gözenman F, Tanoue RT, Metoyer T, Berryhill ME (2014). Invalid retro-cues can eliminate the retro-cue benefit: Evidence for a hybridized account. Journal of Experimental Psychology: Human Perception and Performance, 40(5):1748-54.
[ DOI: 10.1037/a0037474. PMCID: PMC4172509. ]

View the Full Publication Here.

Abstract

The contents of visual working memory (VWM) are capacity limited and require frequent updating. The retrospective cueing (retro-cueing) paradigm clarifies how directing internal attention among VWM items boosts VWM performance. In this paradigm a cue appears prior to retrieval, but after encoding and maintenance. The retro-cue effect (RCE) refers to superior VWM after valid versus neutral retro-cues. Here we investigated the effect of the invalid retro-cues’ inclusion on VWM performance. We conducted 2 pairs of experiments, changing both probe type (recognition and recall) as well as presence and absence of invalid retro-cue trials. Furthermore, to fully characterize these effects over time, we used extended post-retro-cue delay durations. In the first set of experiments, probing VWM using recognition indicated that the RCE remained consistent in magnitude with or without invalid retro-cue trials. In the second set of experiments, VWM was probed with recall. Here, the RCE was eliminated when invalid retro-cues were included. This finer-grained measure of VWM fidelity showed that all items were subject to decay over time. We conclude that the invalid retro-cues impaired the protection of validly cued items, but they remain accessible, suggesting greater concordance with a prioritization account.

Janczyk M, Berryhill ME (2014). Orienting attention in visual working memory requires central capacity: Decreased retro-cue effects under dual-task conditions. Attention, Perception and Psychophysics, 76, 715-724.
[ DOI: 10.3758/s13414-013-0615-x. PMCID: PMC4080723. ]

View the Full Publication Here.

Abstract

The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required.

Manning JR, Ranganath R, Norman KA, Blei DM (2014). Topographic Factor Analysis: a Bayesian model for inferring brain networks from neural data. PLoS One, 9(5): e94914.

View the Full Publication Here.

Abstract

The neural patterns recorded during a neuroscientific experiment reflect complex interactions between many brain regions, each comprising millions of neurons. However, the measurements themselves are typically abstracted from that underlying structure. For example, functional magnetic resonance imaging (fMRI) datasets comprise a time series of three-dimensional images, where each voxel in an image (roughly) reflects the activity of the brain structure(s) located at the corresponding point in space at the time the image was collected. FMRI data often exhibit strong spatial correlations, whereby nearby voxels behave similarly over time as the underlying brain structure modulates its activity. Here we develop topographic factor analysis (TFA), a technique that exploits spatial correlations in fMRI data to recover the underlying structure that the images reflect. Specifically, TFA casts each brain image as a weighted sum of spatial functions. The parameters of those spatial functions, which may be learned by applying TFA to an fMRI dataset, reveal the locations and sizes of the brain structures activated while the data were collected, as well as the interactions between those structures.
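
The generative idea, each image expressed as a weighted sum of spatial functions, can be written out directly; the Gaussian radial basis functions, their centers and widths, and the weights below are made up for illustration, since TFA infers all of them from data.

    import numpy as np

    rng = np.random.default_rng(5)

    # toy 1D "brain" of 500 voxel coordinates
    coords = np.linspace(0, 100, 500)

    # three spatial sources with made-up locations and sizes (TFA learns these)
    centers = np.array([20.0, 50.0, 80.0])
    widths = np.array([5.0, 10.0, 7.0])
    sources = np.exp(-((coords[None, :] - centers[:, None]) ** 2)
                     / (2 * widths[:, None] ** 2))            # 3 sources x 500 voxels

    # each image is a weighted sum of the sources; here 50 images with random weights
    weights = rng.standard_normal((50, 3))                     # images x sources
    images = weights @ sources                                 # 50 synthetic "fMRI" images
    print(images.shape)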

Manning JR, Lew TF, Li N, Kahana MJ, Sekuler RW (2014). MAGELLAN: a cognitive map-based model of human wayfinding. Journal of Experimental Psychology: General, 143(3): 1314-1330.

View the Full Publication Here.

Abstract

In an unfamiliar environment, searching for and navigating to a target requires that spatial information be acquired, stored, processed, and retrieved. In a study encompassing all of these processes, participants acted as taxicab drivers who learned to pick up and deliver passengers in a series of small virtual towns. We used data from these experiments to refine and validate MAGELLAN, a cognitive map-based model of spatial learning and wayfinding. MAGELLAN accounts for the shapes of participants’ spatial learning curves, which measure their experience-based improvement in navigational efficiency in unfamiliar environments. The model also predicts the ease (or difficulty) with which different environments are learned and, within a given environment, which landmarks will be easy (or difficult) to localize from memory. Using just 2 free parameters, MAGELLAN provides a useful account of how participants’ cognitive maps evolve over time with experience, and