Ziman K, Heusser AC, Fitzpatrick PC, Field CE, Manning JR (2017) Is automatic speech-to-text transcription ready for use in psychological experiments? PsyArXiv: psh48.

Read the Full Publication Here.

Abstract

Verbal responses are a convenient and naturalistic way for participants to provide data in psychological experiments (Salzinger, 1959). However, audio recordings of verbal responses typically require additional processing such as transcribing the recordings into text, as compared with other behavioral response modalities (e.g. typed responses, button presses, etc.). Further, the transcription process is often tedious and time-intensive, requiring human listeners to manually examine each moment of recorded speech. Here we evaluate the performance of a state-of-the-art speech recognition algorithm (Halpern et al., 2016) in transcribing audio data into text during a list-learning experiment. We compare the computer-generated transcripts to transcripts made by human annotators. Both sets of transcripts matched to a high degree and exhibited similar statistical properties, in terms of the participants’ recall performance and recall dynamics that the transcripts captured. This proof-of-concept study suggests that speech-to-text engines could provide a cheap, reliable, and rapid means of automatically transcribing speech data in psychological experiments. Further, our findings open the door for verbal response experiments that scale to thousands of participants (e.g. administered online), as well as a new generation of experiments that decode speech on-the-fly and adapt experimental parameters based on participants’ prior responses.
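
A convenient way to quantify the transcript agreement described above is word error rate: word-level edit distance between an automated transcript and a human reference, normalized by the reference length. The sketch below is a generic illustration of that measure, not the paper's actual scoring pipeline, and the two transcripts are hypothetical.

```python
# Minimal sketch: word error rate (WER) between a human reference transcript
# and an automated hypothesis, via word-level Levenshtein distance.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

human = "cat apple river moon chair"    # hypothetical human transcript
auto = "cat apple river noon chair"     # hypothetical automated transcript
print(f"WER: {word_error_rate(human, auto):.2f}")  # one substitution in five words: 0.20
```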

Erlikhman, G., Caplovitz, G. P. (2017). Decoding information about dynamically occluded objects in visual cortex. NeuroImage, 146, 778-788. doi: 10.1016/j.neuroimage.2016.09.024. Epub 2016 Sep 20.

Read the Full Publication Here.

Erlikhman, G., Caplovitz, G. P. (2017). Decoding information about dynamically occluded objects in visual cortex. NeuroImage, 146, 778-788. doi: 10.1016/j.neuroimage.2016.09.024. Epub 2016 Sep 20. PubMed PMID: 27663987; PubMed Central PMCID: PMC5322156

Abstract

During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether there is information maintained about the object itself (i.e. its shape or identity) or non-object-specific information such as its position or velocity as it is tracked behind an occluder, as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and in unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine representations within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially-specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using multivariate pattern analysis (MVPA). In contrast, object identity could be decoded in spatially-specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may only represent the dynamically occluded object's position or motion path, while later visual areas represent object-specific information.
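
The decoding step described here can be made concrete with a few lines of scikit-learn. This is a minimal sketch of cross-validated MVPA on synthetic "voxel" patterns, assuming a linear support vector classifier; it is not the study's actual preprocessing or classifier configuration.

```python
# Sketch of MVPA decoding: can a linear classifier tell circle trials from
# star trials given their multivoxel activity patterns? Data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)      # 0 = circle, 1 = star
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1] += 0.3 * rng.normal(size=n_voxels)  # injected class signal

# Cross-validated accuracy above chance implies the ROI's patterns
# carry information about occluded object identity.
acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```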

Frank SM, Greenlee MW, Tse PU (2017). Long Time No See: Enduring Behavioral and Neuronal Changes in Perceptual Learning of Motion Trajectories 3 Years After Training. Cerebral Cortex.

Read the Full Publication Here.

Frank SM, Greenlee MW, Tse PU (2017). Long Time No See: Enduring Behavioral and Neuronal Changes in Perceptual Learning of Motion Trajectories 3 Years After Training. Cerebral Cortex, 1-12. doi: 10.1093/cercor/bhx039.

Kohler PJ, Cavanagh P, Tse PU (2017). Motion-Induced Position Shifts Activate Early Visual Cortex. Front Neurosci, 11:168. doi: 10.3389/fnins.2017.00168.

Read the Full Publication Here.

Abstract

The ability to correctly determine the position of objects in space is a fundamental task of the visual system. The perceived position of briefly presented static objects can be influenced by nearby moving contours, as demonstrated by various illusions collectively known as motion-induced position shifts. Here we use a stimulus that produces a particularly strong effect of motion on perceived position. We test whether several regions-of-interest (ROIs), at different stages of visual processing, encode the perceived rather than retinotopically veridical position. Specifically, we collect functional MRI data while participants experience motion-induced position shifts and use a multivariate pattern analysis approach to compare the activation patterns evoked by illusory position shifts with those evoked by matched physical shifts. We find that the illusory perceived position is represented at the earliest stages of the visual processing stream, including primary visual cortex. Surprisingly, we found no evidence of percept-based encoding of position in visual areas beyond area V3. This result suggests that while it is likely that higher-level visual areas are involved in position encoding, early visual cortex also plays an important role. 
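
The comparison between illusory and physical shifts lends itself to a cross-classification analysis: train a classifier on physically shifted stimuli and test it on trials where the shift was only perceived. Successful transfer suggests the ROI encodes perceived position. The sketch below illustrates that logic on synthetic data and is not the paper's exact analysis.

```python
# Cross-decoding sketch: train on physical position shifts (up vs. down),
# test on illusory (motion-induced) shifts. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 60, 150
axis = rng.normal(size=n_voxels)               # hypothetical "shift" pattern axis
y = np.repeat([0, 1], n_trials // 2)           # shift direction labels

X_phys = rng.normal(size=(n_trials, n_voxels)) + np.outer(y - 0.5, axis)
X_illus = rng.normal(size=(n_trials, n_voxels)) + 0.6 * np.outer(y - 0.5, axis)

clf = LogisticRegression(max_iter=1000).fit(X_phys, y)
print(f"transfer accuracy: {clf.score(X_illus, y):.2f} (chance = 0.50)")
```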

Sun, L. W., Hartstein, K. C., Frank, S. M., Hassan, W. and Tse, P. U. (2017). Back from the future: Volitional postdiction of perceived apparent motion direction. Vision Research.

Read the Full Publication Here.

Sun, L. W., Hartstein, K. C., Frank, S. M., Hassan, W. and Tse, P. U. (2017). Back from the future: Volitional postdiction of perceived apparent motion direction. Vision Research. pii: S0042-6989(17)30169-4. doi: 10.1016/j.visres.2017.09.001.

Abstract

Among physical events, it is impossible that an event could alter its own past for the simple reason that past events precede future events, and not vice versa. Moreover, to do so would invoke impossible self-causation. However, mental events are constructed by physical neuronal processes that take a finite duration to execute. Given this fact, it is conceivable that later brain events could alter the ongoing interpretation of previous brain events if they arrive within this finite duration of interpretive processing, before a commitment is made to what happened. In the current study, we show that humans can volitionally influence how they perceive an ambiguous apparent motion sequence, as long as the top-down command occurs up to 300 ms after the occurrence of the actual motion event in the world. This finding supports the view that there is a temporal integration period over which perception is constructed on the basis of both bottom-up and top-down inputs.

Farashahi, Shiva, Christopher H. Donahue, Peyman Khorsand, Hyojung Seo, Daeyeol Lee, and Alireza Soltani (2017). "Metaplasticity as a Neural Substrate for Adaptive Learning and Choice under Uncertainty."

Farashahi, Shiva, Christopher H. Donahue, Peyman Khorsand, Hyojung Seo, Daeyeol Lee, and Alireza Soltani (2017). "Metaplasticity as a Neural Substrate for Adaptive Learning and Choice under Uncertainty." Neuron 94, no. 2: 401-414.

View the Full Publication Here.

Abstract

Value-based decision making often involves integration of reward outcomes over time, but this becomes considerably more challenging if reward assignments on alternative options are probabilistic and non-stationary. Despite the existence of various models for optimally integrating reward under uncertainty, the underlying neural mechanisms are still unknown. Here we propose that reward-dependent metaplasticity (RDMP) can provide a plausible mechanism for both integration of reward under uncertainty and estimation of uncertainty itself. We show that a model based on RDMP can robustly perform the probabilistic reversal learning task via dynamic adjustment of learning based on reward feedback, while changes in its activity signal unexpected uncertainty. The model predicts time-dependent and choice-specific learning rates that strongly depend on reward history. Key predictions from this model were confirmed with behavioral data from non-human primates. Overall, our results suggest that metaplasticity can provide a neural substrate for adaptive learning and choice under uncertainty.
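
The mechanism can be caricatured with a toy population of binary synapses, each carrying a metaplastic "depth": deeper states are harder to change, so a stable reward history drives synapses deep and lowers the effective learning rate, while surprising outcomes keep them shallow and plastic. The state space and parameters below are illustrative assumptions, not the paper's fitted model.

```python
# Toy reward-dependent metaplasticity: 2*m synaptic states (m depths for each
# of two binary weight values); transition probability shrinks with depth.
import numpy as np

m = 4
p_move = 0.4 ** np.arange(m)           # shallow states change easily, deep ones rarely
rng = np.random.default_rng(2)
n_syn = 10_000
depth = np.zeros(n_syn, dtype=int)     # 0 = shallowest, m-1 = deepest
strong = rng.random(n_syn) < 0.5       # current binary weight

def update(potentiate):
    """One reward-driven plasticity event across the population."""
    move = rng.random(n_syn) < p_move[depth]
    aligned = strong == potentiate
    sink = move & aligned              # already-correct synapses sink deeper
    depth[sink] = np.minimum(depth[sink] + 1, m - 1)
    climb = move & ~aligned            # mismatched synapses climb, then flip
    flip = climb & (depth == 0)
    strong[flip] = potentiate
    depth[climb & ~flip] -= 1

print(f"effective plasticity before learning: {p_move[depth].mean():.3f}")
for _ in range(200):                   # stable block: the same option keeps winning
    update(potentiate=True)
print(f"effective plasticity after stable block: {p_move[depth].mean():.3f}")
```

Across a stable block the population sinks into deep states and the mean transition probability falls, which is the adaptive, history-dependent learning rate the abstract describes.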

Merrikhi Y, Clark K, Albarran E, Mohammadbagher P, Zirnsak M, Moore T, Noudoost B (2017). Spatial working memory alters the efficacy of input to visual cortex. Nature Communications, 8:15041. doi:10.1038/ncomms15041.

View the Full Publication Here.

Abstract

Prefrontal cortex modulates sensory signals in extrastriate visual cortex, in part via its direct projections from the frontal eye field (FEF), an area involved in selective attention. We find that working memory-related activity is a dominant signal within FEF input to visual cortex. Although this signal alone does not evoke spiking responses in areas V4 and MT during memory, the gain of visual responses in these areas increases, and neuronal receptive fields expand and shift towards the remembered location, improving the stimulus representation by neuronal populations. These results provide a basis for enhancing the representation of working memory targets and implicate persistent FEF activity as a basis for the interdependence of working memory and selective attention.

Owen LW, Manning JR (2017). Towards Human Super EEG. bioRxiv: 121020.

View the Full Publication Here.

Abstract

Human Super EEG entails measuring ongoing activity from every cell in a living human brain at millisecond-scale temporal resolutions. Although direct cell-by-cell Super EEG recordings are impossible using existing methods, here we present a technique for inferring neural activity at arbitrarily high spatial resolutions using human intracranial electrophysiological recordings. Our approach, based on Gaussian process regression, relies on two assumptions. First, we assume that some of the correlational structure of people's brain activity is similar across individuals. Second, we resolve ambiguities in the data by assuming that neural activity from nearby sources will tend to be similar, all else being equal. One can then ask, for an arbitrary individual's brain: given what we know about the correlational structure of other people's brains, and given the recordings we made from electrodes implanted in this person's brain, how would those recordings most likely have looked at other locations throughout this person's brain?
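
Under the Gaussian assumptions above, the inference step has a closed form: if K is the covariance across all locations (estimated from other people's data) and x_o is the activity recorded at the implanted electrodes, the expected activity elsewhere is K_uo K_oo^{-1} x_o. Below is a minimal sketch under those assumptions, with a made-up smooth covariance on a 1-D "brain"; it is not the paper's implementation.

```python
# Gaussian conditional-expectation sketch: infer activity at unobserved
# locations from a few observed ones, given a covariance learned elsewhere.
import numpy as np

rng = np.random.default_rng(3)
n_locs = 50
locs = np.linspace(0, 1, n_locs)[:, None]
K = np.exp(-((locs - locs.T) ** 2) / 0.02)     # nearby sources covary (assumption 2)
K += 1e-6 * np.eye(n_locs)                     # jitter for numerical stability

obs = np.sort(rng.choice(n_locs, size=10, replace=False))   # electrode sites
unobs = np.setdiff1d(np.arange(n_locs), obs)
x_obs = rng.multivariate_normal(np.zeros(n_locs), K)[obs]   # recorded activity

# E[x_unobs | x_obs] = K_uo @ K_oo^{-1} @ x_obs
x_hat = K[np.ix_(unobs, obs)] @ np.linalg.solve(K[np.ix_(obs, obs)], x_obs)
print(f"inferred activity at {len(unobs)} unobserved locations; first = {x_hat[0]:.2f}")
```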

Heusser AC, Ziman K, Owen LW, Manning JR (2017). HyperTools: a Python toolbox for visualizing and manipulating high-dimensional data. arXiv: 1701.08290.

View the Full Publication Here.

Abstract

Data visualizations can reveal trends and patterns that are not otherwise obvious from the raw data or summary statistics. While visualizing low-dimensional data is relatively straightforward (for example, plotting the change in a variable over time as (x,y) coordinates on a graph), it is not always obvious how to visualize high-dimensional datasets in a similarly intuitive way. Here we present HyperTools, a Python toolbox for visualizing and manipulating large, high-dimensional datasets. Our primary approach is to use dimensionality reduction techniques (Pearson, 1901; Tipping & Bishop, 1999) to embed high-dimensional datasets in a lower-dimensional space, and plot the data using a simple (yet powerful) API with many options for data manipulation [e.g. hyperalignment (Haxby et al., 2011), clustering, normalizing, etc.] and plot styling. The toolbox is designed around the notion of data trajectories and point clouds. Just as the position of an object moving through space can be visualized as a 3D trajectory, HyperTools uses dimensionality reduction algorithms to create similar 2D and 3D trajectories for time series of high-dimensional observations. The trajectories may be plotted as interactive static plots or visualized as animations. These same dimensionality reduction and alignment algorithms can also reveal structure in static datasets (e.g. collections of observations or attributes). We present several examples showcasing how using our toolbox to explore data through trajectories and low-dimensional embeddings can reveal deep insights into datasets across a wide variety of domains.
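
For flavor, here is a usage sketch along the lines of the toolbox's documented interface: pass a list of high-dimensional arrays and HyperTools reduces and plots them in a shared low-dimensional space. Exact keyword options may differ across versions (see the HyperTools repository for the current API), and the random-walk data are stand-ins.

```python
# Usage sketch: plot two high-dimensional time series as low-dimensional trajectories.
import numpy as np
import hypertools as hyp

# two hypothetical time series: 100 timepoints x 20 dimensions each
walk1 = np.cumsum(np.random.randn(100, 20), axis=0)
walk2 = np.cumsum(np.random.randn(100, 20), axis=0)

hyp.plot([walk1, walk2])                 # static low-dimensional trajectories
hyp.plot([walk1, walk2], animate=True)   # animated version of the same trajectories
```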

Manning JR, Zhu X, Willke T, Ranganath R, Stachenfeld K, Hasson U, Blei DM, Norman KA (2017). A probabilistic approach to discovering dynamic full-brain functional connectivity patterns...

View the Full Publication Here.

Manning JR, Zhu X, Willke T, Ranganath R, Stachenfeld K, Hasson U, Blei DM, Norman KA (2017). A probabilistic approach to discovering dynamic full-brain functional connectivity patterns. bioRxiv: 106690.

Abstract

Recent work indicates that the covariance structure of functional magnetic resonance imaging (fMRI) data – commonly described as functional connectivity – can change as a function of the participant’s cognitive state (for review see [32]). Here we present a technique, termed hierarchical topographic factor analysis (HTFA), for efficiently discovering full-brain networks in large multi-subject neuroimaging datasets. HTFA approximates each subject’s network by first re-representing each brain image in terms of the activations of a set of localized nodes, and then computing the covariance of the activation time series of these nodes. The number of nodes, along with their locations, sizes, and activations (over time) are learned from the data. Because the number of nodes is typically substantially smaller than the number of fMRI voxels, HTFA can be orders of magnitude more efficient than traditional voxel-based functional connectivity approaches. In one case study, we show that HTFA recovers the known connectivity patterns underlying a synthetic dataset. In a second case study, we illustrate how HTFA may be used to discover dynamic full-brain activity and connectivity patterns in real fMRI data, collected as participants listened to a story. In a third case study, we carried out a similar series of analyses on fMRI data collected as participants viewed an episode of a television show. In these latter case studies, we found that both the HTFA-derived activity and connectivity patterns may be used to reliably decode which moments in the story or show the participants were experiencing. Further, we found that these two classes of patterns contained partially non-overlapping information, such that classifiers trained on combinations of activity-based and dynamic connectivity-based features performed better than classifiers trained on activity or connectivity patterns alone.
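
The node-based re-representation at the heart of this approach can be sketched in a few lines: express each image as a weighted sum of radial basis function (RBF) nodes, then take the covariance (here, correlation) of the node weight time series. In HTFA the number, locations, and widths of the nodes are learned from the data; below they are fixed by hand purely to show the shape of the computation.

```python
# Node-based connectivity sketch: voxels -> node activations -> node covariance.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_vox, n_nodes = 120, 1000, 10
vox_pos = rng.uniform(0, 1, size=(n_vox, 3))     # hypothetical voxel coordinates
centers = rng.uniform(0, 1, size=(n_nodes, 3))   # hand-fixed node centers (learned in HTFA)

# F[v, k] = image of node k at voxel v (RBF falloff around the node center)
d2 = ((vox_pos[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
F = np.exp(-d2 / 0.05)

Y = rng.normal(size=(n_time, n_vox))             # stand-in fMRI data (time x voxels)
# least-squares node activations W such that Y ≈ W @ F.T
W = np.linalg.lstsq(F, Y.T, rcond=None)[0].T     # time x nodes

connectivity = np.corrcoef(W.T)                  # n_nodes x n_nodes, not n_vox x n_vox
print(f"connectivity matrix: {connectivity.shape} instead of ({n_vox}, {n_vox})")
```

Working in node space rather than voxel space is what yields the efficiency gain the abstract mentions: the connectivity matrix shrinks from voxels-squared to nodes-squared entries.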

Dehaqani MR, Vahabie AH, Parsa M, Noudoost B, Soltani A (2016). Enhanced representation of space by prefrontal neuronal ensembles and its dependence on cognitive states. bioRxiv: 065581.

View the Full Publication Here.

Abstract

Although individual neurons can be highly selective to particular stimuli and certain upcoming actions, they can provide a complex representation of stimuli and actions at the population level. The ability to dynamically allocate neural resources is crucial for cognitive flexibility. However, it is unclear whether cognitive flexibility emerges from changes in activity at the level of individual neurons, population, or both. By applying a combination of decoding and encoding methods to simultaneously recorded neural data, we show that while maintaining their stimulus selectivity, neurons in prefrontal cortex alter their correlated activity during various cognitive states, resulting in an enhanced representation of visual space. During a task with various cognitive states, individual prefrontal neurons maintained their limited spatial sensitivity between visual encoding and saccadic target selection whereas the population selectively improved its encoding of spatial locations far from the neurons’ preferred locations. This ‘encoding expansion’ relied on high-dimensional neural representations and was accompanied by selective reductions in noise correlation for non-preferred locations. Our results demonstrate that through recruitment of less-informative neurons and reductions of noise correlation in their activity, the representation of space by neuronal ensembles can be dynamically enhanced, and suggest that cognitive flexibility is mainly achieved by changes in neural representation at the level of populations of prefrontal neurons rather than individual neurons.
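
The benefit of the reported noise-correlation reductions can be made concrete with linear Fisher information, I = Δμᵀ Σ⁻¹ Δμ: when shared noise is aligned with the signal direction, shrinking the correlations increases the information a linear decoder can extract from the population. The numbers below are illustrative, not fits to the recorded data.

```python
# Linear Fisher information for an equicorrelated population: shrinking the
# noise correlation rho raises the information when noise aligns with signal.
import numpy as np

n = 50
dmu = np.ones(n)                 # signal direction: all neurons shift together

def fisher_info(rho):
    sigma = np.full((n, n), rho) + (1 - rho) * np.eye(n)   # unit-variance noise
    return dmu @ np.linalg.solve(sigma, dmu)               # closed form: n / (1 + (n-1)*rho)

print(f"I at rho = 0.3: {fisher_info(0.3):.1f}")   # stronger correlations, less information
print(f"I at rho = 0.1: {fisher_info(0.1):.1f}")   # reduced correlations, more information
```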

Desrochers TM, Burk DC, Badre D, Sheinberg DL (2016). The monitoring and control of task sequences in human and non-human primates. Frontiers in Systems Neuroscience, 9.

View the Full Publication Here.

Abstract

Our ability to plan and execute a series of tasks leading to a desired goal requires remarkable coordination between sensory, motor, and decision-related systems. Prefrontal cortex (PFC) is thought to play a central role in this coordination, especially when actions must be assembled extemporaneously and cannot be programmed as a rote series of movements. A central component of this flexible behavior is the moment-by-moment allocation of working memory and attention. The ubiquity of sequence planning in our everyday lives belies the neural complexity that supports this capacity, and little is known about how frontal cortical regions orchestrate the monitoring and control of sequential behaviors. For example, it remains unclear if and how sensory cortical areas, which provide essential driving inputs for behavior, are modulated by the frontal cortex during these tasks. Here, we review what is known about moment-to-moment monitoring as it relates to visually guided, rule-driven behaviors that change over time. We highlight recent human work that shows how the rostrolateral prefrontal cortex (RLPFC) participates in monitoring during task sequences. Neurophysiological data from monkeys suggest that monitoring may be accomplished by neurons that respond to items within the sequence and may in turn influence the tuning properties of neurons in posterior sensory areas. Understanding the interplay between proceduralized or habitual acts and supervised control of sequences is key to our understanding of sequential task execution. A crucial bridge will be the use of experimental protocols that allow for the examination of the functional homology between monkeys and humans. We illustrate how task sequences may be parceled into components and examined experimentally, thereby opening future avenues of investigation into the neural basis of sequential monitoring and control.

Frank SM, Sun L, Forster L, Tse PU, Greenlee MW (2016). Crossmodal attention effects in vestibular cortex during attentive tracking of moving objects. J Neuroscience, pii: 2480-16.

View the Full Publication Here.

Abstract

The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking.

Manning JR, Hulbert JC, Williams J, Piloto L, Sahakyan L, Norman KA (2016). A neural signature of contextually mediated intentional forgetting. Psychonomic Bulletin & Review.

View the Full Publication Here.

Abstract

The mental context in which we experience an event plays a fundamental role in how we organize our memories of an event (e.g. in relation to other events) and, in turn, how we retrieve those memories later. Because we use contextual representations to retrieve information pertaining to our past, processes that alter our representations of context can enhance or diminish our capacity to retrieve particular memories. We designed a functional magnetic resonance imaging (fMRI) experiment to test the hypothesis that people can intentionally forget previously experienced events by changing their mental representations of contextual information associated with those events. We had human participants study two lists of words, manipulating whether they were told to forget (or remember) the first list prior to studying the second list. We used pattern classifiers to track neural patterns that reflected contextual information associated with the first list and found that, consistent with the notion of contextual change, the activation of the first-list contextual representation was lower following a forget instruction than a remember instruction. Further, the magnitude of this neural signature of contextual change was negatively correlated with participants’ abilities to later recall items from the first list.
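
The classifier-based context readout described above can be sketched as follows: train a classifier to recognize list-1 study patterns, then treat its graded probability output on later timepoints as a continuous index of how active the list-1 context remains. Everything below is synthetic stand-in data, not the study's fMRI features or validation scheme.

```python
# Sketch of a graded "context activation" readout from a pattern classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_voxels = 100
ctx1 = rng.normal(size=n_voxels)                 # hypothetical list-1 context pattern

X_train = np.vstack([rng.normal(size=(40, n_voxels)) + ctx1,  # list-1 study scans
                     rng.normal(size=(40, n_voxels))])        # baseline scans
y_train = np.repeat([1, 0], 40)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# post-instruction timepoints with a weakened context signal, as predicted
# after a "forget" cue under the contextual-change account
X_post = rng.normal(size=(20, n_voxels)) + 0.3 * ctx1
evidence = clf.predict_proba(X_post)[:, 1]       # graded list-1 context evidence
print(f"mean list-1 context evidence: {evidence.mean():.2f}")
```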

Schlegel A, Konuthula D, Alexander P, Blackwood E, Tse PU (2016). Fundamentally distributed information processing integrates the motor network into the mental workspace during mental rotation...

View the Full Publication Here.

Schlegel A, Konuthula D, Alexander P, Blackwood E, Tse PU (2016). Fundamentally distributed information processing integrates the motor network into the mental workspace during mental rotation. J Cognitive Neuroscience, 28(8):1139-51.

Abstract

The manipulation of mental representations in the human brain appears to share similarities with the physical manipulation of real-world objects. In particular, some neuroimaging studies have found increased activity in motor regions during mental rotation, suggesting that mental and physical operations may involve overlapping neural populations. Does the motor network contribute information processing to mental rotation? If so, does it play a similar computational role in both mental and manual rotation, and how does it communicate with the wider network of areas involved in the mental workspace? Here we used multivariate methods and fMRI to study 24 participants as they mentally rotated 3-D objects or manually rotated their hands in one of four directions. We find that information processing related to mental rotations is distributed widely among many cortical and subcortical regions, that the motor network becomes tightly integrated into a wider mental workspace network during mental rotation, and that motor network activity during mental rotation only partially resembles that involved in manual rotation. Additionally, these findings provide evidence that the mental workspace is organized as a distributed core network that dynamically recruits specialized subnetworks for specific tasks as needed.

Soltani A, Khorsand P, Guo CZ, Farashahi S, Liu J (2016). Neural substrates of cognitive biases during probabilistic inference. Nature Communications, 7:11393.

View the Full Publication Here.

Abstract

Decision making often requires simultaneously learning about and combining evidence from various sources of information. However, when making inferences from these sources, humans show systematic biases that are often attributed to heuristics or limitations in cognitive processes. Here we use a combination of experimental and modelling approaches to reveal neural substrates of probabilistic inference and corresponding biases. We find systematic deviations from normative accounts of inference when alternative options are not equally rewarding; subjects' choice behaviour is biased towards the more rewarding option, whereas their inferences about individual cues show the opposite bias. Moreover, inference bias about combinations of cues depends on the number of cues. Using a biophysically plausible model, we link these biases to synaptic plasticity mechanisms modulated by reward expectation and attention. We demonstrate that inference relies on direct estimation of posteriors, not on combination of likelihoods and prior. Our work reveals novel mechanisms underlying cognitive biases and contributions of interactions between reward-dependent learning, decision making and attention to high-level reasoning.
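
The normative benchmark against which these biases are measured is ordinary Bayesian cue combination: with independent cues, the posterior odds equal the prior odds times the product of the cues' likelihood ratios. A short worked example with made-up likelihood ratios:

```python
# Normative cue combination: posterior odds = prior odds * product of likelihood ratios.
import numpy as np

likelihood_ratios = np.array([2.0, 1.5, 0.8])   # hypothetical evidence from three cues
prior_odds = 1.0                                # equally rewarding options

posterior_odds = prior_odds * likelihood_ratios.prod()   # 2.0 * 1.5 * 0.8 = 2.4
p_choose_A = posterior_odds / (1 + posterior_odds)       # ~0.71
print(f"posterior odds A:B = {posterior_odds:.2f}, P(A) = {p_choose_A:.2f}")
```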

Dotson NM, Salazar RF, Goodell AB, Hoffman SJ, Gray CM (2015). Methods, caveats, and the future of large-scale microelectrode recordings in the non-human primate...

View the Full Publication Here.

Dotson NM, Salazar RF, Goodell AB, Hoffman SJ, Gray CM (2015). Methods, caveats, and the future of large-scale microelectrode recordings in the non-human primate. Frontiers in Systems Neuroscience, 9, 149.

Abstract

Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field.