Reavis EA, Frank S, Tse PU (2018) Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions. Attention, Perception & Psychophysics. In press.
Bahmani Z, Daliri MR, Merrikhi Y, Clark K, Noudoost B (2018) Working Memory Enhances Cortical Representations via Spatially Specific Coordination of Spike Times. Neuron 97, 967-979.
Ziman K, Heusser AC, Fitzpatrick PC, Field CE, Manning JR (2017) Is automatic speech-to-text transcription ready for use in psychological experiments? PsyArXiv: psh48.
Verbal responses are a convenient and naturalistic way for participants to provide data in psychological experiments (Salzinger, 1959). However, compared with other behavioral response modalities (e.g. typed responses, button presses, etc.), audio recordings of verbal responses typically require additional processing, such as transcribing the recordings into text. Further, the transcription process is often tedious and time-intensive, requiring human listeners to manually examine each moment of recorded speech. Here we evaluate the performance of a state-of-the-art speech recognition algorithm (Halpern et al., 2016) in transcribing audio data into text during a list-learning experiment. We compare the computer-generated transcripts to transcripts made by human annotators. The two sets of transcripts matched each other to a high degree and exhibited similar statistical properties in terms of the recall performance and recall dynamics they captured. This proof-of-concept study suggests that speech-to-text engines could provide a cheap, reliable, and rapid means of automatically transcribing speech data in psychological experiments. Further, our findings open the door for verbal response experiments that scale to thousands of participants (e.g. administered online), as well as a new generation of experiments that decode speech on-the-fly and adapt experimental parameters based on participants’ prior responses.
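The transcript-matching analysis described above can be illustrated with a minimal word-level agreement check. This is only a sketch of the general idea, not the paper's actual analysis pipeline; the function names and example transcripts below are hypothetical.

```python
def normalize(transcript):
    """Lowercase each word and strip punctuation so 'Apple,' matches 'apple'."""
    return [w.strip(".,!?").lower() for w in transcript.split()]

def agreement(human, machine):
    """Jaccard agreement between the sets of recalled words in two transcripts.

    A crude, order-insensitive proxy for transcript matching: the fraction
    of unique recalled words on which the two transcripts agree.
    """
    h, m = set(normalize(human)), set(normalize(machine))
    if not h and not m:
        return 1.0  # two empty transcripts trivially agree
    return len(h & m) / len(h | m)

# Hypothetical example: one participant's recall of a studied word list,
# where the speech engine misheard one word ("cloud" -> "clown")
human_transcript = "apple table river cloud"
machine_transcript = "Apple, table river clown"
print(agreement(human_transcript, machine_transcript))  # → 0.6
```

A production comparison would also need to handle word order, repetitions, and intrusions, but the set-overlap measure captures the core comparison between human and machine transcripts.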
Erlikhman G, Caplovitz GP (2017) Decoding information about dynamically occluded objects in visual cortex. NeuroImage 146, 778-788. doi: 10.1016/j.neuroimage.2016.09.024.
During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether there is information maintained about the object itself (i.e. its shape or identity) or non-object-specific information such as its position or velocity as it is tracked behind an occluder, as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine representations within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using multivariate pattern analysis (MVPA). In contrast, object identity could be decoded in spatially specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may only represent the dynamically occluded object's position or motion path, while later visual areas represent object-specific information.
Frank SM, Greenlee MW, Tse PU (2017). Long Time No See: Enduring Behavioral and Neuronal Changes in Perceptual Learning of Motion Trajectories 3 Years After Training. Cerebral Cortex, 1-12. doi: 10.1093/cercor/bhx039.
Kohler PJ, Cavanagh P, Tse PU (2017). Motion-Induced Position Shifts Activate Early Visual Cortex. Front Neurosci, 11:168. doi: 10.3389/fnins.2017.00168.
The ability to correctly determine the position of objects in space is a fundamental task of the visual system. The perceived position of briefly presented static objects can be influenced by nearby moving contours, as demonstrated by various illusions collectively known as motion-induced position shifts. Here we use a stimulus that produces a particularly strong effect of motion on perceived position. We test whether several regions-of-interest (ROIs), at different stages of visual processing, encode the perceived rather than retinotopically veridical position. Specifically, we collect functional MRI data while participants experience motion-induced position shifts and use a multivariate pattern analysis approach to compare the activation patterns evoked by illusory position shifts with those evoked by matched physical shifts. We find that the illusory perceived position is represented at the earliest stages of the visual processing stream, including primary visual cortex. Surprisingly, we find no evidence of percept-based encoding of position in visual areas beyond area V3. This result suggests that while it is likely that higher-level visual areas are involved in position encoding, early visual cortex also plays an important role.
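The core logic of an MVPA comparison like the one above can be sketched with a minimal correlation-based pattern classifier: a test pattern is assigned to whichever condition's mean (template) pattern it correlates with most strongly. All voxel patterns below are made-up toy numbers, not data from the study.

```python
def pearson(a, b):
    """Pearson correlation between two voxel activation patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def classify(test_pattern, templates):
    """Assign the test pattern to the condition whose template it best matches."""
    return max(templates, key=lambda cond: pearson(test_pattern, templates[cond]))

# Toy templates: mean voxel responses to physical left vs. right shifts
templates = {
    "shift_left":  [1.2, 0.4, 0.9, 0.1],
    "shift_right": [0.2, 1.1, 0.3, 1.0],
}
# A pattern evoked by an illusory (motion-induced) shift: if it classifies
# with the matching physical shift, the ROI carries percept-based information
illusory = [1.0, 0.5, 0.8, 0.2]
print(classify(illusory, templates))  # → shift_left
```

Real MVPA analyses use cross-validated classifiers over hundreds of voxels per ROI, but the template-matching step above conveys why similar activation patterns for illusory and physical shifts count as evidence of percept-based encoding.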
Sun LW, Hartstein KC, Frank SM, Hassan W, Tse PU (2017) Back from the future: Volitional postdiction of perceived apparent motion direction. Vision Research. doi: 10.1016/j.visres.2017.09.001.
Among physical events, it is impossible that an event could alter its own past for the simple reason that past events precede future events, and not vice versa. Moreover, to do so would invoke impossible self-causation. However, mental events are constructed by physical neuronal processes that take a finite duration to execute. Given this fact, it is conceivable that later brain events could alter the ongoing interpretation of previous brain events if they arrive within this finite duration of interpretive processing, before a commitment is made to what happened. In the current study, we show that humans can volitionally influence how they perceive an ambiguous apparent motion sequence, as long as the top-down command occurs up to 300 ms after the occurrence of the actual motion event in the world. This finding supports the view that there is a temporal integration period over which perception is constructed on the basis of both bottom-up and top-down inputs.
Value-based decision making often involves integration of reward outcomes over time, but this becomes considerably more challenging if reward assignments on alternative options are probabilistic and non-stationary. Despite the existence of various models for optimally integrating reward under uncertainty, the underlying neural mechanisms are still unknown. Here we propose that reward-dependent metaplasticity (RDMP) can provide a plausible mechanism for both integration of reward under uncertainty and estimation of uncertainty itself. We show that a model based on RDMP can robustly perform the probabilistic reversal learning task via dynamic adjustment of learning based on reward feedback, while changes in its activity signal unexpected uncertainty. The model predicts time-dependent and choice-specific learning rates that strongly depend on reward history. Key predictions from this model were confirmed with behavioral data from non-human primates. Overall, our results suggest that metaplasticity can provide a neural substrate for adaptive learning and choice under uncertainty.