Consciousness is our primary area of interest. According to the Higher-Order Theory of consciousness, a (first-order) perceptual representation of an object is by itself sufficient only for unconscious perception of that object. Conscious perception arises if and only if there is, in addition, an appropriate (higher-order) representation indicating to the subject that the relevant (first-order) perceptual representation reliably reflects the current state of the world. This view has been defended by numerous authors in philosophy and is well supported by modern empirical science.

Neuroscience of Metacognition

Why are some brain processes conscious while others aren’t? With the philosopher David Rosenthal, we have argued that to understand this we need to understand metacognition (Lau & Rosenthal 2011; Lau 2008).

    • Megan Peters builds mathematical (i.e. Bayesian) and biologically realistic (neural network) models of how we rate confidence in visual perception tasks, and tests them with psychophysics (Peters & Lau, 2015), ECoG, and fMRI.
    • JD Knotts studies how different masking techniques can be used to disrupt subjects’ confidence judgments without changing how well they perform visual discrimination tasks.
    • Piercesare Grimaldi analyzes how confidence is generated at the level of neuronal circuitry.
    • Jorge Morales is running an fMRI project comparing metacognition in perception with metacognition in memory.
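To give a flavor of the Bayesian modeling mentioned above, here is a minimal sketch of how confidence can be read off a posterior probability under equal-variance signal detection assumptions. The function name and parameter values are illustrative only, not taken from the lab’s actual models.

```python
import numpy as np

def bayesian_confidence(x, mu=1.0, sigma=1.0):
    """Posterior probability that the chosen category is correct, given an
    internal signal x, under equal-variance Gaussian signal detection theory:
    stimulus A ~ N(-mu, sigma), stimulus B ~ N(+mu, sigma), flat prior."""
    # Likelihood of the signal under each stimulus category
    like_b = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    like_a = np.exp(-(x + mu) ** 2 / (2 * sigma ** 2))
    p_b = like_b / (like_a + like_b)   # posterior probability of category B
    # Confidence is the posterior probability of whichever choice was made
    return np.maximum(p_b, 1 - p_b)

# A signal near the decision boundary yields low confidence,
# while a strong signal yields high confidence.
low = bayesian_confidence(0.1)
high = bayesian_confidence(2.5)
```

The appeal of this kind of model is that confidence falls out of the same probabilistic machinery as the perceptual decision itself, which makes it straightforward to test against psychophysical data.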

    Many of these projects are done in collaboration with Steve Fleming, a principal investigator at the Wellcome Trust Centre for Neuroimaging in London. Jason Carpenter, who was hired in Los Angeles, works there to facilitate the collaboration. This work is supported in part by an R01 grant from the National Institute of Neurological Disorders and Stroke (NINDS) of the National Institutes of Health (NIH).


    When we were at Columbia in New York (2007-2014), lab alumnus Brian Maniscalco (who is now working at NYU) developed a measure called meta-d’ to quantify how well people can do metacognition. We are very happy that many people are now using our toolbox. Ria Bhatt is now extending that measure to account for more complex types of tasks and data.
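Estimating meta-d’ itself requires model fitting, which is what the toolbox handles; but the type-1 quantity it is expressed in, d’, is easy to illustrate. The sketch below computes d’ from hit and false-alarm rates under equal-variance Gaussian assumptions; the numbers plugged in for the “fitted” meta-d’ are purely hypothetical, for illustration only.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Type-1 sensitivity under equal-variance Gaussian SDT:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# meta-d' is the d' a metacognitively ideal observer would need in order to
# produce the observed confidence ratings. Comparing it with the actual d'
# (e.g. the ratio meta-d'/d') indexes metacognitive efficiency; a
# well-calibrated observer has meta-d' close to d'.
d1 = d_prime(0.8, 0.2)        # actual task sensitivity
meta_d = d_prime(0.75, 0.25)  # hypothetical fitted meta-d' value
efficiency = meta_d / d1      # below 1: some metacognitive inefficiency
```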


Outside the focus of your attention, do you see colorful details? It turns out this is related to metacognition too: you probably don’t really see that much, but you may think you see more than you actually do.

    • Lab alumnus Dobromir Rahnev (who now has his own lab at Georgia Tech) reported some findings that changed the way we think about these phenomena.
    • Jorge and another alumnus, Guillermo Solovey (who is now a faculty member at the University of Buenos Aires), have both followed this up with a couple of papers (Solovey et al. 2015; Morales et al. 2015). This work is supported in part by a grant from the Air Force Office of Scientific Research.
    • Brian Odegaard is extending these findings to more naturalistic and complex stimuli.
    • Bill Kowalski is also doing a project along these lines, trying to pin down the qualitative differences between attended and unattended vision. It looks like unattended vision is not just “weaker”; something more complicated is going on.

Clinical Application

In collaboration with Mitsuo Kawato at ATR in Japan, we have started studies of multivoxel neurofeedback, a.k.a. decoded neurofeedback (DecNef). Aurelio Cortese, who is visiting from their lab, has used this new tool to manipulate people’s confidence levels. We will also be using it to find ways to erase fear memories to treat anxiety disorders, exciting projects that we are starting in collaboration with Michelle Craske. Because of these, we are also starting to look into fear conditioning and learning in general, especially in social contexts, with a focus on understanding formal theories and models.