Consider for a moment that throughout your day you perform countless searches for objects and information in your environment: searching your kitchen for a coffee mug in the morning, finding a gas station on the way to work, scanning a page of Google results for an important detail, or locating your car in the parking lot at the end of the day. Visual search is a common but exceedingly important mental task. It is also central to a number of professions: TSA baggage screeners routinely search for dangerous objects, intelligence analysts scan drone images for actors or targets, and radiologists search medical scans for signs of physical abnormalities.
In general, my research focuses on this mental task (and other aspects of visual cognition) at the point where the visual stream meets memory. I study the intersection of visual working memory and spatial attention, particularly in the context of visual search. This line of work evolved from my earlier research on attention and stress, in which I examined how psychosocial stress, anticipatory stress, and their biological substrates influence aspects of attention. In my current research on visual search, I seek to understand how attention is guided through the environment during a commonly performed but understudied type of search called categorical search.
In categorical search, people look for any item (referred to as a target in an experimental setting) from a category, rather than a specific or highly familiar item. This is the type of search you are most likely to conduct throughout the day (e.g., searching for any trash can for your litter, or finding any writing utensil). Compare this to what the majority of visual search studies have participants do: view a picture of a specific item, letter, or shape (e.g., look for Ts among Ls; find a particular Landolt C) and then locate it on a computer screen. Outside the laboratory, however, we are rarely afforded the benefit of knowing the exact appearance of the object we will encounter, and we almost never see a picture of an item right before we search for it.
Through a variety of computer-based methods, I examine how mental representations of what people are looking for guide their attention and eyes toward items that match the category. In addition to standard behavioral measurements (e.g., reaction time and accuracy), I employ more sophisticated techniques, such as eye-tracking, that give me a deeper understanding of the cognitive processes underlying search. I have also used other physiological measures, such as pupillometry (i.e., changes in pupil size), to examine memory processes during search.
I have also applied my knowledge and expertise in attention to human factors issues in cybersecurity. In my lab, we are currently examining attentional and decision-making failures made by the operators who identify threats to a cyber network (cyber defenders). A component of this research is to understand the basic cognitive processes associated with information search, or how we look for information (e.g., scanning a webpage for a particular sentence, or scanning lines of code). Cyber defenders rely on information search to monitor networks and detect intrusions. The project will later proceed to identifying best practices for improving human performance in cybersecurity.
Thank you for visiting my website!