Chelsea Donaldson, Briana Paulman, University of Oklahoma
Full manuscript: www.kon.org/urc/v12/donaldson.html
Abstract Attentional models of time perception suggest that when more attention is given to non-temporal information processing, fewer attentional resources are allocated to temporal processing, resulting in misperceptions of time. The current study sought to support these models by manipulating attention toward a slideshow, thereby indirectly manipulating the attention allocated to temporal processing. Although both groups viewed a slideshow of the same duration (102 seconds), the continuous attention group viewed 52 pictures at two seconds each, while the non-continuous attention group viewed 17 pictures at two seconds each with four-second blank-screen intervals between pictures. In accordance with attentional models, we predicted that continuous attention toward the slideshow would result in less accurate time estimations compared to the condition that included blank-screen intervals.
Rachael A. Divine, Mariam V. Balasanyan, Jennifer M. Vuong, Justin C. Latham, Robert J. Youmans*, California State University, Northridge
Full manuscript: www.kon.org/urc/v10/divine.html
Abstract Emotional regulation has become an important variable in understanding the effect emotions may have on attention and learning. In this study, 58 undergraduate students at California State University, Northridge were randomly assigned to watch one of two versions of an educational video. The information presented was identical in both versions, but the presenter was asked to be more aggressive in one version of the presentation and more neutral in the other. The study measured how well participants learned from each version of the video, and also how likely they were to notice surprising changes in background objects that the experimenters had carefully introduced via video editing. Results indicated that the aggressive presentation had a negative effect on participants’ ability to detect changes, but no effect on their memory for the semantic content of the video.