Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4232)
Abstract
What does a human's eye tell a human's brain? In this paper, we analyze the information capacity of visual attention. Our hypothesis is that the limit of perceptible spatial frequency is related to observing time: given more time, one can extract higher-resolution (that is, higher spatial frequency) information from the presented visual stimuli. We designed an experiment that simulates natural viewing conditions, in which the time-dependent characteristics of attention can be evoked, and we recorded the temporal responses of six subjects. Based on the experimental results, we propose a person-independent model that characterizes the behavior of the eyes, relating visual spatial resolution to the duration of attentional concentration. This model suggests that the information capacity of visual attention is time-dependent.
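The abstract does not give the model's functional form, but the claimed relationship (longer attentional concentration yields higher perceptible spatial frequency) can be illustrated with a minimal sketch. The saturating-exponential form, the parameter names `r_max` and `tau`, and all numeric values below are assumptions for illustration only, not the authors' model:

```python
import numpy as np

# Hypothetical functional form (NOT taken from the paper): perceptible
# spatial frequency saturates with viewing time t as
#   r(t) = r_max * (1 - exp(-t / tau))
# where r_max is an assumed ceiling (cycles/degree) and tau an assumed
# time constant (seconds).
def perceptible_resolution(t, r_max=30.0, tau=0.5):
    """Spatial frequency perceivable after t seconds of attention (illustrative)."""
    return r_max * (1.0 - np.exp(-t / tau))

# Longer observation yields higher perceptible spatial frequency,
# approaching the ceiling r_max as t grows.
times = np.array([0.1, 0.5, 1.0, 2.0])
print(perceptible_resolution(times))
```

Any monotonically increasing, bounded function of time would serve equally well here; the sketch only captures the qualitative claim that resolution grows with observing time and cannot grow without limit.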
Author information
Authors and Affiliations
Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
Xiaodi Hou & Liqing Zhang
Editor information
Editors and Affiliations
Dept. of Computer Science and Engineering, The Chinese Univ. of Hong Kong, Shatin, N.T., Hong Kong
Irwin King
Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
Jun Wang
The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
Lai-Wan Chan
Department of Computer Science and Engineering & Center for Cognitive Science, The Ohio State University, Columbus, OH 43210
DeLiang Wang
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hou, X., Zhang, L. (2006). A Time-Dependent Model of Information Capacity of Visual Attention. In: King, I., Wang, J., Chan, L.W., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_15