Psarras, S; Fatah gen. Schieck, A; Zarkali, A; Hanna, S; (2019) Visual Saliency in Navigation: Modelling Navigational Behaviour using Saliency and Depth Analysis. In: Proceedings of the 12th International Space Syntax Symposium. International Space Syntax Symposium: Beijing, China.
Text: SpaceSyntax2019_StamatiosPsarras313.pdf - Accepted Version. Available under licence: see the attached licence file.
Abstract
Spatial configuration is extensively used in spatial analysis methods to predict navigational behavior; several studies have shown a correlation of global and local space syntax metrics with the distribution of pedestrians in different settings. However, recent studies have also shown a correlation of human behavior with the visibility of relevant objects; for example, the visibility of paintings influenced navigational choices in museums. These findings suggest that, in addition to spatial configuration and depth, visually or semantically important elements play a role in wayfinding. Incorporating these additional characteristics into existing analyses could improve our ability to predict and model navigational behavior in the built environment. The main tool for identifying which elements of a visual scene matter most is saliency detection. Saliency is the subjective perceptual quality that makes some stimuli within a visual scene stand out from their neighbors and capture the observer's attention; it is determined at very early stages of visual processing, implying a certain generality of saliency across different observers. Saliency detection algorithms have been used extensively in computer vision and have shown correlation with observed behavior in eye-tracking studies. Despite the attractiveness for architectural design of a general tool that can identify prominent objects within a visual scene, saliency detection has not been applied to spatial perception and navigation. Here we examine the application of different saliency detection algorithms in the context of the built environment. We recorded the navigational behavior of 143 pedestrians moving freely in an open space. We compared existing isovist models with the observed behavior. We then tested different saliency detection models, as well as a hybrid model combining isovist and saliency detection, for their capacity to predict change in angular direction during navigation. Saliency had a significant negative correlation with change in angular direction, and combining saliency with depth-based isovist models improved prediction performance. Our findings suggest that saliency has the potential to be a significant addition to traditional isovist models in predicting and modeling navigational behavior.
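The abstract does not specify which saliency detection models or isovist measures were used, so the following is only a minimal illustrative sketch: it computes a spectral-residual saliency map (one widely used saliency detection algorithm, available in opencv-contrib-python) for an image of each pedestrian's view, and correlates the mean saliency of each view with the subsequent change in angular direction. The file paths, headings, and helper functions are hypothetical placeholders, not the study's data or pipeline.

# Minimal sketch, not the authors' pipeline. Assumes opencv-contrib-python
# (for cv2.saliency), numpy, and scipy; the image paths and headings below
# are hypothetical placeholders.
import cv2
import numpy as np
from scipy import stats

def mean_scene_saliency(image_path):
    # Mean spectral-residual saliency of one view image (one common
    # saliency detection algorithm; the study compared several models).
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed for " + image_path)
    return float(saliency_map.mean())

def heading_change(headings_deg):
    # Absolute change in angular direction between successive samples,
    # wrapped into [0, 180] degrees.
    diff = np.diff(headings_deg)
    return np.abs((diff + 180.0) % 360.0 - 180.0)

if __name__ == "__main__":
    # Placeholder inputs: one rendered or photographed view per recorded
    # pedestrian position, plus the heading measured at that position.
    view_images = ["view_step_%03d.png" % i for i in range(10)]
    headings = np.random.default_rng(0).uniform(0.0, 360.0, size=10)

    saliency_per_step = np.array([mean_scene_saliency(p) for p in view_images])
    angular_change = heading_change(headings)

    # Correlate the saliency of the view at step i with the turn taken
    # between steps i and i+1 (the paper reports a significant negative
    # correlation between saliency and change in angular direction).
    r, p_value = stats.pearsonr(saliency_per_step[:-1], angular_change)
    print("Pearson r = %.3f, p = %.3g" % (r, p_value))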