Journal of Eye Movement Research

Eye-tracking Analysis of Interactive 3D Geovisualization

Lukas Herman 1, Stanislav Popelka 2, Vendula Hejlova 2
1 Masaryk University, Czech Republic
2 Palacký University, Czech Republic

Collection date 2017.

This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited.

PMCID: PMC7141050  PMID:33828655

Abstract

This paper describes a new tool for collecting and analysing eye-tracking data recorded over interactive 3D models. The tool makes the analysis of gaze on interactive 3D models easier than time-consuming, frame-by-frame investigation of captured screen recordings with superimposed scanpaths. The main function of this tool, called 3DgazeR, is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the position and orientation of a virtual camera and the 2D coordinates of the gaze upon the screen. The functionality of 3DgazeR is introduced in a case study using Digital Elevation Models as stimuli. The purpose of the case study was to verify the functionality of the tool and discover the most suitable visualization methods for geographic 3D models. Five selected methods are presented in the results section of the paper. Most of the output was created in a Geographic Information System. 3DgazeR works with the SMI eye-tracker and with the low-cost EyeTribe tracker connected to the open-source application OGAMA, and can compute 3D coordinates from raw data and fixations.

Keywords: eye-tracking, 3D visualization, 3D model, cartography, Geographic Information System, 3D analysis tool

Introduction

The introduction summarizes the state of the art in 3D cartography eye-tracking research, followed by a presentation of previous attempts to record eye-tracking data over interactive 3D models. In the methods section, 3DgazeR and its implementation are described. The results contain five selected data visualization methods applied in the example of a simple case study. At the end of the paper, the advantages and limitations of 3DgazeR are summarized.

3D Geovisualization

Bleisch (1) defines 3D geovisualization as a generic term used for a range of 3D visualizations representing the real world, parts of the real world, or other data with a spatial reference. With the advent of virtual globes such as Google Earth, or perhaps even earlier with the notion of a digital earth (2), they have become increasingly popular, and many people already know about 3D geovisualizations even though they may not refer to them as such. Most 3D geovisualizations are digital elevation models draped with ortho or satellite imagery and relatively detailed 3D city models (1). These perspective views are often referred to as 3D maps. An overview of the usability and usefulness of 3D geovisualizations was presented by Çöltekin, Lokka (3). The authors categorized the results from existing empirical studies according to visualization type, task type, and user type.

3D geovisualization is not limited to the depiction of terrain where the Z axis represents elevation. The development of a phenomenon in time is often displayed, for example, with the aid of a so-called Space-Time-Cube (STC). Hägerstrand (4) proposed a framework for time geography to study social interaction and the movement of individuals in space and time. The STC is a visual representation of this framework, in which the cube's horizontal plane represents space and the vertical axis represents time (5). With a Space-Time-Cube, any spatio-temporal data can be displayed, for example, information recorded by GPS devices, statistics with location and time components, or data acquired with eye-tracking technology (6).

3D maps and visualizations can generally be divided into two categories: static and interactive. Static visualizations are essentially perspective views (images) of a 3D scene. In interactive 3D visualizations, the user can control and manipulate the scene. The disadvantages of static 3D maps are mainly overlapping objects in the 3D scene and the distortion of distant objects. Inexperienced users may have problems with scene manipulation using a mouse (7).

Most of the cases referred to as 3D geovisualization are not true 3D but pseudo 3D (or 2.5D – each X and Y coordinate corresponds to exactly one Z value). According to Kraak (8), true 3D can be used in those cases where special equipment achieves a realistic 3D projection (e.g. 3D LCD displays, holograms, stereoscopic images, anaglyphs or physical models).

Haeberling (9) notes that there are almost no cartographic theories or principles for creating 3D maps. In his dissertation, Góralski (10) also argues that solid knowledge of 3D cartography is still missing. A similar view can be found in other studies (7, 11-13). The authors report that there is still very little known about how and in which cases 3D visualization can be effectively used. It is therefore necessary to perform an appropriate assessment of the usability of 3D maps.

Usability methods for 3D geovisualization (3D maps)

Due to the massive increase in map production in recent years, it is important to focus on map usability research. Maps can be modified and optimized to better serve users based on the results of this research.

One of the first works dealing with map usability research was published by Petchenik (14). In her work "Cognition in Cartography", she states that for the successful transfer of information between the map creator and the map reader, it is necessary for the reader to understand the map in the same way as the map creator. The challenge of cognitive cartography is understanding how users read various map elements and how the meanings of those elements vary between different users.

The primary direction of cognitive cartography research leads to studies of how maps are perceived, in order to increase their efficiency and adapt their design to the needs of a specific group of users. The International Cartographic Association (ICA) has two commissions devoted to map users, the appraisal of map effectiveness, and map optimization – the Commission on Use and User Issues (http://use.icaci.org/) and the Commission on Cognitive Visualization (http://cogvis.icaci.org/). User aspects are examined with respect to the different purposes of maps (for example Staněk, Friedmannová (15) or Kubíček, Šašinka (16)).

Haeberling (17) evaluated the design variables employed in 3D maps (camera angle and distance, the direction of light, sky settings and the amount of haze). Petrovič and Mašera (18) used a questionnaire to determine user preferences between 2D and 3D maps. Participants of their study had to decide which type of map they would use to solve four tasks: measuring distances, comparing elevation, determining the direction of north, and evaluating the direction of tilt. The results showed that 3D maps are better for estimating elevation and orientation than their 2D equivalents, but may cause problems for distance measuring.

Savage, Wiebe (19) tried to answer the question of whether using 3D perspective views has an advantage over using traditional 2D topographic maps. Participants were randomly divided into two groups and asked to solve spatial tasks with either a 2D or a 3D map. The results of the study showed no advantage in using 3D maps for tasks that involved estimating elevation. Additionally, in tasks where it was not necessary to determine an object's elevation (e.g. measuring distances), the 3D variant performed worse.

User testing of 3D interactive virtual environments is relatively scarce. One of the few articles describing such an environment is presented by Wilkening and Fabrikant (20). Using the Google Earth application, they monitored the proportion of applied movement types – zoom, pan, tilt, and rotation. Bleisch, Burkhard (21) assessed the 3D visualization of abstract numeric data. Although speed and accuracy were measured, no information about navigation in 3D space was recorded in this study. Lokka and Çöltekin (22) investigated memory capacity in the context of navigating a path in a virtual 3D environment and observed the differences between age groups.

Previous studies (20, 23, 24) indicate that there are considerable differences between individuals in how they read maps, especially in the strategies and procedures used to determine an answer to a question. Eye-tracking facilitates the study of such map-reading strategies.

Eye-tracking in Cartography

Although eye-tracking was first used to study maps in the late 1950s, it has seen increased use over the last ten to fifteen years. Probably the first eye-tracking study evaluating cartographic products was that of Enoch (25), who used simple maps drawn on a background of aerial images as stimuli. Steinke (26) presented one of the first published summaries of the application of eye-tracking in cartography. He compiled the results of former research and highlighted the importance of distinguishing between the perceptions of user groups of different ages or education.

Today, several departments in Europe and the USA conduct eye-tracking research in cartography (27). In Olomouc, Czech Republic, eye-tracking has been used to study the output of landscape visibility analyses (28) and to investigate cartographic principles (29). In Zurich, Switzerland, Fabrikant, Rebich-Hespanha (30) evaluated a series of maps expressing the evolution of a phenomenon over time, as well as weather maps (31). Çöltekin from the same university analyzed users' visual analytics strategies (32). In Ghent, Belgium, paper and digital topographic maps were compared (33) and differences in attentive behavior between novice and expert map users were analyzed (34). Ooms, Çöltekin (35) proposed a methodology for combining eye-tracking with user logging in order to reference eye-movement data to geographic objects. This approach is similar to ours, but a dynamic map is used instead of a 3D model.

Eye-tracking to assess 3D visualization

The issue of 3D visualization on maps has so far only been addressed marginally. At Texas State University, Fuhrmann, Komogortsev (36) evaluated the differences in how a traditional topographic map and its 3D holographic equivalent were perceived. Participants were asked to suggest an optimal route. Analysis of the eye-tracking metrics showed that the holographic map was the better option.

One of the first and more complex studies dealing with eye-tracking and the evaluation of 3D maps is the study by Putto, Kettunen (37). In this study, the impact of three types of terrain visualization was evaluated while participants solved four tasks (visual search, area selection, and route planning). The shortest average fixation duration was observed for the shaded relief, indicating that this method is the easiest for users.

Eye-tracking for evaluating 3D visualization in cartography is widely used at Palacký University in Olomouc, Czech Republic, where studies have examined the differences in how 3D relief maps are perceived (38), 3D maps of cities (39), a 3D model of an extinct village (40), and tourist maps with hill-shading (41). These studies showed that it is not possible to generalize the results and state that 3D is more effective than 2D or vice versa. The effectiveness of a visualization depends on the exact type of stimuli and also on the task.

In all these studies, static images were used as stimuli. Nevertheless, the main advantage of 3D models is being able to manipulate them (pan, zoom, rotate). An analysis of eye-tracking data measured on interactive stimuli is costly, as eye-trackers produce video material with overlaid gaze-cursors and any classification of fixations requires extensive manual effort (42). Eye-tracking studies dealing with interactive 3D stimuli typically rely on a time-consuming frame-by-frame analysis of captured screen recordings with superimposed scanpaths. One of the few available gaze visualization techniques for 3D contexts is the representation of fixations and saccades as 3D scanpaths (43). A challenge with 3D stimuli is mapping fixations onto the correct geometrical model of the stimulus (44).

Several attempts to analyze eye-tracking data recorded during work with interactive 3D stimuli exist. Probably the most extensive work has been done by Stellmach, who developed a tool called SWEETER – a gaze analysis tool adapted to the Tobii eye-tracker system and the XNA Framework. SWEETER offers a coherent framework for loading 3D scenes and corresponding gaze data logs, as well as deploying adapted gaze visualization techniques (45).

Another method for visualizing gaze data on dynamic stimuli was developed by Ramloll, Trepagnier (46). It is especially useful for 3D objects on retail sites that allow shoppers to examine products as interactive, non-stereoscopic 3D objects on 2D displays. In this approach, each gaze position and fixation point is mapped to a 3D object's relevant polygon. The 3D object is then flattened and overlaid with the appropriate gaze visualizations. The advantage of this flattening is that the output can be reproduced on a 2D static medium (i.e. paper).

Both approaches used a remote eye-tracker to record data. Pfeiffer (42) used a head-mounted eye-tracking system by Arrington Research. This study extended recent approaches of combining eye-tracking with motion capture, including holistic estimations of the 3D point of regard. In addition, he presented a refined version of 3D attention volumes for representing and visualizing attention in 3D space.

Duchowski, Medlin (47) developed an algorithm for binocular eye-tracking in virtual reality, which is capable of calculating the three-dimensional virtual coordinates of the viewer's gaze.

A head-mounted eye-tracker from SMI was used in the study of Baldauf, Fröhlich (48), who developed the application KIBITZER – a wearable gaze-sensitive system for exploring urban surroundings. The eye-tracker is connected to a smartphone, and the user's eye-gaze is analyzed to scan the visible surroundings for georeferenced digital information. The user is informed about points of interest in his or her current gaze direction.

SMI glasses were also involved in the work of Paletta, Santner (49), who used them in combination with Microsoft Kinect. A 3D model of the environment was acquired with Microsoft Kinect, and gaze positions captured by the SMI glasses were mapped onto the 3D model.

Unfortunately, all the presented approaches work with specific types of devices and are not generally available to the public. For this reason, we decided to develop our own application called 3DgazeR (3D Gaze Recorder). 3DgazeR can place recorded raw data and fixations into the 3D model's coordinate system. The application works primarily with geographical 3D models (DEM – Digital Elevation Models in our pilot study). Most of the case study results are visualized in the open-source Geographic Information System QGIS. The application works with data from an SMI RED 250 device and from a low-cost EyeTribe eye-tracker connected to the open-source application OGAMA. Many other eye-trackers can be connected to OGAMA, and our tool will then work with their data.

Methods

We designed and implemented our own experimental application, 3DgazeR, due to the unavailability of tools allowing eye-tracking while using interactive 3D stimuli. The main function of this instrument is to calculate the 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the position and orientation of a virtual camera and the 2D coordinates of the gaze on the screen. The 2D screen coordinates are obtained from the eye-tracking system, and the position and orientation of the virtual camera are recorded with the 3DgazeR tool (see Figure 1).

Fig. 1. Schema of 3DgazeR modules.

3DgazeR incorporates a modular design. The three modules are:

• Data acquisition module

• Connecting module, to combine the virtual camera data and eye-tracking system data

• Calculating module, to calculate 3D coordinates

The modular design reduces computational complexity for data acquisition. Data for the gaze position and for the virtual camera position and orientation are recorded independently. Combining the data and calculating 3D coordinates is done in the post-processing phase. Splitting the modules for combining data and calculating 3D coordinates allows information from different eye-tracking systems (SMI RED, EyeTribe) and various types of data (raw data, fixations) to be processed.

All three modules constituting 3DgazeR use only open web technologies: HTML (HyperText Markup Language), PHP (Hypertext Preprocessor), JavaScript, jQuery and the X3DOM JavaScript library for rendering 3D graphics. The X3DOM library was chosen because of its broad support in commonly used web browsers, its documentation, and the accessibility and availability of software to create stimuli. X3DOM uses the X3D (eXtensible 3D) structure format and is built on HTML5, JavaScript, and WebGL. The current implementation of X3DOM uses a so-called fallback model that renders 3D scenes through an InstantReality plug-in, a Flash11 plug-in, or WebGL; no specific plug-in is needed to run X3DOM. X3DOM is free for both non-commercial and commercial use (50). Common JavaScript events, such as onclick on 3D objects, are supported in X3DOM. A runtime API is also available and provides a proxy object for reading and modifying runtime parameters programmatically. The API functions serve for interactive navigation, resetting views or changing navigation modes. X3D data can be stored in an HTML file or as part of external files; their combination is achieved via an Inline element. Particular X3D elements can be clearly distinguished through their DEF attribute, which is a unique identifier. Other principles and advantages of X3DOM are described in (50-53).
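The snippet below is a minimal sketch of how these X3DOM features can be used from JavaScript. It assumes an <x3d> element with id="scene" and a Shape element whose DEF attribute is "terrain"; the method names follow the X3DOM runtime API, but the exact signatures should be checked against the X3DOM version in use.

```javascript
// Runs once X3DOM has finished initializing (x3dom.js must be loaded on the page).
x3dom.runtime.ready = function () {
  var runtime = document.getElementById("scene").runtime;  // proxy object for runtime parameters

  // Common JavaScript events can be attached to 3D elements identified by their DEF attribute.
  document.querySelector("shape[DEF='terrain']").addEventListener("click", function (evt) {
    // hitPnt is the 3D hit point reported by X3DOM pick events (if available in the used version)
    console.log("terrain clicked", evt.hitPnt);
  });

  // Runtime API calls for navigation, e.g. resetting the view or changing the navigation mode:
  // runtime.resetView();
  // runtime.examine();
};
```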

Data acquisition module

The data acquisition module is used to collect primary data. Its main component is a window containing the 3D model used as a stimulus. This 3D scene can be navigated or otherwise manipulated. The rendering of virtual content inside a graphics pipeline is the orthographic or perspective projection of 3D geometry onto a 2D plane. The parameters for this projection are usually defined by some form of virtual camera. Only the main parameters, the position and orientation of the virtual camera, are recorded in the proposed solution. The position and orientation of the virtual camera are recorded every 50 milliseconds (a recording frequency of 20 Hz). The recording is performed using functions from the X3DOM runtime API and JavaScript in general. The recorded position and orientation of the virtual camera are sent to a server every two seconds and stored by a PHP script in a CSV (Comma Separated Values) file. The 3D scene loading time is also stored, as it is necessary for the subsequent combination with eye-tracking data; the termination time of the 3D scene is stored as well. The interface is designed as a full-screen 3D scene, while input for answers is provided on the following screen (after the 3D scene).
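A minimal sketch of this acquisition loop is shown below. The endpoint name store_camera.php and the helper getCameraPose() are hypothetical placeholders; in 3DgazeR the camera pose is read via the X3DOM runtime API and the server-side part is a PHP script that appends the posted rows to a CSV file.

```javascript
var cameraBuffer = [];
var sceneStart = Date.now();            // scene loading time, stored together with the data

function getCameraPose() {
  // Placeholder: in 3DgazeR this would query the X3DOM runtime API for the current
  // Viewpoint position and orientation; dummy values are returned here.
  return { position: [0, 0, 10], orientation: [0, 1, 0, 0] };
}

// Record the camera position and orientation every 50 ms (20 Hz).
setInterval(function () {
  var pose = getCameraPose();
  cameraBuffer.push([Date.now() - sceneStart].concat(pose.position, pose.orientation));
}, 50);

// Every two seconds, send the buffered records to the server as CSV rows.
setInterval(function () {
  if (cameraBuffer.length === 0) { return; }
  var rows = cameraBuffer.map(function (r) { return r.join(";"); }).join("\n");
  cameraBuffer = [];
  fetch("store_camera.php", { method: "POST", body: rows });  // hypothetical PHP endpoint
}, 2000);
```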

Connection module

The connecting module combines two partial CSV files based on timestamps. The first step is joining the trimmed data (from the eye-tracker and from the movement of the virtual environment) by markers of the beginning and end of the depiction of a 3D scene. The beginning in both records is designated as time 0. Each record from the eye-tracker is then assigned to the nearest previous recorded position of the virtual camera (by timestamp), which is the simplest method of joining temporal data and was straightforward to implement. The maximum time deviation (inaccuracy) is 16.67 ms.
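The following sketch illustrates the joining step under the assumption that both inputs are arrays of objects already trimmed and sorted by timestamp t (in milliseconds, with 0 marking the moment the 3D scene was shown); it is a simplified illustration of the nearest-previous-record rule, not the exact 3DgazeR code.

```javascript
// Assign each gaze record to the nearest *previous* camera record by timestamp.
function joinGazeWithCamera(gazeRecords, cameraRecords) {
  var joined = [];
  var ci = 0;
  gazeRecords.forEach(function (g) {
    // advance while the next camera record is not later than the gaze sample
    while (ci + 1 < cameraRecords.length && cameraRecords[ci + 1].t <= g.t) {
      ci++;
    }
    joined.push({ t: g.t, x: g.x, y: g.y, camera: cameraRecords[ci] });
  });
  return joined;
}
```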

Fig. 2. Examples of eye-tracking data (left) and virtual camera movement data (right) and a schema of their connection.

Four variants of the connecting module were created – for the SMI RED 250 and the EyeTribe, and for raw data and fixations. The entire connecting module is implemented in JavaScript.

Calculating module

The calculating module comprises a similar window and 3D model to those used in the test module. The same screen resolution must be used as during the acquisition of data. For every record, the intersection of the viewing ray with the displayed 3D model is calculated. The 3D scene is depicted with the virtual camera's input position and orientation. The X3DOM runtime API function getViewingRay and the screen coordinates are used as input data for this calculation. Setting and calculating the virtual camera's parameters is automated using a for loop. The result is a table containing timestamps, 3D scene coordinates (X, Y, Z), the DEF of the element the ray intersects with, and optionally a normal vector at this intersection. If the user is not looking at any particular 3D object, this fact is also recorded, including whether the user is looking beyond the dimensions of the monitor. This function is based on the ray casting method (see Figure 3; a code sketch follows the list below) and can be divided into three steps:

• calculation of the direction of the viewing ray from the virtual camera position, orientation and screen coordinates (function calcViewRay);

• ray casting to the scene;

• finding the intersection with the closest object (function hitPnt).

For more information about ray casting see Hughes, Van Dam (54).
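As a rough illustration, the per-record loop of the calculating module could look like the sketch below. The helpers setCamera() and intersectScene() are hypothetical stand-ins for the X3DOM-based steps described above (viewing-ray construction via getViewingRay and the hit-point lookup); the output mirrors the columns of the result table.

```javascript
// Hypothetical stand-ins for the X3DOM-based steps (assumptions, not real API calls):
function setCamera(position, orientation) { /* move the Viewpoint to the recorded pose */ }
function intersectScene(x, y) {
  // wraps the viewing-ray construction and the hit-point lookup;
  // would return e.g. { point: [x3d, y3d, z3d], def: "terrain" } or null if nothing is hit
  return null;
}

function calculate3DGaze(joinedRecords) {
  var results = [];
  joinedRecords.forEach(function (rec) {
    // 1. depict the scene with the recorded camera position and orientation
    setCamera(rec.camera.position, rec.camera.orientation);
    // 2. cast a viewing ray through the 2D gaze coordinates
    // 3. take the intersection with the closest object (or none)
    var hit = intersectScene(rec.x, rec.y);
    results.push({
      t: rec.t,
      x3d: hit ? hit.point[0] : null,
      y3d: hit ? hit.point[1] : null,
      z3d: hit ? hit.point[2] : null,
      def: hit ? hit.def : "none"       // DEF identifier of the intersected element
    });
  });
  return results;
}
```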

Fig. 3. Principle of the ray casting method for 3D scene coordinate calculation.

GIS software is used for the additional processing, analysis, and visualization of the calculated data. We primarily used the open-source program QGIS, but ArcGIS with 3D Analyst and ArcScene (a 3D viewing application) can also be used. We worked with QGIS version 2.12 with several additional plug-ins. The most important was the Qgis2threejs plug-in, which creates 3D models and exports terrain data, map canvas images, and overlaid vector data to a web browser supporting WebGL.

Pilot study

Our pilot experiment was designed as exploratory research. The primary goal of this experiment was to test the possibilities of 3DgazeR in evaluating different methods of visualization and analyzing eye-tracking data acquired with an interactive 3D model.

Apparatus, tasks and stimuli

For the testing, we chose a low-cost EyeTribe device. Currently, the EyeTribe tracker is the most inexpensive commercial eye-tracker in the world, at a price of $99 (https://theeyetribe.com). Popelka, Stachon (55) compared the precision of the EyeTribe and the professional device SMI RED 250. The results of the comparison show that the EyeTribe tracker can be a valuable tool for cartographic research. The eye-tracker was connected to the OGAMA software (56). The device operated at a frequency of 30 Hz. The EyeTribe also works at a frequency of 60 Hz; however, saving information about camera orientation caused problems at frequencies higher than 20 Hz. Some computer setups were not able to store camera data correctly when the frequency was higher than 20 Hz: the file was shorter than the real recording because some rows were omitted.

Two versions of the test were created – variant A and variant B. Each variant included eight tasks over almost the same 3D models (differing only in the texture used). The 3D models in variant A had no transparency, and the terrain was covered with a hypsometric color scale (from green to brown). The same hypsometric scale covered four 3D models in variant B, but transparency was set at 30%. The second half of the models in variant B had no transparency, but the terrain was covered with satellite images from Landsat 8. The order of the tasks was different in the two variants. A comparison of variant A and variant B for the same task is shown in Figure 4. Four tasks were required:

• Which object has the highest elevation? (Variant A – tasks 1, 5; Variant B – tasks 1, 2)

• Find the highest peak. (Variant A – tasks 2, 6; Variant B – tasks 3, 4)

• Which elements are visible from the given position? (Variant A – tasks 3, 7; Variant B – tasks 5, 6)

• From which positions is a given object visible? (Variant A – tasks 4, 8; Variant B – tasks 7, 8)

The first two tasks had only one correct answer, whilethe other two had one or more correct answers.

Fig. 4. An example of stimuli from variant A – terrain covered with a hypsometric scale (left) and variant B – terrain covered with a satellite image (right).

Design and participants

Before the pilot test, we decided that 20 participants would be tested on both variants, with an interval of at least three days between the two testing sessions. Participants were recorded on both variants but, because of this interval, were not influenced by a learning effect when performing the second variant of the test.

Half of the participants were students of the Department of Geoinformatics with cartographic knowledge; the other half were cartographic novices. Half of the participants were men, half women. The age range was 18-32 years.

The screen resolution was 1600 x 900 and the sampling frequency was set to 30 Hz. Each participant was seated at an appropriate distance from the monitor, with the eye-tracking device calibrated with 16 points. Calibration results of either Perfect or Good (on the scale used in OGAMA) were accepted. An external keyboard was connected to the laptop to start and end the tasks (F2 key for the start and F3 for the end); a researcher controlled the keyboard. The participant performed the test using only a common PC mouse.

The experiment began with calibrating the device in the OGAMA environment. After that, participants filled in their ID and other personal information such as age, sex, etc. The experiment was captured as a screen recording.

Prepared with individual HTML pages, the experiment included questions, tasks, 3D models, and input screens for answers. The names of the CSV files in which the virtual camera movement was recorded coincided with the task names in the subsequently created eye-tracking experiment in OGAMA, which allowed correct combination in the connecting module.

As recording began, a page with initial information about the experiment appeared. The experiment ran in Google Chrome in full-screen mode and is available (in Czech) at http://eyetracking.upol.cz/3d/. Each task was limited to 60 seconds, and the whole experiment lasted approximately 10 to 15 minutes. A longer total experiment time may affect user performance; evidence from previous experiments showed that when a recording exceeds about 20 minutes, participants become tired and lose concentration.

Care was taken with the correct starting time for tasks. A screen with a target symbol appeared after the 3D model had loaded. The participant switched to the task by pressing the F2 key. This keypress was recorded by OGAMA and used by 3DgazeR to divide the recording according to the task. After that, the participant could manipulate the 3D model to try to discover the correct answer. The participant then pressed F3, and a screen with a selection of answers appeared.

Recording, data processing and validation

It is necessary to store the data for each task separately and, where needed, to check or manually modify it (e.g. delete unnecessary lines at the end of a recording). The data is then processed in the connecting module, where data from the eye-tracking device is combined with the virtual camera movement. The output is then sent to the calculating module, which must be switched to full-screen mode; the calculation must take place at the same screen resolution as the testing. The output should be modified for import into GIS software and visualized. For example, the data format of the time column had to be modified into the form required to subsequently create animations.
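As an example of such an adjustment, the hypothetical helper below converts the elapsed time in milliseconds into an absolute date-time string that a GIS animation tool can interpret as a time field; the base date is arbitrary and the exact format required depends on the GIS software used.

```javascript
// Hypothetical conversion of elapsed milliseconds into a date-time string for a GIS time field.
function toTimeColumn(elapsedMs, baseDate) {
  var base = baseDate || new Date("2017-01-01T00:00:00Z");            // arbitrary base date
  var d = new Date(base.getTime() + elapsedMs);
  return d.toISOString().replace("T", " ").replace("Z", "");          // e.g. "2017-01-01 00:00:27.350"
}
```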

These adjusted data can be imported into QGIS. CSV data are loaded and displayed using the Create a Layer from a Delimited Text File dialog. The retrieved data can be stored in GML (Geography Markup Language) or Shapefile format as point layers. After exporting and re-rendering this new layer above the 3D model, it is possible that some data may have the wrong elevation (see Figure 5). This distortion occurs when the 3D model is rotated while the eyes are simultaneously focused on a specific place, or when the model is rotated and the eyes follow it with smooth pursuit. To remove these distortions and fit the eye-tracking data exactly onto the model, the Point Sampling Tool plug-in for QGIS was used.

Fig. 5. Raw data displayed as a layer in GIS software (green points – calculated 3D gaze data; red – points with incorrect elevation).

Evaluation of the data validity

For the evaluation of the validity of 3DgazeR output, we created a short animation of a 3D model with one red sphere in the middle. The diameter of the sphere was approximately 1/12 of the 3D model width. At the beginning, the sphere was located in the middle of the screen. After five seconds, the camera changed its position (the movement took two seconds), and the sphere moved to the upper left side of the screen. The camera stayed there for six seconds and then moved again, so the sphere was displayed in the next corner of the screen. This process was repeated for all four corners of the screen. The validation study is available at http://eyetracking.upol.cz/3d/.

The task of the participant was to look at the sphere at all times. The validation was performed on five participants. Recorded data were processed in the connecting and calculating modules of 3DgazeR. For the evaluation of data validity, we decided to analyze how many data samples were assigned to the sphere. Average values for all five participants are displayed in Figure 6. Each bar in the graph represents one camera position (or movement). The blue color corresponds to data samples where the gaze coordinates were assigned to the sphere; the red color is used when the gaze was recorded outside the sphere. It is evident that inaccuracies were observed for the first position of the sphere, because it took participants some time to find it. A similar problem was found when the first movement appeared. Later, the percentage of samples recorded outside the sphere is minimal. In total, the average share of samples recorded outside the sphere is 3.79%. These results showed that the tool works correctly and that the inaccuracies are caused by the inability of the respondents to keep their eyes focused on the sphere, which was verified by watching the video recordings in OGAMA.
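The core of this evaluation is a simple proportion, sketched below for the per-sample output of the calculating module (assuming a def column holding the DEF identifier of the intersected element).

```javascript
// Share of calculated gaze samples whose DEF identifier matches the target sphere.
function shareOnTarget(samples, targetDef) {
  var onTarget = samples.filter(function (s) { return s.def === targetDef; }).length;
  return 100 * onTarget / samples.length;   // e.g. 96.21 % on the sphere, 3.79 % outside
}
```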

Fig. 6. Evaluation of data validity. The red color corresponds to data samples where the gaze was not recorded on the target sphere.

Results

Visualization techniques allow researchers to analyze different levels and aspects of recorded eye-tracking data in an exploratory and qualitative way. They help to analyze the spatio-temporal aspect of eye-tracking data and the complex relationships it contains (44). We decided to use both fixations and raw data for visualization. 3D alternatives to the usual methods of eye-tracking data visualization were created, and other methods suitable for visualizing 3D eye-tracking data were explored. The following visualization methods were tested:

• 3D raw data

• 3D scanpath (fixations and saccades)

• 3D attention map

• Animation

• Graph of Z coordinate variation over time.

3D raw data

First, we tried to visualize raw data as simple points placed on a 3D surface. This method is very simple, but its main disadvantage is poor legibility, mainly in areas with a high density of points. The size, color, and transparency of the symbols can be set in the GIS software used. With this type of visualization, data from different groups of participants can be compared, as shown in Figure 7. Raw data displayed as points were used as input for creating other types of visualizations. Figure 7 shows the 3D visualization of raw data created in QGIS. Visualization of a large number of points in a 3D scene in a web browser through Three.js is hardware demanding; thus, visualization of raw data is more effective in ArcScene.

Fig. 7. Comparison of 3D raw data (red points – females, blue points – males) for variant B, task 6.

3D scanpath

The usual approach for depicting eye-tracking data is scanpath visualization superimposed on a stimulus representation. Scanpaths show the eye-movement trajectory by drawing connected lines (saccades) between subsequent fixation positions. A traditional spherical representation of fixations was chosen, but Stellmach, Nacke (45) also demonstrate other types of representation. Cones can be used to represent fixations, or viewpoints and view directions for camera paths.

The size of each sphere was determined from the length of the fixation. Fixations were detected in the OGAMA environment with the use of the I-DT algorithm. The thresholds were set to a maximum distance of 30 px and a minimum of three samples per fixation. Fixation length was used as the attribute for the size of each sphere. Transparency (30%) was set because of overlaps. In the next step we created 3D saccades linking the fixations. The PointConnector plug-in in QGIS was used for this purpose.
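For illustration, a simplified version of the I-DT idea with these two thresholds is sketched below; OGAMA's actual implementation may differ in details such as how dispersion and fixation centres are computed.

```javascript
// Simplified I-DT fixation detection over 2D gaze points {x, y}.
function detectFixationsIDT(points, maxDispersion, minSamples) {
  var fixations = [];
  var start = 0;

  function dispersion(win) {
    var xs = win.map(function (p) { return p.x; });
    var ys = win.map(function (p) { return p.y; });
    return (Math.max.apply(null, xs) - Math.min.apply(null, xs)) +
           (Math.max.apply(null, ys) - Math.min.apply(null, ys));
  }

  while (start + minSamples <= points.length) {
    var end = start + minSamples;
    if (dispersion(points.slice(start, end)) > maxDispersion) { start++; continue; }
    // grow the window while the dispersion stays under the threshold
    while (end < points.length && dispersion(points.slice(start, end + 1)) <= maxDispersion) {
      end++;
    }
    var win = points.slice(start, end);
    fixations.push({
      x: win.reduce(function (s, p) { return s + p.x; }, 0) / win.length,
      y: win.reduce(function (s, p) { return s + p.y; }, 0) / win.length,
      samples: win.length                     // proxy for fixation duration
    });
    start = end;
  }
  return fixations;
}

// Thresholds corresponding to those used in the study: detectFixationsIDT(rawGaze, 30, 3)
```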

This visualization method is quite clear. It provides an overview of the duration of individual fixations, their position, and their relation to each other. It tells where the participant's gaze lingered and where it stayed only briefly. Lines indicate whether a participant skipped between remote locations and back, or whether the observation of the stimulus was smooth. The scanpath of one participant solving variant A, task 4 is shown in Figure 8. From the length of the fixations, it is evident that the participant observed locations near the spherical bodies defining the target points crucial for solving the task. His gaze shifted progressively from target to target, whereby the red target attracted the most attention.

Fig. 8. Scanpath (3D fixations and saccades) of one user for variant A, task 4. An interactive version is available at http://eyetracking.upol.cz/3d/.

3D Attention Map

Visual gaze analysis in three-dimensional virtual environments still lacks methods and techniques for aggregating attentional representations. Stellmach, Nacke (45) introduced three types of attention maps suitable for 3D stimuli – projected, object-based, and surface-based attention maps. For Digital Elevation Models, the use of projected attention maps is the most appropriate. Object-based attention maps, which are relatively similar to the concept of Areas of Interest, can also be used for eye-tracking analysis of interactive 3D models with 3DgazeR. In this case, stimuli must contain predetermined components (objects) with unique identifiers (the DEF attribute in the X3DOM library).

Projected attention maps can be created in the ArcScene environment using the Heatmap plug-in in QGIS. Heatmap calculates the density of features (in our case fixations) in a neighborhood around those features; conceptually, a smoothly curved surface is fitted over each point. The important parameters for creating a heatmap are the grid cell size and the search radius. We used a cell size of 25 m (about one thousandth of the terrain model size) and the default search radius (see Figure 9).
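The sketch below illustrates the underlying density calculation in a simplified form: fixations within the search radius of each grid cell centre are counted. The QGIS Heatmap plug-in additionally applies a smoothing kernel, so this is only an approximation of its behaviour.

```javascript
// Simplified grid-density calculation over fixation points {x, y} in map units.
function gridDensity(fixations, extent, cellSize, searchRadius) {
  var cols = Math.ceil((extent.xMax - extent.xMin) / cellSize);
  var rows = Math.ceil((extent.yMax - extent.yMin) / cellSize);
  var grid = [];
  for (var r = 0; r < rows; r++) {
    grid.push(new Array(cols).fill(0));
    for (var c = 0; c < cols; c++) {
      var cx = extent.xMin + (c + 0.5) * cellSize;   // cell centre
      var cy = extent.yMin + (r + 0.5) * cellSize;
      fixations.forEach(function (f) {
        if (Math.hypot(f.x - cx, f.y - cy) <= searchRadius) { grid[r][c] += 1; }
      });
    }
  }
  return grid;   // e.g. gridDensity(fixations, extent, 25, 100) for a 25 m cell size
}
```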

Fig. 9. Comparison of 3D attention maps from cartographers (left) and non-cartographers (right) for variant B, task 6. Interactive versions are available at http://eyetracking.upol.cz/3d/.

The advantage of projected attention maps is their clarity when visualizing a large amount of data. In a Geographic Information System, the exact color scheme of the attention map can be defined (with minimum and maximum values).

An interesting result was obtained from task 6, variant B. Figure 9 compares the resultant attention maps from participants with cartographic knowledge with those from the general public. For cartographers, the most important part of the terrain was around the blue cube. Participants without cartographic knowledge focused on other objects in the terrain. An interpretation of this behavior could be that the cartographers were consistent with the task and looked at the blue cube from different areas. By contrast, novices used the opposite approach and investigated which objects were visible from the blue cube's position.

Animation

A suitable tool for evaluating user strategies is animation. Creating an animation with a 3D model is not possible in QGIS, so we used ArcScene (with the Create Time Animation function) for this purpose. The model can also be rotated during the animation, providing interactivity from data acquisition through to final analysis. Animations can be used to study the fixations of individuals or to compare several users. Animations can be exported from ArcScene as video files (e.g. AVI), but the export loses interactivity. AVI files exported from ArcScene are available at http://eyetracking.upol.cz/3d/. A method similar to animation is taking screenshots, which can alternatively be used in the qualitative (manual) analysis of critical task-solving moments, such as the end of a task or the entering of an answer.

Graph

When analyzing 3D eye-tracking data, it is appropriate to concentrate on the Z coordinate (height). From the data recorded with 3DgazeR, the changes of the Z coordinate over time can be displayed, so the elevations the participants looked at in the model during the test can be investigated. Data from ArcScene were exported into a DBF table and then analyzed in OpenOffice Calc. A scatter plot with data points connected by lines is suitable here. A graph can be created for one participant (see Figure 10) or for multiple participants. In this case, a graph of raw Z coordinate values was created.

It is apparent from this graph when participants looked at higher ground or lowlands. In Figure 10, we can see how the participant initially fluctuated between elevations when observing locations and focused on the highest point around the 27th second of the task. In general, we conclude that this participant studied the entire terrain quite carefully and looked at a variety of low to very high elevations.

Fig. 10. Graph of observed elevations during the task (variant A, task 4, participant no. 20).

Discussion

We developed our own testing tool, 3DgazeR, because none of the software tools found in the literature review were freely available for use. Those software tools worked with specific devices or had proprietary licenses, and were not free or open-source software. To fill this gap, 3DgazeR is freely available to interested parties under a BSD license. An English version of 3DgazeR is available at http://eyetracking.upol.cz/3d/. Furthermore, 3DgazeR has several significant advantages:

• It permits the evaluation of different types of 3D stimuli because the X3DOM library is very flexible – for an overview of various 3D models displayed through X3DOM see (51-53)

• It is based on open web technologies and is thus an inexpensive solution that does not need special software or plug-ins installed on the client or server side

• It combines open JavaScript libraries and PHP, and so may be easily extended or modified

• It writes data into a CSV file, allowing easy analysis in various commercial, freeware, and open-source programs.

3DgazeR also demonstrates general approaches to creating eye-tracking analyses of interactive 3D visualizations. Some limitations of this testing tool, however, were identified during the pilot test:

• A higher recording frequency of the virtual camera position and orientation in the data acquisition module would allow greater precision during analysis

• Some of the calculated 3D gaze data (points) are not correctly placed on the surface. This distortion happens when the 3D model is rotated while the eyes are simultaneously focused on a specific place, or when the model is rotated and the eyes follow it with smooth pursuit. A higher frequency of recording the virtual camera position and orientation can solve this problem

• Data processing is time-consuming and involves manual effort. Automating this process and developing tools to speed up data analysis and visualization would greatly enhance productivity.

Future development of 3DgazeR should aim at overcoming these limitations. Other possible extensions to our methodology and the 3DgazeR tool have been identified:

• We want to modify 3DgazeR to support other types of 3D models (e.g. 3D models of buildings, machines, or similar objects), and to focus mainly on the design and testing of procedures to create 3D models comprising individual parts marked with unique identifiers (as mentioned above – with a DEF attribute). Such 3D models also allow us to create object-based attention maps. The first trials in this direction are already underway. They represent simple 3D models which are predominantly created manually. This is time-consuming and requires knowledge of the XML (eXtensible Markup Language) structure and the X3D format. We would like to simplify and automate this process as much as possible in the future.

• We would like to increase the frequency of recording the position and orientation of the virtual camera, especially during its movement. On the other hand, when there is no user interaction (the virtual camera position does not change), it would be suitable to decrease the frequency to reduce the size of the created CSV file. The ideal solution would be recording with an adaptive frequency, depending on whether the virtual camera is moving or not.

• We also want to improve the connecting module to use a more accurate method for joining the virtual camera movement data with the data from the eye-tracking system.

• We tested primarily open-source software (QGIS, OpenOffice Calc) for the visualization of the results. The creation of 3D animation was not possible in QGIS, so the commercial software ArcScene was used for this purpose. The use of ArcScene is also more effective in the case of raw data visualization. We want to test the possibilities of advanced statistical analysis in an open-source program, e.g. R.

3DgazeR enables each participant's strategy (e.g. Fig. 8 and Fig. 10) to be studied, pairs of participants to be compared, and group strategies (e.g. Fig. 7 and Fig. 9) to be analyzed. In the future, once the above adjustments and additions have been included, we want to use 3DgazeR for complex analysis of user interaction in virtual space and to compare 3D eye-tracking data with the user interaction recordings introduced by Herman and Stachoň (24). We would like to extend the results of existing studies, e.g. (45), in this manner.

Conclusion

We created an experimental tool called 3DgazeR to record eye-tracking data for interactive 3D visualizations. 3DgazeR is freely available to interested parties under a BSD license. The main function of 3DgazeR is to calculate 3D coordinates (X, Y, Z coordinates of the 3D scene) for individual points of view. These 3D coordinates can be calculated from the position and orientation of a virtual camera and the 2D coordinates of the gaze upon the screen. 3DgazeR works with both the SMI eye-tracker and the low-cost EyeTribe tracker and can compute 3D coordinates from raw data and fixations. The functionality of 3DgazeR has been tested in a case study using terrain models (DEM) as stimuli. The purpose of this test was to verify the functionality of the tool and discover suitable methods of visualizing and analyzing the recorded data. Five visualization methods were proposed and evaluated: 3D raw data, 3D scanpath (fixations and saccades), 3D attention map (heat map), animation, and a graph of Z coordinate variation over time.

Acknowledgements

A special thank you to Lucie Bartosova, who performed the testing and did a lot of work preparing data for analysis. Lukas Herman is supported by Grant No. MUNI/M/0846/2015, "Influence of cartographic visualization methods on the success of solving practical and educational spatial tasks", and Grant No. MUNI/A/1419/2016, "Integrated research on environmental changes in the landscape sphere of Earth II", both awarded by Masaryk University, Czech Republic. Stanislav Popelka is supported by the Operational Program Education for Competitiveness – European Social Fund (project CZ.1.07/2.3.00/20.0170 of the Ministry of Education, Youth and Sports of the Czech Republic).

References

  1. Bleisch S, Burkhard J and Nebiker S (2009) Efficient Integration of data graphics into virtual 3D Environments. Proceedings of 24th International Cartography Conference.
  2. Gore A. (1998). The digital earth: understanding our planet in the 21st century.Australian surveyor, 43(2), 89–91. doi: 10.1080/00050348.1998.10558728 [DOI] [Google Scholar]
  3. Çöltekin A., Lokka I., & Zahner M. (2016). On the Usability and Usefulness of 3D (Geo) Visualizations-A Focus on Virtual Reality Environments.The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI(B2), 387–392. 10.5194/isprs-archives-xli-b2-387-2016 [DOI] [Google Scholar]
  4. Hägerstrand T. (1970). What about people in regional science? Papers in Regional Science, 24(1), 7–24. 10.1111/j.1435-5597.1970.tb01464.x [DOI] [Google Scholar]
  5. Kveladze I., Kraak M.-J., & van Elzakker C. P. (2013). A methodological framework for researching the usability of the space-time cube. The Cartographic Journal, 50(3), 201–210. 10.1179/1743277413Y.0000000061 [DOI] [Google Scholar]
  6. Li X., Çöltekin A., & Kraak M.-J. (2010). Visual exploration of eye movement data using the space-time-cube.Geographic Information Science, Springer, 295-309. [Google Scholar]
  7. Wood J., Kirschenbauer S., Döllner J., Lopes A., & Bodum L. (2005). Using 3D in visualization In Dykes J. (Ed.),. Exploring Geovisualization (pp. 295–312). Elsevier; 10.1016/B978-008044531-1/50432-2 [DOI] [Google Scholar]
  8. Kraak M. (1988) Computer-assisted cartographical 3D imaging techniques. Delft University Press, 175. [Google Scholar]
  9. Haeberling C. (2002). 3D Map Presentation – A Systematic Evaluation of Important Graphic Aspects. Proceedings of ICA Mountain Cartography Workshop "Mount Hood", 1-11.
  10. Góralski R. (2009). Three-dimensional interactive maps: theory and practice. Unpublished Ph.D. thesis. University of Glamorgan. [Google Scholar]
  11. Ellis G., & Dix A. (2006). An explorative analysis of user evaluation studies in information visualisation.Proceedings of the 2006 AVI workshop, 1-7. doi: 10.1145/1168149.1168152 [DOI]
  12. MacEachren A. M. (2004). How maps work: representation, visualization, and design. Guilford Press, 513. [Google Scholar]
  13. Slocum T. A., Blok C., Jiang B., Koussoulakou A., Montello D. R., Fuhrmann S., & Hedley N. R. (2001). Cognitive and usability issues in geovisualization.Cartography and Geographic Information Science, 28(1), 61–75. 10.1559/152304001782173998 [DOI] [Google Scholar]
  14. Petchenik B. B. (1977). Cognition in cartography. Cartographica. The International Journal for Geographic Information and Geovisualization, 14(1), 117–128. 10.3138/97R4-84N4-4226-0P24 [DOI] [Google Scholar]
  15. Staněk K., Friedmannová L., Kubíček P., & Konečný M. (2010). Selected issues of cartographic communication optimization for emergency centers.International Journal of Digital Earth, 3(4), 316–339. 10.1080/17538947.2010.484511 [DOI] [Google Scholar]
  16. Kubíček P., Šašinka Č., Stachoň Z., Štěrba Z., Apeltauer J., & Urbánek T. (2017). Cartographic Design and Usability of Visual Variables for Linear Features.The Cartographic Journal, 54(1), 91–102. 10.1080/00087041.2016.1168141 [DOI] [Google Scholar]
  17. Haeberling C. (2003). Topografische 3D-Karten-Thesen für kartografische Gestaltungsgrundsätze. ETH Zürich; 10.3929/ethz-a-004709715 [DOI] [Google Scholar]
  18. Petrovič D., & Mašera P. (2004). Analysis of user’s response on 3D cartographic presentations. Proceedings of 7th meeting of the ICA Commission on Mountain Cartography, 1-10.
  19. Savage D. M., Wiebe E. N., & Devine H. A. (2004). Performance of 2d versus 3d topographic representations for different task types.Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(16), 1793–1797. 10.1177/154193120404801601 [DOI] [Google Scholar]
  20. Wilkening J., & Fabrikant S. I. (2013). How users interact with a 3D geo-browser under time pressure.Cartography and Geographic Information Science, 40(1), 40–52. 10.1080/15230406.2013.762140 [DOI] [Google Scholar]
  21. Bleisch S, Burkhard J and Nebiker S (2009) Efficient Integration of data graphics into virtual 3D Environ-ments.Proceedings of 24th International Cartography Conference.
  22. Lokka I., & Çöltekin A. (2016). Simulating navigation with virtual 3D geovisualizations–A focus on memory related factors.The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI(B2), 671–673. 10.5194/isprsarchives-XLI-B2-671-2016 [DOI] [Google Scholar]
  23. Špriňarová K., Juřík V., Šašinka Č., Herman L., Štěrba Z., Stachoň Z., Chmelík J., Kozlíková B. (2015). Human-Computer Interaction in Real-3D and Pseudo-3D Cartographic Visualization: A Comparative Study. Cartography - Maps Connecting the World, Springer, 59-73. doi: 10.1007/978-3-319-17738-0_5 [DOI] [Google Scholar]
  24. Herman L., & Stachoň Z. (2016). Comparison of User Performance with Interactive and Static 3D Visualization–Pilot Study.The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI(B2), 655–661. 10.5194/isprs-archives-XLI-B2-655-2016 [DOI] [Google Scholar]
  25. Enoch J. M. (1959). Effect of the size of a complex display upon visual search.JOSA, 49(3), 280–286. 10.1364/JOSA.49.000280 [DOI] [PubMed] [Google Scholar]
  26. Steinke T. R. (1987). Eye movement studies in cartography and related fields. Cartographica. The International Journal for Geographic Information and Geovisualization, 24(2), 40–73. 10.3138/J166-635U-7R56-X2L1 [DOI] [Google Scholar]
  27. Wang S., Chen Y., Yuan Y., Ye H., & Zheng S. (2016). Visualizing the Intellectual Structure of Eye Movement Research in Cartography. ISPRS International Journal of Geo-Information, 5(10), 168. 10.3390/ijgi5100168 [DOI] [Google Scholar]
  28. Popelka S., Brychtova A., Svobodova J., Brus J., & Dolezal J. (2013). Advanced visibility analyses and visibility evaluation using eye-tracking.Proceedings of 21st International Conference on Geoinformatics, 1-6. doi: 10.1109/Geoinformatics.2013.6626176 [DOI]
  29. Brychtova A., Popelka S., & Dobesova Z. (2012). Eye - tracking methods for investigation of cartographic principles. 12th International Multidisciplinary Scientific Geoconference. SGEM, II, 1041–1048. 10.5593/sgem2012/s09.v2016 [DOI] [Google Scholar]
  30. Fabrikant S. I., Rebich-Hespanha S., Andrienko N., Andrienko G., & Montello D. R. (2008). Novel method to measure inference affordance in static small-multiple map displays representing dynamic processes.The Cartographic Journal, 45(3), 201–215. 10.1179/000870408X311396 [DOI] [Google Scholar]
  31. Fabrikant S. I., Hespanha S. R., & Hegarty M. (2010). Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making.Annals of the Association of American Geographers, 100(1), 13–29. 10.1080/00045600903362378 [DOI] [Google Scholar]
  32. Çöltekin A., Fabrikant S., & Lacayo M. (2010). Exploring the efficiency of users’ visual analytics strategies based on sequence analysis of eye movement recordings. International Journal of Geographical Information Science, 24(10), 1559–1575. 10.1080/13658816.2010.511718 [DOI] [Google Scholar]
  33. Incoul A., Ooms K., & De Maeyer P. (2015). Comparing paper and digital topographic maps using eye tracking.Modern Trends in Cartography, Springer, 339-356. doi: 10.1007/978-3-319-07926-4_26 [DOI] [Google Scholar]
  34. Ooms K., De Maeyer P., & Fack V. (2014). Study of the attentive behavior of novice and expert map users using eye tracking.Cartography and Geographic Information Science, 41(1), 37–54. 10.1080/15230406.2013.860255 [DOI] [Google Scholar]
  35. Ooms K., Çöltekin A., De Maeyer P., Dupont L., Fabrikant S., Incoul A., et al. Van der Haegen L. (2015). Combining user logging with eye tracking for interactive and dynamic applications.Behavior Research Methods, 47(4), 977–993. 10.3758/s13428-014-0542-3 [DOI] [PubMed] [Google Scholar]
  36. Fuhrmann S., Komogortsev O., & Tamir D. (2009). Investigating Hologram‐Based Route Planning.Transactions in GIS, 13(s1), 177–196. 10.1111/j.1467-9671.2009.01158.x [DOI] [Google Scholar]
  37. Putto K., Kettunen P., Torniainen J., Krause C. M., & Tiina Sarjakoski L. (2014). Effects of cartographic elevation visualizations and map-reading tasks on eye movements.The Cartographic Journal, 51(3), 225–236. 10.1179/1743277414Y.0000000087 [DOI] [Google Scholar]
  38. Popelka S., & Brychtova A. (2013). Eye-tracking Study on Different Perception of 2D and 3D Terrain Visual-isation.The Cartographic Journal, 50(3), 240–246. 10.1179/1743277413y.0000000058 [DOI] [Google Scholar]
  39. Dolezalova J., & Popelka S. (2016). Evaluation of the user strategy on 2D and 3D city maps based on novel scanpath comparison method and graph visualization.The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 637-640. 10.5194/isprsarchives-XLI-B2-637-2016 [DOI] [Google Scholar]
  40. Popelka S., & Dedkova P. (2014). Extinct village 3D visualization and its evaluation with eye-movement recording.Lecture Notes in Computer Science, 8579, 786–795. 10.1007/978-3-319-09144-0_54 [DOI] [Google Scholar]
  41. Popelka S. (2014). The role of hill-shading in tourist maps.CEUR Workshop Proceedings, 17-21
  42. Pfeiffer T. (2012). Measuring and visualizing attention in space with 3d attention volumes.Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 29-36. doi: 10.1145/2168556.2168560 [DOI]
  43. Stellmach S., Nacke L., & Dachselt R. (2010a). 3d attentional maps: aggregated gaze visualizations in three-dimensional virtual environments. Proceedings of the international conference on advanced visual interfaces, ACM, 345-348. 10.1145/1842993.1843058 [DOI]
  44. Blascheck T., Kurzhals K., Raschke M., Burch M., Weiskopf D., & Ertl T. (2014). State-of-the-art of visualization for eye tracking data.Proceedings of EuroVis. doi: 10.2312/eurovisstar.20141173 [DOI]
  45. Stellmach S., Nacke L., & Dachselt R. (2010b). Advanced gaze visualizations for three-dimensional virtual environments. Proceedings of the 2010 symposium on eye-tracking research & applications, ACM, 109-112. [Google Scholar]
  46. Ramloll R., Trepagnier C., Sebrechts M., & Beedasy J. (2004). Gaze data visualization tools: opportunities and challenges.Proceedings of Eighth International Conference on Information Visualisation, 173-180. doi: 10.1109/IV.2004.1320141 [DOI]
  47. Duchowski A., Medlin E., Cournia N., Murphy H., Gramopadhye A., Nair S., et al. Melloy B. (2002). 3-D eye movement analysis.Behavior Research Methods, Instruments, & Computers, 34(4), 573–591. 10.3758/BF03195486 [DOI] [PubMed] [Google Scholar]
  48. Baldauf M., Fröhlich P., & Hutter S. (2010) KIBITZER: a wearable system for eye-gaze-based mobile urban exploration. Proceedings of the 1st Augmented Human International Conference. ACM, 9-13. doi: 10.1145/1785455.1785464 [DOI]
  49. Paletta L., Santner K., Fritz G., Mayer H., & Schrammel J. (2013). 3D attention: measurement of visual saliency using eye tracking glasses. CHI'13 Extended Abstracts on Human Factors in Computing Systems. ACM, 199-204. doi: 10.1145/2468356.2468393 [DOI] [Google Scholar]
  50. Behr J., Eschler P., Jung Y., & Zöllner M. (2009). X3DOM: a DOM-based HTML5/X3D integration model. Proceedings of the 14th International Conference on 3D Web Technology, ACM, 127-135. doi: 10.1145/1559764.1559784 [DOI] [Google Scholar]
  51. Behr J., Jung Y., Keil J., Drevensek T., Zoellner M., Eschler P., & Fellner D. (2010). A scalable architecture for the HTML5/X3D integration model X3DOM. Proceedings of the 15th International Conference on Web 3D Technology, ACM. doi: 10.1145/1836049.1836077 [DOI] [Google Scholar]
  52. Herman L., & Reznik T. (2015). 3D web visualization of environmental information-integration of heterogeneous data sources when providing navigation and interaction.The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-3(W3), 479–485. 10.5194/isprsarchives-XL-3-W3-479-2015 [DOI] [Google Scholar]
  53. Herman L., & Russnák J. (2016). X3DOM: Open Web Platform for Presenting 3D Geographical Data and E-learning.Proceedings of 23rd Central European Conference, 31-40.
  54. Hughes J. F., Van Dam A., Foley J. D., & Feiner S. K. (2014). Computer graphics: principles and practice.Pearson Education, 1264. [Google Scholar]
  55. Popelka S., Stachoň Z., Šašinka Č., & Doležalová J. (2016). EyeTribe Tracker Data Accuracy Evaluation and Its Interconnection with Hypothesis Software for Cartographic Purposes.Computational Intelligence and Neuroscience, 2016, 1-14. 10.1155/2016/9172506 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Vosskühler A., Nordmeier V., Kuchinke L., & Jacobs A. M. (2008). OGAMA (Open Gaze and Mouse Analyzer): Open-source software designed to analyze eye and mouse movements in slideshow study designs.Behavior Research Methods, 40(4), 1150–1162. 10.3758/BRM.40.4.1150 [DOI] [PubMed] [Google Scholar]
