
Multimodal Search and Visualisation of Movies Based on Emotions Along Time in As Movies Go By


Abstract

Largely due to the significant emotional impact they have on viewers and on their lives, movies are a powerful vehicle for culture and education and one of the most important and impactful forms of entertainment. Technology has been playing an important role, by making a huge number of movies accessible through pervasive services and devices and by helping in emotion recognition and classification; the ability to search, visualize and access movies based on their emotional impact is thus becoming more pertinent, although emotions are seldom taken into account in these systems. In this paper, we characterize the challenges and approaches in this scenario, then present and evaluate, at two different stages, interactive means to visualize and search movies in As Movies Go By, based on their dominant emotional impact and its evolution along the movie, with different and personalizable models and modalities: through emotional highlights in words, colors, emojis and trajectories, by drawing emotional blueprints, or through users' emotional states, with the ability to get us into a movie in serendipitous moments.


Introduction

Movies have the ability to awaken the emotions of their viewers and to influence our moods, attitudes and, consequently, our health and wellbeing, making a difference in our lives. Partly for that reason, they have always had a very important role in society, and have become a powerful vehicle for culture, education, leisure and even propaganda [39]. The success of each movie depends on the emotions perceived and felt by the audience [3], and the emotional information related to this viewing experience is considered an important factor when searching for or choosing a film to watch, also determining its success [2,44]. Of particular importance is the safe environment movies provide to experience roles and emotions we might not otherwise be free to experience [43], and film has gained a uniquely powerful ubiquity within human culture [39], supported by pervasive services and devices.

In this context, the huge number of movies we can access, and the important role of emotions, make the ability to access, visualize and search movies based on their emotional impact all the more pertinent, both as a whole and along time: “As the frames move and tell a story, it is that movement which emotionally connects you” [39], and this is the journey, the path or emotional story, we want to capture and support. On the other hand, the rich content of movies appeals to different senses, and the ubiquity of their access creates opportunities to use different devices, even in casual situations and environments, suggesting multimodal access. Such situations arise when we want our current emotion taken into account, or want to draw an emotional path to search for in movies; possibly triggered by a piece of music we are listening to, one that moves us and reminds us of movies we like and how they made us feel; with the ability to get us into a movie in serendipitous moments.

This paper is an extended version and follow-up of [7], describing new features for personalized emotional models and a second evaluation, conducted with a balanced mix of users who had already participated in the first evaluation and users in their first contact with the application, with very encouraging results. In the following sections, we characterize and discuss the main motivations, challenges, concepts and approaches in related work, as background; then present and evaluate, in two different phases, interactive means to visualize and search movies based on their emotional impact, dominant as a whole or along the movie, with different models and modalities: in particular, through emotional highlights in words, colors, emojis and trajectories, by drawing emotional blueprints, or through users’ emotional states in their facial expressions, with the ability to get us into a movie in serendipitous ways. Finally, we conclude and discuss perspectives for future work.

Background

In this section, we present the most relevant concepts and related work, as a background and framework for our own work and contributions.

Emotions are complex sets of chemical and neural reactions that form a pattern and play a regulatory role, helping organisms to conserve life [14]. The study of emotion is complex, and each emotional experience is personal and can involve several emotions [5,31]. Scientists have been trying to understand and explain their dynamics, arriving at several different models and definitions [20], with two main branches standing out: dimensional, and categorical, with discrete states that can also be represented in the dimensional models. Among the most adopted: Russell’s Circumplex represents emotions in a two-dimensional (VA) space based on Valence (pleasantness, x-axis) and Arousal (intensity and energy, y-axis) [33]. Ekman’s categorical model [16], based on the emotional facial expressions recognized across cultures, has happiness/joy, anger, fear, sadness, disgust and surprise as its basic emotions. Plutchik’s 3D model [30] is both categorical and dimensional (polarity, similarity, intensity), with 8 primary emotions (Ekman’s 6, plus anticipation and trust) represented around the center, in colors, with intensity as the vertical dimension (in 3 levels); it may also be represented in 2D, with intensity levels going outwards, resembling a flower. The Geneva Emotion Wheel [34] is an appraisal-based model [35] organizing a set of 12 colored emotions around a circle with valence (x-axis) and control (y-axis) dimensions, and intensity decreasing towards the circle center. As a way to represent and express emotions, emojis have also become increasingly popular in computer-mediated communication, and can be classified in valence and arousal [17].
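For illustration, the following minimal TypeScript sketch encodes categorical emotions anchored in the VA space; the names follow Plutchik’s 8 primary emotions, but the VA placements and colors are rough, illustrative assumptions rather than values taken from any of the cited models.

```typescript
// Illustrative sketch: categorical emotions anchored in Russell's VA space.
// VA placements and colors are rough assumptions, not values from the models.
interface Emotion {
  name: string;
  valence: number; // pleasantness, x-axis, in [-1, 1]
  arousal: number; // intensity/energy, y-axis, in [-1, 1]
  color: string;   // display color on the wheel
}

const plutchik8: Emotion[] = [
  { name: "joy",          valence:  0.8, arousal:  0.5, color: "#ffd700" },
  { name: "trust",        valence:  0.6, arousal:  0.0, color: "#9acd32" },
  { name: "fear",         valence: -0.6, arousal:  0.7, color: "#2e8b57" },
  { name: "surprise",     valence:  0.2, arousal:  0.8, color: "#00bfff" },
  { name: "sadness",      valence: -0.8, arousal: -0.4, color: "#4169e1" },
  { name: "disgust",      valence: -0.6, arousal:  0.1, color: "#9400d3" },
  { name: "anger",        valence: -0.7, arousal:  0.6, color: "#ff4500" },
  { name: "anticipation", valence:  0.4, arousal:  0.3, color: "#ff8c00" },
];
```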

Besides representing and visualizing emotions, doing so along time, as movies progress and users experience different emotional impacts, comes with additional complexity and challenges. Timelines can help, but do not capture the dimensions of the emotion models; and those models do not tend to support time. In dimensional models like Russell’s, emotions can be represented on the wheel along time (with dots or small circles, by drawing lines or painting), but in the end it is not easy to distinguish the trajectory followed, nor obvious whether a long time was spent on the same emotion. Related work on trajectories and clustering can be of some help here.

Trajectories represent events over time and allow highlighting fundamental information [15] like the path, speed, time at a certain point and, through colors (e.g. fading), how it evolved over time. FeelTrace [12] (Fig. 1a) lets users track the perceived emotional content of speech in real time, using an activation-evaluation space (similar to arousal-valence), naturally circular, with neutrality at the center. TimeWheel [41] (Fig. 1b) also represents time within a circular shape, though not a wheel of emotions. In [13] (Fig. 1c), the authors adopt a semantic figurative metaphor of pulsing blood vessels to visualize Lisbon traffic, seeking a provocative perspective and trying to invoke emotional responses. In [19] (Fig. 1d), georeferenced videos were represented in space and time by their trajectories on maps, emphasizing the number of videos on the different routes, the speed at which the trajectories were filmed, and their age.

Fig. 1

Temporal representations: (a) FeelTrace [12]: Plutchik’s colors are used for the axes’ extremes, and emotions appear as circles with colors interpolated from those on the axes. Time is represented as circles shrink gradually, providing a visual indication of the way ratings changed; interesting, but it conflicts if circle size has another meaning. (b) TimeWheel [41]: instead of a time wheel, it represents a central axis for time and organizes the dependent axes or variables (as multidimensional data) around it, connecting each variable to its corresponding time by colored lines, for an intuitive perception of time dependencies. (c) Pulsing blood vessels [13]: clots represent slow traffic, and blood vessels represent the number of vehicles (making vessels thicker) and the average speed circulating in the city (faster: shorter). (d) Video trajectories [19]: georeferenced videos represented in space and time by their trajectories on maps, emphasizing the number of videos on the different routes (brighter and thicker blue lines), the speed at which the trajectories were filmed (higher arcs: more video shooting, thus slower pace), and their age (the bright green of trajectories fading over time)

The ability to search is paramount, especially in large information systems; search is usually based on properties or keywords, and it is often possible to browse the results, in search, exploratory and serendipitous browsing [9,10]. Information Visualization (IV) may help to deal with data complexity in intuitive and effective ways to express meaningful information [1,8,42]. As such, IV goes hand in hand with search, providing good representations for the results and for browsing. Although this emotional perspective has been gaining attention, most websites and movie search and recommendation systems, like IMDB, Netflix or HBO, do not support emotions. Instead, they search for films by actors, directors, ratings, genre, etc., and recommend based on popularity, most-watched movies, and genre similarities. We present both commercial and research-based search systems with more representative goals, as well as emotion-based search, access and visualization interfaces.

MovieWall [27] is an interactive browsing interface placing movies in a cluster of posters and highlighting them according to search criteria like genre, actor, producers or keywords. Search by color is also relevant, especially when mapping the colors to emotions. Multicolr (https://labs.tineye.com/multicolr) searches images by up to five color percentages, which the user can adjust. ColorsInMotion [23] explores and views videos based on dominant color and movement (rhythm), with different visualization and summarization approaches. As for movement and trajectories, in SightSurfers [37] we proposed multimodal interactive mobile interfaces to search and access georeferenced videos based on the trajectories’ shape and speed, and by time. No emotional dimension was explicitly supported for these videos, but there was a potential for emotional impact through increased engagement, sense of presence and immersion [32]; and this approach might inspire the search for movies with specific emotional trajectories in a wheel. Movie Clouds [9], on the other hand, used color to index or associate tag clouds with content (subtitles, emotions expressed in the subtitles, audio events and music mood) or emotional impact (based on sensors and Ekman’s emotions with Plutchik’s colors, as in iFelt [29]) on the movie timeline, and allowed painting a timeline to search for movies with that sequence or trajectory of content or impact. MEMOSE (MEdia EMOtion SEarch) is specialized in emotional search (based on tagging pictures with eighteen emotions), along with content tags (e.g. love and dogs) [21]. Movie Emotion Map (Fig. 2) goes further and aims to better understand the emotional space of films along with additional information [11].

Fig. 2

Movie Emotion Map overview (left) and search by emotions filter (right) [11]: it creates emotional signatures for each movie, based on the words in IMDB reviews and on Plutchik’s model [4]. Glyphs of different colors and intensities for emotional values are mapped onto a 2D graph, allowing users to search, view and interact with films according to the percentage of the desired emotion and criteria like rating, genre, etc. Results are highlighted on the map with the title in red, along with the movies with the most similar emotional signatures

Searching and Visualizing Movies Based on Emotions in As Movies Go By

This section presents the main features to search and visualize movies based on emotions, designed and developed for the main web application of the AWESOME project. The emotional impact on viewers is assessed while they watch the movies, to provide feedback and to catalog the movies: based on biosensors like EEG and EDA and on a webcam for facial expressions; and by having users engage in self-assessment and annotation of the movies, using different models or interfaces like categorical emotions, the manikin and the emotional wheel (both based on the VA dimensions) [28]; articulating with other project tasks where video content-based features are extracted, mostly from audio, subtitles and image.

This application also integrates our previous As Music Goes By [25,26,38], allowing users to search, visualize and explore music and movies from the complementary perspectives of music versions, artists, quotes and movie soundtracks. In this paper, we emphasize a perspective driven by movies, by the name of As Movies Go By. In the following sections, we present and discuss our emotional model approach, which we aim to keep rich and expressive, yet effective, flexible and easy to understand; then (in the order they were evaluated by users) the movie search and visualization features, based on the movies’ emotional impact, as a whole and along time, with multimodal interfaces for different contexts of use; and the personalized configuration of the emotion wheel.

Emotion Models and Representations

Our approach is based on Russell’s VA circumplex, or wheel, as the central model, where we color the wheel and also place categorical emotions (as words or emojis) to help convey more meaning. When using EEG, ECG or EDA sensors, the emotions are classified in VA by our classifiers; when using the webcam, users’ facial expressions are recognized in relation to Ekman’s 6 basic emotions, and are most naturally expressed by emojis. In any case, as long as the categorical emotions are associated with a VA position, and a color map is defined for the wheel, representations can be converted among each other (only rounded when the output precision is lower, e.g. VA to the nearest categorical emotion). We provide some default mappings (e.g. Plutchik or Geneva) to choose from; and, for flexibility and personalization, we are developing an interface for customizable wheels (Section “Personalizing and Configuring the Emotion Wheel”), defining: the categorical emotions, their VA positions, colors (by individual emotion, or via the whole wheel’s color map or image) and emojis. In Fig. 5a, we exemplify the wheel created for the VA positions and colors of the main 8 emotions in the Plutchik model: Ekman’s 6, plus Anticipation and Trust.
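As a sketch of this conversion, reusing the Emotion type from the earlier sketch: the rounding rule below is the nearest-neighbour reading suggested above, not necessarily the exact one implemented.

```typescript
// Sketch: rounding a VA reading to the nearest categorical emotion, for when
// the output precision is lower than the input (assumes a non-empty model).
function nearestEmotion(valence: number, arousal: number, model: Emotion[]): Emotion {
  let best = model[0];
  let bestDist = Infinity;
  for (const e of model) {
    const d = (e.valence - valence) ** 2 + (e.arousal - arousal) ** 2;
    if (d < bestDist) { bestDist = d; best = e; }
  }
  return best;
}

// e.g. nearestEmotion(0.7, 0.4, plutchik8).name -> "joy"
```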

Multimodal Searching of Movies by Emotional Impact

To Search by Dominant Emotion or by Emotional Trajectory, several views were designed, with the output representation matching the input, in different modalities, and giving access to the individual movies in the results, where they can be explored and watched along with their emotional visualizations, as described in Section “Visualizing Movie Emotional Impact Along Time”.

The search menu is presented at the top of the Homepage (Fig. 3a), where the user can see the trailer and information of a random movie, which can be changed, also at random, with a button click. The objective of this feature is to introduce users to movies that they may not know or may want to revisit and watch, as a flavour of surprise and serendipity. The search methods based on emotions are available in a menu, and they trigger popup windows (Figs. 3b1–c1, 4a1–b1).

Fig. 3

As Movies Go By: Homepage (a1) and movie search by dominant emotions. Search queries: (b1) categorical emotions in words, (c1) emojis and camera, (d1) colors in wheel. Search results: (b2) categorical emotions in words with complete movie information, (c2) dominant emotion with emojis, and (d2) dominant emotion on wheel

Fig. 4

Movie search by trajectory. Search queries: (a1) with free drawing, (b1) with discrete points. Search results: (a2) with free drawing, (b2) with discrete points; (a3) individual movie trajectory representation on wheel; (a4) highlighted movie selected on wheel; (b3) movie trajectory replay; (c1) search by title query and (c2) results list, with all movies in the results giving access to (c3) the movie visualization

Search by Dominant Emotion can be done in three ways: by categorical emotion words (Fig. 3b1), by emojis or facial expressions on camera (c1), and by colors in the wheel (d1). In all of them, the user can choose up to five emotions by clicking (on words, emojis or the wheel), making them colorful and filling the emotion bar with colors (for words and wheel) or emojis; the user can then adjust the bar to the percentage desired for each chosen emotion. This works similarly to Multicolr (https://labs.tineye.com/multicolr), which allows an image search based on color, where users can select up to five colors and adjust their respective percentages by dragging the borders. The chosen emojis and categorical emotion words also change their size based on their percentages.
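A minimal sketch of such a query, with hypothetical helper names: up to five distinct emotions, each holding a share of the bar that the user can later adjust.

```typescript
// Sketch of a dominant-emotion query: up to five chosen emotions, each with
// a percentage share on the bar (helper names are illustrative assumptions).
interface EmotionShare { emotion: string; percent: number }

const MAX_EMOTIONS = 5;

function addEmotion(query: EmotionShare[], emotion: string): EmotionShare[] {
  if (query.length >= MAX_EMOTIONS || query.some(s => s.emotion === emotion)) {
    return query; // at most five distinct emotions, as in Multicolr
  }
  const next = [...query, { emotion, percent: 0 }];
  const even = 100 / next.length;
  return next.map(s => ({ ...s, percent: even })); // redistribute evenly
}

// addEmotion(addEmotion([], "joy"), "sadness")
//   -> [{ emotion: "joy", percent: 50 }, { emotion: "sadness", percent: 50 }]
```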

For the emojis (Fig. 3c1), if a camera is available, users may also click on the camera icon to turn it on and have their facial expressions analyzed in real time and presented on the bar as percentages of the emojis representing Ekman’s emotions [16]: a well-accepted model, and one explicitly associated with universal emotional facial expressions, recognized across cultures. In the wheel, circle size reflects the amount of time it was clicked, i.e. the percentage of time the emotion was felt, and can be adjusted in the bar. In this view, the circles represent the nearest categorical emotion, represented by the colored sectors of the emotional model in Fig. 5a (but this could be customized differently; see sections “Emotion Models and Representations” and “Personalizing and Configuring the Emotion Wheel”).
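The paper does not detail how the per-frame recognition scores become the bar percentages; a plausible minimal sketch, assuming a classifier that emits one score per Ekman emotion per frame, would average them over the capture window:

```typescript
// Sketch: turning per-frame facial-expression scores (e.g., from a webcam
// classifier for Ekman's 6 emotions) into the percentages shown on the bar.
type ExpressionScores = Record<string, number>; // emotion -> score per frame

function toBarPercentages(frames: ExpressionScores[]): ExpressionScores {
  const sums: ExpressionScores = {};
  for (const frame of frames) {
    for (const [emotion, score] of Object.entries(frame)) {
      sums[emotion] = (sums[emotion] ?? 0) + score;
    }
  }
  const total = Object.values(sums).reduce((a, b) => a + b, 0) || 1;
  const percents: ExpressionScores = {};
  for (const [emotion, sum] of Object.entries(sums)) {
    percents[emotion] = Math.round((100 * sum) / total);
  }
  return percents; // e.g. { joy: 84, surprise: 16 }
}
```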

Search by Trajectory can be done in two ways: by free drawing (continuous trajectory) (Fig. 4a1), or by discrete points (Fig. 4b1). In the free drawing method, the user draws a line on the wheel, with colors and VA values associated with the emotions along the wheel. This line represents a sequence of emotions that is searched for, starting from the beginning of each movie stored in the database. The discrete points method is very similar, but instead of drawing a continuous line, users click on the wheel at several points to create circles representing the corresponding emotions’ VA values, connected by straight lines to form a path.
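The matching procedure itself is not detailed in the paper; the sketch below assumes each movie stores a VA sequence, resamples both the drawn line and the movie sequence to a fixed length, and scores them by mean Euclidean distance in VA space, from the movie’s beginning.

```typescript
// Hedged sketch of trajectory matching: resample both VA paths to n points
// and compare point-by-point (assumes non-empty paths and n >= 2).
type VAPoint = { v: number; a: number };

function resample(path: VAPoint[], n: number): VAPoint[] {
  const out: VAPoint[] = [];
  for (let i = 0; i < n; i++) {
    const t = (i * (path.length - 1)) / (n - 1);     // position along the path
    const lo = Math.floor(t), hi = Math.ceil(t), f = t - lo;
    out.push({
      v: path[lo].v + f * (path[hi].v - path[lo].v), // linear interpolation
      a: path[lo].a + f * (path[hi].a - path[lo].a),
    });
  }
  return out;
}

function trajectoryDistance(drawn: VAPoint[], movie: VAPoint[], n = 32): number {
  const d = resample(drawn, n);
  const m = resample(movie, n); // movie's emotional sequence, from its start
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.hypot(d[i].v - m[i].v, d[i].a - m[i].a);
  return sum / n; // lower = closer emotional story; rank movies by this score
}
```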

Search Results (Figs. 3b2–d2 and 4a2–c2) are, by default, represented in a format that matches the input. When the user searches by emojis or categorical emotions, the user input is displayed followed by a list of the resulting movies (Fig. 3c2), with common information (like title, synopsis, director and cast) and emotional information, including: the three most dominant emotions, shown as tag clouds and emojis; a colored vertical bar (vertical to avoid being confused with timelines), ordered bottom-up by the emotions’ percentages; and a bar in one average color (a weighted average of the dominant emotions). We chose to display the top 3 emotions, reflecting some diversity but not too much, like in a podium, and inspired by Miller’s research on cognitive load [24], which suggests 7±2 items (also taking into account that we have 3 different ways to present the same emotion: word, color and emoji).

We then noticed that, besides podiums, three is a number adopted in other selections, like the number of reactions highlighted for posts and comments e.g. on Facebook; so this choice also appeals to familiarity.

When there are more than 3 emotions, the bars display a gray section at the top for the remaining percentage. The user can also click on one of these bars to display more complete emotional information about the movie (Fig. 3b2). When the search is made on the wheel, the input is displayed along with another wheel containing all the dominant emotions of the results on the current page (Fig. 3d2); below it, a list with the respective results (10 per page) is displayed, with a smaller wheel representing each movie’s dominant emotion, instead of the words and emojis.
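A sketch of the average-color bar mentioned above, computed here as a percentage-weighted mean in RGB (the color space actually used is not stated in the paper):

```typescript
// Sketch: the "average color" bar as a weighted mean of the dominant
// emotions' colors in RGB (a simple choice; other color spaces would work).
interface WeightedColor { hex: string; percent: number }

function averageColor(colors: WeightedColor[]): string {
  const total = colors.reduce((s, c) => s + c.percent, 0) || 1;
  let [r, g, b] = [0, 0, 0];
  for (const { hex, percent } of colors) {
    const n = parseInt(hex.slice(1), 16); // "#rrggbb" -> 24-bit integer
    const w = percent / total;
    r += w * ((n >> 16) & 0xff);
    g += w * ((n >> 8) & 0xff);
    b += w * (n & 0xff);
  }
  const toHex = (x: number) => Math.round(x).toString(16).padStart(2, "0");
  return `#${toHex(r)}${toHex(g)}${toHex(b)}`;
}

// averageColor([{ hex: "#ffd700", percent: 60 }, { hex: "#4169e1", percent: 40 }])
//   -> a single blend of the joy and sadness colors for the bar
```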

The trajectory search results are displayed in a similar way to the wheel method just described: the user input is displayed along with another wheel showing all the resulting trajectories (of the page) together (Fig. 4a2, b2); and below, in the list, each movie has its corresponding trajectory. In the trajectory results view, the user can also click on the results wheel to replay each movie trajectory individually (Fig. 4a3, b3) and then click on one of them to scroll down to the respective movie (Fig. 4a4). The movie trajectory in each movie result can also be clicked to animate the way the trajectory progresses, segment by segment (Fig. 4b3).

The user can also search by movie title, in the top search bar (Fig. 4c1). In all results displayed, regardless of the search method, the user can click on the movie poster or title (Fig. 4c2) to proceed to its visualization (Fig. 4c3), as described in Section “Visualizing Movie Emotional Impact Along Time”.

Multimodal search: to support more natural interaction and serendipitous moments, users may search by drawing emotional highlights (as dominant emotions) and trajectories (as emotional stories), and by their emotional expressions detected with a camera (described above). We are also exploring search by the music being played, to access the detected music, the movies featuring it, and those with a similar emotional impact [6].

Visualizing Movie Emotional Impact Along Time

In the movie visualization page, the movie plays on the left, side by side with the emotional views on the right (selectable by the tabs above): the emotional wheel in Fig. 4c3, and the other views in close-ups in the next figures. In this case, Back to the Future is playing, and the user has just clicked on a circle representing a sad emotion on the wheel and was directed to the corresponding scene where this sad emotion was felt: when Marty McFly, back in 1985, thinks that Doc has died… If you haven’t watched the movie, we won’t spoil it for you. But we will present the visualizations.

In previous work, we visualized emotions with tag clouds, charts and colors for categorical emotions; with colored circles and painted trajectories on the VA wheel; and with video timelines painted with colors mapped from the wheel (which could be shown in synchrony with the wheel (often on the side) while the movie was playing, or inspected on hover) [28]. But a couple of challenges remained: (#1) the timing of emotional trajectories on the wheel: they represent the visited emotions, but how to represent the direction and speed of time?; and (#2) stationary or overlapping spots: how to distinguish when the same emotions are felt for a short or a long time, or again at different times?

The Emotion Wheel (Fig. 5b) represents the emotions on the wheel in their VA positions, adopting a Scratch Art metaphor, where the circles reveal the background color in their respective positions, with sizes that reflect the time spent there in a row; they are also identified with the closest categorical emotion.
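A minimal sketch of this metaphor on HTML canvas, assuming the colored wheel and an opaque cover are drawn on two stacked canvases, so that “scratching” the cover reveals the wheel beneath (the actual implementation is not described at this level):

```typescript
// Sketch of the Scratch Art metaphor: the cover lives on its own transparent
// canvas stacked over the colored wheel; each detected emotion punches a hole.
function drawCover(cover: CanvasRenderingContext2D, w: number, h: number): void {
  cover.fillStyle = "#202020"; // opaque layer hiding the wheel below
  cover.fillRect(0, 0, w, h);
}

function revealEmotion(
  cover: CanvasRenderingContext2D,
  x: number, y: number,        // VA position mapped to canvas coordinates
  secondsInARow: number        // time spent at this emotion in a row
): void {
  const radius = 6 + 2 * Math.sqrt(secondsInARow); // size reflects dwell time
  cover.save();
  cover.globalCompositeOperation = "destination-out"; // erase, don't paint
  cover.beginPath();
  cover.arc(x, y, radius, 0, 2 * Math.PI);
  cover.fill();
  cover.restore();
}
```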

Fig. 5

Emotion wheel using the scratch art metaphor: (a) background wheel for the 8 central emotions in the Plutchik model; (b) emotion wheel revealing the underlying colors of the background, as the emotions are detected; (c) emotion wheel with transparent overlaps; (d) and (e) changing transparency on hover

To address challenge #2 of overlapping emotions, another view was created with the circles made more transparent than in the normal wheel, to make the overlaps noticeable. When hovering over the circles, they lose transparency, helping to check these overlaps as the circles stand out (Fig. 5c–e).

From the total number of circles of an emotion and their sizes in Fig. 5, it is possible to realize that the emotion Joy (although with neighbouring circles at different VA values) is the dominant one in the example. This can be confirmed in the Cumulative Dominant Emotions Wheel, where (in Fig. 6a) each sector represents the frequency and the amount of time (in %) the user felt each emotion up to the current time. An alternate view (Fig. 6b) was created after the first user evaluation, adopting a design closer to the one in the emotion wheel, representing emotions by colored circles, with size and distance to the center reflecting their dominance. Circles are connected and the area in between is highlighted, to emphasize the overall cumulative emotional impact of the movie, which was also highlighted in the first view, in its own way. Note that even in situations with overlapping emotions along time, the total amount of time spent at each emotion is reflected in these views, addressing challenge #2.
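A sketch of the underlying accumulation, assuming the movie’s emotional track is stored as timed samples (the sample type is a hypothetical illustration):

```typescript
// Sketch: accumulating how long each emotion was felt up to the current time,
// feeding the cumulative dominant-emotions views.
interface EmotionSample { emotion: string; start: number; end: number } // seconds

function cumulativeShares(samples: EmotionSample[], now: number): Map<string, number> {
  const seconds = new Map<string, number>();
  let total = 0;
  for (const s of samples) {
    const d = Math.min(s.end, now) - s.start; // only count up to "now"
    if (d <= 0) continue;
    seconds.set(s.emotion, (seconds.get(s.emotion) ?? 0) + d);
    total += d;
  }
  const shares = new Map<string, number>();
  if (total === 0) return shares;
  for (const [emotion, secs] of seconds) shares.set(emotion, (100 * secs) / total);
  return shares; // % of watched time per emotion, even across repeated visits
}
```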

Fig. 6

Cumulative dominant emotions: (a) triangular areas; (b) circles

In both views, challenge #1 is addressed only in play mode, when the evolution is animated. The visible colors are the ones that were felt; another option would be to represent all the emotions in the current model (for a more complete contextualization), e.g. using transparency for the absent ones (in a way closer to Plutchik’s in [36]).

To help identify how time goes by (challenge #1) when telling the user’s emotional story of watching a movie through the paths represented on the wheel: (1) the emotional views may be watched in real time, in sync with the video being played [28]; (2) they can be replayed a posteriori as an animation, e.g. redrawing the circles at a faster pace; and (3) the final representation of the emotions may, on its own, also provide some help. Different approaches to address challenge #1 in the emotion wheel are presented and discussed next.
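Option (2) amounts to re-emitting the recorded drawing events on a compressed clock; a minimal sketch, with an assumed constant speed-up factor:

```typescript
// Sketch of the a-posteriori replay: events recorded at movie time are
// re-emitted at a faster pace (the speed-up factor here is illustrative).
interface TimedEvent { movieTime: number; draw: () => void } // seconds

function replay(events: TimedEvent[], speedup = 30): void {
  const start = events.length ? events[0].movieTime : 0;
  for (const e of events) {
    const delayMs = (1000 * (e.movieTime - start)) / speedup;
    setTimeout(e.draw, delayMs); // redraw the circles, compressed in time
  }
}
```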

Besides the drawing of the emotion circles in the Emotion Wheel (Fig. 7a, b), a trajectory can be represented by lines between circles, to reflect how the emotions were felt along the movie. But beyond the animation, it does not address all the challenges inherent in #1 (e.g. where the trajectory starts, or how long was spent). For stronger solutions, we adopted a couple of metaphors in our design, depicted in Figs. 7 and 8.

Fig. 7

Emotional trajectories. In the emotional wheel: (a) trajectory in progress; (b) final state. In the X-Ray view: (c) lines, (d) circles. In the contrail view: (e) in progress, (f) final state

Fig. 8

TimeWheel: (a) initial state in replay; (b) midway, representing the timeline and identifying emotions; (c) final state

The X-Ray metaphor was adopted as a solution for challenge #1: the evolution is presented through the fading of a single color along the path (from whiter to darker gray); that is, we ignore the colors of the wheel and focus on the “skeleton”, highlighting the more recent emotions. Transitions between discrete emotions are represented by lines, and emotions by circles (Fig. 7c, d). In crossovers, when returning to a previous emotion, the new circle on top adopts the most recent color. On the other hand, the circle size (and the speed of the replay along time, in all the views) reflects the amount of time the user felt that emotion. In this view, the user also has the possibility to see the emotional evolution through circles only (Fig. 7d).

The Contrail metaphor (Fig. 7e, f) refers to the trail of condensation an airplane leaves behind when it flies, due to differences in temperature, making it possible to observe the recent path it took to where it is. The trail is narrower close to the plane and wider and more disperse at more distant points; and this is what is represented in Fig. 7f, from the oldest to the newest emotion, reinforced by the color becoming whiter and less transparent, and the lines narrower, making it possible to visualize this passage of time. For example, in Fig. 7e the “plane” (the current time) is located close to the middle of the wheel, and the line there is the whitest, most opaque and narrowest so far.
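Both metaphors reduce to styling each trajectory segment by its age; a sketch with illustrative constants, where t goes from 0 (the oldest segment) to 1 (the current one):

```typescript
// Sketch: age-based styling for trajectory segments (constants illustrative).
interface SegmentStyle { color: string; width: number; alpha: number }

// X-Ray: a single color fading from whiter (old) to darker gray (recent).
function xraySegment(t: number): SegmentStyle {
  const gray = Math.round(220 - 180 * t);
  return { color: `rgb(${gray},${gray},${gray})`, width: 2, alpha: 1 };
}

// Contrail: old segments wide, disperse and transparent; recent ones
// narrower, whiter and more opaque, like the trail close to the "plane".
function contrailSegment(t: number): SegmentStyle {
  const lum = Math.round(150 + 105 * t);
  return {
    color: `rgb(${lum},${lum},${lum})`,
    width: 8 - 6 * t,
    alpha: 0.3 + 0.7 * t,
  };
}
```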

The Emotion TimeWheel addresses both challenges: #1, by mapping the emotions on the wheel onto the timeline; and #2, by the size of the circles (influenced by the total time each emotion was felt) and the length of each segment on the timeline (the same emotion possibly mapped more than once, at different times), making it possible to verify which emotions were felt for the longest time, and in what order.

In Fig. 8a, the visualization is in its initial state, in replay mode, with the circles and an empty timeline. When not in replay mode, the initial state is an empty wheel and timeline, and the circles are drawn when the lines reach the circles’ positions. In Fig. 8b, the animation is in progress, and it can be noticed that the sectors in the timeline are being formed, as a line links each sector to the corresponding circle. In Fig. 8c, the animation is in its final state, making it possible to see the full emotional story of the movie.
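A sketch of the data behind this view, assuming the emotional track comes as contiguous segments (a hypothetical type): one timeline sector per segment, one wheel circle per emotion.

```typescript
// Sketch for the Emotion TimeWheel: each contiguous segment becomes a
// timeline sector (the same emotion may appear more than once), while a
// circle per emotion grows with the total time that emotion was felt.
interface Segment { emotion: string; start: number; end: number } // movie seconds

function timeWheelData(segments: Segment[], movieLength: number) {
  const totals = new Map<string, number>();
  for (const s of segments) {
    totals.set(s.emotion, (totals.get(s.emotion) ?? 0) + (s.end - s.start));
  }
  return {
    // one sector per segment, in story order, sized by its duration
    sectors: segments.map(s => ({
      emotion: s.emotion,
      widthFraction: (s.end - s.start) / movieLength,
    })),
    // one circle per emotion, sized by the total time it was felt
    circles: [...totals].map(([emotion, secs]) => ({
      emotion,
      radius: 6 + 2 * Math.sqrt(secs),
    })),
  };
}
```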

Personalizing and Configuring the Emotion Wheel

For flexibility and personalization, users may choose predefined emotional models, and they may also configure their wheel, at this stage, by defining the categorical emotions that will be represented in the wheel, their VA values and their colors. Our rationale for this feature is that there are different models for emotions and different categorical emotions that can be adopted; and as long as these emotions can be associated with a VA position, representations can be converted among each other and presented to users in their preferred way, at any time (Section “Emotion Models and Representations”). This option could be made available only to restricted users knowledgeable about emotion models, who could create different configurations to be made available to the general public; or it could be made accessible to anyone, with the disclaimer that users would be responsible for their own models, which, even if they did not make much sense in comparison with established models, could still be valuable for exploration and even for creativity and self-expression.

The interface for this configuration is exemplified in Fig. 9. On the left, the user chose a predefined model based on Ekman, with its 6 emotions, in the colors they have in Plutchik’s model (another option was Plutchik’s model with its main 8 emotions). There is then the possibility to add and remove emotions and to change or define VA values and colors (on click, a color editor appears). The wheel, on the right, is updated and adjusted each time an emotion is added or removed. Below, there is the option to delete or save the wheel for future adoption. Users may choose the current model to be used in the visualizations, from the predefined or user configurations.
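A sketch of what saving such a configuration might validate, under assumed rules (unique names, VA inside the unit circle of the wheel); the actual constraints are not detailed in the paper:

```typescript
// Sketch of a user-configured wheel entry and a validation pass; the rules
// below (unique names, VA inside the unit circle) are assumptions.
interface CustomEmotion { name: string; valence: number; arousal: number; color: string }

function validateWheel(emotions: CustomEmotion[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const e of emotions) {
    if (seen.has(e.name)) errors.push(`duplicate emotion: ${e.name}`);
    seen.add(e.name);
    if (Math.hypot(e.valence, e.arousal) > 1) {
      errors.push(`${e.name}: VA point falls outside the wheel`);
    }
  }
  return errors; // an empty list means the wheel can be saved
}
```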

Fig. 9

Configuring the emotion wheel: the wheel (on the right) is updated to match the configuration (on the left) any time an emotion is added or removed. Ekman’s emotions are exemplified, making it evident that this model has only one positive emotion (Joy), which is quite limited for representing movies’ emotional impact, inspiring us to explore richer emotional models, after iFelt [29]

User Evaluation

User evaluation was conducted in two phases, to assess the perceived usefulness, usability and user experience of the search and interactive visualization features for accessing movies based on emotions in As Movies Go By. Most features were evaluated in the first phase and published in [7]. In the second phase, and second evaluation, new features were added, a couple of previous features were enhanced or evaluated more explicitly, and we opted to have a mix of previous and new users participating in the evaluation, for a richer perspective.

Methodology

The same methodology was adopted in both evaluations, and the main differences are highlighted along the description, in this and the following sections. A task-oriented evaluation was conducted, with semi-structured interviews and observation while the users performed the tasks with the different features and visualizations, after explaining the purpose of the evaluation, asking some demographic questions and briefing the subjects about the application. For each task, we observed and annotated success and speed of completion, errors, hesitations, and the users’ qualitative feedback through comments and suggestions. An evaluation based on USE [22] was adopted for the tasks, rating perceived Usefulness, Satisfaction in user experience and Ease of use on a 5-point scale.

At the end, users were asked: to provide a global appreciation of the application, through a USE rating; to highlight the features or properties they appreciated the most, along with suggestions for what they would like to see improved or added in the future; and to characterize the application with the most relevant perceived ergonomic, hedonic and appeal quality aspects, by selecting pre-defined terms [18] that reflect aspects of fun and pleasure, user satisfaction and preferences.

Participants

The first evaluation had 10 participants: 6 male, 3 female and 1 non-binary, 22–55 years old (mean 33.4, std. dev. 13.8); all with college education (3 MSc, 7 BSc), coming from diverse backgrounds (5 Computer Engineering, 1 Arts, 1 Mathematics, 1 Radiology, 1 Administration and 1 Special Education); all having moderate to high acquaintance with computer applications, this being their first contact with the application, allowing us to discover most usability problems and to perceive a tendency in user satisfaction.

Participants watch movies weekly (5), monthly (2), occasionally (2) or daily (1); using mostly streaming platforms such as Netflix (9): weekly (4) and daily (2); television (9): weekly (4) and daily (1); cinema (9): occasionally (7) and monthly (2); and open access websites (5): occasionally (3), monthly (1) and daily (1). Most of them search for information about movies monthly (4) or occasionally (4), others weekly (2), daily (0) or never (0). The criteria they take into account when choosing a movie: genre (9), actors (7), directors (5), and most popular at the moment (5). Almost everyone completely agreed (7) that viewers can feel emotions by watching movies; and they sometimes (4), never (3), a few times (2), or a lot of times (1) use movies to change their emotional state. When participants were asked to associate movie genres with the emotions they represent, there was a consensus associating comedy with joy and horror with fear (90% in the first evaluation; 95% for joy and 95% for fear in the second evaluation).

Participants were also asked if they had ever used any movie or related application based on emotional states. One computer engineer said he had used Happify, aimed at wellbeing, encouraging users to connect with their thoughts and feelings using cognitive behavioral therapy skills (like savor, thank, aspire, give, empathize and revive) and positive psychology; and Daylio, a diary and mood tracking app based on mood (in 5 levels: rad, good, meh, bad, awful), employing strategies like reminders and achievements. These are somewhat related, but do not use emotions per se, or movies, in a very explicit way. The majority of participants had never used such an application, but would like to, giving some insight into what they would like to find, such as movie search or recommendation according to the user’s emotional state, and automatic emotion recognition (something we are already exploring here and in other parts of our work).

The second evaluation also had 10 participants: 6 male, 4 female, 21–28 years old (mean 24.8, std. dev. 2.2); one of them did not have a college degree (having a professional degree in frontend development) and the rest comprised 4 BSc (1 Computer Engineering, 1 Education, 1 Radiology, 1 Marketing), 4 MSc (3 Computer Engineering and 1 Accounting) and 1 PhD in Architecture. Previous content consumption habits were similar to the first evaluation; and none of the new participants had ever used a movie application based on emotions; neither had the participants who took part in the first evaluation, with the exception of this app when they tested it, seven months before. Having a mix of 50% new participants and 50% from the first evaluation, we could compare how they responded to familiar and new features, allowing us to reach richer conclusions.

Results

In both evaluations, the users finished almost all the tasks quickly and without many hesitations, and generally enjoyed the experience with the application. The results are presented in Tables 1 and 2 and explained in the text, along with the comments made by the users.

Table 1 USE evaluation of As Movies Go By
Table 2 Quality terms users chose for As Movies Go By

Homepage. On this page, we asked the subjects ‘to change the random movie and watch its trailer and information’, to evaluate this interactive feature. We had quite positive results for USE (U:4.3; S:4.2; E:5), also presented in Table 1. Users found the feature “very good to suggest movies” and “a good way to choose movies when you’re not sure what to watch”. Another user said it could be interesting to have a movie randomizer filtered by dominant emotion, since this emotional information is presented. In the second evaluation, the results were similar (U:4.4; S:4.6; E:5), slightly more positive in usefulness and especially in satisfaction. Users also had similar reactions, with one of them saying “this is a good way to discover new movies”.

Search and Results. For this part of the application, we created tasks for the 6 methods of searching movies and their respective results.

In T 2.1, in the first evaluation, users were asked ‘to search movies by dominant emotions in words with the input of joy (79%) and sadness (21%)’. Their response was very good, with the majority completing the task quickly. Most of the users found this search method “easy to execute and useful”, with a couple saying that the “percentages were too specific”, i.e. there was no need to be so accurate, though recognizing that it is important to specify dominance and that this is a good way to do it. So, in the second evaluation, task T 2.1 had less precise percentages ‘[…] Joy (80%) and Sadness (20%)’ and was well received. In the first evaluation, users also mentioned that the interface could “have more emotions to choose from”, which was aligned with what we were already working on (Section “Emotion Models and Representations”), making the emotion model customizable (Section “Personalizing and Configuring the Emotion Wheel”).

Overall, the opinion on this task was already quite positive in the first evaluation (U:4.2; S:4.3; E:4.4). Regarding its results, users were asked ‘to name the most and least dominant emotions in the first result presented’. They found the way the results were presented “interesting, especially the emotional information”, and concluded very quickly what the most dominant emotion was. When trying to find the least dominant emotion, the users struggled a bit at first, as they did not know they had to click the dominant emotions’ bar (there being more than 3 dominant emotions, as indicated by the gray bar on top); after that it was fine, and the rating was (U:4.1; S:4.1; E:3.9). In the second evaluation, users gave slightly more positive USE scores, also in this task; this increase was partially due to the users already familiar with the system from the first evaluation. About the search method (U:4.6; S:4.5; E:4.7), users said that it was “easy and intuitive to use” and that they “had full control of what and how many emotions they want in their search”. In the results section (U:4.6; S:4.4; E:4.6), users who were already familiar had no trouble, but a couple found that “clicking on the bars was not intuitive nor perceptive at first, but once you know it, it’s very easy to use”.

In T 2.2, the users had to ‘turn on the camera and express the emotions of Joy and Surprise on their faces’, then ‘change the values to 84% and 16%, respectively’ (in the first evaluation) or ‘85% and 15%’ (in the second) on the emojis (which could also be selected without the camera). The reaction was very positive already in the first evaluation, with comments like “very interactive with the user” and “loved this functionality, would like to see it in future apps”. The only downside was that “some emotions are difficult to represent with facial expressions”, especially when thinking about a broader set (of the emotions used, 6 are from Ekman, corresponding to emotions that are easily recognized in facial expressions, even across cultures; Trust and Anticipation, added by Plutchik, being more challenging to express). But overall it was a “very good experience” (U:3.9; S:4.6; E:3.7). The results were presented in a similar way to the previous search method, but here the participants were asked to ‘say which emoji corresponds to each emotion and to interpret the Average Emotion bar’. Most of them could identify the emotion of each emoji with ease, but several failed when interpreting the average emotion (something they were not familiar with), then deeming it “not very useful”; though one person thought that “it’s useful to know which type of movie it’ll be”. These results got (U:3.2; S:3.5; E:3.8).

In the second evaluation, users liked this search very much, finding it more useful and easier to use than before (U:4.4; S:4.6; E:4.4), and said it was “very interactive” but “would be more interesting with the use of more emotions than the ones presented”; this would be easier with the emojis than with the facial expressions, though. When asked about interpreting the average emotion bar, the participants who were familiar with it had no problem, but again this feature was considered “not very useful”; whereas the new participants had more trouble interpreting the feature and said “the bar is hard to understand, mainly if you don’t know which color represents each emotion yet”. The scores for these search results were similar to the ones in the first evaluation (U:3.4; S:3.8; E:4.0).

Task T 2.3 was the last task to search for dominant emotions, this time by colored circles in a wheel. The users were asked to ‘draw two circles on the wheel (presented to them in an image), by clicking on the wheel, one around the Joy emotion area and another around the Trust area’ (the color would be automatically assigned based on the position), and then ‘adjust the percentages to 78% and 22%, respectively’ (‘80% and 20%’ in the second evaluation). Generally, the participants thought that “the previous methods were more intuitive than this one” because “when using it for the first time, it’s not easy to know how it works” (U:3.3; S:3.2; E:3.3). In the results, the users had ‘to identify the dominant emotion in the wheel’, and they found it “easy to understand”, but “the results wheel, on itself, makes it more difficult if movies have emotions in overlapping positions”, something we were already dealing with and testing in other views (U:3.5; S:3.5; E:4.4).

The second evaluation got slightly more positive results in the search (U:3.7; S:3.8; E:3.6), but still with lower scores than the previous search methods, with users’ justifications in line with this comment: “it’s a more complicated method than the previous ones, and since it has the same goal, it becomes a little less appealing to use”. Upon evaluating the results, we had far better opinions than before (U:4.2; S:4.2; E:4.5), with most users finding it useful and “easy to understand”; although one user mentioned that “the results wheel is perceptible but unnecessary because we are shown results that we are already expecting to see”, failing to notice at first that the circles are not exactly the same, depending on the movies, and that these can be accessed from there.

In the first trajectory search method, by free drawing, in T 2.4, the users were prompted with an image of a continuous line representing ‘an emotional trajectory, within a movie, that they had to draw’. Overall, the participants found it reasonably satisfactory and easy to use (S:3.0; E:3.4), but not so useful (U:2.8), a few saying: “it’s not common or mainstream”; “it’s not relevant to draw the trajectory this way”. It was easier for them to think of the individual emotions, even when in sequence (next task), although this may be something they come to appreciate more as they use it. In the results view, they had to ‘observe each trajectory individually in the results wheel (at the top) and choose one, then replay the trajectory of the chosen movie’. The participants struggled to understand how it worked, hence the E score (U:3.2; S:3.2; E:2.7), because “there is no indication that the wheels can be clicked to replay the trajectories” and “it’s not useful to have all trajectories mixed up”; the users also pointed out that “each movie wheel in the list should have the emotion labels, as presented in the search and results wheels”, which were not presented in order to make the list lighter. But they liked the trajectories shown in sequence, and the list of results.

As expected, due to the familiarity of some of the participants with this feature, the USE scores were higher in the second evaluation (U:3.5; S:4.0; E:4.1), with some users saying “it’s complex to use”, and the participants in their second contact already mentioning that the next method (by discrete points) is “more simple and easier to understand and execute”. The results’ scores also improved greatly (U:4.2; S:4.3; E:4.4), mainly because half the users knew how it worked, and the new participants found it “very useful to choose the trajectory on the results wheel, although not being perceptive that we can click on it at first glance”.

In the second trajectory search method, by discrete points, in task T 2.5, users had to do a similar search as before but with a different image prompt (with circles for each emotion in the trajectory), which they drew by clicking. They found it more “appealing and simple” and “more intuitive and satisfying than the free drawing method”; “it’s the best way to represent emotions on a wheel [separately]” (U:4.3; S:4.3; E:4.7). In the results, users had to do the same as in the previous view, and the feedback was very similar, but “[emotions] individually are much easier to understand than the free drawing”, one said, as he was “used to thinking of emotions separate from each other”, getting higher scores (U:3.5; S:3.5; E:3.5).

In the second evaluation, this search method again got the highest scores, higher than in the first evaluation: (U:4.7; S:4.9; E:4.9) for search, and (U:4.6; S:4.4; E:4.7) for results. Generally, users found this method “much more appealing and easier to use than the previous one” because “it’s a more direct and simple way to choose the emotions we want”. When it came to the results, all users were already familiar with how the feature worked. The only noticeable comment came from a user who said that “the replay on each movie could be performed with each emotion circle and respective line shown at the same time, so it would become faster”; although showing one at a time helps emphasize the sequence, and the replay could become faster by reducing the time each element takes to be presented.

In the last search task, T 2.6, users had to ‘search for the movie Back to the Future and proceed to its visualization’. This was the quickest and easiest task, because it is the usual way to search for something in most applications (U:4.8; S:4.7; E:4.9). The only suggestion was to include “an auto complete”, something common in most applications, although not our focus in this work. In the second evaluation, the scores and opinions were quite similar (U:4.9; S:4.7; E:4.9), even with a similar suggestion for auto complete. This was expected, because even the new participants were already familiar with this kind of search.

Movie Visualization. In T 3.1, users were asked to ‘describe what they see in the Emotion Wheel visualization, namely the emotions that were felt; say whether they were mostly positive or negative, and more intense or calm’; finally, they were asked to ‘choose an emotion and visualize the corresponding movie scene’ (U:3.6; S:3.5; E:3.2). In general, users completed the task in the expected time, only hesitating a bit when accessing the movie scene, as they were not familiar with the visualization and were not sure whether they should click the circles; they then found this feature very interesting and useful. As with the previous features, the visualization features also had improvements in the USE scores and opinions in the second evaluation. The Emotion Wheel scores were (U:4.5; S:4.5; E:4.6) this time, with participants saying that “it is very intuitive” and “very useful to navigate to each scene in the movie using the emotions”.

In T 3.2, users were to ‘identify the dominant emotion in the Emotion Wheel’, and then do the same, for comparison, in the Cumulative Dominant Emotions Wheel, where they found it easier to do. Here, they also had to ‘identify which emotion was absent and which ones had the same level of occurrence’. The overall opinion was quite positive (U:4.0; S:4.2; E:4.5), with some users considering the visualization appealing and easy to understand.

For the second evaluation, there were two different representations of the Dominant Emotions Wheel. Generally, users found them “very easy to understand”; and between the two, most users found the first representation, with the triangles, “much more direct and easier to see what emotions it contains”. Again, the scores (U:4.4; S:4.5; E:4.8) were higher than in the first evaluation.

Back in the Emotion Wheel view, in task T 3.1.1, users had to ‘identify overlapping emotions (with similar VA, felt more than once); which emotion was felt the longest; and which one was felt most often’. The overlaps were well identified in a reasonable time, but some users did not find it obvious at first that they could interact with the overlapping circles (which change transparency level on hovering) to inspect these emotions, although the transparency hinted where the overlaps were. So, satisfaction and ease of use were scored below usefulness (U:3.8; S:3.4; E:3.4). The second evaluation was similar to the first one, with slightly higher scores (U:4.1; S:3.8; E:3.9) and new users having a little more trouble finding the overlapping emotions, saying “they could be more clear or noticeable”.

Still in this view, in T 3.1.2, the emotional path of the film is presented, or replayed, through an animation with lines forming between circles, from the first emotion to the last. Users were asked to ‘watch and identify emotions in this path’, which was quite easy when paying attention to the animation, but not so easy at the end: knowing the direction time goes by, and the first and last emotions. In general, they considered it a good feature (U:3.6; S:3.9; E:4.1), highlighting: “To replay this animation is very helpful”, compared to just seeing the final state of the visualization. In the second evaluation, all users liked the new real-time experience (U:4.4; S:4.4; E:4.5), with the visualization drawn in sync as the movie played; but when asked to test the replay feature, they said “it would be useful to speed up the replay” and “if I want to know what was the first and last emotions, I have to watch the replay and it becomes a little tiring to have to do that over and over”; something that is addressed in the views of the next tasks, by reducing the need to replay.

In T 3.3, the emotional path is represented using the X-Ray metaphor, and users were asked ‘which is the dominant emotion; the one felt for the longest time; and, again, which ones are the first and last emotions?’. This visualization did not please users so much (U:2.9; S:2.9; E:3.0), lacking the emotion colors and due to the confusion caused by the overlapping lines, which are more visible here; they would prefer the circles only, which already have a color hinting at the order (in grayscale, as they discovered), and the replay animation showing from the oldest to the most recent emotion. One user mentioned not liking this view for being so monochromatic; and, interestingly, the radiologist was the participant who understood and appreciated this view best. Its purpose is to temporarily highlight the emotional evolution along time, not to replace the colored view, much like x-rays are not used instead of pictures, but to visualize otherwise hidden properties. This task also had a significant improvement in the second evaluation (U:4.6; S:4.6; E:4.7), with users saying that “the monochromatic black and white colors help greatly in distinguishing which emotions come first and last without the need of a replay”. Some users also said that “the lines could be optional because it’s more clear and clean without them”. We also had the radiologist reinforcing the insight that the grayscale is more perceptible than the colors for the purpose of this task.

On the other hand, in task T 3.4, the visualization using the Contrail metaphor of an airplane in the sky pleased the users much more in the first evaluation (U:3.4; S:3.6; E:3.8). They found it “very interesting”, “out of the box” and “easier to understand”, even if the colors are quite similar to the X-Ray’s, changing the width, color and transparency of the lines along time (narrower and less transparent in the most recent emotions). The users were again asked to ‘identify the dominant emotion; the one felt for the longest time; and which ones are the first and last emotions, using replay if necessary’. Some comments included: “I liked the color change along with the change in width”; “Pleasant viewing”; “Replay is a nice feature”; with a suggestion that older lines did not need to be so transparent, so as to be better noticed. When comparing this feature with the previous one during the second evaluation, participants liked the idea but now preferred the X-Ray version instead, because “the contrail lines become too thick and confusing”, with others saying that “the contrail overlaps are too confusing”; but it still got better scores compared to the first evaluation (U:3.9; S:4.1; E:4.5).

In T 3.5, in the Emotion TimeWheel view, the emotional path or story of the film is presented with all emotions related to a timeline. Users were asked to ‘identify the dominant emotion; which one was felt for the longest time; which one was felt most often; in which part of the film there was a greater concentration of the emotion joy; and which were the first and last emotions to be felt’. Overall, this was considered by many “the best view to answer all the questions asked in the Movie Visualization task list”, especially the ones dealing with temporal aspects (U:4.4; S:4.1; E:4.0). Other comments included: “the best way to demonstrate the emotional path”; and “aesthetic and very easy to understand”. Suggestions: “The timeline could fill in automatically as we watch the movie”; “It could allow going to the movie scenes at those times”, which we already had in other features [28] not evaluated here, and added in the second evaluation (T 3.6), synchronizing and indexing all the views with the movie being played. Of all the views, this was still considered the best during the second evaluation (U:4.9; S:4.6; E:4.7). Users agreed that this view “is like a merged version of all the previous views” and that “it’s the most complete view”, allowing them to answer all of the questions asked in the previous views. In this second version, there was an improvement allowing the user to speed up the replay, which users appreciated very much.

Navigation & Synchronization among the different views was explicitly evaluated for the first time in the second evaluation, in task T 3.6, subdivided into 3 separate sub-tasks.

In T 3.6.1, to evaluate Synchronization, users were asked to ‘watch the movie and observe the wheel’ as it filled with the corresponding emotions, synchronized with the movie in real time. Overall, users gave quite positive feedback (U:4.7; S:4.6; E:4.8), saying that “it is really interesting to know which emotions are occurring at given times of the movie”, but one user said that “given the current display, it is a little distracting because if I want to watch the movie, I would not pay that much attention to the emotions”. It is quite interesting that they are conscious of this split-attention challenge, which inspired us to conceive the replay feature, which can be used after watching the movie; the movie can even be watched in fullscreen without emotional feedback, if that is the user’s preference.

To evaluate Indexing or Navigation, in T 3.6.2, users were told to ‘click on an emotion to navigate to its corresponding scene’. This feature also received very positive feedback (U:4.2; S:4.5; E:4.8), with comments such as “it’s good to watch a scene representing directly an emotion chosen by us”; one user remarked that this functionality would only be viable for someone who had already watched the movie, since otherwise it would spoil it. This reinforces our option to make different features available, for flexibility, with different levels of information being presented.
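
This navigation amounts to seeking the video to the timestamp of the clicked emotion. A minimal sketch under the same assumptions as above (TimedEmotion as before; `circle` stands for the clickable element drawn for that emotion on the wheel):

```typescript
// Clicking the element drawn for an emotion jumps the movie to the
// scene where that emotion was felt.
function enableEmotionSeek(
  video: HTMLVideoElement,
  circle: Element,
  e: TimedEmotion
): void {
  circle.addEventListener("click", () => {
    video.currentTime = e.time; // seek to the corresponding scene
    void video.play();
  });
}
```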

In the last sub-task, T 3.6.3, to evaluate Replay, users were asked to ‘replay the wheels and observe their outcome’. This task was one of the easiest for users to understand (U:4.8; S:4.5; E:5.0) and one of the most helpful functionalities, because it allowed them to “observe quickly the emotions presented in the movie” and was “good to see how the movie’s emotions are displayed without having to wait until the end of the real time synchronization”.
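
Replay can be pictured as refilling the wheel on a compressed clock, with the speed-up added in this second version simply scaling that clock. A minimal sketch, where all names and the default factor are illustrative assumptions:

```typescript
// Refill the wheel in accelerated time: with speedup = 60, one minute
// of movie time is replayed per real second. `clearWheel` and
// `addToWheel` are hypothetical rendering hooks.
function replayWheel(
  annotations: TimedEmotion[],
  addToWheel: (e: TimedEmotion) => void,
  clearWheel: () => void,
  speedup = 60
): void {
  clearWheel();
  for (const e of annotations) {
    // schedule each emotion at its compressed time offset
    setTimeout(() => addToWheel(e), (e.time / speedup) * 1000);
  }
}
```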

Emotion Wheel Configuration. This feature was new in the second evaluation. In T 4.1, users were asked to ‘select the Ekman model, add all emotions and observe the result’. Participants generally found it easy to use and liked observing the model, but did not quite understand the purpose of the feature (U:3.4; S:3.8; E:4.0). In T 4.2, users were asked to ‘clear the wheel and change to the Plutchik model, to add the respective emotions’ and then to ‘remove the emotion “Joy” and replace it with a custom emotion with the desired color’. Similarly to the previous task (U:3.7; S:3.7; E:4.0), users did not quite understand the purpose of adding the emotions, stating that “the only useful feature in my opinion is customizing the emotions colors” and “I like to be able to change the colors of the emotions, but I don’t know what to do with the valence and arousal values”. Overall, users liked the ability to customize the emotions’ colors; but since they are not aware of the VA values in the wheel a priori (only after adding them), nor of what these values should be for each emotion, they found this part a bit confusing to use.
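
To make concrete what such a configuration involves: each emotion in the wheel can be described by a name, a customizable color, and its valence-arousal (VA) coordinates, with presets instantiating a model such as Ekman’s. The sketch below uses illustrative colors and VA values, not the application’s actual configuration:

```typescript
// A configurable emotion: display color plus VA coordinates in [-1, 1].
interface EmotionConfig {
  name: string;
  color: string;   // user-customizable display color
  valence: number; // -1 (negative) .. 1 (positive)
  arousal: number; // -1 (calm) .. 1 (excited)
}

// Ekman's six basic emotions as a preset (colors and VA illustrative).
const ekmanPreset: EmotionConfig[] = [
  { name: "Joy",      color: "#f5c518", valence:  0.8, arousal:  0.5 },
  { name: "Sadness",  color: "#4169e1", valence: -0.7, arousal: -0.4 },
  { name: "Anger",    color: "#d22b2b", valence: -0.6, arousal:  0.8 },
  { name: "Fear",     color: "#7d3c98", valence: -0.7, arousal:  0.7 },
  { name: "Disgust",  color: "#2e8b57", valence: -0.6, arousal:  0.3 },
  { name: "Surprise", color: "#ff8c00", valence:  0.3, arousal:  0.8 },
];

// Replace one emotion with a user-defined one, as in task T 4.2.
function replaceEmotion(
  wheel: EmotionConfig[],
  remove: string,
  custom: EmotionConfig
): EmotionConfig[] {
  return wheel.filter((e) => e.name !== remove).concat(custom);
}
```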

Global Evaluation. Overall, users in the first evaluation found the application and its features quite interesting, innovative and visually appealing. Although participants less familiar with this kind of representation had some difficulties at first with the most unusual visualizations, ease of use was also mentioned, and was even rated higher in the mean USE values across all features. The global USE classification (U:3.7; S:3.9; E:3.8), rated separately by the users, is close to the mean value calculated from the features’ ratings (U:3.9; S:3.9; E:4.2), reflecting that in general users found these interactive features useful, easy and quite satisfactory to use, and had a good experience. The second global evaluation had a higher USE classification, both rated separately (U:4.5; S:4.5; E:4.6) and as mean values (U:4.2; S:4.3; E:4.5), mainly due to half of the participants having also taken part in the first evaluation, and to the new, improved or emphasized features. Most of the difficulties experienced at first glance were overcome simply by familiarity with the application and its features from the first evaluation, even though that had been a very short experience. Among the new users, those with computer experience, like computer engineers, had no trouble perceiving right away how the system worked. It should also be noted that the global scores assigned at the end were a bit higher than the mean values of the scores assigned to the individual features, reflecting an increased satisfaction as participants got the big picture.

When explicitly asked about the features they appreciated the most in the first evaluation, participants mentioned: the random movie display; the TimeWheel; the search by facial expressions using the camera (the one mentioned most often), but also by emojis, word percentages and discrete points; the view of the dominant emotions; the contrail; and the connection of the circles to the scenes of the movie (allowing navigation to the video scenes taking the emotions into account). In the second evaluation, the answers were quite similar, although users now also emphasized the X-Ray view and the navigation to the movie scenes from the emotions displayed on the wheels. As global suggestions, they repeated a couple that were already mentioned in the tasks, described above.

To summarize their appreciation, users classified the application with the perceived ergonomic (8 positive + 8 negative (opposite)), hedonic (7 + 7) and appeal (8 + 8) quality aspects from [18] that they found most relevant (as many as they found appropriate), as presented in Table 2. Although both evaluations had the same number of participants, each participant chose more terms on average in the second. Interesting was the most chosen term in both evaluations. Comprehensible, pleasant, clear, trustworthy, original and innovative were also chosen by half or more of the subjects in the first evaluation; in the second evaluation, almost the same list (in a different order, and with Good instead of Clear) was chosen by more than half of the participants.

Just one negative term was chosen in the first evaluation: complex (3 times), very close to the opposite positive term, simple (2 times); both were chosen 5 times in the second evaluation. Complex is also associated with being rich in the features provided, and these were perceived as Comprehensible and Clear. In the second evaluation, one user also chose Confusing, another negative term (not chosen in the first evaluation), opposite to Clear, which was chosen 3 times (6 times in the first evaluation). So, more clear than confusing, but a bit less clear than in the first evaluation, which aligns with having more features to deal with, though these were considered simpler.

The chosen terms were well distributed among the (H)edonic, (E)rgonomic and (A)ppeal qualities in the first evaluation, with more H and E terms in the top positions, but more A terms overall. In the second evaluation, the (E)rgonomic terms increased, while H and A decreased. The E category is also where the only 2 negative terms appear, although the increase was higher among the positive terms. These results confirm and complement the feedback from the other evaluation aspects and user comments.

Conclusions and Perspectives

This paper presented interactive mechanisms proposed to search and visualize movies based on their dominant and actual emotional impact along the movie, with different models and modalities, extending previous features and addressing challenges and open issues in promising ways.

A user evaluation was carried out in two different phases, with a couple of features added or enhanced before the second one. Most users found the application and the evaluated features quite interesting, comprehensible and pleasant, and also trustworthy, innovative, original, and clear (in the first evaluation) or good (in the second). The USE ratings reflected that, in general, users found the interactive features useful, easy and quite satisfactory to use, in spite of the initial difficulties and lower scores in some of the most unusual visualizations. The second evaluation kept half of the previous participants and added half new ones, for a richer perspective, and it got higher scores in all the tasks, mainly due to the familiarity of previous users and to the improvements made. This suggests that with longer-term adoption, users’ appreciation and satisfaction could increase further.

We observed that different features excelled in different situations, and some have a more supporting role than others; e.g. views that highlight overlapping emotions, or the order in which they were felt, at the expense of the emotion color, suggest that the views could coexist and be used purposefully, in complement.

In addition, the TimeWheel was considered the best view to deal with the temporal aspects, although it uses some space in the wheel that is important to represent the emotions (the Emotion Wheel being better for that). The search by facial expressions using the camera was the most highlighted feature, for its modality and automatic detection, but it is limited in the variety of expressions recognized (Ekman’s 6, at the moment), whereas emojis can represent a much larger set; users did also mention they would like to have more emotions. The X-Ray metaphor has interesting potential: although it was superseded in the first evaluation by the Contrail, which in a way extends the same concept with a stronger representation, it was considered more effective for the purpose of emphasizing temporal paths in the second evaluation.

The pragmatism, clarity and beauty of the summarization in the Dominant Emotion view was also very much appreciated, being the top view for that purpose; and users enjoyed drawing trajectories with discrete points. They also liked the connection of the circles to the movie scenes, allowing them to access the scenes based on emotions and to increase emotional awareness; and the replay feature, to animate the emotion visualizations and watch, in a short time, how the emotional impact evolved. These two features actually exist in most views, but in the first evaluation they were only tested once or twice in the context of other features, without getting an explicit score, which would surely have been quite high; and indeed it was, in the second evaluation, where they were scored separately. Also new in the second evaluation was the Emotion Wheel Configuration, which users appreciated mainly for selecting predefined models and customizing emotions’ colors, while being less sure about the definition of VA values. This is quite understandable, and reinforces the arguments in our discussion about the rationale for this feature, in Section “Personalizing and Configuring the Emotion Wheel”: this configuration could be made available only to restricted users knowledgeable about emotion models, who could create different configurations for the general public; or it could be made accessible to anyone, and still be valuable for exploration and even for creativity and self-expression. Finally, from the start, users really enjoyed the random movie display when entering the homepage, adding a flavour of surprise and serendipity.

Future work includes refining, based on the evaluations and user feedback, and further extending the interactive visualization and search features; and then reevaluating them with a larger and more diverse set of participants from the relevant target audience, in the new directions explored. The main goal is to provide useful and interesting ways to perceive and find movies that we value, which can enrich our experience, increase our emotional awareness, and even improve our ability to regulate our emotional states. To help contribute to this goal, the configuration of emotion models can be explored further, to accommodate richer and more flexible options, in coordination with research on emotions, on the processing and identification of signals and emotional states, and on self-report. Search based on current emotions, using other sensors and VA, can also be explored to suggest how to reinforce current states or balance into desired ones. Recommendation techniques based on affective states and impact, preferences and access patterns can help in this direction [40]. Scale is also an important aspect to keep in mind, both in the number of movies and in the number of emotions detected or annotated in each movie; although filtering the most relevant, listing by pages, highlighting the top 3 emotions and aggregating dominant properties can already help. Other media and modalities [32,37] can also be explored, for the accessibility of those with special needs or vision impairments, and to increase awareness even when not relying on the visual dimension, e.g. when listening to a talk or a song, or when focusing visual attention on the movie, for a more immersive experience.

Notes

  1. AWESOME Project: Awareness While Experiencing and Surfing On Movies based on Emotions.

  2. Scratch Art: Introduction, by Kunstler, J. https://juliannakunstler.com/art1_scratch_art.html

     Make Your Own Scratch Art, by Craft Project Ideas. https://www.craftprojectideas.com/make-your-own-scratch-art-2/

References

  1. Aigner W, Miksch S, Schumann H, Tominski C. Visualization of time-oriented data, vol. 4. London: Springer; 2011. https://doi.org/10.1007/978-0-85729-079-3.

  2. Arriaga P, Alexandre J, Postolache O, Fonseca MJ, Langlois T, Chambel T. Why do we watch? The role of emotion gratifications and individual differences in predicting rewatchability and movie recommendation. Behav Sci. 2019;10(1):8. https://doi.org/10.3390/bs10010008.

  3. Aurier P, Guintcheva G. The dynamics of emotions in movie consumption: a spectator-centred approach. Int J Arts Manag. 2015;17(2):5–18.

  4. Bader N, Mokryn O, Lanir J. Exploring emotions in online movie reviews for online browsing. In 22nd International Conference on Intelligent User Interfaces Companion. 2017, 35–38. https://doi.org/10.1145/3030024.3040982.

  5. Cabanac M. What is emotion? Behav Proc. 2002;60(2):69–83. https://doi.org/10.1016/S0376-6357(02)00078-5.

  6. Caldeira F, Lourenço J, Tavares Silva N, Chambel T. Towards multimodal search and visualization of movies based on emotions. In ACM International Conference on Interactive Media Experiences (IMX 2022). 2022, 349–356. https://doi.org/10.1145/3505284.3532987.

  7. Caldeira F, Lourenço J, Chambel T. Happy or sad, smiling or drawing: multimodal search and visualisation of movies based on emotions along time. In Proc. of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications VISIGRAPP/HUCAPP - Human Computer Interaction Theory and Applications 2023. SciTePress; 2023, pp. 85–97. Best Paper Award. https://doi.org/10.5220/0011896400003417.

  8. Card SK, Mackinlay JD, Shneiderman B. Information visualization. In: Readings in information visualization: using vision to think. 1999, 1–34.

  9. Chambel T, Langlois T, Martins P, Gil N, Silva N, Duarte E. Content-based search overviews and exploratory browsing of movies with MovieClouds. Int J Adv Media Commun. 2013;5(1):58–79. https://doi.org/10.1504/IJAMC.2013.053674.

  10. Chen Y-X. Exploratory browsing: enhancing the browsing experience with media collections. Ph.D. Dissertation. Citeseer; 2010.

  11. Cohen-Kalaf M, Lanir J, Bak P, Mokryn O. Movie emotion map: an interactive tool for exploring movies according to their emotional signature. Multimed Tools Appl. 2021;81(11):14663–84. https://doi.org/10.1007/s11042-021-10803-5.

  12. Cowie R, Douglas-Cowie E, Savvidou S, McMahon E, Sawey M, Schroder M. 'FEELTRACE': an instrument for recording perceived emotion in real time. In ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion. 2000.

  13. Cruz P, Machado P. Pulsing blood vessels: a figurative approach to traffic visualization. IEEE Comput Graphics Appl. 2016;36(2):16–21. https://doi.org/10.1109/MCG.2016.29.

  14. Damásio A. Feeling & knowing: making minds conscious. Pantheon; 2021. https://doi.org/10.1080/17588928.2020.1846027.

  15. Dodge S, Weibel R, Lautenschütz A-K. Towards a taxonomy of movement patterns. Inf Vis. 2008;7(3–4):240–52. https://doi.org/10.1057/PALGRAVE.IVS.9500182.

  16. Ekman P. An argument for basic emotions. Cogn Emot. 1992;6(3–4):169–200. https://doi.org/10.1080/02699939208411068.

  17. Fischer B, Herbert C. Emoji as affective symbols: affective judgments of emoji, emoticons, and human faces varying in emotional content. Front Psychol. 2021. https://doi.org/10.3389/fpsyg.2021.645173.

  18. Hassenzahl M, Platz A, Burmester M, Lehner K. Hedonic and ergonomic quality aspects determine a software's appeal. ACM CHI. 2000;2000:201–8. https://doi.org/10.1145/332040.332432.

  19. Jorge A, Serra S, Chambel T. Interactive visualizations of video tours in space and time. In 28th Int. BCS Human Computer Interaction Conference (HCI 2014). 2014, pp. 329–334. https://doi.org/10.5555/2742941.2742989.

  20. Kleinginna PR, Kleinginna AM. A categorized list of emotion definitions, with suggestions for a consensual definition. Motiv Emot. 1981;5(4):345–79. https://doi.org/10.1007/BF00992553.

  21. Knautz K, Siebenlist T, Stock WG. Memose: search engine for emotions in multimedia documents. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2010, 791–792. https://doi.org/10.1145/1835449.1835618.

  22. Lund AM. Measuring usability with the USE questionnaire. Usability User Exp. 2001;8(2):3–6.

  23. Martinho J, Chambel T. ColorsInMotion: interactive visualization and exploration of video spaces. In Proc. of the 13th International MindTrek Conference: Everyday Life in the Ubiquitous Era. 2009, 190–197. https://doi.org/10.1145/1621841.1621876.

  24. Miller GA. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956;63:81–97. https://doi.org/10.1037/h0043158.

  25. Moreira A, Chambel T. As Music Goes By in versions and movies along time. In Proc. of ACM International Conference on Interactive Experiences for TV and Online Video (TVX 2018). 2018, pp. 239–244. https://doi.org/10.1145/3210825.3213567.

  26. Moreira A, Chambel T. This music reminds me of a movie, or is it an old song? An interactive audiovisual journey to find out, explore and play. In VISIGRAPP (1: GRAPP). 2019, 145–158. https://doi.org/10.5220/0007692401450158.

  27. Nefkens MW. MovieWall: a novel interface concept for visual exploration of large movie collections. Master's thesis, Utrecht University; 2017. https://studenttheses.uu.nl/handle/20.500.12932/27906. Accessed Aug 2023.

  28. Nunes L, Ribeiro C, Chambel T. Emotional and engaging movie annotation with gamification. In VISIGRAPP (2: HUCAPP). 2022, 262–272. https://doi.org/10.5220/0010991500003124.

  29. Oliveira E, Martins P, Chambel T. iFelt: accessing movies through our emotions. In Proc. of 9th European Conference on Interactive TV and Video. 2011, 105–114. https://doi.org/10.1145/2000119.2000141.

  30. Plutchik R. A general psychoevolutionary theory of emotion. In: Theories of emotion. Amsterdam: Elsevier; 1980. p. 3–33.

  31. Plutchik R. The nature of emotions: human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. Am Sci. 2001;89(4):344–50. https://doi.org/10.1511/2001.28.344.

  32. Ramalho J, Chambel T. Immersive 360º mobile video with an emotional perspective. In 2013 ACM International Workshop on Immersive Media Experiences. 2013. https://doi.org/10.1145/2512142.2512144.

  33. Russell JA. A circumplex model of affect. J Pers Soc Psychol. 1980;39(6):1161–78. https://doi.org/10.1037/h0077714.

  34. Sacharin V, Schlegel K, Scherer KR. Geneva emotion wheel rating study. Archive ouverte UNIGE; 2012. https://archive-ouverte.unige.ch/unige:97849. Accessed Aug 2023.

  35. Scherer KR. The dynamic architecture of emotion: evidence for the component process model. Cogn Emot. 2009;23(7):1307–51. https://doi.org/10.1080/02699930902928969.

  36. Semeraro A, Vilella S, Ruffo G. PyPlutchik: visualizing and comparing emotion-annotated corpora. PLoS ONE. 2021;16(9):e0256503. https://doi.org/10.1371/journal.pone.0256503.

  37. Serra S, Jorge A, Chambel T. Multimodal access to georeferenced mobile video through shape, speed and time. In 28th Int. BCS Human Computer Interaction Conference (HCI 2014). 2014, 347–352. https://doi.org/10.5555/2742941.2742992.

  38. Serra V, Chambel T. Quote surfing in music and movies with an emotional flavor. In VISIGRAPP (2: HUCAPP). 2020, 75–85. https://doi.org/10.5220/0009177300750085.

  39. Shah V. The role of film in society. Thought Economics; 2011. https://thoughteconomics.com/the-role-of-film-in-society/. Accessed 29 May 2023.

  40. Tkalcic M, Kosir A, Tasic J. Affective recommender systems: the role of emotions in recommender systems. In: The RecSys 2011 Workshop on Human Decision Making in Recommender Systems. Citeseer; 2011, pp. 9–13. https://api.semanticscholar.org/CorpusID:16782471. Accessed Aug 2023.

  41. Tominski C, Abello J, Schumann H. Axes-based visualizations with radial layouts. In Proceedings of the 2004 ACM Symposium on Applied Computing. 2004, 1242–1247. https://doi.org/10.1145/967900.968153.

  42. Tufte E. The visual display of quantitative information. Cheshire: Graphics Press; 1983.

  43. Uhrig SC. Cinema is good for you: the effects of cinema attendance on self reported anxiety or depression and 'happiness'. ISER Working Paper Series 2005-14. Institute for Social and Economic Research; 2005. https://ideas.repec.org/p/ese/iserwp/2005-14.html. Accessed Aug 2023.

  44. Zhang S, Tian Q, Huang Q, Gao W, Li S. Utilizing affective analysis for efficient movie browsing. In 16th IEEE International Conference on Image Processing (ICIP). IEEE; 2009, 1853–1856. https://doi.org/10.1109/ICIP.2009.5413590.


Acknowledgements

This work was partially supported by FCT through funding of the AWESOME project, ref. PTDC/CCI/29234/2017, and LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020.

Funding

Open access funding provided by FCT|FCCN (b-on). This work was partially supported by FCT through funding of the AWESOME project, ref. PTDC/CCI/29234/2017, and LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020. No other grants were received by any of the authors.

Author information

Authors and Affiliations

  1. LASIGE, Faculdade de Ciências, Universidade de Lisboa, Lisboa, Portugal

    Francisco Caldeira, João Lourenço & Teresa Chambel


Corresponding author

Correspondence to Teresa Chambel.

Ethics declarations

Conflict of interest

Francisco Caldeira declares that he has no conflict of interest. João Lourenço declares that he has no conflict of interest. Teresa Chambel declares that she has no conflict of interest.

Ethical Approval

This article does not contain any studies with animals performed by any of the authors. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Associated Content

Part of a collection:

Recent Trends on Computer Vision, Imaging and Computer Graphics Theory and Applications
