BACKGROUND

Promotional materials for videos are helpful in informing a potential audience about the content of the videos. For instance, video trailers, still-image posters, and the like may be helpful in letting users know about the theme or plot of a movie, television show, or other type of video. In order to create quality promotional materials, it is often useful to analyze the content of a particular video to determine the plot, key character roles within the video, and the like. With this information, the creator of the promotional material is able to create the trailer, poster, or other type of content in a way that adequately portrays the contents of the video.
Conventional approaches to movie content analysis depend on metadata provided by cast lists, scripts, and/or crowd-sourced knowledge from the web, without regard to correlations among roles. For instance, these traditional techniques may identify main characters from a video by manually identifying the characters and using metadata (e.g., cast lists, scripts, and/or crowd-sourced knowledge from the web) associated with the movies. Some attempts have been made to associate names with the corresponding roles in news videos based on co-occurrence, as well as by using face appearance, clothes appearance, speaking status, scripts, and image search results. One approach attempts to match an affinity network of faces with a second affinity network of names in order to assign a name to each face. However, such an approach has limited applicability for generating promotional posters, since it merely matches faces to names.
While these traditional techniques may work in instances where the analyzed video includes rich metadata, such conventional approaches are not practical when little metadata is available, which may be the case for internet protocol television (IPTV) and video-on-demand (VOD) systems. In contrast to metadata-rich videos, these videos often include only a brief title for each video section. In addition, the current process of creating promotional posters is time intensive and expensive because it requires the skills of graphic artists and designers. Promotional posters are characterized by: (1) having a conspicuous main theme and object; (2) grabbing attention through the use of colors and textures; (3) being self-contained and self-explanatory; and (4) being specially designed for viewing from a distance. Accordingly, as the number of movies and other videos increases, manual techniques become difficult to administer effectively. In addition, not all of these movies and videos will have a sufficient amount of metadata available for analysis to create a high-quality poster or other type of promotional content.
SUMMARY

Creating promotional posters for videos may be helpful for marketing these videos. Displaying the main characters from a video is a cornerstone for promotional posters in some instances. Tools and techniques for automatically acquiring key roles from a video free from use of metadata (e.g., cast lists, scripts, and/or crowd-sourcing knowledge from the web) are described herein.
These techniques include discovering key roles and their relationships by treating a video (e.g., a movie, television program, music video, personal video, etc.) as a community. First, the techniques segment a video into a hierarchical structure that includes levels for scenes, shots, and key frames. Second, the techniques perform face detection and grouping on the detected key frames. Third, the techniques exploit the key roles and their correlations in this video to discover a community. Fourth, the discovered community provides for a wide variety of applications, including the automatic generation of visual summaries (e.g., video posters) based on the acquired key roles.
This summary is provided to introduce concepts relating to acquiring and presenting key roles via community discovery from video. These techniques are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.
FIG. 1 illustrates an example computing environment including a computing device that acquires key roles from video.
FIG. 2 illustrates example components for acquiring a key role from a video via community discovery.
FIG. 3 illustrates example components for determining a face cluster of a key role.
FIG. 4 illustrates an example excerpted from several face cluster results from a video.
FIG. 5 illustrates an example of a community graph discovered from key roles acquired from a video.
FIG. 6 illustrates example user interface (UI) presentations in the form of posters created using key roles acquired from a video.
FIGS. 7 and 8 are flow diagrams illustrating example approaches for acquiring key roles and their relationships from video for presentation.
FIG. 9 is a flow diagram of an example process for acquiring a key role via face grouping.
FIG. 10 is a flow diagram of an example process employing key-role acquisition from video to generate presentations.
DETAILED DESCRIPTION

Promotional posters are helpful in marketing videos, and often display the main characters from a video. The techniques described below automatically create a presentation that includes images of the characters that are determined, automatically, to be the main characters in the video. These techniques may make this automatic determination by analyzing the video to determine how often each character appears in the video.
The techniques described herein identify key roles of a video by analyzing the video itself. That is, the techniques use facial recognition techniques to identify the main characters of a video. From this information, the techniques may then automatically create a visual presentation (e.g., a poster or other visual summary) for the video that includes the main characters.
The techniques may identify the main characters in any number of ways. For instance, the techniques may determine how often a face appears on screen, how often a character is spoken about, and the like. Furthermore, the techniques may create a community graph based on the analysis of the movie, which may also be used to identify the key roles. The community graph may depict the interrelationships between characters in the movie, as well as a strength of these interrelationships.
By discovering relationships within a community in this way, these example techniques are able to discover key roles within a video without relying on typically-used rich metadata, such as cast lists, scripts, and/or crowd-sourced information obtained from the world-wide-web. These techniques include automatically discovering key roles and their relationships by treating a video (e.g., a movie, television program, music video, personal video, etc.) as a community. First, the techniques segment a video into a hierarchical structure (including shot, key frame, and scene levels). Second, the techniques perform face detection and grouping on the detected key frames. Third, the techniques create a community by exploiting the key roles and their correlations or relationships in the video segments. Finally, the discovered community provides for a wide variety of applications. In particular, the discovered community enables automatic generation of visual summaries or video posters based on the key roles acquired from the community.
For context, the entertainment industry has boomed in recent years, resulting in a huge increase in the number of videos, such as movies, television programs, music videos, personal videos, and the like. As the numbers of videos grow, it becomes important to index and search video libraries. In addition, because people respond favorably to images, such as those in promotional posters, being able to present a pleasant visual summary is important for promotional purposes. As such, the techniques described herein may be helpful in creating a poster or other image that visually represents a respective video in a manner that is consistent with the content of the video.
Generally, the characters of a video are the center of attention within the video, and the interactions among these characters help to narrate a story. Because these characters (or "roles") and their interactions are the center of audience interest, identifying key roles and analyzing their relationships to discover a community is useful for understanding the content of a movie or other video. However, discovering a community is challenging due to the complex environment in movies. For example, variation in characters' poses, wardrobe changes, and varying illumination conditions may make the identification of characters within a video difficult. In addition, correlations or relationships between roles are difficult to analyze thoroughly because roles can interact in different ways, including direct interactions (e.g., dialogs with each other) and indirect interactions (e.g., talking about other roles). Thus, being able to automatically acquire key roles for indexing, while useful, is not straightforward.
In order to automatically detect key roles from video, the techniques described below first structure the incoming video, whether the video is streaming or stored. The first structural unit that the techniques identify is a shot, which includes a continuous section of video shot by one camera. The second structural unit that the techniques identify is a key frame, which, as used herein, includes an image extracted from a shot that includes at least one face and that represents the shot in terms of color, background image, and/or action. In some implementations a key frame may include more than one image from a shot. This definition of a "key frame" may differ from traditional uses of the term "key frame" in some instances. The third structural unit that the techniques build is a scene, which includes shots that are similar to one another and that the techniques group together to form the scene. In various implementations, two shots are grouped when their similarity to each other exceeds a predetermined or configurable threshold value.
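By way of illustration only, the following minimal Python sketch shows one plausible way to segment frames into shots and group adjacent shots into scenes using color-histogram similarity against configurable thresholds. The histogram metric, the threshold values, and all function names here are assumptions for illustration, not the specific structuring method prescribed above.

```python
import numpy as np

def color_hist(frame, bins=16):
    """Normalized color histogram of a frame (H x W x 3 uint8 array)."""
    counts, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return counts / max(counts.sum(), 1)

def hist_diff(frame_a, frame_b):
    """L1 distance between two frames' histograms; 0 = identical, 2 = disjoint."""
    return np.abs(color_hist(frame_a) - color_hist(frame_b)).sum()

def segment_shots(frames, cut_threshold=0.5):
    """Split a frame sequence into shots at abrupt histogram changes."""
    shots, start = [], 0
    for i in range(1, len(frames)):
        if hist_diff(frames[i - 1], frames[i]) > cut_threshold:
            shots.append((start, i - 1))
            start = i
    shots.append((start, len(frames) - 1))
    return shots

def group_scenes(frames, shots, scene_threshold=0.3):
    """Group adjacent shots whose first frames are similar into scenes."""
    scenes, current = [], [shots[0]]
    for prev, nxt in zip(shots, shots[1:]):
        if hist_diff(frames[prev[0]], frames[nxt[0]]) < scene_threshold:
            current.append(nxt)
        else:
            scenes.append(current)
            current = [nxt]
    scenes.append(current)
    return scenes
```

A key frame containing at least one detected face could then be chosen from each shot, per the definition above.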
The techniques detect faces that appear in the key frames and group the faces into face clusters according to role. The techniques then construct a community graph based on co-occurrence of the faces in the video. In the community graph, key roles are presented as nodes/vertices and relationships between the key roles are presented as edges.
Once discovered, the community graph of key roles has a wide variety of applications including automatic generation of visual summaries such as video posters, images to accompany reviews, or the like. In one specific example of many, the techniques described herein generate a visual summary (e.g., a movie poster) by detecting key roles from a discovered community, selecting representative images for each key role, selecting a typical background image of the video, and creating the poster according to at least one of four different visualization techniques based on the representative key roles and the background.
The discussion begins with a section entitled “Example Computing Environment,” which describes one non-limiting environment that may implement the described techniques. Next, a section entitled “Example Components” describes non-limiting components that may implement the described techniques in the example environment or other environments. A third section, entitled “Example Approach to Community Discovery from a Video” illustrates and describes one example technique for discovering community from a video without employing metadata. A fourth section, entitled “Example Video Poster Generation,” illustrates an example application for acquiring a key role and presenting the key role via community discovery from video. A fifth section, entitled “Example Processes,” presents several example processes for acquiring a key role and presenting the key role via community discovery from video. A brief conclusion ends the discussion.
This brief introduction, including section titles and corresponding summaries, is provided for the reader's convenience and is intended to limit neither the scope of the claims nor the following sections.
Example Computing Environment
FIG. 1 illustrates an example computing environment 100 in which techniques for acquiring a key role and presenting the key role via community discovery from video, independent of metadata, may be implemented. The environment 100 includes a network 102 over which the video may be received by a computing device 104. The environment 100 may include a variety of computing devices 104 as video source and/or presentation destination devices. As illustrated, the computing device 104 includes one or more processors 106 and memory 108, which stores an operating system 110 and one or more applications, including a video application 112, a generation application 114, and other applications 116, running thereon.
While FIG. 1 illustrates the computing device 104A as a laptop-style personal computer, other implementations may employ a personal computer 104B, a personal digital assistant (PDA) 104C, a thin client 104D, a mobile telephone 104E, a portable music player, a game-type console (such as Microsoft Corporation's Xbox™ game console), a television with an integrated set-top box 104F or a separate set-top box, or any other sort of suitable computing device or architecture. When the computing device 104 is embodied in a television or a set-top box, the device may be connected to a head-end or the internet, or may receive programming via a broadcast or satellite connection.
The memory 108, meanwhile, may include computer-readable storage media. Computer-readable media includes at least two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.
The applications 112, 114, and 116 may represent desktop applications, web applications provided over a network 102, and/or any other type of application capable of running on the computing device 104. The network 102, meanwhile, is representative of any one or combination of multiple different types of networks, interconnected with each other and functioning as a single large network (e.g., the Internet or an intranet). The network 102 may include wire-based networks (e.g., cable) and wireless networks (e.g., cellular, satellite, etc.).
As illustrated, the computing device 104 implements a video application 112 that functions to structure streaming or stored video for acquiring a key role and community discovery for presentation from a generation application 114. In other implementations, the generation application 114 may be integrated in the video application 112.
Example Components
Various components may be employed to automatically generate video presentations by acquiring key roles from the video without employing rich metadata. In at least one instance, the described components discover a community to represent the video. The components then use the community to determine the key roles, which the components then use to create a poster or other type of promotional material that accurately portrays the contents of the video. For instance, the poster may include images of the key roles identified with reference to the discovered community.
FIG. 2, for instance, illustrates, at 200, example components for discovering a community from a video to acquire key roles independent of rich metadata such as cast lists and scripts. The described approach includes discovering key roles and their relationships based on content analysis.
As shown in FIG. 2, a video tool 202 (e.g., which may include the video application 112 or similar logic) includes a video structuring component 204 that receives a video 206. In response, the video structuring component 204 analyzes and segments the video into hierarchical levels. The video structuring component 204 then outputs the video structure information 208 as hierarchically structured levels that include scenes, shots, and key frames for further processing by other components included in the video tool 202.
A face grouping component 210, in the illustrated instance, detects faces from the key frames and performs face grouping to output a face cluster 212 for each role in the video. Based on the roles represented by each face cluster 212 and the video structure information 208, the community discovery component 214 identifies nodes (e.g., according to co-occurrence of the roles in a scene) and constructs a community graph 216. The community graph 216 is input to the generation tool 218, which in FIG. 2 is shown integrated in the video tool 202. In other implementations, for example as shown in the environment of FIG. 1, the generation tool 218 may be separate from and operate independently of the video tool 202.
In a community graph 216, each node represents a key role within the video, and the weight of each edge indicates the significance of the relationship between each pair of roles. In some instances, the size of a particular node in the community graph 216 corresponds to how "key" the community discovery component 214 determines the role is in the community.
In the illustrated example of the community graph 216, the four illustrated roles are identified as most important based on their interactions, although any number of roles may make up the community graph 216 in other instances. In this example, a node 220 represents the most key role, while a node 222 represents the next most key role, and the nodes 224 and 226 represent other key roles that interact with the roles represented by the nodes 220 and 222 but appear less often in the video. Accordingly, the nodes 220 and 222 likely represent characters played by the stars of the video, while the nodes 224 and 226 likely represent major supporting roles.
FIG. 3 illustrates, at 300, example components for determining a face cluster 212. As shown at 300, the face grouping component 210 includes a face detection component 302 that receives one or more key frames 304, such as from the structured video 208. The face detection component 302 detects faces from the key frames 304 to obtain the face information 306, which includes bounding rectangles around the detected faces as face images. The face detection component 302 may detect multiple face areas from each key frame 304 in some instances, since a video can contain a large number of characters per shot. Based on the face images detected from each face area, the face grouping component 210 groups the face images detected to be the same person together to form several groups. The higher the number of face images in a group, the more often the detected face appears in shots of the video.
A feature extraction component 308 extracts features from the face information 306. The feature extraction component 308 includes a face image normalization component 310 that normalizes the detected faces into (e.g., 64×64) gray scale images 312. A feature concatenation component 314 concatenates the gray values of the pixels as a 4096-dimensional vector 316 for each detected face image, in some instances.
A face descriptor component 318 creates a description for each detected face image based on the vector 316. The face descriptor component 318 includes a distance matrix component 320 that receives each vector 316 and compares the vectors using learning-based encoding and principal component analysis (LE-PCA) to produce a similarity matrix 322. A clustering component 324 then takes the similarity matrix 322 as input and outputs a face cluster 212 with an exemplar 326 for each cluster, which is used by the generation tool 218. In various implementations, the clustering component 324 employs an Affinity Propagation (AP) clustering algorithm. However, in other implementations a K-Means or other clustering algorithm may be employed. In some instances, the exemplar 326 is the face image that is first identified as belonging to the face cluster 212. In other instances, the exemplar 326 is selected based on other or additional criteria, such as having a forward-facing pose or the illumination conditions of the particular face image. The exemplar 326 is used as the node representation in the community graph 216 in some implementations.
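As a rough illustration of the pipeline just described, the sketch below normalizes face crops to 64×64 gray-scale images, flattens them into 4096-dimensional vectors, and clusters them with scikit-learn's AffinityPropagation on a precomputed similarity matrix. The raw-pixel similarity here stands in for the LE-PCA descriptor described above, and the resize is a crude nearest-neighbor subsampling; both are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics.pairwise import euclidean_distances

def face_to_vector(face_img):
    """Normalize a detected face crop to 64x64 gray scale; flatten to 4096-d."""
    gray = face_img.mean(axis=2)                       # naive RGB -> gray
    ys = np.linspace(0, gray.shape[0] - 1, 64).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, 64).astype(int)
    return gray[np.ix_(ys, xs)].ravel()                # the 4096-dim vector 316

def cluster_faces(face_images):
    """Group face vectors into per-role clusters; returns labels and exemplars."""
    X = np.stack([face_to_vector(f) for f in face_images])
    sim = -euclidean_distances(X, squared=True)        # similarity matrix 322
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(sim)
    return labels, ap.cluster_centers_indices_         # exemplar 326 per cluster
```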
Example Approach to Community Discovery from a Video
Various approaches may be employed to automatically generate video presentations by acquiring key roles from a video without employing rich metadata. One such approach includes discovering a community to represent the video. The described approach includes automatically identifying key roles and their relationships based on video content analysis without employing metadata. The approach includes identifying key roles from the video. Key roles are those characters identified by the faces that appear most often in the video; the faces that appear most often are likely to represent the main characters of the video. Once the key roles are identified, the approach discovers a community based on relationships between the identified roles.
FIG. 4 illustrates, at 400, example face images excerpted from several face clusters 212 from a video. Each of the rows 402, 404, 406, and 408 represents a respective cluster and includes seven images from that cluster. The number of images per cluster will vary per video and per role. For each cluster in FIG. 4, the similarity of each pair of vectors representing the face images is calculated using their Euclidean distance. To obtain clusters as exemplified in FIG. 4, the clustering component 324 iteratively calculates an exemplar for each cluster, starting by initially treating each of the n face images, F = {ƒ_i}, i = 1 … n, as a potential exemplar of itself. The clustering component 324 propagates two types of information for each pair ƒ_i and ƒ_j. The first type of information propagates from ƒ_i to ƒ_j and indicates how well ƒ_j would serve as an exemplar of ƒ_i among all of the potential exemplars of ƒ_i. The first type of information is termed responsibility and is denoted r(i, j). The second type of information propagates from ƒ_j to ƒ_i and indicates how appropriately ƒ_j would act as an exemplar of ƒ_i, considering other potential representative face images that may choose ƒ_j as an exemplar. The second type of information is termed availability and is denoted a(i, j).
Given an n×n similarity matrix S = {s(i, j) | s(i, j) is the similarity between ƒ_i and ƒ_j}, such as the similarity matrix 322, the two types of information are propagated iteratively as shown in equation 1, below.
r(i, j) ← s(i, j) − max_{j′≠j} {a(i, j′) + s(i, j′)}
a(i, j) ← min{0, r(j, j) + Σ_{i′∉{i,j}} max{0, r(i′, j)}}  (1)
Self-availability is determined by equation 2, below.

a(j, j) ← Σ_{i′≠j} max{0, r(i′, j)}  (2)
The iteration process stops when convergence is reached, and the exemplar for each face ƒ_i is extracted by solving equation 3, presented below.

arg max_j {r(i, j) + a(i, j)}  (3)
The clustering component 324 clusters faces sharing the same exemplar 326 as a face cluster 212, for example as shown in the excerpted rows 402, 404, 406, and 408, with each cluster containing the images of one role.
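For readers who prefer code to message-passing notation, the following NumPy sketch implements equations 1-3 directly. The damping factor is a standard stabilizer from the affinity propagation literature rather than part of the equations above, and the diagonal of S (the "preference") controls how many exemplars emerge; the median similarity is a common default.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal sketch of the message passing in equations 1-3.

    S: n x n similarity matrix with s(i, j) between faces i and j.
    Returns an exemplar index for each face.
    """
    n = S.shape[0]
    R = np.zeros((n, n))                       # responsibilities r(i, j)
    A = np.zeros((n, n))                       # availabilities a(i, j)
    rows = np.arange(n)
    for _ in range(iters):
        # Equation 1: r(i,j) <- s(i,j) - max_{j' != j} {a(i,j') + s(i,j')}
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Equations 1 and 2: availabilities from positive responsibilities
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())     # keep r(j,j) itself on the diagonal
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - R.diagonal())
        A = damping * A + (1 - damping) * Anew
    # Equation 3: exemplar(i) = argmax_j {r(i,j) + a(i,j)}
    return np.argmax(R + A, axis=1)
```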
FIG. 5 illustrates, at 500, an example of a community graph, such as the community graph 216. In this example, the community graph 500 is discovered from key roles identified from face clusters generated from the same video as the cluster excerpts shown in FIG. 4.
The nodes 502, 504, 506, and 508 of FIG. 5 are exemplars that correspond to the clusters 402, 404, 406, and 408 of FIG. 4, respectively. Meanwhile, the nodes 510 and 512 are exemplars from clusters that were omitted from the sample presented in FIG. 4 in the interest of brevity.
The community graph 500 depicts interactions among roles in a video using social network analysis, which is a field of research in sociology that models interactions among people as a complex network among entities and seeks to discover hidden properties. In the community graph 500, people or roles are represented by nodes/vertices in a social network, while correlations or relationships among the roles are modeled as weighted edges. Because characters in videos interact in different ways, such as through physical contact, verbal interaction, appearing together in frames of the video, and speaking about other characters that are not in the current frame, a community graph may use various correlations.
In the example of the community graph 500, the community discovery component 214 uses a "visually accompanying" correlation for roles that co-occur in a scene. In other examples, one or more different correlations, such as "physical contact" and "verbal interaction," may be used.
Specifically, the "visually accompanying" correlation means that when two roles appear in the scene, they need not appear together in a frame in order to have the "visually accompanying" correlation. Roles appearing closer together in the timeline of the scene indicate a stronger relationship in accordance with the "visually accompanying" correlation. According to the analysis performed by the community discovery component 214, the correlation d(a, b) between two faces a and b is represented by equation 4, in which c is a constant in seconds and ΔT = |time(a) − time(b)| measures the temporal distance of the two faces a and b.
The community discovery component 214 collects the correlations or relationships of all of the faces from each detected role and calculates the weight of the edge between each pair of face clusters A and B in the graph to obtain an adjacency matrix W(A, B) in accordance with equation 5.
W(A, B) = w(A, B) = Σ_{a∈A} Σ_{b∈B} d(a, b)  (5)
For example, the face detection component 302 often detects around 500 faces from the key frames of two hours of video. Thus, the community discovery component 214 calculates d(a, b) about C(500, 2) ≈ 10^5 times for such a two-hour video.
In at least one implementation, face pair correlations d(a, b) are calculated scene by scene. In other implementations, however, face pair correlations d(a, b) may be calculated on a per-video basis or across multiple videos, for example in the case of a television or movie series.
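A small sketch of this edge-weight accumulation appears below. Because the body of equation 4 is not reproduced in this text, the correlation d(a, b) here is a hypothetical exponential decay over the temporal distance ΔT with the constant c in seconds; only equation 5's double sum and the scene-by-scene scoping are taken from the description above.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def d(time_a, time_b, c=10.0):
    """Hypothetical stand-in for equation 4: decay with temporal distance."""
    delta_t = abs(time_a - time_b)             # delta T = |time(a) - time(b)|
    return np.exp(-delta_t / c)                # c: a constant in seconds (assumed)

def adjacency_matrix(faces, n_roles, c=10.0):
    """Equation 5: W(A, B) = sum over a in A, b in B of d(a, b), scene by scene.

    faces: iterable of (role_id, scene_id, timestamp_seconds) per detected face.
    """
    W = np.zeros((n_roles, n_roles))
    by_scene = defaultdict(list)
    for role, scene, t in faces:
        by_scene[scene].append((role, t))
    for members in by_scene.values():          # "visually accompanying" co-occurrence
        for (ra, ta), (rb, tb) in combinations(members, 2):
            if ra != rb:
                w = d(ta, tb, c)
                W[ra, rb] += w
                W[rb, ra] += w
    return W
```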
The community graph 500 includes nodes of differing sizes that illustrate the size of the corresponding face cluster. For example, the node 506 being larger than the other nodes indicates that the cluster 406 includes more face images than the other clusters for the example video. In addition, the weights of the edges between the nodes illustrate the strength of the correlation. Although FIG. 5 shows the weights both numerically and graphically by the width of the edge line, both need not be shown.
A parameter can be set in various implementations to control the minimum strength of correlation, as well as the number or percentage of roles/nodes to be included in a community graph 216, such as the graph 500. For example, one set of parameter entries may include the top configurable number or percentage of identified key roles whose correlation weights exceed a configurable amount or percentage, while another set may include the top 5 roles, or the top 25% of identified key roles, having the highest 25% of correlation weights or weights of 0.2 or higher. In some instances, all nodes connected by edges meeting the threshold correlation weight are illustrated, and other parameter entries may be employed. A minimal sketch of this pruning appears below.
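This sketch assumes a weight threshold and a top-fraction cutoff as the two configurable parameters; the specific parameter names and defaults are illustrative only.

```python
import numpy as np

def prune_graph(W, min_weight=0.2, top_fraction=0.25):
    """Keep only edges above min_weight and the top fraction of roles by degree."""
    W = np.where(W >= min_weight, W, 0.0)            # drop weak correlations
    degree = W.sum(axis=1)
    k = max(1, int(np.ceil(len(degree) * top_fraction)))
    keep = np.argsort(degree)[::-1][:k]              # most-connected roles
    return W[np.ix_(keep, keep)], keep
```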
Example Video Poster Generation
FIG. 6 illustrates example user interface (UI) presentations in the form of posters created by the generation application 114, for example as embodied by the generation tool 218, using key-role acquisitions from a video. Key roles and their relationships, such as those discovered via the community graph 216, provide a basis for a wide variety of applications. For example, visual summaries or video posters may be generated based on acquired key roles. FIG. 6 illustrates four different styles of poster visualizations based on the example community graph 500. As described herein, visual summaries and video posters include static previews comprising either an existing image or a synthesized image of video content.
In the video domain, content includes movies, television programs, music videos, and personal videos, as well as movie series and television series. Digital or printed posters, with graphical images and often containing text, are designed to promote this video content. Promotional posters serve the purpose of attracting the attention of potential audiences as well as revealing key information about the content to entice the potential audience to view the video.
The generation tool 218 automatically creates a presentation or poster containing identified key roles, such as key roles selected from one of the community graphs 216 or 500. The key roles will generally appear frequently in the video and have many interactions with other roles in the video.
The generation tool 218 identifies the nodes/vertices that contain the most frequently captured faces and that have edges to other vertices with a correlation weight meeting a minimum or configurable threshold. The generation tool 218 employs a role importance function ƒ(v) on a vertex v, where FaceNum(v) denotes the number of faces in the cluster represented by the vertex v and Degree(v) is the degree of the vertex v in the community graph, e.g., the sum of the weights of the edges connected to v. The terms FaceNum(v) and Degree(v) may be at different levels of granularity. Thus, the generation tool 218 employs λ = (number of faces)/Σ_v Degree(v) to balance these two terms in the role importance function presented as equation 6, below.
ƒ(v) = FaceNum(v) + λ·Degree(v)  (6)
Various implementations of the generation tool 218 are configurable to select a number or percentage of roles with the largest ƒ(v) as the key roles for presentation. For example, the 3-5 roles with the largest ƒ(v) may be selected, roles with an ƒ(v) above a threshold may be selected, or the roles with the top 25% of the calculated ƒ(v) values may be selected. In at least one embodiment, the roles selected may be based on an organic separation, that is, a natural breaking point where there is a noticeably larger gap between successive ƒ(v) values in the range of ƒ(v) represented by the community graph 216. A sketch of this selection follows.
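The sketch below computes equation 6 and returns the k highest-scoring roles; the top-k policy is just one of the configurations mentioned above, and the function names are illustrative.

```python
import numpy as np

def role_importance(face_counts, W):
    """Equation 6: f(v) = FaceNum(v) + lambda * Degree(v)."""
    degree = W.sum(axis=1)                              # summed edge weights per node
    lam = face_counts.sum() / max(degree.sum(), 1e-9)   # balances the two terms
    return face_counts + lam * degree

def select_key_roles(face_counts, W, k=4):
    """One configuration: the k roles with the largest f(v)."""
    scores = role_importance(np.asarray(face_counts, dtype=float), W)
    return np.argsort(scores)[::-1][:k]
```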
FIG. 6, at 602, illustrates a representative-frame style poster. To create this style of poster, the generation tool 218 selects a key frame that contains key roles. For example, the key frames in contention to be selected may be those containing the most key roles, or those containing a number of key roles above a configurable threshold. The generation tool 218 also quantifies how well a contending key frame represents the entire video in terms of color and/or theme, as well as the visual quality of the contending key frame, including whether the frame and the characters contained therein are "in focus."
The generation tool 218 employs a representation function r(ƒ_i) on each contending key frame ƒ_i and selects the frame with the largest r. The representation function r(ƒ_i) is shown in equation 7, below.
In equation 7, j indicates the face index in the frame ƒ_i, S(ƒ_i(j)) denotes the area of the j-th face, h(ƒ_i) indicates the color histogram of the key frame ƒ_i, and h̄ is the average color histogram of the video. Other features related to video quality are integrated in various implementations.
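Since the body of equation 7 is not reproduced in this text, the sketch below assumes a plausible combination of the two described terms: the total area of key-role faces in the frame (larger is better) and the distance between the frame's color histogram and the video-wide average (smaller is better). The weight α and the exact functional form are assumptions, not the patent's equation.

```python
import numpy as np

def representation_score(face_areas, frame_hist, avg_hist, alpha=1.0):
    """Hypothetical r(f_i) built from the terms described for equation 7."""
    face_term = sum(face_areas)                        # S(f_i(j)) summed over faces j
    hist_term = np.abs(np.asarray(frame_hist) - np.asarray(avg_hist)).sum()
    return face_term - alpha * hist_term               # large faces, typical colors
```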
FIG. 6 illustrates two collage-style posters at 604 and 606. To create these styles of poster, the generation tool 218 extracts a representative face image for each key role and employs a collage technique to organize the faces into a visually appealing presentation. The generation tool 218 selects candidate face images using the role importance function ƒ(v) shown in equation 6. In addition, the generation tool 218 selects the number of roles to be included in the collage from the values assigned to the nodes by the role importance function ƒ(v) shown in equation 6.
In various implementations, the representative faces extracted from the candidate face images are also selected based on being front-facing, of acceptable visual quality (e.g., clear as opposed to blurry), and/or not occluded by other characters, scenery, or, in some instances, clothing such as hats, scarves, or dark glasses.
The collage technique used by the generation tool 218 to create the picture collage style shown at 604 detects the face region as the region-of-interest (ROI). The generation tool 218 employs a Markov chain Monte Carlo (MCMC) method to assemble a picture collage in which all ROIs are visible while other parts of the images are overlaid. Similarly, after detecting the face region as the ROI, the collage technique used by the generation tool 218 to create the video collage style shown at 606 concatenates the images by smoothing the boundaries to assemble a naturally appealing collage.
FIG. 6 illustrates a synthesized-style poster at 608. To create this style of poster, the generation tool 218 seamlessly embeds images of the key roles on a representative background. Thus, the synthesized-style poster contains a representative background, which introduces typical surroundings and context, in addition to prominently featuring key roles to entice potential viewers to watch the video.
To create the synthesized style of poster, the generation tool 218 selects a key frame that contains a representative background and filters out or extracts objects from the background based on character interaction with the objects. In various implementations, the generation tool 218 selects the background key frame using a process equivalent to that of selecting a representative frame as a poster, as discussed regarding 602 of FIG. 6. However, when selecting a background key frame, the generation tool 218 selects the frame with the smallest r(ƒ_i) as defined by equation 7. When selecting a background frame, the generation tool 218 selects a frame in which a minimal number of faces appear, to avoid viewer distraction and to minimize object/face removal processing.
The generation tool 218 seamlessly inserts face images of the key roles on the filtered background. In at least one implementation, the position and scale of the face images are based on the size of the corresponding cluster 212 represented by the node in the community graph 216. For example, images from the largest clusters are featured more prominently than those from smaller clusters.
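The description above only states that images from larger clusters are featured more prominently; the linear mapping below is one hypothetical way to turn cluster sizes into relative face scales for the synthesized poster.

```python
import numpy as np

def face_scales(cluster_sizes, base=0.15, spread=0.25):
    """Map cluster sizes to relative face heights: bigger cluster, bigger face."""
    sizes = np.asarray(cluster_sizes, dtype=float)
    span = max(sizes.max() - sizes.min(), 1e-9)        # avoid divide-by-zero
    return base + spread * (sizes - sizes.min()) / span
```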
Example Processes
FIGS. 7 and 8 are flow diagrams illustrating example processes 700 and 800 for performing key-role acquisition from video as represented in FIGS. 2-6.
The process 700 (as well as each process described herein) is illustrated as a collection of acts in a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer instructions stored on one or more computer-readable media that, when executed by one or more processors, perform the recited operations. Note that the order in which the process is described is not intended to be construed as a limitation, and any number of the described acts can be combined in any order to implement the process, or an alternate process. Additionally, individual blocks may be deleted from the process without departing from the spirit and scope of the subject matter described herein. In various implementations, one or more acts of the process 700 may be replaced by acts from the other processes described herein.
The process 700, for example, includes, at 702, the video tool 202 receiving a video. For instance, the received video may be a video streamed over a network 102 or stored on a computing device 104. At 704, the video tool 202 performs video structuring. For example, the received video is structured by segmenting the video into a hierarchical structure that includes levels for scenes, shots, and key frames. At 706, the video tool 202 processes the faces from the structured video. For instance, faces from the key frames are processed by detecting and grouping. At 708, the video tool 202 discovers a community based on the processed faces. At 710, the video tool 202 automatically generates a presentation of the video based on the discovered community. In several implementations, the presentation is generated without relying on rich metadata such as cast lists, scripts, or crowd-sourced information such as that obtained from the world-wide-web.
The process 800, as another example, includes, at 802, the video tool 202 receiving a video. At 804, the video structuring component 204 hierarchically structures the video into the video structure information 208, including scene, shot, and key frame segments. For instance, the video structuring component 204 may first detect shots as continuous sections of video taken by a single camera, extract a key frame from each shot, and detect similar shots that the video structuring component 204 groups to form a scene. At 806, the community discovery component 214 and the face grouping component 210 receive the scene, shot, and key frame segments. At 808, the face grouping component 210 performs face grouping by detecting faces from the key frames to form the face clusters 212.
At 810, meanwhile, the community discovery component 214 constructs a community graph 216 by identifying nodes (e.g., according to co-occurrence of the roles in a scene) based on the roles represented by the face clusters 212 and the video structure information 208. At 812, the generation tool 218 receives the community graph 216. At 814, the generation tool 218 identifies important roles by using a role importance function such as that shown in equation 6. For instance, the generation tool 218 calculates role importance based on the nodes/vertices of the community graph 216 that contain the most frequently captured faces and have an appropriate number of edges connecting to other nodes/vertices. At 816, the generation tool 218 generates one or more presentations in accordance with those shown in FIG. 6.
FIG. 9 is a flow diagram of an example process for acquiring key roles via face grouping. The process 900 of FIG. 9 includes, at 902, the face grouping component 210 receiving the key frames 304. At 904, the face detection component 302 detects the face information 306 from the key frames 304. At 906, the feature extraction component 308 receives the detected face information 306. At 908, the face image normalization component 310 normalizes the detected faces into (e.g., 64×64) gray scale images 312. At 910, the feature concatenation component 314 concatenates the gray values of the pixels of the gray scale images 312 as a 4096-dimensional vector 316, in some instances. At 912, the face descriptor component 318 receives the vector 316. At 914, the distance matrix component 320 produces a similarity matrix 322 by comparing the received vectors using learning-based encoding and principal component analysis (LE-PCA). At 916, the clustering component 324 generates face clusters, like the face cluster 212, and selects an exemplar 326 for each cluster.
FIG. 10 is a flow diagram of an example process employing key-role acquisition from video to generate a presentation. The process 1000 of FIG. 10 illustrates the generation tool 218 automatically creating a presentation or poster containing identified key roles selected from a community graph such as the community graphs 216 or 500.
At 1002, the generation tool 218 identifies nodes/vertices containing the most frequently captured faces and having edges to other vertices with a correlation weight meeting a minimum threshold by using a role importance function. For instance, the generation tool 218 may use a role importance function such as that shown in equation 6 to identify the desired nodes/vertices.
At 1004, the generation tool 218 selects one or more presentation styles for generation. At 1006, when the generation tool 218 selects a key frame style presentation, such as the example shown at 602, a representative frame containing key roles is selected as the presentation by using a representation function such as that shown in equation 7. At 1008, when the generation tool 218 selects a collage style presentation, such as the picture collage style example shown at 604 or the video collage style example shown at 606, the generation tool 218 selects candidate face images by using a role importance function. In some instances, the generation tool 218 uses a role importance function such as that shown in equation 6 to select the candidate face images.
At 1010, processing for the two example collage styles diverges. At 1012, when the generation tool 218 selects a picture collage style presentation, the generation tool 218 assembles a picture collage in which each face region-of-interest is visible, while other parts of the face images are overlaid. At 1014, when the generation tool 218 selects a video collage style presentation, the generation tool 218 creates a video collage by detecting the face regions-of-interest and concatenating the images with smoothed boundaries to assemble a naturally appealing collage.
At 1016, when the generation tool 218 selects a synthesized style presentation, such as the example shown at 608, the generation tool 218 synthesizes a presentation by embedding images of the key roles on a representative background. For example, the representative background frame with the smallest r(ƒ_i) as defined by equation 7 is selected. To complete the synthesized style presentation, the generation tool 218 embeds face images of the identified key roles on the filtered background.
At 1018, the generation tool 218 provides the selected presentation styles for display. In various implementations, the presentations are displayed electronically, e.g., on a computer screen or digital billboard, although the presentations may also be provided for use in print media.
CONCLUSION

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.