CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority from U.S. Provisional Application No. 62/141,132, filed Mar. 31, 2015. The aforementioned application is hereby incorporated by reference in its entirety.
BACKGROUND
1. Technical Field
One or more embodiments relate to systems and methods for providing media content to multiple users. More specifically, one or more embodiments of the present invention relate to systems and methods for distributing media content among multiple users.
2. Background and Relevant Art
Through advancements in computing devices and computing technology, users can often share user-generated media with other users. As such, users are increasingly capturing and sharing experiences using various computing devices. For example, modern mobile devices enable users to capture and share pictures, videos, and text with co-users (e.g., family members, co-workers, friends, or with the public at large). For instance, a user can share user-generated content with a group of friends via a variety of communication systems (e.g., IM, text, or social networks).
Despite advances in technology, a number of drawbacks remain for a user wanting to share user-generated media with other users. For example, one disadvantage of conventional systems is that many conventional systems are directed toward media posts that are individualistic in nature. In other words, a thread of posts between multiple users focuses on interactions between the individual user that created the post and the other co-users interacting with the user, rather than a group of users interacting with each other as a group. Thus, many conventional systems do not provide an environment where a group of users can co-create and share group-created media with each other.
As another disadvantage, many conventional communication systems that allow users to share user-generated media often provide a cluttered and confusing presentation of the shared content. Some conventional systems that allow users to share user-generated media with other users attempt to reduce interface clutter by removing, deleting, or denying additional access to shared media once a co-user accesses the shared media. Specifically, in these conventional systems, a co-user's access to the shared media ends after the co-user accesses the shared media (e.g., views a shared photo or video). Although these conventional systems attempt to reduce clutter, they do so at the expense of further increasing the isolation of each post between users. In other words, because users view each media post in isolation, and because each media post is removed automatically after a user accesses the media, participating in a media conversation using these conventional systems is difficult. This is especially the case with a group media conversation involving multiple users.
Accordingly, there are a number of considerations to be made in improving a user experience in relation to creating and participating in multimedia conversations with a group of users.
SUMMARY
One or more embodiments described herein provide benefits and/or solve one or more of the foregoing or other problems in the art with systems and methods of creating and sharing collaborative media content between co-users. For example, one or more principles described herein provide systems and methods that allow a user to view, contribute to, and create media presentations that include media segments generated and provided by the user and/or multiple co-users.
Moreover, some principles described herein provide systems and methods that provide users with a number of features that assist users in the automatic creation, sharing, and filtering of media segments and media presentations. For example, systems and methods described herein may provide for the automatic creation of media presentations based on related media segments or the automatic creation of an event media presentation. As another example, the systems and methods described herein may provide suggestions to a user as to the co-users with whom the user should share a media presentation. Further, the systems and methods may filter a media presentation to isolate particular media segments as well as identify and block inappropriate media.
The systems and methods may also provide information to users regarding media presentations as well as about co-users. For example, the systems and methods disclosed herein may automatically generate and display credits for a media presentation. Further, the systems and methods can allow users to view and contribute to a co-user's profile, such as enabling the user to share a personal media segment message with the co-user.
In addition, principles described herein provide systems and methods that allow users to access live media captured at an event and/or view highlights from the event. Further, the systems and methods may allow a user to access and view the live media and highlights from more than one angle or perspective. For instance, systems and methods described herein can obtain multiple live streaming media segments captured at the same time by different users at an event, and share the live streaming media segments with other co-users.
In addition, the systems and methods disclosed herein provide a user with a media presentation list that organizes media presentations for presentation to the user. For example, the systems and methods disclosed herein provide efficient and intuitive navigation between various media segments within a media presentation. As a result, a user can quickly navigate and experience the media presentations in a manner that reduces user interface clutter and increases user enjoyment. Similarly, a user can intuitively navigate through media segments within a media presentation to experience a media presentation in an enjoyable manner.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary embodiments. The features and advantages of such embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary embodiments as set forth hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to describe the manner in which the above recited and other advantages and features of one or more embodiments can be obtained, a more particular description will be rendered by reference to specific embodiments thereof that are illustrated in the accompanying drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. These drawings depict only typical embodiments, and are not therefore to be considered limiting of their scope. Accordingly, various embodiments will be described and explained with additional specificity and detail using the accompanying drawings.
FIG. 1 illustrates a schematic diagram of a communication system in accordance with one or more embodiments described herein;
FIG. 2 illustrates a schematic diagram of a media presentation system in communication with one or more client devices in accordance with one or more embodiments described herein;
FIG. 3 illustrates a sequence-flow diagram showing multiple client devices in communication with the media presentation system in accordance with one or more embodiments;
FIG. 4 illustrates a flowchart of a method for automatically creating a media presentation based on related media segments in accordance with one or more embodiments;
FIG. 5 illustrates an exemplary graphical user interface showing an example media presentation in accordance with one or more embodiments;
FIG. 6 illustrates an exemplary graphical user interface of an example process of applying facial recognition to a media presentation in accordance with one or more embodiments;
FIG. 7 illustrates a media presentation that includes a list of participants in accordance with one or more embodiments;
FIG. 8 illustrates a media presentation that includes a credits media segment in accordance with one or more embodiments;
FIG. 9 illustrates a flowchart of a method for generating a participant list in accordance with one or more embodiments;
FIGS. 10A-10B illustrate exemplary graphical user interfaces showing a user profile in accordance with one or more embodiments;
FIGS. 11A-11B illustrate exemplary graphical user interfaces showing a company profile in accordance with one or more embodiments;
FIG. 12 illustrates a sequence-flow diagram showing a user and a co-user interacting with the media presentation system in accordance with one or more embodiments;
FIG. 13A illustrates a baseball stadium where multiple users of the media presentation system may be watching a baseball game;
FIG. 13B illustrates an example media presentation including media segments captured by users at the baseball game shown in FIG. 13A in accordance with one or more embodiments;
FIG. 14 illustrates a flowchart of a method for providing, to a user, a live streaming media segment captured by another user in accordance with one or more embodiments;
FIGS. 15A-B illustrate example media presentations including a media presentation of an event that provides a user with event information, live action, replays, and highlights in accordance with one or more embodiments;
FIG. 16 illustrates a flowchart of a method 1600 for providing a live streaming media segment to a user in accordance with one or more embodiments;
FIG. 17 illustrates a flowchart of a method for generating a significant act (“highlights”) media presentation in accordance with one or more embodiments;
FIG. 18 illustrates the process of the media presentation system generating a favorites media presentation in accordance with one or more embodiments;
FIG. 19 illustrates an example media presentation that is filtered to include media segments sharing a common topic in accordance with one or more embodiments;
FIG. 20 illustrates an example embodiment of the media presentation system providing restriction options to a user within a media presentation on a client device in accordance with one or more embodiments;
FIG. 21 illustrates a flowchart of a method for censoring a media segment in accordance with one or more embodiments;
FIG. 22 illustrates another flowchart of a method for censoring a media segment in a media presentation in accordance with one or more embodiments;
FIG. 23 illustrates a flowchart of a method for generating a media presentation based on related media segments in accordance with one or more embodiments;
FIG. 24 illustrates a block diagram of a client device in accordance with one or more embodiments;
FIG. 25 illustrates a network environment of a social networking system in accordance with one or more embodiments described herein; and
FIG. 26 illustrates an example social graph of a social networking system in accordance with one or more embodiments described herein.
DETAILED DESCRIPTION
Embodiments disclosed herein provide benefits and/or solve one or more of the abovementioned or other problems in the art with a media presentation system that improves a user's experience for creating and participating in collaborative multimedia conversations with other users. In particular, the media presentation system allows a user to easily receive, view, contribute to, edit, report, filter, and/or create media presentations. Additionally, in one or more embodiments, the media presentation system provides an intuitive graphical user interface that allows a user to efficiently navigate, view, create, contribute to, and otherwise experience media presentations within a media presentation list.
In particular, one or more embodiments of the media presentation system may automatically create a media presentation based on related media segments. In another embodiment, the media presentation system may generate a credits list and/or participant list. Further, in some embodiments, the media presentation system may provide, to a user, a live streaming media segment captured by another user. In addition, the media presentation system may provide multiple live streaming media segments to one or more users. The media presentation system may also automatically generate and provide a significant act (“highlights”) media presentation to users of the media presentation system. Further, in one or more embodiments, the media presentation system may censor a media segment in a media presentation for one or more users.
In addition to presenting several media presentations to the user, in some embodiments, the media presentation system can enable the user to start or otherwise create a media presentation. For instance, and as discussed briefly above, the media presentation system can assist the user in selecting which co-users can view the media presentation and/or which co-users can add to the media presentation. Further, the media presentation system may allow the user to approve, edit, or remove media segments added by the user or by other co-users to the media presentation.
Accordingly, one or more embodiments of the media presentation system overcome one or more of the disadvantages of conventional systems by providing systems and methods that allow users to create and share collaborative media presentations. As such, example embodiments of the media presentation system allow users to participate in collaborative media conversations in an intuitive and enjoyable manner. Moreover, one or more embodiments of the media presentation system provide a graphical user interface that eliminates user interface clutter and confusion inherent in conventional systems due to having duplicate copies of media and/or long lists of media, which require the user to spend significant amounts of time to manually experience the shared media content. In addition, and as will be described in more detail below, example embodiments of a media presentation system can provide continued access to a collaborative media presentation for users to continue to enjoy and share, while at the same time overcoming user interface clutter (e.g., a user's media contribution is not simply erased to reduce clutter as with some conventional systems).
The term “media,” as used herein refers to digital data that may be transmitted over a communication network. Examples of media include, but are not limited to, digital photos, digital video files, digital audio files, and/or streaming content. Accordingly, media may refer to images, video, audio, text, documents, animations, or any other audio/visual content that may be transmitted over a communication network. In addition, examples of media can include user-generated media (e.g., content that a user captures using a media capturing feature of a smart phone, such as digital photos or videos) as well as nonuser-generated media (e.g., content generated by a party other than a user, but to which the user has access).
The term “media segment,” as used herein refers generally to a discrete portion of media. A media segment may include an image segment, video segment, and/or an audio segment. For example, a media segment may be an image segment that is displayed for a duration of time. As another example, a media segment may include a video clip or an audio clip.
As used herein, the term “media presentation” refers to a defined set of one or more media segments. For example, a media presentation can include a plurality of media segments contributed by one or more users. As such, in one or more embodiments, a media presentation can include a compilation of media segments composed by multiple users. For example, a media presentation may include a thread of related media segments captured by two or more users in a conversation with each other about a particular topic. Additionally, a media presentation can include a single media segment, provided by a user, to which other users can append one or more additional media segments (e.g., the creation of a new media presentation).
As used herein, the terms “interact” or “interacting” refer generally to any type of interface activity between a user and a client device. For example, interacting can include a user viewing, browsing, accessing, and/or otherwise experiencing video content. Moreover, interacting can include selecting elements on a client device, such as selecting menu options or graphical buttons to create a media presentation or add a media segment to an existing media presentation. For instance, a user can interact with a client device to capture a media segment, replay a captured media segment, approve a captured media segment, request a captured media segment be added to a media presentation, or cancel capture of a media segment. In one or more embodiments, a user can interact with a client device using one or more user input devices, such as a touch screen, touchpad, or mouse.
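By way of illustration only, the terms defined above can be pictured as a minimal data model. The following Python sketch is hypothetical; the class and field names are assumptions made solely for explanation and do not limit any embodiment.

```python
# Illustrative sketch only: a minimal, hypothetical data model for the terms
# defined above. Class and field names are assumptions, not a required design.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MediaSegment:
    segment_id: str
    media_type: str                    # e.g., "image", "video", "audio"
    contributor_id: str                # user who captured or provided the segment
    duration_seconds: Optional[float] = None  # display time for an image, runtime for video/audio
    storage_url: str = ""              # where the underlying media is stored

@dataclass
class MediaPresentation:
    presentation_id: str
    creator_id: str
    segments: List[MediaSegment] = field(default_factory=list)  # defined set of one or more segments

    def append(self, segment: MediaSegment) -> None:
        """Append a co-user's segment to the collaborative presentation."""
        self.segments.append(segment)
```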
FIG. 1 illustrates an example embodiment of a communication system 100 (or simply, “system 100”) in accordance with one or more embodiments described herein. As shown, the system 100 may include a media presentation system 102, a first client device 104a, and a second client device 104b (collectively “client devices 104”), that are communicatively coupled through a network 106. Although FIG. 1 illustrates a particular arrangement of the media presentation system 102, client devices 104, and the network 106, various additional arrangements are possible. For example, the client devices 104 may directly communicate with the media presentation system 102, bypassing the network 106.
Optionally, the system 100 may include a social networking system 108. When the system 100 includes the social networking system 108, the media presentation system 102 may be a part of the social networking system 108. In this manner, users of the social networking system 108 may be able to use the features and functionalities of the media presentation system 102 described herein. Alternatively, the media presentation system 102 and the social networking system 108 may be independent from each other, but still be able to communicate with each other via the network 106.
As mentioned above, the media presentation system 102, the client devices 104, and the social networking system 108 may communicate via the network 106. The network 106 may include one or more networks and may use one or more communications platforms or technologies suitable for transmitting data and/or communication signals. Additional details relating to the network 106 are explained below with reference to FIGS. 24-25.
As further illustrated in FIG. 1, a user 110 may interact with the first client device 104a and a co-user 112 may interact with the second client device 104b. The user 110 and/or the co-user 112 may be an individual (i.e., human user), a business, a group, or other entity. For the sake of explanation, FIG. 1 illustrates only one user 110 and one co-user 112; however, it should be understood that the system 100 may include any number of users, with each of the users interacting with the system 100 with one or more client devices. Likewise, it should be understood that the terms “user” and “co-user” are generally used for purposes of explanation, and that the user 110 and the co-user 112 are both simply users of the media presentation system 102.
As mentioned above, the user 110 and the co-user 112 may interact with client devices 104a and 104b, respectively, to communicate with the media presentation system 102 and/or social networking system 108. The client devices 104 may represent various types of client devices. For example, the client devices 104 can include: a mobile device such as a mobile telephone, a smartphone, a PDA, a tablet, a laptop; a non-mobile device such as a desktop or server; or any other type of computing device. In addition, a client device may include display devices such as televisions, LCD displays, LED displays, monitors, projectors, etc. Additional details and examples with respect to the client devices 104 are discussed below with respect to FIGS. 24-25.
As shown in FIG. 1 and as mentioned above, the system 100 can include the social networking system 108. In some embodiments, however, the system 100 does not include a social networking system 108. Nevertheless, the media presentation system 102, the user 110, and/or the co-user 112 may be associated with the social networking system 108. For example, the user 110 may authorize the media presentation system 102 to access the social networking system 108 to obtain information about the user 110, such as the user's profile, social networking contacts, and affinity to each social networking contact. The media presentation system 102 may also use the social networking system 108 to share media presentations among users of the media presentation system 102. For instance, a user may compose and/or initiate a media presentation from the social networking system 108. Further, the user 110 may post a media presentation on a newsfeed of a social networking user who is connected to the user 110 via the media presentation system 102 and/or social networking system 108.
As briefly discussed above, the system 100 includes the media presentation system 102. In general, the media presentation system 102 facilitates the generation and distribution of media presentations. The media presentation system 102 further enables the user 110 to share media presentations with the co-user 112, as well as receive shared media presentations from the co-user 112.
Regardless of the particular components or arrangement of components of the system 100, the system 100 generally allows users of the system to create, edit, filter, report, and/or share media presentations that include media segments contributed from multiple users. As a non-limiting overview example, the co-user 112 may capture media on the second client device 104b (e.g., a digital video) and create a media presentation that includes the captured media as a media segment. The co-user 112 can then share the media presentation with the user 110 via the media presentation system 102. For example, the media presentation system 102 can provide the media presentation to the first client device 104a to present the media presentation to the user 110.
FIG. 2 illustrates a schematic diagram of a media presentation system 102 in communication with one or more client devices 104. The media presentation system 102 in FIG. 2 can represent one or more embodiments of the media presentation system 102 discussed above with reference to FIG. 1. Similarly, the client device 104 shown in FIG. 2 may represent one or more embodiments of the first client device 104a and/or the second client device 104b discussed above with reference to FIG. 1. For example, the media presentation system 102 and the client device 104 in FIG. 2 can be part of the communication system 100 illustrated in FIG. 1.
As illustrated in FIG. 2, the media presentation system 102 can include, but is not limited to, a media presentation generator 210, a distribution manager 212, a media presentation database 214, and a user profile database 216. In general, the media presentation generator 210 can receive media segments from the client device(s) 104 and use the media segments to generate new or updated media presentations. The distribution manager 212 can provide media presentations to one or more users of the media presentation system 102 via the client device(s) 104. The media presentation database 214 can maintain a plurality of media presentations and/or media segments, and the user profile database 216 can maintain user information for users of the media presentation system 102.
Each component of the media presentation system 102 may be implemented using a computing device including at least one processor executing instructions that cause the media presentation system 102 to perform the processes described herein. In some embodiments, the components of the media presentation system 102 can be implemented by a single server device, or across multiple server devices. Although a particular number of components are shown in FIG. 2, the media presentation system 102 can include more components or can combine the components into fewer components (such as a single component), as may be desirable for a particular embodiment.
As briefly mentioned above, and as illustrated in FIG. 2, the media presentation system 102 may include a media presentation generator 210. The media presentation generator 210 may create a new media presentation or generate an updated media presentation, as described below. The media presentation generator 210 may also communicate with the media presentation database 214, which may store media presentations and/or media segments.
In one or more embodiments, for example, the media presentation generator 210 may generate a media presentation when the media presentation system 102 receives one or more media segments from the client device 104. For example, a user may request, on the client device 104, to create a new media presentation or add a media segment to an existing media presentation. In response, the client device 104 may capture a media segment, and then send the media segment to the media presentation system 102 to create a new media presentation or to add to an existing media presentation.
To illustrate, the media presentation system 102 may receive a media segment from the first client device 104a. The media segment may include an indication that identifies the media segment as a new media presentation or identifies the media segment as corresponding to an existing media presentation. For example, a media segment may include metadata that identifies the media segment as part of a new media presentation or as part of an existing media presentation. For instance, in the event a user, via the first client device 104a, submits a media segment intended to be the first media segment of a new media presentation, the metadata can include identification data that is null or identification data that otherwise indicates the media segment is not associated with any existing media presentation. On the other hand, when a user submits a media segment that is intended to be associated with an existing media presentation, the metadata can include identification data that uniquely references the existing media presentation.
Accordingly, after the media presentation system 102 receives a media segment, the media presentation generator 210 may determine that the media segment is not associated with an existing media presentation, and in response, the media presentation generator 210 may generate a new media presentation that includes the received media segment. As part of creating a new media presentation, for example, the media presentation generator 210 can associate identification data with the media presentation that the media presentation system 102 can use to identify the media presentation, as well as associate other properties or settings with the media presentation, as will be further described below.
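By way of illustration only, the decision described in the two preceding paragraphs might be sketched as follows. The field names (presentation_id, user_id, segment_id) and the in-memory store are hypothetical assumptions, not a required implementation.

```python
# Illustrative sketch: routing a received media segment based on its metadata.
# All names and the in-memory store are hypothetical assumptions.
import uuid

presentations = {}  # presentation_id -> {"creator": ..., "segment_ids": [...]}

def handle_received_segment(segment_metadata: dict) -> str:
    presentation_id = segment_metadata.get("presentation_id")  # null/None => new presentation
    if not presentation_id or presentation_id not in presentations:
        # Not associated with any existing media presentation: create a new one
        # and associate identification data with it.
        presentation_id = str(uuid.uuid4())
        presentations[presentation_id] = {
            "creator": segment_metadata["user_id"],
            "segment_ids": [],
        }
    # In either case, the identified presentation now includes the received segment.
    presentations[presentation_id]["segment_ids"].append(segment_metadata["segment_id"])
    return presentation_id

# A first segment with no presentation identifier starts a new media presentation;
# a later segment that references that identifier is appended to it.
new_id = handle_received_segment({"user_id": "user-17", "segment_id": "seg-001"})
handle_received_segment({"user_id": "user-22", "segment_id": "seg-002", "presentation_id": new_id})
```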
In some embodiments, upon creating a media presentation, the media presentation generator 210 can create a media presentation file to store information and metadata for the media presentation. The file may include data structures such as one or more tables, arrays, databases, etc. Further, the media presentation generator 210 may store the media presentation file in the media presentation database 214 in connection with the media presentation.
The media presentation file can include information, such as a user ID corresponding to the user that created the file. In addition, the media presentation file can include links or pointers to media segments included in the media presentation (e.g., the location of a media segment stored within the media presentation database 214 and/or a third-party database). Further, the media presentation file can include information about each media segment, such as which users contributed each media segment, which users are associated with each media segment (e.g., tagged, identified in, etc.), the order in which each media segment should be presented on the client device 104 (e.g., play list order), an image that represents each media segment (e.g., a frame from the media segment, a user selected image, a default image, etc.), and/or information about each media segment (e.g., likes, shares, views, etc.). In this manner, the media presentation generator 210 may create a media presentation by associating and/or threading a number of media segments together.
Further, the media presentation file can include permission and authorization information. For example, the media presentation file may indicate which users are authorized to view and which users are permitted to contribute to the media presentation. As another example, the media presentation file can indicate which users can modify media segments within the media presentation and/or remove the media presentation from the media presentation system 102.
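Purely for illustration, a media presentation file of the kind just described might be represented as a structured record such as the following. Every key shown is a hypothetical example of the categories of information, pointers, and permissions discussed above, not a required schema.

```python
# Hypothetical example of a media presentation file, shown as a Python dict.
# The keys mirror the kinds of information described above; none are required.
media_presentation_file = {
    "presentation_id": "mp-1024",
    "creator_user_id": "user-17",                        # user ID of the user that created the file
    "segments": [                                        # links/pointers, in play-list order
        {
            "segment_id": "seg-001",
            "location": "mediadb://segments/seg-001",    # pointer into the media presentation database
            "contributor_user_id": "user-17",
            "tagged_user_ids": ["user-22"],
            "thumbnail": "frame-0",                      # image that represents the segment
            "stats": {"likes": 4, "shares": 1, "views": 53},
        },
    ],
    "permissions": {
        "can_view": ["user-17", "user-22", "user-31"],
        "can_contribute": ["user-17", "user-22"],
        "can_modify_segments": ["user-17"],
        "can_remove_presentation": ["user-17"],
    },
}
```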
In addition to determining that a media segment is not associated with an existing media presentation, the media presentation generator 210 can determine that a received media segment is associated with an existing media presentation. Based on determining that a media segment is associated with an existing media presentation, the media presentation generator 210 may update the existing media presentation to include the received media segment.
In one or more embodiments, the media presentation generator 210 may append the received media segment to one or more media segments corresponding to a media presentation. Specifically, the media presentation system 102 may receive a media segment from a client device 104. Further, the media presentation system 102 may identify the received media segment as belonging to an existing media presentation. Next, the media presentation generator 210 may append the received media segment to the identified media presentation.
In some example embodiments, the media presentation generator 210 may append a media segment to a media presentation by updating the media presentation file associated with the media presentation. For example, upon the media presentation system 102 receiving a media segment that is to be appended or added to an existing media presentation, the media presentation generator 210 may update the media presentation file to point to the received media segment. For instance, the media presentation generator 210 may add a link or pointer to the location at which the received media segment is stored in the media presentation database 214. Accordingly, each time a media presentation is updated (e.g., a media segment is added, removed, or modified), the media presentation generator 210 may update the media presentation file corresponding to the media presentation.
Because, in some embodiments, the media presentation generator 210 generates media presentations by linking media segments together (e.g., in a media presentation file), the media presentation generator 210 may include the same media segment in numerous media presentations without storing duplicate portions of the same media segment on the media presentation system 102 or on client devices. In other words, more than one media presentation file may link or point to the same media segment. For example, a user can use the same media segment in several media presentations, and the media presentation system 102 may only store the media segment once. As another example, multiple users may include the same media segment (e.g., a popular or trending media segment) in several different media presentations, and the media presentation system 102 may link to the same media segment in each of the several media presentations.
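The link-rather-than-copy idea described above can be pictured, as a hypothetical sketch only, as two presentation files pointing at the same stored segment; the field names are illustrative assumptions.

```python
# Illustrative sketch: two media presentation files linking to the same stored
# segment, so the segment is stored once in the media presentation database.
segment_location = "mediadb://segments/seg-trending-42"   # stored a single time

presentation_a = {"presentation_id": "mp-1", "segment_links": [segment_location]}
presentation_b = {"presentation_id": "mp-2", "segment_links": [segment_location]}

def append_segment(presentation_file: dict, location: str) -> None:
    """Append by adding a pointer; no duplicate copy of the segment is made."""
    presentation_file["segment_links"].append(location)

append_segment(presentation_a, "mediadb://segments/seg-new-43")
```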
In an alternate embodiment, upon receiving a media segment that the media presentation system 102 identifies as corresponding to an existing media presentation, the media presentation generator 210 may create a new media presentation by creating a new content file that includes the existing media presentation content as well as the received media segment content. In this manner, each time a media segment is modified, updated, and/or removed, the media presentation generator 210 may create a new media presentation content file. Further, the media presentation generator 210 may overwrite or remove one or more previous versions of a media presentation content file when a media presentation is updated.
Additionally, the media presentation generator 210 may create both an updated media presentation file as well as an updated content file each time the media presentation generator 210 updates a media presentation with another media segment. For example, the updated media presentation file may point to the updated content file as it is stored on the media presentation system 102. For instance, the media presentation file may include a single link that points to the media presentation content file stored on the media presentation system 102.
As mentioned above, the media presentation system 102 includes a distribution manager 212. In general, the distribution manager 212 sends media presentations to users of the media presentation system 102. Once the distribution manager 212 determines to whom to distribute a media presentation, the distribution manager 212 may send the media presentation.
As described above, each media presentation may be associated with a media presentation file. The distribution manager 212 may send the media presentation file to the client device 104. In some embodiments, the distribution manager 212 may send the media presentation file in connection with one or more media segments that belong to a media presentation. For example, the distribution manager 212 may send a minimum number of media segments (e.g., the first media segment, the first three media segments, all media segments, etc.) or a maximum number of media segments (e.g., no more than three media segments, no more than 20 megabytes worth of media segments) to the client device 104 when sending the media presentation file. If the distribution manager 212 is sending an updated file to the client device 104, the distribution manager 212 may include any updated media segments.
Alternatively, the distribution manager 212 may send the media presentation file without any accompanying media segments. In this case, the client device 104 may request one or more media segments when a user, via the client device 104, requests access to a media presentation (e.g., the user provides an interaction that indicates the user wants to watch a media presentation). In this manner, the distribution manager 212 may send a media presentation file or updates to a media presentation file without sending larger media segment files to the client device 104. Then, when the client device 104 requests one or more media segments for a media presentation, the media presentation system 102 may send or stream the media segments to the client device 104.
To illustrate, the distribution manager 212 sends a media presentation file to the client device 104 for a media presentation having a number of media segments. When a user on the client device views the media presentation and/or activates the media presentation (described below), the client device 104 may access the media presentation file and identify the media segments to request from the media presentation system 102. For instance, based on the media presentation file, the client device 104 may request that the distribution manager 212 send or stream the first media segment. As such, the distribution manager 212 accesses the media segments, for example, from the media presentation database 214, and provides the first media segment to the client device 104. While the user continues to watch the first media segment, the client device may, based on the media presentation file, identify a second media segment as being the next segment in the media presentation, and request that the media presentation system 102 send the second media segment. Again, upon receiving the request, the distribution manager 212 may provide the second media segment to the client device 104. As such, the client device 104 may receive each media segment upon request, which may help a user preserve data transfer limits associated with the client device 104.
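The request-as-you-watch flow described above might be sketched, for illustration only, as the following client-side loop; the transport details and the function names (request_segment, play_segment) are assumptions, not part of any described embodiment.

```python
# Illustrative sketch of on-demand segment retrieval: the client walks the
# segment links in the media presentation file and asks the distribution
# manager for each segment only when it is needed, which may help preserve
# the device's data transfer limits.
def play_presentation(presentation_file: dict, server) -> None:
    for link in presentation_file["segment_links"]:
        segment = server.request_segment(link)  # sent or streamed on demand
        play_segment(segment)                   # while this plays, the request for
                                                # the following segment can be issued

def play_segment(segment) -> None:
    """Render the image/video/audio segment on the client device (stub)."""
    ...
```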
As mentioned above, in some example embodiments, the media presentation system 102 may create a single file for a media presentation rather than logically linking together numerous media segments. In these embodiments, when a client device 104 requests the media presentation, the distribution manager 212 may access the media presentation in the media presentation database 214 and provide the media presentation, or a portion thereof, to the client device 104. Further, rather than sending the entire media presentation, the distribution manager 212 may stream relevant portions of the media presentation to the client device 104 upon the client device sending a request for the media presentation.
Additionally, or alternatively, upon the media presentation system 102 receiving a media segment from a client device that is to be added or appended to an existing media presentation, the media presentation system 102 may distribute the received media segment to other client devices that currently include the media presentation. Specifically, in one or more embodiments, after the media presentation system 102 identifies the media presentation to which the received media segment corresponds, the distribution manager 212 may identify the client devices (or users associated with the client devices) that have the existing media presentation and send the media segment to the identified client devices. In addition, as described above, the media presentation system may generate instructions to append a received media segment to an existing media presentation. In this case, the distribution manager 212 may send the instructions along with the newly received media segment to the identified client devices.
Further, the distribution manager 212 can distribute media segments and/or media presentations through a variety of distribution channels. For example, in addition to distributing media presentations to users of the media presentation system 102, in some example embodiments, the distribution manager 212 can distribute media presentations to another system, such as a social networking system, a messaging application, and/or other systems or applications. For instance, the distribution manager 212 may distribute a media presentation that a user creates through a social networking system to one or more of the user's social networking connections (e.g., directly or through a plug-in that integrates the media presentation system 102 in the social networking system). In some cases, the distribution manager 212 may post a media presentation on the newsfeeds of one or more social networking users connected to the user via the social networking system. In some example embodiments, the media presentation system 102 may allow other users to compose (e.g., create a media segment or media presentation) or reply (e.g., add a media segment) to a media presentation via the social networking system.
As shown in FIG. 2, the media presentation system 102 includes a media presentation database 214. The media presentation database 214 may store media segments and/or media presentations. The media presentation database 214 can also store metadata associated with media presentations, such as the number of users that have accessed or viewed each media presentation, the creator or contributors of each media presentation, date information associated with each media presentation, authorization information, user preference information, and any other information associated with media presentations.
In addition to the media presentation database 214, and as shown in FIG. 2, the media presentation system 102 can include the user profile database 216. The user profile database 216 may store user information corresponding to each user in the media presentation system 102. The user profile database 216 may include a user profile for each user of the media presentation system 102. A user profile may include, but is not limited to, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, and/or location information. As described above, user profile information may be linked to corresponding profile information for a user stored by a social networking system.
In addition to storing user information, the user profile database 216 may store user relationship information between users of the media presentation system 102. The user relationship information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, and/or are in any way related or share common attributes. The user relationship information may also include user-defined relationships between different users and content (e.g., user defined friends, groups, etc.).
Further, the user profile database 216 may store preference setting information associated with each user. For example, the media presentation system can allow a user to set default preferences (e.g., via a user preference setting interface). Example user preference settings can relate to user-defined default sharing preferences to apply to media presentations that a user creates. In one or more embodiments, for example, a user can define default preferences to apply to media presentations based on a characteristic of a co-user, such as age, or based on other characteristics or combinations of characteristics of a co-user and/or the media presentation.
In addition to the above, in one or more embodiments, a user can set user preferences on a user-by-user basis. For example, a user can individually select one or more users that are authorized to append media segments to media presentations created by the user. In addition, the user can select one or more users that can view, edit, remove, and/or forward media presentations the user creates. For instance, a supervisor user (e.g., a parent user) can set up or otherwise manage a junior user's (e.g., a child user's) preference settings such that the supervisor user can always view media presentations created by the junior user, as well as having authorization to edit, remove, or otherwise manage media presentations or media segments the junior user creates. In one or more embodiments, the supervisor user can lock the junior user's preference settings with respect to the supervisor user (e.g., with a password).
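A per-user preference record of the kind just described could, as a hypothetical illustration only, look like the following; the field names and the simple lock check are assumptions made for explanation.

```python
# Hypothetical per-user preference settings, including supervisor-managed
# settings for a junior user. Field names are illustrative only.
junior_user_preferences = {
    "user_id": "user-child-07",
    "default_sharing": {"min_viewer_age": 13},          # example default for new presentations
    "authorized_to_append": ["user-parent-01", "user-22"],
    "authorized_to_view": ["user-parent-01", "user-22", "user-31"],
    "managed_by": "user-parent-01",                     # supervisor (e.g., parent) user
    "settings_locked": True,                            # locked, e.g., behind the supervisor's password
}

def can_edit_preferences(requesting_user: str, prefs: dict) -> bool:
    """Only the supervisor may change locked settings for the junior user."""
    if prefs.get("settings_locked"):
        return requesting_user == prefs.get("managed_by")
    return True
```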
Returning to FIG. 2, the media presentation system 102 may communicate with any number of client device(s) 104. For purposes of explanation, only one client device 104 will be described, but it is understood that the principles described can be applied to a plurality of client devices associated with any number of users. As illustrated in FIG. 2, the client device 104 can include, but is not limited to, a user input detector 220, a user interface manager 222, a media segment creator 224, a media presentation manager 226, and a storage manager 228.
Each component of the client device 104 may be implemented using a computing device including at least one processor executing instructions that cause the client device 104 to perform the processes described herein. In one or more embodiments, the various components are implemented using one or more applications installed and running on the client device 104. In some embodiments, the components of the client device 104 can be implemented by a client device alone, or across multiple computing devices. Although a particular number of components are shown in FIG. 2, the client device 104 can include more components or can combine the components into fewer components (such as a single component), as may be desirable for a particular implementation.
The user input detector 220 can detect user interactions with a user interface to determine user input (e.g., detecting a touch gesture on a touch screen corresponding to an interactive element of the user interface). More specifically, the user input detector 220 can detect, identify, and/or receive user interactions and translate user interactions into a user input (e.g., a user command or request). As referred to herein, a “user interaction” means a single interaction, or combination of interactions, received from a user by way of one or more input devices. In some embodiments, the user input detector 220 can translate a combination of user interactions as a single user input and/or translate a single user interaction into multiple user inputs.
For example, the user input detector 220 can detect a user interaction from a keyboard, mouse, touch screen, or any other input device. In the event a touch screen is used as an input device, the user input detector 220 can detect one or more touch gestures (e.g., swipe gestures, tap gestures, pinch gestures, or reverse pinch gestures) that a user provides to the touch screen. In one or more embodiments, a user can provide one or more touch gestures in relation to and/or directed at one or more graphical objects, items, or elements of a user interface presented on a touch screen. The user input detector 220 may additionally, or alternatively, receive data representative of a user interaction. For example, the user input detector 220 can receive one or more user configurable parameters from a user, one or more user commands from a user, and/or any other suitable user input.
As mentioned above, the client device 104 can include a user interface manager 222. In one or more embodiments, the user interface manager 222 can utilize user input and/or other data received from a user (or a source simulating user input) to manage, control, and/or facilitate the use of a user interface. In general, the user interface manager 222 can facilitate the presentation (e.g., by way of a display screen associated with a client device 104) of a graphical user interface (or simply “user interface”) for purposes of allowing a user to access the features and benefits of the media presentation system 102. In particular, and in response to user input (e.g., detected by the user input detector 220), the user interface manager 222 can allow a user to control a user interface to view, navigate, browse, search, edit, contribute to, share, re-share, and/or otherwise experience media presentations. Further, the user interface manager 222 can display graphical elements that a user interacts with in navigating media presentations in a media presentation list as well as capturing a media segment to include in a media presentation.
To illustrate, the user interface manager 222 can provide a user interface that facilitates the display of one or more media presentations and/or graphical elements on the client device 104 (e.g., on a display screen). In one or more embodiments, the user interface manager 222 can present a user interface as a user navigates within a media presentation list. Further, the user interface manager 222 can change the display of the user interface as a user scrolls through a media presentation list, for example, by providing one or more swipe gestures to a touch screen as described above.
In one or more embodiments, the user interface manager 222 can display a thumbnail or preview of a media presentation to represent the media presentation. For example, the user interface manager 222 can display an image (e.g., a representative frame) from a media presentation, such as the first image of a media segment, to represent the media presentation. Alternatively, the user interface manager 222 may display the image of the last played frame or first unplayed frame to represent the media presentation.
In some example embodiments, a user can interact with a media presentation list by providing, via a touch screen displaying a user interface, one or more vertical swipe gestures directed toward the media presentation list, as will be further discussed below with respect to the figures below. In alternative embodiments, the user interface manager 222 can allow a user to navigate a media presentation list using other navigation techniques, such as flipping through media presentations (e.g., turning a graphical representation of a page with each page corresponding to a different media presentation).
In addition to allowing a user to manually navigate or browse a media presentation list, the user interface manager 222 can present a media presentation to the user. In particular, the user interface manager 222 can play a media presentation, including one or more media segments, on the display of the client device 104. As described below, the media presentation manager 226 may provide media segments and/or media presentations for the user interface manager 222 to display.
In one or more embodiments, the user interface manager 222 can cause the client device 104 to present a search interface that allows a user to search for specific media presentations based on a variety of user input that may relate to characteristics of media presentations. In one or more embodiments, for example, a user can search for media presentations based on the media presentation creator (e.g., a username, contact information, phone number, email address, or other identifier), media presentation content (e.g., users tagged in the presentation, topic of the presentation), title of a media presentation, date associated with a media presentation, and/or any other identifiable characteristic of a media presentation. For example, the user interface manager 222 can provide a query input field, a display of suggested search terms, and/or provide a modified feed of media presentations based on resultant media presentations identified in response to the user's search query.
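The search behavior just described might be sketched, purely as an illustration, as a simple filter over media presentation metadata; the matching rule and field names below are assumptions made for explanation only.

```python
# Illustrative sketch: filtering a media presentation list against a search query
# on creator, title, topic, date, or tagged users. Field names are assumptions.
def search_presentations(presentations: list, query: str) -> list:
    q = query.lower()
    results = []
    for p in presentations:
        haystack = [
            p.get("creator_username", ""),
            p.get("title", ""),
            p.get("topic", ""),
            p.get("date", ""),
            " ".join(p.get("tagged_users", [])),
        ]
        if any(q in field.lower() for field in haystack):
            results.append(p)
    return results   # used to build a modified feed of resultant media presentations
```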
In addition to allowing a user to browse, search, or otherwise navigate a plurality of media presentations within a media presentation list, in some example embodiments, the user interface manager 222 may present a notification to a user when a media presentation is posted or updated. The user interface manager 222 may present the notification as an alert, message, banner, icon, sound, etc. Further, the user interface manager 222 may only display a limited number of notifications as well as display notifications for only certain media presentations, such as for media presentations that the user is participating in, or from co-users whom the user is following, etc.
In some example embodiments, the user interface manager 222 may display a live image or video currently being captured by the client device 104. For example, when a user is capturing a media segment, the user interface manager 222 may display the media segment as the media segment is being captured. The user interface manager 222 may also display user interface elements on the display in connection with the user capturing a media segment.
As illustrated in FIG. 2 and as mentioned above, the client device 104 includes a media segment creator 224. In general, the media segment creator 224 assists a user in capturing or otherwise obtaining a media segment to add to a media presentation. For example, the media segment creator 224 can use the client device 104 to capture a media segment. Alternatively, the media segment creator 224 may assist a user in selecting a previously stored media segment.
To illustrate, in some example embodiments, the media segment creator 224 can use a camera and/or microphone (if present) on the client device 104 to capture an image, video, and/or audio. For instance, a user may capture a digital photo using the client device 104 or may record a video with audio using the client device 104. Further, the user can view or replay the captured media segment and choose to accept and share the media segment within a media presentation, or can delete or recapture the media segment.
Rather than capturing a new media segment, the user can use the media segment creator 224 to obtain an existing image, video, and/or audio segment to add to a media presentation. For example, the media segment creator 224 may allow a user to select a media segment stored on the user's client device. Additionally or alternatively, the media segment creator 224 may allow the user to select a media segment stored on a network device or online (e.g., a media segment accessible on cloud storage).
In some example embodiments, the media segment creator 224 can provide the ability for a user to edit and/or modify a captured video. For instance, the media segment creator 224 may allow a user to edit a captured image or video. Examples of editing include applying themes, coloring, modifying the runtime of a media segment, adding text or graphics, etc. To illustrate, the media segment creator 224 may facilitate one or more options for a user to add text to a media segment. As another illustration, the media segment creator 224 may allow a user to define the duration of a captured video, or extract a portion of a video to capture a media segment. Additional detail regarding editing a media segment or media presentation is described below in connection with FIG. 5.
Upon capturing a media segment, the media segment creator 224 may store the media segment on the client device 104 and/or on a network device or online (e.g., a media segment accessible on cloud storage). For example, the media segment creator 224 may store the media segment in the media presentation database 214, described below. Further, the media segment creator 224 may provide a copy of the media segment to the media presentation system 102, as described above.
As mentioned above, the client device 104 includes a media presentation manager 226. In general, the media presentation manager 226 organizes media presentations within the media presentation list as well as organizes media segments within each media presentation. Further, the presentation manager 226 facilitates the presentation of one or more media presentations to a user in response to user input. In addition, the presentation manager 226 assists a user in managing media presentations. For example, the media presentation manager 226 can add media segments to a media presentation. Further, the media presentation manager 226 can enable the user to set conditions, permissions, and/or restrictions on media presentations. Additional detail regarding the media presentation manager 226 will now be provided.
The presentation manager 226 may organize media presentations within a media presentation list based on information received from the media presentation system 102. For example, the presentation manager 226 may arrange media presentations in the media presentation list according to the recency of each media presentation, such as in a media presentation feed. To illustrate, when a new media presentation is shared with a user, or when a previously shared media presentation is updated, the presentation manager 226 may arrange the media presentation feed to display the new or updated media presentation before older media presentations. Alternatively, the presentation manager 226 may arrange or rank the media presentation list based on other criteria, such as media presentation title, creator, age of the media presentation, presentation length, contributors, number of contributors, indication as a favorite, etc. In some example embodiments, the presentation manager 226 can enable a user to define, through user preferences, how the presentation manager 226 organizes, arranges, and/or ranks the media presentation list.
In addition to organizing media presentations within a media presentation list, in some cases, the presentation manager 226 may organize media segments within a media presentation. For example, the presentation manager 226 may promote a popular media segment within a media presentation. The popularity can be based on popularity among a select group of users (e.g., friends of the user, co-users in the same state or country as the user) or globally among all users of the media presentation system 102. Further, the presentation manager 226 may promote or prioritize a media segment that is sponsored or that advertises a product or service. In some cases, the prioritizing of the advertisement media segment may be based on the user's (or users similar to the user) interest in the product or service, as gathered from user profile information.
Further, the presentation manager 226 can also reorder media segments within a media presentation based on user viewing behavior, as illustrated in the sketch below. For example, the presentation manager 226 may move a media segment in a media presentation to the end of the media presentation based on the number of times a user views the media segment. For instance, if a user views a media segment a threshold number of times, the presentation manager 226 may modify the media segment presentation order so that the media segment plays after other media segments in the media presentation that the user has viewed less. As another example, the presentation manager 226 may filter out a media segment from a media presentation that the user has viewed a given number of times. For instance, the presentation manager 226 may skip a frequently viewed media segment, or move the media segment to the end of the media presentation when the user has viewed the media segment a threshold number of times.
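By way of illustration only, the threshold-based reordering described above might look like the following; the threshold value and field names are hypothetical assumptions.

```python
# Illustrative sketch: demote (or skip) segments the user has already viewed a
# threshold number of times, so less-viewed segments play first. The threshold
# and field names are assumptions made for illustration only.
VIEW_THRESHOLD = 3  # hypothetical value

def order_segments(segments: list, view_counts: dict, skip_frequent: bool = False) -> list:
    fresh = [s for s in segments if view_counts.get(s["segment_id"], 0) < VIEW_THRESHOLD]
    frequent = [s for s in segments if view_counts.get(s["segment_id"], 0) >= VIEW_THRESHOLD]
    if skip_frequent:
        return fresh                      # filter out frequently viewed segments entirely
    return fresh + frequent               # otherwise move them to the end of the presentation
```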
Further, the presentation manager 226 may update a media presentation upon receiving modifications and/or changes from either a user or the media presentation system 102. For example, if a user deletes, edits, or adds a media segment to a media presentation, the presentation manager 226 may reflect the deletion, edit, or addition to the media presentation in the media presentation list. In addition, the media presentation manager 226 can send information or data regarding the modification of the media presentation list to the media presentation system 102, and the media presentation system 102 can distribute the modification to other users. Similarly, when the media presentation system 102 sends an updated media segment to the client device 104, the presentation manager 226 may incorporate the updated media segment into the corresponding media segment. Further, when the media presentation system 102 sends information to update or delete a media segment in a media presentation, the presentation manager 226 applies the modification to the media presentation.
While thepresentation manager226 generally provides a single media presentation list, in an alternate embodiment, thepresentation manager226 may provide numerous media presentation lists on theclient device104. For example, thepresentation manager226 may present a media presentation list of media presentations shared among friends and another media presentation list of media presentations shared among family members. To further illustrate, thepresentation manager226 may provide numerous media presentation lists that are arranged by category, theme, topic, creator, contributors, date created, etc.
In general, thepresentation manager226 only plays one media presentation at a time. For example, thepresentation manager226 may fully display one media presentation to the user at a time and thus plays the fully displayed media presentation. In some example embodiments, however, thepresentation manager226 may be able to display more than one media presentation to a user. In these embodiments, thepresentation manager226 may determine which media presentation(s) to play or allow the user to indicate which media presentation(s) to play.
Further, the presentation manager 226 may facilitate playing, pausing, skipping, and/or repeating media segments, or portions of media segments, within a particular media presentation in response to user input. Specifically, in response to the media presentation system 102 detecting touch gestures, the media presentation manager 226 can provide navigational features based on the detected touch gesture. For example, if a user provides a right-to-left swipe gesture, the media presentation manager 226 can repeat the previously played media segment. As another example, if a user provides a tap gesture, the media presentation manager 226 can skip the remaining portion of the media segment currently playing and begin playing the next media segment in the media presentation. Thus, if a media presentation includes a number of media segments, the media presentation manager 226 can allow a user to quickly navigate among the media segments.
In additional example embodiments, in response to a user providing a double tap gesture, the media presentation manager 226 can pause the media presentation. In yet another example, if a user provides a left-to-right swipe gesture, the media presentation manager 226 can provide an information page regarding the current media segment or media presentation (e.g., information associated with the user that contributed a particular media segment, details of the media segment such as date created, length, etc.). The above example gestures are provided as examples only, and one or more embodiments can include the same or additional gestures associated with the same or other functionality, as described herein.
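Purely for illustration, the gesture-to-action mapping described above could be organized as a small dispatch table; the gesture strings, class name (PresentationController), and return values below are hypothetical placeholders, not an actual API of the media presentation system 102:

    from typing import Callable, Dict

    class PresentationController:
        """Maps detected touch gestures to playback actions (illustrative only)."""

        def __init__(self) -> None:
            self._handlers: Dict[str, Callable[[], str]] = {
                "tap": self.skip_to_next_segment,
                "double_tap": self.pause,
                "swipe_right_to_left": self.repeat_previous_segment,
                "swipe_left_to_right": self.show_info_page,
            }

        def handle_gesture(self, gesture: str) -> str:
            handler = self._handlers.get(gesture)
            return handler() if handler else "ignored"

        def skip_to_next_segment(self) -> str:
            return "playing next media segment"

        def pause(self) -> str:
            return "media presentation paused"

        def repeat_previous_segment(self) -> str:
            return "replaying previous media segment"

        def show_info_page(self) -> str:
            return "showing media segment info page"

Keeping the mapping in a table makes it straightforward to add or swap gestures without changing the playback logic itself.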
When the presentation manager 226 pauses or moves away from a media presentation due to user navigational input, the presentation manager 226 may display an image of the last played frame or next unplayed frame of the media presentation to represent the media presentation. In this manner, the user can visually see the current position of a media presentation that is not actively playing. Further, when a user navigates back to the paused media presentation, displaying an image of the last played frame or next unplayed frame of the media presentation can help the user transition back into the presentation of the media presentation as the presentation manager 226 resumes play of the media presentation.
As mentioned above, the media presentation system 102 may provide search functionality that allows a user to search for or discover media presentations not currently shared with the user. For example, the presentation manager 226 may enable a user to discover popular, trending, or featured media presentations that users of the media presentation system 102 have made public. For instance, the presentation manager 226 may provide a "discover tab" that a user may select to view one or more popular, trending, or featured media presentations. In another instance, the presentation manager 226 may provide channels that allow users to discover different categories of media presentations, such as comedy, news, the arts, music, culture, etc. Further, the presentation manager 226 may allow a user to discover other media presentations by creator demographics (e.g., age, residency, nationality), topic, channel, category, recency, popularity (e.g., number of viewers), trends (e.g., within the last hour, 12 hours, 24 hours, or another specified amount of time), location, interests, etc.
In some instances, themedia presentation system102 may automatically generate discoverable media presentations. The user and/or themedia presentation system102 may specify an ephemeral or lasting discovery time period for media segments and/or media presentations, such as within the last hour, 12 hours, 24-hours, or another specified amount of time. In other instances, an administrator or editor (e.g., a human curator) associated with themedia presentation system102 or associated with a third-party (e.g., a social networking system or messaging application) may select one or more discoverable media segments and/or media presentations.
In addition to allowing users to search and discover other publicly available media presentations, in some example embodiments, thepresentation manager226 may enable a user to discover media presentations of other users of themedia presentation system102 that are in a specified proximity of the user. For example, if the user is at an event, thepresentation manager226 may allow the user to discover media presentations from other users at the event, as further described below. Further, thepresentation manager226 may enable a user to discover other users who are at, or have created a media presentation at a particular location, such as a park, school, point of interest, etc.
In one or more embodiments, the presentation manager 226 may provide a user with additional information about co-users of the media presentation system 102. For example, the presentation manager 226 may generate a list of co-users who have contributed to a media presentation, as further described below. Further, the presentation manager 226 may enable the user to view a list of other co-users within a media presentation. Upon the user selecting a co-user in a media presentation, the presentation manager 226 may present a user profile of the selected co-user to the user, as further described below.
The presentation manager 226 may also allow a user to tag or mark as a favorite media presentations or media segments within a media presentation, as further described below. For instance, the presentation manager 226 may compile a media presentation that includes media segments that the user has tagged as favorite media segments. The presentation manager 226 may enable the user to share the favorites media presentation with other users. Alternatively, the presentation manager 226 may not provide the user the option to share the favorites media presentation with other users.
The presentation manager 226 can also allow a user to "like" individual media segments within a media presentation as well as "like" the media presentation as a whole. Based on the preferences users indicate for one or more media segments within a media presentation, the creator of the media presentation can promote (e.g., move up) a well-liked media segment within the media presentation or remove altogether a media segment that receives few or no likes after a set number of views.
In addition to adding a “like” to a media segment or a media presentation, thepresentation manager226 may allow a user to preserve, or lock, a media presentation as it had been presented to the user. For example, if a user forwards a media presentation to a co-user to view the media presentation, by the time the co-user views the media presentation, the media presentation may have changed due to additional media segments being added to the media presentation. As such, in some example embodiments, the user may select particular media segments, or lock a media presentation as presented to the user, and share the selected or locked media presentation to co-users, or add the locked media presentation to their profile. In this manner, the user may provide the same media presentation to co-users without worry that the media presentation will change before the co-users have an opportunity to view the media presentation.
Along similar lines, the user may desire to view media segments within a media presentation from a set duration of time. As such, the presentation manager 226 may enable the user to filter which media segments the presentation manager 226 presents to the user. For example, the user may specify to view media segments from a media presentation that were added to the media presentation around a specific time period, such as media segments added in the last 24 hours, media segments added on January 24, or media segments added between 8:00 am on March 25 and 5:00 pm on March 27.
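A minimal sketch of such a time-window filter, assuming each media segment carries a timestamp recording when it was added, might look like the following; all names (MediaSegment, filter_segments_by_window, added_at) are illustrative assumptions:

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Optional

    @dataclass
    class MediaSegment:
        segment_id: str
        added_at: datetime

    def filter_segments_by_window(segments: List[MediaSegment],
                                  start: Optional[datetime] = None,
                                  end: Optional[datetime] = None,
                                  last: Optional[timedelta] = None) -> List[MediaSegment]:
        """Keep only segments added inside an explicit window, or within `last` of now."""
        if last is not None:
            start = datetime.now() - last
            end = None
        return [s for s in segments
                if (start is None or s.added_at >= start)
                and (end is None or s.added_at <= end)]

    # e.g., segments added in the last 24 hours:
    # recent = filter_segments_by_window(all_segments, last=timedelta(hours=24))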
In some example embodiments, themedia presentation manager226 may enable a user who creates or manages a media presentation to control the arrangement of media segments within the media presentation. For instance, themedia presentation manager226 may allow the user to indicate a specific presentation order for media segments in the media presentation.
In addition to a user indicating a presentation order, themedia presentation manager226 may allow a user to specify that a particular media segment be presented in a particular manner. For example, if the user includes an advertisement media segment in a media presentation, the user may specify that the advertisement media segment always be presented last in the media presentation, even when other co-users add media segments to the media presentation.
In some instances, themedia presentation manager226 may allow the user to specify if the media presentation should play media segments in a particular order, such as the media presentation playing media segments in the order in which the media segments were added, in reverse order, randomized. Additionally or alternatively, themedia presentation manager226 may enable a user who created the media presentation to instruct themedia presentation manager226 to always present specific media segments in the media presentation (e.g., the first media segment created by the user, a media segment that advertises a product, etc.), regardless of other rules and conditions imposed by the user.
Not only can a user that creates a media presentation provide preferences to themedia presentation manager226 for presenting a media presentation, a user who receives media presentations can specify viewing and presentation preferences. To illustrate, a user that receives one or more media presentations may specify to themedia presentation manager226 to play media presentations in a particular order (e.g., in the order in which the media segments were added, in reverse order, randomized). Further, themedia presentation manager226 may allow a user to specify that only media segments added in the last day, week, month, etc., be played when presenting the media presentation to the user. For instance, if a media presentation has fifty media segments, but only eight added in the last week, themedia presentation manager226 may allow the user to specify to display the last week of media segments when presenting the media presentation to the user. Similarly, themedia presentation manager226 may allow the user to request that themedia presentation manager226 only present media segments that meet a threshold number of likes from co-users (e.g., don't present any media segments that do not have over x likes after the first week and x+y likes after the second week).
In addition to enabling users to set presentation preferences, themedia presentation manager226 may provide a user with the ability to edit media presentations created or managed by the user. For example, themedia presentation manager226 may enable the user to remove one or more media segments from a media presentation. To illustrate, upon other co-users adding a media segment to a media presentation created by the user, themedia presentation manager226 can enable the user (i.e., the creator) to remove one of the media segments added by the other users. For example, the user may remove the media segment in response to viewing the segment and determining that the segment does not belong in the media presentation. Additionally or alternatively, the user may remove the segment in response to other users flagging the media segment for removal. For instance, one or more co-users may tag a media segment in the media presentation as inappropriate. In response, the user may review and remove the media segment using themedia presentation manager226.
Themedia presentation manager226 may also enable a user who created a media presentation to specify which co-users can view the media presentation as well as which co-users can contribute (e.g., add a media segment) to the media presentation. More specifically, themedia presentation manager226 can enable the user to grant viewing access of the media presentation to one or more co-users. The user may grant access by selecting co-users such as individual co-users, a defined group of users (e.g., “friends,” “family,” “co-workers,” “classmates,” etc.), or the public at large to view the media presentation that the user created.
In some example embodiments, the media presentation manager 226 may provide the user with a list of co-users with whom to share a media presentation based on the user's social networking contacts. For example, the media presentation manager 226 may interface with a social networking system to identify potential social networking users with whom the user can share a media presentation. In this manner, if a user views a media presentation that he or she likes, the user can easily share or re-share the media presentation with a preselected group of social networking contacts (e.g., friends, family, co-workers, classmates, etc.). Further, in some instances, the media presentation manager 226 can share the media presentation via the social networking system if the user has selected one or more social networking users with whom to share the user-created media presentation.
In addition, the media presentation manager 226 can recommend other users with whom to share the media presentation based on a number of factors. For example, the media presentation manager 226 may recommend that the user share a media presentation with co-users who are within a specified age range of the user. For instance, if the user is a teenager, the media presentation manager 226 may identify other teenagers and determine who among the identified teenagers are authorized to view the media presentation. In another embodiment, the media presentation manager 226 may identify the topic of the media presentation and recommend other users that share an interest in the same or similar topics. Further, the media presentation manager 226 may identify co-users who are proximate to the user.
Further, the user may specify that a media segment added to the media presentation satisfy a particular maturity rating level. For instance, based on the maturity rating level set by the user, themedia presentation manager226 may require that co-users post only media segments that are appropriate for all children 12 years of age and under. In another instance, the user may specify that co-users must be over 21 to view the media presentation and/or add a media segment to the media presentation. For instance, a co-user may need to authenticate (e.g., enter a password or otherwise verify themselves) before the co-user is shown the media presentation flagged with adult content.
Additionally, themedia presentation manager226 may allow a user who creates a media presentation to permit or authorize one or more co-users to edit media presentations created by the user. For example, if the user is an entity (e.g., a business, company, or corporation), the user may authorize one or more co-users to edit media presentations created by the entity, such as approving media segments that co-users want to add to the media presentation created by the entity or deleting media segments tagged as inappropriate. Additionally, themedia presentation manager226 can enable a user who has extended permission to one or more co-users to revoke permissions at any time, or to only grant authorization for a limited time (e.g., permission expires after one week, if not renewed).
In addition to allowing a user to grant co-user permission to manage media presentations the user creates, in some example embodiments, themedia presentation manager226 may provide analytics to a user regarding media segments in a media presentation. For example, themedia presentation manager226 may indicate to a user the number of co-users that have viewed a media presentation created by the user. As another example, themedia presentation manager226 may provide the user the number of times a media segment has been liked, skipped, hidden, muted, removed, and/or replayed. For instance, when a media segment has been hidden or removed a threshold number of times (e.g., over 20 times, or by 60% of co-users that have access to the media presentation, etc.), themedia presentation manager226 may suggest to the user to remove the media segment from the media presentation. Themedia presentation manager226 may provide additional analytics to the user to assist the user in improving a media presentation shared with other users.
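As a rough illustration of how such a removal suggestion might be computed, the following sketch combines an absolute count threshold with a ratio of co-users who hid or removed the segment; the function name and the example thresholds (20 hides/removals, 60%) mirror the figures mentioned above but are otherwise hypothetical:

    def should_suggest_removal(hide_count: int,
                               remove_count: int,
                               viewer_count: int,
                               absolute_threshold: int = 20,
                               ratio_threshold: float = 0.6) -> bool:
        """Suggest removing a media segment once it has been hidden or removed
        more than `absolute_threshold` times, or by more than `ratio_threshold`
        of the co-users that have access to the media presentation."""
        negative = hide_count + remove_count
        if negative > absolute_threshold:
            return True
        return viewer_count > 0 and (negative / viewer_count) > ratio_threshold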
In some example embodiments, the presentation manager 226 may allow a user to report a media segment as inappropriate, as further described below. For example, a co-user may report that a media segment includes inappropriate content, such as content suitable only for adults. When a user flags a media segment as inappropriate, the presentation manager 226 may skip the media segment for the user that reported the media segment as inappropriate as well as for other co-users who have access to the media segment.
Similarly, the presentation manager 226 may also hide media segments within a media presentation that do not meet a maturity setting specified by a user, as further described below. For example, a user may indicate that he or she does not want to view any content that contains mature material above a certain rating. As such, the presentation manager 226 may automatically skip media segments that are flagged by the creator of the media segment, or by other viewers of the media segment, as containing mature material. The presentation manager 226 may indicate to the user that a media segment has been skipped. Alternatively, the presentation manager 226 may remove the inappropriate media segment from the media presentation presented to the user having the maturity rating enabled.
Along the same lines, thepresentation manager226 may enable a viewing user to hide a media segment from a media presentation that the user does not want to view when re-watching a media presentation. Similarly, thepresentation manager226 may enable a user to hide, or remove a media presentation altogether from the user's media presentation list. For example, thepresentation manager226 may present a media presentation that the user is not interested in viewing. In response, the user can select an option to archive, hide, and/or delete the media presentation from the user's media presentation list. Thepresentation manager226 may then remove the media presentation from the user's media presentation list and not provide updates when other co-users add media segments to the removed media presentation.
FIG. 2 also illustrates a storage manager 228. The storage manager 228 may include media presentations 230 and user preferences 232. For example, the storage manager 228 may store media presentations shared with the user associated with the client device 104 as well as media presentations created by the user. The storage manager 228 may communicate with the media presentation system 102 to send media segments, media presentations, and/or user information between the client device 104 and the media presentation system 102. For instance, the storage manager 228 may receive one or more media segments from the media presentation database 214. Similarly, the storage manager 228 may send user preferences to the user profile database 216 on the media presentation system 102.
The media presentation system 102 may, in one or more embodiments, automatically create a media presentation using media segments previously captured by users of the media presentation system 102. For example, multiple users may capture media segments using their respective client devices. The media presentation system 102 may then identify that one or more of the captured media segments are related, and combine the related media segments into a media presentation.
To illustrate,FIG. 3 provides a sequence-flow diagram showing interactions between multiple client devices, such as thefirst client device104aand thesecond client device104b, and themedia presentation system102 in accordance with one or more embodiments. Thefirst client device104aand thesecond client device104bmay be example embodiments of thefirst client device104aand thesecond client device104bdescribed with regard toFIG. 1. Further, themedia presentation system102 may correspond to themedia presentation system102 described herein.
The sequence-flow diagram may provide one example of themedia presentation system102 automatically generating a media presentation using previously captured media segments. As shown instep310, the first client device may capture a first media segment. For example, a user using the first client device may capture a digital photo, record a video, capture an audio clip, etc. Upon capturing the first media segment, thefirst client device104amay store the first media segment. Thefirst client device104amay store a captured media segment locally, such as in internal memory, or store the captured media segment externally, such as on a memory card or stick. Additionally or alternatively, the first client device may store a media segment remotely, such as on a cloud-based storage system. In one or more embodiments, instead of capturing the first media segment, a user can locate a previously captured media segment to use as the first media segment. For example, thefirst client device104amay provide the user access to the first media segment, such as through a photo album or other application.
Thefirst client device104amay allow the user to associate information with the first media segment upon accessing the media segment. For example, thefirst client device104amay allow the user to tag (e.g., hashtag, location tag, labels, etc.) the first media segment to indicate people, a location, an event, a topic, or other information associated with the first media segment. Further, thefirst client device104amay allow the user to append annotations (e.g., description, comments, text, graphics, emojis, pictogram, etc.) to the first media segment.
In addition to a user associating information with the first media segment, thefirst client device104acan derive information from the first media segment itself (e.g., auto-tag the media segment). For example, thefirst client device104amay analyze the media segment to obtain additional information about the media segment, such as performing image recognition (described in additional detail below) to identify elements in the media segment, such as words, faces, objects, scenery, etc. After identifying elements in the first media segment, thefirst client device104amay associate the identified elements (e.g., auto-tag people, locations, objects, words, etc.) with the first media segment. Information about the first media segment may include data about the media segment itself, such as the media segment's media type (e.g., image, video, and audio recording), size, resolution, frame rate, megapixels, creation date, modification date, tags, metadata and/or other data about the media segment.
Additionally, or in the alternative, the first client device 104a can send the first media segment itself to the media presentation system 102, and the media presentation system 102 may analyze the first media segment to obtain information with respect to the first media segment. For example, the media presentation system 102 can analyze the content of the first media segment, analyze data file attributes, and/or analyze other identifiable data or characteristics of the first media segment. In addition, as with the first client device 104a, the media presentation system 102 can associate any obtained information with the first media segment for later use by the media presentation system 102, as further described below.
Regardless of the source of the media segment information, the media presentation system 102 can receive and/or maintain media segment information. For example, and as shown in FIG. 3, the first client device 104a may provide media segment information to the media presentation system 102. For instance, step 312 illustrates the first client device 104a sending information for the first media segment to the media presentation system 102. In one or more embodiments, only the first media segment information is sent to the media presentation system, while the first client device 104a maintains the media segment itself. Alternatively, the first client device 104a can send both the first media segment and the information associated with the first media segment (e.g., within metadata of the first media segment).
Just as the user of thefirst client device104acan capture a media segment, other users of themedia presentation system102 can also capture media segments. To illustrate, instep314, another user can use thesecond client device104bto capture a second media segment. In particular, a co-user using thesecond client device104bmay capture a second media segment. Thesecond client device104bmay store as well as provide the user with access to the second media segment, as described above. Further, thesecond client device104bmay similarly collect, obtain, and send information associated with the second media segment to themedia presentation system102, as shown instep316.
After collecting or otherwise obtaining information regarding various media segments from multiple client devices, themedia presentation system102 may compare the information associated with each of the various media segments to each other and identify potential commonalities between one or more media segments. To illustrate, step318 shows themedia presentation system102 identifying commonalities between the first media segment and the second media segment. In particular, themedia presentation system102 may identify commonalities by comparing the information associated with the first media segment to the information associated with the second media segment to determine whether the information associated with the first media segment matches information associated with the second media segment.
In one or more embodiments, the media presentation system 102 can compare a single type of information between two or more media segments. For example, the media presentation system 102 may identify that the first media segment and the second media segment were captured at the same location by comparing GPS information. As another example, the media presentation system 102 may identify whether one or more annotations or tags associated with the first media segment and the second media segment match (e.g., both media segments share the same hashtag and/or location tag). Additionally, the media presentation system 102 may compare people (e.g., by way of tagging or recognition) in the first media segment with people in the second media segment to determine if a particular person is present in both the first media segment and the second media segment.
In addition to comparing a single type of information associated with media segments, the media presentation system 102 can compare a combination of information types to determine a commonality between the first media segment and the second media segment. For example, the media presentation system 102 can compare both GPS information as well as timestamp information to determine the first and second media segments were captured near the same location and at approximately the same time. Thus, based on the GPS information and the timestamp information matching or otherwise indicating a commonality (e.g., GPS and timestamp information that does not exactly match, but is within some threshold distance and time period, respectively), the media presentation system 102 can determine that the first media segment and the second media segment are related based on the identified commonality with the combination of information types (e.g., GPS and timestamp information).
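One plausible way to implement the combined GPS-and-timestamp check, assuming each segment's information includes latitude, longitude, and a capture datetime, is sketched below; the field names, the haversine helper, and the 0.5 km / 120 minute thresholds are illustrative assumptions, not values prescribed by the system:

    import math

    def _haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two GPS points, in kilometers."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def captured_together(info_a: dict, info_b: dict,
                          max_km: float = 0.5,
                          max_minutes: float = 120.0) -> bool:
        """True if two segments were captured near the same place and time.

        Each info dict is assumed to hold "lat", "lon", and a datetime "captured_at".
        """
        distance = _haversine_km(info_a["lat"], info_a["lon"],
                                 info_b["lat"], info_b["lon"])
        gap = abs((info_a["captured_at"] - info_b["captured_at"]).total_seconds()) / 60.0
        return distance <= max_km and gap <= max_minutes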
Themedia presentation system102 may also determine whether the user and the co-user (e.g., the users associated with the first andsecond client devices104aand104b) are connected via themedia presentation system102 or are otherwise socially connected (e.g., connected via a social networking system). Further, themedia presentation system102 may perform additional comparisons between the first media segment and the second media segment and/or the user and co-user of the first and second client devices to identify associations and commonalities that indicate a potential relation between the first media segment and the second media segment.
After identifying one or more commonalities between two media segments, as shown in step 320, the media presentation system 102 may make a determination whether the first media segment and the second media segment are related based on the identified commonalities. For example, the media presentation system 102 may determine that two media segments are related only upon determining that a minimum number of commonalities exist. For instance, the media presentation system 102 may require the identification of three (3) or more commonalities between two media segments before determining that the media segments are related. To illustrate, if the media presentation system 102 only matches GPS coordinates and timestamp information between two media segments, the media presentation system 102 may need to identify an additional commonality before determining that the two media segments are related. In such a case, for example, the media presentation system 102 may also identify that the users who captured the media segments are socially connected (e.g., on a social networking system), or may need to match multiple tags (e.g., user-provided tags, hashtags, location tags, etc.) between the two media segments before determining that the media segments are related.
In addition to determining a relation between two media segments based on a number of identified commonalities, the media presentation system 102 can also determine a relation between media segments by weighting the significance of commonality factors. For example, the media presentation system 102 can weight identified commonalities within a calculation or algorithm to determine a relationship score. Based on the relationship score, the media presentation system 102 can determine whether the two media segments are related (e.g., when the relationship score exceeds a defined value). In some instances, the media presentation system 102 may apply different weighting to factors when determining whether two or more media segments are related. For example, the media presentation system 102 may decrease the significance of some identified commonalities while increasing the significance of other identified commonalities. To illustrate, the media presentation system 102 may place greater weight on user-provided people tags compared to people tags based on a facial recognition analysis.
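A minimal sketch of such weighted scoring follows; the commonality labels, the weights, and the example threshold of 3.0 are invented for illustration only and are not values defined by the system:

    # Illustrative weights: user-provided tags count more than inferred ones.
    COMMONALITY_WEIGHTS = {
        "same_location": 1.0,
        "same_time_window": 1.0,
        "shared_hashtag": 2.0,
        "user_tagged_same_person": 2.5,
        "recognized_same_person": 1.0,   # facial-recognition tags weighted lower
        "users_socially_connected": 1.5,
    }

    def relationship_score(commonalities: set) -> float:
        """Sum the weights of the commonalities identified between two segments."""
        return sum(COMMONALITY_WEIGHTS.get(c, 0.0) for c in commonalities)

    def segments_related(commonalities: set, threshold: float = 3.0) -> bool:
        """Treat two segments as related when their score meets the threshold."""
        return relationship_score(commonalities) >= threshold

For example, two segments sharing only a location and a time window would score 2.0 and fall below the illustrative threshold, whereas adding a matching user-provided hashtag would push the score to 4.0.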
Once themedia presentation system102 determines that two media segments are related, themedia presentation system102 may optionally prompt the user, via thefirst client device104a, to allow themedia presentation system102 to create a media presentation, as step322 illustrates. In addition, or in the alternative to prompting the user, themedia presentation system102 may prompt the co-user to allow themedia presentation system102 to automatically create a media presentation based on related media segments. As part of prompting a user for approval to generate a media presentation, themedia presentation system102 may provide a preview of the media presentation to the user. For example, themedia presentation system102 may provide one or more still frame images or portions of each media segment to be included in the media presentation to the user. If the user grants approval, the user, via thefirst client device104a, may send an indication of approval to themedia presentation system102 to create the media presentation, as shown instep324.
In step 326, the media presentation system 102 may create or generate the media presentation using the first media segment and the second media segment. In some cases, the media presentation system 102 may automatically generate a media presentation based on two or more media segments that the media presentation system 102 identifies as related without first seeking user approval. For example, once the media presentation system 102 determines that two or more media segments are related, the media presentation system 102 may generate a media presentation that includes the related media segments.
After creating a media presentation that includes one or more related media segments, the media presentation system 102 can send the media presentation to the client devices associated with each of the one or more related media segments included in the media presentation (e.g., to the user and the co-user associated with the first client device 104a and the second client device 104b). As shown in step 328, the media presentation system 102 may provide the media presentation to the user and co-user via the first client device 104a and the second client device 104b. The user and/or co-user may then decide to further share (e.g., re-share) the media presentation with other users, such as selected users or a preselected group of users (e.g., friends, family, neighbors, classmates, coworkers, etc.). For instance, if only the user receives the media presentation, the user may decide to share the media presentation with the co-user and/or other co-users connected to the user via the media presentation system 102. In one or more embodiments, the media presentation system 102 may automatically send the media presentation to one or more other users identified in the media segments (e.g., tagged users) and/or one or more other users connected to the user and/or co-user of the first client device 104a and the second client device 104b (e.g., per user settings and preferences).
The following detailed example illustrates how themedia presentation system102 may automatically generate a media presentation based on identified related media segments. Two friends, Ellie and Clara, are going to a party on a Friday night, where they will meet up with other friends. While getting ready for the party, during the party, and after the party, Ellie and Clara take a number of pictures and videos with their mobile phones. When capturing photos and videos, Ellie and Clara often tag each other and other friends in the photos and videos. Ellie and Clara also add a hashtag to the pictures and videos, such as #Ellie&Clara. Further, other friends of Ellie and Clara also take pictures and videos at the party.
Themedia presentation system102 may receive information corresponding to each picture and video that Ellie and Clara capture. Using the information, themedia presentation system102 may identify matches between many of them (match factors such as location, descriptions, tags, capture time, etc.) and determine that a number of them are related. Themedia presentation system102 may also determine that pictures and videos captured by other friends at the party are related to pictures and videos taken by Ellie and/or Clara. Using a group of related pictures and videos, themedia presentation system102 may automatically create a media presentation and send the created media presentation to Ellie and/or Clara. For instance, themedia presentation system102 may send a notification to Clara that a new media presentation is in her media presentation list. Themedia presentation system102 may ask or prompt Clara whether she would like to share the media presentation with Ellie and/or with other co-users, such as friends at the party.
Further, themedia presentation system102 may allow Clara to edit the media presentation before sharing it with other users. In some cases, themedia presentation system102 may give other co-users the ability to edit their own media segments included in the media presentation. As shown in this example, themedia presentation system102 may automatically generate a media presentation based on related content captured by multiple users.
As another example, Ellie and Clara may be participating in a video conversation with each other. Other friends may also join in the conversation. Themedia presentation system102 may identify various related video segments from the video conversation and automatically generate a media presentation. Ellie and/or Clara may then share the media presentation with others.
Although in some cases the users may be connected and know each other, in other example implementations, a media presentation can include media segments from users that are not previously connected. For example, the media presentation system 102 may identify a number of users, possibly unconnected users (e.g., users who are not connected within the media presentation system 102 or within a social networking system), that are at the same location or participating in similar activities. In response, the media presentation system 102 may generate a media presentation based on related media segments captured by these users. To illustrate, a group of users of the media presentation system 102 may be spending a Sunday afternoon at Dolores Park in San Francisco, Calif. Each user may capture one or more media segments. The media presentation system 102 may identify the captured media segments as being related and automatically generate a media presentation to share with the group of users at the park. One will appreciate that a variety of situations between users can cause the media presentation system 102 to automatically create media presentations based on related media segments.
FIG. 4 illustrates a flowchart of a method 400 for automatically creating a media presentation based on related media segments in accordance with one or more embodiments. The method 400 includes an act 402 of receiving information associated with a first media segment from a first client device. The method 400 also includes an act 404 of receiving information associated with a second media segment from a second client device. The method 400 further includes an act 406 of determining whether the first media segment and the second media segment are related based on comparing the information associated with the first media segment to the information associated with the second media segment. In addition, the method 400 includes an act 408 of receiving the first media segment from the first client device and the second media segment from the second client device. The method 400 also includes an act 410 of generating a media presentation that includes the first media segment and the second media segment. Further, the method 400 includes an act 412 of providing the media presentation to the first client device and the second client device.
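For illustration only, the acts of the method 400 could be strung together roughly as follows; the helper callables (fetch_segment, are_related, notify) and the dictionary-based presentation object are stand-ins for whatever storage and transport an actual implementation uses:

    def create_related_media_presentation(first_info, second_info,
                                          fetch_segment, are_related, notify):
        """Sketch of acts 402-412 of the method 400 (hypothetical helper functions)."""
        # Acts 402-404: segment information has already been received as
        # first_info and second_info from the two client devices.
        # Act 406: decide relatedness from the information alone.
        if not are_related(first_info, second_info):
            return None
        # Act 408: retrieve the actual media segments from each client device.
        segments = [fetch_segment(first_info), fetch_segment(second_info)]
        # Act 410: generate the media presentation from the related segments.
        presentation = {"title": "Auto-generated presentation", "segments": segments}
        # Act 412: provide the presentation back to both client devices.
        notify(presentation)
        return presentation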
With reference now toFIG. 5, and as briefly mentioned above, themedia presentation system102 may allow a user to edit a media segment (e.g., a captured media segment prior to inclusion in a media presentation, a user's media segment in a media presentation, or another user's media segment in a media presentation based on receiving authorization from the other user). To illustrate,FIG. 5 shows aclient device500 that can present a graphical user interface502 (or “GUI502”) by way of atouch screen504. In some instances, theGUI502 can be a graphical user interface for a mobile application. For example, theclient device500 can execute an application that facilitates interactions with themedia presentation system102 described in connection withFIG. 1.
Further, theclient device500 illustrated inFIG. 5 may be an example embodiment of thefirst client device104aor thesecond client device104bdescribed in connection withFIG. 1. For example, theuser110 may use theclient device500 to interact with themedia presentation system102 via a mobile media presentation or social networking application. Further, while the computing device ofFIG. 5 illustrates a mobile device, one will appreciate that a media presentation application may be executed on other types of computing devices, such as the computing and client devices described below in connection withFIGS. 24-25.
The GUI 502 may include one or more graphical user interface areas that display content to a user. As shown in FIG. 5, the GUI 502 may include a first graphical user interface area 506a (or "first area 506a"), a second graphical user interface area 506b (or "second area 506b"), and a third graphical user interface area 506c (or "third area 506c"). It should be noted that the client device 500 may include any number of graphical user interface areas. In addition, the first area 506a, the second area 506b, and the third area 506c may each move within the GUI 502 and, in some cases, move out of the GUI 502. For example, a user may scroll or navigate within the GUI 502 to view more of the third area 506c, which in turn moves the first area 506a beyond the visible area of the GUI 502. As another example, the client device 500 may only display one or two graphical user interface areas at one time.
As shown in FIG. 5, the client device 500 may enable the user to edit a media segment. In general, the client device 500 and/or the media presentation system 102 may provide one or more editing tools for a user to perform edits on a media segment. It should be noted that while FIG. 5 describes the client device 500 providing editing functionality to a user, the media presentation system 102, independent of or in connection with the client device 500, can also provide editing functionality to a user.
As illustrated inFIG. 5, theclient device500 may display, in thefirst area506a, a header that indicates to the user that theclient device500 is in editing mode. For example, as shown inFIG. 5, thefirst area506adisplays the text “Edit Media Segment.” In some embodiments, thefirst area506amay include various headers that provide a user with navigational functionality. For instance, thefirst area506amay indicate different menus, modes, settings, or screen views as described herein. Alternatively, in some embodiments, theclient device500 does not include a first graphical user interface area that provides a user with header information, rather the first graphical user interface area can include content that relates to media segments and media presentations.
As further illustrated inFIG. 5, thesecond area506bof theclient device500 displays amedia segment530. For example, themedia segment530 may be an image segment and/or a video segment. Below themedia segment530, the client device may displayediting options516 in thethird area506c. Theclient device500 may presentediting options516 to a user to assist the user in editing one or more media segments.
The editing options 516 may provide a user with a variety of options to edit a media segment. For example, as illustrated in FIG. 5, the editing options 516 may include options to add various editing effects such as text, stickers, sound/music, voice filters, and visual filters, which are discussed below. It should be appreciated that the editing options 516 may include other options, such as cropping, reducing redeye, resizing, rotating, trimming, retouching, etc. Further, each editing option 520-526 in FIG. 5 may include additional options that allow a user to refine the editing effect. Upon a user selecting an initial editing option, for example, the client device 500 may display additional editing options related to the initial selected editing option (e.g., in the third area 506c).
As further illustrated in FIG. 5, the client device can present various editing options 516 to the user. For example, the text option 520 may allow a user to add text to a media segment. To illustrate, a user may add text over a portion of a media segment, such as adding a title, a comment, or other words and/or textual graphics to a media segment. In addition, the text option 520 may allow a user to add a text slide to a portion of a media segment. For example, the text option 520 may allow a user to insert a text slide before, during, or after a media segment.
In addition to the text option 520, the sticker option 522 may allow a user to add one or more stickers to a media segment. Stickers include ideograms, smileys, pictographs, and other images. Stickers may also include frames or borders around a media segment, or other graphics that visually interact with content within a media segment. Further, stickers may include custom images and graphics created by users. In some instances, a user may purchase stickers from a store, library, or sticker repository. For example, a user may buy an individual sticker, or buy a package of stickers as an in-app purchase.
In one or more embodiments, when a user selects the sticker option 522, the client device 500 may provide a number of stickers from which to choose. After the user selects a sticker, the client device 500 may provide options to the user to adjust the size of the sticker, the position of the sticker on the media segment, etc. Further, the client device 500 may automatically detect a location for a sticker and/or change the position of a sticker within a media segment. For instance, if the sticker is a hat that is placed on a user's head, the client device 500 may reposition the hat to remain atop the user's head as the position of the user changes within a media segment (e.g., a video media segment).
In some example embodiments, the client device 500 may suggest stickers to add based on content within the media segment 530 (e.g., detected through image recognition) and/or based on metadata associated with the media segment (e.g., location, time, tags, etc.). For example, the client device 500 may detect that the media segment is related to a particular theme (e.g., ocean, mountains, party). To illustrate, the client device 500 may recognize (e.g., through image recognition) two people in the media segment (e.g., a couple), as well as recognize that the media segment shows the couple on their wedding day. Upon the user selecting the sticker option 522, the client device 500 may recommend a sticker of a heart-shaped frame to place around the couple.
In addition to the sticker option 522, the sound/music option 524 may allow a user to add audio to a media segment. When a user selects the sound/music option 524, the client device 500 may provide the user with a number of sounds or audio segments from which to choose. Further, the client device 500 may provide the user with an interface to select another audio segment, such as a song stored on the client device 500, dialog, or sound effects (e.g., background noise, traffic, nature sounds, etc.). Further, the client device 500 can allow the user to capture sound using the client device 500, such as enabling the user to add a voice-over to a media segment.
Thevoice filter option526 may provide the user with the ability to change audio attributes of the audio within a media segment. More specifically, thefilter option526 may allow a user to change the pitch, tone, speed, reverberation, flanging, etc. of audio within a media segment. For example, a user may apply a voice filter that causes a speaker in a media segment to sound like he or she has inhaled helium or has a sore throat. As another example, applying a voice filter may cause a voice audio to sound like a particular celebrity, a child, a robot, a musical instrument, a witch, a chipmunk, etc.
In addition to the voice filter option 526, the client device 500 can provide a visual filter option 528, as illustrated in FIG. 5. For example, the visual filter option 528 may include options to change the color, brightness, contrast, hue, shading, blur, lighting, texture, etc. of a media segment. For instance, upon the user selecting the visual filter option 528, the client device 500 may assist the user in applying a visual filter, such as changing the media segment, or a portion thereof, to black and white. In some example embodiments, the client device 500 may suggest visual filters to apply based on content within the media segment 530 (e.g., detected through image recognition) and/or based on metadata associated with the media segment (e.g., location, time, tags, etc.).
One will appreciate that the client device 500 may assist the user in applying a variety of visual filters, as well as other editing options, to a media segment.
With many of the editing options, the client device 500 can also allow the user to specify the duration of the editing effect, such as how long to display a text slide on the media segment. In addition, the client device 500 may allow a user to apply editing effects to multiple media segments. For instance, the client device 500 may allow a user to add a text overlay (e.g., annotations) or a song that spans multiple media segments. Further, the client device 500 may provide a user with a preview as the user tentatively selects different editing options. The preview of the edited media segment may appear in the second area 506b. For example, if the user adds a text overlay to the media segment 530, the text may appear on the media segment 530 in the second area 506b.
In addition to editing media segments, one or more embodiments of the media presentation system can perform an image recognition analysis to identify various content elements within a media segment in the process of creating a media presentation. For example, the client device 500 and/or the media presentation system 102 may perform image recognition to identify elements in a media segment, such as words, faces, objects, etc. To illustrate by way of example, FIG. 6 shows the client device 500 applying facial recognition to a media segment 608. While FIG. 6 describes the client device 500 performing facial recognition, one will appreciate that the client device 500 may recognize other objects, such as animals, landmarks, symbols, etc., as well as recognize characters and words using optical character recognition (OCR).
Further, while FIG. 6 illustrates the client device 500 detecting elements in a media segment, the media presentation system 102 may also detect elements in a media segment. To illustrate, the client device 500 and the media presentation system 102 may work in tandem to identify elements in a media segment. For example, the client device 500 may identify one or more faces in a media segment and send an image of the one or more faces to the media presentation system 102. The media presentation system 102 may then compare the one or more faces to a database of known faces and return the identity of the one or more faces to the client device 500. Alternatively, the client device 500 may send a media segment to the media presentation system 102 to have the media presentation system 102 perform element recognition on the media segment.
As shown inFIG. 6, a media segment may be presented on theclient device500. Theclient device500 displayed inFIG. 6 may be one embodiment of theclient device500 illustrated inFIG. 5. As such, theclient device500 may include aGUI502 that displays various views by way of thetouch screen504. Further, theGUI502 shown inFIG. 6 may include thefirst area506aand thesecond area506b, as described above.
The first graphical user interface area 506a may include a media presentation 608 that includes one or more media segments. For example, as shown in FIG. 6, one of the media segments in the media presentation 608 may include a video or picture of a director (Jake) talking to two actors (Sam and Tim) on a film set for a medieval-type movie. The client device 500 may use facial recognition to detect the faces of Sam 612a, Tim 612b, and Jake 612c in the media presentation 608.
Based on the people and/or objects recognized in a media presentation, theclient device500 may recommend, to a user operating theclient device500, co-users with whom the user should share the media presentation. For example, as shown inFIG. 6, theclient device500 may display one ormore sharing options616 in thesecond area506b. The sharingoptions616 may display a list of co-users corresponding to people recognized in themedia presentation608. For instance, theclient device500 may display sharingoptions616 that recommend the user share themedia presentation608 withSam622a,Tim622b, and/orJake622c.
If theclient device500 recognizes an object in themedia presentation608, themedia presentation system102 may, in some embodiments, recommend that the user share themedia presentation608 with a co-user or a group of co-users that associate with the object. For example, theclient device500 may recognize a football team's logo and/or a football player's jersey in a media presentation. In response, theclient device500 may recommend that a user share the media presentation with a co-user that associates with the football team, or a group dedicated to the football team.
In providing the recommendations with whom to share a media presentation, the client device 500 may limit the scope of which co-users to recommend based on a user's connections. For example, the client device 500 may limit recommendations to co-users and/or groups to whom the user is connected via the media presentation system 102 or otherwise socially connected (e.g., connected via a social networking system). Further, the client device 500 may recommend that the user share the media segment with users connected to one or more co-users identified in the media segment. Similarly, the client device 500 may also determine co-users with whom to share a media presentation based on the people and/or objects recognized in the media presentation. For example, if the client device 500 recognizes a celebrity, the client device 500 may recommend that the user share the media presentation with a group associated with the celebrity.
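By way of example only, restricting share recommendations to recognized people who are also among the user's connections might be sketched as follows; the function name and the sample data in the comments are hypothetical:

    from typing import Iterable, List, Set

    def recommend_share_targets(recognized_people: Iterable[str],
                                user_connections: Set[str]) -> List[str]:
        """Recommend co-users to share with: people recognized in the
        presentation who are also connected to the sharing user."""
        recommendations: List[str] = []
        for person in recognized_people:
            if person in user_connections and person not in recommendations:
                recommendations.append(person)
        return recommendations

    # e.g., recommend_share_targets(["Sam", "Tim", "Jake"], {"Sam", "Jake", "Ann"})
    # -> ["Sam", "Jake"]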
In addition to providing recommendations based on elements recognized in a media presentation, in some example embodiments, theclient device500 may use recognition of people and/or objects to organize a media presentation within a user's media presentation list. For example, if theclient device500 recognizes a sports star in a media presentation for which a user has indicated a preference (e.g., likes), theclient device500 may prioritize the media presentation that includes the sports star in the user's media presentation list. Similarly, theclient device500 may use recognized elements to organize a media segment within a media presentation. For instance, after recognizing the sports star in a media segment within a media presentation, themedia presentation system102 may prioritize the media segment within the media presentation, such as to the beginning of the media presentation.
In another example, theclient device500 may group similar media segments together in a media presentation based on commonly identified elements. For example, theclient device500 may identify a common object in multiple media segments within a media presentation, and group those media segments together in the media presentation. As another example, theclient device500 may group all media segments within a media presentation together that include the user of theclient device500.
In a similar manner, theclient device500 may recognize common elements from media segments across a user's media presentation list and generate a new media presentation to include the common element. For example, the user may have several media presentations in their media presentation list. Around half of the media presentations may have one or more media segments that include a friend of the user. Theclient device500 may recognize the friend in each of the media presentations and organize the media segments that include the friend into a new media presentation.
In addition to the above examples, a client device may use facial recognition to generate a media segment to add to a media presentation that lists the identified participants in a media presentation. To illustrate,FIG. 7 displays amedia presentation708 that includes at least (3) three media segments730a-c. Thefirst media segment730ashows two people talking (Joe and Jack), where one person is talking on his phone (Jack). Thesecond media segment730bshows a person (Bo) sitting in a chair. The third media segment703cshows a mother (Sarah) and a child (Sam) standing near each other.
A client device and/or themedia presentation system102 may recognize faces in each of the first three media segments. For instance, the client device and/or themedia presentation system102 may recognize the faces ofJoe712aandJack712bin the first media segment703a, the face ofBo722cin the second media segment703b, and the faces ofSarah722dandSam722einthird media segment730c.
Based on detecting the faces ofJoe712a,Jack712b,Bo722c,Sarah722d, andSam722ein themedia presentation708, the client device and/ormedia presentation system102 may generate a fourth media segment730dthat lists all the participates identified in themedia presentation708. For example, the fourth media segment730dmay show a listing displaying an image and name for each ofJoe712a,Jack712b,Bo722c,Sarah722d, andSam722e. In addition, the fourth media segment730dmay include participates who are tagged by a user or co-user as being a participant in themedia segment708.
The client device and/ormedia presentation system102 can append the fourth media segment730dto themedia presentation708. In this manner, the client device may display the fourth media segment730dshowing participants in themedia presentation708 at the end of themedia presentation708. Alternatively, a client device may display the list of participants separate from the first three media segments in themedia presentation708. For example, the client device may show the first three media segments in a first graphical user interface area while displaying the fourth media segment of participants in a second graphical user interface area.
The fourth media segment730dmay be interactive. In other words, a user may be able to select the picture or name of a participant in the fourth media segment730d. Upon selecting a participant's picture or name, the client device may replay a media segment that includes the participant. In some cases, the fourth media segment730dmay display, in addition to a participant's name or image, an option for the user to replay a segment(s) that includes the particular participant. For example, a user may select a replay option associated withBo722c, and in response, the client device may replay thesecond media segment730bthat includes Bo. Alternatively, upon a user selecting a participant's picture or name, the client device730dmay provide the user with a profile of a participant, described in additional detail below.
While the fourth media segment730dcan present one or more participants identified or tagged in a media presentation, the fourth media segment730dmay also omit one or more participants despite identifying the participants. Participants may be omitted for a number of reasons. For instance,Sam722emay be omitted because Sam's parent has set a user preference to exclude Sam from participant lists or because Sam is below a minimum required age. Similarly,Jack722bmay be omitted from the participant list because he has set his privacy settings that block him from being included in a participant list (or blocked in a participant list if the user is not one of his connections).Bo722cmay be omitted because he is not a user of themedia presentation system102. Alternatively, Bo may be listed, but selecting Bo's name or picture does not link to his user profile or other information about himself because he is not a user of themedia presentation system102.
As another example,Joe722amay not be listed in the participant list because the client device and/or media presentation system could not recognize Joe and because another user did not tag Joe. Alternatively, rather than not listing Joe, the client device could show Joe'sface712ain the participant's list. Then, if another user recognizes and tags Joe, the client device can list Joe's name by his picture.
In a similar manner to a client device displaying a participant list, the client device may additionally, or in the alternative, display a credits list appended to or in connection with a media presentation. To illustrate,FIG. 8 illustrates amedia presentation808, which includes at least (3) media segments830a-cand credits832 (shown as part of thefourth media segment830dinFIG. 8). In general, thecredits832 can display users who contributed a media segment to a media presentation. For instance, thecredits832 can display a user's name next to a corresponding picture displayed next to their name. Further, thecredits832 can display a thumbnail (e.g., an image or clip) of the media segment provided by the user.
A client device may display thecredits832 at the end of a media presentation. For example, as shown inFIG. 8, thecredits832 are included in afourth media segment830dappended to the end of themedia presentation808. After a client device plays through the first three media segments830a-c, the client device may present the user with thecredits832. The client device may pause on thecredits832 for a set amount of time (e.g., 5 seconds, 10 seconds, 20 seconds, 2 seconds per name displayed, 3 seconds per media segment included in the media presentation, etc.) or may display the credits until the user provides navigational input (e.g., replay media segment advance to the next media segment, or return to the media presentation list.)
Additionally or alternatively, the client device may provide the credits in a separate graphical user interface area from themedia presentation808. For example, the client device may present themedia presentation808 in a first graphical user interface area and display the credits in a second graphical user interface area. Thus, in some cases, the client device may provide a media segment and credits corresponding to the media presentation at the same time. Further, the client device may initially hide or truncate the credits, and later provide the credits to a user upon the user selecting a graphical option within the graphical user interface of the client device to display or expand the credits.
Additionally, thecredits832 may appear as an image segment or a video segment. For instance, thecredits832 may appear as image segment (e.g., a slide) that displays contributing users in the media presentation, as shown inFIG. 8. In some cases, the credits may include multiple images (e.g., appear as multiple slides), and/or be navigable (e.g., scrollable) by a user on a client device. Alternatively, thecredits832 may appear as a video segment that scrolls through a list of contributing users to a media presentation. Whether the credits appear as an image segment or a video segment may depend on a number of factors, such as the number of media segments in the media presentation or the number of contributing users. Further, the credit may appear as a combination of image and video segments, such as a video segment that presents a list of contributing users, followed by an image segment that provides the option to replay the media presentation or share the media presentation with other co-users.
As mentioned above, thecredits832 may include a list of users that contributed to a media segment. To illustrate,FIG. 8 showsBeth822a,Jake822b, andLisa822c. More specifically,FIG. 8 displays images and name ofBeth822a,Jake822b, andLisa822c. While not illustrated, thecredits832 may also display a thumbnail (e.g., an image or clip) of the media segment(s) contributed by each user. For example, thecredits832 may display an image extracted from the media segment, a portion of the media segment (e.g., 3-seconds from the media segment or a number of still images taken from the media segment), or the media segment itself within the credits next to the contributing users (e.g., a thumbnail-sized version of the media segment). Further, rather that displaying the image of each contributing user, thecredits832 may display the thumbnails of each media segment in a media presentation.
Thecredits832 may list users in an order that corresponds to the order that media segments were presented in the media presentation. For example, when a client device presents thefirst media segment830afirst (added byBeth722a), thesecond media segment830b(added byJake722b), and then thethird media segment830c(added byLisa722c), the credits would list the contributors ordered asBeth722a,Jake722b, andLisa722c. Ordering thecredits832 based on media segment order may assist a viewer watching a media presentation to connect which media segment was provided by which user.
In another example embodiment, thecredits832 may present the order of contributing users in another manner. For example, thecredits832 may list contributing users alphabetically by first name, last name, or username. As another example, thecredits832 may list users based on the number of media segments a user contributed to a media presentation. For example, the credits may listBeth722afirst because she contributed three media segment, whileJake722band Lisa772cprovided only one or two media segment to a media presentation. In another example, thecredits832 may listBeth722afirst because she was the initiator of the media presentation. One will appreciate that a number of methods, or combinations thereof, may be employed to order contributing users within thecredits832.
When ordering the credits based on when a media segment is presented in a media presentation, displaying a thumbnail of each media segment instead of a user's picture may more intuitive to a viewing user because the credits appear to be a story board of the media presentation that also provide information about each media segment's contributor (e.g., name and possibly picture). In some instances, however, a user could be listed multiple times if the user contributes several media segments to a media presentation. In these instances, thecredits832 may group media segment thumbnails together in order of first appearance, or in some other order, such as by the initiator of the media presentation and/or number of media segments provided by each user.
Further, thecredits832 may provide information and/or statistics for a media presentation. For instance, thecredits832 may provide the ability for a user viewing a media presentation to see the number oflikes824, views826, andshares828 for a media presentation. For example, as shown inFIG. 8, themedia presentation808 may have 15 likes, 20 views, and 3 shares. In addition, thecredits832 may provide information and/or statistics for individual media segment within a media presentation. As an example, upon auser selecting Lisa822c, thecredits832 may display the number of likes, views, and shares for the media segment provided by Lisa. As another example, thecredits832 may provide information and/or statistics next to each contributing user displayed in thecredits832.
In connection with displaying the number of likes, views, and shares for a media segment or the media presentation, thecredits832 may provide a user with the ability to like, share, comment on a media segment or the media presentation, mark a media segment as a favorite, report a media segment as inappropriate, leave a message to the co-user that provided a particular media segment, etc. For example, after watching a media presentation, thecredits832 may provide a user with the ability to indicate that he or she likes the media presentation as a whole and/or likes one or more particular media segments from within the media presentation. In addition, by selecting a co-user in thecredits832, a user can send the selected co-user a message.
As another example, a media presentation may include twelve (12) media segments and a user may desire to share four (4) of the twelve (12) media segments with a co-user. Using thecredits832, the user may select the four (4) media segments to share, and themedia presentation system102 can share the selected media segments as a new media presentation to other co-users. As a note, sharing selected media segments may be dependent on permissions set by the co-users who posted the selected media segments and/or their relation to the user.
In addition to displaying contributing users, thecredits832 may also provide options that allow a user to further interact with media segments within the media presentation. For example, thecredits832 may provide a user with the option to replay a particular segment from within the media presentation. To illustrate, thecredits832 may display a contributing user and/or a thumbnail of a media segment provided by the contributing user. Upon a user selecting the contributing user and/or thumbnail from thecredits832, the client device may replay the media segment. For instance, if a user taps and holds a thumbnail via the touch screen of a client device, the client device may replay the media segment as long as the user is holding the touch screen. In another instance, a client device may present the media segment to the user along with controls that allows the user to pause the media segment, replay the media segment, or return to thecredits832.
Further, in some example embodiments, thecredits832 may provide options that allow a user to interact with contributing users. To illustrate, thecredits832 may display the names and images of contributing users, such asBeth822a,Jake822b, andLisa822c, as shown inFIG. 8. Upon selecting one of the users (e.g., selecting their picture and/or name), thecredits832 may link to the co-users' profile. In other words, upon selecting a contributing user's name or picture, a client device may display a user's profile information. The client device may display the user's profile information in place of the credits, or in addition to the credits, such as in a separate graphical user interface area.
Whether the client device displays a user's profile information in place of or in addition to the credits may depend on the capabilities of the client device (e.g., screen size, processing power, network connection speed, etc.) and/or user preference. For example, if the client device is a monitor or television, the client device may display graphical user interface areas that include media segments from a media presentation, credits from the media presentation, as well as a user's profile of a contributing user, all within the display. Alternatively, if the client device is a smaller mobile display, the client device may display only one or two graphical user interface areas on the display.
In addition to displaying a list of contributing users, thecredits832 may include users that have been tagged or recognized in the media presentation (e.g., participants). For example, for each media segment, thecredits832 can include a listing of which users participated in the media segment. In some example embodiments, the credits may include a slide for each media segment where each slide includes information about the media segment, such as the user that provided the media segment, any users tagged/included in the media segment, and statistics for the media segment (e.g., likes, views, shares, etc.).
FIG. 9 illustrates a flowchart of amethod900 for generating a participant list in accordance with one or more embodiments. Themethod900 includes anact902 receiving a plurality of media segments belonging to a media presentation. Themethod900 also includes anact908 of detecting one or more faces within the plurality of media segments. Themethod900 further includes anact906 of matching the one or more detected faces within the plurality of media segments to one or more users. In addition, themethod900 includes anact908 of generating a participant list that identifies the one or more users that match the one or more detected faces. Themethod900 also includes anact910 of supplementing the media presentation to include the participant list. Further, themethod900 includes theact912 of providing the media presentation to a user.
As mentioned above, users of themedia presentation system102 can have user profiles with themedia presentation system102. To illustrate,FIGS. 10A-B show an example user profile displayed on aclient device500 in accordance with one or more embodiments (FIG. 10B is an extension ofFIG. 10A, such as if a user scrolls down). Theclient device500 displayed inFIGS. 10A-B may be one embodiment of theclient device500 illustrated inFIG. 5. As such, theclient device500 may include aGUI502 that displays various views by way of thetouch screen504. Further, the client device may include thefirst area506aand thesecond area506bdescribed above.
In particular,FIGS. 10A-B show the user profile of a user with the name Kate. As illustrated inFIG. 10A, theclient device500 may display, in afirst area506a, aheader area1012 that provides information about Kate. For example, theheader area1012 may display Kate'sname1014a(or a username, nickname, etc.) along with apicture1014bof Kate. Theheader area1012 may also include amessage area1014c, where Kate can leave messages for co-users viewing her user profile. Themessage area1014cmay display text, graphics (e.g., emojis), and/or website links provided by Kate. Themessage area1014cmay also include one or more media segments that Kate leaves for viewers of her profile page. Further, theheader area1012 may additionally include other information, such as contact information for Kate.
Thefirst area506amay also includesocial information1016 associated with the user. Examples ofsocial information1016 include the number ofposts1018a(e.g., media segments) Kate has provided, the number of Kate'sfollowers1018b, as well as the number of users that Kate is following1018c. Further, thesocial information1016 may include aselectable follow option1018dto follow or unfollow Kate. As shown inFIG. 10A, the user viewing Kate's profile is currently following Kate and selecting the selectable following option would cause the use to unfollow Kate. The social information1018 may also include a listing of other social networking systems to which Kate belongs.
In addition, thefirst area506amay include one or more other selectable graphical elements. For example,FIG. 10A illustrates amedia message option1020 that a user can select to leave Kate a message (e.g., in the form of an image media segment or video media segment). Upon selecting themedia message option1020, theclient device500 may allow a user to leave a media segment message for Kate. For example, theclient device500 may allow the user to capture a digital photo, record a video, and/or leave an audio message for Kate.
Depending on a user's (e.g., Kate) privacy settings and/or preferences, a co-user may not be able to leave a message for the user. For instance, a co-user that is not directly connected, or only remotely connected to Kate may not be able to select the media message option. Further, depending on the type of connection a co-user has with Kate, the co-user may be limited in the type of message (e.g., audio, picture, or video message), the length of the message, or the number of messages the co-user is able to leave.
As an example of co-users leaving messages for a user, when a user has a birthday, co-users can access the user's profile and leave a birthday message for the user. The birthday messages may be viewable to other users or may be limited such that only the user may view the message. Further, theclient device500 may prioritize birthday messages based on factors such as, when the message was left, connections between co-users that leave messages and the user, if the user has previously viewed the message, if the user has liked or favored a message, etc.
When a user leaves a message, theclient device500 may display the message in thesecond area506b(shown in in bothFIGS. 10A-B) as part of a media presentation. For example, when a user leaves a message (e.g., a message segments) for Kate, theclient device500 displays the message within themedia presentation1008. When Kate accesses her user profile, she may be able to view themedia presentation1008 of message left for her by co-users. For example, Kate may be able to view messages1030a-d(shown inFIG. 10B) left for her by other co-users. When multiple messages are available, theclient device500 may present the messages in single presentation (e.g., individually playing through each message). In an alternative embodiment, theclient device500 may display the multiple messages in a vertical or horizontal grouping. Further, theclient device500 may display thumbnails or previews of each message, and when a user selects a thumbnail, theclient device500 may expand and play the message.
While a co-user who leaves a message on a user's profile (e.g., Kate's profile) is generally able to view messages that they themselves leave for the user (e.g., to replay, recapture, accept, etc.), in some cases the co-user may be able to view messages that other co-users have left for the user (e.g., view other messages left for Kate). The co-user's ability to access messages that other co-users have left for another user may be dependent on a number of factors. For example, when another co-user leaves a message, the other co-user may indicate that other co-users are allowed to view the message (e.g., all users able to view Kate's user profile can view the message). In some cases, a co-user leaving a message for Kate may allow co-users to view the message depending on if the other co-users are directly connected to Kate (e.g., followers of Kate and/or people Kate is following). As another example, a message that a co-user leaves for Kate may not be accessible to other co-users unless Kate allows the other co-users to access the message. In some cases, both the creator of the message and Kate both need to approve granting co-users access to the message before other co-users are granted access to a message left by the co-user.
In some alternative embodiments, themedia presentation1008 may include media segments that the user (e.g., Kate) has posted on themedia presentation system102. For example, the media segments1030a-cmay include media segments that Kate added to one or more media presentations. In this manner, a co-user viewing Kate's user profile may be able to access and play one or more media segment1030a-cadded by Kate.
When aclient device500 provides a media presentation that includes media segments provided by a user (e.g., Kate), theclient device500 may limit a co-user's access to one or more of the media segments within the media presentation based on a number of factors. For example, theclient device500 may limit a co-user's access based on: whether the co-user is directly or tangentially connected to Kate; Kate's privacy settings; whether Kate has indicated that a particular media segment should or should not be accessible by other co-users; and/or whether the co-user is in a media segment included in the media presentation. To illustrate, Kate may allow “close friends” to view all media segments, allow family members to view all media segment that include any family members, and allow the public to only access a select number of media segments designated as public.
Further, when aclient device500 provides a media presentation that includes media segments provided by a user (e.g., Kate), theclient device500 may display the media segments in a consolidated manner (e.g., merge all the media segments into a seemingly single media presentation) or display each media segment independently. As described above, theclient device500 may present media segment within a user's profile in a vertical or horizontal layout or arrangement. Further, theclient device500 may display thumbnails of the media segment that expand and play upon selection (or not play depending on Kate's permissions and preferences).
Further, theclient device500 may organize media segments added by the user (e.g., Kate) in a number of ways. For example, theclient device500 may organize the media segments chronologically, by topic and tags, or by people within each media segment. To illustrate, for each co-user viewing Kate's profile, theclient device500 may intelligently sort media segments to first display media segments that include the particular co-user viewing Kate's profile, followed next by media segments that include friends of the particular co-user viewing Kate's profile, followed by media segments that include Kate, followed by other media segments provided by Kate. In this manner, theclient device500 can provide a unique experience to each co-user who views Kate's user profile.
Just as users of themedia presentation system102 can have user profiles, a company or business can similarly create a company profile. To illustrate,FIGS. 11A-B show an example company profile displayed on aclient device500 in accordance with one or more embodiments (FIG. 11B is an extension ofFIG. 11A, such as if a user scrolls down on the client device500). In particular,FIGS. 11A-B show a company profile for Solitude Camping. Theclient device500 displayed inFIGS. 11A-B may be one embodiment of theclient device500 illustrated inFIG. 5. As such, theclient device500 may include aGUI502 that displays various views by way of thetouch screen504. Further, the GUI may include thefirst area506aand thesecond area506b, described above.
As illustrated inFIG. 11A, theclient device500 may display, in thefirst area506a, aheader area1112 that provides information about Solitude Camping. For example, the header1114 may display apicture1114b, image, or logo of a company along the company'sname1114a(i.e., Solitude Camping). Theheader area1112 may also include a message left by the company, such as a brief statement introducing the company, the company's motto or mission statement, a message about recent events or promotions, etc. In addition, theheader area1112 may include other information, such as contact information for the company (e.g., address, website, phone number, email, etc.).
Theclient device500 may also displaysocial information1116 associated with the company in thefirst area506a.Social information1116 may include anoverall rating1118afor the company, as provided by users of the media presentation system102 (or obtained from multiple sources). AsFIG. 11A illustrates, Solitude Camping has a rating of 4.5 stars out of 5 stars, based on feedback from 500+ users.Social information1116 may also include an indication of a company's social footprint and presence. For instance, as shown inFIG. 11A, thesocial information1116 includes the total number ofusers1118bthat “like” or follow Solitude Camping.Social information1116 may also include other social information associated with a company, such as social networking systems to which the company belongs.
In addition to displayingsocial information1116, theclient device500 may allow a user visiting the company profile to provide social feedback for a company. For example, theclient device500 may enable a user to rate Solitude Camping. To illustrate, a user may provide his or her rating of Solitude Camping to the company profile by selecting a corresponding number of the stars that represents the user's opinion of Solitude Camping. As another example, a user may user theclient device500 to like or follow Solitude Camping, thus adding to Solitude Camping's social footprint and presence.
In addition to displaying aheader area1112 andsocial information1116, theclient device500 may display reviews, comments, and/or testimonials provided by users of themedia presentation system102 in thefirst area506a. For example, a user viewing a company's user profile may select themessage option1120 labeled “Add Review.” In response, theclient device500 may assist the user in capturing a media segment. In some cases, a user may leave a message about the company in general. In other cases, a user may leave a message regarding a product or service offered by the company.
In some example embodiments, before a user can leave a media segment review for a company, the user may need to provide additional input regarding the nature of the review. For instance, the user may need to specify whether the review is a general review, a review for a product, or a review for a service offered by the company. Further, when possible, the user may provide information to further help themedia presentation system102 categorize a media segment review. For example, the user may identify the exact product for which the user is providing a review.
As shown inFIGS. 11A-B, theclient device500 may display a plurality of media segment reviews1130a-c(or collectively “media segment reviews1130”) as part of the company profile. The media segment reviews1130 may be part of a media presentation, or may be independent of a media presentation. The media segment reviews1130 may be reviews left by users. Additionally, the media segment reviews1130 may include review directed towards products and/or services offered by Solitude Camping. For example, theclient device500 may display a media segment review for a3-person tent (e.g.,media segment1130a) or a lantern (media segment1130c).
Theclient device500 may provide the media segment reviews1130 in a variety of ways. For example, theclient device500 may provide individual messages organized in a horizontal and/or vertical manner, as shown inFIG. 11B. For instance, theclient device500 may display a group of messages, such as messages1130a-d, in a vertical layout. A user may scroll up and down in thesecond area506bto view different media segment reviews1130 left by users. Further, theclient device500 may provide thumbnails or previews of each media segment reviews1130, as described above in connection with a user's profile. Each thumbnail may show a representative image signaling to a user what a media segment review is about. When a user selects a thumbnail, theclient device500 may expand and play the select media segment review.
Theclient device500 may organize the media segment reviews1130 based on a number of factors. For example, theclient device500 may present the most recent media segment reviews1130 first. As an additional example, theclient device500 may organize the media segment reviews1130 based on category. For instance, theclient device500 may separate general reviews from product and service reviews. Theclient device500 may further organize the product and service reviews (e.g., in a hierarchy manner) for example, based on product department, product category, product type, and products themselves.
In some example embodiments, theclient device500 may not initially display the media segment reviews1130. Rather, theclient device500 may list various categories or groupings, which may be broad, narrow, or hierarchal. Upon selecting a listing, theclient device500 may display one or more media segment reviews1130 that are associated with the listing. For example, upon selecting a listing of “Tent Reviews,” theclient device500 may present a user with themedia segment review1130a.
A company's profile may be accessible to the public. Further, a company may have limited rights in restricting which users can access their profile, view media segment reviews1130, or leave the reviews on the company's profile. Accordingly, most users of themedia presentation system102 can access the company's profile, view media segment reviews1130 displayed on the company's profile, and even leave a media segment review. In this manner, company profiles may be different from a user profile in that a company has little control over user access while a user has much greater control over user access.
Themedia presentation system102 may keep or promote media segment reviews1130 that are found helpful, and remove media segment reviews1130 that are inappropriate and/or unhelpful. In some example embodiments, themedia presentation system102 may determine to remove a media segment review when a threshold negative feedback level is satisfied. For example, when a number of users indicate dissatisfaction with a media segment review, themedia presentation system102 may remove the review. Similarly, when a threshold number of users like or find a media segment review helpful, themedia presentation system102 may promote the media segment review, such as by prominently displaying the media segment review on the company's profile.
Just as users of themedia presentation system102 can endorse or report a media segment review, the company can also endorse or report a media segment review. For example, a company may endorse a user's media segment review, such as when a user leaves a positive or useful review. In addition, the company may report a media segment review as inappropriate, offensive, distasteful, and/or unhelpful. While a company may be able to endorse or report a media segment review, however, the company itself may be unable to edit or modify the media segment reviews1130. In some example embodiments, themedia presentation system102 may provide additional weight to a company's negative rating of a message (e.g., count as multiple votes) when determining whether to remove a media segment review as unhelpful or inappropriate.
Alternatively, in some embodiments, a company may be able to monitor and remove media segment reviews that the company finds inappropriate or unhelpful from the company's profile. Similarly, the company may promote media segment reviews on the company's profile that the company finds helpful to other users. In some cases, a company may not be able to promote or remove a media segment review until a condition has first been met, such as a period of time has passed since the media segment review was posted or a threshold number of ratings where left for a media segment review.
Besides displaying a media presentation or media segment reviews on a company's profile, theclient device500 may also display one or more additional media presentations and/or media segments on the company's profile. For example, theclient device400 may display a media presentation that includes promotional media segments provided by the company. As another example, themedia presentation system102 may identify media segments on themedia presentation system102 that identify the company (e.g., a user tags the company or themedia presentation system102 detects the company in the media segment). In response, themedia presentation system102 may add the media segment to a media presentation on the company's profile.
WhileFIGS. 11A-B illustrate a company profile shown on themedia presentation system102, in some example embodiments, themedia presentation system102 may provide one or more of the media presentations described above to a social networking system. For example, themedia presentation system102 may provide one or more media segment reviews to a social networking system for the social networking system to include on a social networking page associated with the company.
Additionally, in some embodiments, the company's profile in themedia presentation system102 and a social networking system may communicate with each other, such as to share company information with each other. For example, information about the company, as well as the company rating may be based in part from information and ratings listed on the company's social networking page. Further, in some example embodiments, the social networking system may allow customers of the company to leave review messages (e.g., review media segment) for a company via the social networking system.
As described in connection withFIG. 11 below, themedia presentation system102 may assist a user in automatically generating a media presentation. For instance, themedia presentation system102 may allow a user to generate a media presentation that includes media segments captured by multiple users at an event or common location. As an example, a user and other co-users may be at an event, such as a concert, wedding, school party, dinner event, etc., and the user may want to create a media presentation for the event and allow co-users at the event to contribute to the media presentation. The user may want to create a media presentation for the event, but the user may not be connected to other co-users at the event. Accordingly, themedia presentation system102 may allow the user in connecting with other co-users at the event, and thereby, create an event media presentation.
To illustrate,FIG. 12 displays a sequence-flow diagram showing interactions between a user, co-users, and themedia presentation system102 to generate an event media presentation. The sequence-flow diagram may provide one example of themedia presentation system102 assisting a user in generating a media presentation, as will be described below. Theuser110, co-user112, and themedia presentation system102 may examples of theuser110, co-users112, and themedia presentation system102 described above with regard toFIG. 1.
In step1210, theuser110 may send a request to themedia presentation system102 to initiate the creation of an event media presentation. For instance, theuser110 may use a client device to signal to themedia presentation system102 that theuser110 is at an event and that theuser110 would like to initiate a media presentation. In another instance, after recognizing that theuser110 is at an event, themedia presentation system102 may prompt theuser110 to initiate a media presentation withother co-users112 also at the event. For example, themedia presentation system102 may send a message to theuser110 saying, “I notice you are at a concert. Would you like to create a media presentation for this event?”
Alternatively, theuser110 may start a media presentation, associate the media presentation with an event, and allow others at the event to add to the media presentation. For example, theuser110 may be at a graduation ceremony and capture a media segment at a graduation ceremony as part of a media presentation. Theuser110 can tag the media presentation as belonging to the graduation ceremony. Further, theuser110 can set permission on the media presentation such that other users at the graduation ceremony are invited to add media segments to the media presentation, as described below.
In step1212, themedia presentation system102 may determine if other users (e.g., co-users112) are at the same event as the user. For example, themedia presentation system102 may identify users at the same location as the user. Additionally, themedia presentation system102 may communicate with the social networking system to determine that other users are also at the event.
In determining whetherco-users112 are at the same event as the user, themedia presentation system102 may determine if theuser110 is located at a specific venue, such as a concert hall, stadium, college campus, friend's house, outdoor event, office building, a park, auditorium, etc. (e.g., based on GPS information associated with the user's120 device). Based on this information, the media presentation can determine if one or more co-users112 are within a threshold proximity range of the user. The threshold proximity range may increase or decrease depending on the size of the venue. Alternately, or in the case that themedia presentation system102 cannot determine the event venue, themedia presentation system102 may determine ifco-users112 are within a default proximity range of a user110 (e.g., 20 feet, 50 feet, 100 feet, 50 yards, 100 yards, etc.)
In step1214, themedia presentation system102 may notifyco-users112 at the event that theuser110 has invited them to contribute to an event media presentation. For example, after determining which co-users112 are at the event and/or proximate to the user, themedia presentation system102 may send an invitation to the identifiedco-users112 at the event. The invitation may invite the identifiedco-user112 to view and contribute to an event presentation initiated by the user.
In some embodiments, theuser110 may specifyco-users112 to which themedia presentation system102 should send the invitation. For example, after themedia presentation system102 determines which co-users112 are at the same event as the user, themedia presentation system102 may present a list ofco-users112 to the user. Theuser110 may select one or more co-users112 from the list for themedia presentation system102 to invite. Alternatively or in addition, theuser110 may manually selectco-users112 to receive the invitation. For example, theuser110 may access alist indicating co-users112 who are attending the event (e.g., co-users112 who checked-in to the event or otherwise indicated their attendance, such as via a social networking system). As another example, theuser110 may select users from a contacts list.
In other embodiments, themedia presentation system102 may determine theco-users112 that should receive an invitation. For example, themedia presentation system102 may send the invitation to allco-users112 at the event, to a set number ofco-users112 closest to the user, to a random selection ofco-users112 at the event, to co-users112 that shares similar demographics as the user110 (e.g., similar ages), to co-users112 that are tangentially connected to the user110 (e.g., closeness to the user), to co-users112 that regularly capture media segments, tonotable co-users112, etc. For instance, themedia presentation system102 may send the invitation to users who are positioned at different locations of an event such that the event media presentation will include media segments captured from a host of different perspectives and angles. One will appreciate that themedia presentation system102 may determine other methods for selecting which users to which to send the invitation.
Instep1216, one ormore co-users112 who receive an invitation can capture a media segment at the event. For example, a co-user112 can use his or her client device to record a video clip or capture a digital photo. Upon capturing a media segment, theco-users112 may share/send (e.g., via their client devices) the media segments to themedia presentation system102, shown instep1218. For example, a client device associated with a co-user112 may send the media segment to themedia presentation system102 and themedia presentation system102 can store the media segment in a database. The client device may also send a file to themedia presentation system102 that includes metadata about the media segment. For instance, the file can indicate that the media segment is associated with the event media presentation initiated by the user.
In some embodiments, themedia presentation system102 may check the media segment to verify that it corresponds to the event. For example, themedia presentation system102 may verify that the time and/or location of the media segment correspond to the location and time of the event. In alternative embodiments, themedia presentation system102 may allowco-users112 to provide media segments regardless of when and where the media segments were captured. For example, auser110 at a friend's party may request thatco-users112 provide media segments taken at the party as well as any media segments that feature their friend, regardless of when and where the media segments where captured.
After receiving one or more media segments from theuser110 andco-users112 at the event, themedia presentation system102 can generate a media presentation for the event, asstep1220 illustrates. Themedia presentation system102 can generate the event media presentation by organizing and linking the received media segments from theuser110 andco-users112. As one example, themedia presentation system102 may organize and arrange the media segments in the event media presentation based on chronological order of when the media segments were captured. As another example, themedia presentation system102 may organize and arrange the media segments in the media presentation using another metric, such as the length of each media segment, the relationship between of the user and the co-user that captured a media segment, or even randomly.
In step1222, themedia presentation system102 may optionally allow theuser110 to approve/edit media segments submitted from the co-users112. For example, theuser110 may indicate (e.g., approve/disapprove) whether one or more media segments should be included in the media presentation. In some cases, themedia presentation system102 may automatically approve select media segments added to the media presentation, such as media segment from friends of theuser110 or users whom theuser110 has previously approved. Further, in allowing theuser110 to edit the media presentation, themedia presentation system102 may allow theuser110 to delete one or more media segments as well as reorder the media segments within the media presentation. Theuser110 may also edit the media presentation as described above.
Themedia presentation system102 may share the event media presentation with theuser110 andco-users112 at the event, as shown instep1224. Themedia presentation system102 may share the event media presentation withco-users112 who contributed to the media presentation as well as other co-user112 who are present at the event who have not yet contributed. Further, themedia presentation system102 may preserve the event media presentation such that the users and co-users can access the media presentation, even after the event is over.
In some instances, themedia presentation system102 may allow theuser110 and/orco-users112 to share the media presentation with other users of themedia presentation system102. For instance, theuser110 may share the media presentation with friends not at the event, even while other co-users at the event are still contributing to the event media presentation. In another instance, a co-user112 may post the event media presentation on a social networking system and allow his or her friends to view, comment, and contribute to the media presentation.
In some embodiments, the event media presentation may be a foundation media presentation that spawns into various different media presentation system. For example, auser110 shares the event media presentation with co-users that were not at the event after the event is over. When sharing the event media presentation, themedia presentation system102 may create a copy or version of the event media presentation to share between the user and co-users. Accordingly, the user and co-user can add media segments to and personalize the spawned version of the event media presentation. Other groups of users may user the event media presentation to spawn separate media presentations that include media segments that are also a part of the original event media presentation. In this manner, themedia presentation system102 may allow the event media presentation to serve a starting point for theuser110 andco-users112 to create their own media presentation among their own set of friends and connections.
In some additional or alternative embodiments, themedia presentation system102 may obtain media segments already captured for an event. In other words, after theuser110 extends an invitation, or requests to initiate an event media presentation, themedia presentation system102 may compile media segments already captured byco-users112 at the event. For example, if auser110 arrives at a concert after the opening act and requests to initiate a media presentation for the concert, themedia presentation system102 may provide theuser110 with media segments showing the opening act to the user. These media segments may be detected automatically by themedia presentation system102 or provided by co-users at the event using the systems, methods, and processes described above.
As briefly mentioned above, themedia presentation system102 may communicate with a social networking system or other system to collect information about an event and/or users at the event. For example, a social networking system may indicate to themedia presentation system102 where and when an event is occurring, the name of the event, participates who will be or are at the event, etc. In addition, themedia presentation system102 may obtain event and user information based on a user's or co-user's status on a social networking system or other status-broadcasting system (e.g., FACEBOOK, INSTAGRAM, etc.).
Further, when themedia presentation system102 communicates with an external system, such as a social networking system to determine when an event is occurring, as well as participants at the event, themedia presentation system102 may also provide the event media presentation to the social networking system. To illustrate, a user may set up a social networking system event for a family holiday barbeque. The user may allow themedia presentation system102 to access her information on the social networking system, including the event page. As such, themedia presentation system102 may identify the date, time, as well as attendees of the event. During the event, themedia presentation system102 may assist a user in creating an event media presentation. Themedia presentation system102 may share the event media presentation with the user and co-users at the family holiday barbeque. Themedia presentation system102 may also post a copy of the event media presentation on the event page in the social networking system.
In one or more embodiments, themedia presentation system102 may provide users with the ability to view a live action (or near live action) media segment of an event using their client device. For example, themedia presentation system102 may enable a user to stream a media segment of an event. Further, themedia presentation system102 may provide users with multiple live media segment streams of the event. The event may by a public event, such as a concert, sports event, parade, play, tour, festival, show, convention, expo, etc. Alternatively, the event may be a private event, such as dinner among friends, a party, a casual get-together, a business meeting, a reception, etc. While the event, in general, occurs at one location, in some embodiments, the event may occur across multiple locations (e.g., an opening night event for a movie occurring at multiple locations).
To illustrate, themedia presentation system102 can provide multiple live streaming media segments from an event, as shown inFIG. 13A. For example,FIG. 13A shows abaseball stadium1300 that is hosting a baseball game. One or more users of themedia presentation system102 may be attending the baseball game at thebaseball stadium1300. As shown inFIG. 13A, users may be situated at different locations throughout the stadium. Users may includeJake1312a,Sue1312b,Mark1312c,Lisa1312d, andRob1312e(collectively referred to as “users1312”).
Each of the users1312 may have access to a client device that is able to capture media segments. Further, one or more of the client devices may include the capability to automatically report a location of the client device to themedia presentation system102. For example, some of the client devices may use GPS and/or WI-FI to identify their location. Alternatively, a user may manually input his or her location. For instance, the user may identify on a stadium map where he or she is seated in the stadium. In another instance, the user can provide the venue and seat location to themedia presentation system102.
Regardless of how a client device determines its location, the client device may report its location to themedia presentation system102. For example, the users1312 may each report their location within thebaseball stadium1300 to themedia presentation system102. Based on the location of each user, themedia presentation system102 may create a location map or schematic that shows each user's location relative to thebaseball stadium1300 and each other. To illustrate, and as shown inFIG. 13,Jake1312ais located next to left field,Sue1312bis near third base.Mark1312cis behind home plate,Lisa1312dis along the first base line, andRobert1312dis located just beyond center field.
During the baseball game, one or more of the users1312 can provide live streaming media segments to themedia presentation system102, and themedia presentation system102 may provide the live streaming media segments to other users of themedia presentation system102, as will described below. For example, at different times throughout the baseball game,Mark1312cmay use his client device to capture a live streaming media segment from behind home plate. For example, when a big hitter is at bat, when there is a play at home plate, when there is a conference on the pitcher's mound, etc. Other users1312 may also capture live streaming media segments from their respective location within thebaseball stadium1300. Because the users1312 are spread out within the baseball stadium, when more than one user provides a live streaming media segment captured at the same time, themedia presentation system102 can provide different angles and perspectives of the baseball game to other users of themedia presentation system102.
In addition to the users1312 capturing and providing live streaming media segments to themedia presentation system102, themedia presentation system102 may also obtain live streaming media segments from a broadcaster, news crew, television network, professional photographer, or other entity filming/capturing the baseball game. As one example, a sports network may be filming the baseball game using a number of cameras, themedia presentation system102 may provide one or more of the camera feeds to users of themedia presentation system102 as live streaming media segments. As another example, most higher-level sports stadiums have a large video screen or “big screen” that shows live action and replays of a game. In this example, themedia presentation system102 may obtain the camera feed displayed on the video screen and provide it as a live streaming media segment.
A user at the baseball game may view one or more live streaming media segments provided by themedia presentation system102. In some embodiments, themedia presentation system102 may limit distribution of the live streaming media segments captured at the baseball game to only users at the game. For example, the live media segment thatMark1312ccaptures may only be viewable toJake1312a,Sue1312b,Lisa1312d, andRob1312e. Alternatively or additionally, themedia presentation system102 may provide the live streaming media segments to other users of themedia presentation system102 not present at the baseball game. In some cases, themedia presentation system102 may allow others access to the media segment only after the game finishes, or after a threshold period time has passed since a live streaming media segment was captured (e.g., after a two-hour delay).
In some example embodiments, depending on a user's client device capabilities, a user may be able to view multiple live streaming media segments at the same time. For example, a client device can provide a split screen that allows a user to view multiple media segments at the same time. Further, a user may have a client device that allows for displaying a media segment in a media segment (e.g., picture-in-picture or P-in-P). Using a split screen or picture-in-picture, a user may be able to use their client device to both watch a co-users live streaming media segment while also capturing a live media segment stream that is being shared with other users.
To further illustrate,FIG. 13B shows a livestreaming media presentation1308 that includes multiple live streaming media segments1330a-nof a baseball game captured by the users1312 at the baseball game. For example, the second livestreaming media segment1330b, captured byMark1312c, shows a live view of the game from behind home plate and the nth livestreaming media segment1330n, captured byLisa1312dshows a live view of the pitcher. As mentioned above, the livestreaming media presentation1308 may also include live streaming media segments provided by sources other than the users1312, such as a television network or a stadium camera feed. Accordingly, a user at the game can use his or her client device to access the livestreaming media presentation1308 and view live streaming media segments of the game that provide additional perspectives of the game to the user.
In some embodiments, the live streaming media presentation 1308 may include a media segment that shows a schematic or map of the event venue, as shown in the first media segment 1330a. As shown, the first media segment 1330a displays a map of the baseball stadium 1300. Specifically, the first media segment 1330a in FIG. 13B corresponds to the baseball stadium 1300 described with respect to FIG. 13A. Accordingly, the map in the first media segment 1330a may show the location of the users 1312 (e.g., Jake 1312a, Sue 1312b, Mark 1312c, Lisa 1312d, and Rob 1312e) with respect to their locations within the baseball stadium 1300.
The map in the first media segment 1330a may display the location of users currently capturing live streaming media segments of the baseball game. Within the media segment 1330a, the media presentation system 102 may show an icon, picture, or other indicator that represents each user at his or her respective location. Thus, as shown in FIG. 13B, each of Jake 1312a, Sue 1312b, Mark 1312c, Lisa 1312d, and Rob 1312e is capturing a live streaming media segment of the baseball game and providing the live streaming media segment to other users of the media presentation system 102. As users begin capturing the game, the media presentation system 102 may update the media segment 1330a to reflect the additional users. Similarly, as users that are currently capturing the game stop providing a live streaming media segment, the media presentation system 102 may remove them from the map in the first live streaming media segment 1330a.
In some example embodiments, the map in the first media segment 1330a may allow a user viewing the map to navigate to other live streaming media segments 1330a-n within the live streaming media presentation 1308. For example, a user at the game may view the first live streaming media segment 1330a and may select Mark's picture behind home plate. The user's client device may then display the second media segment 1330b (e.g., the media segment captured by Mark). Accordingly, if the user at the baseball game would like a different view of the game, the media presentation system 102 may allow the user to view the baseball game from other perspectives, as provided by other users of the media presentation system 102 also at the baseball game. Further, as mentioned above, in some example embodiments, the media presentation system 102 may enable users not at the baseball game to experience the game from the perspective of one or more co-users who are at the game.
In some example embodiments, when a user is viewing a live streaming media segment, such as thesecond media segment1330bbeing captured by Mark, the name and/or image of the user capturing the media segment may be shown within the media segment. For example, Mark's picture is shown in the bottom left corner of thesecond media segment1330binFIG. 13B. Alternatively or in addition, rather than displaying the user, the media segment may include a map of the baseball stadium along with an indication of the location where the media segment is being captured. For instance, thesecond media segment1330bmay display a map of the stadium with an indicator, such as a dot, behind home plate.
As mentioned above, in some example embodiments, the first livestreaming media segment1330amay display a map of the baseball stadium along with users currently capturing live media segment streams. In some example embodiments, thefirst media segment1330acan show reduced versions (e.g., thumbnails) of the live streaming media segments corresponding to the location of each user. For instance, the map may show a smaller version of thesecond media segment1330bon the map behind home plate. The map may also include a smaller version of thenth media segment1330nnear first base where Lisa is sitting.
The reduced or smaller version of a live streaming media segment may be a reduced-quality version of the live stream or one or more images showing the perspective from the particular location. Thus, rather than showing the users 1312 (or in addition to showing the users 1312) on the map, the first media segment 1330a shows the live streaming media segments overlaid on the map. Even in these embodiments, a user may still select one of the reduced-size live streaming media segments, and the user's client device will take the user to the corresponding full-size live streaming media segment.
In one or more embodiments, rather than the map in the first live streaming media segment 1330a displaying users who are actively capturing live streaming media segments, the map in the first media segment 1330a may display all users of the media presentation system 102 attending the baseball game, regardless of whether they are capturing a live media segment. When the map in the first media segment 1330a displays all users of the media presentation system 102 at the baseball game, the media presentation system 102 may allow a user to select a co-user and request that the co-user begin capturing the game. For example, Rob 1312e, who is seated near the outfield, may request that Lisa 1312d capture a live media segment of the pitcher. Upon Lisa 1312d capturing a media segment of the pitcher, the media presentation system 102 may provide Rob with the nth live streaming media segment 1330n. As another example, a user at a concert who is near the back of an event center may request that a co-user in the front row capture a media segment of the performer so that the user may better see the performer's face.
In some example embodiments, the media presentation system 102 may record the live streaming media presentation 1308 or store one or more live streaming media segments 1330a-n. The media presentation system 102 may allow a user to replay the media segments after the event is over. For example, the media presentation system 102 may allow a user attending the game to later watch a replay of the live streaming media presentation. Further, in some cases, the media presentation system 102 may allow a user not at the baseball game to access a portion of, or the entire, replay of the live streaming media presentation 1308.
When replaying the live streaming media presentation 1308 or multiple live streaming media segments 1330a-n, the media presentation system 102 may align the multiple media segments so that, even though not live, the user can watch the event from multiple perspectives, as if the user were watching the event live. For example, a user accessing multiple media segments taken at a concert may be able not only to re-watch a song, but also to seamlessly switch between the streaming media segments and view the performer from different vantage points as if he or she were watching the streaming media segments live. In addition, a user replaying the media presentation 1308 may be able to view more than one recorded media segment at the same time, as described above.
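As a rough illustration of the alignment described above, the sketch below maps an absolute moment in the event to playback offsets inside each recorded segment, so a client could switch perspectives without losing its place in the replay. The RecordedSegment structure and its field names are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RecordedSegment:
    capturer: str
    start: datetime          # wall-clock time capture began
    duration: timedelta

def offset_within(segment: RecordedSegment, wall_clock: datetime):
    """Translate an absolute event time into a playback offset inside one segment,
    or None if the segment was not recording at that moment."""
    offset = wall_clock - segment.start
    if timedelta(0) <= offset <= segment.duration:
        return offset
    return None

def available_perspectives(segments, wall_clock: datetime):
    """All capturers (and playback offsets) a viewer could switch to at a given moment."""
    perspectives = {}
    for segment in segments:
        offset = offset_within(segment, wall_clock)
        if offset is not None:
            perspectives[segment.capturer] = offset
    return perspectives
```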
FIG. 14 illustrates a flowchart of amethod1400 for providing, to a user, a live streaming media segment captured by another user in accordance with one or more embodiments. Themethod1400 includes anact1402 of providing, to a first client device, a listing of users at an event that are not capturing a live streaming media segment, wherein the listing of users provides a location of the one or more users relative to an event venue associated with the event. Themethod1400 also includes anact1404 of receiving, from the first client device, a request for a user on the listing of users to capture the event as a live streaming media segment. Themethod1400 further includes anact1406 of sending the request to a second client device associated with the user on the listing of users. In addition, themethod1400 includes anact1408 of receiving, from the second client device, a live streaming media segment of the event. Themethod1400 also includes anact1410 of providing, to the first client device, the live streaming media segment received from the second client device.
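The five acts of the method 1400 could be organized around a small server-side broker like the sketch below. The in-memory dictionaries stand in for the media presentation system's real storage and streaming infrastructure, and every name here is hypothetical.

```python
class LiveSegmentBroker:
    """Minimal in-memory sketch of acts 1402-1410 of the method 1400."""

    def __init__(self):
        self.locations = {}      # user_id -> seat/section at the event venue
        self.capturing = set()   # user_ids currently streaming
        self.requests = {}       # user_id -> list of requesting user_ids
        self.segments = {}       # user_id -> latest live stream handle/URL

    def list_idle_attendees(self):
        # Act 1402: attendees not capturing, with their venue locations.
        return {u: loc for u, loc in self.locations.items() if u not in self.capturing}

    def request_capture(self, requester_id, target_id):
        # Acts 1404/1406: receive the request and queue it for the target's device.
        self.requests.setdefault(target_id, []).append(requester_id)

    def publish_segment(self, user_id, stream_handle):
        # Act 1408: the second client device begins providing a live segment.
        self.capturing.add(user_id)
        self.segments[user_id] = stream_handle

    def segment_for(self, user_id):
        # Act 1410: provide the live segment back to the requesting device.
        return self.segments.get(user_id)
```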
As mentioned above, one or more users of the media presentation system 102 not at an event may be able to view a live streaming media presentation of the event captured by co-users or third parties (e.g., a television network) at the event. To further illustrate, FIGS. 15A-B show a media presentation list 1506 that includes one or more media presentations 1508a-c. As shown, the first media presentation 1508a includes media segments of landscapes, the second media presentation 1508b includes a live baseball game, and the third media presentation 1508c includes media segments of healthy food.
A client device may present the first media presentation 1508a and the third media presentation 1508c to a user of the client device as described above. For example, the client device may play through each media segment in the first media presentation 1508a and the third media presentation 1508c when the user gives focus and/or attention to them. In other words, when a user on a client device scrolls or navigates to the first media presentation 1508a, the client device may present the media segments of landscapes from the first media presentation 1508a.
The client device may present the second media presentation 1508b, however, differently from the first media presentation 1508a and the third media presentation 1508c because the second media presentation 1508b may be a live streaming media presentation. As described above, a live streaming media presentation may include one or more live streaming media segments of an event. Further, in some embodiments, a live streaming media presentation may also include media segments that are related to the event, as will be described below.
As shown in FIG. 15A, the second media presentation 1508b displays a notice or message announcing a live baseball game between the Cats and the Storm. When a user navigates to and views the second media presentation 1508b, for example on a client device, the client device may begin streaming live content. For example, the client device may begin playing a live streaming media segment corresponding to the baseball game. When multiple live media streams are available for an event, the user may switch between the various live streaming media segments within the live streaming media presentation 1508b, as described below. For instance, the client device may switch the display from the media presentation list 1506 to displaying one or more live streaming media segments 1530a-n from the second live streaming media presentation 1508b, as illustrated in FIG. 15A. Depending on the capabilities of the client device, the client device may be able to display more than one live streaming media segment from the second live streaming media presentation 1508b at the same time.
In some example embodiments, the client device can provide different presentation layouts that a user can select when viewing multiple live streaming media segments. For example, a user may select to have one media segment expand to fill the display of the client device. Alternatively, the user may have the client device stream all visible media segments at the same time. For example, if the client device has a large display screen, the client device may be able to stream a number of live streaming media segments at the same time.
In an alternative embodiment, the user may select an option to scale down each media segment such that all media segments are viewable at one time. Additionally, the user may select an option to view only a select number of media segments at one time. Further, when viewing multiple live streaming media segments at the same time, a client device may present a user with options to overlay one or more media segments on another media segment. For example, the user may select an option to overlay a reduced version of the first live streaming media segment 1530a over another live streaming media segment. Additionally, when viewing multiple live streaming media segments at the same time, the client device may allow the user to select an audio stream to which to listen. Often, however, a single audio stream will correspond to many, if not all, live streaming media segments (e.g., a radio broadcast of a baseball game).
As shown in FIG. 15A, the second live streaming media presentation 1508b for the baseball game may include a first live streaming media segment 1530a, a second live streaming media segment 1530b, and an nth live streaming media segment 1530n. Each live streaming media segment may represent a type of live streaming media segment that a client device may display to a user. For example, the first live streaming media segment 1530a may include information about the baseball game. The second live streaming media segment 1530b may include live action from the baseball game. The nth live streaming media segment 1530n may include highlights and replays from the baseball game.
The first livestreaming media segment1530amay be an event information media segment and may dynamically update to provide current game information to a user about the baseball game (e.g., a live “scoreboard”). As shown inFIG. 15A, the first livestreaming media segment1530ashows game information, such as the current score, the number of hits, the number of errors, as well as the current status of the baseball game. For other sporting events, the first livestreaming media segment1530amay display the time remaining, time-outs, current progress of the game, etc.
Further, depending on the type of event, the first live streaming media segment 1530a may display different types of information. For instance, in the case that the live event is a concert or performance, the first live streaming media segment 1530a may display the performer(s) currently on stage, the song or number being performed, current lyrics, who performed previously, who is scheduled to perform next, and/or information about the venue or the performers themselves. One will appreciate that an event information media segment may include a variety of information that a user may find beneficial.
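One way to model a dynamically updating event information media segment is as a small state object the client re-renders whenever the game information changes, as in the hedged sketch below; the field names and rendering format are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class BaseballScoreboard:
    """Sketch of a live 'scoreboard' event information media segment."""
    home: str
    away: str
    inning: int = 1
    top_of_inning: bool = True
    score: dict = field(default_factory=lambda: {"home": 0, "away": 0})
    hits: dict = field(default_factory=lambda: {"home": 0, "away": 0})
    errors: dict = field(default_factory=lambda: {"home": 0, "away": 0})

    def render(self) -> str:
        half = "Top" if self.top_of_inning else "Bottom"
        return (f"{self.away} {self.score['away']} at {self.home} {self.score['home']}"
                f" ({half} of inning {self.inning}),"
                f" H {self.hits['away']}-{self.hits['home']},"
                f" E {self.errors['away']}-{self.errors['home']}")

board = BaseballScoreboard(home="Cats", away="Storm")
board.score["home"] = 3   # the client re-renders the segment when the state changes
print(board.render())
```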
In some embodiments, the first live streaming media segment 1530a may expand to include additional event information media segments. For instance, upon a user selecting an option on a client device to expand the first live streaming media segment 1530a, the client device may present the user with a list of event information media segments 1532. For example, FIG. 15A shows the list of event information media segments 1532 including a first event information media segment 1534a, a second event information media segment 1534b, and a third event information media segment 1534c. The event information media segments 1534a-c may provide additional information regarding the baseball game. For example, the first event information media segment 1534a shows statistics for the Cats, such as a list of players and their statistics for the game, season, and/or careers. The second event information media segment 1534b may show similar statistics for the Storm. Further, the third event information media segment 1534c may display a game summary.
In some example embodiments, when a user is viewing one or more live streaming media segments 1530a-n, the media presentation system 102 may provide an information media segment as an overlay across, above, adjacent to, etc., the one or more media segments that includes information about the event. For example, a client device may display an event information media segment (e.g., the first live streaming media segment 1530a) adjacent to, or above, another live streaming media segment. The event information media segment may remain in place on the display of the client device even when a user switches the other live streaming media segments the user is viewing (e.g., other live streaming media segments scroll under the event information media segment). In addition, the client device may provide the user with options that allow the user to hide, resize, and/or reposition the first live streaming media segment 1530a. Further, the client device may provide the user with options to specify what information is included in the event information media segment, such as the teams playing in the baseball game, the score, and/or other game information, as described above.
Returning to the second live streaming media presentation 1508b, the second live streaming media segment 1530b may display a live stream of the baseball game. The second live streaming media presentation 1508b may include any number of live streaming media segments that display live streams of the baseball game. The different live streaming media segments may provide a user with different viewing experiences of the baseball game. As described above in connection with FIGS. 13A-B, co-users and/or third-party sources at the baseball game may capture live streaming media segments, and the media presentation system 102 may provide the live streaming media segments to the user to view.
The third live streaming media segment 1530n may be a highlight media segment and may display highlights of the baseball game. A user using a client device may select or focus on the third live streaming media segment 1530n, upon which the client device may present the user with one or more highlight media segments. For example, FIG. 15B, which is a continuation of FIG. 15A, shows a highlight media segment list 1536 that the media presentation system 102 may provide to a user upon the user selecting the third live streaming media segment 1530n.
The highlight media segment list 1536 may include one or more highlight media segments, such as a first highlight media segment 1538a, a second highlight media segment 1538b, and a third highlight media segment 1538c. Each highlight media segment 1538a-c may represent a highlight in the baseball game. For example, the first highlight media segment 1538a may highlight a player, Smith, hitting a home run in the fifth inning. Further, each highlight media segment 1538a-c may provide an indication of the content of the highlight. The indication may be an image, a video clip, text, and/or another type of indication.
When only one highlight media segment is available for a highlight, a client device may play the highlight media segment within the highlight media segment list 1536. Alternatively, when multiple highlight media segments are available for a highlight, a client device may provide a user with an additional list of media segments for the highlight. To illustrate, a client device may display an indication to the user that the second highlight media segment 1538b includes multiple highlight media segments for a double play by the Cats. Upon a user selecting (e.g., navigating to, giving focus, providing user input, etc.) the second highlight media segment 1538b, a client device may provide the user with a double play media segment list 1540.
As shown in FIG. 15B, the double play media segment list 1540 includes four double play media segments 1542a-d. Each double play media segment may show the double play highlight from a different user and/or a different angle or perspective. Accordingly, when a user selects the Cats' double play highlight (e.g., the second highlight media segment 1538b), a client device may allow the user to view and replay the double play from a number of angles and perspectives (e.g., captured from users located in various locations at the baseball stadium, as described above).
In some embodiments, the media presentation system 102 may intelligently identify one or more highlights. To illustrate, the media presentation system 102 may detect that a significant event (e.g., a significant act) has occurred at the event. For instance, the media presentation system 102 may detect that a change in the game status has occurred. If the event is a baseball game, the media presentation system 102 can identify when a home run, a double play, an error, or other major play occurs, for example, based on changes in the game information. If the event is a concert, the media presentation system 102 may detect when a particular song was performed. In some instances, the media presentation system 102 may use social media (e.g., hashtags, status updates, etc.) to determine at what time a significant event occurred.
In addition to determining the occurrence of a significant event, themedia presentation system102 may identify that one or more users or third-party sources at the event have captured the significant event. For example, after determining that a double play occurred at the baseball game between the Cats and the Storm (e.g., based on a change in the game information), themedia presentation system102 may identify that at least four users captured the double play as a media segment by matching a time period associated with the change in game information with a time period associated with the four media segments. In addition, themedia presentation system102 can analyze the media segments to further refine the portion of each media segment that corresponds with the significant event. Accordingly, themedia presentation system102 may add the four media segments to the double play media segment list1540 shown inFIG. 15B.
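A simple way to perform the time-period matching described above is to treat the detected change in game information as a window and keep any segment whose capture interval overlaps it. The sketch below assumes each segment carries start and end timestamps; the tolerance value is illustrative.

```python
from datetime import datetime, timedelta

def segments_covering(segments, act_time, tolerance=timedelta(seconds=30)):
    """Return segments whose capture window overlaps a detected significant act.

    segments: iterable of dicts with 'user', 'start', and 'end' datetime values.
    """
    window_start = act_time - tolerance
    window_end = act_time + tolerance
    return [s for s in segments
            if s["start"] <= window_end and s["end"] >= window_start]

# Example: a double play detected at 19:42 matches segments recording around that time.
double_play = datetime(2015, 6, 1, 19, 42)
clips = [
    {"user": "mark", "start": datetime(2015, 6, 1, 19, 41), "end": datetime(2015, 6, 1, 19, 44)},
    {"user": "lisa", "start": datetime(2015, 6, 1, 19, 10), "end": datetime(2015, 6, 1, 19, 12)},
]
print([c["user"] for c in segments_covering(clips, double_play)])  # ['mark']
```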
Similarly, the media presentation system 102 may identify that an above-average number of users are capturing a media segment at an event. For example, when a baseball player is up at bat for the last time in his career, or when a celebrity makes an unexpected appearance at an event, a larger-than-average number of users may capture the event. When the media presentation system 102 detects an increase in the number of media segments, the media presentation system 102 may store the media segments as a highlight. Further, when the media presentation system 102 identifies the reason for the above-average interest and number of captured media segments at an event, the media presentation system 102 can create a label/title for the highlight media segments. Additionally or alternatively, the media presentation system 102 may allow users to provide information about the highlight media segments, such as providing a title for the highlight media segments.
In one or more embodiments, the media presentation system 102 may remove or promote highlights within an event media presentation (e.g., the second live streaming media presentation 1508b). For example, the media presentation system 102 may promote or remove a highlight based on user feedback, such as user votes or ratings. If a highlight, or a highlight media segment within a highlight, receives a threshold amount of negative feedback, the media presentation system 102 may remove the highlight or highlight media segment. The threshold may be based on the proportion of negative feedback votes to the number of views and/or the proportion of negative feedback votes to the number of positive feedback votes. Further, the media presentation system 102 may not remove or promote a highlight unless the highlight has a minimum number of feedback votes and/or views. For instance, the media presentation system 102 may determine to remove a highlight that has been viewed over 100 times and has over 50 feedback votes because over half of the feedback is negative.
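The thresholds just described could be combined as in the following sketch, which mirrors the illustrative numbers in the text (100 views, 50 votes, a majority of negative feedback); the promotion rule is a symmetric assumption rather than something the text specifies.

```python
def highlight_action(views, positive_votes, negative_votes,
                     min_views=100, min_votes=50, negative_share=0.5):
    """Decide whether to keep, remove, or promote a highlight based on feedback."""
    total_votes = positive_votes + negative_votes
    if views < min_views or total_votes < min_votes:
        return "keep"                                 # not enough signal yet
    if negative_votes / total_votes > negative_share:
        return "remove"
    if positive_votes / total_votes > 1 - negative_share:
        return "promote"
    return "keep"

# Example matching the text: 120 views, 60 votes, more than half negative.
print(highlight_action(views=120, positive_votes=25, negative_votes=35))  # 'remove'
```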
FIG. 16 illustrates a flowchart of amethod1600 for providing a live streaming media segment to a user in accordance with one or more embodiments. Themethod1600 includes anact1602 of receiving, from one or more client devices associated with one or more users, a plurality of live streaming media segments corresponding to an event. Themethod1600 also includes anact1604 of providing, to a client device, a live streaming media presentation of the event, the live streaming media presentation providing access to the plurality of live streaming media segments corresponding to the event. Themethod1600 further includes anact1606 of receiving, from the client device, a request to access one or more of the live streaming media segments within the live streaming media presentation. In addition, themethod1600 includes anact1608 of providing, to the client device, the one or more requested live streaming media segments.
FIG. 17 illustrates a flowchart of amethod1700 for generating a significant act (“highlights”) media presentation in accordance with one or more embodiments. Themethod1700 includes anact1702 of detecting a significant act at an event. Themethod1700 also includes anact1704 of identifying one or more live streaming media segments captured by one or more client devices associated with one or more users of a media presentation system. Themethod1700 further includes anact1706 of receiving the identified one or more live streaming media segments from the one or more client devices. In addition, themethod1700 includes anact1708 of organizing the received live streaming media segments into a significant act media presentation. Themethod1700 also includes anact1710 of providing the significant act media presentation to a client device.
In some example embodiments, the media presentation system 102 may generate a "favorites" media presentation for a user. FIG. 18 illustrates an example embodiment where the media presentation system 102 generates a favorites media presentation. In particular, FIG. 18 illustrates Media Presentation A 1808a and Media Presentation B 1808b. As illustrated, Media Presentation A 1808a and Media Presentation B 1808b both include four (4) media segments. For example, Media Presentation A 1808a includes media segments 1830a-d that relate to landscapes and nature, and Media Presentation B 1808b includes media segments 1832a-d that relate to sports.
Within each media presentation, a user can mark or tag a media segment as a "favorite." For example, as illustrated in FIG. 18, the user may mark the first media segment 1830a and the fourth media segment 1830d from Media Presentation A 1808a as favorites. The user may also mark the third media segment 1832c and the fourth media segment 1832d from Media Presentation B 1808b as favorites. As shown in FIG. 18, media segments that are designated as favorites have stars in the top right corner of the media segment. Media segments that the user has not indicated as a favorite do not include stars. Alternatively, media segments that the user has not indicated as a favorite may display a hollow star in the top right corner. One will appreciate that a variety of methods may be employed for marking a media segment as a favorite and for indicating when a user has marked a media segment as a favorite. For instance, a client device may display a heart on media segments that a user has marked or tagged as a favorite.
The media presentation system 102 may automatically, or upon a user's request, create a favorites media presentation 1808c. The favorites media presentation 1808c may include one or more media segments that the user has marked as a favorite from across media presentations in a media presentation list. For instance, as shown in FIG. 18, Media Presentation C 1808c shows a media presentation that includes user-tagged media segments from Media Presentation A 1808a and Media Presentation B 1808b. Specifically, FIG. 18 illustrates that Media Presentation C 1808c includes the first media segment 1830a and the fourth media segment 1830d from Media Presentation A 1808a. Media Presentation C 1808c also includes the third media segment 1832c and the fourth media segment 1832d from Media Presentation B 1808b. Further, Media Presentation C 1808c may include other media segments that a user has marked as a favorite.
Themedia presentation system102 may arrange the media segments inMedia Presentation C1808cbased on a number of factors, such as by age, when the media segment was marked as a favorite, user preference, which user provided the media segment, general popularity, etc. In additional or alternative embodiments, themedia presentation system102 may allow a user to manually order or re-order media segments in the favorites media presentation.
Further, the media presentation system 102 may limit the number of favorites within the favorites media presentation or create multiple favorites media presentations based on different criteria. For example, the media presentation system 102 may only include favorite media segments that are less than a threshold age (e.g., added in the last day or week) or that were marked as a favorite within a threshold amount of time. In one or more additional embodiments, a favorites media presentation may include only a limited number of media segments, such as ten (10) favorite media segments. In another favorites media presentation, the media presentation system 102 may include all media segments marked as a favorite by the user regardless of date added or the number of other media segments in the presentation. In this case (or other cases), the media presentation system 102 may allow a user to unmark a media segment as a favorite within the favorites media presentation when the user no longer wants to include the media segment in the favorites media presentation.
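As a concrete, hedged example of these limits, the sketch below assembles a favorites media presentation from segments marked as favorites, keeping only recently favorited segments and capping the result at ten; the field names and default values are assumptions.

```python
from datetime import datetime, timedelta

def build_favorites(segments, now=None, max_age=timedelta(days=7), max_count=10):
    """Assemble a favorites media presentation from a user's media presentation list.

    segments: iterable of dicts with 'id', 'favorite' (bool), and 'favorited_at' (datetime).
    """
    now = now or datetime.utcnow()
    recent = [s for s in segments
              if s["favorite"] and s.get("favorited_at") is not None
              and now - s["favorited_at"] <= max_age]
    recent.sort(key=lambda s: s["favorited_at"], reverse=True)  # newest favorites first
    return recent[:max_count]
```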
After the media presentation system 102 creates one or more favorites media presentations, the media presentation system 102 may include the favorites media presentations in a user's media presentation list 1806 along with other media presentations shared with the user. To illustrate, before the media presentation system 102 creates Media Presentation C 1808c, a client device may only display Media Presentation A 1808a and Media Presentation B 1808b in the media presentation list 1806a. Upon creating a favorites media presentation, the client device may expand the media presentation list 1806a to further include Media Presentation C 1808c as part of an expanded media presentation list 1806a-b.
In a similar manner, in one or more embodiments, themedia presentation system102 may automatically create other media presentations. As further described below, themedia presentation system102 may create a topical or categorical media presentation, an unplayed media presentation (e.g., unviewed media segments), and/or a popular, featured, or trending (e.g., within the last hour, 12 hours, 24-hours, or another specified amount of time) media presentation. Themedia presentation system102 may generate each of these media presentations automatically, and/or upon a user's request.
For example, and as shown in FIG. 19, the media presentation system 102 may generate a media presentation based on a topic or category selected by a user. For example, the user may request to view a media presentation that includes media segments of animals. The media presentation system 102 may search for media segments, within one or more of the user's media presentations or across the media presentation system 102 as a whole, that include animals. For instance, the media presentation system 102 may identify media segments that include #ZooAnimals.
To illustrate,FIG. 19 illustrates amedia presentation1908 that includes media segments1930a-cof zoo animals. For example, themedia presentation system102 may search each media segment within the user's media presentation list to identify media segments that have been tagged as having the words “zoo animal” or being associated with a zoo animal. Additionally, themedia presentation system102 may search media segments within the media presentation list of co-users connected to the user to identify media segments that are associated with zoo animals. Further, in some cases, themedia presentation system102 may search media segments within themedia presentation system102 for media segments that are associated with zoo animals.
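The tag-based search could look something like the sketch below, which scans a pool of segments for a topic such as #ZooAnimals; the segment dictionary layout and the matching rules are assumptions made for illustration.

```python
def topical_presentation(segments, topic_terms):
    """Collect segments whose tags or description match any of the requested terms.

    segments: iterable of dicts with optional 'tags' (list of str) and 'description' (str).
    topic_terms: e.g. {"#zooanimals", "zoo animal"}.
    """
    wanted = {t.lower() for t in topic_terms}
    matches = []
    for segment in segments:
        tags = {t.lower() for t in segment.get("tags", [])}
        description = segment.get("description", "").lower()
        if wanted & tags or any(term in description for term in wanted):
            matches.append(segment)
    return matches

zoo = topical_presentation(
    [{"id": 1, "tags": ["#ZooAnimals"]}, {"id": 2, "description": "sunset over the lake"}],
    {"#zooanimals", "zoo animal"})
print([s["id"] for s in zoo])  # [1]
```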
In addition, the media presentation system 102 may generate a media presentation of unplayed media segments from across a user's media presentation list. For example, the media presentation system 102 may compile an unplayed media presentation that includes currently unplayed media segments. The media presentation system 102 may include the unplayed media presentation in a user's media presentation list. The media presentation system 102 may also prioritize the unplayed media presentation in the user's media presentation list so that the user first sees the unplayed media presentation upon returning to and viewing his or her media presentation list.
In some instances, the media presentation system 102 may limit media segments in the unplayed media presentation to media segments added since the user last logged in or accessed the user's media presentation list, to unplayed media segments within a time period (e.g., the previous 24 hours, week, etc.), or may include all unplayed media segments in a user's media presentation list. Further, the media presentation system 102 may allow a user to configure the parameters that determine which unplayed media segments are included in the unplayed media presentation.
Further, the unplayed media presentation may provide a user with contextual navigation when playing an unplayed media segment. For instance, themedia presentation system102 may allow a user to navigate to the media presentation to which the unplayed media segment belongs, so that the user can view the unplayed media segment in the proper context. Themedia presentation system102 may also provide the user the ability to return to the unplayed media presentation and resume watching other unplayed media segments.
After the user watches an unplayed media segment, whether in the unplayed media presentation or in the media presentation to which the unplayed media segment belongs, the media presentation system 102 may remove the media segment from the unplayed media presentation. Alternatively, the media presentation system 102 may leave the unplayed media segment in the unplayed media presentation for a threshold period of time, such as 5 minutes, 30 minutes, an hour, 12 hours, a day, etc. In some cases, the media presentation system 102 may leave the unplayed media segment in the unplayed media presentation until the next time the user leaves and again accesses an application that facilitates interactions with the media presentation system 102.
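A minimal sketch of the unplayed media presentation's bookkeeping, under the assumption that the system tracks when a segment was played and expires it after a grace period, might look like this:

```python
from datetime import datetime, timedelta

class UnplayedPresentation:
    """Track unplayed segments; keep a played segment visible only for a grace period."""

    def __init__(self, grace=timedelta(minutes=30)):
        self.grace = grace
        self.segments = {}   # segment_id -> None (unplayed) or datetime when played

    def add(self, segment_id):
        self.segments.setdefault(segment_id, None)

    def mark_played(self, segment_id, when=None):
        if segment_id in self.segments:
            self.segments[segment_id] = when or datetime.utcnow()

    def current(self, now=None):
        """Segments still shown in the unplayed media presentation."""
        now = now or datetime.utcnow()
        return [sid for sid, played_at in self.segments.items()
                if played_at is None or now - played_at < self.grace]
```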
In an additional embodiment, the media presentation system 102 may automatically generate a media presentation of media segments that are popular or trending across the media presentation system 102. In one example, the media presentation system 102 may create a popular media presentation using media segments within a user's media presentation list. In another example, the media presentation system 102 creates the popular media presentation based on media segments that are popular or trending based on the interests of the user, what co-users associated with the user are viewing/liking, and/or media segments that are trending across the media presentation system 102.
As briefly discussed above, in one or more embodiments, themedia presentation system102 may provide one or more options to a user to restrict one or more media segments or media presentations from the user's media presentation list. To further illustrate,FIG. 20 shows an example embodiment of themedia presentation system102 providing restriction options to a user within amedia presentation2008 on aclient device500. Theclient device500 displayed inFIG. 20 may be one embodiment of theclient device500 illustrated inFIG. 5. As such, theclient device500 may include aGUI502 that displays various views by way of thetouch screen504. Further, theGUI502 shown inFIG. 20 may include thefirst area506aand thesecond area506b, as described above.
As shown in FIG. 20, the first area 506a includes a media presentation 2008 and user options, such as a "share" option 2010, a "hide" option 2012, or a "more" option 2014. The client device 500 may present the user options as graphical user interface elements, such as graphical buttons, within the first area 506a. The client device 500 may hide the user options when the user has not provided user input (e.g., a tap or swipe) to the touch screen 504 of the client device 500 for a minimum amount of time (e.g., one or two seconds).
When a user selects the share option 2010, the client device 500 may provide the user with a sharing interface to share the media segment or the media presentation with one or more co-users, as described above. For example, the client device 500 may display the sharing interface in the second area 506b (not shown). Similarly, when a user selects the more option 2014, the media presentation system 102 may present additional user options to the user, such as the option to report inappropriate content, like the media segment, like the media presentation, and/or add a media segment to the media presentation. Upon selecting the more option 2014, the client device 500 may display a more options interface in the second area 506b (not shown) that includes one or more of the options listed above.
Upon a user selecting the hide option 2012, the client device 500 may display a restrictions interface 2016 in the second area 506b, as illustrated in FIG. 20. The restrictions interface 2016 may include one or more additional options that enable a user to hide, restrict, or otherwise block a media segment, media presentation, or co-user. As illustrated in FIG. 20, the user has selected the hide option 2012, which is indicated by the hide option being highlighted.
As further illustrated in FIG. 20, the second area 506b displays the restrictions interface 2016, which includes the option 2020 to hide the current media segment from the media presentation 2008, the option 2022 to hide all segments from a particular user (e.g., Kim) from the media presentation 2008, the option 2024 to hide/block all media segments from the particular user (e.g., Kim) in all media presentations within the user's media presentation list, and the option 2026 to hide the current media segment for a set time period (e.g., 24 hours, 1 week, etc.). One will appreciate that the media presentation system 102 may present, via the client device 500, a number of additional restriction options to a user, such as blocking all media presentations created by a particular co-user, unblocking a media segment or user, moving the co-user's media segments to the end of a media presentation, or skipping the co-user's media segments for a set number of plays of a media presentation.
In an alternative embodiment, rather than displaying the restrictions interface in the second area 506b, the client device may apply a default restriction option upon the user selecting the hide option 2012. For example, the client device 500 may temporarily hide the media segment from the media presentation 2008 for 24 hours upon the user selecting the hide option 2012. Further, the default restriction option may change based on the number of times a particular media segment has been hidden and/or the number of times a co-user has been restricted. For example, upon the third time that the user selects the hide option 2012 for a particular media segment, the client device 500 may hide the media segment from the media presentation 2008 altogether. If the client device 500 detects a pattern of the user hiding media segments from a particular user (e.g., the user generally hides media segments from the particular user), the client device 500 may automatically, or with user consent, hide all media segments from that particular user.
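The escalating default behavior could be captured with simple counters, as in this sketch; the limits (three hides of the same segment, five hides of the same co-user's segments) are illustrative assumptions rather than values taken from the text.

```python
from collections import Counter

class HideDefaults:
    """Escalate the default action taken when a user selects the hide option."""

    def __init__(self, per_segment_limit=3, per_user_limit=5):
        self.segment_hides = Counter()
        self.user_hides = Counter()
        self.per_segment_limit = per_segment_limit
        self.per_user_limit = per_user_limit

    def on_hide(self, segment_id, creator_id):
        self.segment_hides[segment_id] += 1
        self.user_hides[creator_id] += 1
        if self.user_hides[creator_id] >= self.per_user_limit:
            return "suggest_blocking_user"    # ask consent to hide all of this co-user's segments
        if self.segment_hides[segment_id] >= self.per_segment_limit:
            return "hide_segment_permanently"
        return "hide_segment_24_hours"
```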
Along the lines of hiding a media segment or media presentation, the media presentation system 102 may censor media segments and media presentations based on a user's censorship preferences. To illustrate, FIG. 21 displays a flow diagram showing a method 2100 of the media presentation system 102 censoring content. The media presentation system 102 may censor a media segment or media presentation based on user feedback, as described below.
For purposes of explanation, the media presentation system 102 performs the steps described in FIG. 21; however, a client device, such as the first client device 104a or the second client device 104b described with respect to FIG. 1, may equally perform the steps described in FIG. 21. Likewise, a client device and the media presentation system 102 may collectively perform the steps described in FIG. 21.
Instep2102, themedia presentation system102 may receive a media segment. For example, a media segment may be captured by a client device and sent to themedia presentation system102 as part of a media presentation. The media segment may include inappropriate content.
Instep2104, themedia presentation system102 may determine whether a user has a maturity level setting or preference in place. In other words, when a media segment is shared with a user, themedia presentation system102 may first check preferences for the user to determine if themedia presentation system102 should filter out content that the user has specified as inappropriate. For example, themedia presentation system102 may determine if a user has requested not to receive explicit content.
In some cases, the maturity level setting or explicit content filter may be automatically applied for a user. For example, users under the age of 21 may automatically have a maturity level set to equal the user's current age. For instance, if a user is 15, the user may not be able to view content designated for users age 16 and older. In other cases, a parent, guardian, or administrator may specify the maturity level for another user. For instance, a company administrator may set a maturity level that blocks all inappropriate media segments received on company property.
If the user does not have a maturity level set (manually or automatically), themedia presentation system102 may allow the media segment to be presented to the user, asstep2106 indicates. For example, themedia presentation system102 may send the media segment to a client device associated with the user.
If the user does have a maturity level set, however, the media presentation system 102 may further analyze the media segment to determine whether the media segment satisfies the maturity level set by the user. For example, as step 2108 illustrates, the media presentation system 102 may determine whether the user that created the media segment marked the media segment as being explicit or designated a maturity level for the media segment. For instance, the user that created the media segment may indicate that the media segment is not appropriate for users under the age of 14. Further, the user that created the media segment may indicate that the media segment includes explicit content.
In some cases, the media presentation system 102 may analyze the media segment and detect the presence of mature content, such as inappropriate language, nudity, violence, etc. If the media presentation system 102 detects explicit content, the media presentation system 102 may recommend that the user who created the media segment add a maturity warning or set a maturity age. Alternatively, the media presentation system 102 may automatically assign a maturity warning to the media segment.
If the creator of the media segment has marked the media segment as explicit, the media presentation system 102 may block the media segment from a user, as shown in step 2110. In particular, if the media segment is part of a media presentation, the media presentation system 102 may skip the media segment when playing the media presentation, or may not include the media segment in the media presentation at all when providing the media presentation to a user who has a maturity rating enabled. Further, in some cases, the media presentation system 102 may provide an indication to the user that a media segment has been blocked from the media presentation.
If the creator of the media segment has not marked the media segment as mature or inappropriate, themedia presentation system102 may determine whether a threshold number of viewers have tagged the media segment as inappropriate (e.g., crowd sourcing censorship), illustrated instep2112. More specifically, themedia presentation system102 may identify the number of users that have hidden the media segment, blocked the media segment, or flagged the media segment as inappropriate (e.g., explicit, inappropriate for users under the age of x, contains mature content, etc.). Themedia presentation system102 may compare the number of users that have hidden, blocked, or flagged the media segment and determine if the number is above a threshold. For example, if ten (10) users have reported a media segment as inappropriate, themedia presentation system102 may block the media segment from a user. Themedia presentation system102 may also designate a media segment as inappropriate when a minimum percentage of viewers (e.g., 30% of viewers) have marked the segment as inappropriate. If a threshold number of users have marked the media segment as inappropriate, themedia presentation system102 may block the media segment from the user, as shown instep2110.
In some embodiments, themedia presentation system102 may apply a sliding scale based on the number of users that flag a media segment as inappropriate to determine the appropriateness of a media segment for a given user. As a simplistic example, themedia presentation system102 may determine the age for which the media segment is appropriate based on the number of times users flagged a media segment as inappropriate. For instance, if a media segment has been flagged twelve (12) times for inappropriateness, themedia presentation system102 may determine that the media segment is inappropriate for users under the age of 12. If the media segment is then flagged as inappropriate twice more (i.e., fourteen (14) times total), themedia presentation system102 may determine that the media segment is inappropriate for users under the age of 14. One will appreciate that themedia presentation system102 can apply a different sliding scale rate between the number of times a media segment is marked as inappropriate and the maturity rating assigned to a media segment.
Similarly, the media presentation system 102 may consider the percentage of viewers who mark a media segment as inappropriate in relation to the total number of users that have viewed the media segment. For example, if a media segment has 1,000 counts of reported inappropriateness, the media presentation system 102 will likely determine that the media segment is inappropriate. If the media segment, however, has been viewed over one million times, the 1,000 counts of reported inappropriateness represent less than 0.1% of users who have viewed the media segment, and the media presentation system 102 may determine that the media segment is not inappropriate. On the other hand, if over 30% of users who view a media segment report the media segment as inappropriate and the media segment has been viewed over 100 times, the media presentation system 102 may assign a maturity warning to the media segment.
If, however, a threshold number of users have not marked the media segment as inappropriate or mature for certain audiences, such as the user, the media presentation system 102 may determine whether the user has tagged the media segment as inappropriate, as shown in step 2114. For example, the user may watch a media segment and determine that the user does not want to continue watching or re-watch the media segment. Accordingly, the user may personally mark or tag the media segment as inappropriate. Upon the user tagging the media segment as inappropriate, the media presentation system 102 may block the media segment, as step 2110 illustrates. Otherwise, if the user has not marked the media segment as inappropriate, the media presentation system 102 may allow the user to access (e.g., play) the media segment, as step 2106 illustrates.
In one or more embodiments, the media presentation system 102 may also determine whether co-users connected to the user have indicated that the media segment contains inappropriate content. For example, if a parent tags a media segment as inappropriate, the media presentation system 102 may block the media segment for both the parent and the parent's children (in the case that both users have access to the media segment or have the media segment in their own media presentation lists). In some cases, a parent may mark a media segment as inappropriate for users under 18. In this case, the media presentation system 102 may block the media segment from any of the parent's children who are under 18, but the media presentation system 102 may not block the media segment for any of the parent's children who are over 18, or for the parent.
Similarly, in some embodiments, themedia presentation system102 may identify if friends, family, or other acquaintances have marked a media segment as inappropriate. Themedia presentation system102 may give greater weight when co-users connected to a user have marked a media segment as inappropriate. For example, when two or more friends of a user mark a media segment as inappropriate, and the user has maturity ratings in place, themedia presentation system102 may block the media segment from being provided to the user.
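The crowd-sourced checks described above, from the threshold in step 2112 through the sliding scale and percentage considerations, could be combined roughly as follows; how the two signals interact, the 30% share, and the age cap of 18 are assumptions layered on the text's examples.

```python
def crowd_maturity_decision(flags, views, min_views=100, min_flag_share=0.30):
    """Return a maturity age implied by crowd flags, or None if the crowd signal is weak.

    Mirrors the examples above: a raw flag count maps to an age on a sliding scale
    (12 flags -> under 12, 14 flags -> under 14), but only when enough viewers have
    both seen and flagged the segment.
    """
    if views < min_views:
        return None                     # not enough views to trust the signal
    if flags / views < min_flag_share:
        return None                     # e.g. 1,000 flags out of 1,000,000 views
    return min(flags, 18)               # cap the implied restriction at adult content

def should_block(segment_maturity_age, user_age):
    """Block the segment if the viewer is younger than the implied maturity age."""
    return segment_maturity_age is not None and user_age < segment_maturity_age

print(should_block(crowd_maturity_decision(flags=40, views=120), user_age=15))  # True
```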
FIG. 22 illustrates a flowchart of amethod2200 for censoring a media segment in a media presentation in accordance with one or more embodiments. Themethod2200 includes anact2202 of receiving, at a client device associated with a user, a media segment having a defined maturity level. Themethod2200 also includes anact2204 of determining whether the user has set a maturity level. Themethod2200 further includes anact2206 of determining whether the media segment violates the maturity level set by the user. In addition, based on the determination that the user has not set a maturity level or the determination that the media segment does not violate the maturity level set by the user, themethod2200 includes anact2208 of presenting the media segment to the user. Based on the determination that the user has set a maturity level and that the media segment violates the maturity level set by the user, themethod2200 also includes anact2210 of blocking access by the user to the media segment.
FIG. 23 illustrates a flowchart of amethod2300 for generating a media presentation based on related media segments in accordance with one or more embodiments. Themethod2300 can be performed by thesocial networking system108 and/or themedia presentation system102 described herein. To illustrate, themethod2300 includes anact2302 of identifying information associated with a first media segment. In particular, theact2302 may involve identifying information associated with a first media segment received from afirst client device104aassociated with a first user. For example, the identified information can include an identity of the first user, a geographic location corresponding to a capture location of the first media segment, a timestamp indicating a capture time of the first media segment, a description of the first media segment, and/or tags representing one or more users that participate in the first media segment.
Themethod2300 also includes anact2304 of identifying information associated with a second media segment. In particular, theact2304 may involve identifying information associated with a second media segment received from asecond client device104bassociated with a second user. Further, the method may also include the act of determining that the first user and the second user are or are not socially connected within a communication system.
In addition, themethod2300 includes anact2306 of determining that the first media segment and the second media segment are related. In particular, theact2306 can involve determining that the first media segment and the second media segment are related based on comparing the information associated with the first media segment to the information associated with the second media segment. For example, theact2306 may involve determining that a minimum number of commonalities match between the information associated with the first media segment and the information associated with the second media segment. In some example embodiments, theact2306 may involve determining that the first user associated with the first media segment is tagged in the second media segment that is associated with the second user, comparing a first timestamp corresponding to the first media segment to a second timestamp corresponding to the second media segment to determine a difference in time between the first timestamp and the second timestamp, and determining that the first media segment and the second media segment are related when the difference in time is less than a threshold amount of time.
The method 2300 also includes an act 2308 of generating a media presentation that includes the first media segment and the second media segment. In other words, the act 2308 may involve adding the first media segment and the second media segment to a media presentation. In some example embodiments, the act 2308 may also involve logically joining (e.g., within a media presentation database) the media segments together within a configuration file corresponding to the media presentation.
Themethod2300 further includes anact2310 of providing the media presentation to a first user. In particular, theact2310 may involve providing the media presentation to thefirst client device104aassociated with the first user. Themethod2300 may further involve an act of providing, to thefirst client device104aassociated with the first user, an option to edit the media presentation.
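To make act 2306 of the method 2300 concrete, the sketch below checks the two relatedness tests described above: a minimum number of matching attributes, or the first user being tagged in the second segment with capture times inside a threshold gap. The dictionary keys and thresholds are assumed for illustration.

```python
from datetime import datetime, timedelta

def segments_related(seg_a, seg_b, min_common=2, max_gap=timedelta(hours=1)):
    """Decide whether two media segments are related (act 2306)."""
    # Test 1: a minimum number of commonalities between the segments' information.
    common = sum(1 for key in ("location", "event", "description")
                 if seg_a.get(key) and seg_a.get(key) == seg_b.get(key))
    if common >= min_common:
        return True
    # Test 2: the first user is tagged in the second segment and the timestamps are close.
    tagged = seg_a.get("owner") in seg_b.get("tagged_users", [])
    close_in_time = abs(seg_a["timestamp"] - seg_b["timestamp"]) <= max_gap
    return tagged and close_in_time

a = {"owner": "jake", "location": "stadium", "timestamp": datetime(2015, 6, 1, 19, 0)}
b = {"owner": "sue", "tagged_users": ["jake"], "timestamp": datetime(2015, 6, 1, 19, 20)}
print(segments_related(a, b))  # True (tagged and within one hour)
```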
In addition to the foregoing, embodiments of the present invention also can be described in terms of flowcharts comprising acts and steps in a method for accomplishing a particular result. For example, FIGS. 4, 9, 14, 16, 17, and 22-23, which are described above, illustrate flowcharts of exemplary methods in accordance with one or more embodiments of the present invention. The methods described in relation to FIGS. 4, 9, 14, 16, 17, and 22-23 can be performed with fewer or more steps/acts, or the steps/acts can be performed in differing orders. Additionally, the steps/acts described herein can be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts.
Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. In particular, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer-readable medium and executable by one or more computing devices (e.g., any of the media content access devices described herein). In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium, (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein.
Computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are non-transitory computer-readable storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: non-transitory computer-readable storage media (devices) and transmission media.
Non-transitory computer-readable storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to non-transitory computer-readable storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that non-transitory computer-readable storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. In some embodiments, computer-executable instructions are executed on a general-purpose computer to turn the general-purpose computer into a special purpose computer implementing elements of the disclosure. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Embodiments of the present disclosure can also be implemented in cloud computing environments. In this description, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly.
A cloud-computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud-computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the claims, a “cloud-computing environment” is an environment in which cloud computing is employed.
FIG. 24 illustrates a block diagram of exemplary computing device 2400 that may be configured to perform one or more of the processes described above. One will appreciate that one or more computing devices such as the computing device 2400 may implement the media presentation system 102 and/or computing devices 104a, 104b, 104, and 300. As shown by FIG. 24, the computing device 2400 can comprise a processor 2402, a memory 2404, a storage device 2406, an I/O interface 2408, and a communication interface 2410, which may be communicatively coupled by way of a communication infrastructure 2412. While an exemplary computing device 2400 is shown in FIG. 24, the components illustrated in FIG. 24 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Furthermore, in certain embodiments, the computing device 2400 can include fewer components than those shown in FIG. 24. Components of the computing device 2400 shown in FIG. 24 will now be described in additional detail.
In one or more embodiments, the processor 2402 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor 2402 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 2404, or the storage device 2406 and decode and execute them. In one or more embodiments, the processor 2402 may include one or more internal caches for data, instructions, or addresses. As an example and not by way of limitation, the processor 2402 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in the memory 2404 or the storage 2406.
The memory 2404 may be used for storing data, metadata, and programs for execution by the processor(s). The memory 2404 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. The memory 2404 may be internal or distributed memory.
The storage device 2406 includes storage for storing data or instructions. As an example and not by way of limitation, storage device 2406 can comprise a non-transitory storage medium described above. The storage device 2406 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage device 2406 may include removable or non-removable (or fixed) media, where appropriate. The storage device 2406 may be internal or external to the computing device 2400. In one or more embodiments, the storage device 2406 is non-volatile, solid-state memory. In other embodiments, the storage device 2406 includes read-only memory (ROM). Where appropriate, this ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these.
The I/O interface 2408 allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from computing device 2400. The I/O interface 2408 may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, network interface, modem, other known I/O devices or a combination of such I/O interfaces. The I/O interface 2408 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O interface 2408 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
The communication interface 2410 can include hardware, software, or both. In any event, the communication interface 2410 can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device 2400 and one or more other computing devices or networks. As an example and not by way of limitation, the communication interface 2410 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network.
Additionally or alternatively, the communication interface 2410 may facilitate communications with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the communication interface 2410 may facilitate communications with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination thereof.
Additionally, the communication interface 2410 may facilitate communications using various communication protocols. Examples of communication protocols that may be used include, but are not limited to, data transmission media, communications devices, Transmission Control Protocol (“TCP”), Internet Protocol (“IP”), File Transfer Protocol (“FTP”), Telnet, Hypertext Transfer Protocol (“HTTP”), Hypertext Transfer Protocol Secure (“HTTPS”), Session Initiation Protocol (“SIP”), Simple Object Access Protocol (“SOAP”), Extensible Mark-up Language (“XML”) and variations thereof, Simple Mail Transfer Protocol (“SMTP”), Real-Time Transport Protocol (“RTP”), User Datagram Protocol (“UDP”), Global System for Mobile Communications (“GSM”) technologies, Code Division Multiple Access (“CDMA”) technologies, Time Division Multiple Access (“TDMA”) technologies, Short Message Service (“SMS”), Multimedia Message Service (“MMS”), radio frequency (“RF”) signaling technologies, Long Term Evolution (“LTE”) technologies, wireless communication technologies, in-band and out-of-band signaling technologies, and other suitable communications networks and technologies.
The communication infrastructure 2412 may include hardware, software, or both that couples components of the computing device 2400 to each other. As an example and not by way of limitation, the communication infrastructure 2412 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination thereof.
As mentioned above, the communication system 100 can comprise a social networking system. A social networking system may enable its users (such as persons or organizations) to interact with the system and with each other. The social networking system may, with input from a user, create and store in the social networking system a user profile associated with the user. The user profile may include demographic information, communication-channel information, and information on personal interests of the user. The social networking system may also, with input from a user, create and store a record of relationships of the user with other users of the social networking system, as well as provide services (e.g., wall posts, photo-sharing, on-line calendars and event organization, messaging, games, or advertisements) to facilitate social interaction between or among users. Also, the social networking system may allow users to post photographs and other multimedia content items to a user's profile page (typically known as “wall posts” or “timeline posts”) or in a photo album, both of which may be accessible to other users of the social networking system depending upon the user's configured privacy settings.
FIG. 25 illustrates an example network environment 2500 of a social networking system. Network environment 2500 includes a client system 2506, a social networking system 2502, and a third-party system 2508 connected to each other by a network 2504. Although FIG. 25 illustrates a particular arrangement of client system 2506, social networking system 2502, third-party system 2508, and network 2504, this disclosure contemplates any suitable arrangement of client system 2506, social networking system 2502, third-party system 2508, and network 2504. As an example and not by way of limitation, two or more of client system 2506, social networking system 2502, and third-party system 2508 may be connected to each other directly, bypassing network 2504. As another example, two or more of client system 2506, social networking system 2502, and third-party system 2508 may be physically or logically co-located with each other in whole, or in part. Moreover, although FIG. 25 illustrates a particular number of client systems 2506, social networking systems 2502, third-party systems 2508, and networks 2504, this disclosure contemplates any suitable number of client systems 2506, social networking systems 2502, third-party systems 2508, and networks 2504. As an example and not by way of limitation, network environment 2500 may include multiple client systems 2506, social networking systems 2502, third-party systems 2508, and networks 2504.
This disclosure contemplates any suitable network 2504. As an example and not by way of limitation, one or more portions of network 2504 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. Network 2504 may include one or more networks 2504.
Links may connect client system 2506, social networking system 2502, and third-party system 2508 to communication network 2504 or to each other. This disclosure contemplates any suitable links. In particular embodiments, one or more links include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link, or a combination of two or more such links. Links need not necessarily be the same throughout network environment 2500. One or more first links may differ in one or more respects from one or more second links.
In particular embodiments, client system 2506 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system 2506. As an example and not by way of limitation, a client system 2506 may include any of the client devices or systems described in the above figures. A client system 2506 may enable a network user at client system 2506 to access network 2504. A client system 2506 may enable its user to communicate with other users at other client systems 2506.
In particular embodiments, client system 2506 may include a web browser, such as MICROSOFT INTERNET EXPLORER, GOOGLE CHROME, or MOZILLA FIREFOX, and may have one or more add-ons, plug-ins, or other extensions, such as TOOLBAR or YAHOO TOOLBAR. A user at client system 2506 may enter a Uniform Resource Locator (URL) or other address directing the web browser to a particular server (such as server, or a server associated with a third-party system 2508), and the web browser may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to server. The server may accept the HTTP request and communicate to client system 2506 one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. Client system 2506 may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such pages may also execute scripts such as, for example and without limitation, those written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup language and scripts such as AJAX (Asynchronous JAVASCRIPT and XML), and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate.
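By way of illustration and not limitation, the following Python sketch shows the request/response exchange described above (a client sending an HTTP request and receiving markup to render). It uses only the standard urllib module; the function name and the example URL are hypothetical placeholders and do not form part of this disclosure.

    # Minimal sketch of the browser-style request/response exchange described above.
    # Assumes Python's standard urllib module; the URL is a hypothetical placeholder.
    from urllib import request

    def fetch_webpage(url: str) -> str:
        """Send an HTTP GET request and return the responsive markup as text."""
        with request.urlopen(url) as response:       # client issues the HTTP request
            return response.read().decode("utf-8")   # server replies with one or more HTML files

    if __name__ == "__main__":
        html = fetch_webpage("http://www.example.com/")  # placeholder address
        print(html[:200])                                # a browser would render this markup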
In particular embodiments, social networking system 2502 may be a network-addressable computing system that can host an online social network. Social networking system 2502 may generate, store, receive, and send social-networking data, such as, for example, user-profile data, concept-profile data, social-graph information, or other suitable data related to the online social network. Social networking system 2502 may be accessed by the other components of network environment 2500 either directly or via network 2504. In particular embodiments, social networking system 2502 may include one or more servers. Each server may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. Servers may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by server. In particular embodiments, social networking system 2502 may include one or more data stores. Data stores may be used to store various types of information. In particular embodiments, the information stored in data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system 2506, a social networking system 2502, or a third-party system 2508 to manage, retrieve, modify, add, or delete the information stored in data store.
In particular embodiments, social networking system 2502 may store one or more social graphs in one or more data stores. In particular embodiments, a social graph may include multiple nodes—which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept)—and multiple edges connecting the nodes. Social networking system 2502 may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via social networking system 2502 and then add connections (e.g., relationships) to a number of other users of social networking system 2502 whom they want to be connected to. Herein, the term “friend” may refer to any other user of social networking system 2502 with which a user has formed a connection, association, or relationship via social networking system 2502.
In particular embodiments, social networking system 2502 may provide users with the ability to take actions on various types of items or objects, supported by social networking system 2502. As an example and not by way of limitation, the items and objects may include groups or social networks to which users of social networking system 2502 may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use, transactions that allow users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects. A user may interact with anything that is capable of being represented in social networking system 2502 or by an external system of third-party system 2508, which is separate from social networking system 2502 and coupled to social networking system 2502 via a network 2504.
In particular embodiments, social networking system 2502 may be capable of linking a variety of entities. As an example and not by way of limitation, social networking system 2502 may enable users to interact with each other as well as receive content from third-party systems 2508 or other entities, or to allow users to interact with these entities through application programming interfaces (APIs) or other communication channels.
In particular embodiments, a third-party system 2508 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A third-party system 2508 may be operated by a different entity from an entity operating social networking system 2502. In particular embodiments, however, social networking system 2502 and third-party systems 2508 may operate in conjunction with each other to provide social-networking services to users of social networking system 2502 or third-party systems 2508. In this sense, social networking system 2502 may provide a platform, or backbone, which other systems, such as third-party systems 2508, may use to provide social-networking services and functionality to users across the Internet.
In particular embodiments, a third-party system 2508 may include a third-party content object provider. A third-party content object provider may include one or more sources of content objects, which may be communicated to a client system 2506. As an example and not by way of limitation, content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects.
In particular embodiments, social networking system 2502 also includes user-generated content objects, which may enhance a user's interactions with social networking system 2502. User-generated content may include anything a user can add, upload, send, or “post” to social networking system 2502. As an example and not by way of limitation, a user communicates posts to social networking system 2502 from a client system 2506. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music or other similar data or media. Content may also be added to social networking system 2502 by a third-party through a “communication channel,” such as a newsfeed or stream.
In particular embodiments, social networking system 2502 may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, social networking system 2502 may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. Social networking system 2502 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, social networking system 2502 may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. As an example and not by way of limitation, if a user “likes” an article about a brand of shoes, the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external). A web server may be used for linking social networking system 2502 to one or more client systems 2506 or one or more third-party systems 2508 via network 2504. The web server may include a mail server or other messaging functionality for receiving and routing messages between social networking system 2502 and one or more client systems 2506. An API-request server may allow a third-party system 2508 to access information from social networking system 2502 by calling one or more APIs. An action logger may be used to receive communications from a web server about a user's actions on or off social networking system 2502. In conjunction with the action log, a third-party-content-object log may be maintained of user exposures to third-party-content objects. A notification controller may provide information regarding content objects to a client system 2506. Information may be pushed to a client system 2506 as notifications, or information may be pulled from client system 2506 responsive to a request received from client system 2506. Authorization servers may be used to enforce one or more privacy settings of the users of social networking system 2502. A privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by social networking system 2502 or shared with other systems (e.g., third-party system 2508), such as, for example, by setting appropriate privacy settings.
Third-party-content-object stores may be used to store content objects received from third parties, such as a third-party system 2508. Location stores may be used for storing location information received from client systems 2506 associated with users. Advertisement-pricing modules may combine social information, the current time, location information, or other suitable information to provide relevant advertisements, in the form of notifications, to a user.
FIG. 26 illustrates example social graph 2600. In particular embodiments, social networking system 2502 may store one or more social graphs 2600 in one or more data stores. In particular embodiments, social graph 2600 may include multiple nodes—which may include multiple user nodes 2602 or multiple concept nodes 2604—and multiple edges 2606 connecting the nodes. Example social graph 2600 illustrated in FIG. 26 is shown, for didactic purposes, in a two-dimensional visual map representation. In particular embodiments, a social networking system 2502, client system 2506, or third-party system 2508 may access social graph 2600 and related social-graph information for suitable applications. The nodes and edges of social graph 2600 may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph 2600.
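By way of illustration and not limitation, the following Python sketch shows one way nodes and edges of a social graph could be stored as simple data objects with a queryable lookup, as described above. The class names, fields, and edge types are hypothetical illustrations and are not a definition of any particular implementation of this disclosure.

    # Illustrative, non-limiting sketch of nodes and edges stored as data objects
    # with a simple queryable index. All names and fields are hypothetical.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Node:
        node_id: str
        node_type: str            # e.g., "user" or "concept"

    @dataclass(frozen=True)
    class Edge:
        source: str               # node_id of one endpoint
        target: str               # node_id of the other endpoint
        edge_type: str            # e.g., "friend", "like", "listened"

    @dataclass
    class SocialGraph:
        nodes: dict = field(default_factory=dict)   # node_id -> Node
        edges: list = field(default_factory=list)   # list of Edge objects

        def add_node(self, node: Node) -> None:
            self.nodes[node.node_id] = node

        def add_edge(self, edge: Edge) -> None:
            self.edges.append(edge)

        def edges_of(self, node_id: str) -> list:
            """Simple queryable index: all edges touching a given node."""
            return [e for e in self.edges if node_id in (e.source, e.target)]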
In particular embodiments, a user node 2602 may correspond to a user of social networking system 2502. As an example and not by way of limitation, a user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over social networking system 2502. In particular embodiments, when a user registers for an account with social networking system 2502, social networking system 2502 may create a user node 2602 corresponding to the user, and store the user node 2602 in one or more data stores. Users and user nodes 2602 described herein may, where appropriate, refer to registered users and user nodes 2602 associated with registered users. In addition or as an alternative, users and user nodes 2602 described herein may, where appropriate, refer to users that have not registered with social networking system 2502. In particular embodiments, a user node 2602 may be associated with information provided by a user or information gathered by various systems, including social networking system 2502. As an example and not by way of limitation, a user may provide his or her name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. Each user node of the social graph may have a corresponding web page (typically known as a profile page). In response to a request including a user name, the social networking system can access a user node corresponding to the user name, and construct a profile page including the name, a profile picture, and other information associated with the user. A profile page of a first user may display to a second user all or a portion of the first user's information based on one or more privacy settings by the first user and the relationship between the first user and the second user.
In particular embodiments, a concept node 2604 may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with social-network system 2502 or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within social networking system 2502 or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; another suitable concept; or two or more such concepts. A concept node 2604 may be associated with information of a concept provided by a user or information gathered by various systems, including social networking system 2502. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node 2604 may be associated with one or more data objects corresponding to information associated with concept node 2604. In particular embodiments, a concept node 2604 may correspond to one or more webpages.
In particular embodiments, a node in social graph 2600 may represent or be represented by a webpage (which may be referred to as a “profile page”). Profile pages may be hosted by or accessible to social networking system 2502. Profile pages may also be hosted on third-party websites associated with a third-party server 2508. As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage and the profile page may correspond to a particular concept node 2604. Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 2602 may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. As another example and not by way of limitation, a concept node 2604 may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 2604.
In particular embodiments, a concept node 2604 may represent a third-party webpage or resource hosted by a third-party system 2508. The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or other inter-actable object (which may be implemented, for example, in JavaScript, AJAX, or PHP codes) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as “like,” “check in,” “eat,” “recommend,” or another suitable action or activity. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., “eat”), causing a client system 2506 to send to social networking system 2502 a message indicating the user's action. In response to the message, social networking system 2502 may create an edge (e.g., an “eat” edge) between a user node 2602 corresponding to the user and a concept node 2604 corresponding to the third-party webpage or resource and store edge 2606 in one or more data stores.
In particular embodiments, a pair of nodes in social graph 2600 may be connected to each other by one or more edges 2606. An edge 2606 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 2606 may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a “friend” of the first user. In response to this indication, social networking system 2502 may send a “friend request” to the second user. If the second user confirms the “friend request,” social networking system 2502 may create an edge 2606 connecting the first user's user node 2602 to the second user's user node 2602 in social graph 2600 and store edge 2606 as social-graph information in one or more of data stores. In the example of FIG. 26, social graph 2600 includes an edge 2606 indicating a friend relation between user nodes 2602 of user “A” and user “B” and an edge indicating a friend relation between user nodes 2602 of user “C” and user “B.” Although this disclosure describes or illustrates particular edges 2606 with particular attributes connecting particular user nodes 2602, this disclosure contemplates any suitable edges 2606 with any suitable attributes connecting user nodes 2602. As an example and not by way of limitation, an edge 2606 may represent a friendship, family relationship, business or employment relationship, fan relationship, follower relationship, visitor relationship, subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph 2600 by one or more edges 2606.
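By way of illustration and not limitation, the following Python sketch shows the friend-request flow described above: an undirected “friend” edge connecting two user nodes is stored only after the second user confirms the request. The function names, the data structures, and the user identifiers are hypothetical illustrations.

    # Non-limiting sketch of the friend-request flow described above. The edge is
    # created only after the second user confirms; all names are hypothetical.
    def send_friend_request(pending: set, first_user: str, second_user: str) -> None:
        pending.add((first_user, second_user))          # "friend request" sent

    def confirm_friend_request(pending: set, edges: set,
                               first_user: str, second_user: str) -> None:
        if (first_user, second_user) in pending:
            pending.discard((first_user, second_user))
            # store an undirected "friend" edge as social-graph information
            edges.add(frozenset((first_user, second_user)))

    pending_requests, friend_edges = set(), set()
    send_friend_request(pending_requests, "user_A", "user_B")
    confirm_friend_request(pending_requests, friend_edges, "user_A", "user_B")
    print(friend_edges)   # {frozenset({'user_A', 'user_B'})}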
In particular embodiments, an edge 2606 between a user node 2602 and a concept node 2604 may represent a particular action or activity performed by a user associated with user node 2602 toward a concept associated with a concept node 2604. As an example and not by way of limitation, as illustrated in FIG. 26, a user may “like,” “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” a concept, each of which may correspond to an edge type or subtype. A concept-profile page corresponding to a concept node 2604 may include, for example, a selectable “check in” icon (such as, for example, a clickable “check in” icon) or a selectable “add to favorites” icon. Similarly, after a user clicks these icons, social networking system 2502 may create a “favorite” edge or a “check in” edge in response to a user's action corresponding to a respective action. As another example and not by way of limitation, a user (user “C”) may listen to a particular song (“Ramble On”) using a particular application (SPOTIFY, which is an online music application). In this case, social networking system 2502 may create a “listened” edge 2606 and a “used” edge (as illustrated in FIG. 26) between user nodes 2602 corresponding to the user and concept nodes 2604 corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, social networking system 2502 may create a “played” edge 2606 (as illustrated in FIG. 26) between concept nodes 2604 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, “played” edge 2606 corresponds to an action performed by an external application (SPOTIFY) on an external audio file (the song “Imagine”). Although this disclosure describes particular edges 2606 with particular attributes connecting user nodes 2602 and concept nodes 2604, this disclosure contemplates any suitable edges 2606 with any suitable attributes connecting user nodes 2602 and concept nodes 2604. Moreover, although this disclosure describes edges between a user node 2602 and a concept node 2604 representing a single relationship, this disclosure contemplates edges between a user node 2602 and a concept node 2604 representing one or more relationships. As an example and not by way of limitation, an edge 2606 may represent both that a user likes and has used a particular concept. Alternatively, another edge 2606 may represent each type of relationship (or multiples of a single relationship) between a user node 2602 and a concept node 2604 (as illustrated in FIG. 26 between user node 2602 for user “E” and concept node 2604 for “SPOTIFY”).
In particular embodiments, social networking system 2502 may create an edge 2606 between a user node 2602 and a concept node 2604 in social graph 2600. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user's client system 2506) may indicate that he or she likes the concept represented by the concept node 2604 by clicking or selecting a “Like” icon, which may cause the user's client system 2506 to send to social networking system 2502 a message indicating the user's liking of the concept associated with the concept-profile page. In response to the message, social networking system 2502 may create an edge 2606 between user node 2602 associated with the user and concept node 2604, as illustrated by “like” edge 2606 between the user and concept node 2604. In particular embodiments, social networking system 2502 may store an edge 2606 in one or more data stores. In particular embodiments, an edge 2606 may be automatically formed by social networking system 2502 in response to a particular user action. As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 2606 may be formed between user node 2602 corresponding to the first user and concept nodes 2604 corresponding to those concepts. Although this disclosure describes forming particular edges 2606 in particular manners, this disclosure contemplates forming any suitable edges 2606 in any suitable manner.
In particular embodiments, an advertisement may be text (which may be HTML-linked), one or more images (which may be HTML-linked), one or more videos, audio, one or more ADOBE FLASH files, a suitable combination of these, or any other suitable advertisement in any suitable digital format presented on one or more webpages, in one or more e-mails, or in connection with search results requested by a user. In addition or as an alternative, an advertisement may be one or more sponsored stories (e.g., a news-feed or ticker item on social networking system 2502). A sponsored story may be a social action by a user (such as “liking” a page, “liking” or commenting on a post on a page, RSVPing to an event associated with a page, voting on a question posted on a page, checking in to a place, using an application or playing a game, or “liking” or sharing a website) that an advertiser promotes, for example, by having the social action presented within a pre-determined area of a profile page of a user or other page, presented with additional information associated with the advertiser, bumped up or otherwise highlighted within news feeds or tickers of other users, or otherwise promoted. The advertiser may pay to have the social action promoted. As an example and not by way of limitation, advertisements may be included among the search results of a search-results page, where sponsored content is promoted over non-sponsored content.
In particular embodiments, an advertisement may be requested for display within social-networking-system webpages, third-party webpages, or other pages. An advertisement may be displayed in a dedicated portion of a page, such as in a banner area at the top of the page, in a column at the side of the page, in a GUI of the page, in a pop-up window, in a drop-down menu, in an input field of the page, over the top of content of the page, or elsewhere with respect to the page. In addition or as an alternative, an advertisement may be displayed within an application. An advertisement may be displayed within dedicated pages, requiring the user to interact with or watch the advertisement before the user may access a page or utilize an application. The user may, for example, view the advertisement through a web browser.
A user may interact with an advertisement in any suitable manner. The user may click or otherwise select the advertisement. By selecting the advertisement, the user (or a browser or other application being used by the user) may be directed to a page associated with the advertisement. At the page associated with the advertisement, the user may take additional actions, such as purchasing a product or service associated with the advertisement, receiving information associated with the advertisement, or subscribing to a newsletter associated with the advertisement. An advertisement with audio or video may be played by selecting a component of the advertisement (like a “play button”). Alternatively, by selecting the advertisement, social networking system 2502 may execute or modify a particular action of the user.
An advertisement may also include social-networking-system functionality that a user may interact with. As an example and not by way of limitation, an advertisement may enable a user to “like” or otherwise endorse the advertisement by selecting an icon or link associated with endorsement. As another example and not by way of limitation, an advertisement may enable a user to search (e.g., by executing a query) for content related to the advertiser. Similarly, a user may share the advertisement with another user (e.g., through social networking system 2502) or RSVP (e.g., through social networking system 2502) to an event associated with the advertisement. In addition or as an alternative, an advertisement may include social-networking-system context directed to the user. As an example and not by way of limitation, an advertisement may display information about a friend of the user within social networking system 2502 who has taken an action associated with the subject matter of the advertisement.
In particular embodiments, social networking system 2502 may determine the social-graph affinity (which may be referred to herein as “affinity”) of various social-graph entities for each other. Affinity may represent the strength of a relationship or level of interest between particular objects associated with the online social network, such as users, concepts, content, actions, advertisements, other objects associated with the online social network, or any suitable combination thereof. Affinity may also be determined with respect to objects associated with third-party systems 2508 or other suitable systems. An overall affinity for a social-graph entity for each user, subject matter, or type of content may be established. The overall affinity may change based on continued monitoring of the actions or relationships associated with the social-graph entity. Although this disclosure describes determining particular affinities in a particular manner, this disclosure contemplates determining any suitable affinities in any suitable manner.
In particular embodiments, social networking system 2502 may measure or quantify social-graph affinity using an affinity coefficient (which may be referred to herein as “coefficient”). The coefficient may represent or quantify the strength of a relationship between particular objects associated with the online social network. The coefficient may also represent a probability or function that measures a predicted probability that a user will perform a particular action based on the user's interest in the action. In this way, a user's future actions may be predicted based on the user's prior actions, where the coefficient may be calculated at least in part on the history of the user's actions. Coefficients may be used to predict any number of actions, which may be within or outside of the online social network. As an example and not by way of limitation, these actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of observation actions, such as accessing or viewing profile pages, media, or other suitable content; various types of coincidence information about two or more social-graph entities, such as being in the same group, tagged in the same photograph, checked-in at the same location, or attending the same event; or other suitable actions. Although this disclosure describes measuring affinity in a particular manner, this disclosure contemplates measuring affinity in any suitable manner.
In particular embodiments, social networking system 2502 may use a variety of factors to calculate a coefficient. These factors may include, for example, user actions, types of relationships between objects, location information, other suitable factors, or any combination thereof. In particular embodiments, different factors may be weighted differently when calculating the coefficient. The weights for each factor may be static or the weights may change according to, for example, the user, the type of relationship, the type of action, the user's location, and so forth. Ratings for the factors may be combined according to their weights to determine an overall coefficient for the user. As an example and not by way of limitation, particular user actions may be assigned both a rating and a weight while a relationship associated with the particular user action is assigned a rating and a correlating weight (e.g., so the weights total 100%). To calculate the coefficient of a user towards a particular object, the rating assigned to the user's actions may comprise, for example, 60% of the overall coefficient, while the relationship between the user and the object may comprise 40% of the overall coefficient. In particular embodiments, the social networking system 2502 may consider a variety of variables when determining weights for various factors used to calculate a coefficient, such as, for example, the time since information was accessed, decay factors, frequency of access, relationship to information or relationship to the object about which information was accessed, relationship to social-graph entities connected to the object, short- or long-term averages of user actions, user feedback, other suitable variables, or any combination thereof. As an example and not by way of limitation, a coefficient may include a decay factor that causes the strength of the signal provided by particular actions to decay with time, such that more recent actions are more relevant when calculating the coefficient. The ratings and weights may be continuously updated based on continued tracking of the actions upon which the coefficient is based. Any type of process or algorithm may be employed for assigning, combining, averaging, and so forth the ratings for each factor and the weights assigned to the factors. In particular embodiments, social networking system 2502 may determine coefficients using machine-learning algorithms trained on historical actions and past user responses, or data farmed from users by exposing them to various options and measuring responses. Although this disclosure describes calculating coefficients in a particular manner, this disclosure contemplates calculating coefficients in any suitable manner.
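By way of illustration and not limitation, the following Python sketch combines factor ratings according to their weights and applies a time-decay factor so that more recent actions count for more, consistent with the example weights above (60% actions, 40% relationship). The decay constant, half-life, and function names are hypothetical choices and are not prescribed by this disclosure.

    # Illustrative, non-limiting sketch: weighted combination of factor ratings with
    # an exponential decay on the action signal. All constants are hypothetical.
    import math

    def decayed_rating(rating: float, age_days: float, half_life_days: float = 30.0) -> float:
        """Decay the strength of an action signal with the age of the action."""
        return rating * math.exp(-math.log(2) * age_days / half_life_days)

    def overall_coefficient(action_rating: float, action_age_days: float,
                            relationship_rating: float,
                            action_weight: float = 0.6,
                            relationship_weight: float = 0.4) -> float:
        """Combine ratings according to their weights (weights total 100%)."""
        return (action_weight * decayed_rating(action_rating, action_age_days)
                + relationship_weight * relationship_rating)

    # A recent action contributes more to the coefficient than an old one.
    print(overall_coefficient(0.8, action_age_days=1, relationship_rating=0.5))
    print(overall_coefficient(0.8, action_age_days=90, relationship_rating=0.5))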
In particular embodiments, social networking system 2502 may calculate a coefficient based on a user's actions. Social networking system 2502 may monitor such actions on the online social network, on a third-party system 2508, on other suitable systems, or any combination thereof. Any suitable type of user actions may be tracked or monitored. Typical user actions include viewing profile pages, creating or posting content, interacting with content, joining groups, listing and confirming attendance at events, checking-in at locations, liking particular pages, creating pages, and performing other tasks that facilitate social action. In particular embodiments, social networking system 2502 may calculate a coefficient based on the user's actions with particular types of content. The content may be associated with the online social network, a third-party system 2508, or another suitable system. The content may include users, profile pages, posts, news stories, headlines, instant messages, chat room conversations, emails, advertisements, pictures, video, music, other suitable objects, or any combination thereof. Social networking system 2502 may analyze a user's actions to determine whether one or more of the actions indicate an affinity for subject matter, content, other users, and so forth. As an example and not by way of limitation, if a user frequently posts content related to “coffee” or variants thereof, social networking system 2502 may determine the user has a high coefficient with respect to the concept “coffee.” Particular actions or types of actions may be assigned a higher weight and/or rating than other actions, which may affect the overall calculated coefficient. As an example and not by way of limitation, if a first user emails a second user, the weight or the rating for the action may be higher than if the first user simply views the user-profile page for the second user.
In particular embodiments, social networking system 2502 may calculate a coefficient based on the type of relationship between particular objects. Referencing the social graph 2600, social networking system 2502 may analyze the number and/or type of edges 2606 connecting particular user nodes 2602 and concept nodes 2604 when calculating a coefficient. As an example and not by way of limitation, user nodes 2602 that are connected by a spouse-type edge (representing that the two users are married) may be assigned a higher coefficient than user nodes 2602 that are connected by a friend-type edge. In other words, depending upon the weights assigned to the actions and relationships for the particular user, the overall affinity may be determined to be higher for content about the user's spouse than for content about the user's friend. In particular embodiments, the relationships a user has with another object may affect the weights and/or the ratings of the user's actions with respect to calculating the coefficient for that object. As an example and not by way of limitation, if a user is tagged in a first photo, but merely likes a second photo, social networking system 2502 may determine that the user has a higher coefficient with respect to the first photo than the second photo because having a tagged-in-type relationship with content may be assigned a higher weight and/or rating than having a like-type relationship with content. In particular embodiments, social networking system 2502 may calculate a coefficient for a first user based on the relationship one or more second users have with a particular object. In other words, the connections and coefficients other users have with an object may affect the first user's coefficient for the object. As an example and not by way of limitation, if a first user is connected to or has a high coefficient for one or more second users, and those second users are connected to or have a high coefficient for a particular object, social networking system 2502 may determine that the first user should also have a relatively high coefficient for the particular object. In particular embodiments, the coefficient may be based on the degree of separation between particular objects. Degree of separation between any two nodes is defined as the minimum number of hops required to traverse the social graph from one node to the other. A degree of separation between two nodes can be considered a measure of relatedness between the users or the concepts represented by the two nodes in the social graph. For example, two users having user nodes that are directly connected by an edge (i.e., are first-degree nodes) may be described as “connected users” or “friends.” Similarly, two users having user nodes that are connected only through another user node (i.e., are second-degree nodes) may be described as “friends of friends.” The lower coefficient may represent the decreasing likelihood that the first user will share an interest in content objects of the user that is indirectly connected to the first user in the social graph 2600. As an example and not by way of limitation, social-graph entities that are closer in the social graph 2600 (i.e., fewer degrees of separation) may have a higher coefficient than entities that are further apart in the social graph 2600.
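By way of illustration and not limitation, the following Python sketch computes degree of separation as the minimum number of hops between two nodes using a breadth-first search over an adjacency map, consistent with the definition above. The graph structure and identifiers are hypothetical illustrations.

    # Non-limiting sketch: degree of separation as the minimum hop count between
    # two nodes, via breadth-first search. The graph and names are hypothetical.
    from collections import deque

    def degree_of_separation(adjacency: dict, start: str, goal: str) -> int:
        """Return the minimum number of hops from start to goal, or -1 if unreachable."""
        if start == goal:
            return 0
        visited, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            for neighbor in adjacency.get(node, ()):
                if neighbor == goal:
                    return hops + 1
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append((neighbor, hops + 1))
        return -1

    graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    print(degree_of_separation(graph, "A", "B"))  # 1 -> "connected users" or "friends"
    print(degree_of_separation(graph, "A", "C"))  # 2 -> "friends of friends"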
In particular embodiments, social networking system 2502 may calculate a coefficient based on location information. Objects that are geographically closer to each other may be considered to be more related, or of more interest, to each other than more distant objects. In particular embodiments, the coefficient of a user towards a particular object may be based on the proximity of the object's location to a current location associated with the user (or the location of a client system 2506 of the user). A first user may be more interested in other users or concepts that are closer to the first user. As an example and not by way of limitation, if a user is one mile from an airport and two miles from a gas station, social networking system 2502 may determine that the user has a higher coefficient for the airport than the gas station based on the proximity of the airport to the user.
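By way of illustration and not limitation, a proximity-weighted score could be computed as in the following Python sketch, so that the nearer object (one mile away) receives a higher location-based coefficient than the more distant one (two miles away). The inverse-distance form is a hypothetical choice, not a prescribed formula.

    # Illustrative, non-limiting sketch: closer objects receive a higher score.
    def proximity_score(distance_miles: float) -> float:
        """Hypothetical inverse-distance weighting for a location-based coefficient."""
        return 1.0 / (1.0 + distance_miles)

    print(proximity_score(1.0))  # airport, one mile away   -> higher coefficient
    print(proximity_score(2.0))  # gas station, two miles away -> lower coefficient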
In particular embodiments, social networking system 2502 may perform particular actions with respect to a user based on coefficient information. Coefficients may be used to predict whether a user will perform a particular action based on the user's interest in the action. A coefficient may be used when generating or presenting any type of objects to a user, such as advertisements, search results, news stories, media, messages, notifications, or other suitable objects. The coefficient may also be utilized to rank and order such objects, as appropriate. In this way, social networking system 2502 may provide information that is relevant to a user's interests and current circumstances, increasing the likelihood that the user will find such information of interest. In particular embodiments, social networking system 2502 may generate content based on coefficient information. Content objects may be provided or selected based on coefficients specific to a user. As an example and not by way of limitation, the coefficient may be used to generate media for the user, where the user may be presented with media for which the user has a high overall coefficient with respect to the media object. As another example and not by way of limitation, the coefficient may be used to generate advertisements for the user, where the user may be presented with advertisements for which the user has a high overall coefficient with respect to the advertised object. In particular embodiments, social networking system 2502 may generate search results based on coefficient information. Search results for a particular user may be scored or ranked based on the coefficient associated with the search results with respect to the querying user. As an example and not by way of limitation, search results corresponding to objects with higher coefficients may be ranked higher on a search-results page than results corresponding to objects having lower coefficients.
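By way of illustration and not limitation, the following Python sketch ranks candidate objects (e.g., search results or media items) by a pre-computed coefficient for the querying user, so that higher-coefficient objects appear first. The data and field names are hypothetical.

    # Non-limiting sketch: order candidate objects by coefficient, highest first.
    results = [
        {"object": "photo_album", "coefficient": 0.42},
        {"object": "news_story",  "coefficient": 0.91},
        {"object": "event_page",  "coefficient": 0.67},
    ]

    ranked = sorted(results, key=lambda item: item["coefficient"], reverse=True)
    for item in ranked:
        print(item["object"], item["coefficient"])   # higher coefficient ranks higher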
In particular embodiments, social networking system 2502 may calculate a coefficient in response to a request for a coefficient from a particular system or process. To predict the likely actions a user may take (or may be the subject of) in a given situation, any process may request a calculated coefficient for a user. The request may also include a set of weights to use for the various factors used to calculate the coefficient. This request may come from a process running on the online social network, from a third-party system 2508 (e.g., via an API or other communication channel), or from another suitable system. In response to the request, social networking system 2502 may calculate the coefficient (or access the coefficient information if it has previously been calculated and stored). In particular embodiments, social networking system 2502 may measure an affinity with respect to a particular process. Different processes (both internal and external to the online social network) may request a coefficient for a particular object or set of objects. Social networking system 2502 may provide a measure of affinity that is relevant to the particular process that requested the measure of affinity. In this way, each process receives a measure of affinity that is tailored for the different context in which the process will use the measure of affinity.
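As an example and not by way of limitation, a coefficient request that carries its own factor weights, with previously calculated values served from storage, might be sketched as follows; the request shape, factor names, and caching scheme are illustrative assumptions and not part of any particular system's API.

# Hypothetical sketch: a process-specific coefficient request carrying its own
# factor weights, with previously computed values reused from a cache.
_coefficient_cache = {}

def handle_coefficient_request(user_id, object_id, weights, factor_scores):
    """weights and factor_scores map factor names (e.g. 'actions',
    'relationships', 'location') to caller-supplied weights and per-factor scores."""
    cache_key = (user_id, object_id, tuple(sorted(weights.items())))
    if cache_key in _coefficient_cache:          # reuse a stored coefficient
        return _coefficient_cache[cache_key]
    total_weight = sum(weights.values()) or 1.0
    coefficient = sum(weights[f] * factor_scores.get(f, 0.0) for f in weights) / total_weight
    _coefficient_cache[cache_key] = coefficient
    return coefficient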
In connection with social-graph affinity and affinity coefficients, particular embodiments may utilize one or more systems, components, elements, functions, methods, operations, or steps disclosed in U.S. patent application Ser. No. 11/503,093, filed Aug. 8, 2006, U.S. patent application Ser. No. 12/977,027, filed Dec. 22, 2010, U.S. patent application Ser. No. 12/978,265, filed Dec. 24, 2010, and U.S. patent application Ser. No. 13/632,869, filed Oct. 1, 2012, each of which is incorporated by reference in its entirety.
In particular embodiments, one or more of the content objects of the online social network may be associated with a privacy setting. The privacy settings (or “access settings”) for an object may be stored in any suitable manner, such as, for example, in association with the object, in an index on an authorization server, in another suitable manner, or any combination thereof. A privacy setting of an object may specify how the object (or particular information associated with the object) can be accessed (e.g., viewed or shared) using the online social network. Where the privacy settings for an object allow a particular user to access that object, the object may be described as being “visible” with respect to that user. As an example and not by way of limitation, a user of the online social network may specify privacy settings for a user-profile page that identify a set of users that may access the work experience information on the user-profile page, thus excluding other users from accessing the information. In particular embodiments, the privacy settings may specify a “blocked list” of users that should not be allowed to access certain information associated with the object. In other words, the blocked list may specify one or more users or entities for which an object is not visible. As an example and not by way of limitation, a user may specify a set of users that may not access photo albums associated with the user, thus excluding those users from accessing the photo albums (while also possibly allowing certain users not within the set of users to access the photo albums). In particular embodiments, privacy settings may be associated with particular social-graph elements. Privacy settings of a social-graph element, such as a node or an edge, may specify how the social-graph element, information associated with the social-graph element, or content objects associated with the social-graph element can be accessed using the online social network. As an example and not by way of limitation, a particular concept node 2604 corresponding to a particular photo may have a privacy setting specifying that the photo may only be accessed by users tagged in the photo and their friends. In particular embodiments, privacy settings may allow users to opt in or opt out of having their actions logged by social networking system 2502 or shared with other systems (e.g., third-party system 2508). In particular embodiments, the privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. As an example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, and my boss), users within a particular degree of separation (e.g., friends, or friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users (“public”), no users (“private”), users of third-party systems 2508, particular applications (e.g., third-party applications, external websites), other suitable users or entities, or any combination thereof. Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner.
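As an example and not by way of limitation, one possible in-memory representation of a per-object privacy setting with an audience and a blocked list is sketched below; the class and field names are hypothetical and chosen only for illustration.

# Hypothetical sketch: a per-object privacy setting with an allowed audience
# and a blocked list. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PrivacySetting:
    audience: set = field(default_factory=set)       # e.g. {"friends"} or {"public"}
    allowed_users: set = field(default_factory=set)  # specific users granted access
    blocked_users: set = field(default_factory=set)  # users for whom the object is not visible

    def is_visible_to(self, user_id, user_groups):
        if user_id in self.blocked_users:             # the blocked list always wins
            return False
        if "public" in self.audience:
            return True
        return user_id in self.allowed_users or bool(self.audience & user_groups)

# A photo visible to friends, except one blocked user.
photo_setting = PrivacySetting(audience={"friends"}, blocked_users={"user_9"})
print(photo_setting.is_visible_to("user_7", {"friends"}))  # True
print(photo_setting.is_visible_to("user_9", {"friends"}))  # False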
In particular embodiments, one or more servers may be authorization/privacy servers for enforcing privacy settings. In response to a request from a user (or other entity) for a particular object stored in a data store, social networking system 2502 may send a request to the data store for the object. The request may identify the user associated with the request, and the object may only be sent to the user (or a client system 2506 of the user) if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store, or may prevent the requested object from being sent to the user. In the search-query context, an object may only be generated as a search result if the querying user is authorized to access the object. In other words, the object must have a visibility that is visible to the querying user. If the object has a visibility that is not visible to the user, the object may be excluded from the search results. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner.
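As an example and not by way of limitation, the search-result filtering described above might be sketched as follows, reusing the hypothetical PrivacySetting class from the previous sketch; the object representation is an assumption made only for illustration.

# Hypothetical sketch: an authorization check applied before returning search
# results, so only objects visible to the querying user are included.
def authorized_search_results(querying_user, user_groups, candidate_objects):
    """Drop any object whose privacy setting makes it invisible to the user."""
    return [
        obj for obj in candidate_objects
        if obj["privacy"].is_visible_to(querying_user, user_groups)
    ]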
The foregoing specification is described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the disclosure are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and the accompanying drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the various embodiments.
Additional or alternative embodiments may be embodied in other specific forms without departing from their spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.