CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. application Ser. No. 14/683,779, filed Apr. 10, 2015, which is a continuation-in-part of U.S. application Ser. No. 14/569,169, filed Dec. 12, 2014, which claims the benefit of U.S. Provisional Application No. 62/042,114, filed Aug. 26, 2014, and U.S. Provisional Application No. 62/038,493, filed Aug. 18, 2014. The entire disclosures of each of the above applications are incorporated herein by reference.
BACKGROUND

It is common for users of electronic devices to communicate with other remote users by voice, email, text messaging, instant messaging, and the like. While these means of electronic communication may be convenient in various situations, such means are only suited for transferring isolated segments or files of content between users. For instance, while text messages and email may be used to transmit written dialogue between users, and audio, video, web content, or other files may be transmitted with the text or email messages as attachments, such files are not integrated with the various components of the text or email message in any way.
As a result, electronic device messaging applications have been developed to assist the user in creating digital messages that include, for example, images, audio, or other content. However, the functionality of existing messaging applications is limited. For example, such applications do not enable the user to combine a wide array of digital content segments (e.g., a digital video segment and a digital image) such that portions of two or more content segments, including content segments from different sources, can be presented to the recipient simultaneously as an integrated component of the digital message. Additionally, such applications do not provide the user with the ability to easily edit the digital message during creation. Further, while a variety of different audio and/or video editing software is available, such software does not provide any guidance to the user when preparing a digital content message. In particular, such software does not provide desired text of an unformed digital media message (e.g., a script) to the user as a digital video segment is being captured, nor does such software enable the user to easily replace a portion of the digital video segment, such as at least a portion of a video track of the digital video segment, with an image or other digital content segment of the user's choosing. As a result, such video editing software is not suitable for use in creating digital messages as a means of communication between electronic device users.
Example embodiments of the present disclosure are directed toward curing one or more of the deficiencies described above.
BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
FIG. 1 is a schematic diagram of an illustrative computing environment for implementing various embodiments of digital media message generation.
FIG. 2 is a schematic diagram of illustrative components in an example server that may be used in an example digital media message generation environment.
FIG. 3 is a schematic diagram of illustrative components in an example electronic device that may be used in an example digital media message generation environment.
FIG. 4 shows an illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 5 shows another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 6 shows still another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 7 shows yet another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 8 shows a further illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 9 shows another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 10 shows still another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 11 shows yet another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 12 shows still another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 13 shows yet another illustrative user interface screen displayed on an electronic device that enables users to generate a portion of an example digital media message.
FIG. 14 shows an illustrative user interface screen displayed on an electronic device that enables users to share an example digital media message.
FIG. 15 is a flow diagram of an illustrative method of generating a digital media message.
DETAILED DESCRIPTION

Overview

The disclosure is directed to devices and techniques for generating digital media messages that can be easily shared between users of electronic devices as a means of communication. The techniques described herein enable users to combine a variety of different digital content segments into a single digital media message. For example, the user may create a digital media message by capturing audio content segments, video content segments, digital images, web content, and the like. Such digital content segments may be captured by the user during generation of the digital media message. Alternatively, such content segments may be captured by the user prior to generating the digital media message, and may be saved in a memory of the electronic device or in a memory separate from the device (e.g., on a server accessible via a network, etc.) for incorporation into the digital media message at a later time. As part of generating the digital media message, the user may select one or more of the digital content segments for incorporation into the message and may associate the selected content segments with respective positions in a play sequence of the digital media message.
In some embodiments, the electronic device may assist the user in generating the digital media message in a number of ways. For example, the device may receive a desired script of the digital media message from the user or from another source. For example, the user may dictate, type, and/or otherwise provide text of the script to the electronic device. In examples in which the user types the text of the script using the device, the device may directly receive the text of the script from the user. Alternatively, in examples in which the user dictates the script, the electronic device may receive voice and/or other audio input from the user (e.g., the dictation), and may generate the text of the script, based on such input, using a voice recognition module of the device. The electronic device may provide the text of the script to the user via a display of the device while capturing, recording, and/or otherwise receiving a corresponding digital video segment. In such examples, the received digital video segment may comprise video of the user reading the text of the script, or an approximation thereof. Thus, in some examples, the content of the received digital video segment or other such digital content segment may be based on the script.
Additionally, the digital video segment may include a plurality of consecutive portions, and such portions may be indicative of desired divisions in the digital media message. For example, such portions may be indicative of one or more potential locations in which the user may wish to add or insert additional digital content segments into a play sequence of the digital media message. In some examples, the text of the script may be divided into a plurality of separate parts (e.g., sentences, sentence fragments, groups of sentences, etc.), and at least one of the individual parts may correspond to a respective portion of the digital video segment. The electronic device may form one or more portions of the digital video segment in response to input received from the user. For example, the user may provide a touch input or a plurality of consecutive touch inputs while the digital video segment is being recorded. In such examples, the electronic device may form the plurality of consecutive portions in response to the plurality of consecutive touch inputs. For instance, two consecutive touch inputs may result in the formation of a corresponding portion of the plurality of consecutive portions. In such examples, the first touch input may identify the beginning of a portion of the digital video segment, and the second touch input may identify the end of the portion.
An example digital media message generation method may also include determining text of the digital media message corresponding to each respective portion of the digital video segment described above, and providing the text to the user via the display. In some examples, the electronic device may determine the text of the message by correlating, recognizing, and/or otherwise matching at least part of an audio track of the digital video segment with the text of the script. In such examples, the audio track may be matched with the text of the script based on the elapsed time, sequence, breaks in the audio track, or other characteristics of the audio track, and the matching text of the script may be used and/or provided to the user as text of the digital media message. For example, the electronic device may match individual parts of the script with corresponding respective portions of the digital video segment, and may provide the matching text of the script to the user as text of the digital media message. Alternatively, in other examples at least part of the audio track of the digital video segment may be used as an input to a voice recognition module of the electronic device. In such examples, the voice recognition module may generate the text of the digital media message as an output, based on the audio track.
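For illustration, the sequence-based matching described above can be approximated by pairing the Nth consecutive portion of the digital video segment with the Nth part of the script. The following Python sketch assumes that simple pairing; the function name and data shapes are illustrative and are not drawn from the disclosure:

```python
def match_script_to_portions(script_parts, portions):
    """Pair each consecutive portion (start, end) of the digital video
    segment with the corresponding part of the script, in order. Portions
    beyond the end of the script are paired with an empty string."""
    matched = []
    for i, (start, end) in enumerate(portions):
        text = script_parts[i] if i < len(script_parts) else ""
        matched.append({"start": start, "end": end, "text": text})
    return matched

script = ["Hi Grandma, happy birthday!", "Here is the cake we baked for you."]
portions = [(0.0, 3.2), (3.2, 8.5)]   # seconds, one tuple per portion
for m in match_script_to_portions(script, portions):
    print(f"{m['start']}s-{m['end']}s: {m['text']}")
```

A breaks-based variant would instead detect pauses in the audio track and align script parts to the gaps, but the ordering logic stays the same.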
In each of the embodiments described herein, the text of the digital media message corresponding to the respective consecutive portions of the digital video segment may be provided to the user via the display. In some examples, the text may be displayed with lines, boxes, numbering, markings, coloring, shading, or other visual indicia separating various portions of the text. For example, the text of the digital media message corresponding to a first portion of the plurality of consecutive portions of the digital video segment may be displayed as being separate from text corresponding to a second portion of the plurality of portions. In some examples, the text corresponding to the first portion may be displayed at a first location on the display and the text corresponding to the second portion may be displayed at a second location on the display different from the first location.
Additionally, the user may select one or more digital content segments to be presented simultaneously with audio or other portions of the digital video segment when the digital media message is played by a recipient of the digital media message on a remote device. In such examples, the digital video segment may comprise the main, primary, and/or underlying content on which the digital media message is based, and the various selected digital content segments may comprise additional or supplemental content that may be incorporated into the underlying digital video segment as desired. In such examples, the underlying digital video segment may have an elapsed time, length or duration that defines the elapsed time of the resulting digital media message.
The electronic device may provide a plurality of images via the display to assist the user in selecting one or more digital content segments for inclusion into the digital media message. Each image may be indicative of a respective digital content segment different from the digital video segment. For example, the user may provide an input indicating selection of a portion of the digital video segment with which an additional digital content segment should be associated. The plurality of images described above may be displayed at least partly in response to such an input. Once such images are displayed, the user may provide an input indicating selection of one or more digital content segments associated with corresponding respective images. For example, the user may provide a touch input at a location on the display in which a particular image is provided. Such a touch input may indicate selection of a digital content segment associated with the particular image. The electronic device may then associate the selected digital content segment with the selected portion of the digital video segment.
In some examples, the digital video segment may include and/or may be segmented into separate tracks or sections, such as an audio track and a video track. In example embodiments, at least part of one or more such tracks of the digital video segment may be supplemented, augmented, overwritten, and/or replaced by selected digital content segments during formation of the digital media message. For example, a digital image of a selected digital content segment may replace at least part of the video track of the underlying digital video segment when the selected digital content segment is associated with the digital video segment. As a result, the digital image of the selected digital content segment may be presented simultaneously with a portion of the audio track of the digital video segment corresponding to the replaced portion of the video track. The user may also edit or revise the digital video segment, the digital content segments, or various other portions of the digital media message while the digital media message is being generated.
Replacing, for example, part of the video track of the underlying digital video segment with the digital image may reduce the file size of the digital video segment and/or of a combined segment formed by combining the digital image with the digital video segment. In particular, the replaced portion of the video track would typically be rendered at approximately 300 frames/second for a duration of the portion of the video track, and would be characterized by a commensurate memory and/or file size (e.g., in bytes). The selected digital image, on the other hand, comprises a single frame that will be rendered for the duration of the replaced portion of the video track. Thus, replacing a portion of the video track of the underlying digital video segment with the digital image reduces the number of frames/second of the underlying video segment, thereby reducing the file size thereof. As a result, a digital media message generated using such techniques will have a smaller file size and will require/take up less memory than a corresponding digital media message generated using the underlying digital video segment with the video track unchanged (e.g., without replacing a portion of the video track with a selected digital image).
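The magnitude of this reduction can be estimated with simple arithmetic. The Python sketch below uses the 300 frames/second figure from the passage above; the 5-second portion duration and the 20 KB average encoded-frame size are assumed values chosen only for illustration:

```python
FRAMES_PER_SECOND = 300      # rendering rate cited above
PORTION_SECONDS = 5.0        # assumed duration of the replaced portion
BYTES_PER_FRAME = 20_000     # assumed average size of one encoded frame

# Size of the replaced video-track portion versus a single still image
video_portion_bytes = FRAMES_PER_SECOND * PORTION_SECONDS * BYTES_PER_FRAME
single_image_bytes = BYTES_PER_FRAME     # one frame shown for the whole portion

print(f"video portion: {video_portion_bytes / 1e6:.1f} MB")    # 30.0 MB
print(f"single image:  {single_image_bytes / 1e6:.3f} MB")     # 0.020 MB
print(f"saved:         {(video_portion_bytes - single_image_bytes) / 1e6:.1f} MB")
```

Under these assumptions the five-second replacement shrinks from roughly 30 MB of rendered frames to a single 20 KB image.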
Reducing the file size and/or memory requirements of digital media messages in this way has many technical effects and/or advantages. For example, such a reduction in file size and/or memory requirements will enable the various networks, servers, and/or electronic devices described herein to transfer such digital media messages more quickly and with fewer network, server, and/or device resources. As a result, such a reduction in file size and/or memory requirements will reduce overall network load/traffic, and will improve network, server, and/or electronic device performance. As another example, such a reduction in file size and/or memory requirements will enable the various networks, servers, and/or electronic devices described herein to provide, render, display, and/or otherwise process such digital media messages more quickly and with fewer network, server, and/or device resources. In particular, such a reduced file size may reduce the server and/or electronic device memory required to receive and/or store such messages. Such a reduced file size may also reduce the processor load required to provide, render, display, and/or otherwise process such digital media messages. As a result, such a reduction in file size and/or memory requirements will reduce overall network load/traffic, and will improve network, server, and/or electronic device performance and efficiency.
In various embodiments, the devices and techniques described herein may enable users of electronic devices to communicate by transmitting digital media messages that include a rich, unique, and artful combination of digital video segments and/or other digital content segments. Such content segments may be combined in response to, for example, a series of simple touch gestures received from a user of the electronic device. Methods of generating such digital media messages may be far simpler and less time consuming than using, for example, known audio and/or video editing software. Additionally, methods of generating such digital media messages may enable users to combine and present multiple content segments in ways that are not possible using existing messaging applications. Example methods of the present disclosure may also assist the user in generating digital media messages by providing desired text (e.g., a script) of the digital media message to the user as a guide while the underlying digital video segment of the message is being captured. Such text may be generated (e.g., created) and entered (e.g., typed, dictated, and/or otherwise provided) by the user as part of the digital media message generation process. Such methods may also provide text of the digital media message to the user to assist the user in adding digital content segments to the digital media message at locations that correspond contextually to various portions of the message.
Illustrative environments, devices, and techniques for generating digital media messages are described below. However, the described message generation techniques may be implemented in other environments and by other devices or techniques, and this disclosure should not be interpreted as being limited to the example environments, devices, and techniques described herein.
Illustrative Architecture

FIG. 1 is a schematic diagram of an illustrative computing environment 100 for implementing various embodiments of scripted digital media message generation. The computing environment 100 may include server(s) 102 and one or more electronic devices 104(1)-104(N) (collectively "electronic devices 104") that are communicatively connected by a network 106. The network 106 may be a local area network ("LAN"), a larger network such as a wide area network ("WAN"), or a collection of networks, such as the Internet. Protocols for network communication, such as TCP/IP, may be used to implement the network 106. Although embodiments are described herein as using a network such as the Internet, other distribution techniques may be implemented that transmit information via memory cards, flash memory, or other portable memory devices.
A media message engine 108 on the electronic devices 104 and/or a media message engine 110 on the server(s) 102 may receive one or more digital video segments, digital audio segments, digital images, web content, text files, audio files, spreadsheets, and/or other digital content segments 112(1)-112(N) (collectively, "digital content segments 112" or "content segments 112") and may generate one or more digital media messages 114 (or "media messages 114") using one or more parts, components, audio tracks, video tracks, and/or other portions of at least one of the content segments 112. In example embodiments, the media message engine 108 may receive one or more content segments 112 via interaction of a user 116 with an electronic device 104. In some embodiments, the media message engine 108 may provide such content segments 112 to the media message engine 110 on the server 102, via the network 106, to generate at least a portion of the media message 114. Alternatively, at least a portion of the media message 114 may be generated by the media message engine 108 of the respective electronic device 104. In either example, the media message 114 may be directed to one or more additional electronic devices 118(1)-118(N) (collectively "electronic devices 118") via the network 106. Such electronic devices 118 may be disposed at a location remote from the electronic devices 104, and one or more users 120 may consume the digital media message 114 via one or more of the electronic devices 118.
Each of the electronic devices 104 may include a display component, a digital camera configured to capture still photos, images, and/or digital video, and an audio input and transmission component. Such audio input and transmission components may include one or more microphones. In some examples, the digital camera may include video sensors, light sensors, and/or other video input components configured to capture and/or form a video track of a digital content segment 112, and the electronic device 104 may also include one or more audio sensors, microphones, and/or other audio input and transmission components configured to capture and/or form a corresponding audio track of the same digital content segment 112. The electronic devices 104 may also include hardware and/or software that support voice over Internet Protocol (VoIP) as well as any of the display, input, and/or output components described herein. Each of the electronic devices 104 may further include a web browser that enables the user 116 to navigate to a web page via the network 106. In some embodiments, the user 116 may generate and/or capture one or more digital content segments 112 using, for example, the camera and the microphone. For example, the user 116 may capture one or more digital images using the camera and/or may capture one or more digital video segments using the camera in conjunction with the microphone. Additionally, each web page may present content that the user 116 may capture via the electronic device 104, using various copy and/or save commands included in the web browser of the electronic device 104, and the user may incorporate such content into one or more content segments 112. Any of the content segments 112 described herein may be provided to one or both of the media message engines 108, 110, and the media message engines 108, 110 may incorporate such content segments 112, and/or portions thereof, into the media message 114.
Upon receiving the content segments 112 described herein, the media message engines 108, 110 may tag the respective content segments 112 with associated metadata. The associated metadata may include profile information about the type of content (e.g., image, video, audio, text, animation, etc.), the source of the content segment 112 (e.g., camera, microphone, internet web page, etc.), and/or a position or location in a play sequence of the digital media message 114 with which the content segment 112 is to be associated.
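Such a metadata tag can be modeled as a simple record attached to each content segment 112. The following is a minimal Python sketch; the SegmentMetadata name and its fields are hypothetical and are not drawn from the disclosure:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SegmentMetadata:
    """Illustrative metadata tag for a digital content segment 112."""
    content_type: str        # e.g., "image", "video", "audio", "text", "animation"
    source: str              # e.g., "camera", "microphone", "internet web page"
    play_position: int       # position in the play sequence of the message 114
    captured_at: datetime = field(default_factory=datetime.now)
    duration_seconds: float = 0.0    # zero for still images

# Example: tagging a microphone recording destined for the third play-sequence slot
audio_tag = SegmentMetadata(content_type="audio", source="microphone",
                            play_position=2, duration_seconds=12.5)
print(audio_tag)
```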
The media message engines 108, 110 described herein may integrate and/or otherwise combine two or more content segments 112 to form the digital media message 114. In some examples, the content segments 112 may be presented to the user sequentially when the media message 114 is played. Alternatively, the media message engines 108, 110 may combine at least part of two or more content segments 112 such that, for example, at least part of a first content segment 112 is presented simultaneously with at least part of a second content segment 112 when the media message 114 is played. For example, a second digital content segment 112(2) comprising a digital photo or image may be combined with audio from at least part of a first digital content segment 112(1) comprising a digital video segment. As a result, the audio from the first digital content segment 112(1) may be presented simultaneously with the image from the second digital content segment 112(2) when the resulting digital media message 114 is played. In such examples, the first digital content segment 112(1) (e.g., the digital video segment) may comprise an underlying digital content segment forming the basis and/or background of the digital media message 114. In such examples, one or more additional digital content segments (e.g., digital images, audio, etc.) may be combined with the first digital content segment 112(1) when the digital media message 114 is formed.
During this process, the additional digital content segments may replace corresponding portions of the first digital content segment 112(1). For example, a digital image of the second digital content segment 112(2) may replace a corresponding video portion and/or image of the first digital content segment 112(1) when the second digital content segment 112(2) is combined with the particular portion of the first digital content segment 112(1). As a result, audio of the particular portion of the first digital content segment 112(1) may be presented simultaneously with the digital image of the second digital content segment 112(2) when the resulting digital media message 114 is played. The media message engines 108, 110 may also distribute the finalized media message 114 to one or more of the electronic devices 118. Various example components and functionality of the media message engines 108, 110 will be described in greater detail below with respect to, for example, FIGS. 2 and 3.
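One way to realize this combination without re-encoding the underlying segment up front is to keep it intact and record each replacement as an overlay span on a playback timeline. The Python sketch below assumes that design; the Overlay and MediaMessageTimeline names and the frame_source_at method are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Overlay:
    """A digital image shown over [start, end) of the underlying video,
    while the underlying audio track continues to play."""
    start: float
    end: float
    image_path: str

@dataclass
class MediaMessageTimeline:
    base_video_path: str     # underlying digital video segment, e.g., 112(1)
    overlays: list

    def frame_source_at(self, t: float) -> str:
        """Return which source supplies the visible frame at time t."""
        for ov in self.overlays:
            if ov.start <= t < ov.end:
                return ov.image_path
        return self.base_video_path

timeline = MediaMessageTimeline("birthday.mp4", [Overlay(3.2, 8.5, "cake.jpg")])
print(timeline.frame_source_at(5.0))   # cake.jpg: image shown, audio continues
print(timeline.frame_source_at(9.0))   # birthday.mp4: original video resumes
```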
In any of the example embodiments described herein, replacing, for example, a portion of a first digital content segment 112(1) (e.g., at least a portion of a video track of a digital video segment) with a second digital content segment 112(2) (e.g., a digital image) may reduce the file size and/or memory requirements of the first digital content segment 112(1) and/or of a combined segment formed by combining the second digital content segment 112(2) with the first digital content segment 112(1). In some examples, a replaced portion of a video track of the first digital content segment 112(1) may be rendered at approximately 300 frames/second for a duration of the portion of the video track, and would be characterized by a commensurate memory and/or file size (e.g., in bytes). The digital image of the second digital content segment 112(2), on the other hand, may comprise a single frame that will be rendered for the duration of the replaced portion of the video track. Thus, replacing a portion of the video track of the first digital content segment 112(1) with the digital image of the second digital content segment 112(2) may reduce the number of frames/second of the combined segment, thereby reducing the file size thereof relative to the unaltered first digital content segment 112(1). As a result, a digital media message 114 generated using such techniques will have a smaller file size and will require/take up less memory than a corresponding digital media message generated using the first digital content segment 112(1) with the video track unchanged (e.g., without replacing a portion of the video track with a selected digital image).
Reducing the file size and/or memory requirements of digital media messages 114 in this way has many technical effects and/or advantages. For example, such a reduction in file size and/or memory requirements will enable the various networks 106, servers 102, and/or electronic devices 104, 118 described herein to transfer such digital media messages 114 more quickly and with fewer network, server, and/or device resources. As a result, such a reduction in file size and/or memory requirements will reduce overall network load/traffic, and will improve network, server, and/or electronic device performance. As another example, such a reduction in file size and/or memory requirements will enable the various networks 106, servers 102, and/or electronic devices 104, 118 described herein to provide, render, display, and/or otherwise process such digital media messages 114 more quickly and with fewer network, server, and/or device resources. In particular, such a reduced file size may reduce the server and/or electronic device memory required to receive and/or store such messages 114. Such a reduced file size may also reduce the server and/or electronic device processor load required to provide, render, display, and/or otherwise process such digital media messages 114. As a result, such a reduction in file size and/or memory requirements will reduce overall network load/traffic, and will improve network, server, and/or electronic device performance and efficiency.
In various embodiments, the electronic devices 104, 118 may include a mobile phone, a portable computer, a tablet computer, an electronic book reader device (an "eBook reader device"), or other devices. Each of the electronic devices 104, 118 may have software and hardware components that enable the display of digital content segments 112, either separately or combined, as well as the various digital media messages 114 described herein. The electronic devices 104, 118 noted above are merely examples, and other electronic devices that are equipped with network communication components, data processing components, electronic displays for displaying data, and audio output capabilities may also be employed.
Example Server

FIG. 2 is a schematic diagram of illustrative components in example server(s) 102 of the present disclosure. The server(s) 102 may include one or more processor(s) 202 and memory 204. The memory 204 may include computer readable media. Computer readable media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. As defined herein, computer readable media does not include communication media in the form of modulated data signals, such as carrier waves, or other transmission mechanisms.
The media message engine 110 may be a hardware or a software component of the server(s) 102 and, in some embodiments, the media message engine 110 may comprise a component of the memory 204. As shown in FIG. 2, in some embodiments the media message engine 110 may include one or more of a content presentation module 206, a segment collection module 208, an analysis module 210, an integration module 212, and a distribution module 214. The modules may include routines, program instructions, objects, and/or data structures that perform particular tasks or implement particular abstract data types. The server(s) 102 may also implement a data store 216 that stores data, digital content segments 112, and/or other information or content used by the media message engine 110.
The content presentation module 206 may enable a human reader to select digital content segments 112 for the purpose of including the selected digital content segments 112 in a digital media message 114. In various embodiments, the content presentation module 206 may present a web page to a user 116 of an electronic device 104, such as via the network 106. In further embodiments, the content presentation module 206 may present digital content, information, and/or one or more digital content segments 112 to the user 116 of an electronic device 104 via the network 106. The content presentation module 206 may also enable the user 116 to select content, information, and/or one or more digital content segments 112. Once the user 116 has selected, for example, a digital content segment 112, the content presentation module 206 may present further content, information, and/or digital content segments 112 to the user 116. The content presentation module 206 may also tag the selected digital content segment 112 for inclusion in the digital media message 114.
The segment collection module 208 may collect audio recordings, video recordings, images, files, web content, audio files, video files, web addresses, and/or other digital content segments 112 identified, selected, and/or captured by the user 116. Additionally, the segment collection module 208 may label each digital content segment 112 with metadata. The metadata may include profile information about the type of content (e.g., image, video, audio, text, animation, etc.), the source of the content segment 112 (e.g., camera, microphone, internet web page, etc.), and/or a position or location in a play sequence of the digital media message 114 with which the content segment 112 is to be associated. For example, the metadata for an audio recording may include identification information identifying the digital content segment 112 as comprising an audio recording, information indicating that the digital content segment 112 was captured using a microphone of an electronic device 104, information indicating the date and time of recordation, the length of the recording, and/or other information. Such metadata may be provided to the content presentation module 206 by the segment collection module 208 or, alternatively, such metadata may be provided to the segment collection module 208 by the content presentation module 206.
The analysis module 210 may be used by the segment collection module 208 to determine whether a collected content segment 112 meets certain quality criteria. In various embodiments, the quality criteria may include whether a background noise level in the content segment 112 is below a maximum noise level, whether video and/or image quality in the content segment 112 is above a minimum pixel or other like quality threshold, and so forth.
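Such quality checks reduce to simple threshold comparisons. A minimal Python sketch follows; the function name, the dBFS noise unit, and the threshold values are assumptions chosen for illustration, as the disclosure does not specify them:

```python
def meets_quality_criteria(noise_level_dbfs: float,
                           width_px: int,
                           height_px: int,
                           max_noise_dbfs: float = -40.0,       # assumed noise ceiling
                           min_pixels: int = 640 * 480) -> bool:  # assumed pixel floor
    """Return True if a collected content segment 112 passes both checks."""
    quiet_enough = noise_level_dbfs < max_noise_dbfs
    sharp_enough = (width_px * height_px) >= min_pixels
    return quiet_enough and sharp_enough

# Example: a 720p segment with modest background noise passes both criteria
print(meets_quality_criteria(noise_level_dbfs=-52.3, width_px=1280, height_px=720))
```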
The integration module 212 may use at least a portion of the metadata described above to assess and/or otherwise determine which content segment 112 to select for integration into the digital media message 114. Additionally or alternatively, the integration module 212 may use results received from the analysis module 210 to make one or more such determinations. Such determinations may be provided to the user 116 of the electronic device 104 while a digital media message 114 is being generated as a way of guiding the user with regard to the combination of one or more content segments 112. For instance, the integration module 212 may provide advice, suggestions, or recommendations to the user 116 as to which content segment 112 to select for integration into the digital media message 114 based on one or more of the factors described above.
The distribution module 214 may facilitate presentation of the digital media message 114 to one or more users 120 of the electronic devices 118. For example, once completed, the distribution module 214 may direct the digital media message 114 to one or more of the electronic devices 118 via the network 106. Additionally or alternatively, the distribution module 214 may be configured to direct one or more digital content segments 112 between the servers 102 and one or more of the electronic devices 104. In such embodiments, the distribution module 214 may comprise one or more kernels, drivers, or other like components configured to provide communication between the servers 102 and one or more of the electronic devices 104, 118.
The data store 216 may store any of the metadata, content, information, or other data utilized in creating one or more content segments 112 and/or digital media messages 114. For example, the data store 216 may store any of the images, video files, audio files, web links, media, or other content that is captured or otherwise received via the electronic device 104. Such content may be, for example, provided to the data store 216 via the network during creation of a content segment 112 and/or a digital media message 114. Alternatively, such content may be provided to the data store 216 prior to generating a content segment 112 and/or a digital media message 114. In such examples, such content may be obtained and/or received from the data store 216 during generation of a content segment 112 and/or a digital media message 114.
In example embodiments, one or more modules of the media message engine 110 described above may be combined or omitted. Additionally, one or more modules of the media message engine 110 may also be included in the media message engine 108 of the electronic device 104. As a result, the example methods and techniques of the present disclosure, such as methods of generating a digital media message 114, may be performed solely on the server 102 or solely on one of the electronic devices 104. Alternatively, in further embodiments, methods and techniques of the present disclosure may be performed, at least in part, on both the server 102 and one of the electronic devices 104.
Example Electronic Device

FIG. 3 is a schematic diagram of illustrative components in an example electronic device 104 that is used to prepare and/or consume digital content segments 112 and digital media messages 114. As noted above, the electronic device 104 shown in FIG. 3 may include one or more of the components described above with respect to the server 102 such that digital content segments 112 and/or digital media messages 114 may be created and/or consumed solely on the electronic device 104. Additionally and/or alternatively, the electronic device 104 may include one or more processor(s) 302 and memory 304. The memory 304 may include computer readable media. Computer readable media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. As defined herein, computer readable media does not include communication media in the form of modulated data signals, such as a carrier wave, or other transmission mechanisms.
Similar to the memory 204 of the server 102, the memory 304 of the electronic device 104 may also include a media message engine 108, and the engine 108 may include any of the modules or other components described above with respect to the media message engine 110. Additionally or alternatively, the media message engine 108 of the electronic device 104 may include one or more of a content interface module 306, a content display module 308, a user interface module 310, a data store 312 similar to the data store 216 described above, and a voice recognition module 314. The modules described herein may include routines, programs, instructions, objects, and/or data structures that perform particular tasks or implement particular abstract data types. The electronic device 104 may also include one or more cameras, video cameras, microphones, displays (e.g., a touch screen display), keyboards, mice, touch pads, proximity sensors, capacitance sensors, or other user interface devices 316. Such user interface devices 316 may be operably connected to the processor 302 via, for example, the user interface module 310. As a result, input received via one or more of the user interface devices 316 may be processed by the user interface module 310 and/or may be provided to the processor 302 via the user interface module 310 for processing.
The content interface module 306 may enable the user to request and download content, digital content segments 112, or other information from the server(s) 102 and/or from the internet. The content interface module 306 may download such content via any wireless or wired communication interfaces, such as Universal Serial Bus (USB), Ethernet, Bluetooth®, Wi-Fi, and/or the like. Additionally, the content interface module 306 may include and/or enable one or more search engines or other applications on the electronic device 104 to enable the user 116 to search for images, video, audio, and/or other content to be included in a digital media message 114.
The content display module 308 may present content, digital content segments 112, digital media messages 114, or other information on a display of the electronic device 104 for viewing. For example, the content display module 308 may present text of a script of the digital media message 114, text of the digital media message 114 itself, and/or other content to the user 116 via such a display. In various embodiments, the content display module 308 may provide functionalities that enable the user 116 to manipulate individual digital content segments 112 or other information as a digital media message 114 is being generated. For example, the content display module 308 may provide editing functionality enabling the user 116 to delete, move, modify, augment, cut, paste, copy, save, or otherwise alter portions of each digital content segment 112 as part of generating a digital media message 114.
The voice recognition module 314 may comprise hardware (e.g., one or more processors and/or memory), software (e.g., one or more operating systems, kernels, neural networks, etc.), or a combination thereof configured to receive audio input, such as an audio track of a received digital content segment 112, an audio file, a video file, and/or other input. In response to receiving such input, the voice recognition module 314 may process the input and determine text included in such input. For example, the voice recognition module 314 may receive an audio track of a digital video segment comprising video of the user 116 speaking. The voice recognition module 314 may process such input using one or more voice recognition algorithms, neural networks, look-up tables, and/or other components to determine text included in the input, and may provide such text to the content display module 308 and/or other components of the electronic device 104 as an output.
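Because the disclosure does not mandate a particular speech-to-text algorithm, the voice recognition module 314 can be modeled as a thin wrapper around an interchangeable engine. In the Python sketch below, the VoiceRecognitionModule class and the injected recognize callable are hypothetical, and the lambda stands in for a real platform speech API or on-device model:

```python
from typing import Callable

class VoiceRecognitionModule:
    """Illustrative stand-in for the voice recognition module 314."""

    def __init__(self, recognize: Callable[[bytes], str]):
        self._recognize = recognize    # engine: raw audio bytes -> transcript

    def transcribe(self, audio_track: bytes) -> str:
        """Determine and return the text included in the audio input."""
        return self._recognize(audio_track)

# Usage: wire in any engine; a trivial placeholder is used here.
module = VoiceRecognitionModule(
    recognize=lambda audio: "<transcript of %d bytes>" % len(audio))
print(module.transcribe(b"\x00" * 1024))
```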
Example User Interfaces

FIG. 4 shows an illustrative user interface 400 that enables the user 116 to generate a digital media message 114. For example, the user interface 400 may be displayed on an electronic device 104 that enables users to generate, create, capture, search for, and/or select digital content segments 112, and to generate and/or consume digital media messages 114. As noted above, such digital content segments 112 may comprise digital video segments (including both audio and video portions or tracks), digital audio segments, digital photos or images, and/or other types of digital content. The user interface 400 may be displayed, for example, on a display 402 of the electronic device 104. In some examples, the user interface 400 may be a web page that is presented to the user 116 via a web browser on the electronic device 104. Alternatively, the user interface 400 may be an interface generated and provided by the content display module 308 as part of a digital media message generation application operating locally on the electronic device 104. For the duration of this disclosure, example embodiments in which the user interface 400 is generated and provided by the content display module 308 and/or other components of the media message engine 108 as part of a digital media message generation application operating locally on the electronic device 104 will be described unless otherwise noted.
As shown, the media message engine 108 may present a user interface 400 that includes a first portion 404 displaying text 406(1), 406(2) . . . 406(N) (collectively "text 406"), images, video, or other like content. The user interface 400 may also include a second portion 408 providing one or more controls, images, thumbnails, or other content or devices configured to assist the user 116 in generating a digital media message 114. In example embodiments, one or more such images, thumbnails, or other content or devices may alternatively be provided in the first portion 404.
Further, in some examples the text 406 provided in the first portion 404 may comprise text 406 of a script (e.g., "script text 406") of the digital media message 114 being created by the user 116. For example, the electronic device 104 may receive a desired script of the digital media message 114 from the user 116 or from another source. In some embodiments, the user 116 may dictate, type, and/or otherwise provide text 406 of the script to the electronic device 104 via one or more of the display 402 and/or the various user interface devices 316 described herein. For example, the user 116 may type text 406 of the script using a physical keyboard connected to the electronic device 104. Alternatively, the user 116 may type text 406 of the script using a virtual keyboard displayed on the display 402. In examples in which the user 116 types the text 406 of the script, the electronic device 104 may directly receive the text 406 of the script from the user 116. Alternatively, in examples in which the user 116 dictates the text 406 of the script, a microphone, audio sensor, and/or other user interface device 316 of the electronic device 104 may receive voice and/or other audio input from the user 116 (e.g., the dictation). The user interface device 316 may direct such input to the voice recognition module 314, and in response, the voice recognition module 314 may generate the text 406 of the script based on such input. As shown in FIG. 4, the text 406 may include and/or may be separated into separate sentences, sentence fragments, groups of sentences, or other different parts. For example, the text 406(1) may be displayed as being separate from the text 406(2), and so on. While the text 406(1), 406(2) . . . 406(N) comprises complete individual sentences, in other embodiments, at least one of the separate parts of the text 406 may include a sentence, a sentence fragment, a group of sentences, and/or a combination thereof.
Additionally, as will be described below, the electronic device 104 may provide the text 406 of the script to the user 116 via the display 402 while capturing, recording, and/or otherwise receiving a digital video segment or other such digital content segment 112. In such examples, the received digital video segment may comprise video of the user 116 reading the text 406 of the script. Thus, in some examples, the content of the received digital video segment may be based on the script and/or the text 406, and at least one of the individual parts of the text 406 may correspond to a respective portion of a digital video segment.
As will be described in greater detail below, the media message engine 108 may receive input from a user 116 of the electronic device 104 via either the first portion 404 or the second portion 408. In some embodiments, such input may comprise one or more gestures, such as a touch command, a touch and hold command, a swipe, a single tap, a double tap, or other gesture. Receipt of such an input may cause the media message engine 108 to capture and/or otherwise receive a first digital content segment 112 via, for example, the camera or other user interface device 316 of the electronic device 104. In such embodiments, the received digital content segment 112 may be displayed within the first portion 404 as the content segment 112 is being recorded and/or otherwise captured by the camera. The media message engine 108 may also associate the digital content segment 112 with a desired position in a play sequence of a digital media message 114, and may direct the digital content segment 112 to a portion of the memory 304 for storage.
The various controls of the user interface 400 may be configured to assist the user 116 in capturing one or more digital content segments 112, modifying one or more of the digital content segments 112, and/or generating one or more digital media messages 114. For example, the user interface 400 may include a menu control 410 configured to provide the user 116 with access to, for example, a user profile, different drafts of various digital media messages 114, and/or photo or video libraries stored in the memory 304. Additionally, the user interface 400 may include a preview and/or share control 412 configured to control the content display module 308 to provide one or more draft digital media messages 114, or one or more such messages that are in the process of being generated, to the user 116 for viewing via the display 402. The control 412 may also control one or more components of the media message engine 108 to enable sharing of the digital media message 114 being previewed with users 120 of remote electronic devices 118 via one or more components of the media message engine 108. The user interface 400 may further include a user interface device control 414 configured to control one or more operations of a user interface device 316 of the electronic device 104. For example, the user interface device control 414 may be configured to control activation of one or more cameras, microphones, or other components of the device 104. In particular, the user interface device control 414 may be configured to select and/or toggle between a first camera of the electronic device 104 on a first side of the electronic device 104 (e.g., facing toward the user 116) and a second camera on a second side of the electronic device 104 opposite the first side (e.g., facing away from the user 116).
The user interface 400 may also include a plurality of additional controls, including one or more navigation controls 416 and/or one or more editing controls 418. For example, the user interface 400 may include a navigation control 416 that, upon selection thereof by the user 116, may enable the user to browse backward or forward between different user interfaces 400 while generating a digital media message 114. For example, a first navigation control 416 may comprise a "back" control while a second navigation control 416 may comprise a "forward" control.
Additionally, one or more of the editing controls 418 may enable a user 116 to add, remove, cut, paste, draw, rotate, flip, shade, color, fade, darken, and/or otherwise modify various aspects of the digital media message 114 and/or various digital content segments 112. For example, one or more of the editing controls 418 may comprise an "undo" control that enables the user 116 to cancel the last action performed via the user interface 400. In some embodiments, actuation of the editing control 418 may enable the user 116 to delete and/or otherwise remove one or more digital content segments 112 from a play sequence of the digital media message 114. Although a variety of different controls have been described above with regard to the user interface 400, it is understood that in further example embodiments one or more additional controls may be presented to the user 116 by the media message engine 108. For example, such editing controls 418 may further comprise any audio, video, image, or other editing tools. In some examples, at least one of the controls described herein may be configured to modify a first digital content segment 112 before a second, third, or other additional digital content segment 112 is captured and/or otherwise received by the media message engine 108.
Additionally, the user interface 400 may include a capture control 420 configured to receive one or more inputs from the user 116 and to capture one or more digital content segments 112 in response to such input. For example, a finger or other part of a hand 422 of the user 116 may provide a tap, touch, swipe, touch and hold, and/or other type of input via the capture control 420 and/or at other locations on either the first portion 404 or the second portion 408. In response to receiving such input, the capture control 420 may direct one or more signals corresponding to and/or otherwise indicative of such input to the user interface module 310. The user interface module 310 and/or other components of the media message engine 108, either alone or in combination with the processor 302, may direct the camera and/or other user interface device 316 to capture one or more digital content segments 112 in response to such input. Such digital content segments 112 may then be stored automatically in the memory 304 for use in generating one or more digital media messages 114. For example, a first touch input received via the capture control 420 may start a record or capture operation performed by the user interface device 316, and a second touch input received via the capture control 420 may cause a first portion of the digital video segment to be formed while recording continues. This process may be repeated multiple times to create multiple consecutive portions of a digital video segment. In such examples, a double tap or other input received via the capture control 420 may stop an ongoing capture operation. In an example embodiment, the user interface 400 may also include a timer 424 configured to provide visual indicia indicative of one or more aspects of the digital content segment 112 and/or of the digital media message 114. For example, the timer 424 may display an elapsed time of a digital content segment 112 that is being captured and/or that is being played via the display 402.
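The tap semantics just described (a first tap starts recording, each later tap closes a portion while recording continues, and a double tap stops) can be modeled as a small state machine over tap timestamps. The Python sketch below is one possible reading; the CaptureControl name is hypothetical, and the choice to close the final portion on the double tap is an assumption:

```python
class CaptureControl:
    """Illustrative model of the capture control 420's tap handling."""

    def __init__(self):
        self.recording = False
        self.portions = []           # list of (start, end) times, in seconds
        self._portion_start = None

    def single_tap(self, t: float):
        if not self.recording:
            self.recording = True    # first tap: begin the capture operation
            self._portion_start = t
        else:
            # Later tap: close the current portion; recording continues.
            self.portions.append((self._portion_start, t))
            self._portion_start = t

    def double_tap(self, t: float):
        # Double tap: stop the ongoing capture, closing the last portion.
        if self.recording:
            self.portions.append((self._portion_start, t))
            self.recording = False

control = CaptureControl()
for tap in (0.0, 4.0, 9.0):
    control.single_tap(tap)
control.double_tap(12.0)
print(control.portions)   # [(0.0, 4.0), (4.0, 9.0), (9.0, 12.0)]
```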
As noted above, the text 406 may include individual portions or parts 406(1), 406(2) . . . 406(N), and in some examples the script text 406 may be divided into one or more such parts 406(1), 406(2) . . . 406(N) in response to corresponding inputs received from the user 116 via the capture control 420 or via one or more user interface devices 316 of the electronic device 104. For example, in embodiments in which the user 116 types and/or otherwise enters the text 406 directly via a keyboard or other like user interface device 316, the user 116 may control the content display module 308 and/or other components of the message generation engine 108 to separate the text 406 into such parts 406(1), 406(2) . . . 406(N) by pressing a "return" key or other like key of the keyboard. Alternatively, in embodiments in which the user 116 enters the text 406 via the microphone or other user interface devices 316, such as by dictation, the user 116 may control the content display module 308 and/or other components of the message generation engine 108 to separate the text 406 into such parts 406(1), 406(2) . . . 406(N) by providing consecutive touch inputs via the capture control 420. For example, a first touch input may begin recording such dictation, and a second consecutive touch input may form a break in the text 406, thereby separating the text 406 into a first part 406(1) and a second part 406(2) consecutive to the first part 406(1).
The user interface 400 may also include one or more controls 426 configured to assist the user 116 in transitioning to a next stage of digital media message generation. For example, the control 426 may initially not be displayed by the display 402 while a first digital content segment 112 is being recorded. Once recording of the first digital content segment 112 is complete, on the other hand, such as when an input is received via the capture control 420 stopping recording, the control 426 may appear on the display 402. The control 426 may be operable as a "continue" control configured to enable the user 116 to access a further user interface in which the electronic device 104 may record and/or otherwise capture one or more additional digital content segments 112, such as a digital video segment. In some examples, the control 426 may be operable to enable the user 116 to access a plurality of digital content segments 112 for incorporation into the digital media message 114. In further examples, the control 426 may also be operable to provide the user 116 with access to one or more folders, libraries, or other digital content sources within which a plurality of digital content segments 112 are stored.
The electronic device 104 may also be configured to record, capture, and/or otherwise receive a digital video segment or other digital content segment 112 that is based at least partly on the script described above. For example, FIG. 5 illustrates a user interface 500 of the present disclosure in which an image 502 is provided by the content display module 308 in the first portion 404. In example embodiments, the image 502 displayed in the first portion 404 may be one or more images, photos, or first frames of a digital video segment stored in the memory 304 of the electronic device 104. Alternatively, the content display module 308 may present one or more images 502 in the first portion 404 that are obtained in real time via, for example, a camera or other user interface device 316 of the electronic device 104. For example, the first portion 404 may provide an image 502 of objects that are within a field of view of the camera.
The user interface 500 may also be configured to provide the text 406 of the script in order to assist the user 116 in generating a corresponding digital media message 114. For example, the user interface 500 may include one or more windows 504 configured to provide the text 406. In some embodiments, the text 406 may be provided via the window 504 in response to a touch input or other input received from the user 116, such as via the capture control 420 or other controls described herein. Additionally, the text 406 may remain stationary in the window 504, may scroll from top to bottom within the window 504, or may be displayed in any other format. For example, the text 406 may be displayed in a scrolling manner within the window 504 at a default constant scrolling speed. The user 116 may increase or decrease the scrolling speed via a dedicated scrolling speed control associated with the window 504 or other control of the user interface 500. Alternatively, in other embodiments the user 116 may manually scroll and/or otherwise advance the text 406 via one or more scroll bars or other controls associated with the window 504. In any of the embodiments described herein, the content display module 308 and/or the message generation engine 108 may provide the text 406 of the script via the window 504 while the camera or other user interface device 316 is controlled to capture, record, and/or otherwise receive a corresponding digital video segment or other such digital content segment 112. In such examples, a received digital video segment may comprise video of the user 116 reading the text 406 of the script provided via the window 504. Thus, in some examples, the content of the received digital video segment may be based on the script.
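Auto-scrolling the script at a constant speed amounts to mapping elapsed recording time to a vertical text offset. A minimal Python sketch follows, assuming a speed expressed in lines per second; both the function name and the unit are illustrative choices:

```python
def scroll_offset_lines(elapsed_seconds: float,
                        lines_per_second: float = 0.5) -> int:
    """Index of the topmost script line visible in the window 504 after
    elapsed_seconds of recording at a constant scrolling speed."""
    return int(elapsed_seconds * lines_per_second)

script_lines = ["Hi Grandma, happy birthday!",
                "Here is the cake we baked for you.",
                "We wish you were here with us."]

# After 4 seconds at the default speed, the window has scrolled two lines.
print(script_lines[scroll_offset_lines(4.0)])   # "We wish you were here with us."
```

A user-adjustable speed control would simply change lines_per_second, and manual scrolling would bypass this mapping entirely.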
Additionally, the digital video segment or other received digital content segment 112 may include a plurality of consecutive portions. In some examples, such portions of the digital video segment may be indicative of desired divisions in the digital media message 114 being generated. Further, such portions may be indicative of one or more potential locations in which the user 116 may wish to add or insert additional digital content segments 112 into a play sequence of the digital media message 114. For example, the user 116 may provide a touch input or a plurality of consecutive touch inputs, such as via the capture control 420, while the digital video segment is being recorded and while the text 406 is being provided via the window 504. In such examples, the message generation engine 108 may form the plurality of consecutive portions of the digital video segment in response to one or more such inputs. For instance, the message generation engine 108 may receive two consecutive touch inputs via the capture control 420 and may, in response, insert a break in the digital video segment. Such a break in the digital video segment may result in the formation of a corresponding portion of the plurality of consecutive portions. In some examples, at least one of the individual parts of the text 406 described above may correspond to a respective portion of the digital video segment.
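A minimal Swift sketch of this break-insertion behavior follows, assuming the tap timestamps have already been collected while recording; the Portion type and the portions(tapTimes:totalLength:) function are hypothetical names used only for illustration.

    import Foundation

    // One portion per span between consecutive breaks.
    struct Portion {
        let start: TimeInterval
        let end: TimeInterval
        var duration: TimeInterval { end - start }
    }

    // Each tap timestamp inserts a break; the recording's total length
    // closes the final portion.
    func portions(tapTimes: [TimeInterval], totalLength: TimeInterval) -> [Portion] {
        let cuts = tapTimes.sorted().filter { $0 > 0 && $0 < totalLength }
        let boundaries = [0.0] + cuts + [totalLength]
        return zip(boundaries, boundaries.dropFirst())
            .map { Portion(start: $0, end: $1) }
    }

    // Taps at 4 s and 9 s split a 15 s recording into three portions.
    for p in portions(tapTimes: [4, 9], totalLength: 15) {
        print("portion:", p.start, "-", p.end)
    }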
The example user interface 500 may also include a progress bar 506. In example embodiments, the progress bar 506 may provide visual indicia of, for example, the amount of time elapsed while a digital video segment or other digital content segment 112 is being captured and/or while a captured digital content segment 112 is being played. In some embodiments, the progress bar 506 may be disposed between the first portion 404 and the second portion 408 of the user interface 500. In additional examples, on the other hand, the progress bar 506 may be located at any desirable position on the display 402 to facilitate providing information to the user 116.
The progress bar 506 may include one or more portions 508 or other dynamic visual indicia. For example, the progress bar 506 may be provided via the display 402 while a digital video segment, digital audio segment, or other such digital content segment 112 is being captured. In such embodiments, the progress bar 506 may include visual indicia, such as the at least one portion 508, having a length that changes, in real time, as the digital content segment 112 is being captured. For example, the portion 508 may move or expand in the direction of arrow 510 as a digital content segment 112 is being recorded.
In some embodiments, the progress bar 506 may include a plurality of separate and/or different portions 508, and each respective portion 508 may correspond to a single respective digital content segment 112 of the digital media message 114 being created and/or played. Alternatively, each respective portion 508 may correspond to a single respective portion of the plurality of consecutive portions of the digital video segment. Each of the one or more portions 508 of the progress bar 506 may have a visibly different appearance on the display 402 in order to identify, for example, the location and/or the amount of time associated with the respective portions of the digital video segment. For example, such different portions 508 may be displayed using different colors, different shading, different patterns, or other distinct characteristics. Additionally, in some embodiments the different portions 508 may be separated by at least one break, line, mark, or other visual indicia included in the progress bar 506.
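One plausible way to size the portions 508 is to make each portion's on-screen length proportional to the duration of the corresponding portion of the digital video segment. The short Swift sketch below assumes exactly that rule; the function name and parameters are illustrative, not taken from the disclosure.

    import Foundation

    // Width of each portion 508, proportional to its portion's duration
    // (an assumed sizing rule, used here only for illustration).
    func portionWidths(durations: [TimeInterval], barWidth: Double) -> [Double] {
        let total = durations.reduce(0, +)
        guard total > 0 else { return durations.map { _ in 0 } }
        return durations.map { barWidth * $0 / total }
    }

    // A 320-point bar over portions of 4 s, 5 s, and 6 s.
    print(portionWidths(durations: [4, 5, 6], barWidth: 320))
    // -> [85.33..., 106.66..., 128.0]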
As previously noted, a first digital content segment 112(1) captured by the electronic device 104 while the user interface 500 is operable may comprise a digital video segment. In such examples, the digital video segment may comprise the main, primary, and/or underlying content on which a resulting digital media message 114 will be based. Such a digital video segment may have a total elapsed time, length, or duration that defines the elapsed time of the resulting digital media message 114. The elapsed time of the digital video segment may be displayed by the timer 424, and the length of a portion 508, such as a first portion, of the progress bar 506 may represent the length or duration of the underlying digital video segment. Once recording of the underlying digital video segment has been completed, the control 426 may enable the user 116 to access a plurality of additional digital content segments 112(N) for incorporation into the digital media message 114, and the various additional digital content segments 112(N) may comprise additional or supplemental content that may be incorporated into the digital media message 114 as desired. As will be described below, at least part of the first digital content segment 112(1) (e.g., at least part of the underlying digital video segment) may be supplemented, augmented, overwritten, and/or replaced by such additional digital content segments 112(N) during formation of the digital media message 114. For example, a digital image of a second digital content segment 112(2) may replace at least part of a video track of the first digital content segment 112(1). As a result, the digital image of the second digital content segment 112(2) may be presented simultaneously with a portion of an audio track of the first digital content segment 112(1) corresponding to the replaced portion of the video track.
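The replacement described above can be pictured as swapping out only the video component of a portion while leaving its audio component untouched. The following Swift sketch assumes a simple two-track model of a message segment; the VideoSource and MessageSegment types are hypothetical illustrations, not identifiers from the disclosure.

    import Foundation

    // The visual component of a portion: either the recorded video track
    // or a replacement still image.
    enum VideoSource {
        case recordedVideo(file: String)
        case stillImage(file: String)
    }

    struct MessageSegment {
        let audioFile: String     // audio track of the portion, kept as-is
        var video: VideoSource    // video track, possibly replaced
    }

    var segment = MessageSegment(
        audioFile: "portion2-audio.m4a",
        video: .recordedVideo(file: "portion2-video.mov"))

    // Replace only the video track; the audio continues unchanged.
    segment.video = .stillImage(file: "house-photo.jpg")
    print(segment)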
FIG. 6 illustrates another example user interface 600 of the present disclosure. In example embodiments, the media message engine 108 may provide such an example user interface 600 once an underlying digital video segment or other such digital content segment 112 has been received. For example, once a digital video segment has been captured using the user interface 500 of FIG. 5, the message generation engine 108 may determine text 602(1), 602(2), 602(3), 602(4) (collectively "text 602" or "message text 602") of the digital media message 114 being created, and may provide the determined text 602 to the user 116 via the display 402. The text 602 may correspond to respective portions of the underlying digital video segment described above, and in some examples, the text 602 may correspond to respective text 406 of the script. Additionally or alternatively, at least some of the text 602 may vary from the text 406 of the script.
For example, the message generation engine 108 may determine the text 602 of the digital media message 114 by correlating, recognizing, and/or otherwise matching at least part of an audio track of a digital video segment with the text 406 of the script. In such examples, the audio track may be matched with the text 406 of the script based on the elapsed time, sequence, or other characteristics of the audio track, and the matching text 406 of the script may be used and/or provided to the user 116 as text 602 of the digital media message 114 in the user interface 600. In particular, the message generation engine 108 may match individual parts of the text 406 described above with respect to FIG. 4 with corresponding respective portions of the digital video segment described above with respect to FIG. 5. The content display module 308 may cause the matching text 406 of the script to be provided to the user 116 as text 602 of the digital media message 114, and each of the respective portions of the digital video segment may include corresponding text 602.
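A minimal Swift sketch of sequence-based matching follows, assuming the simplest possible rule: the n-th individual part of the script text 406 is paired with the n-th consecutive portion of the digital video segment. The sample strings are invented for illustration.

    import Foundation

    // The n-th part of the script text 406 becomes the message text 602
    // for the n-th consecutive portion (an assumed positional pairing).
    let scriptParts = [
        "Hi Anne,",
        "here is the house we talked about.",
        "Call me!"
    ]

    let messageText = zip(1..., scriptParts).map { n, part in
        "text 602(\(n)): \(part)"
    }
    messageText.forEach { print($0) }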
In some embodiments, text 406 of the script may be matched and/or otherwise associated with more than one portion of the digital video segment. For instance, in the example embodiment of FIG. 6, script text 406(1) and 406(N) have been associated with portions of a digital video segment corresponding to message text 602(1) and 602(4), respectively. Script text 406(2), on the other hand, has been associated with portions of the digital video segment corresponding to message text 602(2) and 602(3). In this example, the user 116 may have formed a first portion of the digital video segment while reading the first sentence of script text 406(2) (corresponding to message text 602(2)), and may have formed a second portion of the digital video segment while reading the second sentence of script text 406(2) (corresponding to message text 602(3)).
In other examples, the message generation engine 108 may determine the text 602 of the digital media message 114 by using at least part of the audio track of the digital video segment as an input to the voice recognition module 314 of the electronic device 104. In such examples, the voice recognition module 314 may generate the text 602 of the digital media message 114 as an output, based on the audio track. In the example embodiment shown in FIG. 6, a first portion of the digital video segment (formed in response to consecutive touch inputs received via the capture control 420) may comprise a recording of the user 116 reading script text 406(1). The audio track from the first portion of the digital video segment may be entered as an input to the voice recognition module 314, and the message generation engine 108 may associate the message text 602(1) (e.g., the resulting output of the voice recognition module 314) with the first portion of the digital video segment. A similar process may be repeated when determining text 602(2), 602(3), 602(4).
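The per-portion recognition step might be sketched in Swift as below. The recognize(_:) function is a hypothetical placeholder for whatever speech-to-text facility the voice recognition module 314 provides, not a real API; only the portion-by-portion flow is the point of the example.

    import Foundation

    // Hypothetical stand-in for the voice recognition module 314: given a
    // portion's audio track, return its transcript.
    func recognize(_ audioFile: String) -> String {
        // Placeholder: a real implementation would decode the audio and
        // return the transcript produced by the recognizer.
        return "transcript of \(audioFile)"
    }

    let portionAudio = ["portion1.m4a", "portion2.m4a", "portion3.m4a"]

    // One transcript per consecutive portion of the digital video segment.
    let messageText = portionAudio.map(recognize)
    messageText.enumerated().forEach { i, text in
        print("text 602(\(i + 1)):", text)
    }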
In each of the embodiments described herein, the text 602 of the digital media message 114 corresponding to the respective consecutive portions of the digital video segment may be provided to the user 116 via the display 402. In some examples, the text 602 may be displayed with lines, boxes, numbering, markings, coloring, shading, or other visual indicia separating the various portions of the text 602(1), 602(2), 602(3), 602(4). For example, the text 602(1) corresponding to a first portion of the plurality of consecutive portions may be displayed as being separate from the text 602(2) corresponding to a second portion of the plurality of portions. In some examples, the text 602(1) corresponding to the first portion may be displayed at a first location on the display 402 and the text 602(2) corresponding to the second portion may be displayed at a second location on the display 402 different from the first location. In such examples, the first, second, and other locations on the display 402 may comprise different respective locations within the first portion 404 or within the second portion 408.
In some examples, the message generation engine 108 may cause the user interface 600 to provide one or more controls 604(1), 604(2), 604(3), 604(4) (collectively "controls 604") operable to receive input from the user 116, and to cause the display 402 to provide one or more images corresponding to respective digital content segments 112 at least partly in response to such input. For example, the user interface 600 may include a respective control 604(1), 604(2), 604(3), 604(4) associated with each portion of the plurality of consecutive portions of the digital video segment. Such controls 604(1), 604(2), 604(3), 604(4) may be displayed proximate, at substantially the same location as, and/or otherwise as corresponding to the text 602(1), 602(2), 602(3), 602(4), respectively. In some examples, an input received via one of the controls 604(1), 604(2), 604(3), 604(4) may indicate selection of the corresponding portion of the plurality of consecutive portions of the digital video segment, and may enable the user 116 to edit, modify, augment, re-order, and/or otherwise change the corresponding portion of the digital video segment. In some examples, such a change may include combining the corresponding portion with at least part of an additional digital content segment 112, and such a combination may result in an audio track, a video track, and/or another component of the portion of the digital video segment being replaced by part of the additional digital content segment 112. For example, a first digital content segment 112(1) may comprise a digital video segment, such as the underlying digital video segment described above, and may include an audio track recorded by a microphone of the electronic device 104 and a corresponding video track recorded in unison with the audio track by a camera of the electronic device 104. The media message engine 108 may replace, for example, the video track or the audio track of a first portion of the digital video segment when combining the second digital content segment 112(2) with the first portion of the digital video segment.
In some examples, the content interface module 306, content display module 308, and/or other components of the media message engine 108 may segment each digital content segment 112 into its respective components or content types. For example, a digital video segment received by the media message engine 108 may be segmented into an audio track and a separate video track. Some digital content segments 112, such as digital images, audio clips, and the like, may be segmented into only a single track/component depending on the content type associated with such digital content segments 112. Once the digital content segments 112(N) have been segmented in this way, the media message engine 108 may replace various tracks of portions of the digital content segments 112 based on an input received from the user 116 during generation of the digital media message 114. In some examples, the media message engine 108 may determine a content type of a selected additional digital content segment 112(2) (e.g., audio, image, video, etc.), and may replace a track of a portion of the underlying digital video segment having substantially the same content type (e.g., an audio track, a video track, etc.).
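The same-content-type replacement rule can be sketched in Swift as follows, assuming each segment has been split into typed tracks and treating a digital image as a replacement for a video track; the TrackType, Track, and replaceMatchingTrack names are illustrative assumptions.

    import Foundation

    enum TrackType { case audio, video, image }

    struct Track {
        let type: TrackType
        let file: String
    }

    // Replace whichever existing track matches the new track's content
    // type (an image is treated here as replacing the video track).
    func replaceMatchingTrack(in tracks: [Track], with new: Track) -> [Track] {
        let target: TrackType = (new.type == .image) ? .video : new.type
        return tracks.map { $0.type == target ? new : $0 }
    }

    let portionTracks = [
        Track(type: .audio, file: "p1-audio.m4a"),
        Track(type: .video, file: "p1-video.mov")
    ]
    let photo = Track(type: .image, file: "house.jpg")
    print(replaceMatchingTrack(in: portionTracks, with: photo))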
FIG. 7 illustrates an example user interface 700 in which a portion of a captured digital video segment corresponding to the text 602(4) has been selected. Such a selection may be the result of an input received via, for example, the control 604(4). In particular, the message generation engine 108 may receive a touch input from the user 116 indicating selection of a particular portion of the digital video segment via the control 604(4), and the content display module 308 may cause the text 602(4) corresponding to the selected portion of the digital video segment to be displayed in the first portion 404 of the display 402 at least partly in response to the input. In example embodiments, such an input may indicate a desire of the user 116 to supplement, augment, overwrite, replace, and/or otherwise modify a portion of the digital video segment corresponding to the text 602(4).
Additionally, the message generation engine 108 may cause a plurality of thumbnails 702(1), 702(2), 702(3), 702(4) (collectively "thumbnails 702") to be displayed and/or otherwise provided via the display 402 at least partly in response to the input. Each thumbnail 702 may correspond to, for example, a different respective digital content source. For example, each thumbnail 702 may be indicative of a respective folder, library, or other source of digital content segments 112. In example embodiments, such digital content sources may include, for example, photo libraries, video libraries, photo streams, albums, or other such sources stored locally in the memory 304 or remotely in, for example, the memory 204 of one or more servers 102. Additionally, such sources may include various websites or other web-based sources of content.
Each of the thumbnails 702 may be configured to receive a touch input from the user 116 via the display 402. For example, an input received via a first thumbnail 702(1) may control the media message engine 108 to provide the user 116 with access to one or more digital content segments 112 stored in a variety of albums associated with the memory 304. Similarly, input received via one or more of the additional thumbnails 702 may control the media message engine 108 to provide the user 116 with access to one or more digital content segments 112 stored in video libraries, camera rolls, audio libraries, or other sources. Additionally, one or more of the thumbnails 702 may enable the user 116 to capture additional digital content segments 112 using one or more of the user interface devices 316 described above. Further, one or more of the thumbnails 702 may enable the user 116 to perform an Internet search using, for example, a web browser or other component of the media message engine 108. Such thumbnails 702 may be displayed in the second portion 408 of the display 402. Alternatively, at least one of the thumbnails 702 may be displayed and/or otherwise located in the first portion 404. Additionally, in example embodiments the height, width, and/or other configurations of the first portion 404 and/or the second portion 408 may be adjusted by the user 116 to facilitate display of one or more of the thumbnails 702. For example, the user 116 may provide a touch, swipe, touch and hold, and/or other like input to the display 402 in order to modify the relative size of the first and second portions 404, 408.
As noted above, an input received via one or more of the thumbnails 702 may provide access to a plurality of images representative of respective digital content segments 112. FIG. 8 illustrates an example user interface 800 of the present disclosure in which the message generation engine 108 has received an input via, for example, the thumbnail 702(4) corresponding to a "camera roll" of the electronic device 104, and in which a plurality of images 802(1)-802(N) (collectively "images 802") have been provided via the display 402 in response. In particular, in response to receiving such an input at the "camera roll" thumbnail 702(4), the media message engine 108 and/or the content display module 308 may control the display 402 to provide a plurality of images 802 corresponding to respective images and/or other digital content segments 112 stored in a camera roll or other portion of the memory 304. In other example embodiments in which an input is received via a different thumbnail 702, on the other hand, the images 802 displayed in the second portion 408 may be representative of digital content segments 112 stored within the particular source identified by the thumbnail 702 receiving the input. For example, in additional embodiments in which an input is received via the "videos" thumbnail 702(2), the media message engine 108 and/or the content display module 308 may control the display 402 to provide a plurality of images 802 corresponding to respective digital video segments and/or other digital content segments 112 stored in a video folder, video library, or other portion of the memory 304.
The example user interface 800 may also include one or more visual indicia 804 indicating, for example, which of the thumbnails 702 has been selected by the user 116, as well as a control 806 operable to transition the user interface 800 to a next phase of a digital media message generation process. For example, the control 806 may comprise a "next" control or other control similar to the navigation control 416 described above.
Additionally, as noted above, the shape, size, and/or other configurations of the first and/or second portions 404, 408 of the display 402 may be adjusted by the user 116 in order to facilitate viewing the images 802. For example, the user 116 may provide a touch, swipe, touch and hold, and/or other input within the second portion 408 in the direction of arrow 808. Receiving such an input may cause the content display module 308 and/or the media message engine 108 to increase the size of the second portion 408 relative to the size of the first portion 404. Such an input may, as a result, enable a greater number of the images 802 to be viewed via the second portion 408 of the display 402 while the user interface 800 is operable. Alternatively, receiving a touch, swipe, and/or other input in a direction opposite of arrow 808 may cause the content display module 308 and/or the media message engine 108 to decrease the size of the second portion 408 relative to the size of the first portion 404.
Similar to the thumbnails 702 described above with respect to FIG. 7, the portion of the display 402 providing each of the images 802 may be configured to receive input from the user 116. For example, the electronic device 104 may receive one or more inputs at a location proximate and/or within the second portion 408 of the display 402. Such an input may be received at, for example, a location in the second portion 408 where a particular image 802 is being displayed. Such an input may be received by the user interface module 310 and/or other components of the media message engine 108 and may be interpreted as indicating selection of a digital content segment 112 associated with the particular corresponding image 802 provided at the location in the second portion 408 at which the input was received. Selecting various digital content segments 112 in this way may assist the user 116 in associating the selected digital content segment 112 with a play sequence of a digital media message 114 being created. In particular, the message generation engine 108 may associate the selected digital content segment 112 with the selected portion of the digital video segment corresponding to the text 602(4) provided in the first portion 404 of the display 402. In some examples, the message generation engine 108 may associate the various portions of the digital video segment as well as the selected digital content segment 112 with the play sequence of the digital media message 114 such that the selected digital content segment 112 will be presented simultaneously with a video track, an audio track, and/or at least some other part of the selected portion of the digital video segment when the digital media message 114 is played.
As noted above, the media message engine 108 may overwrite and/or otherwise replace part of the audio track and/or the video track of a first digital content segment 112(1) with at least part of a second digital content segment 112(2). For example, an image or other component of the second digital content segment 112(2) and the audio track from a second portion of a first digital content segment 112(1) (e.g., a digital video segment) may be combined to form a combined segment of the digital media message 114. In particular, upon receiving one or more of the inputs described above with respect to FIGS. 6-8, the media message engine 108 may combine the second digital content segment 112(2) with the audio track of a portion of the digital video segment, and may configure the combined segment such that the audio track of the portion of the digital video segment is presented simultaneously with the image of the second digital content segment 112(2) when the digital media message 114 is played.
In some examples, in response to receiving one or more inputs via the display 402, the content display module 308 and/or the media message engine 108 may cause the display 402 to provide one or more visual indicia indicating selection of a digital content segment 112 corresponding to the associated image 802. For example, as shown in the user interface 900 of FIG. 9, in response to receiving an input indicating selection of a particular digital content segment 112, the content display module 308 and/or the media message engine 108 may cause the image (e.g., image 802(2)) corresponding to the digital content segment 112 to be displayed in association with the text 602. In particular, the image 802(2) corresponding to the selected digital content segment 112 may be displayed in association with the particular text 602(4) corresponding to the portion of the digital video segment with which the digital content segment 112 corresponding to the image 802(2) will be associated. Providing the image 802(2) in association with the corresponding text 602(4) in this way may assist the user 116 in visualizing which digital content segments 112 will be associated with which of the various portions of the underlying digital video segment.
Additionally, the user interface 900 may include the control 426 described above configured to assist the user 116 in transitioning to a next stage of digital media message generation. For example, the control 426 may be displayed on the display 402 in response to receiving an input indicating selection of a digital content segment 112 associated with a corresponding image 802(2) of the plurality of images 802. When such an input is received, such as via the portion of the display 402 providing the images 802, the content display module 308 and/or the media message engine 108 may cause the display 402 to provide the control 426. The control 426 may be operable as a "done" control configured to enable the user 116 to finish selecting digital content segments 112 for incorporation into the digital media message 114.
As noted above with respect to at least FIGS. 6-9, the electronic device 104 may enable the user 116 to modify an underlying digital video segment and/or other digital content segment 112 by replacing at least part of an audio track, video track, or other component of the various portions of the digital video segment with an image or other component of an additional digital content segment 112. In additional examples, the electronic device 104 may also enable the user 116 to augment and/or otherwise modify the underlying digital video segment described above without replacing components of the digital video segment. In such additional examples, an additional digital content segment 112 may be selected by the user 116 and added to the underlying digital video segment as a new portion. The additional digital content segment 112 may be combined with and/or otherwise added to the underlying digital video segment at any location, and such an addition may increase the overall elapsed time of the digital video segment, as well as the resulting total elapsed time of the digital media message 114.
As shown in the user interface 1000 of FIG. 10, the user 116 may provide an input via the first portion 404, such as a touch input, a touch and hold input, a swipe input, a touch and drag input, and/or other input. In one example, the user 116 may designate a location in a play sequence of the digital media message 114 for inserting an additional digital content segment 112 by providing an input at a corresponding location 1002 in the first portion 404 of the display 402. For example, the first portion 404 may display text 602 corresponding to each respective portion of the plurality of consecutive portions of the digital video segment, and such text 602 may be displayed with visual indicia separating various portions of the text 602. For example, visible separation of the text 602(1), 602(2), 602(3), 602(4) may correspond to the start or end points of corresponding portions of the underlying digital video segment. To insert an additional digital content segment 112 at a location in the play sequence between two consecutive, sequential, and/or adjacent portions of the digital video segment, the user 116 may provide an input at a corresponding location 1002 on the display 402. In the example shown in FIG. 10, a user 116 wishing to insert an additional digital content segment 112 at a location in the play sequence between adjacent portions of the digital video segment corresponding to the text 602(3) and text 602(4) may touch the display 402 proximate the location 1002 (e.g., proximate a location on the display 402 displaying either the text 602(3) or text 602(4)) and may drag a finger of the user's hand 422 in the direction of arrow 1004. Upon receiving such an input, the content display module 308 and/or other components of the message generation engine 108 may at least temporarily display a corresponding empty space at the location 1002. In such examples, the empty space at the location 1002 may designate the location in the play sequence at which an additional digital content segment 112 will be added.
The additional digital content segment 112 may, for example, be presented consecutive with and separate from the adjacent portions of the digital video segment when the digital media message 114 is played. For example, in the embodiment of FIG. 10, an additional digital content segment 112 added at a location of the play sequence corresponding to the location 1002 may be presented immediately after and separate from a portion of the digital video segment to which the text 602(3) corresponds. The additional digital content segment 112 added at the location of the play sequence corresponding to the location 1002 may also be presented immediately before and separate from a portion of the digital video segment to which the text 602(4) corresponds. In such examples, the additional digital content segment 112 added to the play sequence may comprise, among other things, a digital audio segment, a digital video segment, and/or any other digital content segment 112 having a respective elapsed time. Additionally, the additional digital content segment 112 may include an audio track, a video track, and/or any other components.
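In Swift, the insertion described above might amount to no more than splicing a new item into an ordered play sequence, with the message's total elapsed time growing accordingly. The PlayItem type and the sample durations below are invented for illustration.

    import Foundation

    // One entry per portion of the play sequence.
    struct PlayItem {
        let name: String
        let duration: TimeInterval
    }

    var playSequence = [
        PlayItem(name: "portion 1", duration: 4),
        PlayItem(name: "portion 2", duration: 5),
        PlayItem(name: "portion 3", duration: 6)
    ]

    // Insert an additional segment between the second and third portions.
    playSequence.insert(PlayItem(name: "inserted clip", duration: 3), at: 2)

    let total = playSequence.reduce(0) { $0 + $1.duration }
    print("total elapsed time:", total)  // 18 seconds instead of 15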
The user interface 1000 may also include at least one control 604(5) associated with the location 1002. In such embodiments, the control 604(5) may be substantially similar to the controls 604 described above. For example, the control 604(5) may be operable to receive input from the user 116, and to cause the display 402 to provide one or more images corresponding to respective digital content segments 112 at least partly in response to such input. For example, as described above with respect to at least FIGS. 6 and 7, the message generation engine 108 may receive a touch input from the user 116 via the control 604(5), and the message generation engine 108 may cause the thumbnails 702 described above to be displayed and/or otherwise provided via the display 402 at least partly in response to the input. Each thumbnail 702 may correspond to, for example, a different respective digital content source. As noted with respect to at least FIGS. 7 and 8, in response to receiving an input at one of the thumbnails 702, the media message engine 108 and/or the content display module 308 may control the display 402 to provide a plurality of images 802 corresponding to respective images and/or other digital content segments 112 stored within the particular content source identified by the thumbnail 702 receiving the input.
The electronic device 104 may also receive a further input at, for example, a location in the second portion 408 where a particular image 802 is being displayed. Such an input may be received by the user interface module 310 and/or other components of the media message engine 108 and may be interpreted as indicating selection of an additional digital content segment 112 associated with the particular corresponding image 802 provided at the location in the second portion 408 at which the input was received. The message generation engine 108 may associate, add, and/or otherwise insert the additional digital content segment 112 associated with the particular image 802 into the play sequence of the digital media message 114 as noted above with respect to FIG. 10.
Additionally, the content display module 308 and/or other components of the media message engine 108 may provide visual indicia via the display 402 indicating that the additional digital content segment 112 associated with the particular image 802 has been inserted into the play sequence of the digital media message 114. For example, as shown in the user interface 1100 of FIG. 11, the content display module 308 may cause the display 402 to provide the image corresponding to the additional digital content segment 112, and the image (shown as image 1102 in FIG. 11) may be displayed at the location 1002 described above. Providing the image 1102 in this way may assist the user 116 in visualizing which additional digital content segment 112 will be inserted into the play sequence adjacent one or more of the various portions of the underlying digital video segment.
The example user interface 1100 may also include a preview control 1104 operable to provide a preview of the digital media message 114 via the display 402 for review by the user 116. For example, the preview control 1104 may be configured to receive one or more touch inputs from the user 116, and the content display module 308 and/or the media message engine 108 may cause the electronic device 104 to display a preview of the digital media message 114 in response to such input. For example, in response to receiving an input via the preview control 1104, the content display module 308 may cause the display to provide the example user interface 1200 shown in FIG. 12, in which the digital media message 114 may be provided to the user 116 for review and editing.
The user interface 1200 may include, among other things, a search control 1202 operable to enable the user 116 to conduct one or more web-based searches for additional digital content. The user interface 1200 may also include an audio selection control 1204 configured to enable the user 116 to add audio clips or other such digital media to a digital media message 114 currently under creation. Further, the user interface 1200 may include an editing control 1206 substantially similar to the control 418 described above with respect to FIG. 4, and a play/pause control 1208 configured to control the playback and/or previewing of a draft digital media message 114. Each of the controls 1202, 1204, 1206, 1208 may be configured to receive one or more touch inputs from the user 116, and the content display module 308 and/or the media message engine 108 may cause the electronic device 104 to perform any of the functions described above with respect to the respective controls 1202, 1204, 1206, 1208 in response to such input.
The user interface 1200 may also include the progress bar 506 described above. The progress bar 506 may be useful in providing visual indicia of the elapsed playtime of the digital media message 114 being created. The progress bar 506 may also enable the user 116 to visualize various different portions of the digital media message 114 and, in particular, to visualize the various locations within the digital media message 114 at which different digital content segments 112 are located and/or have been added. For example, the progress bar 506 may include a play marker 1210 that moves in real time, in the direction of arrow 510, as the draft digital media message 114 is played. The progress bar 506 may also include a plurality of separate and distinct portions 1212(1)-1212(4) (collectively "portions 1212"). Taken together, the plurality of portions 1212 may provide visual indicia of a play sequence of the digital media message 114 currently being generated. The progress bar 506 may further include a break 1214 and/or other visual indicia separating each of the portions 1212.
Each respective portion 1212 of the progress bar 506 may correspond to and/or be indicative of a respective location in and/or portion of such a play sequence. Additionally, one or more digital content segments 112 may be associated with each portion 1212 of the progress bar 506. For example, each portion of an underlying digital video segment of the digital media message 114 may correspond to and/or be associated with a respective one of the portions 1212 of the progress bar 506. Likewise, one or more additional digital content segments 112 combined with the digital video segment as described herein with respect to at least FIGS. 6-9, and/or inserted into the play sequence as described herein with respect to at least FIGS. 10 and 11, may correspond to and/or be associated with a respective one of the portions 1212.
As a result, the progress bar 506 may indicate the order and the elapsed time in which each of the digital content segments 112 will be played and/or otherwise presented. In example embodiments, the size, length, and/or other configurations of each portion 1212 may be indicative of such an elapsed time. Further, the arrangement of each portion 1212 from left to right along the display 402 may be indicative of such an order. Thus, the full length of the progress bar 506 may be representative of the full duration and/or elapsed time of an underlying digital video segment or other first digital content segment 112(1), and of any additional digital content segments 112(N) that have been combined with the first digital content segment 112(1). For example, when the first digital content segment 112(1) comprises a digital video segment, the full length of the progress bar 506 may represent the total elapsed time of the digital video segment.
FIG. 13 illustrates another example user interface 1300 of the present disclosure. In example embodiments, the media message engine 108 may provide such an example user interface 1300 in response to receiving one or more inputs via one or more of the controls described above. For example, the media message engine 108 may receive a touch input or other such input indicating selection of the share control 412. In response to receiving such an input, the media message engine 108 may provide an image 1302 via the display 402. Such an image 1302 may comprise, for example, one or more images, photos, or first frames of a digital video segment stored in the memory 304 of the electronic device 104. Alternatively, the content display module 308 may present one or more images 1302 in the first portion 404 that are obtained in real time via, for example, a camera or other user interface device 316 of the electronic device 104. For example, the first portion 404 may provide an image 1302 of objects that are within a field of view of the camera.
The media message engine 108 may also provide a message thumbnail 1304 via the display 402. In example embodiments, such a message thumbnail 1304 may be similar to one or more of the images 802 described above. In some examples, however, the message thumbnail 1304 may be larger than one or more of the images 802, and/or may have one or more visual characteristics (e.g., highlighting, shading, a label, a frame, etc.) configured to enable the user 116 to distinguish the message thumbnail 1304 from one or more images 802 concurrently displayed in, for example, the second portion 408. For example, the message thumbnail 1304 may be provided at the second portion 408 of the display 402 simultaneously with visual indicia 1306 indicative of the play sequence of the digital media message 114. In example embodiments, the visual indicia 1306 of the play sequence may include the images 802, digital video segments, and/or other portions included in the play sequence, arranged in the order in which such content will appear when the digital media message 114 is played. In such embodiments, the message thumbnail 1304 may be disposed above, beneath, to the side of, and/or at any other location on the display 402 relative to the visual indicia 1306 of the play sequence such that the user 116 may easily identify the message thumbnail 1304 as being distinct from the images 802 and/or other components of the visual indicia 1306. In example embodiments, the message thumbnail 1304 may comprise, for example, a first frame and/or any other image or content indicative of the digital media message 114 being generated by the user 116. As a result, it may be desirable for the media message engine 108 to present the message thumbnail 1304 with one or more visual characteristics enabling the user 116 to identify the message thumbnail 1304 with relative ease.
The example user interface 1300 may also include one or more additional controls configured to assist the user 116 in making further modifications to one or more of the digital content segments 112, the play sequence, and/or other components of the digital media message 114. For example, the user interface 1300 may include a control 1308 configured to enable the user 116 to add one or more cover images, cover videos, cover photos, and/or other content to the digital media message 114. In example embodiments, the media message engine 108 may receive an input, such as a touch input, indicative of selection of the control 1308 by the user 116. In response to receiving such an input, the media message engine 108 may enable the user 116 to browse various photos, images, videos, and/or other content stored in the memory 304 and/or in the memory 204 of the server 102. Additionally and/or alternatively, in response to receiving such an input, the media message engine 108 may enable the user 116 to perform a web-based search, such as via one or more search engines or applications of the electronic device 104, for such content. The user 116 may be permitted to select one or more such content items for use as, for example, a cover image and/or other indicator of the digital media message 114 currently being generated. Upon selection of such a content item, the media message engine 108 may add the selected item to the play sequence of the digital media message 114 and/or may combine the selected item with one or more content segments 112 of the digital media message 114.
The user interface 1300 may further include one or more controls 1310 configured to enable the user 116 to modify one or more of the digital content segments 112, the play sequence, and/or other components of the digital media message 114. Such controls 1310 may comprise, among other things, any audio, video, image, or other editing tools known in the art. In example embodiments, such controls 1310 may provide editing functionality enabling the user 116 to delete, move, modify, augment, cut, paste, copy, save, or otherwise alter portions of each digital content segment 112 as part of generating a digital media message 114. Additionally, one or more of the controls 1310 may enable a user 116 to add, remove, cut, paste, draw, rotate, flip, shade, color, fade, darken, and/or otherwise modify various aspects of the digital media message 114 and/or various digital content segments 112 included in the play sequence thereof. In some embodiments, at least one of the controls 1310 may be similar to and/or the same as one or more of the controls 418 described above.
Additionally, the user interface 1300 may include one or more additional controls (not shown) configured to enable the user 116 to add one or more audio clips, segments, files, and/or other content to the digital media message 114. In example embodiments, the media message engine 108 may receive an input, such as a touch input, indicative of selection of such a control by the user 116. In response to receiving such an input, the media message engine 108 may enable the user 116 to browse various audio files and/or other content stored in the memory 304 and/or in the memory 204 of the server 102. Additionally and/or alternatively, in response to receiving such an input, the media message engine 108 may enable the user 116 to perform a web-based search, such as via one or more search engines or applications of the electronic device 104, for such content. The user 116 may be permitted to select one or more such content items, and upon selection of such a content item, the media message engine 108 may add the selected item to the play sequence of the digital media message 114 and/or may combine the selected item with one or more content segments 112 of the digital media message 114.
The user interface 1300 may also include the share control 412 and/or the next/done control 426 described above. Upon selection of such a control by the user 116, the media message engine 108 may enable the user 116 to browse forward to a next user interface configured to assist the user 116 in generating, modifying, and/or sharing the digital media message 114. For example, the media message engine 108 may receive an input, such as a touch input, indicating selection of the share control 412 by the user 116. In response to receiving such an input, the media message engine 108 may provide the example user interface 1400 illustrated in FIG. 14. Such an example user interface 1400 may include, among other things, the message thumbnail 1304 indicating and/or otherwise identifying the digital media message 114 that the user 116 desires to share. Such an example user interface 1400 may also include a plurality of controls configured to assist the user 116 in providing the digital media message 114 for sharing with, for example, a remote electronic device 118, such as via the network 106. For example, one or more of the controls 1402 may enable the user 116 to add a title, a name, and/or other identifier to the media message 114 such that the media message 114 may be easily recognizable and/or identifiable by one or more users 120 of the remote electronic device 118. In some examples, the title and/or other identifier added to the media message 114 may be provided to the user 120 simultaneously and/or otherwise in conjunction with the digital media message 114 when the user 120 consumes at least a portion of the digital media message 114 on the remote electronic device 118.
In addition, the user interface 1400 may include one or more controls 1404, 1406 configured to enable the user 116 to privatize the digital media message 114 prior to providing the digital media message 114 for sharing with a remote electronic device 118. For example, one or more such controls 1404 may enable the user 116 to encrypt and/or otherwise configure the digital media message 114 such that only an approved user 120 or plurality of users 120 may receive and/or access the digital media message 114. In example embodiments, the media message engine 108 may receive an input, such as a touch input, indicating selection of the control 1404 by the user 116. In response to receiving such an input, the media message engine 108 may enable the user 116 to browse, for example, an address book or other like directory stored in the memory 304 of the electronic device 104 and/or in the memory 204 of the server 102. Upon browsing such a directory, the user 116 may select one or more contacts approved by the user 116 to have access to the digital media message 114. Additionally and/or alternatively, in response to receiving such an input, the media message engine 108 may enable the user 116 to password protect and/or otherwise encrypt the digital media message 114 prior to sharing. In any of the example embodiments described herein, one or more of the controls 1406 may comprise a slide bar and/or other like icon indicating whether the user 116 has privatized the digital media message 114. For example, such a control 1406 may change color, transition between a "no" indication and a "yes" indication, and/or may otherwise provide a visual indication of the privacy status/level of the digital media message 114.
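A minimal Swift sketch of the approved-recipient check follows; it models only the gating logic, not encryption or password protection, and the PrivateMessage type and sample addresses are hypothetical.

    import Foundation

    // A privatized message is delivered only to recipients the sender
    // approved from the directory.
    struct PrivateMessage {
        let title: String
        let approvedRecipients: Set<String>

        func canBeViewed(by recipient: String) -> Bool {
            approvedRecipients.contains(recipient)
        }
    }

    let message = PrivateMessage(
        title: "New house tour",
        approvedRecipients: ["anne@example.com"])
    print(message.canBeViewed(by: "anne@example.com"))  // true
    print(message.canBeViewed(by: "eve@example.com"))   // false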
The user interface 1400 may also include one or more controls 1408 configured to enable the user 116 to select one or more means of providing the digital media message 114 for sharing with a remote electronic device 118. For example, one or more such controls 1408 may enable the user 116 to select from a plurality of common social media websites and/or other portals useful in sharing the digital media message 114. In such example embodiments, the media message engine 108 may receive an input, such as a touch input, indicating selection of the control 1408 by the user 116. In response to receiving such an input, the media message engine 108 may enable the user 116 to access an existing account on the selected social media portal. Once such an account has been accessed, the media message engine 108 may provide the digital media message 114 to the selected social media portal for sharing with remote users 120 via the selected portal.
One or more such controls 1408 may also enable the user 116 to select between email, text messaging (SMS), instant messaging, and/or other like means for sharing the digital media message 114. In such example embodiments, the media message engine 108 may receive an input, such as a touch input, indicating selection of the control 1408 by the user 116. In response to receiving such an input, the media message engine 108 may enable the user 116 to browse, for example, an address book or other like directory stored in the memory 304 of the electronic device 104 and/or in the memory 204 of the server 102. Upon browsing such a directory, the user 116 may select one or more contacts with which the user 116 desires to share the digital media message 114. Upon selecting such contacts, the user 116 may provide the digital media message 114 to the selected users by providing an input, such as a touch input, indicative of selection of a share control 1410.
Illustrative Methods
FIG. 15 shows an illustrative method 1500 of generating an example digital media message 114. The example method 1500 is illustrated as a collection of steps in a logical flow diagram, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the steps represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described steps can be combined in any order and/or in parallel to implement the process. For discussion purposes, and unless otherwise specified, the method 1500 is described with reference to the environment 100 of FIG. 1.
At block 1502, the media message engine 108 may receive a script of a digital media message 114 being created by a user 116 of an electronic device 104. For example, the user 116 may type and/or otherwise directly enter text 406 of the script using a keyboard or other user interface devices 316 of the electronic device 104. Alternatively, the user 116 may dictate the text 406 of the script orally using, for example, a microphone and/or other user interface device 316. In examples in which the user 116 dictates the script, the electronic device 104 may receive voice and/or other audio input from the user 116 (e.g., the dictation), and the voice recognition module 314 may generate the text 406 of the script based on such input. In any of the example embodiments described herein, one or more inputs received from the user 116 at block 1502 may be stored in the memory 304 of the electronic device 104 and/or in the memory 204 associated with the server 102.
At block 1504, the content display module 308 and/or other components of the message generation engine 108 may provide the text 406 of the script to the user 116. For example, the content display module 308 may cause various portions of the text 406 to be displayed on the display 402 of the device 104. For instance, the content display module 308 may provide a window 504 in the first portion 404 of the display 402, and various portions of the text 406 may be rendered within the window 504. In example embodiments, the text 406 may automatically scroll within the window 504 at a predetermined scroll rate. Alternatively, in further examples the message generation engine 108 may cause one or more controls to be provided via the display 402 and configured to control presentation of the text 406 within the window 504.
In some examples, the electronic device 104 may provide the text 406 of the script to the user 116 via the display 402 while capturing, recording, and/or otherwise receiving a digital video segment or other digital content segment 112. For example, at block 1506 the message generation engine 108 may receive a digital content segment 112, such as a digital video segment. The digital video segment received at block 1506 may comprise video of the user 116 reading the text 406 of the script, or an approximation thereof. Thus, in some examples, the content of the received digital video segment may be based on the script.
In some examples, the digital video segment or other digital content segment 112 received at block 1506 may include a plurality of consecutive portions or other like divisions, and such portions may be indicative of desired divisions in the digital media message 114 being generated. For example, respective portions of the digital video segment received at block 1506 may be indicative of one or more locations at which the user 116 may wish to add or insert additional digital content segments 112 into a play sequence of the digital media message 114.
In some examples, the electronic device 104 may form one or more portions of the digital video segment at block 1506 in response to input received from the user 116. For example, the user 116 may provide a touch input or a plurality of consecutive touch inputs while the digital video segment is being recorded at block 1506. In such examples, the message generation engine 108 may form a plurality of consecutive portions of the digital video segment in response to the plurality of consecutive touch inputs. For instance, the user 116 may provide a first touch input via the capture control 420, and the message generation engine 108 may begin recording and/or otherwise receiving the digital video segment at block 1506 in response to the first input. The user 116 may then provide a second touch input via the capture control 420, and the message generation engine 108 may form a first portion of the digital video segment in response to the second input. For example, the first portion of the digital video segment may include audio and/or video recorded from the time the first input was received up to the time the second input was received. The plurality of consecutive portions of the digital video segment may be formed in a similar fashion in response to repeated consecutive taps and/or other touch inputs received via the capture control 420. Additionally, the message generation engine 108 may associate metadata or other information with each of the consecutive portions formed at block 1506, and such information may indicate, for example, start and end times of each portion, an elapsed time of each portion, a size (e.g., megabits) of each portion, a content type (e.g., audio, video, image, etc.) of each portion, a user interface device 316 used to capture each portion, a storage location of each portion in the memory 304, and/or other identifying characteristics of each respective portion. In some examples, a double tap, double touch, and/or other alternate input may stop recording at block 1506.
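The per-portion metadata enumerated above might be modeled in Swift as a simple record, one instance per consecutive portion. The field names below are assumptions; the disclosure lists only the kinds of information recorded.

    import Foundation

    // One metadata record per consecutive portion of the recording.
    struct PortionMetadata {
        let start: TimeInterval        // start time within the recording
        let end: TimeInterval          // end time within the recording
        let sizeMegabits: Double       // approximate size of the portion
        let contentType: String        // e.g. "audio", "video", "image"
        let captureDevice: String      // e.g. "camera", "microphone"
        let storagePath: String        // location of the portion in memory 304

        var elapsed: TimeInterval { end - start }
    }

    let meta = PortionMetadata(
        start: 0, end: 4.2, sizeMegabits: 38.5,
        contentType: "video", captureDevice: "camera",
        storagePath: "/segments/portion-1.mov")
    print("portion runs", meta.elapsed, "seconds")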
At block 1508, the message generation engine 108 may determine text 602 of the digital media message 114, and in some examples, the message generation engine 108 may determine such message text 602 corresponding to each respective portion of the digital video segment received at block 1506. In some examples, the message generation engine 108 may determine the text 602 of the digital media message 114 by correlating, recognizing, and/or otherwise matching at least part of an audio track of the digital video segment with the text 406 of the script received at block 1502. In such examples, the audio track may be matched with the text 406 of the script based on the elapsed time, sequence, touch inputs received via the capture control 420 at block 1506, or other characteristics of the audio track. For example, the message generation engine 108 may match individual parts of the script text 406 with corresponding respective portions of the digital video segment at block 1508.
Alternatively, at block 1508 the message generation engine 108 may provide at least part of the audio track of the digital video segment as an input to the voice recognition module 314. In such examples, the voice recognition module 314 may generate the text 602 of the digital media message 114 as an output at block 1508 based on the audio track. In particular, the voice recognition module 314 may output the message text 602 corresponding to each respective portion of the digital video segment at block 1508. In such examples, the voice recognition module 314 and/or other components of the message generation engine 108 may separate the message text 602 into separate or otherwise distinct portions based on metadata associated with each respective separate portion of the digital video segment. Such metadata may, for example, identify and/or otherwise distinguish a first portion from a second portion, and so on.
At block 1510, the content display module 308 may provide the digital media message text 602 via the display 402. As shown in, for example, FIG. 6, the content display module 308 may provide the text 602 corresponding to each of the plurality of consecutive portions of the digital video segment, and the text 602 corresponding to each portion may be displayed separately. For example, text 602(1) of a first portion of the plurality of consecutive portions of such a digital video segment may be displayed separate from text 602(2) of a second portion of the plurality of consecutive portions. In particular, the text 602(1) corresponding to the first portion may be displayed at a first location on the display 402 and the text 602(2) corresponding to the second portion may be displayed at a second location on the display 402 different from the first location. In some examples, the text 602 may be displayed with lines, boxes, numbering, markings, coloring, shading, or other visual indicia separating various portions of the text 602. Providing the text 602 in this way at block 1510 may assist the user 116 in combining a first digital content segment 112(1) (e.g., the underlying digital video segment received at block 1506) with one or more additional digital content segments 112(N).
For example, at block 1512 the message generation engine 108 may receive input from the user 116 indicating selection of one or more portions of the digital video segment received at block 1506. For example, the user 116 may select a portion of the digital video segment by providing a touch input via one or more of the controls 604. The message generation engine 108 may receive such an input via the control 604 and may, in response, cause the display 402 to provide a plurality of thumbnails 702 associated with respective digital content sources. As shown in at least FIG. 7, the message generation engine 108 may also cause the display 402 to provide text 602(4) corresponding to the selected portion of the digital video segment. Each of the thumbnails 702 may be configured to receive further input from the user 116, and the content display module 308 may cause the display 402 to provide corresponding content in response to such input.
For example, at block 1514, the message generation engine 108 may receive an input indicating selection of one or more of the digital content sources. The user 116 may select a particular digital content source at block 1514 by providing a touch input via one or more of the thumbnails 702. The message generation engine 108 may receive such an input via the selected thumbnail 702 and may, in response, cause the display 402 to provide a plurality of images 802 associated with the selected digital content source corresponding to the thumbnail 702 receiving the input. As shown in at least FIG. 8, each of the plurality of images 802 may be provided in the second portion 408 of the display 402. Further, each of the plurality of images 802 may be indicative of a respective digital content segment 112 stored in and/or otherwise associated with the digital content source corresponding to the selected thumbnail 702. Additionally, each of the images 802 may be indicative of a respective digital content segment 112 different from the digital video segment received at block 1506. Such images 802 may be provided by the content display module 308 to assist the user 116 in selecting one or more digital content segments 112 for inclusion in the digital media message 114.
At block 1516, the message generation engine 108 may receive input indicating selection of at least one digital content segment 112, and the digital content segment 112 selected at block 1516 may be associated with a corresponding one of the plurality of images 802. For example, the user 116 may provide a touch input at a location on the display 402 in which a particular image 802 is provided. Such a touch input may indicate selection of a digital content segment 112 associated with the particular image 802. As shown in FIG. 9, in response to such input, the content display module 308 may cause the display 402 to display the image 802 associated with the selected digital content segment 112 in association with the text 602 corresponding to the portion of the digital video segment selected at block 1512.
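The selection flow of blocks 1512 through 1516 might be modeled as in the following sketch, in which ContentSource and ContentSegment are hypothetical names and the touch inputs are reduced to simple indices:

```python
from dataclasses import dataclass, field

@dataclass
class ContentSegment:
    segment_id: str  # stands in for a stored photo, video, or audio clip

@dataclass
class ContentSource:
    name: str                                      # e.g., a photo library
    segments: list[ContentSegment] = field(default_factory=list)

def select_segment(sources: list[ContentSource],
                   thumbnail_index: int,
                   image_index: int) -> ContentSegment:
    # A touch on a thumbnail picks the source; a touch on one of that
    # source's images picks the underlying content segment.
    source = sources[thumbnail_index]
    return source.segments[image_index]

sources = [ContentSource("photos",
                         [ContentSegment("img-001"), ContentSegment("img-002")])]
print(select_segment(sources, thumbnail_index=0, image_index=1))
```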
At block 1518, the message generation engine 108 may combine and/or otherwise associate the digital content segment 112 selected at block 1516 with at least a portion of the digital video segment received at block 1506. In some examples, one or more portions of the digital video segment received at block 1506 may include both an audio track and a video track. In such examples, at least part of one or more such tracks of the digital video segment may be supplemented, augmented, overwritten, and/or replaced by the digital content segment selected at block 1516. For example, the message generation engine 108 may replace at least part of the video track of the underlying digital video segment with a digital image of the selected digital content segment 112 at block 1518. As a result, the digital image of the selected digital content segment 112 may be presented simultaneously with the portion of the audio track of the digital video segment corresponding to the replaced portion of the video track. Alternatively, the message generation engine 108 may combine and/or replace at least part of the audio track of the digital video segment with a selected digital audio segment 112 at block 1518.
In some examples, adding and/or otherwise associating the selected digital content segment 112 with at least a portion of the received digital video segment at block 1518 may include generating one or more combined message segments. In such embodiments, the selected digital content segment 112 may be merged with part of a portion of the digital video segment in order to form such a combined message segment. In such examples, the combined message segment may include, among other things, the selected digital content segment 112 (e.g., a digital image) as well as at least part of a portion of the digital video segment received at block 1506 (e.g., the audio track from a portion of the digital video segment). In such examples, the digital image of the selected digital content segment 112 may be displayed simultaneously with audio from the portion of the digital video segment when the combined message segment is played. In particular, at block 1518, the media message engine 108 may replace, for example, video and/or images of a portion of the digital video segment with a digital image of the selected digital content segment 112.
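One plausible realization of such a combined message segment, assuming a simplified track representation not specified by the disclosure, is sketched below: the portion's audio track is retained while the selected digital image takes the place of the replaced video track.

```python
from dataclasses import dataclass

@dataclass
class VideoPortion:
    audio_track: bytes  # retained in the combined message segment
    video_track: bytes  # the part replaced by the selected image

@dataclass
class CombinedSegment:
    audio_track: bytes  # audio carried over from the original portion
    image: bytes        # digital image shown while that audio plays

def combine(portion: VideoPortion, selected_image: bytes) -> CombinedSegment:
    # The selected image stands in for the replaced video track, so it is
    # presented simultaneously with the portion's original audio on playback.
    return CombinedSegment(audio_track=portion.audio_track, image=selected_image)
```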
Further, at block 1518, the message generation engine 108 may associate each portion of the digital video segment received at block 1506, as well as the digital content segment 112 selected at block 1516, with a play sequence of the digital media message 114. In some examples, adding and/or otherwise associating the portions of the digital video segment and one or more selected digital content segments 112 with the play sequence at block 1518 may include adding and/or otherwise associating one or more combined message segments with the play sequence. As shown in FIG. 12, the content display module 308 may cause the display 402 to display a progress bar 506 as visual indicia of such a play sequence. Additionally, as shown in FIG. 13, the content display module 308 may cause the display 402 to display a plurality of images and/or other visual indicia 1306 of the play sequence.
In example embodiments, the processes described with respect to one or more of blocks 1510-1518 may be repeated numerous times until generation of the digital media message 114 has been completed. Additionally, the media message engine 108 may receive any number of additional inputs via, for example, the display 402. In response to such an additional input, the media message engine 108 may cause one or more additional digital content segments 112 to be inserted into the play sequence of the digital media message 114 adjacent to at least one portion of the plurality of consecutive portions of the digital video segment received at block 1506. The insertion of such an additional digital content segment 112 is described herein with respect to at least FIGS. 10 and 11.
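Illustratively, the play sequence and the adjacent insertion described above might behave as in this sketch, where the list-of-strings model of the segments is purely an assumption:

```python
play_sequence: list[str] = ["portion-1", "combined-segment-2", "portion-3"]

def insert_adjacent(sequence: list[str], anchor: str, new_segment: str) -> None:
    """Insert new_segment into the play sequence immediately after anchor."""
    sequence.insert(sequence.index(anchor) + 1, new_segment)

insert_adjacent(play_sequence, "portion-1", "image-segment-A")
print(play_sequence)
# ['portion-1', 'image-segment-A', 'combined-segment-2', 'portion-3']
```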
In response to another additional input, the media message engine 108 may direct the digital media message 114, via the electronic device 104, to a network 106 such that the digital media message 114 may be transferred over the network 106 in at least one of a text message, an email, a website, or other such portal. In this way, the digital media message 114 may be received by a remote electronic device 118 and may be consumed on the remote electronic device 118 by one or more additional users 120. In such embodiments, the digital media message 114 may include at least the combined message segment described above.
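As a final illustration, directing the finished message over a network might look like the following sketch; the JSON serialization and the send_via transport hook are assumptions, since the disclosure names the transports (text message, email, website) without specifying an API:

```python
import json
from typing import Callable

def share_message(play_sequence: list[str],
                  send_via: Callable[[bytes], None]) -> None:
    # Serialize the message and hand it to whichever transport the user
    # chose (a text message, an email, or a web upload).
    payload = json.dumps({"play_sequence": play_sequence}).encode("utf-8")
    send_via(payload)

# Example transport stub that just reports the payload size.
share_message(["portion-1", "combined-segment-2"],
              lambda data: print(f"sent {len(data)} bytes"))
```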
In summary, example embodiments of the present disclosure provide devices and methods for generating digital media messages as a means for communication between users in remote locations. Such digital media messages include various combinations of audio, video, images, photos, and/or other digital content segments, and can be quickly and artfully created by each user with little effort. For example, the user may combine a wide variety, and a large number, of different digital content segments into a single digital media message. The methods of generating such a digital media message described herein enable the user to utilize a wide array of audio, video, and/or photo editing controls to quickly and easily modify each individual content segment, or combinations thereof. As a result, such methods provide the user with great artistic freedom in creating the digital media message. Additionally, the methods described herein may include assisting and/or guiding the user during the message generation process. For example, in some embodiments text of a desired digital media message script may be captured and provided to the user as the user records a digital video segment. The digital video segment may be used as an underlying video component of the digital media message, and the digital video segment may be of increased quality due to the script being provided to the user. As a result, such methods enable the user to generate content-rich digital media messages 114 relatively quickly, thereby facilitating the use of such digital media messages as an efficient means of communication.
CONCLUSION

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims.