This application claims benefit of U.S. provisional patent application serial No. 60/621,273, entitled "MOBILE 3D GRAPHICAL MESSAGE," filed on October 22, 2004, assigned to the same assignee as this application, and incorporated herein by reference in its entirety.
Detailed Description
In the following description, certain specific details are set forth in order to provide a thorough understanding of various embodiments. However, it will be understood by those skilled in the art that the provided systems and methods may be practiced without these details. In other instances, well-known structures, protocols, and other details have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The headings provided herein are for convenience only and do not interpret the scope or meaning of the claimed invention.
In summary, embodiments provide novel 3D graphical communication capabilities for mobile wireless devices connected to a communication network. Examples of 3D graphical communications include, but are not limited to, messaging, publishing content to a network location, content communication from a content provider to a client device, online gaming, and various other forms of communication that may have animated 3D graphical content. In one example, non-limiting embodiment, the 3D graphical messaging takes the form of user-customizable 3D graphical animations. As explained above, conventional forms of mobile messaging can be divided into two main categories: audio (e.g., voicemail) or text (e.g., SMS or email service). Embodiments improve mobile messaging by adding animated 3D graphical representations that go well beyond the capabilities of existing messaging technologies, which simply combine audio, text, image, and video media formats and into which 3D graphical representations have not traditionally been integrated. Another feature of an embodiment allows mobile devices to author and/or enhance these graphical messages by using a 3D graphical messaging platform resident on the sender's mobile device and/or on a server, thereby providing customized 3D graphical messaging capabilities.
According to one embodiment, the animated 3D graphical message may take the form of an animated 3D avatar of the user. In another embodiment, the animated 3D avatar may represent someone else (not necessarily the user of the wireless device), and may in fact be an animated 3D avatar of a fictional character or any other creature, which may be artistically customized and created by the user. In still other embodiments, the animated 3D graphical message need not have any graphical representation of a person or other creature at all. Animated 3D graphical messages may be provided to represent machines, background scenes, mythical worlds, or any other type of content that may be represented in a 3D world and that may be created and customized by a user. In still other embodiments, the animated 3D graphical message may contain any suitable combination of 3D avatars, 3D scenes, and other 3D content.
It will be appreciated that the customization and animation described above is not limited to 3D messaging. Customization and animation of 3D content may be applied to other applications where the representation may be enhanced by adding 3D elements, including but not limited to publishing content at a network location, playing games, presenting content for access by other users, providing services, and so forth. For simplicity of explanation, various embodiments will be described herein in the context of messaging, and it will again be appreciated that such descriptions may be adapted to applications that do not necessarily involve messaging, where appropriate.
Conventional forms of visual communication use formats that do not preserve the object characteristics of the captured natural video media. By preserving the object characteristics of the video, embodiments allow the user to personalize and interact with each of the object components of the video. An advantage of the 3D animation format is that an almost unlimited set of personalized customizations can be constructed simply by adapting the objects contained in the video, which is not possible (or is extremely difficult for users) with traditional video formats. For example, if the representation of the image maintains 3D spatial coordinates of objects represented in the image, the user may rotate an object or change its texture.
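By way of illustration only, the difference between object-preserving 3D content and flattened pixel video can be sketched as follows. This is not part of any claimed embodiment; the class and field names are hypothetical.

```python
# Illustrative only: each scene object keeps its own spatial state, so a
# user can rotate or retexture it independently. Once a scene is
# flattened into conventional pixel video, this per-object state is lost.

class SceneObject:
    """A hypothetical 3D object that preserves its own coordinates."""

    def __init__(self, name, rotation_deg=0, texture="default"):
        self.name = name
        self.rotation_deg = rotation_deg
        self.texture = texture

    def rotate(self, degrees):
        # Rotation is applied to this object alone, not the whole frame.
        self.rotation_deg = (self.rotation_deg + degrees) % 360

# The user rotates and retextures one object without touching the rest.
tree = SceneObject("tree")
tree.rotate(450)
tree.texture = "autumn"
```

Because each object carries its own state, a customization set grows combinatorially with the number of objects, which is the advantage noted above over traditional video formats.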
FIG. 1 is a block diagram of an embodiment of a system 100, which may be used to implement mobile 3D graphical communications, such as animated 3D graphical messaging and other forms of animated 3D graphical communications for wireless devices. For the sake of brevity and to avoid confusion, not every possible type of network device and/or component within a network device is shown and described in FIG. 1; only network devices and components germane to understanding the operation and features of embodiments are shown and described herein.
The system 100 includes at least one server 102. Although only one server 102 is shown in FIG. 1, the system 100 may have any number of servers 102. For example, there may be multiple servers 102 to share and/or provide certain functions separately for load balancing, availability, and so forth. The server 102 includes one or more processors 104 and one or more storage media having machine-readable instructions stored thereon that are executable by the processors 104. A machine-readable medium may contain a database or other data structure, such as the user information database 106, which may store user preference data, user profile information, device capability information, or other user-related information.
The machine-readable instructions may comprise software, applications, services, modules, or other types of code. In an embodiment, the various functional components described herein that support mobile 3D graphical messaging are embodied as machine-readable instructions.
In an embodiment, such functional components residing on the server 102 include an animation engine 108, a transcoding component 110, a 3D graphical messaging application 112a, and other components 114. For simplicity, the 3D graphical application 112 is described below in the context of a messaging application; depending on the particular implementation, other types of 3D graphical communication applications may be provided that offer functionality similar to that described for the 3D graphical messaging application. Each of these components of the server 102 is described in detail next.
Embodiments of the animation engine 108 provide animations to 3D graphical representations, such as 3D avatars, 3D background scenes, or any other content that may be presented in a 3D world. The 3D graphical representation may contain templates, such as 3D images of faces with hair, eyes, ears, nose, mouth, lips, etc.; 3D images of mountains, clouds, rain, sun, etc.; 3D images of mythical worlds or fictional settings; or any other kind of template of 3D content. The animation sequence generated by the animation engine 108 provides animation (which may include accompanying sound) to move or drive the lips, eyes, mouth, etc. of a 3D template for a 3D avatar, thereby providing a realistic appearance of a live speaker conveying a message. As another example, the animation sequence may drive the movement and sound of rain, birds, foliage, etc. in a 3D background scene, which may or may not have an accompanying 3D avatar representation of an individual. In an embodiment, the server 102 provides the animation engine 108 for user devices that do not independently have the ability to animate their own 3D graphical representations.
Embodiments of the transcoding component 110 convert the animated 3D graphical message into a form suitable for the recipient device. The form suitable for the recipient device may be based on device capability information and/or user preference information stored in user information database 106. For example, the recipient device may not have processing power or other capabilities to present an animated 3D graphical message, and thus, the transcoding component may convert the animated 3D graphical message from the sender device into a text message or other message form that the recipient device may present that differs from the animated 3D graphical message.
In embodiments, the transcoding component 110 can also transform the animated 3D graphical message into a form suitable for the recipient device based at least in part on certain communication channel conditions. For example, heavy traffic may dictate that the recipient device receive textual information instead of an animated 3D graphical message, since smaller text files may be sent faster than animated graphical files.
As another example, the transcoding component 110 can also transform or adjust individual features in the animated 3D graphical message itself. For example, the size or resolution of specific objects (3D images such as people, trees, etc.) in an animated 3D graphical message may be reduced in order to optimize transmission and/or playback during periods when network traffic is heavy. By reducing the size or resolution of an individual object, the file size and/or bit rate may be reduced.
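The per-object adjustment described above can be sketched as follows. This is a simplified illustration only; the function name, parameters, and units are assumptions, not the actual transcoding component 110.

```python
def reduce_object_resolution(scene_objects, network_load, threshold=0.8):
    """Halve the resolution of each individual 3D object when the
    measured network load exceeds a threshold; otherwise leave the
    scene untouched. Resolutions are in arbitrary texture units."""
    if network_load <= threshold:
        return dict(scene_objects)
    return {name: max(1, resolution // 2)
            for name, resolution in scene_objects.items()}
```

A refinement under the same idea would reduce only selected objects (e.g., background trees) while leaving the focal avatar at full resolution, lowering file size and bit rate with minimal perceived degradation.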
Embodiments of server 102 may include a 3D graphical messaging application 112a for use with user devices that do not independently have this application installed locally. That is, embodiments of the 3D graphical messaging application 112a provide authoring tools to create and/or select 3D graphical representations from a library, and further provide authoring tools to allow a user to remotely create a voice/text message to be used to animate a graphical representation if such authoring tools are not available at the sender device and/or if the user of the sender device wishes to use the remote 3D graphical messaging application 112a available at the server 102. Further details of embodiments of the 3D graphical messaging application 112 at the server and/or at the user device will be described later below.
The other components 114 may include any other type of component to support the operation of the server 102 for mobile 3D graphical messaging. For example, one of the components 114 may contain a Dynamic Bandwidth Adaptation (DBA) module, as disclosed in U.S. patent application Ser. No. 10/452,035, entitled "METHOD AND APPARATUS FOR DYNAMIC BANDWIDTH ADAPTATION," filed on May 30, 2003, assigned to the same assignee as the present application, and incorporated herein by reference in its entirety. The DBA module of an embodiment can, for example, monitor the status of the communication channel and instruct the transcoding component 110 to dynamically change the bit rate, frame rate, resolution, etc. of the signal being transmitted to the receiving device in order to provide the optimal signal to the receiving device. As explained above, the DBA may be used to make adjustments associated with the overall animated 3D graphical message and/or with any individual object present therein.
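A minimal sketch of the DBA behavior described above follows. The function and parameter names are hypothetical; the actual DBA module is defined in the incorporated application.

```python
def dba_adjust(current_bitrate_kbps, measured_throughput_kbps, headroom=0.8):
    """Return a new bit rate that fits within the measured channel
    capacity, leaving some headroom for jitter. Steps down immediately
    on congestion; steps up cautiously when the channel improves."""
    target = int(measured_throughput_kbps * headroom)
    if current_bitrate_kbps > target:
        return target          # channel degraded: drop to what fits
    return min(current_bitrate_kbps * 2, target)  # improve gradually
```

The asymmetry (cut immediately, grow slowly) is a common adaptation heuristic and is used here only to make the monitoring/instruction loop concrete.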
In another embodiment, one of the components 114 may comprise a media customization system, as disclosed in U.S. provisional patent application serial No. 60/693,381, entitled "APPARATUS, SYSTEM, METHOD, AND ARTICLE OF MANUFACTURE FOR AUTOMATIC RECORDING," filed on June 23, 2005, assigned to the same assignee as the present application, and incorporated herein by reference in its entirety. The disclosed media customization system may be used by embodiments of system 100 to provide supplemental information in context to accompany an animated 3D graphical message.
In one embodiment, the media customization system may be used to generate or select graphical components in the context of content to be converted into animated 3D graphical content. For example, text or voice input of a weather report may be examined to determine a graphical representation of clouds, sun, rain, etc. (e.g., trees blowing in the wind, raindrops falling, etc.) that may be used for an animated 3D graphical presentation of the weather.
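By way of illustration only, the contextual selection of graphical components in the weather example above might resemble the following keyword lookup. The template names and mapping are hypothetical, not part of the disclosed media customization system.

```python
# Hypothetical mapping from weather keywords to 3D animation templates.
WEATHER_TEMPLATES = {
    "rain": "raindrops_falling",
    "wind": "trees_blowing_in_wind",
    "sun": "bright_sun",
}

def select_templates(report_text):
    """Scan a textual weather report and return matching 3D templates."""
    text = report_text.lower()
    return [template for keyword, template in WEATHER_TEMPLATES.items()
            if keyword in text]
```

A voice input would first be converted to text (e.g., by a speech-to-text engine) before this kind of examination.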
In the embodiment of FIG. 1, the server 102 is communicatively coupled to one or more sender devices 116 and one or more recipient devices 118 via a communication network 120. The sender device 116 and the recipient device 118 may communicate (including communication of animated 3D graphical messages) with each other via the server 102 and the communication network 120. In an embodiment, either or both of the sender device 116 and the recipient device 118 may comprise wireless devices that may send and receive animated 3D graphical messages. In embodiments where one of the user devices does not have the ability or the option to present an animated 3D graphical message, the server 102 may transform the animated 3D graphical message into a form that is more suitable for that user device.
In an embodiment, some of these user devices need not be wireless devices. For example, one of these user devices may comprise a desktop PC with the capability to generate, transmit, receive, and play back animated 3D graphical messages via a wired, wireless, or hybrid communication network. Various types of user devices may be used in the system 100, including but not limited to cellular phones, PDAs, portable laptops, Blackberry devices, and so forth.
An embodiment of the sender device 116 includes a 3D graphical messaging application 112b that is similar to the 3D graphical messaging application 112a residing on the server 102. That is, the user device may be equipped with its own locally installed 3D graphical messaging application 112b to create/select a 3D graphical representation, generate a voice/text message whose content will be used in an animated 3D representation, animate a 3D graphical representation, and/or provide other functionality associated with animated 3D graphical messaging. Thus, such animated 3D graphical messaging capabilities may be provided at the user device, as an alternative or in addition to the server 102.
The sender device 116 may also include a display 124, such as a display screen, to present an animated 3D graphical message. The display 124 may include a rendering engine to present (including animation, if desired) the received 3D graphical message.
The sender device 116 may include an input mechanism 126, such as a keypad, to support operation of the sender device 116. The input mechanism 126 may be used, for example, to create or select a 3D graphical representation, provide user preference information, control playback (play, rewind, pause, fast forward, etc.) of animated 3D graphical messages, and so forth.
The sender device 116 may include other components 128. For example, the components 128 may include one or more processors and one or more machine-readable storage media having machine-readable instructions stored thereon that are executable by the processors. The 3D graphical messaging application 112b may be embodied as software or other such machine-readable instructions executable by a processor.
Embodiments of the recipient device 118 may contain the same/similar, different, fewer, and/or a greater number of components than the sender device 116. For example, the recipient device 118 may not have a 3D graphical messaging application 112b and therefore may use the 3D graphical messaging application 112a that resides on the server 102. As another example, the recipient device 118 may not have the ability to render or present an animated 3D graphical message and therefore may utilize the transcoding component 110 of the server 102 to convert the animated 3D graphical message from the sender device 116 into a more appropriate form. However, regardless of the specific capabilities of devices 116 and 118, embodiments allow such devices to communicate with each other, with server 102, and/or with content provider 122.
In one embodiment, the sender device 116 (and any other user devices in the system 100 with sufficient capabilities) may post the animated 3D graphical representation to a web blog, web portal, bulletin board, forum, on-demand location, or other network location hosted on the network device 130 that may be accessed by multiple users. For example, the user at the sender device 116 may wish to express his political views in the form of an animated 3D graphical message. Thus, instead of creating a message that is presented at the recipient device 118 as explained above, the sender device 116 may create the message so that it is accessible from the network device 130 as an animated 3D graphical message.
The network 120 may be any type of network suitable for communicating various types of messages between the sender device 116, the recipient device 118, the server 102, and other network devices. The network 120 may comprise wired, wireless, hybrid, or any combination of networks. The network 120 may also contain or be coupled to the internet or any other type of network, such as a WAN, LAN, VLAN, intranet, and so forth.
In an embodiment, the server 102 is communicatively coupled to one or more content providers 122. The content provider 122 provides various types of media to the server 102, which the server 102 may then communicate to the devices 116 and 118. For example, the content provider 122 may provide media that the server 102 transforms (or substantially leaves as is) to accompany the animated 3D graphical message as supplemental contextual content.
As another example, the content provider 122 (and/or the server 102 in cooperation with the content provider 122) may provide information to the devices 116 and 118 on a subscription basis. For example, the sender device 116 may subscribe to the content provider 122 to receive sports information, such as the latest scores, schedules, player profiles, and so forth. In such a case, embodiments provide the sender device 116 with the ability to receive such information in the form of an animated 3D graphical message, such as an animated 3D avatar representation of a popular sports announcer speaking a soccer score, an animated 3D graphical representation of a rotating scoreboard, or any other type of animated 3D graphical representation specified by the subscribing user. Further details of such embodiments will be described later below.
In yet another embodiment, the content provider 122 may take the form of an online service provider (such as a dating service) or other type of entity that provides services and/or applications to users. In such embodiments, individual users may have different types of client devices, including desktop and portable/wireless devices. Even a particular individual user can have a wireless device to receive voicemail messages, a desktop device to receive email or other online content, and various other devices to receive content and use applications based on the user's particular preferences.
Thus, embodiments allow individual users and their devices to receive animated 3D graphical content and/or to receive content in a form different from the original 3D graphical form. As one example, two users may communicate with each other using a dating service available from the content provider 122 or other entity. The first user may generate a text file with his profile and his own 2D graphical image and then pass this content to the content provider 122 via the server 102 for communication to potential matches. The first user may use a cellular phone to communicate the text file and a desktop PC to communicate the 2D image.
In an embodiment, the server 102 determines the capabilities and preferences associated with the matching second user. For example, if the second user is able and prefers to receive animated 3D graphical content, the server 102 may use information from the text file to transform and animate the content of the first user into an animated 3D graphical representation and then communicate the animated 3D graphical representation to the second user's device, whether a cell phone, PC, or other device of the second user's choice. Further, the second user may specify the form (whether 3D or non-3D) of the content to be received at any of her particular devices.
Further according to an embodiment, the first user may also specify preferences regarding how the second user may receive content. For example, a first user may specify that an animated 3D graphical representation of his profile be presented on the second user's cellular telephone, while a textual version of his profile is presented on the second user's PC. The first user may further specify the manner in which he prefers to communicate with the server 102, including in 3D or non-3D formats such as text, voice, and so forth.
In the above and/or other example implementations, the conversion of content from one form to another may be performed so that the end user experience remains as good as possible. For example, if an end user's client device is capable of receiving and presenting animated 3D content, that type of content may be delivered to the client device. However, if the client device is not capable of receiving/rendering animated 3D content, then the server 102 may convert the content to be delivered into the "next closest thing," such as video content. If the client device is not capable of receiving, presenting, or using video content, the server 102 may provide the content in some other form as appropriate, and so on.
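The "next closest thing" fallback described above can be expressed as a simple ordered search. The form names below are illustrative assumptions only.

```python
# Richest form first; the server falls back down this chain until it
# reaches a form the client device can receive and present.
FALLBACK_ORDER = ["animated_3d", "2d_video", "audio", "text"]

def next_closest_form(supported_forms):
    """Return the richest delivery form the client device supports."""
    for form in FALLBACK_ORDER:
        if form in supported_forms:
            return form
    return "text"  # plain text as the last resort
```

Ordering the chain by richness is what keeps the end user experience "as good as possible" for each device.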
In yet another embodiment, the user may interactively change the animated 3D graphical content during presentation. For example, a sender and/or receiver of content in an online gaming environment may choose to change the characteristics of a 3D graphical component mid-game, such as making a character smaller or larger, or perhaps even removing a 3D aspect of the character or of the entire game. Furthermore, the user may specify the type of game format (3D or not) for different devices used by the same user.
FIGS. 2-4 are flow diagrams illustrating the operation of embodiments with respect to animated 3D graphical messaging. It will be appreciated that the various operations shown in the figures need not occur in the exact order shown, and that various operations may be added, removed, modified, or combined in various embodiments. In an example embodiment, at least some of the operations depicted may be implemented as software or other machine-readable instructions stored on a machine-readable medium and executable by a processor. Such processors and machine-readable media may reside in the server 102 and/or in any of the user devices.
FIG. 2 is a flow diagram of a method 200 that may be used at the sender device 116. At block 202, a user generates a voice message, text message, or other type of original message. For example, a text message may be generated by typing a message using the alphanumeric keypad of the input mechanism 126; a voice message may be generated by using the recording microphone of the input mechanism 126; an audio-visual message may be generated using the camera of the input mechanism 126; or other message generation techniques may be used. In one embodiment, one of the other components 128 may include a transformation engine to transform text messages into voice messages, to transform voice messages into text messages, or to otherwise obtain user messages in an electronic form that may be used to drive 3D animations.
At block 204, the user uses the 3D graphical messaging application 112b at the sender device, or remotely accesses the 3D graphical messaging application 112a resident at the server 102, to obtain a 3D graphical representation or other 3D template. For example, with the advent of camera-enabled mobile devices, a device with sufficient processing power may capture images and video with the camera and convert them into a 3D graphical representation at block 204. For example, a user may create his own 3D avatar representation by capturing his image with the mobile camera and using the 3D graphical messaging application to convert the captured video or still image into a 3D graphical representation. Again, the 3D avatar representation of the user is just an example. The 3D avatar representation may be of any other imaginary or real person or object; indeed, the 3D graphical representation need not take the form of an avatar at all, and may instead contain a scene, surrounding environment, or other object selected by the user.
The user may then morph, personalize, customize, etc. the 3D graphical representation. In another embodiment, the user may select a fully pre-constructed 3D graphical representation from a local or remote library, such as at server 102 (and/or select an object of the 3D representation, such as hair, eyes, lips, trees, clouds, etc., for subsequent construction into a complete 3D graphical representation).
If the capabilities of the sender device 116 are sufficient to provide animation, as determined at block 206, an animated 3D graphical message may be fully constructed on the sender device at block 210 and then sent to the server 102 at block 212. Otherwise, the sender device 116 sends the message and the 3D graphical representation to the server 102 for animation at block 208. For example, if the 3D graphical messaging application 112b is not resident on the sender device 116, the sender device 116 may instead send a communication (such as, for example, an email) to the server 102 containing the text version of the message, the address of the recipient device 118 (e.g., a telephone number or IP address), and the selected 3D graphical representation.
Thus, using the method 200 of FIG. 2, one embodiment allows the user of the sender device 116 to provide an animated 3D graphical message that speaks a voice message or a text message that has been converted to speech using a text-to-speech engine or other suitable conversion engine. Thus, the 3D graphical messaging application 112: 1) allows a user to select or create a 3D graphical representation from a library of pre-authored 3D graphical representations; 2) allows a user to create a traditional voice message or text message; and then 3) either sends the 3D graphical representation and the voice/text message to a remote server application that animates the selected 3D graphical representation using the voice/text message, or animates the 3D graphical representation locally.
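The sender-side branching of method 200 can be sketched as follows. This is a simplified illustration only; the dictionary fields and the lip-sync placeholder are hypothetical stand-ins for blocks 206-212.

```python
def send_3d_message(message_text, representation, device_can_animate):
    """Build the payload the sender device would transmit to the server:
    a finished animated 3D message if the device can animate (blocks
    210/212), or the raw message plus 3D representation otherwise
    (block 208)."""
    if device_can_animate:
        return {"type": "animated_3d",
                "representation": representation,
                "animation": "lipsync:" + message_text}
    return {"type": "needs_animation",
            "message": message_text,
            "representation": representation}
```

Either payload reaches the server 102; the two cases differ only in where the animation work is performed.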
FIG. 3 is a flow diagram illustrating a method 300 that may be performed at the server 102. At block 302, the server 102 receives either an animated 3D graphical message from the sender device 116 or a message and a (non-animated) 3D graphical representation from the sender device 116. If the sender device 116 did not animate the 3D message/graphic, as determined at block 304, the animation engine 108 of the server 102 provides the animation at block 306.
The animation at block 306 may be provided from a voice message received from the sender device 116. Alternatively or additionally, the animation at block 306 may be provided from a text message that is converted to a voice message. Other animation message sources may also be used.
Whether the animation was provided by the sender device 116 or by the animation engine 108, at block 308 the server 102 determines the capabilities and/or user preferences of the recipient device 118. For example, if the recipient device 118 does not have a locally installed 3D graphical messaging application 112b, the transcoding component 110 of the server 102 may instead convert the animated 3D graphical message, at block 312, into a form suitable for the capabilities of the recipient device 118. For example, if the recipient device 118 is a mobile phone with an audio- and video-enabled application, the server 102 may convert the animated 3D graphical message to 2D video with an audio message for delivery to the recipient device 118 at block 314. This is merely one example of a transformation that may be performed to provide a message form suitable for the recipient device 118 so that the message may be received and/or presented by the recipient device 118.
If the recipient device 118 does support animated 3D graphical messages, the animated 3D message created at block 306 or received from the sender device 116 is sent to the recipient device 118 at block 314. Supplemental content may also be sent to the recipient device 118 at block 314. For example, if the animated 3D graphical message pertains to an upcoming football game, the supplemental content may include a weather forecast for the day of the game.
Sending the animated 3D graphical message to the recipient device at block 314 can be performed in several ways. In one embodiment, the animated 3D graphical message can be delivered in the form of a downloadable file, such as a 3D graphical file or a compressed video file. In another embodiment, the animated 3D graphical message can be delivered by streaming, such as by streaming streamable 3D content or compressed video frames to the recipient device 118.
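Putting blocks 302-314 together, the server-side handling of method 300 might be sketched as follows. The names and dictionary fields are illustrative assumptions, not the actual server components.

```python
def server_process(incoming, recipient_supports_3d, prefers_streaming):
    """Animate the message if the sender did not (blocks 304/306),
    transcode it for the recipient if needed (blocks 308/312), and
    choose a delivery mode (block 314)."""
    if incoming["type"] == "needs_animation":
        message = {"type": "animated_3d",
                   "payload": "animate:" + incoming["message"]}
    else:
        message = dict(incoming)
    if not recipient_supports_3d:
        # Transcode to the closest form the recipient can present.
        message = {"type": "2d_video", "payload": message["payload"]}
    delivery = "stream" if prefers_streaming else "download"
    return message, delivery
```

The delivery choice at the end corresponds to the downloadable-file and streaming options described above.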
FIG. 4 is a flow diagram of a method 400 performed at the recipient device 118 to present a message that is an animated 3D graphical message and/or a message transformed therefrom. At block 402, the recipient device 118 receives a message from the server 102 (or from some other network device communicatively coupled to the server 102).
If the recipient device 118 needs to access or obtain additional resources to present the message, the recipient device 118 obtains such resources at block 404. For example, if the server 102 has not already determined that the recipient device 118 requires such additional resources, the recipient device 118 may itself download a player, an application, supporting graphics and text, or other content from the internet or another network source to present or enhance the presentation of the message. In general, the recipient device 118 may not need to obtain such additional resources if the device capability information stored at the server 102 is complete and accurate, and because the server 102 converts the message into a form suitable for presentation at the recipient device 118.
At block 406, the recipient device 118 presents the message. If the message is an animated 3D graphical message, the message is visually presented on the display of the recipient device 118, accompanied by appropriate audio. If the user so desires, the animated message may also be accompanied by a textual version of the message, such as a type of "closed captioning," so that the user can read the message and listen to the message from the animated graphic.
As explained above, the presentation at block 406 may involve playback of a downloaded file. In another embodiment, the presentation may take the form of a streaming presentation.
At block 408, the recipient device 118 may send device data (such as data pertaining to dynamically changing characteristics of its capabilities, such as power level, processing capacity, etc.) and/or data indicative of channel status to the server 102. In response to such data, the server 102 may make DBA adjustments to ensure that the message presented by the recipient device 118 is optimal.
In one embodiment, the adjustment may involve changing a characteristic of the animated 3D graphical content provided, such as changing the overall resolution of the entire content, or changing the resolution of only individual components within the 3D graphical content. In another embodiment, the adjustment at the server 102 may involve switching from one output file to a different output file (e.g., a pre-rendered file). For example, the same content may be embodied in different animated 3D graphical content files (e.g., having different resolutions, bit rates, color formats, etc.), or perhaps even embodied in forms other than animated 3D graphical forms. Based on the needed adjustment, the server 102 and/or the recipient device 118 may choose to seamlessly switch from the current output file to a different output file.
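A minimal sketch of the output-file switching described above follows; the file descriptors and field names are hypothetical.

```python
def pick_output_file(available_files, reported_throughput_kbps):
    """Choose the highest-bit-rate pre-rendered file that fits within
    the throughput reported by the recipient device; if none fits,
    fall back to the smallest file available."""
    usable = [f for f in available_files
              if f["bitrate_kbps"] <= reported_throughput_kbps]
    if not usable:
        return min(available_files, key=lambda f: f["bitrate_kbps"])
    return max(usable, key=lambda f: f["bitrate_kbps"])
```

Run against the device data reported at block 408, this kind of selection lets the server switch files seamlessly as channel conditions or device capabilities change.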
Various embodiments are described herein with particular reference to the type of message (animated 3D graphical messages, non-animated messages such as voice or text, non-3D messages such as 2D messages, etc.) and the network device that generates or processes such messages. It will be appreciated that these descriptions are merely illustrative.
For example, the sender device 116 may generate a text or voice message and then provide the text or voice message to the server 102 — the original message provided by the sender device 116 need not be graphical in nature. The server 102 may determine that the recipient device 118 has the ability to animate the message and also provide its own 3D graphics. Thus, the server 102 may communicate a text or voice message to the recipient device 118, and the recipient device 118 may then animate the desired 3D graphics based on the received message.
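The routing decision in this example can be sketched as follows. This is a hypothetical illustration: the capability record, device identifier, and `render_3d` stand-in are all invented for the sketch, assuming only that the server consults stored capability data to decide whether the recipient device animates the message locally.

```python
# Hypothetical sketch: the server either forwards the raw text/voice message
# (letting a capable recipient device animate it locally) or pre-renders the
# animated 3D content server-side for a less capable device.

# Illustrative stand-in for the server's stored device capability data.
DEVICE_CAPS = {"recipient-118": {"can_animate_3d": True}}


def render_3d(message: str) -> str:
    """Toy stand-in for server-side rendering of an animated 3D message."""
    return f"<animated:{message}>"


def route_message(device_id: str, message: str):
    """Return the delivery form and payload for the given recipient."""
    caps = DEVICE_CAPS.get(device_id, {})
    if caps.get("can_animate_3d"):
        return ("raw", message)              # device animates locally
    return ("rendered", render_3d(message))  # server pre-renders
```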
Fig. 5 is a flow diagram of a method 500 of providing an animated 3D graphical message to a client device, such as the sender device 116 and/or the recipient device 118, based on a subscription model. In particular, embodiments of method 500 relate to techniques for providing content from the content provider 122 to client devices in the form of animated 3D graphical messages, and/or in a form suited to the client devices based on device capabilities, channel conditions, and/or user preferences.
At block 502, the server 102 receives content from the content provider 122. Examples of content include, but are not limited to, audio, video, 3D presentations, animations, text feeds such as stock quotes, news and weather broadcasts, satellite images and sports feeds, internet content, games, entertainment, advertisements, or any other type of multimedia content.
One or more client devices, such as the sender device 116 and/or the recipient device 118, may have subscribed to receive such content. In addition, a subscribing client device may provide information to the server 102 regarding how it prefers to receive such content, its device capabilities, and other information. For example, the client device may provide information regarding whether it has the ability and/or preference to receive content in the form of animated 3D graphical messages. Such a message may, for example, contain an animated 3D graphical image of a popular sports announcer or other individual announcing the scores of a football match.
At block 504, the server 102 determines the message form for the subscribing client device and may also confirm the subscription status of the client device. In one embodiment, this determination at block 504 may involve accessing data stored in the user information database 106. Alternatively or additionally, the client device may be queried for this information.
Determining the message form may, for example, comprise checking parameters of messages already provided by the subscribing user. The user may also customize a particular 3D template for presenting the content, so that the user receives the content in a form, at a time, and under other conditions specified by the user.
If the client device does not have special preferences or conversion needs, as determined at block 506, the server 102 sends the content to the client device at block 510. On the other hand, if the client device does have special preferences or conversion needs for the content, then the content is converted at block 508 before being sent to the client device at block 510.
For example, the client device may specify a desire to receive the entire textual content in the form of an animated 3D graphical message. Thus, the server 102 may convert the textual content to speech and then use the speech to drive animation of the desired 3D graphical representation.
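The speech-driven animation step described above can be sketched at a high level as follows. This is a toy illustration under stated assumptions: a real system would use an actual text-to-speech engine and a full phoneme-to-viseme mapping, whereas every name, mapping entry, and timing value below is hypothetical.

```python
# Hypothetical sketch: text is converted to timed speech units, and the
# per-unit timing drives mouth-shape ("viseme") keyframes of a 3D character.


def text_to_phonemes(text):
    """Toy stand-in for a TTS front end: one 'phoneme' per letter,
    each with a fixed illustrative duration in seconds."""
    return [(ch, 0.08) for ch in text.lower() if ch.isalpha()]


# Illustrative partial mapping; unknown phonemes fall back to "neutral".
PHONEME_TO_VISEME = {"a": "open", "o": "round", "m": "closed"}


def build_keyframes(text, fps=25):
    """Map phoneme timings onto (frame_index, viseme) animation keyframes."""
    keyframes, t = [], 0.0
    for phoneme, duration in text_to_phonemes(text):
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        keyframes.append((round(t * fps), viseme))
        t += duration
    return keyframes
```

The resulting keyframe list would then drive the desired 3D graphical representation in step with the synthesized speech.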
As another example, a client device may wish to receive textual content in the form of animated 3D graphical messages, while other types of content need not be delivered in animated 3D form. Thus, in embodiments, messages and other content may be provided to client devices in mixed forms, where a particular single client device may be capable of receiving content in different forms, and/or where multiple different client devices operated by the same (or different) users may be capable of receiving content in respective different forms.
Of course, it is to be appreciated that the above-described animation and conversion operations need not be performed at the server 102. As previously described, a client device with sufficient capabilities may alternatively or additionally perform animation, conversion, or other related operations, rather than having such operations performed at the server 102.
In embodiments, supported by the above-described features and functions, certain types of media files may provide animated 3D graphical content that is derived from input data that is not necessarily visual in nature. Examples of such files include, but are not limited to, Third Generation Partnership Project (3GPP) files.
For example, the input data may be in the form of text that provides a weather forecast. Embodiments examine the input text, such as by parsing individual words, and associate the parsed words with graphical content, such as graphical representations of clouds, rain, wind, weather, a person standing with an umbrella, and so forth. At least some of this graphical content may be in the form of a 3D graphical representation. Next, image frames are generated that depict the movement of the graphical content (either the entire graphic, or portions thereof such as lips) from one frame to the next, thereby providing animation.
The frames are assembled together to form an animated 3D graphical representation and encoded into a 3GPP file or other type of media file. The media files are then delivered, such as by downloading or streaming, to a user device that is capable of receiving and presenting the files, and/or has preferences that favor receiving such types of files.
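The text-parsing stage of the weather example above can be sketched as follows. The word-to-asset mapping, asset file names, and per-asset frame count are illustrative assumptions; the final encoding of the frames into a 3GPP or other media file is left out of the sketch.

```python
# Hypothetical sketch: parse the input text, associate recognized words with
# graphical assets, and emit an ordered frame plan that an encoder could then
# pack into a 3GPP (or other) media file for download or streaming.

# Illustrative mapping from forecast words to 3D graphical assets.
WORD_TO_ASSET = {
    "rain": "rain_cloud.obj",
    "wind": "wind_lines.obj",
    "umbrella": "person_with_umbrella.obj",
}


def plan_frames(forecast_text, frames_per_asset=5):
    """Return (frame_index, asset) pairs animating each matched asset in turn."""
    assets = [WORD_TO_ASSET[word]
              for word in forecast_text.lower().split()
              if word in WORD_TO_ASSET]
    plan = []
    for i, asset in enumerate(assets):
        for f in range(frames_per_asset):
            plan.append((i * frames_per_asset + f, asset))
    return plan
```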
Various embodiments may use several techniques to create and animate a 3D graphical representation. Examples of these techniques are disclosed in U.S. Patent Nos. 6,876,364 and 6,853,379. Further, various embodiments may use systems and user interfaces available on wireless user devices to facilitate or enhance communication of animated 3D graphical content. An example is disclosed in U.S. Patent No. 6,948,131. All of these patents are assigned to the same assignee as the present application and are incorporated herein by reference in their entirety.
All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the application data sheet, are incorporated herein by reference, in their entirety.
While specific embodiments of, and examples for, the system and method for mobile 3D graphical communication are described herein for illustrative purposes, various equivalent modifications are possible without departing from the spirit and scope of the invention, as those skilled in the relevant art will recognize after reviewing the description. The various embodiments described above can be combined to provide further embodiments. Aspects of the embodiments can be modified, if necessary, to employ systems, circuits and concepts of the various patents, applications and publications owned by the assignee of the present application (and/or by others) to provide yet further embodiments.
For example, software or other machine-readable instructions stored on a machine-readable medium may implement at least some of the features described herein. Such machine-readable media may reside at a sender device, a receiver device, a server or other network location, or any suitable combination thereof.
These and other changes can be made to the embodiments in light of the above description. In general, in the following claims, the terms used should not be construed to limit the invention to the specific embodiments disclosed in the specification, the abstract, and the claims. Accordingly, the invention is not limited by the disclosure, but instead its scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of patent claim interpretation.