BACKGROUND

Multi-screen solutions display second-screen content on second-screen devices while a user watches first-screen content (e.g., a television show) on a first-screen device (e.g., a television). Second-screen applications allow users to interact with their second-screen devices while viewing first-screen content on first-screen devices. In one example, a user may watch a television show on a television. Then, the user can use his/her second-screen device to access second-screen content, such as supplemental content for the television show or advertisements, while watching the television show. In one example, the first-screen content may be delivered via a cable television network to the television. The user may then use a content source's application on the second-screen device to access the second-screen content via another communication medium, such as the Internet. For example, while watching the television show on a television network, the user may open the television network's application to request the second-screen content via the Internet.
While second-screen use has increased, the overall uptake has been limited. Several issues may be limiting the uptake. For example, the user typically has to download an application to view the second-screen content, and in some cases the user needs to download a different application for each different content source. For example, a first television network has a first application and a second television network has a second application. Also, there may be problems with the synchronization between the first-screen content and the second-screen content. The second-screen content should be output in coordination with the first-screen content. However, there may be latency in retrieving content for the second screen in response to a first-screen event, and there may be additional latency when the second-screen device has to connect via a communication network different from the one delivering the first-screen content. The latency may cause problems with some content, such as real-time programs (e.g., sports), where latency in the synchronization is not acceptable.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a system for delivering first-screen content and second-screen content using multi-view coding (MVC) extensions according to one embodiment.
FIG. 2 depicts a more detailed example of a head-end according to one embodiment.
FIG. 3 depicts a more detailed example of a gateway for de-multiplexing the content stream according to one embodiment.
FIG. 4 depicts a more detailed example of a second-screen processor according to one embodiment.
FIG. 5 depicts a simplified flowchart of a method for delivering second-screen content using MVC extensions according to one embodiment.
FIG. 6 illustrates an example of a special purpose computer system configured with the multi-view delivery system, the multi-view stream processor, and the second-screen processor according to one embodiment.
DETAILED DESCRIPTION

Described herein are techniques for a second-screen delivery system using multi-view coding extensions. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of particular embodiments. Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
Particular embodiments provide a second-screen experience for users on a second-screen device. A system uses multi-stream capabilities designed for delivering multi-view content to a first-screen device. However, the system uses the multi-stream capabilities to enable the second-screen experience. For example, encoding standards have incorporated multi-stream capabilities. The multi-stream capabilities allow a system to deliver multiple video streams to a single destination. Typically, a multi-view coding (MVC) extension is used to provide multiple views to a first-screen device. For example, a three-dimensional (3D) movie includes a main video stream and another stream for a second view. The main video stream and second view are sent to the first-screen device, which combines the second view with the main video stream to create the 3D picture on the first-screen device. The second view is encoded into a single stream with the main video stream using the MVC extension.
Particular embodiments use the MVC extension to provide second-screen content along with the first-screen content. In one embodiment, a head-end multiplexes the first-screen content with the second-screen content into a single content stream. The second-screen content is added to the video stream according to the MVC extension requirements. At the user end, such as at a gateway, instead of sending the first-screen content and second-screen content to the first-screen device, the gateway de-multiplexes the first-screen content and the second-screen content. The gateway can then send the first-screen content to the first-screen device while caching the second-screen content. When the gateway determines that the second-screen content should be displayed on the second-screen device, the gateway can send the second-screen content to the second-screen device for display on the second-screen of the second-screen device.
FIG. 1 depicts a system 100 for delivering first-screen content and second-screen content using MVC extensions according to one embodiment. System 100 includes a head-end 102 and a customer premise location 104. Head-end 102 may be servicing multiple customer premise locations 104 (not shown). Each customer premise location 104 may be offered a personalized second-screen experience using methods described herein.
Head-end 102 may deliver first-screen content to customer premise location 104. In one embodiment, head-end 102 is part of a cable television network that delivers video content for different television networks via broadcast and also on demand. The first-screen content may be delivered via the cable television network using a variety of communication protocols. Different communication schemes may be used, such as quadrature amplitude modulation (QAM) or Internet Protocol (IP), to deliver the video content. Although a head-end and a cable network are described, other types of networks that can deliver content using the MVC extension can be used.
In one embodiment, a multi-view delivery system 106 includes multiple computing devices that send first-screen content to customer premise location 104. Customer premise location 104 may include a gateway 112, which is a device that interfaces between an outside wide area network 103 (e.g., a cable network and/or the Internet) and a local area network 105 within location 104. A first-screen (1st screen) device 108 and a set of second-screen devices 110 are connected to gateway 112 via local area network 105. First-screen device 108 may be considered a primary screen, such as a television, that a user is primarily watching. For example, a user may watch a television show on first-screen device 108. Second-screen (2nd screen) devices 110 may be secondary screens on which supplemental or second-screen content can be viewed while a user is watching first-screen device 108. Examples of second-screen devices 110 include mobile devices, such as smartphones, tablets, and laptop computers.
Multi-view delivery system 106 may deliver the first-screen content destined for first-screen device 108 for display on a first screen. Also, multi-view delivery system 106 may deliver second-screen content destined for one or more of second-screen devices 110. The second-screen content may be considered supplemental content to the first-screen content. For example, the second-screen content may include advertisements, additional information for the first-screen content, promotional coupons or offers, and other types of information.
An encoding standard, such as H.264, High Efficiency Video Coding (HEVC), or other similar protocols, allows multiple views to be sent in a single video stream. For example, the Joint Video Team of the International Telecommunication Union (ITU-T) Video Coding Experts Group (VCEG) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) standardized an extension of the H.264/MPEG-4 Advanced Video Coding (AVC) standard. The extension, referred to as multi-view video coding, is Amendment 4 to the H.264/AVC standard. The extension allows multiple video streams to be multiplexed via different P frames into a single stream. Other extensions may be used, such as supplemental enhancement information (SEI), which may be used in the standard to allow metadata to be sent with the first-screen content, and also other alternative approaches to providing multi-view capabilities.
One common use of the extension is to provide at least two multi-view video streams to allow a single screen to display three-dimensional video. However, particular embodiments may leverage the extension to provide second-screen content. In this case, multi-view delivery system 106 is enhanced to enable delivery of multi-view streams that include first-screen content and second-screen content.
Also, gateway 112 is specially configured to process the second-screen content that is sent using the MVC extension. For example, conventionally, a gateway would have received the two multi-view streams using the MVC extension in a single content stream. Then, the gateway would have sent both multi-view streams to only first-screen device 108 (e.g., not to any second-screen devices). This is because the MVC extension is meant to provide multi-views on a single device. However, gateway 112 uses a multi-view stream processor 114 to de-multiplex the first-screen content and the second-screen content that is included in a content stream from multi-view delivery system 106. Multi-view stream processor 114 may analyze the content stream to determine where to send the different streams. In some embodiments, the content streams may be intended entirely for first-screen device 108, such as when a 3D movie is being watched and the multi-view content includes the additional view. In this case, multi-view stream processor 114 may send both the first-screen content and the multi-view content to first-screen device 108. For example, both of the multi-view streams are merged again and encoded using an encoder, and then sent to first-screen device 108.
When using the MVC extension to enable the second-screen environment, an encoder 308 (shown in FIG. 3) re-encodes the first-screen content and then sends the first-screen content to first-screen device 108, which can then display the first-screen content. In one embodiment, a set top box (STB) 116 may receive the first-screen content, decode the content, and then display the content on first-screen device 108. However, instead of sending the multi-view content to first-screen device 108, multi-view stream processor 114 may determine whether the multi-view stream is for the first screen or the second screen. The determination may be made based on metadata associated with the multi-view stream that may indicate whether the multi-view content is first-screen content or second-screen content.
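The routing determination described above can be illustrated with a short sketch. The sketch is illustrative only: the `view_role` metadata field and the dictionary representation of a decoded view are assumptions for illustration, not part of the MVC specification or of any particular implementation of multi-view stream processor 114.

```python
def route_views(views):
    """Split decoded multi-view streams into first-screen and second-screen lists.

    Each view is modeled as a dictionary; a view tagged with the assumed
    "second-screen" role is routed toward second-screen devices, while
    untagged views default to the first screen, matching conventional
    MVC behavior.
    """
    first_screen, second_screen = [], []
    for view in views:
        if view.get("view_role") == "second-screen":
            second_screen.append(view)
        else:
            first_screen.append(view)
    return first_screen, second_screen

# Example: a primary view plus one auxiliary view tagged for the second screen.
views = [
    {"view_id": 0, "view_role": "primary"},
    {"view_id": 1, "view_role": "second-screen"},
]
first, second = route_views(views)
```

In the conventional 3D case, no view carries the "second-screen" tag, so both views fall into the first-screen list and are merged for first-screen device 108.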
When the multi-view content is second-screen content, a second-screen processor 118 may then send the second-screen content to second-screen device 110 at the appropriate time. For example, multi-view stream processor 114 may first cache the second-screen content. Then, at the appropriate time, second-screen processor 118 synchronizes the display of the second-screen content with the first-screen content being displayed on first-screen device 108. For example, an encoder encodes the second-screen content for sending to second-screen devices 110. In other embodiments, a user may request the second-screen content when desired. Other methods of providing the second-screen content may be appreciated and will be described in more detail below.
Accordingly, the MVC extension is used to send both the first-screen content and the second-screen content in multiple views in a content stream. An intelligent gateway 112 is used to parse the content stream to separate out the first-screen content and the second-screen content based on metadata. The second-screen content can be sent to second-screen devices 110 without being sent to first-screen device 108. Particular embodiments use gateway 112 because gateway 112 sits between head-end 102 and first-screen device 108/second-screen devices 110. Gateway 112 has the processing power to decode the stream and determine whether one view is for the second-screen devices. Gateway 112 can then re-encode the streams and send separate content streams for the first-screen content and the second-screen content to the appropriate destinations. This allows first-screen device 108/second-screen devices 110 to not have to be changed to handle MVC extensions for providing second-screen content. Otherwise, either first-screen device 108 or second-screen devices 110 would have had to receive the single stream with both the first-screen content and the second-screen content and determine how to process the second-screen content. Gateway 112 sits naturally in the path to first-screen device 108 and second-screen devices 110, and can determine how to send the second-screen content.
Head-End Encoding

As mentioned above, head-end 102 may multiplex the first-screen content and the second-screen content together into a content stream. FIG. 2 depicts a more detailed example of head-end 102 according to one embodiment. A primary stream processor 202 and a supplemental stream processor 204 may determine the first-screen content and the second-screen content, respectively, to add to the single content stream. Although only one content stream is described, it will be recognized that multiple content streams may be processed, such as content streams for multiple television broadcasts. Any number of the broadcasts may include second-screen content.
Primary stream processor 202 may receive first-screen content 206 from other content sources in real time via satellite or other networks. In other embodiments, primary stream processor 202 may retrieve the first-screen content from storage 205, which may be cache memory or other types of long-term storage. Although one content stream is described, primary stream processor 202 may be sending multiple content streams for multiple television channels to locations 104. Some of the television channels may have related second-screen content, and some may not.
Supplemental stream processor 204 may receive second-screen content from a second-screen content provider 208. Second-screen content provider 208 may include an advertiser, a service provider, a retailer, or even a cable television provider. Also, second-screen content provider 208 may be the same content source that provided the first-screen content. In one embodiment, second-screen content provider 208 may provide second-screen content to head-end 102, which is stored in storage 205 at 210.
Second-screen content provider 208 can now target specific user devices, and service providers can also provide enhancements to the first-screen content. For example, a service provider could provide the player statistics for a sporting event video stream. Supplemental stream processor 204 may then determine which second-screen content is appropriate to send with the first-screen content. In one example, supplemental stream processor 204 determines second-screen content targeted to a user of second-screen device 110 or first-screen device 108. Once the second-screen content is determined, supplemental stream processor 204 sends the second-screen content to a multiplexer 212.
Multiplexer 212 receives the first-screen content and the second-screen content, and multiplexes them together into a single multi-view content stream. Multiplexer 212 may multiplex the first-screen content and the second-screen content based on the MVC extension. Also, metadata to identify the second-screen content as being “second-screen content” or for a specific second-screen device 110 may be added to the content stream. The metadata may be needed because the MVC extension is being used for a purpose other than sending multi-views to a single device. The metadata allows gateway 112 to determine when second-screen content is included in the single content stream. Then, encoder 214 may encode the first-screen content and the second-screen content together into an encoded content stream. In one embodiment, encoder 214 encodes the second-screen content using the MVC extension. In this case, the second-screen content is sent as a multi-view stream with the first-screen content. Encoder 214 can then send the single encoded content stream through network 103 to customer premise location 104.
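The multiplexing and tagging performed by multiplexer 212 can be sketched as follows. This is a simplified model, not actual MVC bitstream syntax: frames are modeled as opaque payloads, and the metadata field names (`content_type`, `target_device`) are assumptions for illustration.

```python
def multiplex(first_frames, second_frames, target_device=None):
    """Interleave first- and second-screen frames into one tagged stream.

    View 0 carries the first-screen content; view 1 carries the
    second-screen content along with metadata that lets gateway 112
    recognize it as second-screen content.
    """
    stream = []
    for first, second in zip(first_frames, second_frames):
        stream.append({"view_id": 0, "payload": first})
        stream.append({
            "view_id": 1,
            "payload": second,
            "metadata": {
                "content_type": "second-screen",
                "target_device": target_device,  # optional device targeting
            },
        })
    return stream

stream = multiplex(["show-frame-0", "show-frame-1"],
                   ["ad-frame-0", "ad-frame-1"],
                   target_device="tablet-1")
```

The single tagged stream would then be encoded by encoder 214 and sent through network 103 as one content stream.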
Gateway De-Multiplexing

FIG. 3 depicts a more detailed example of gateway 112 for de-multiplexing the content stream according to one embodiment. Gateway 112 receives the encoded content stream that includes the multiplexed first-screen content and second-screen content. Because of the multiplexing, a de-multiplexer 302 de-multiplexes the content stream to separate the multi-view streams. A decoder 304 can then decode the first-screen content and the second-screen content.
Multi-view stream processor 114 can then determine whether the multi-view streams include first-screen content and second-screen content, or are conventional multi-view streams. For example, depending on the metadata associated with the second-screen content, multi-view stream processor 114 may prepare the second-screen content for forwarding to second-screen device 110. In other embodiments, the second-screen content may actually be meant for first-screen device 108 (in this case, it would not be referred to as second-screen content and is actually multi-view content being traditionally used). If the content stream included traditional multi-view content, then the first-screen content and the second-screen content may be recombined into a single stream, and then an encoder 308 re-encodes the single stream, which is sent to first-screen device 108.
When the content stream includes first-screen content and second-screen content, multi-view stream processor 114 determines where to send the first-screen content and the second-screen content. For example, multi-view stream processor 114 sends the first-screen content to set top box 116 (encoded by encoder 308). Then, multi-view stream processor 114 determines where to send the second-screen content. In this embodiment, multi-view stream processor 114 stores the second-screen content in cache memory 306. Although cache memory is described, any type of storage may be used.
Once the second-screen content has been stored in cache memory 306, second-screen processor 118 may determine when and where to send the second-screen content to second-screen device 110. Encoder 308 (this may be the same encoder used to encode the single stream with multi-views or a different encoder) may encode the second-screen content into a stream. This stream is different in that it only includes second-screen content and is not multiplexed with first-screen content. This type of content stream may be in a format that second-screen device 110 is configured to process (that is, second-screen device 110 does not have to de-multiplex a content stream with both first-screen content and second-screen content). Encoder 308 then sends the second-screen content to second-screen device 110. It should be noted that encoding may be performed at any time before delivery to second-screen device 110.
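The gateway-side separation described in this section can be sketched as follows. The sketch assumes each de-multiplexed unit carries a hypothetical `content_type` metadata tag; decoding and re-encoding are elided.

```python
from collections import deque

def separate_content(content_stream):
    """Separate a tagged content stream into first-screen units and a
    second-screen cache, mirroring de-multiplexer 302 and cache memory 306.

    Units tagged as second-screen content are held in a queue until
    second-screen processor 118 decides when to deliver them; all other
    units are forwarded toward first-screen device 108.
    """
    first_screen = []
    second_screen_cache = deque()
    for unit in content_stream:
        metadata = unit.get("metadata", {})
        if metadata.get("content_type") == "second-screen":
            second_screen_cache.append(unit)   # cached, not sent to the TV
        else:
            first_screen.append(unit)          # re-encoded for the STB
    return first_screen, second_screen_cache

units = [
    {"payload": "show-frame-0"},
    {"payload": "recipe-card", "metadata": {"content_type": "second-screen"}},
]
first_out, cached = separate_content(units)
```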
Second-Screen Delivery

FIG. 4 depicts a more detailed example of second-screen processor 118 according to one embodiment. Second-screen processor 118 may deliver the second-screen content in different ways. For example, second-screen processor 118 may forward all second-screen content to second-screen device 110 with metadata that is selected based on how and when to display the second-screen content. Or, second-screen processor 118 may detect different events (e.g., in the first-screen content) and send the second-screen content in a synchronized manner.
Second-screen processor 118 also may determine which second-screen devices 110 are connected to gateway 112 and determine which second-screen device 110 should be the destination for the second-screen content. For example, second-screen processor 118 maintains a list of devices within location 104 that are associated with a user or users. This information may be determined via a user profile 408 for the user (or multiple user profiles for multiple users). The user profile information may be found in a subscription profile (when using an application supported by the multiple-system operator (MSO)) or provided by a user. Also, second-screen processor 118 may include a second-screen device detector 402 to detect which second-screen devices 110 are active in customer premise location 104. Second-screen device detector 402 may also track which applications 404 are being used by second-screen devices 110.
In detecting which second-screen devices 110 are active, second-screen device detector 402 may exchange messages with second-screen devices 110 to determine which second-screen devices 110 are active and in what location. This may involve sending a message to application 404 and having a user confirm the activity and location. Also, second-screen device detector 402 may use fingerprinting or application detection methods to maintain the list of devices. For example, second-screen device detector 402 may activate a microphone of second-screen device 110 to detect the audio being output in the location of second-screen device 110. Then, second-screen device 110 may determine a fingerprint of the first-screen content being output by first-screen device 108. In one example, a television may be outputting a television show, and second-screen device 110 may take a fingerprint of the audio within a range of second-screen device 110. Second-screen device 110 or second-screen device detector 402 (or a back-end device) can then determine that a user is watching the television show when the fingerprint matches a fingerprint from the television show. Further, second-screen device detector 402 may detect which application the user is using by intercepting transfers between the application and a wide area network, such as the Internet.
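The audio fingerprint matching described above can be sketched in simplified form. Real fingerprinting systems use spectral features that are robust to noise; the hash-of-sample-windows approach below is a deliberately naive stand-in for illustration, and the function names and threshold are assumptions.

```python
import hashlib

def fingerprint(samples, window=4):
    """Hash fixed-size windows of audio samples into a set of fingerprints."""
    return {
        hashlib.sha256(bytes(samples[i:i + window])).hexdigest()
        for i in range(0, len(samples) - window + 1, window)
    }

def likely_watching(mic_samples, broadcast_samples, threshold=0.5):
    """Return True when enough of the microphone fingerprint matches the
    broadcast fingerprint, suggesting the user is watching that program."""
    mic = fingerprint(mic_samples)
    broadcast = fingerprint(broadcast_samples)
    if not mic:
        return False
    return len(mic & broadcast) / len(mic) >= threshold
```

Here the samples are modeled as sequences of byte values; the comparison could run on second-screen device 110, on detector 402, or on a back-end device, as the paragraph above notes.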
As discussed above, cache 306 buffers the second-screen content. Also, metadata about the second-screen data may be stored in cache 306. The metadata may include information that can be used to determine when the second-screen content should be output to second-screen device 110.
Then, a content delivery processor 406 determines when second-screen content should be provided to second-screen device 110. Content delivery processor 406 may monitor the first-screen content being sent and the metadata for the second-screen content in cache 306. For example, when a first-screen device renderer requests a change in the content view via a channel change, content delivery processor 406 records the change such that content delivery processor 406 knows the channel first-screen device 108 is watching. Then, content delivery processor 406 can retrieve second-screen content for second-screen device 110 appropriately. For example, content delivery processor 406 may retrieve second-screen content for the current channel at a time defined by the metadata for the second-screen content. This synchronizes the second-screen content with the first-screen content.
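The time-based retrieval described above can be sketched as follows. The metadata field names (`channel`, `display_at_s`) are assumptions for illustration; in practice the timing metadata would arrive from head-end 102 with the second-screen content.

```python
def due_second_screen_content(cache, current_channel, playhead_seconds):
    """Return cached second-screen items for the channel being watched
    whose scheduled display time has been reached."""
    return [
        item for item in cache
        if item["channel"] == current_channel
        and item["display_at_s"] <= playhead_seconds
    ]

cache = [
    {"channel": 5, "display_at_s": 10.0, "payload": "ad"},
    {"channel": 5, "display_at_s": 90.0, "payload": "coupon"},
    {"channel": 7, "display_at_s": 5.0, "payload": "stats"},
]
# The user is 30 seconds into channel 5: only the first item is due.
due = due_second_screen_content(cache, current_channel=5, playhead_seconds=30.0)
```

On a channel change, content delivery processor 406 would simply re-run the query with the new channel number.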
Content delivery processor 406 may use the user profile 408 that second-screen device detector 402 built for a user to personalize the second-screen content delivery. The user profile may store personal information for the user, such as user preferences for second-screen content (e.g., what types of advertisements the user likes to view). Content delivery processor 406 may then determine which second-screen content to provide to second-screen application 404.
Content delivery processor 406 may sit within a protocol stack on gateway 112 to allow it to disseminate second-screen content to various second-screen devices 110. A software development kit can be used by a second-screen application 404 to allow interaction with content delivery processor 406 in gateway 112 to receive second-screen content. For example, second-screen applications 404 can subscribe to and access different capabilities provided by gateway 112. For example, the software development kit allows second-screen applications 404 to interface with content delivery processor 406 and request specific second-screen sub-streams based on provided parameters. In other embodiments, content delivery processor 406 may automatically determine which second-screen content to send based on the user profile.
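The subscription interface described above can be sketched as follows. The class, method names, and parameter structure are assumptions for illustration; an actual software development kit would define its own API.

```python
class ContentDeliveryProcessor:
    """Minimal model of content delivery processor 406: second-screen
    applications 404 subscribe with parameters, and matching sub-streams
    are dispatched to the subscribed applications."""

    def __init__(self):
        self._subscriptions = {}

    def subscribe(self, app_id, params):
        """Register an application's interest in sub-streams matching params."""
        self._subscriptions[app_id] = params

    def dispatch(self, substream):
        """Return the application ids whose parameters match this sub-stream."""
        return [
            app_id for app_id, params in self._subscriptions.items()
            if all(substream.get(key) == value for key, value in params.items())
        ]

processor = ContentDeliveryProcessor()
processor.subscribe("recipe-app", {"channel": 5, "kind": "recipe"})
processor.subscribe("coupon-app", {"kind": "coupon"})
matches = processor.dispatch({"channel": 5, "kind": "recipe"})
```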
Because gateway 112 is context aware via the second-screen device detection, gateway 112 can use the user profile for a user and disseminate the appropriate second-screen content to the second-screen devices 110. For example, when two second-screen devices 110 exist within customer premise location 104 and are active while first-screen device 108 is active, one second-screen device 110 may be targeted with first second-screen content based on the user profile and another second-screen device 110 may be provided with general second-screen content that is not targeted. In one example, when a cooking show is being watched on the first screen, a first second-screen device 110 may receive general coupons, and a second second-screen device 110 may receive personalized recipes.
In another embodiment, the second-screen content may include sign language-enabled broadcasts in which sign language can be displayed on second-screen devices 110. The standard method for hearing-impaired services is to provide closed captions or, in some broadcasts, to set up a picture-in-picture (PIP) view where a sign language source may be shown in the corner of the first-screen device display screen while the first-screen content is displayed in the remainder of the first-screen device display screen. This may not be ideal for other viewers in the same household. For example, it may either disrupt the viewing experience for users that do not need the sign-language view or overlay too much of the sign-language view over the first-screen broadcast. Also, the PIP window may be too small to view the sign language. Using particular embodiments, the first-screen content may include the main broadcast program and the second-screen content may include sign language information that is associated with the first-screen content. Gateway 112 may track second-screen devices 110 that are active and determine, via any detection process, that a user who is hearing-impaired is watching the first-screen content. Gateway 112 may then determine that the sign language information should be sent to this second-screen device. Then, the user can watch the sign-language display on his/her own second-screen device 110 without interrupting the television show. Also, this can be enhanced to allow the user to design how the sign language view and the first-screen content view should be laid out and cast back to the primary screen renderer. For example, a user can segment the first-screen content and the second-screen content as desired.
The second-screen content can be provided without the need for the second-screen application to use any first-screen content detection, such as fingerprint detection. Rather, gateway 112 has access to the first-screen content and can perform this detection itself. Further, second-screen device 110 does not need any over-the-top capabilities as the second-screen content is sent with the first-screen content. This may also help synchronization, as the second-screen content arrives with the first-screen content and experiences the same latency.
Gateway 112 also allows for new application capabilities that go beyond simply overlaying content on second-screen devices 110 based on first-screen content contexts. For example, extended features may be provided not only by the content source, but also by application developers. For example, a cooking show can produce multi-stream views that include the main program, detailed recipe instructions, and ingredient manufacturer coupons. Hence, a second-screen application designer can create different overlays that allow the user to view the recipe details and store them in a home recipe file while previewing the manufacturer coupons and storing the coupons in user-specific logs at the same time as watching the first-screen content.
In one example, a user is viewing a channel on first-screen device 108 while accessing application 404 on second-screen device 110. When the user tunes to the channel to view the first-screen content, content delivery processor 406 detects that the user is watching certain first-screen content. Then, content delivery processor 406 may send second-screen content to application 404 including metadata for when and how the second-screen content should be presented to the user. The second-screen content may include advertisements time-synchronized to the first-screen content, promotional offers, such as coupons, or supplemental content, such as detailed episode instructions in the form of additional video. The episode-related material may be cooking instructions or detailed auto inspection information that relates to the first-screen content being viewed.
Accordingly, second-screen application 404 can display second-screen content related to the first-screen content without the need to have a connection to an external source through a wide area network, such as the Internet or an over-the-top connection, different from the connection being used by first-screen device 108. That is, the second-screen content is received via the same communication network and content stream as the first-screen content. Further, second-screen device 110 does not need to rely on external technologies to determine what the user is watching and to retrieve related material. Gateway 112 can detect the existing second-screen devices 110 being used and, through context, build user profile information along with information sent from second-screen applications 404 to determine the appropriate second-screen content to provide to users.
Head-End Enhancements to Personalize User Experience

In some embodiments, gateway 112 may detect which second-screen devices 110 are active. Then, gateway 112 may consult a user profile to determine which second-screen content may be of interest to the user using a given second-screen device 110. For example, if a mobile telephone that is associated with a user #1 is active, and this user likes cooking shows, then gateway 112 may send a message to head-end 102 indicating that user #1 is active and likes cooking shows.
When user #1 requests a cooking show, head-end 102 may determine that recipe information should be sent to gateway 112 as second-screen content. In this case, head-end 102 may selectively provide second-screen content to different users. This may use bandwidth more efficiently, as second-screen content may be sent only when second-screen devices 110 are active and only to users that may be interested in this second-screen content. Alternatively, second-screen content can always be sent with first-screen content.
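The selective-delivery decision described above can be sketched as follows. The profile structure and field names are assumptions for illustration, not a defined interface between gateway 112 and head-end 102.

```python
def second_screen_wanted(active_devices, user_profiles, program_genre):
    """Decide whether second-screen content for a program should be sent to
    a location, based on which devices are active and their users' interests."""
    return any(
        program_genre in user_profiles.get(device["user"], {}).get("interests", ())
        for device in active_devices
    )

profiles = {"user1": {"interests": ("cooking", "travel")}}
devices = [{"device_id": "phone-1", "user": "user1"}]
```

When no active device belongs to an interested user, the head-end could omit the second-screen view entirely and save the bandwidth.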
Method Flow

FIG. 5 depicts a simplified flowchart 500 of a method for delivering second-screen content using MVC extensions according to one embodiment. At 502, gateway 112 receives a content stream including first-screen content and second-screen content. Head-end 102 sent the content stream using the MVC extension, which is conventionally used to provide multi-view content for the first-screen content.
At 504, gateway 112 separates the first-screen content and the second-screen content from the content stream. De-multiplexer 302 may be used to perform the de-multiplexing. At 506, gateway 112 may decode the first-screen content and the second-screen content.
At 508, gateway 112 determines that the second-screen content is for a second-screen device. At 510, gateway 112 can store the second-screen content in cache 306.
At 512, gateway 112 detects a second-screen device actively connected to the gateway. Also, gateway 112 may determine that this second-screen device is the destination for the second-screen content. Then, at 514, gateway 112 sends the first-screen content to a first-screen device. Also, at 516, gateway 112 sends the second-screen content to the second-screen device.
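The gateway flow of steps 502 through 516 can be sketched as follows. This is a simplified illustration under assumed representations (a stream as labeled view/payload tuples, hypothetical device names), not the actual MVC demultiplexing or decoding logic:

```python
# Illustrative sketch of the FIG. 5 gateway flow (steps 502-516).
# The stream is modeled as (view, payload) tuples: the base view
# carries first-screen content, an auxiliary view carries
# second-screen content. All names are assumptions for illustration.

def process_content_stream(stream, active_devices):
    """Route first- and second-screen content from one MVC-style stream."""
    # 502/504: receive the stream and demultiplex the two views.
    first_screen = [p for v, p in stream if v == "base"]
    second_screen = [p for v, p in stream if v == "aux"]

    # 506: decoding of both views would happen here (omitted).

    # 508/510: recognize second-screen content and cache it.
    cache = list(second_screen)

    # 512: detect an actively connected second-screen device, if any.
    destination = active_devices[0] if active_devices else None

    # 514/516: send first-screen content to the first-screen device and
    # second-screen content to the detected second-screen device
    # (or hold it in the cache when no device is active).
    return {"first_screen_device": first_screen,
            destination or "cache-only": cache}

stream = [("base", "show-frame"), ("aux", "recipe-card")]
routes = process_content_stream(stream, ["tablet-110"])
print(routes)
```

When no second-screen device is active, the second-screen content simply remains under the cache key, matching step 510's caching behavior.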
Computer System
FIG. 6 illustrates an example of a special purpose computer system 600 configured with multi-view delivery system 106, multi-view stream processor 114, and second-screen processor 118 according to one embodiment. In one embodiment, computer system 600-1 describes head-end 102. Also, computer system 600-2 describes gateway 112. Only one instance of computer system 600 will be described for discussion purposes, but it will be recognized that computer system 600 may be implemented for other entities described above, such as multi-view delivery system 106, multi-view stream processor 114, second-screen processor 118, first-screen devices 108, STB 116, and/or second-screen devices 110.
Computer system 600 includes a bus 602, a network interface 604, a computer processor 606, a memory 608, a storage device 610, and a display 612.
Bus 602 may be a communication mechanism for communicating information. Computer processor 606 may execute computer programs stored in memory 608 or storage device 610. Any suitable programming language can be used to implement the routines of particular embodiments, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object-oriented. The routines can execute on a single computer system 600 or multiple computer systems 600. Further, multiple computer processors 606 may be used.
Memory 608 may store instructions, such as source code or binary code, for performing the techniques described above. Memory 608 may also be used for storing variables or other intermediate information during execution of instructions to be executed by processor 606. Examples of memory 608 include random access memory (RAM), read only memory (ROM), or both.
Storage device 610 may also store instructions, such as source code or binary code, for performing the techniques described above. Storage device 610 may additionally store data used and manipulated by computer processor 606. For example, storage device 610 may be a database that is accessed by computer system 600. Other examples of storage device 610 include random access memory (RAM), read only memory (ROM), a hard drive, a magnetic disk, an optical disk, a CD-ROM, a DVD, a flash memory, a USB memory card, or any other medium from which a computer can read.
Memory 608 or storage device 610 may be an example of a non-transitory computer-readable storage medium for use by or in connection with computer system 600. The non-transitory computer-readable storage medium contains instructions for controlling a computer system 600 to be configured to perform functions described by particular embodiments. The instructions, when executed by one or more computer processors 606, may be configured to perform that which is described in particular embodiments.
Computer system 600 includes a display 612 for displaying information to a computer user. Display 612 may display a user interface used by a user to interact with computer system 600.
Computer system 600 also includes a network interface 604 to provide a data communication connection over a network, such as a local area network (LAN) or wide area network (WAN). Wireless networks may also be used. In any such implementation, network interface 604 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
Computer system 600 can send and receive information through network interface 604 across a network 614, which may be an intranet or the Internet. Computer system 600 may interact with other computer systems 600 through network 614. In some examples, client-server communications occur through network 614. Also, implementations of particular embodiments may be distributed across computer systems 600 through network 614.
Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine. The computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments. The computer system may include one or more computing devices. The instructions, when executed by one or more computer processors, may be configured to perform that which is described in particular embodiments.
As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope hereof as defined by the claims.