FIELD OF THE DISCLOSURE Aspects of the disclosure relate to communications methods, communications session organizers, communications session participants, articles of manufacture, and communication systems.
BACKGROUND OF THE DISCLOSURE Interactive media sessions are increasing in popularity. As the processing power of computers continues to increase, and the communications bandwidth of communications devices is improved, the popularity of interactive media communications is also expected to increase. However, to date, the substantial focus has been on media delivery, with relatively little advancement in interactive media experiences.
Exemplary interactive media communications sessions may involve 3D graphics or other rich interactive media having very large files (even after compression) to be communicated and processed. The amount of data increases exponentially with richness and raises formidable issues in distribution and experience of the media over the Internet.
For example, in many situations it may not be possible to communicate these files to clients in their entirety before the experience commences. In fact, even very high bandwidth connections by modern standards may not be able to accommodate these interactive media files. This problem is compounded when clients having diverse capabilities connect to a source of the interactive media files. For example, at least some of the clients may be coupled with relatively slow communications connections, use computers of relatively slow processing capabilities, or use computers with limited display capabilities.
One method of implementing communications of very large interactive media files has been to maintain multiple versions of the same media, and to serve an appropriate version to each client based upon that client's capabilities. This method has significant drawbacks with respect to handling and maintenance of files if all possible combinations of different types of scalability are to be supported, inasmuch as numerous versions of the media are used.
At least some aspects of the disclosure provide improved systems and methods for experiencing interactive content or multimedia collaboration.
SUMMARY OF THE DISCLOSURE Aspects of the disclosure relate to communications methods, communications session organizers, communications session participants, articles of manufacture, and communications systems.
According to one aspect, a communications method comprises providing scalable media data, organizing the scalable media data into a plurality of subparts, and providing a plurality of data requests from a plurality of participants requesting different ones of the subparts during user interaction with the media data. After the providing of the data requests, the method may further include scaling respective ones of the requested subparts of the scalable media data according to receiving attributes of the respective participants, and communicating the scaled subparts to respective ones of the participants.
According to another aspect, a communications session organizer comprises an interface configured to communicatively couple with a plurality of participants during a communications session, and processing circuitry coupled with the interface. The processing circuitry may be configured to access a plurality of data requests from the participants during the communications session, to identify a plurality of subparts of scalable media data responsive to the requests, to scale the subparts of the media data according to respective receiving attributes of the participants, and to output the scaled media data to respective ones of the participants.
Other aspects of the disclosure are described herein as is apparent from the following description and figures.
DESCRIPTION OF THE DRAWINGS FIG. 1 is an illustrative representation of an exemplary interactive media session according to one embodiment.
FIG. 2 is a functional block diagram of a participant according to one embodiment.
FIG. 3 is an illustrative representation of scalable media data of a bit stream according to one embodiment.
FIG. 4 is an illustrative representation of nested tiers of scalable encoded media data according to one embodiment.
FIG. 5 is a block diagram of an organizer according to one embodiment.
FIG. 6 is a flow chart illustrating an exemplary methodology of operations of an organizer of an interactive media session according to one embodiment.
FIG. 7 is a flow chart illustrating an exemplary methodology of operations of a participant of an interactive media session according to one embodiment.
DETAILED DESCRIPTION OF THE INVENTION At least some aspects of the disclosure relate to methods and apparatus for experiencing interactive content for diverse users having different capabilities (e.g., diverse communication bandwidths, processing powers, display resolutions, etc.). At least one embodiment permits an interactive media session to be initiated without communication of the entire media file to users. Further in accordance with an exemplary embodiment, users interacting with the media receive content in accordance with their respective receiving and processing capabilities, accommodating low and high power users. For example, in one implementation, scalable encoding formats of the media data may be utilized to permit appropriate transcoding to suit the capabilities and preferences of end participants, enabling heterogeneous interaction. Additional aspects and embodiments are described herein.
Referring to FIG. 1, an exemplary interactive media communications session implemented using a communications system is depicted as reference character 10. Interactive media session 10 refers to communications of interactive media data within a communications system between a plurality of participants or users. An example of an interactive media session includes providing a 3D initial image (e.g., of a house or other real estate) to users and then providing different views of the initial image (e.g., a view of the side of the house) as the users interact with or navigate the initial image. Exemplary interaction may involve mouse input, keyboard input, joystick input, or other user input with respect to the image, and new media data being provided to the user responsive to the user inputs.
The interactive media session 10 of FIG. 1 uses a collaboration infrastructure comprising a session organizer 12 configured to implement communications within the communications system of the collaboration infrastructure. Organizer 12 may comprise a single server or a plurality of servers (e.g., arranged in a peer-to-peer arrangement) in possible embodiments.
A plurality of participants 14 are coupled with organizer 12 in the illustrated embodiment. Exemplary participants 14 comprise computing devices and may be embodied as personal computers, visualization workstations, personal digital assistants (PDAs), or other devices capable of receiving interactive media data, communicating the interactive media data to a user, and processing interactive user commands. During communications, participants 14 connect to organizer 12 using the communications system to form interactive media session 10. In one embodiment, the communications system of the collaboration infrastructure may comprise network connections (e.g., Internet) providing coupling of the participants 14 with organizer 12, and hardware and appropriate programming of organizer 12 and participants 14. At a given moment in time, organizer 12 may be arranged to implement a plurality of different interactive media sessions between respective different groups of participants in at least one embodiment.
In one arrangement, participants 14 individually execute an application 16 to participate in an interactive media session. Applications 16 may implement communications modules to establish communications (e.g., start or join a session) and provide transcoding or decoding operations of received data. In one embodiment, applications 16 provide standardized protocols for communications between organizer 12 and participants 14, allowing sessions 10 to be created, participated in, and terminated by users, as well as providing interactive exchange of media in a seamlessly scalable manner. Applications 16 may provide interaction in different ways with different types of media, including organizing, transcoding and viewing specific types of content. Accordingly, the communications modules of applications 16 provide connections to organizer 12 so sessions 10 may be initiated, joined, or terminated by participants 14, and so participants 14 may interact with content, in one embodiment.
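The disclosure does not prescribe a particular wire format for the session-control protocol. Purely for illustration, the following Python sketch shows one hypothetical set of create/join/terminate messages that an application 16 might exchange with organizer 12; the operation names, fields, and JSON encoding are assumptions of this example and not part of the described embodiments.

```python
# Hypothetical session-control messages; names, fields, and JSON encoding
# are illustrative assumptions only.
import json

def create_session(media_id, client_profile):
    # Ask the organizer to start a new interactive media session.
    return json.dumps({"op": "create", "media": media_id, "profile": client_profile})

def join_session(session_id, client_profile):
    # Join an existing session, announcing this participant's receiving attributes.
    return json.dumps({"op": "join", "session": session_id, "profile": client_profile})

def terminate_session(session_id):
    # Leave (or end) the session.
    return json.dumps({"op": "terminate", "session": session_id})

if __name__ == "__main__":
    profile = {"snr": 1, "resolution": 2, "temporal": 1, "interactivity": 1}
    print(join_session("session-42", profile))
```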
Organizer 12 is configured to implement a heterogeneous interactive media session 10 in one aspect wherein organizer 12 communicates with participants 14 having different communications, processing, display or other capabilities. For example, different communication attributes may correspond to the specific implementations or configurations of the present participants 14, which may vary widely in a given interactive media session 10. In a given session 10, participants 14 may have different capabilities corresponding to one or more of the respective network connections providing different rates of data transfer for the participants 14, different processing circuitry (e.g., microprocessor executing respective software or other programming) of participants 14 providing different processing powers, different resolutions of displays of participants 14, etc. Organizer 12 is configured to implement interactive media session 10 providing communication of scalable media data with respect to the heterogeneous participants 14 wherein the participants 14 with limited abilities do not adversely impact communications with respect to participants 14 having greater abilities in one embodiment.
At least some aspects of the disclosure provide scaling of media data by organizer 12 and communication of the scaled data to participants 14 within a given interactive media session 10 to provide heterogeneous communications. For example, scalable media data enables communications to participants 14 having different capabilities (e.g., bandwidth, processing power, display resolution, etc.). Scalable encoding formats and meta-formats are described in "Proposals for End-To-End Digital Item Adaptation Using Structured Scalable Meta-Formats (SSM)," listing Debargha Mukherjee, Geraldine Kuo, Amir Said, Giordano Beretta, Sam Liu, and Shih-ta Hsiang as authors (published October, 2002), and in a co-pending U.S. patent application entitled "System, Method and Format Thereof For Scalable Encoded Media Delivery," listing Debargha Mukherjee and Amir Said as inventors, having U.S. patent application Ser. No. 10/196,506, filed Jul. 15, 2002, having client docket no. 100202339-1 (and attached hereto as Appendix A), the teachings of which are incorporated herein by reference.
For example, participants 14 may communicate a respective client profile to organizer 12 prior to communications in an interactive media session 10 (e.g., upon session creation or a participant 14 joining a session 10) or at another moment in time. The client profile may define one or more configuration parameters for the respective communicating participant 14 defining one or more maximums for one or more individual levels of scalability (e.g., signal-to-noise ratio (SNR), resolution, temporal and interactivity) that the respective device 14 can receive and process. In another embodiment, organizer 12 senses the configuration parameters of respective recipient participants 14. Exemplary configuration parameters comprise receiving attributes corresponding to the abilities of the respective participant 14 to receive, process or display the media data. Exemplary receiving attributes may be defined by or include parameters of one or more of communications bandwidth, processing speed, or display resolution with respect to the participant 14. Exemplary receiving attributes may also be referred to as outbound constraints and include limit constraints (i.e., limiting values for attribute measures) and optimization constraints (e.g., requested minimization or maximization of attribute measures) as discussed in the U.S. patent application Ser. No. 10/196,506.
Client profiles may be conveyed in terms of meaningful defined levels such as signal-to-noise ratio (SNR), resolution, temporal and interactivity to implement scaling operations. Additional levels may be defined and used in other embodiments. The client profiles may convey specifications for these or other qualities in a top-down order by means of a 4-tuple in one embodiment (e.g., a resolution client profile of 1 conveys that the respective participant 14 is able to receive the highest resolution, a resolution client profile of 2 conveys abilities to receive the second highest resolution, and so on).
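As a non-limiting illustration of the 4-tuple client profile described above, the following Python sketch models the four levels (SNR, resolution, temporal, interactivity) with 1 denoting the highest level a participant 14 can receive; the field names and example values are assumptions of this sketch.

```python
# Hypothetical 4-tuple client profile; 1 denotes the highest level the
# participant can receive, larger values denote progressively lower levels.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    snr: int            # 1 = highest SNR accepted
    resolution: int     # 1 = highest resolution, 2 = second highest, ...
    temporal: int       # 1 = highest temporal (frame/update) rate
    interactivity: int  # 1 = richest interactivity level

# Example profiles for a high-end workstation and a handheld device.
workstation = ClientProfile(snr=1, resolution=1, temporal=1, interactivity=1)
handheld = ClientProfile(snr=3, resolution=4, temporal=2, interactivity=2)
```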
Media data routed through organizer 12 may contain the levels in any nesting order in one embodiment. A header in the bit stream may specify how many of the scalability levels there are and where they occur in the bit stream.
Organizer 12 is arranged to access the client profiles for the respective participants 14 and to scale media data to be communicated to the respective participants 14 in accordance with receiving attributes of the participants 14, providing a plurality of respective scaled data streams for communication to participants 14. For example, organizer 12 may compare the levels of a bit stream with the client profile for respective participants 14 and rearrange the respective bit streams accordingly. Organizer 12 may accomplish the rearrangement irrespective of the nesting order if the bit stream conforms to the scalable media format. Further scaling details are described in the U.S. patent application having Ser. No. 10/196,506.
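For illustration only, the following Python sketch shows one way an organizer might compare the declared scalability levels of bit-stream subsets against a client profile and retain only the subsets a participant 14 can use; the subset and profile structures, and the convention that level 1 denotes the top enhancement layer, are assumptions of this example rather than the actual scalable media format.

```python
# Assumed structures: each subset declares the levels it contributes to,
# with level 1 taken to be the top enhancement layer; the profile gives the
# best (lowest-numbered) level the participant accepts per scalability type.

def scale_for_participant(subsets, profile):
    """Return a rearranged bit stream keeping only usable subsets.

    subsets: list of dicts, e.g. {"levels": {"resolution": 1, "snr": 2}, "bits": b"..."}
    profile: dict such as {"resolution": 2, "snr": 1}
    """
    kept = [s["bits"] for s in subsets
            if all(s["levels"].get(name, 10**6) >= best
                   for name, best in profile.items())]
    return b"".join(kept)

base = {"levels": {"resolution": 2, "snr": 2}, "bits": b"BASE"}
enh = {"levels": {"resolution": 1, "snr": 1}, "bits": b"ENH"}
print(scale_for_participant([base, enh], {"resolution": 2, "snr": 1}))  # b'BASE'
```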
At least some additional aspects facilitate the interactive media sessions 10 by communicating less than an entirety of the interactive media data (e.g., resident upon or accessible by organizer 12) to some or all participants 14 during a session 10. As described herein, interactive media data may be indexed by subparts for selective communication responsive to user inputs generated by participants 14. For example, in one embodiment, organizer 12 may communicate an initial subpart as well as an index to respective participants 14. The initial subpart may comprise interactive media data usable by participants 14 to display an initial image. The user may then interact with the displayed image by generating user inputs. For example, the user may request different views of a subject of the initial image during navigation. The user inputs may be translated by the participant into an addressing value of an index to request additional subparts of the media data. Accordingly, different portions of an entirety of the interactive media data may be communicated to participants 14 during interaction.
In one implementation, the initial image may comprise a 3D interactive image of a subject (e.g., house). An index of the 3D image may be defined in any convenient or desired format for retrieving subsequent interactive media data. In one example, viewing angles about the subject may correspond to respective addressing values of the index. In another example, addressing values may be represented as vectors, perhaps corresponding to coordinates of a multi-dimensional grid of the 3D image. In another possible embodiment, addressing values of the index may correspond to details of the subject, such as internal components of the 3D image (e.g., rooms of the house). Any suitable indexing scheme or space may be used by organizer 12 and participants 14 to provide additional interactive media data to users as needed. The additional interactive media data may include data regarding additional details of information present in the initial interactive media data or data regarding details not present in the initial interactive media data in at least some embodiments.
Users of individual participants 14 are free to interact with or navigate the initial media data in different ways as desired, and accordingly, different user inputs may be generated by the users of the individual participants 14 during the respective individual interactions. The respective participants 14 are configured to translate the user inputs of the interactions into respective addressing values of the index. Thereafter, the participants 14 communicate the respective addressing values via data requests to organizer 12 to implement the communication of subsequent interactive media data desired by the participants 14 and as indicated by the communicated addressing values. Organizer 12 interprets the received addressing values and operates to identify appropriate additional subparts of the interactive media data for communication to respective ones of the participants 14 responsive to the data requests. Organizer 12 may scale communicated interactive media data according to the respective capabilities of participants 14 in one embodiment. Additional details of exemplary implementations of communications of interactive media data are described below.
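The following Python sketch illustrates, under assumed structures, how a participant 14 might translate a navigation input (here a viewing angle) into an addressing value of the index and formulate a data request; the index layout, quantization step, and request fields are hypothetical.

```python
# Hypothetical index: quantized viewing angles about the subject map to
# segment identifiers; the 30-degree step and request fields are assumptions.

def angle_to_segment(index, view_angle_degrees):
    quantized = int(round(view_angle_degrees / 30.0)) * 30 % 360
    return index[quantized]

def make_data_request(session_id, segment_id):
    return {"session": session_id, "segment": segment_id}

index = {0: "segment ID 1", 30: "segment ID 2", 60: "segment ID 3"}
request = make_data_request("session-42", angle_to_segment(index, 33.0))
print(request)  # {'session': 'session-42', 'segment': 'segment ID 2'}
```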
Referring to FIG. 2, an exemplary configuration of a participant 14 for experiencing interactive media data is shown. In the illustrated embodiment, individual participants 14 comprise processing circuitry 30, storage circuitry or device 32, a user interface 34, and a communications interface 36.
In one embodiment, processing circuitry 30 may comprise circuitry configured to implement desired programming. For example, the processing circuitry may be implemented as a processor or other structure configured to execute executable instructions including, for example, software and/or firmware instructions. Other exemplary embodiments of processing circuitry 30 include hardware logic, PGA, FPGA, ASIC, and/or other structures. These examples of processing circuitry are for illustration and other configurations are possible. Processing circuitry 30 may access programming of application 16, decode interactive media data, communicate interactive media data to a respective user, receive and process user inputs providing interaction, implement indexing operations with respect to interactive media data, formulate data requests to be communicated to organizer 12, and perform other desired operations.
Storage circuitry or device 32 may be configured to store electronic data and/or programming such as executable instructions (e.g., software and/or firmware), data, or other digital information and may include processor-usable media. Exemplary programming includes applications 16 in one embodiment. Processor-usable media includes any article of manufacture which can contain, store, or maintain programming, data and/or digital information for use by or in connection with an instruction execution system including processing circuitry in the exemplary embodiment. For example, exemplary processor-usable media may include any one of physical media such as electronic, magnetic, optical, electromagnetic, infrared or semiconductor media. Some more specific examples of processor-usable media include, but are not limited to, a portable magnetic computer diskette, such as a floppy diskette, zip disk, hard drive, random access memory, read only memory, flash memory, cache memory, and/or other configurations capable of storing programming, data, or other digital information.
User interface 34 is arranged to communicate content of interactive media data to a user and to receive user inputs during the interaction. User interface 34 may comprise a display (e.g., cathode ray tube, liquid crystal display, or other arrangement) to depict visual media content and an input device (e.g., keyboard, mouse, or other arrangement) to receive user inputs. Other implementations are possible.
Communications interface 36 is arranged to implement bi-directional communications of the participant with respect to external devices, such as organizer 12. Communications interface 36 may be embodied as a network interface card (NIC), modem, access point, or any other appropriate communications device.
Referring to FIG. 3, one possible embodiment of a format 40 for a bit stream of scalable interactive media data is illustrated. The interactive media data may be generated and formatted by one or more of participants 14, organizer 12, or other appropriate originating device. The scalable interactive media data may be stored locally within organizer 12 or accessed from a remote storage location by organizer 12. The format 40 may comprise a non-media type specific format for scalable encoded media data in one arrangement. Exemplary encoded media data may be compressed and encrypted in one embodiment.
The depicted format 40 comprises a first portion 42 and a second portion 44 in accordance with a content-agnostic meta-format of the described embodiment. Format 40 allows one or more levels of scalability to co-exist in a bit stream, and allows rearrangement tasks to produce bit streams of different scales and quality without knowing the actual content or compression scheme applied. In an embodiment wherein all media data is routed through organizer 12, the organizer 12 may supply different subsets of data to different participants 14 based upon the respective configuration parameters of the respective participants 14.
First portion 42 corresponds to non-media type scalability attributes and second portion data structure information, and second portion 44 corresponds to original scalable encoded media data arranged in a non-media type specific indexable structure in one embodiment. More specifically, exemplary non-media scalability attributes may include attributes common to all media types, such as the size of a bit stream, the SNR, and the processing power used to process and experience data of the bit stream in one embodiment. Scalability attributes may be used to implement appropriate scaling of the media data. First portion 42 also includes non-media type specific data structure information of second portion 44, comprising dimensions of a multi-dimensional representation of the scalable media data of second portion 44 in one embodiment.
Second portion 44 corresponds to scalable media data arranged in a content-independent indexable data structure in the described embodiment. The media data is arranged into a generic format regardless of the content of the media data, permitting generic transcoding wherein the transcoding operations are performed without knowledge of the data content and without decrypting or decoding the media data, enabling a single infrastructure (e.g., organizer 12) to deliver the media data according to a plurality of scales in one embodiment. Processing circuitry of organizer 12 may comprise a transcoder to implement scaling operations during transcoding including one or more of bit truncation, bit-stream skips, or bit repacking in accordance with capabilities of participants 14. Further details regarding exemplary scalable formats including first and second portions 42, 44 are described in the U.S. patent application having Ser. No. 10/196,506.
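As a simplified, non-limiting illustration of such generic transcoding, the Python sketch below reads only an assumed structural table (standing in for first portion 42) and repacks whole indexable units of the media payload (standing in for second portion 44) by keeping or skipping them, without decoding or decrypting the payload; the byte layout is an assumption of this example, not the actual meta-format.

```python
# Assumed layout only: a 4-byte big-endian unit count, followed by
# (offset, length) pairs, followed by the media payload.  The transcoder
# never inspects the payload itself.
import struct

def transcode(bitstream, keep_units):
    (count,) = struct.unpack_from(">I", bitstream, 0)
    table = [struct.unpack_from(">II", bitstream, 4 + 8 * i) for i in range(count)]
    out = bytearray()
    for unit_index, (offset, length) in enumerate(table):
        if unit_index in keep_units:                  # bit-stream skip / truncation
            out += bitstream[offset:offset + length]  # repacking without decoding
    return bytes(out)

payload = b"AAAABBBB"
header = struct.pack(">IIIII", 2, 20, 4, 24, 4)  # two units at offsets 20 and 24
print(transcode(header + payload, keep_units={0}))  # b'AAAA'
```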
Referring to FIG. 4, indexable concepts of second portion 44 of a bit stream of interactive media data are described according to one embodiment. The indexing of second portion 44 may implement both scaling operations and data selection operations responsive to received data requests in the depicted embodiment. The exemplary second portion 44 comprises a header 41, a table of contents (TOC) 43, common portion 45 and a plurality of subparts 47 which comprise segments of interactive media data in the depicted embodiment. Individually, subparts 47 are relatively small compared with the entire bit stream.
Header 41 may provide the actual order for a given media in the bit stream. For example, header 41 may comprise an index including a correlation of addressing values with one or more of subparts 47. Header 41 may be accessed by organizer 12 and used to translate received addressing values in data requests from participants 14 to identify one or more of subparts 47 being requested by respective participants 14. Following identification of appropriate subpart(s) 47, the interactive media data of the identified subpart(s) 47 may be scaled if desired and communicated to the requesting participants 14 to implement the media interaction.
Table of contents 43 includes locations of subparts 47 in the bit stream. For example, following the identification of requested subparts 47 as described above, the organizer 12 may access the table of contents 43 for specification of exact locations of the requested media data in the bit stream, providing access to the requested interactive media data.
Data of common portion 45 is used by participants 14 to initialize a respective decoding and viewing object. Participants 14 may thereby operate to decode interactive media data previously transcoded by organizer 12 and control user interface 34 to display the interactive content.
Subparts 47 have respective identifiers "segment INIT," "segment ID 1," "segment ID 2," etc. In one embodiment, segment INIT may correspond to the media data of the initial visual image described above which is initially scaled and communicated to participants 14 to start the media experience process and interaction. Thereafter, the other segments may be individually requested by respective participants 14 as interaction occurs, using the addressing values which identify the desired segments by the respective segment ID 1, ID 2, etc. in the described example. Accordingly, the segment ID may comprise an index into the data set, and individual subparts 47 may be independently represented in a scalable manner with possibly multiple nesting levels as described herein.
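Purely by way of illustration, the following Python sketch traces the lookup chain described above, translating an addressing value through header 41 to a segment identifier and then through table of contents 43 to a location in the bit stream; the dictionary shapes and example values are assumptions of this sketch.

```python
# Assumed dictionary shapes standing in for header 41 and TOC 43.

def locate_subpart(header, toc, addressing_value):
    segment_id = header["index"][addressing_value]  # header 41: address -> segment ID
    offset, length = toc[segment_id]                # TOC 43: segment ID -> location
    return segment_id, offset, length

header = {"index": {"view:side": "segment ID 2"}}
toc = {"segment INIT": (0, 4096), "segment ID 2": (4096, 8192)}
print(locate_subpart(header, toc, "view:side"))  # ('segment ID 2', 4096, 8192)
```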
The exemplary embodiment of FIG. 4 includes a plurality of nested tiers or levels 50, 52 of abstraction for respective individual subparts 47 to implement data scaling operations. The data is indexable using a plurality of tables of contents (TOCs) where individual levels 50, 52 are indexable by respective TOCs. TOCs provide random access and facilitate identification of subsets 54, 56 for dropping or truncating during transcoding operations. In one example, four possible nesting levels corresponding to resolution, temporal, SNR and interactivity may be provided in any desired order. The order of one or more nesting level used for respective interactive media content may be conveyed in header 41 and may be referred to as a media profile, permitting organizer 12 to know how to derive subsets 54, 56 from the bit stream for transmission to clients.
First tier 50 includes first and second bit-stream subsets 54. Scalability of media data may be achieved by grouping subsets to provide scalability to a particular tier. For example, a first scalability may be provided by only the first subset 54 while a second scalability may be provided by the first and second subsets 54. Further, individual subsets 54 may be further scaled using subsets 56 of level 52 and additional levels may also be provided to enable further scaling. The type of scalability implemented by respective levels 50, 52 corresponds to the content of the data of the respective levels 50, 52. One example of a multi-tier scalable bit-stream is a JPEG2000 bit-stream wherein the highest level 50 corresponds to resolution scalability and within individual resolution scalable subsets are nested a second level of signal-to-noise ratio subsets. Alternately, there are other image compression schemes wherein the highest level is SNR and within SNR layers there are nested resolution layers. Exemplary scalability attributes for scaling include resolution, SNR, temporal, and interactivity as described in the U.S. patent application Ser. No. 10/196,506. Additional scalability attributes may be used for scaling in other embodiments. In addition, one or more different scalability attributes may be used to scale a given dataset of media data for different recipient participants 14 of a given interactive media session 10.
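The nested-tier structure may be illustrated, under assumptions, by the following Python sketch in which each tier carries its own list of subsets (some of which are themselves nested tiers) and scaling retains a prefix of subsets at each named tier; deriving the retained counts from a client profile is outside this sketch, and the nesting order shown (resolution outermost, SNR nested) merely follows the JPEG2000 example above.

```python
# Assumed tier structure: each tier has a name and an ordered list of
# subsets; a subset is either raw bytes or a nested tier.  counts says how
# many subsets to retain at each named tier (deriving counts from a client
# profile is not shown here).

def derive_subset(tier, counts):
    keep = counts.get(tier["name"], len(tier["subsets"]))
    out = []
    for subset in tier["subsets"][:keep]:
        if isinstance(subset, dict):            # nested tier, e.g. SNR inside resolution
            out.append(derive_subset(subset, counts))
        else:
            out.append(subset)
    return out

resolution_tier = {"name": "resolution", "subsets": [
    {"name": "snr", "subsets": [b"r0s0", b"r0s1"]},
    {"name": "snr", "subsets": [b"r1s0", b"r1s1"]},
]}
print(derive_subset(resolution_tier, {"resolution": 1, "snr": 2}))
# [[b'r0s0', b'r0s1']]  -- one resolution subset, both SNR subsets within it
```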
Following identification of the respective subparts 47, a transcoder of organizer 12 accesses the identified scalable media data of the subparts 47 and transcodes the data in accordance with configuration parameters of the requesting participant 14. Transcoding of the media data operates to scale the media data in accordance with the communications abilities of the requesting participant 14. The transcoded (i.e., scaled) media data is communicated from organizer 12 to the requesting participant 14.
In one embodiment, the media data may be scaled using different scalability attributes for different requesting participants 14 as mentioned above. Accordingly, the scaled media data communicated to the requesting participants 14 may comprise plural digital data streams having different amounts of data usable to depict the same subject (e.g., different amounts of content providing different resolutions of the subject such as the initial visual image corresponding to the segment INIT).
Scaling is implemented in the described embodiment using configuration parameters comprising receiving attributes of the requesting participants 14 and the scalability attributes of the media data. For example, a transcoder implemented by organizer 12 may access the respective receiving attributes for one or more appropriate requesting participants 14 to receive the data, match the scalability attributes and the respective receiving attributes, and scale the media data using the matched attributes to truncate, rearrange or otherwise modify the media data using the subsets to provide the respective data stream(s) for communication. Further details regarding scaling in one embodiment are described in the U.S. patent application Ser. No. 10/196,506. Other scaling configurations are possible.
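One hypothetical rendering of the attribute-matching step is the following Python sketch, in which each scalability attribute offered by the media is compared against the participant's limit constraint and the best level within the limit is chosen; the attribute names and the convention that lower numbers denote higher levels follow the client-profile example above and are assumptions of this sketch.

```python
# Assumed convention: lower numbers denote higher quality levels, matching
# the client-profile example above; missing limits default to the best level.

def match_attributes(scalability, receiving):
    """scalability: levels the media offers, e.g. {"resolution": [1, 2, 3]}.
    receiving: limit constraints of the participant, e.g. {"resolution": 2}."""
    chosen = {}
    for name, offered in scalability.items():
        limit = receiving.get(name)
        allowed = offered if limit is None else [l for l in offered if l >= limit]
        chosen[name] = min(allowed)  # best quality level within the limit
    return chosen

print(match_attributes({"resolution": [1, 2, 3], "snr": [1, 2]}, {"resolution": 2}))
# {'resolution': 2, 'snr': 1}
```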
Scaling enables heterogeneous participation in interactive media session 10. The originating media data provides data which may comprise images, video, animation, etc. of a subject. Scaling of the media data provides a plurality of bit streams which may have different quantities of data content usable for representation and navigation of the subject by participants 14.
As described herein, data requests formulated by individual participants 14 to access subparts 47 of interactive media data from organizer 12 may be random in nature responsive to user inputs provided during interaction in one embodiment. Organizer 12 may implement compression of the interactive media data using efficient random access capabilities in addition to scaling of the compressed bit stream to permit organizer 12 to dynamically transcode the interactive media data bit stream based on capabilities of participants 14. Details regarding exemplary compression having efficient random access capabilities are described in a U.S. patent application entitled "Communications Methods, Compressed Media Data Decoding Methods, Compressed Media Data Decoders, Articles Of Manufacture, And Data Communications Systems," listing Debargha Mukherjee as inventor, having Attorney Docket No. 100201426-1, the teachings of which are incorporated herein by reference.
Referring to FIG. 5, an exemplary configuration of organizer 12 embodied as a server is shown. The depicted organizer 12 includes an interface 60, processing circuitry 62, and a storage device 64. Other embodiments are possible. For example, organizer 12 may comprise a plurality of servers.
Interface 60 is configured to implement bi-directional communications with respect to participants 14. Exemplary configurations of interface 60 include a network interface card (NIC), access point, or any other appropriate device for implementing communications.
In one embodiment, processing circuitry 62 may comprise circuitry configured to implement desired programming. For example, the processing circuitry 62 may be implemented as a processor or other structure as described above with respect to processing circuitry 30 of participant 14. Programming may configure processing circuitry 62 to access bit streams conforming to the format 40 and provide manipulations of the bit streams to create content suitable for different connections and capabilities of heterogeneous participants 14. Programming may also configure the processing circuitry 62 to implement indexing operations to identify requested subparts 47 of interactive media data, to scale the media data of the identified subparts 47 to provide communications with the participants 14, and to provide other desired operations.
Similar to storage device 32 described above, storage device 64 may comprise processor-usable media configured to store programming arranged to cause organizer 12 to arrange and conduct interactive media sessions 10 including implementing identification and scaling operations with respect to interactive media data. Storage device 64 may also store any other appropriate digital information or programming.
Referring to FIG. 6, an exemplary methodology performed by processing circuitry 62 of organizer 12 to enable user interaction with media data according to one embodiment is shown. Other methods are possible including more, less or alternative steps. Further, the ordering of one or more of the illustrated steps may occur in different sequences in other arrangements.
At a step S10, the processing circuitry is configured to access the bit stream of the interactive media data. The accessing may include accessing the bit stream resident upon the organizer or obtaining the bit stream from an external source, such as one of the participants.
At a step S12, the processing circuitry is configured to access client profiles of respective participants of the respective interactive media session. Alternately, the processing circuitry may sense or otherwise obtain information regarding the capabilities of the participants of the session.
At a step S14, the processing circuitry is configured to transcode the initial interactive media data (e.g., segment INIT). The interactive media data may be scaled according to respective configuration parameters of the participants.
At a step S16, the processing circuitry is configured to output index data (e.g., header and TOC information), data of common portion 45, and the transcoded data corresponding to the initial interactive media data. The index data permits the participants to implement indexing operations to request subparts of interactive media data during user interaction and the transcoded data permits the participants to communicate the initial interactive media data to the user to begin the interactive media experience.
At a step S18, the processing circuitry accesses a data request received from one or more participant of the interactive media session. The data request may comprise addressing values of the index identifying one or more requested subpart of the interactive media data.
At a step S20, the processing circuitry identifies one or more requested subpart and transcodes the interactive media data of the one or more requested subpart corresponding to the configuration parameters of the requesting participants.
At a step S22, the processing circuitry operates to control communication of the transcoded data to the requesting participants.
Thereafter, if the process has not been terminated, the processing circuitry may return to step S18 to access subsequent data requests received from participants.
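The organizer-side flow of FIG. 6 may be summarized, for illustration only, by the following Python skeleton; the organizer object and its method names are placeholders standing in for the operations of steps S10 through S22 and do not correspond to any actual interface.

```python
# Placeholder organizer interface; each call names a step of FIG. 6.

def run_session(organizer, participants):
    bitstream = organizer.access_bitstream()                                  # S10
    profiles = {p: organizer.access_client_profile(p) for p in participants}  # S12
    for p in participants:
        init = organizer.transcode(bitstream, "segment INIT", profiles[p])    # S14
        organizer.send(p, organizer.index_data(bitstream), init)              # S16
    while not organizer.terminated():
        p, request = organizer.next_data_request()                            # S18
        subpart = organizer.identify_subpart(bitstream, request)              # S20
        scaled = organizer.transcode(bitstream, subpart, profiles[p])
        organizer.send(p, scaled)                                             # S22
```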
Referring to FIG. 7, an exemplary methodology performed by processing circuitry 30 of one of participants 14 to enable user interaction with media data according to one embodiment is shown. Other methods are possible including more, less or alternative steps. Further, the ordering of one or more of the illustrated steps may occur in different sequences in other arrangements.
At a step S28, the processing circuitry is configured to output a respective client profile comprising the configuration parameters of the respective participant. The participant may also communicate a data request for initial data to initiate participation in the interactive media session in step S28.
At steps S30 and S32, the processing circuitry is configured to receive the index data and the initial interactive media data from the organizer.
At a step S34, the processing circuitry is configured to decode the interactive media data received from the organizer.
At a step S36, the processing circuitry is configured to implement communication of the interactive media data to a user. For example, the processing circuitry may control the user interface to depict the initial visual image for interaction.
At a step S38, the processing circuitry is configured to receive user inputs from the user interface during the interaction.
At a step S40, the processing circuitry is configured to access the index and to translate the user inputs into respective addressing values of the index.
At a step S42, the processing circuitry is configured to output the index addressing values to the organizer in one or more data request.
Thereafter, if the process has not been terminated, the processing circuitry may return to step S32 to receive and process the subsequent interactive media data requested in step S42.
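A corresponding participant-side skeleton for the flow of FIG. 7 is sketched below, again over a hypothetical participant interface whose method names are placeholders for steps S28 through S42.

```python
# Placeholder participant interface; each call names a step of FIG. 7.

def participate(participant):
    participant.send_client_profile()                        # S28
    index = participant.receive_index_data()                 # S30
    media = participant.receive_media_data()                 # S32
    while not participant.terminated():
        image = participant.decode(media)                    # S34
        participant.display(image)                           # S36
        user_input = participant.read_user_input()           # S38
        address = participant.translate(index, user_input)   # S40
        participant.send_data_request(address)               # S42
        media = participant.receive_media_data()             # return to S32
```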
At least one embodiment described herein enhances interaction of heterogeneous users with respect to interactive media data. For example, the described embodiment may be useful in applications involving visualization of high-resolution interactive 3D content, multimedia collaboration, etc. At least some aspects of the disclosure provide scaling of interactive media data and delivery of interactive content on a “need to know” basis. Unauthorized copying of interactive media data is extremely difficult inasmuch as files are not transmitted in their entireties according to at least one aspect.
In addition, organizer 12 may be configured to be content-independent and universal across different types of interactive media data and across different applications built around them. Further, organizer 12 may not know what the interactive media data represents, or what compression and/or encryption technology is employed. Organizer 12 may monitor where individual subparts 47 of the interactive media data begin and end to extract desired portions of the interactive media data in a random manner for communication to participants 14 in one embodiment. In one example, individual subparts 47 may be scalable compressed images in an exemplary image-based 3D media experience, and as users navigate through a screen using user inputs, organizer 12 serves random image requests.
According to at least one described embodiment, the organizer 12 and participants 14 understand a common interactive scalable meta-file-format for the content to be experienced. Applications 16 of participants 14 interpret, decode and permit display of received interactive media data, and the scalable communications protocol indexed by the segment identifiers for a given interactive media session 10 may be standardized in at least one embodiment. One embodiment facilitates and enhances experiences with respect to interactive media data by communicating only a portion of the content to initiate a session 10 and providing scalable interactive media data commensurate with the capabilities of participants, accommodating "low" and "high" power users. Inasmuch as at least some described protocols and architecture are content-independent and general-purpose, it may be convenient to support the services in organizers 12 comprising one or more generic servers. A large number of interactive media experiencing applications of different kinds (e.g., games, visualization, or collaboration applications) can use a common organizer 12 to experience the media.
The protection sought is not to be limited to the disclosed embodiments, which are given by way of example only, but instead is to be limited only by the scope of the appended claims.