1. RELATED APPLICATIONS
This application incorporates by reference, and claims priority from, the U.S. provisional patent application entitled “Streaming Audio and Video for Multiple Users,” serial No. 60/034,128, having inventors Michael J. Fuller and John J. Graham.
2. THE BACKGROUND OF THE INVENTION
a. The Field of the Invention
This invention relates to the field of network delivery of audio and video information. In particular, the invention relates to a system for delivering audio and video information to multiple users.
b. Background Information
The Internet enables many different ways of communicating. Originally, the Internet was used for the exchange of files and electronic mail. As the capabilities of the Internet expand, other types of communications are enabled.
Audio and video transmissions are an important area of the communications that the Internet enables. For example, many technologies support the transmission of digital video and/or audio signals over the Internet. An example of such a technology is Quicktime™, available from Apple Computer, Inc., of Cupertino, Calif. Quicktime movies are files that can be transmitted across the Internet. Quicktime provides both audio and video displays. Many other file formats allow audio and video to be displayed on people's computers.
This paragraph describes an example use of the Quicktime technology. A user will have a browser application that resides on his/her computer. The computer, acting as a client under the direction of the browser application, will connect to various World Wide Web (web) servers. Each web server will typically serve hypertext markup language (HTML) files to the clients. The files may include text, graphics references, and references to specialized files. Some of these specialized files can include audio and video information in Quicktime form. The clients can then play these audio and video files once they are downloaded, using a Quicktime plug-in, a helper application, or Quicktime capabilities built into the browser application. A plug-in and a helper application are described in greater detail below.
Streaming audio and video, a subset of the types of audio and video that can be transmitted over the Internet, allows people to broadcast long and/or live video and audio transmissions across the Internet. Streaming video and audio are video and audio digital data that are transmitted on a continuous basis. A client can access the data stream and regenerate the video images and audio signal as they are being transmitted. Streaming technology is particularly helpful where the events are live, or where the files would be so large as to be a burden on the end users. Examples of where streaming technology is particularly useful include the display of conferences, sporting events, radio broadcasts, television broadcasts, and the like.
RealNetworks, Inc. of Seattle, Wash., provides a system for transmitting streaming audio and video signals to users over the Internet. RealNetworks supplies a server that allows multiple users to simultaneously receive streaming audio and video.
The RealNetworks system requires not only that the client have additional software, but also that the content provider have a separate server from its normal web server. For a client to receive a RealAudio broadcast, the client typically connects through its browser to a web page with a reference to a RealAudio server. The client then starts its separate RealAudio player program. The RealAudio player program then connects to the referenced RealAudio server. A significant drawback of such an arrangement is that the user must download the RealAudio player program. This program must then be installed on the user's computer. This may cause a number of problems for the user. For example, if the user is behind a firewall, or some security program, the client may not be able to receive the broadcast from the server. Additionally, the installation of any program may cause conflicts with other programs. The program also has the disadvantage of being platform specific. This means that a different program must be developed and downloaded for each type of computer that is to be used to access RealNetworks broadcasts. Additionally, the broadcasters of the streaming audio and video need to use the RealNetworks server, which is separate from the broadcasters' World Wide Web server (also referred to as the web server). This increases the broadcasters' security problems because the broadcasters must now be concerned with two separate servers.
Another example of a video and audio system that uses Internet-like communications is the MBone. The MBone is a specialized communication network that allows for the distribution of streaming video and audio signals to multiple users. A specialized network is set up specifically to transmit MBone communications. A significant drawback of this system is that users must be connected to the specialized network. Additionally, users are required to have specialized software on their computers to listen to and watch MBone transmissions.
A streaming video system, not requiring a user to download a separate program, was developed for a single user by John Graham of California. This single user broadcast technology allowed a web server to serve a single streaming video signal to a single client. Although the user did not need to download a plug-in to see the video, only one user was allowed to access the video stream at a time. In this system, video information was captured from a video camera and digitized. The digital video information was then encapsulated in a MIME encoded multipart data stream. The client received this data stream and reconstructed frames of the digital video.
Therefore, what is desired is a platform independent video and audio streaming system that does not require the user to download additional programs beyond the functionalities found in a browser.
3. A SUMMARY OF THE INVENTION
A system and method of providing streaming audio and video data to multiple users is described. In one embodiment, the system comprises a first client, a second client and a server. The first and second clients are executing browsers. The server can communicate with the two clients. The server concurrently provides streaming audio and video data to both of the clients. Importantly, the server does not require the two browsers to use a plug-in or a helper application to receive and use the streaming audio and video data.
In some embodiments of the invention, a browser causes a client to request an HTML file from a web server. The client receives the HTML file. The HTML file includes an HTML tag that directs the browser to load one or more applets from the server. The browser executes the applets, causing the browser to request streaming audio and video from the web server. That request may or may not include parameters giving information about the type of request being made. The web server associates a server process with the request, given the parameters in the request. The web server notifies a real-time audio and video process that streaming audio and video information is needed. In response to the notification, the real-time audio and video process stores encoded audio and video data in a shared memory location. The server process accesses the shared memory and inserts the audio and video data into one or more data streams. The client receives the data streams and reconstructs the audio and video signals using only the capabilities of the browser. In some embodiments, a separate stream and server process is used for each of the audio and video data. These embodiments allow multiple clients to simultaneously receive the same audio and video data.
Other embodiments of the invention include a web server that can serve streaming audio and video information as well as perform more usual web server functions (such as serving web pages, performing file transfers, and supporting secure communications). These embodiments have the advantage of allowing the broadcasters and the users to set up their security configurations for one web server, rather than two servers (a web server and a streaming audio and video server).
Although many details have been included in the description and the figures, the invention is defined by the scope of the claims. Only limitations found in those claims apply to the invention.
4. A BRIEF DESCRIPTION OF THE DRAWINGS
The figures illustrate the invention by way of example, and not limitation. Like references indicate similar elements.
FIG. 1 illustrates a system including one embodiment of a streaming audio and video system for multiple users where client computers do not need plug-ins or helper programs.
FIG. 2 illustrates an example method of streaming audio and video for multiple users.
FIG. 3 illustrates a web page that a user can use to access a streaming audio and video broadcast.
FIG. 4 illustrates a web page for selecting the bandwidth of the user's Internet connection.
FIG. 5 illustrates a web page having streaming audio and video.
5. THE DESCRIPTION
The following sections describe embodiments of the invention. The first section provides definitions that will help in the understanding of the remaining sections. The second section shows an example system that supports various embodiments of the invention. The third section describes an example method of using the invention. The fourth section illustrates an actual use of the streaming audio and video used in some embodiments of the invention. The last section reviews additional alternative embodiments of the invention.
a. Definitions
The following definitions will help in the understanding of the following description.
Client: a computer, process, program, or combination of computers, processes or programs, that can communicate with a server.
Server: a computer, process, program, or combination of computers, processes or programs, that can communicate with a client. The server and the client can be executing on the same computer(s).
Web server: a server for serving at least Internet related requests. Example web servers can serve HTML pages in response to HTTP requests from clients. Some web servers can serve many different kinds of requests, e.g., HTTPS, and/or FTP.
Browser: an application, program, process, or combination of applications, programs or processes, that allows a client to make a request of a web server and process the results of the request. The browser may be part of a stand-alone application or a set of programs that are integrated into the operating system of the client.
Plug-in: plug-ins are external software programs that extend the capabilities of the browser in a specific way. For example, a plug-in can be used to give the browser the ability to play audio samples or view video movies from within the browser.
Helper application: like plug-ins, helper applications are external software programs. The browser redirects some types of files to the helper applications. The helper applications allow clients to process many different types of files on the Internet. When the browser encounters a sound, image, or video file, the browser hands off the data to a helper application to run or display the file.
b. Example System

FIG. 1 illustrates a system that includes a client side 100, a communications interface 180, and a server side 130. The client side 100 includes a client 112 and a client 111. The client 112 includes a browser 102 having a video display area 103. The client 111 includes a browser 108 and a video display area 104. The communications interface 180 includes the Internet 185. The server side 130 includes a web server 131, a shared memory 135, and a real-time server 140. The web server 131 includes two processes, a process 138 and a process 139. The real-time server 140 includes a video module 144, an audio module 146, a video proxy 148, and an audio proxy 149. The real-time server 140 interfaces with a number of other elements. These elements include a video card 159, an input audio interface 162, and an HTTP connection to remote server 170. The various elements of FIG. 1 have now been listed.
The following paragraphs describe the interconnections between the elements of FIG. 1. Beginning on the server side 130, the video card 159 receives a video input 158 and outputs a digital video signal to the video module 144. Similarly, an input audio interface 162 receives a mike input 164 and/or a line input 166 and outputs a digital audio signal to the audio module 146. The HTTP connection to remote server 170 receives streaming audio and video data 175. The HTTP connection to remote server 170 outputs the video data to the video proxy module 148 and the audio data to the audio proxy module 149. The real-time server 140 uses the data received by the various modules and, after some manipulation, stores portions of that data in the shared memory 135. The shared memory 135 is accessed by the web server 131. The process 138 and the process 139 transmit and receive data through the communications interface 180. Thus, the couplings of the server side 130 have been described. In some embodiments, the process 138 and the process 139 correspond to HTTPD processes.
The communications interface 180 allows the client side 100 to communicate with the server side 130. The communications interface, and the Internet 185 in particular, supports many different types of connections by the client side 100 and the server side 130. In this example, the communications interface 180 includes the Internet 185. In particular, in this example, the process 138 is communicating streaming audio and video data 176 to the Internet 185. Similarly, the process 139 is communicating the streaming audio and video data 178 to the Internet 185.
On the client side 100, the client 112 is communicating the streaming audio and video data 176 with the Internet 185. Similarly, the client 111 is communicating with the Internet 185 to receive and manage the streaming audio and video data 178. The clients then communicate the information to their respective browser applications. The browser applications then generate video images in their respective video display areas.
Thus, the connections between the various elements of FIG. 1 have been described. Now the various elements will be described in greater detail in the following paragraphs. An example method of using these elements is described below in relation to FIG. 2.
The server side 130 will be described first.
The video input 158 represents a video signal that a user of such a system wishes to broadcast to the clients on the client side 100. The video input 158 can include analog signals representing video information. The video card 159 digitizes the video input 158 to produce a digital video image. Various embodiments of the invention include Sun video cards, available from Sun Microsystems of Mountain View, Calif., and Parallax XVideo Xtra™ video cards, available from Parallax Graphics, Inc., of Santa Clara, Calif. However, what is important is that the real-time server 140, and in particular the video module 144, receives some sort of digitized video signal. The video module 144 is responsible for providing the real-time server 140 with the video information that will be broadcast to the client side 100. The video module 144 can convert the digitized video signals into a form of better use to the rest of the system. For example, the video module 144 may, if this is not done by the video card 159, convert the digital video data into a sequence of JPEG digital video frames. In any case, what is important is that the real-time server 140 receives digital video information in a format that it can use (example formats include JPEG, MPEG, GIF, and AVI).
Similarly, the input audio interface 162 allows for the input of analog audio signals and converts this input to digital audio signals. What is important is that the audio module 146 receives a digitized audio signal that can be used by the real-time server 140. The audio module 146 may convert the digitized audio signal into any of a number of formats, corresponding to any of a number of audio transmission rates (examples include the G.711 and G.723 audio compression formats).
The HTTP connection to remote server 170 represents an important advantage of one embodiment of the invention. In this embodiment, the HTTP connection to remote server 170 allows the server side 130 to forward broadcasts of audio and video signals from other streaming audio and video servers. In these uses, the server side 130 acts as a client to another server. The HTTP connection to remote server 170 can receive video and audio signals being broadcast through the Internet 185 from another server. The HTTP connection to remote server 170 provides the digital video information from the other server to the video proxy module 148. Similarly, the HTTP connection to remote server 170 provides the audio data to the audio proxy module 149. The video proxy module 148 and the audio proxy module 149 then supply the respective video and audio data to the real-time server 140.
The real-time server 140 represents an application, or set of applications, executing on one or more computers, that prepares audio and video data for broadcasting to multiple users through the web server 131. The real-time server 140 takes the data from the various modules, processes the data, and stores the processed data in the shared memory 135. The real-time server 140 can perform compression, and other manipulations of the data, to reduce the processing burden on the web server 131. For example, in some embodiments of the invention, the real-time server 140 receives digitized video data and compresses that data into JPEG images. These JPEG images are sequenced digital frames of video. Similarly, for the audio data, the real-time server 140 breaks the audio information into one-half second time periods of audio data (other embodiments use other time periods). These one-half second time periods of data are stored in the shared memory 135. The real-time server 140 can also compress the audio information into one of a number of compressed audio formats (e.g., the G.711 and/or G.723 audio compression formats). In some embodiments of the invention, the real-time server can broadcast audio and video from multiple sources to multiple clients.
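The half-second chunking of audio data described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the sample rate, sample width, and function name are assumptions chosen for the example.

```python
def split_into_periods(pcm: bytes, sample_rate: int = 8000,
                       sample_width: int = 1,
                       period_s: float = 0.5) -> list[bytes]:
    """Break a raw PCM audio buffer into fixed-length time periods,
    as the real-time server is described as doing before storing
    each period in shared memory."""
    period_bytes = int(sample_rate * sample_width * period_s)
    return [pcm[i:i + period_bytes] for i in range(0, len(pcm), period_bytes)]

# Two seconds of 8 kHz, 8-bit audio yields four half-second periods.
periods = split_into_periods(bytes(16000))
print(len(periods))  # -> 4
```

Each period would then be compressed (e.g., G.711 or G.723 in some embodiments) before being written to the shared memory 135.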
The shared memory 135 represents a shared storage area used by the real-time server 140 to store audio and video data for access by the web server 131. In one embodiment, the shared memory 135 has a locking and semaphore usage scheme to ensure that the real-time server 140 is not writing data into the shared memory 135 while the web server 131 is accessing that data. In some embodiments, the semaphores act as notifiers to indicate that new data in the shared memory 135 is available for use by the web server 131. In some embodiments, the video data and the audio data are stored in different shared memory locations.
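A minimal model of such a locked, semaphore-signaled shared memory slot is sketched below using Python threading primitives as stand-ins for operating-system shared memory and semaphores; the class and method names are illustrative, not from the patent.

```python
import threading

class SharedFrameBuffer:
    """Sketch of one shared-memory slot: the real-time server writes,
    web-server processes read. A condition variable plays the role of
    the semaphore that signals readers that new data is available."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._seq = 0   # sequence number of the latest write

    def write(self, frame: bytes) -> None:
        with self._cond:          # lock out readers while writing
            self._frame = frame
            self._seq += 1
            self._cond.notify_all()   # signal: new data is ready

    def read(self, last_seq: int) -> tuple[bytes, int]:
        with self._cond:
            while self._seq == last_seq:   # wait until a newer frame exists
                self._cond.wait()
            return self._frame, self._seq

buf = SharedFrameBuffer()
buf.write(b"jpeg-frame-1")
frame, seq = buf.read(0)
print(frame, seq)  # -> b'jpeg-frame-1' 1
```

The lock guarantees the writer and readers never overlap, and the sequence number lets each serving process track which data it has already transmitted.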
The web server 131 communicates data over the Internet 185 using one or more communications protocols. In some embodiments of the invention, these protocols include HTTP (Hypertext Transfer Protocol), TCP (Transmission Control Protocol), and UDP (User Datagram Protocol). The web server 131 represents an application, including one or more processes, for communicating over the Internet 185. In one embodiment, the web server 131 includes an Apache web server. Each of the processes in the web server 131 represents one or more processes for serving streaming audio and video data to the client side 100. In some embodiments, the web server 131 transmits the video data as a multipart MIME (Multipurpose Internet Mail Extensions) encoded file for decoding directly by the browser, or as compressed video information for decoding by an applet run in the browser. The web server 131 transmits the audio data as compressed audio data for decoding by an applet run in the browser.
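The multipart MIME framing mentioned above can be sketched as follows. This is a simplified illustration of how one JPEG frame might be wrapped as a part of a multipart/x-mixed-replace response; the boundary string "frame" is an arbitrary example, not taken from the patent.

```python
BOUNDARY = "frame"   # arbitrary example boundary string

def multipart_headers() -> bytes:
    """HTTP headers announcing a never-ending multipart stream."""
    return (f"Content-Type: multipart/x-mixed-replace; "
            f"boundary={BOUNDARY}\r\n\r\n").encode()

def wrap_frame(jpeg: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart part; a browser that
    understands multipart/x-mixed-replace replaces the previously
    displayed image with each new part it receives."""
    head = (f"--{BOUNDARY}\r\n"
            f"Content-Type: image/jpeg\r\n"
            f"Content-Length: {len(jpeg)}\r\n\r\n").encode()
    return head + jpeg + b"\r\n"

part = wrap_frame(b"\xff\xd8 fake jpeg bytes \xff\xd9")
print(part.startswith(b"--frame"))  # -> True
```

Because each part simply replaces the last, the server can keep emitting parts indefinitely and the browser renders them as a video-like sequence.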
In some embodiments, the web server 131 initiates a separate process for each audio and video connection made from the client side 100. Thus, for one client receiving streaming audio and video data, two processes would be started within the web server 131. The first process would supply video data and the second process would supply audio data. The processes access the shared memory 135 and serve the data across the Internet to the respective client.
In some embodiments, the web server 131 initiates at least one process for each client. This provides important advantages in some embodiments of the invention. In particular, because the web server 131 is serving the data directly through processes it created, server side 130 users need not worry about security issues beyond those already faced with their web server 131. Thus, these embodiments of the invention have a lower chance of interfering with client side 100 firewalls and a lower chance of having a server side 130 security problem.
Other embodiments of the invention include separate Common Gateway Interface (CGI) programs for audio and video. These CGI programs are used by the web server 131 to serve the streaming audio and video data. These CGI programs are not necessarily integrated as tightly with the web server 131 as the process 138 and the process 139. However, a CGI program allows for the easy extension of many different types of web servers.
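A CGI program of this kind receives its request parameters through the environment, typically the QUERY_STRING variable. The sketch below shows how such a program might read which stream and rate were requested; the parameter names `stream` and `rate` are assumptions for illustration, not from the patent.

```python
import os
from urllib.parse import parse_qs

def pick_stream(environ=os.environ) -> tuple[str, int]:
    """Read the requested stream name and audio/video rate from the
    CGI QUERY_STRING. Parameter names and defaults are illustrative."""
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    stream = qs.get("stream", ["default"])[0]
    rate = int(qs.get("rate", ["8000"])[0])
    return stream, rate

# e.g. a request for /cgi-bin/audio?stream=news&rate=16000
print(pick_stream({"QUERY_STRING": "stream=news&rate=16000"}))
```

The CGI program would then open the shared memory for that stream and begin writing the multipart or compressed-audio response to standard output.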
The communications interface 180 permits communications between the server side 130 and the client side 100. In this example, the Internet 185 supports the communications. Other embodiments of the invention support other communications interfaces. For example, the Internet 185 can be replaced by a local area network, a wide area network, a proprietary telecommunications and networking infrastructure, or some other communications interface. What is important is that the server side 130 can communicate with the client side 100. The communications interface 180 can also include combinations of the above-described technologies. For example, the Internet 185 can include a web server through which the clients on the client side communicate to access the Internet 185. The client side 100 can be on a local area network that is connected through a server, or router, to the Internet 185.
The client side 100 represents the consumers of the streaming audio and video data. In this example, the two clients are receiving separate streaming audio and video data signals. Other embodiments of the invention support many more clients.
The client 112 represents a computer, such as a PC-compatible computer, running a browser application 102. For video display, the browser application 102 can include a Netscape Navigator™ or Communicator™ program for “multipart/x-mixed-replace” MIME type video, or Microsoft Internet Explorer™ 3.0 or later for a Java-based video transmission. In some embodiments, the Java-based video transmission applet parses the multipart/x-mixed-replace MIME type video. For audio, the browser application 102 can include any browser that supports Java and/or JavaScript.
The browser application 102 is responsible for receiving the streaming audio and video data 176 and reconstructing an audio and video signal suitable for the end user. In this example, the video display area 103 displays the reconstructed video information received from the video input 158 at the real-time server 140. Similarly, the client 111 is executing the browser 108. The browser 108 is displaying the same video signal in the video display area 104. The client 111 represents another computer executing a browser application.
Various embodiments of the invention have modifications to the system shown in FIG. 1. Some of these variations are described in this paragraph. For example, the client 111 and the client 112 can be the same computer or different computers. The clients can be on the same local area network or on completely different local area networks. There can be many more clients receiving the information from the server side 130. Additionally, as shown with the HTTP connection to remote server 170, a real-time server 140 can appear on the client side 100 to distribute data to other clients.
Note that portions of the system, and embodiments of the invention, are sets of computer programs or computer data that can be stored on computer-usable media such as floppy disks, hard drives, CD-ROMs, Zip disks, etc.
Thus, an example system supporting streaming audio and video data for multiple users has been described.
c. Example Method
FIG. 2 illustrates one example of a method of broadcasting streaming audio and video data to multiple users. This example method could be executed on the system of FIG. 1. In this example, the client 112 will initiate a request to receive streaming audio and video data 176 from the server side 130. The client 112 will display the video data in the video display area 103 and will play the audio data for the user. Importantly, the client 112 can play audio and video without the need of a plug-in or helper application. As will be shown, the audio and video play as part of a transparent process of the client 112 loading a web page from the web server 131.
At block 210, the client 112 initiates an HTTP request to the web server 131. This could be the result of the browser 102 receiving and displaying an HTML (hypertext markup language) page including a link that will initiate streaming audio and video. The user would have selected this link, which will then result in the browser 102 making the connection to the web server 131.
As a result of the connection, at block 220, the web server 131 supplies a Java applet that decodes the audio data and helps in the display of the video information. These instructions could be supplied as two separate Java applets or as one combined Java applet.
At block 230, the client executes the Java applets. The video display portion of the applet initiates a request of the web server 131 for an HTTP transmission of multipart/x-mixed-replace MIME encoded video (or, in other embodiments, video data for Java decoding). Also at block 230, the audio Java applet makes a similar request of the web server 131. The requests can include optional information such as desired video frame rates or audio rates. The requests include the URI (universal resource indicator) indicating the particular audio or video streaming information to be served.
The following will describe the audio serving by theweb server131 and eventual decoding by theclient112. The video serving will be described after the audio.
At block 240, in response to executing the Java applet, the client makes an audio request of the web server 131. This is done through the Java applet, which supplies a universal resource indicator to the web server 131. The universal resource indicator can indicate the audio stream that is being requested. This request looks, to the client 112, like a file download request. The web server 131 responds accordingly to this request by beginning to transfer the streaming audio information. Importantly, the client 112 does not need to know that the file is a streaming audio or video signal that is essentially never ending.
In response to the client request, the web server 131, and in particular the process 138, makes an audio data request of the shared memory 135. Note that if the process 138 had not been created, the web server 131 creates the process (in some embodiments, the web server 131 creates a separate process for each of the audio and video data streams). The audio data request of the shared memory is done by the web server 131, and in particular by the process 138, by notifying the real-time server 140 that audio information is requested. In subsequent iterations, the web server 131 need not make the explicit request for the audio information. Once requested, the real-time server 140 will continue to provide audio information, in these embodiments, until it is told to stop.
In any case, the real-time server 140, in response to the request, prepares audio information and writes this information into the shared memory 135. In various embodiments of the invention, the real-time server 140 prepares the audio data by breaking the audio information into time periods. This audio information is also compressed into various sets of compressed data corresponding to different audio rates. Higher audio rates correspond to better quality audio signals. In these embodiments of the invention, the real-time server 140 writes the data for the various audio rates to the shared memory 135, thereby reducing the workload of the web server 131. Different web server 131 processes will require different audio rates depending on their connections to their respective clients. By storing the information corresponding to the different audio rates in the shared memory 135, each process can access the desired audio rate data from the shared memory 135. Thus, the web server 131 need not calculate the compressed audio data for each process within the web server 131.
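The per-rate preparation described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the "compression" here is a simple byte-truncation stand-in, not the G.711/G.723 codecs the patent names, and the rate values are examples.

```python
def compress(period: bytes, rate: int, base_rate: int = 16000) -> bytes:
    """Placeholder for real audio compression: keep a fraction of the
    bytes proportional to the requested rate. Illustrative only."""
    keep = max(1, len(period) * rate // base_rate)
    return period[:keep]

def store_all_rates(period: bytes, requested_rates: set[int]) -> dict[int, bytes]:
    """Compress one audio time period once per requested rate, producing
    the per-rate blobs the real-time server would write to shared memory
    so each web-server process can simply look up the rate it needs."""
    return {rate: compress(period, rate) for rate in requested_rates}

shared = store_all_rates(bytes(8000), {8000, 16000})
print(sorted((r, len(b)) for r, b in shared.items()))  # [(8000, 4000), (16000, 8000)]
```

The key point the code models is that the expensive per-rate work happens once, in the real-time server, rather than once per serving process.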
The process 138, through the web server 131, now transmits the data accessed from the shared memory 135. This corresponds to block 246.
At block 248, the client 112 receives the compressed audio data. The client decompresses the audio data as directed by the Java audio applet and plays the audio information through the audio system of the client 112.
At block 241, a test is made to determine whether the web server 131 should continue broadcasting the streaming audio information to the client 112. This test is made by determining whether the client 112 has broken the connection to the web server 131.
The web server will continue serving the data as long as the client 112 is connected to the web server 131 through the Java audio applet.
Note, importantly, that the user at the client 112 has not had to download any additional plug-ins or helper programs to play the streaming audio information.
Turning to the video broadcasting, at block 252, the web server 131 makes a video information request to the real-time server 140. The real-time server 140 takes each video frame from the video module 144, or the video proxy module 148, and compresses that information into a JPEG image. In some embodiments of the invention, the video card 159 provides the images as JPEGs. The requesting procedure is similar to that followed in the audio request of block 240.
The real-time server 140, at block 254, writes one JPEG frame into the shared memory 135. The process 138 accesses the shared memory 135 to retrieve the JPEG frame and transmit that frame to the client 112. At block 256, the process also formats the JPEG frame as part of a multipart MIME encoded file.
At block 258, the client 112, using the capabilities of the browser 102, decompresses the video data from the MIME encoded format, and from the JPEG encoded form, and creates the display in the video display area 103.
At block 251, the web server 131 determines whether the video Java applet is still requesting video frames. Block 254 through block 251 are then repeated. The result of these blocks is that multiple frames of video information are displayed in the video display area 103. Thus, the user has the perception of a video display at the client 112.
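The client-side work of turning the received byte stream back into individual frames can be sketched as follows. This is an illustrative sketch of parsing a multipart/x-mixed-replace stream; the boundary string "--frame" is an example, not from the patent.

```python
def split_multipart(stream: bytes, boundary: bytes = b"--frame") -> list[bytes]:
    """Split a multipart/x-mixed-replace byte stream back into the
    individual frame bodies, the way a Java applet (or the browser
    itself) would before displaying each JPEG."""
    frames = []
    for part in stream.split(boundary)[1:]:
        header_end = part.find(b"\r\n\r\n")   # end of the part headers
        if header_end == -1:
            continue                          # trailing terminator, no body
        body = part[header_end + 4:].rstrip(b"\r\n-")
        if body:
            frames.append(body)
    return frames

stream = (b"--frame\r\nContent-Type: image/jpeg\r\n\r\nAAA\r\n"
          b"--frame\r\nContent-Type: image/jpeg\r\n\r\nBBB\r\n--frame--")
print(split_multipart(stream))  # -> [b'AAA', b'BBB']
```

Each recovered body would then be decoded as a JPEG and painted into the video display area.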
Note that, by the time the audio information from the previous audio transmission has been played, a new audio transmission has been received and decompressed. Thus, the client 112 will have a continuous audio signal being presented to the user.
If the audio, or the video, information is not being received by the client 112 at a sufficient data rate, the corresponding Java applet, in some embodiments of the invention, can request a different rate of transmission. The Java applet can request a lower rate, corresponding to a lower quality audio or video signal, that will more appropriately match the bandwidth available to the client 112.
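The applet-side fallback just described amounts to choosing the highest rate the measured bandwidth can sustain. The sketch below illustrates that decision; the rate ladder is an example, not taken from the patent.

```python
def pick_rate(measured_bps: float,
              available: tuple[int, ...] = (64000, 32000, 16000, 8000)) -> int:
    """Choose the highest available transmission rate that the
    measured bandwidth can sustain, falling back to the lowest
    rate when bandwidth is very poor. Rates are illustrative."""
    for rate in available:          # available is ordered high -> low
        if measured_bps >= rate:
            return rate
    return available[-1]

print(pick_rate(20000))  # -> 16000
print(pick_rate(1000))   # -> 8000
```

The applet would then re-request the stream at the chosen rate, and the web server process would serve the matching per-rate data from the shared memory.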
One advantage of the system of FIG. 1 is that if the web server 131 becomes heavily loaded, the video frame rate is automatically reduced. This is done by ensuring that the audio processes take priority over the video processes. If a video process cannot access the shared memory 135 in sufficient time, that video frame will simply not be transmitted to the client 112. However, the corresponding audio process should have an opportunity to transmit the audio information.
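This load-shedding rule can be sketched with a non-blocking lock acquisition: a video process that cannot get the shared memory within its deadline drops the frame, while an audio process would block until it succeeds. The timeout value and function names below are illustrative assumptions.

```python
import threading

def try_send_video(lock: threading.Lock, frame: bytes,
                   timeout: float = 0.01):
    """Sketch of the video process's side of the priority scheme:
    if the shared-memory lock cannot be acquired in time (e.g. an
    audio process holds it), the video frame is simply dropped."""
    if lock.acquire(timeout=timeout):
        try:
            return frame          # frame would be transmitted
        finally:
            lock.release()
    return None                   # frame dropped under load

lock = threading.Lock()
print(try_send_video(lock, b"jpeg"))  # lock free: frame is sent

lock.acquire()                        # simulate a busy audio process
print(try_send_video(lock, b"jpeg"))  # lock held: frame is dropped -> None
```

Dropping video frames degrades gracefully (a lower perceived frame rate), whereas dropped audio would be immediately audible, which is why audio takes priority.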
As has been seen from the above discussion, the user has not been required to download any plug-ins or use any helper applications to receive the streaming audio and video data. Additionally, the web server 131 is able to execute this example method for multiple clients. Each client would have a corresponding set of processes in the web server 131. The number of processes is limited only by the number of connections that can be supported by the web server 131. As has been noted, some of the work that would normally be performed by the web server 131 has been moved into the real-time server 140 to reduce the load on the web server 131.
d. Example Video Display
FIG. 3 through FIG. 5 represent the interface presented to a user using a browser 102 on a client 112. In this example, the user is using a standard PC with a standard Netscape Communicator™ 4.0 browser application. FIG. 3 includes a browser window 302 with a cursor positioned over a transmission selection link 304. The transmission selection link 304 corresponds to a request for transmission of streaming audio and video data. FIG. 4 illustrates the result of the selection of the transmission selection link 304. The user is presented with a number of connection speeds to select from. The user selects the speed associated with his or her connection to the Internet 185. In this example, the user is selecting a T1 or better connection speed. In one embodiment of the invention, the Java applets control the presentation of the various selection speeds. This is shown as a bandwidth selection link 404.
FIG. 5 illustrates the result of the “T1 or higher” bandwidth selection link being chosen. A digital video display 502 is shown in the browser window 302. The client computer, executing the Netscape Communicator browser, is playing an audio signal corresponding to the digital video display 502. In this example, the user is allowed to modify the video rate, shown in frames per second, by selecting one of the video rate selectors 510. A result of selecting one of the various video rate selectors 510 would be that the corresponding video Java applet will communicate with its corresponding process in the web server 131. The Java applet would request a change in the video frames-per-second rate being supplied by the process in the web server 131. Similarly, a set of audio rate selectors 512 allows the user to select a higher or lower quality audio signal. The corresponding Java audio applet would communicate with its corresponding process in the web server 131 to request the change in the audio rate.
e. Additional Embodiments
Thus a system for broadcasting streaming video and audio information to multiple users has been described. However, various embodiments of the invention include modifications and optional additions to the system.
It is possible for the real-time server 140 to be receiving and transmitting multiple different video and audio signals through the web server 131. Additionally, multiple web servers could access the information from the real-time server 140. Additionally, multiple real-time servers 140 could provide information for the web server 131.
In some embodiments of the invention, users can select the audio stream, the video stream, or both the audio and video streams from a server. Similarly, some users can select one of these sets of streams while other users select other sets of streams. Additionally, some users can select different rates for audio or video information than other users. In such systems, the real-time server 140 supplies the data corresponding to the requested rates to the shared memory 135. For example, if one user is requesting a first audio rate and a second user is requesting a second audio rate, the shared memory 135 would store audio data for both rates of compression. In some embodiments, the web server 131 notifies the real-time server 140 when a particular audio rate is no longer required by any of the processes.