RELATED APPLICATIONS

This application is a Continuation of U.S. patent application Ser. No. 13/661,572, filed on Oct. 26, 2012, which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION

Video security systems include security cameras and concurrently present the cameras' video data streams for simultaneous observation on devices such as dedicated displays or computer workstation displays. Often, individual panes within a matrix view can be selected and expanded to extend over the entire area of the displays.
Typically, the video data streams from the security cameras are combined into the matrix using mixers. This matrix can then be sent directly to the displays or streamed over a network to user devices, including mobile devices. In fact, some of these systems enable the video data streams to be controlled at the mobile user devices.
These video security systems allow users on the user devices to select one or more of the displayed video data streams to create auxiliary views of interest beyond the standard matrix view of the video data streams from each video camera. These video security systems typically require the installation of custom software on a view server that pushes predefined views to mobile and non-mobile user devices. These predefined views occupy fixed regions on the displays of the user devices. Once the user has configured the views on the view server, the user devices can then access and display the video data streams in these views.
SUMMARY OF THE INVENTION

One of the problems with current video security systems is that the creation of new views of displayed video data streams requires multiple configuration steps. On the video stream manager device, the user selects the video data streams to mix and encode for display on the new view. On the view server, the user creates the new view and selects the video data streams for display on the new view. User devices can then display the new view of video data streams. These separate actions add equipment costs and time delays, and require coordination of steps.
The present invention overcomes this problem by providing a security video distribution system for a video security system that allows user devices such as mobile user devices to select displayed video data from security cameras, create new views containing the selected video data, and program the security video distribution system to perform mixing and transcoding of the selected video data streams in response to the selection. In one example, the transcoding and mixing uses shared memory buffers, which provide more flexibility and robustness when performing operations upon video data streams at different frame rates and resolutions. Mixers can read from those shared memory buffers, resize the video and change the color space if necessary, and then write the result to another shared memory buffer, from which the encoders read.
This approach has advantages in ease of configuration and lower cost. Moreover, once a user device creates a new view of selected video data streams, all other user devices can access the same view. This allows first responders at accident scenes and law enforcement personnel to create and share selected views of interest in real time.
In general, according to one aspect, the invention features a security video distribution system for a video security system, which comprises an image processing system that performs transcoding and mixing of video data from security cameras, and an application support system that streams the mixed video data to user devices. In some cases, the image processing system further performs transcoding and mixing of image data.
In one embodiment, the application support system enables selection of the streaming video data at the user devices, and the image processing system changes the mixing of the mixed video data in response to the selection. In one implementation, the image processing system comprises a video decoder subsystem that decodes security camera video data from the security cameras, a video mixer subsystem that mixes the decoded video data into the mixed video data that includes video data from one or more of the security cameras, and a video encoder subsystem that encodes the mixed video data into encoded mixed video data for streaming to the user devices.
The application support system preferably comprises a web services component that receives messages from the user devices indicating the selection of the streaming video data at the user devices and an operation to perform on the selected video data, and a video streaming server that receives encoded mixed video data from the video encoder subsystem for streaming to the user devices. In another implementation, the system enables the user to perform a combine operation upon the selected video data at the user device.
In general, according to another aspect, the invention features a security video distribution method for a video security system, which comprises transcoding and mixing video data from security cameras, and streaming the mixed video data to user devices. In some cases, the invention further comprises transcoding and mixing of image data and streaming the mixed image data to user devices.
The security video distribution method preferably further comprises enabling selection of streaming video data at the user devices, and mixing the decoded video data from different security cameras in response to the selection.
The security video distribution method can further comprise receiving messages from the user devices indicating the selection of the streaming video data at the user devices and an operation to perform on the selected video data, communicating with a web services component, and receiving encoded mixed video data for streaming to the user devices. In another detail, the security video distribution method further comprises performing a combine operation on the selected video data.
In general, according to still another aspect, the invention features a transcode and mixing server for a security video distribution system, comprising a video decoder subsystem that decodes security camera video data from security cameras into decoded video data, a video mixer subsystem that mixes the decoded video data into mixed video data, and a video encoder subsystem that encodes the mixed video data into encoded mixed video data for streaming to user devices. In some cases, the video mixer subsystem also accepts image data.
In implementations, the video mixer subsystem receives messages from the user devices indicating the selection of the streaming video data at the user devices and an operation to perform on the selected video data.
The transcode and mixing server preferably further comprises a decoder mixer shared memory subsystem for buffering the decoded video data from the video decoder subsystem for the video mixer subsystem, and a mixer encoder shared memory subsystem for buffering the mixed video data from the video mixer subsystem for the video encoder subsystem.
In another embodiment, the transcode and mixing server further comprises a video transfer switch for copying decoded video data for multiple user devices.
In general, according to still another aspect, the invention features a transcode and mixing method for a security video distribution system, including decoding security camera video data from security cameras into decoded video data, mixing the decoded video data into mixed video data, and encoding the mixed video data into encoded mixed video data for streaming to user devices. In some cases, the transcode and mixing method further comprises accepting image data.
In examples, the transcode and mixing method further comprises receiving messages from the user devices indicating the selection of the mixed video data at the user devices and an operation to perform on the selected video data, buffering the decoded video data prior to mixing, buffering the mixed video data prior to encoding, and mixing video data for different user devices.
The above and other features of the invention, including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS

In the accompanying drawings, reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale; emphasis has instead been placed upon illustrating the principles of the invention. Of the drawings:
FIG. 1 is a block diagram showing a video security system with a security video distribution system according to the present invention;
FIG. 2 is a block diagram showing a transcode and mixing server according to one embodiment of the present invention;
FIG. 3 is a block diagram showing a transcode and mixing server according to another embodiment of the present invention;
FIG. 4 illustrates an exemplary image of video data displayed on a user device;
FIG. 5A illustrates one example of an application control message and a configuration control message;
FIG. 5B illustrates one example of a video switch control message;
FIG. 6A is an exemplary data flow block diagram showing the video security system receiving security camera video data from four security cameras and displaying the video data streams on a user device, and then receiving from the user device a selection of one video data stream and a zoom operation;
FIG. 6B is an exemplary data flow block diagram showing the displaying of the video data on a user device display in response to the selection in FIG. 6A;
FIG. 7A is an exemplary data flow block diagram showing the video security system receiving security camera video data from four security cameras and displaying the video data on a user device, and then receiving from the user device a selection of two video data streams and image data and a combine operation;
FIG. 7B is an exemplary data flow block diagram showing the displaying of the video data and image data on a user device display in response to the selection in FIG. 7A.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1 shows a video security system 100 and a security video distribution system 102 constructed according to the principles of the present invention.
The security video distribution system 102 is comprised of an application support system 112 and an image processing system 110. The application support system 112 communicates with user devices 104 over network 108, and interfaces with the image processing system 110. The image processing system 110, in turn, interfaces with a security control system 114. The image processing system 110 further comprises one or more transcode and mixing servers 128.
The application support system 112 is further comprised of a web services component 132 and one or more video streaming servers 134. The web services component 132 of the application support system 112 interfaces with external systems 184, a web application server 180, user devices 104, and the image processing system 110. The web services component 132 provides an Application Programming Interface (“API”) that interacts with components and services both internal and external to the security video distribution system 102. The video streaming server 134 accepts transcoded video data 154 sent by the image processing system 110, and transmits streaming video data 146 over the network 108 to user devices 104.
The user devices 104 such as mobile user devices 106 have user applications 182 that communicate with the security video distribution system 102 and the web application server 180 over the network 108. The network 108 can be a private or public network, and examples of supported networks include, but are not limited to, Local Area Networks (“LAN”), Wide Area Networks (“WAN”), broadband networks, and the Internet/World Wide Web.
The video streaming server 134 provides access to the transcoded video data 154 via a Uniform Resource Locator (“URL”) by streaming video data 146. The URL is provided by the web services component 132. More specifically, when the user wants to view a pre-configured video matrix view, the user selects it from a list in the user application 182 and indicates “play” rather than another command such as “edit.” The user application 182 then calls the web services component 132 to “play” the video matrix and receives a URL upon return of the call. The web services component 132 determines whether the video matrix pipeline is already running; if not, it creates and starts a pipeline. It obtains the URL from the pipeline and returns it to the user application 182. If there is a failure, no URL is returned. In either case, a success or failure return value is also provided.
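By way of illustration only, the following Python sketch summarizes this “play” request flow. It is a minimal sketch under stated assumptions: the class, method, and URL names (PipelineRegistry, play_view, the rtsp address) are hypothetical and do not correspond to elements of the figures or to any actual API of the web services component 132.

    # Minimal sketch of the "play" request flow described above. All class,
    # function, and URL names are illustrative assumptions, not actual APIs.

    class PipelineRegistry:
        """Tracks running video matrix pipelines by view identifier."""

        def __init__(self):
            self._pipelines = {}

        def get_url(self, view_id):
            return self._pipelines.get(view_id)

        def create_and_start(self, view_id):
            # A real implementation would set up decoders, a mixer, and an
            # encoder; here we only record a streaming URL for the pipeline.
            url = "rtsp://streaming-server.example/views/%s" % view_id
            self._pipelines[view_id] = url
            return url

    def play_view(registry, view_id):
        """Return (success, url) for a pre-configured video matrix view."""
        url = registry.get_url(view_id)          # is the pipeline already running?
        if url is None:
            try:
                url = registry.create_and_start(view_id)
            except RuntimeError:
                return False, None               # failure: no URL is returned
        return True, url

    if __name__ == "__main__":
        print(play_view(PipelineRegistry(), "matrix-view-1"))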
The user applications 182 can be applications native to the user devices 104, running executable code built for the user devices 104, such as web browsers that accept web page data 140, sent by a web application portal 130 of the web application server 180 over the network 108.
Thesecurity control system114 is further comprised of one or morenetwork video recorders126. Thenetwork video recorders126 receive securitycamera video data138 from video data sources such as ananalog security camera118 anddigital security cameras116. While thesecurity control system114 primarily accepts securitycamera video data138, thesecurity control system114 can also accept video from sources such as Internet Protocol (“IP”) cameras and mobile user devices, and non-video sources such as JPEG images, screen capture image data taken from personal computers, and animated films. An example of animated films is flash animation from Adobe, Inc.
Preferably, the API presented by theweb services component132 allows the user applications182 to upload non-video media such asimage data136 from image data sources such as JPEG stillimage data120 and screen capture still imagedata122 to the securityvideo distribution system102.
Theimage processing system110 receives theimage data136 from image data sources such as the JPEG stillimage data120 and the screen capture stillimage data122. For communications with the securityvideo distribution system102, thesecurity control system114 receives image processingsystem control messages156 from theimage processing system110, sends non-video data to theimage processing system110 via a securitysystem data message158, and sends video and image data via a securitysystem media message160. Though thesecurity control system114 primarily accepts securitycamera video data138 fromdigital security cameras116 andanalog security cameras118, thesecurity control system114 can also accept video data from video sources such as video capture cards andnetwork video recorders126 and present this as input to theimage processing system110. If theimage processing system110 accepts input from an integrated video capture card, the transcode and mixingserver128 can be configured to access graphics memory from the video capture card. In some cases, theimage processing system110 further performs transcoding and mixing of image data and other composite media such as images, text, graphics, and animation.
In one example, the user applications182 allow theuser devices104 to display and perform operations upon streamingvideo data146 sent by the securityvideo distribution system102. The user applications182 allow the user to perform operations such as create, find, select, start, stop, configure, save state, and view streamingvideo data146 on theuser devices104. The user applications182 send anapplication control message144 that includes the selected video data and operations to perform on the selected video data to theapplication support system112. Theweb services component132 receives theapplication control message144, and sends aconfiguration control message150 to theimage processing system110 which programs the securityvideo distribution system102 in response to the selection.
In another example, theweb services component132 of the securityvideo distribution system102 interfaces withexternal systems184 viaexternal_messages186. Theexternal systems184 comprise such systems assecurity databases188 anduser authentication systems190. This allows the securityvideo distribution system102 to provide integrated capabilities such as authentication and authorization of users, and to save information to a database. This information includes such data as status and state of security cameras, video data recorders, video data streams, and alarm history.
FIG. 2 is a block diagram showing the transcode and mixingserver128 according to one embodiment of the present invention. The transcode and mixingserver128 accepts a variety of video data streams and image data as input, performs decoding, transcoding, mixing, and encoding of the video data streams and image data, and outputs transcodedvideo data154 to thevideo streaming server134. Each transcodedvideo data stream154 is a single encoded video data stream composed of one or more video data streams and image data decoded, mixed, and encoded by the transcode and mixingserver128. As a result, transcodedvideo data154 can also be referred to as encoded mixed video data.
The video data streams and image data input to the transcode and mixingserver128 compriseraw video data260 fromraw video sources240, securitycamera video data138 fromnetwork video recorders126, andimage data136 from raw image sources242. Video data streams can be either compressed or raw (uncompressed) format, and image data is typically in raw format.Raw video sources240 can include analog, composite, video from capture cards, and cable TV video. Securitycamera video data138 can be encoded in different formats, such as H.264 or MPEG4, and at different frame rates, such as 15 frames per second (“fps”) and 30 fps.
The transcode and mixing server 128 is comprised of a video decoder subsystem 202, a video mixer subsystem 204, and a video encoder subsystem 206. The video decoder subsystem 202 provides input to the video mixer subsystem 204, which in turn provides input to the video encoder subsystem 206. In one case, the video decoder subsystem 202 is comprised of separate decoders 216 for decoding each stream of security camera video data 138 and raw video data 260, and raw capture components 250 to accept image data 136. The video mixer subsystem 204 is comprised of one or more mixers 218, and the video encoder subsystem 206 is comprised of one or more encoders 220.
A pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next element. The transcode and mixingserver128 creates and manages video transcoding pipelines, each comprised of one ormore decoders216 orraw capture components250, onemixer218, and oneencoder220. The output of each pipeline is an encodedvideo data stream154.
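As a rough illustration of this pipeline structure, the following Python sketch chains hypothetical decoder, mixer, and encoder stages in series. The stage classes and their string-valued "frames" are assumptions made for illustration; the sketch models only the data flow, not real codec behavior.

    # Illustrative model of a video transcoding pipeline: several decoders feed
    # one mixer, which feeds one encoder. Stage names and behavior are assumed.

    class Decoder:
        def __init__(self, source):
            self.source = source

        def process(self):
            return "decoded(%s)" % self.source        # stand-in for a raw frame

    class Mixer:
        def process(self, frames):
            return "mixed(%s)" % "+".join(frames)     # single composite frame

    class Encoder:
        def __init__(self, codec):
            self.codec = codec

        def process(self, frame):
            return "encoded[%s](%s)" % (self.codec, frame)

    def run_pipeline(sources, codec="h264"):
        decoders = [Decoder(s) for s in sources]
        mixer, encoder = Mixer(), Encoder(codec)
        frames = [d.process() for d in decoders]      # output of one stage is
        return encoder.process(mixer.process(frames)) # the input of the next

    print(run_pipeline(["camera1", "camera2", "camera3", "camera4"]))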
The transcode and mixingserver128 supports real-time transcoding and mixing of video data from security cameras at different frame rates. Within the transcode and mixingserver128, video sources,decoders216,mixers218, andencoders220 operate independently of each other and can support different frame rates. In this way, the frame rate of each stage of the video transcoding pipeline can fluctuate or be changed without affecting the other stages. These stages can utilize shared memory for the output of each stage.
In one example that illustrates independent operation of each stage in the transcode and mixingserver128, video sources continually update the contents ofdecoders216, which place the decoded video into decoder sharedmemory222. Eachmixer218 independently polls decoder sharedmemory222 for the video streams/frames selected by the user, and combines the selected frames to a single composite frame in mixer sharedmemory224.Encoders220 independently poll mixer sharedmemory224 as input for creating transcodedvideo data154. Note that this flexibility can sometimes cause the system to miss an occasional video data stream sample or provide a duplicate sample, requiring careful selection of update rates for each stage in response to system conditions.
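The following sketch illustrates this independence of stages. A Python dictionary stands in for the decoder shared memory 222, and threads running at different rates stand in for the sources, decoders, and a mixer; the names, rates, and data formats are assumptions for this toy example, not the actual shared memory mechanism.

    # Sketch of independent stage rates: decoders overwrite shared slots while
    # the mixer polls them on its own schedule. A dict stands in for shared memory.
    import threading
    import time

    shared_decoder_memory = {"cam1": None, "cam2": None}

    def decoder(name, fps, stop):
        frame = 0
        while not stop.is_set():
            shared_decoder_memory[name] = "%s-frame-%d" % (name, frame)
            frame += 1
            time.sleep(1.0 / fps)                 # each source runs at its own rate

    def mixer(fps, stop):
        while not stop.is_set():
            # The mixer may see a repeated or skipped frame if its rate differs
            # from the decoders' rates, as noted above.
            snapshot = dict(shared_decoder_memory)
            print("composite of", snapshot)
            time.sleep(1.0 / fps)

    stop = threading.Event()
    threads = [threading.Thread(target=decoder, args=("cam1", 15, stop)),
               threading.Thread(target=decoder, args=("cam2", 30, stop)),
               threading.Thread(target=mixer, args=(10, stop))]
    for t in threads:
        t.start()
    time.sleep(1)
    stop.set()
    for t in threads:
        t.join()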
Transcoding is a process that changes the original encoding format of a source file, image data, or video data stream to a different target format. The source is typically first decoded to an intermediate uncompressed format, and then encoded in the desired target format. The decoders 216 and encoders 220 may support different compression formats and update frequencies (frame rates), in frames per second (“fps”). The mixers 218 also support different frame rates. The video mixer subsystem 204 instructs the mixers 218 to combine multiple frames into composite frames at a frame rate independent of the input streams to each mixer 218.
In three-dimensional computer graphics systems that integrate video, the image plane is a portion of computer memory associated with the plane of the monitor or user display device. Each video data and image data stream resides in its own portion of computer memory called video planes and graphics planes, respectively. Manipulation of video and images involves performing operations upon the video planes and graphics planes associated with the video and images, and projecting the result to the image plane for viewing on the monitor or user display.Mixers218 in the transcode and mixingserver128 function in a similar fashion, in one specific implementation.
Eachmixer218 selects one or more of the decoded video data streams and image data, analogous to video planes and graphics planes in computer graphics, performs operations upon the images/video frames, and outputs a single composite video data stream similar to an image plane in computer graphics. The output of amixer218, therefore, is a representation of what will be displayed on theuser devices104, stored in memory.
Thevideo decoder subsystem202 andvideo mixer subsystem204 utilize sharedmemory212 to provide buffering of video data and allow access to data between independently executing processes. Shared memory allows independent software processes associated withdecoders216,mixers218, andencoders220 to communicate with each other by reading and writing video data independently via sharedmemory subsystems212 and214, though they may operate at different frame rates.
InFIG. 2, the different stages of the video transcoding pipeline such as thevideo decoder subsystem202, thevideo mixer subsystem204, and thevideo encoder subsystem206 are depicted as independent software processes within and executing on the transcode and mixingserver128. Each of these stages/subsystems supports their respective decoder, mixer, and encoder components. However, it is possible for each subsystem to run on completely different hardware systems (e.g. different computers) connected by a network using data communications links. In such a case, the interfaces between the subsystems should operate with very low latency due to the relatively large size of uncompressed (raw) video frames created after the decoding and mixing stages. The “shared” memory between hardware systems is implemented as a file that each computer supporting a stage can access, or a designated memory area that is accessed via the network by using a communications protocol, in two examples. Due to advances in modern software development frameworks, components running on different physical systems can be abstracted to virtually reside and execute in one virtual computer. As a result, thevideo decoder subsystem202,video mixer subsystem204, and thevideo encoder subsystem206 and their respective components can be implemented as combinations of hardware and/or software.
Thevideo decoder subsystem202 decodes each input source as needed usingseparate decoders216, and stores each resulting decoded stream/frame into a separate sharedmemory buffer222. Eachmixer218 is configured to read from some combination of the shareddecoder memory buffers222 the video data streams selected by the user, resize the video data streams or adjust their color spaces as needed, combine them into a single stream, and then write the resulting combined (or composite) video data stream to a portion of mixer shared memory. Eachencoder220 then reads a single composite video data stream from mixer shared memory and compresses/encodes the stream to create transcodedvideo data154.
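One possible realization of the mixer's read/resize/combine/write step is sketched below, using NumPy arrays as stand-ins for raw decoded frames. The resize helper, the fixed 2x2 pane layout, and the output resolution are assumptions made for illustration; color space conversion is omitted.

    # Sketch of the mixer stage: read decoded frames, resize them to pane size,
    # and write a single composite frame. NumPy arrays stand in for raw video.
    import numpy as np

    def resize_nearest(frame, height, width):
        # Nearest-neighbour resize; a real mixer would use an optimized scaler.
        rows = np.arange(height) * frame.shape[0] // height
        cols = np.arange(width) * frame.shape[1] // width
        return frame[rows][:, cols]

    def mix_2x2(frames, out_h=480, out_w=640):
        """Combine up to four decoded frames into one composite matrix frame."""
        composite = np.zeros((out_h, out_w, 3), dtype=np.uint8)
        pane_h, pane_w = out_h // 2, out_w // 2
        for i, frame in enumerate(frames[:4]):
            r, c = divmod(i, 2)
            pane = resize_nearest(frame, pane_h, pane_w)
            composite[r * pane_h:(r + 1) * pane_h,
                      c * pane_w:(c + 1) * pane_w] = pane
        return composite

    decoded = [np.full((240, 320, 3), i * 60, dtype=np.uint8) for i in range(4)]
    mixed = mix_2x2(decoded)
    print(mixed.shape)      # (480, 640, 3) -- one composite frame for the encoder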
The transcode and mixingserver128 is further comprised of a decoder mixer sharedmemory subsystem212, and a mixer encoder sharedmemory subsystem214. The decoder mixer sharedmemory subsystem212 is comprised of one or more instances of decoder sharedmemory222, and the mixer encoder sharedmemory subsystem214 is comprised of one or more instances of mixer sharedmemory224. In addition, thevideo mixer subsystem204 accepts aconfiguration control message150, wherein thevideo mixer subsystem204 changes the mixing and transcoding of securitycamera video data138 in response to theconfiguration control message150. Theconfiguration control message150 may comprise information about which video data streams have been selected for display, as well as an operation to perform on one or more of the component streams, such as zoom, pan, combine, and/or change camera.
Thevideo decoder subsystem202 provides an interface for all input sources to the transcode and mixingserver128. Input sources comprise the securitycamera video data138, theraw video data260, andimage data136, but could also include video from Internet Protocol (“IP”) cameras, and image data in proprietary or third-party formats. Because each input source has specific characteristics, thevideo decoder subsystem202 accepts each input source into arespective decoder216 and creates a separate instance of decoder sharedmemory222 within the decoder mixer sharedmemory subsystem212 for buffering and isolation of each input source, in a current implementation. The one-to-one relationship between an input source and adecoder216 also allows the transcode and mixingserver128 to be extended to handle future video sources and unsupported encodings by today's standards.
According to this embodiment, security camera video data 138 is accepted by the video decoder subsystem 202 and decoded via a separate decoder 216 for each stream of the security camera video data 138. The video decoder subsystem 202 then writes decoded video data 238 for each video data stream into its own instance of decoder shared memory 222. In response to the configuration control message 150, the video mixer subsystem 204 instructs the mixers 218 to combine buffered decoded video data 228 from one or more instances of decoder shared memory 222, i.e., from selected video streams, into mixed video data 232. The video mixer subsystem 204 then writes each mixed (combined) video data 232 to its own instance of mixer shared memory 224. The video encoder subsystem 206 reads the buffered mixed video data 236 for each composite stream of mixed video data 232 into a separate encoder 220, and the video encoder subsystem 206 encodes the buffered mixed video data 236 into transcoded video data 154 for streaming by the video streaming server 134.
In one example of the current embodiment, the transcode and mixingserver128 constructs a video transcoding pipeline using two separate video data input streams from different video sources, encoded in different compression formats and at different frame rates. The transcode and mixingserver128 decodes each input stream using separate decoders, combines the decoded streams into a single mixed stream at a common frame rate, then encodes the single combined stream with a common compression format and frame rate. The example uses specific values to better illustrate the behavior of the transcode and mixingserver128 at each stage during the construction of the video transcoding pipeline.
For this example, thenetwork video recorder126 accepts two input streams of securitycamera video data138. One video data stream was originally encoded using H.264 format at 15-30 fps, and the other video data stream was originally encoded using MPEG4 format at 30 fps. Thenetwork video recorder126 records each video data stream in their native format, and provides the recorded securitycamera video data138 to thevideo decoder subsystem202. Thevideo decoder subsystem202 determines the compression format of each stream, and decodes one video datastream using decoder1216 which decodes using H.264 format at 15-30 fps, and decodes the otherstream using decoder2216 which decodes using MPEG4 format at 30 fps.
The decoder mixer sharedmemory subsystem212 then buffers the decodedvideo data238 fromdecoder1216 anddecoder2216 into their own instances of shared memory, decoder1 sharedmemory222 and decoder2 sharedmemory222, respectively. This makes buffered decodedvideo data228 for each video data stream available to other components in the transcode and mixingserver128, such asmixers218.
In response to a configuration control message 150 with selected security camera video data 138 and a combine operation, the video mixer subsystem 204 instructs mixer1 218, operating at 60 fps, to read the buffered decoded video data 228 from decoder1 shared memory 222 and decoder2 shared memory 222. Mixer1 218 combines the contents of the buffered decoded video data 228 from each of the two streams into mixed video data 232, and the video mixer subsystem 204 writes the mixed video data 232 to an instance of mixer shared memory, mixer1 shared memory 224. This makes buffered mixed video data 236 available to other components in the transcode and mixing server 128, such as encoders 220. Finally, the video encoder subsystem 206 reads the buffered mixed video data 236 into a single encoder, encoder1 220, which uses H.264 format at 15 fps. Encoder1 220 compresses the buffered mixed video data 236 at 15 fps intervals using H.264 format, creating a single stream of transcoded video data 154 for streaming by the video streaming server 134. As a result, the transcode and mixing server 128 completes a video transcoding pipeline that combines two separate input video data streams, encoded in different formats and at different frame rates, into a single output transcoded video stream suitable for transmission to a video streaming server 134 for display on user devices 104 such as mobile user devices 106. In a more general sense, each mixer 218 may be able to read and combine video data from any combination of decoder memory instances 222.
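The pipeline constructed in this example might be captured by a configuration along the lines of the following sketch. The field names and buffer names are assumptions chosen only to mirror the values given above; they are not an actual configuration format of the transcode and mixing server 128.

    # Hypothetical configuration capturing the example pipeline above: two input
    # streams in different formats and frame rates, one mixer, one output encoder.
    example_pipeline = {
        "decoders": [
            {"name": "decoder1", "codec": "h264",  "fps": "15-30"},
            {"name": "decoder2", "codec": "mpeg4", "fps": 30},
        ],
        "mixer":   {"name": "mixer1", "fps": 60,
                    "inputs": ["decoder1_shared_memory", "decoder2_shared_memory"],
                    "output": "mixer1_shared_memory"},
        "encoder": {"name": "encoder1", "codec": "h264", "fps": 15,
                    "input": "mixer1_shared_memory"},
    }

    for stage, config in example_pipeline.items():
        print(stage, config)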
Shared memory within the transcode and mixing server 128 can use system memory from the transcode and mixing server 128, or graphics memory from systems such as integrated video capture cards. Video capture cards allow for display possibilities such as rotation or stretching of the image plane as a virtual rectangular surface, or changing the image plane to a sphere or cylinder, as well as efficient processing and manipulation of video data, images, and graphics.
FIG. 3 is a block diagram showing the transcode and mixingserver128 according to another embodiment of the present invention. The transcode and mixingserver128 further comprises avideo transfer switch310 which copies the contents of one shared memory buffer to another at a specified interval in response to a videoswitch control message336. In one implementation, thevideo transfer switch310 is unconstrained by the contents of the shared memory buffer, allowing for “any-to-any” (including one-to-many) copying of shared memory within the transcode and mixingserver128.
The video transfer switch 310 “pulls” or “pushes” raw video frames from one shared memory buffer to another. It has knowledge of where the source and target buffers are. The switch therefore allows for the creation of simplified mixers and encoders that do not need to know where their source shared memory buffers are, and whose update logic relies only on the transfer of internal shared memory rather than the transfer of raw video samples.
In one example based onFIG. 2, the encoder “pulls” or reads raw video frames from the mixer shared memory buffer to a private buffer within the encoder. In another example based onFIG. 3, thevideo transfer switch310 is used in combination with an external encoder sharedmemory buffer314. This allows for independence of the encoder and mixer (the encoder does not have to know about the existence of the mixer shared memory buffer in order to use its contents). Thevideo transfer switch310 has this knowledge and it “pushes” the raw video sample from the mixers or video sources to the encoder's sharedmemory314.
For example, thevideo transfer switch310 can copy one instance of decoder sharedmemory222 to another instance of decoder shared memory, one instance of decoder sharedmemory222 to an instance of mixer sharedmemory224, or an instance of mixer sharedmemory224 to another instance of mixer sharedmemory224. In yet another example, thevideo encoder subsystem206 creates an instance of encoder sharedmemory314 associated with asingle encoder220, and thevideo transfer switch310 copies the contents of mixer sharedmemory224 to the newly created instance of encoder sharedmemory314. This last example provides processing time savings over the creation of a non-switched video transcoding pipeline.
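A simplified sketch of such a transfer switch appears below. It is modeled on the fields of the video switch control message 336 described later with reference to FIG. 5B (frame rate, source buffer, destination buffer); the buffer names, the in-process bytearrays standing in for shared memory, and the threading arrangement are all assumptions for illustration.

    # Sketch of a video transfer switch: copy a source shared-memory buffer to a
    # destination buffer at a fixed interval, driven by a control message.
    import threading
    import time

    shared_buffers = {
        "mixer1_shared_memory":   bytearray(b"composite-frame"),
        "encoder1_shared_memory": bytearray(len(b"composite-frame")),
    }

    def video_transfer_switch(message, stop, buffers=shared_buffers):
        interval = 1.0 / message["frame_rate"]
        while not stop.is_set():
            # Any-to-any copy: the switch, not the encoder, knows both buffers.
            buffers[message["destination_buffer"]][:] = \
                buffers[message["source_buffer"]]
            time.sleep(interval)

    stop = threading.Event()
    msg = {"frame_rate": 15,
           "source_buffer": "mixer1_shared_memory",
           "destination_buffer": "encoder1_shared_memory"}
    worker = threading.Thread(target=video_transfer_switch, args=(msg, stop))
    worker.start()
    time.sleep(0.2)
    stop.set()
    worker.join()
    print(bytes(shared_buffers["encoder1_shared_memory"]))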
For the example of this embodiment, the transcode and mixingserver128 performs the decoding and mixing stages of a video transcoding pipeline as in the example for the embodiment ofFIG. 2, but the video data is prepared for the encoding stage differently. For the encoding stage, in response to a videoswitch control message336, thevideo transfer switch310 copies the mixer sharedmemory224 to an instance of encoder sharedmemory314 for each stream ofmixed video data232. The resultant buffered switchedvideo data318 is available for encoding in real-time by theencoders220 of thevideo encoder subsystem206.
Unlike theFIG. 2 example, where theencoders220 must first read themixed video data232 from an existing shared memory buffer, theencoders220 in theFIG. 3 example have immediate access to themixed video data232 in encoder sharedmemory314 and can encode the contents directly. This eliminates the time and resources needed forencoders220 to read the data out of mixer shared memory before encoding, while adding only the limited overhead of creating extra copies of the video data in shared memory.
FIG. 4 illustrates an exemplary mixed video data image 406 displayed within a view 402 on the user device display 404 of user device 104. Because of the ability of the FIG. 1 transcode and mixing server 128 to transcode and mix multiple video data streams into a single transcoded stream for display, what appears to be a separate overlay video image of four other video data streams in the lower right corner of mixed video data image 406 is actually part of the same mixed video data image 406 within view 402. In one example, the FIG. 2 mixers 218 can combine multiple video data streams into such views as a matrix view or picture-in-picture view, with the ability to overlay image data such as text and graphics over the video.
FIG. 5A provides one example of the application control message 144 and configuration control message 150, which include multiple fields. According to various examples of the present invention, the fields comprise an operation 506 and selected_view_data 508. According to one aspect, the selected_view_data 508 comprises identifiers for panes within a view, or URLs for display streams. Operation 506 comprises the operation to perform on selected_view_data 508, such as zoom and combine. The FIG. 1 user applications 182 provide the contents of the application control message 144 and the configuration control message 150.
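For concreteness, the contents of such a message for the zoom example of FIG. 6A might resemble the following sketch; the dictionary layout, key names, and values are illustrative assumptions rather than the actual message encoding.

    # Illustrative contents of an application control message 144 for the zoom
    # request of FIG. 6A; the dictionary layout and values are assumptions.
    application_control_message = {
        "operation": "zoom",                  # operation 506
        "selected_view_data": ["pane2"],      # selected_view_data 508
    }

    # For the combine request of FIG. 7A the same fields might carry:
    #   {"operation": "combine",
    #    "selected_view_data": ["pane1", "pane2", "background_image"]}

    # The web services component forwards the same fields to the video mixer
    # subsystem as the configuration control message 150.
    configuration_control_message = dict(application_control_message)
    print(configuration_control_message)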
FIG. 5B provides one example of the video switch control message 336, which includes multiple fields. According to another aspect, the fields include a frame rate 530, a source buffer 532, and a destination buffer 534. The FIG. 1 user applications 182 provide the contents of the video switch control message 336.
FIG. 6A illustrates the high-level data flows in the video security system 100 when the video security system 100 is actively receiving, processing, and displaying video data streams, then receives a selection 610 from the user device 104 to zoom in on one of the streams. Before the selection 610, user device display 404 of user device 104 is displaying video data from four security video cameras in a matrix view of four panes 604 within Tab1 view 602 of user device display 404. Then, the user requests to zoom in on the video data in one of the panes, with the expectation that the security video distribution system 102 will leave the current matrix view of video data undisturbed and create a new view on the user device display 404 containing the zoomed video data. Dashed lines appear in FIG. 6A to separate the boundaries of the user device 104, the security video distribution system 102, and the image processing system 110.
Video mixer subsystem 204 sends mixer_messages 614 instructing the mixers 218 to perform operations such as “combine” and “zoom” on selected video data streams. Video mixer subsystem 204 also sends encoder_messages 612 to encoders 220 comprising such functions as “create,” “setup,” “start,” and “stop.” Video mixer subsystem 204 also sends streaming_server_messages 616 to the video streaming server 134 comprising such functions as “start” and “stop,” and instructions to use standard streaming protocols such as HTTP Live Streaming (“HLS”).
In this example, before the selection 610, the image processing system 110 is receiving security camera video data 138 from four security cameras, which the mixers 218 then mix into mixed video data 232. The encoders 220 encode the mixed video data 232 into transcoded video data 154 for streaming by the video streaming server 134 into streaming video data 146. User device 104 displays the streaming video data 146 within Tab1 view 602 of user device display 404. The security camera video data 138 for security cameras 1-4 displays as a matrix view in panes 1-4 604.
After the selection 610, user device 104 sends an application control message 144 to the web services component 132, with the following contents: a FIG. 5A operation 506 with value “zoom,” and FIG. 5A selected_view_data 508 with value of pane2 604. The web services component 132 sends the configuration control message 150 to the video mixer subsystem 204, with the following contents: a FIG. 5A operation 506 with value “zoom,” and the FIG. 5A selected_view_data 508 with value of pane2 604.
FIG. 6B illustrates the high-level data flows in the video security system 100 in response to the selection in FIG. 6A. Though the video security system 100 continues to accept security camera video data 138 and display it to user device display 404, this example focuses on the newly created view, Tab2 view 620, containing the zoomed video data image 622. Dashed lines appear in FIG. 6B to separate the boundaries of the user device 104, the security video distribution system 102, and the image processing system 110.
The video mixer subsystem 204 receives the configuration control message 150 from the web services component 132. The video mixer subsystem 204, via mixer_messages 614, performs the zoom operation on the video data stream for pane2. The video mixer subsystem 204 then instructs the encoder(s) 220, via encoder_messages 612, to encode the mixed video data 232 into transcoded video data 154 for streaming by the video streaming server 134 into streaming video data 146. User device 104 displays the streaming video data 146 in Tab2 view 620 of user device display 404 as the zoomed video data image 622.
In this example, no new streams were created for the zoomed video data image 622, and the only thing that has changed in the image processing system 110 between FIG. 6A and FIG. 6B is the contents of the mixer(s) 218. Moreover, nothing has changed with the mixer 218 contents between FIG. 6A and FIG. 6B other than that the mixer zoomed in on a particular region of the video.
With respect to the earlier analogy for three-dimensional graphics systems, mixer 218 maintains different memory spaces for each of the FIG. 6A four panes 604 of Tab1 view 602, positioned next to each other so as to form a full-view virtual singular rectangular surface. Using the analogy of a mixer 218 as a virtual video camera that focuses on different regions of mixer memory, the mixer 218 projects this virtual rectangular surface onto the image plane in mixer 218 memory, resulting in the single four-pane image in FIG. 6A Tab1 view 602.
To perform the zoom operation, the FIG. 6B mixer 218 moves its virtual video camera closer and to the upper right of the FIG. 6A Tab1 view 602 until the image of pane2 604 fills the virtual video camera's view, displayed on user device 104 as the zoomed video data image 622.
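In effect, the zoom amounts to selecting the sub-rectangle of the mixer's image plane that holds pane2 and scaling it to fill the output frame, as in the following NumPy sketch. The 2x2 pane layout, the zero-based pane index, and the nearest-neighbour scaling are assumptions for illustration only.

    # Sketch of the zoom operation: select the sub-rectangle of the mixer's
    # image plane that holds pane2 (upper right of a 2x2 matrix) and scale it
    # to fill the full output frame, per the "virtual camera" analogy above.
    import numpy as np

    def zoom_to_pane(image_plane, pane_index, rows=2, cols=2):
        h, w = image_plane.shape[:2]
        pane_h, pane_w = h // rows, w // cols
        r, c = divmod(pane_index, cols)
        pane = image_plane[r * pane_h:(r + 1) * pane_h,
                           c * pane_w:(c + 1) * pane_w]
        # Nearest-neighbour upscale of the pane back to the full frame size.
        return pane[np.arange(h) * pane_h // h][:, np.arange(w) * pane_w // w]

    matrix_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    matrix_frame[0:240, 320:640] = 255           # mark pane2 (upper right)
    zoomed = zoom_to_pane(matrix_frame, pane_index=1)
    print(zoomed.shape, zoomed.mean())           # full-size frame filled by pane2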
FIG. 7A illustrates the high-level data flows in the video security system when the system is actively receiving, processing, and displaying video data from four security video cameras, and then receives from a user device a selection of two video streams from the displayed video data and a background image 706, with a combine operation. Before the selection 710, user device display 404 of user device 104 is displaying video data from four security video cameras in a matrix view of four panes 604 within Tab1 view 602 of user device display 404. Then, the user requests to combine video data from two of the panes with the background image, with the expectation that the security video distribution system 102 will leave the current matrix view of video data undisturbed and create a new view on the user device display containing the combined video data. Dashed lines appear in FIG. 7A to separate the boundaries of the user device 104, the security video distribution system 102, and the image processing system 110.
In this example, before the selection 710, the image processing system 110 is receiving security camera video data 138 from four security cameras, which the mixers 218 then mix into mixed video data 232. The encoders 220 encode the mixed video data 232 into transcoded video data 154 for streaming by the video streaming server 134 into streaming video data 146. User device 104 displays the streaming video data 146 within Tab1 view 602 of user device display 404. The security camera video data 138 for security cameras 1-4 displays as a matrix view in panes 1-4 604.
After the selection 710, user device 104 sends an application control message 144 to the web services component 132, with the following contents: a FIG. 5A operation 506 with value “combine,” and FIG. 5A selected_view_data 508 with values of pane1 604, pane2 604, and background image 706. The web services component 132 sends a configuration control message 150 to the video mixer subsystem 204, with the following contents: a FIG. 5A operation 506 with value “combine,” and a FIG. 5A selected_view_data 508 with the values of pane1 604, pane2 604, and image data 704.
FIG. 7B illustrates the high-level data flows in the video security system in response to the selection in FIG. 7A. Though the video security system continues to accept security camera video data 138 and display it to user device display 404, this example focuses on the newly created view, Tab4 view 730, containing the combined video data image 722. Dashed lines appear in FIG. 7B to separate the boundaries of the user device 104, the security video distribution system 102, and the image processing system 110.
The video mixer subsystem 204 receives the configuration control message 150 from the web services component 132. The video mixer subsystem 204, via mixer_messages 614, performs the combine operation upon the video data streams for pane1 604 and pane2 604 and the image data 136 associated with the FIG. 7A background image 706. The video mixer subsystem 204 then instructs the encoder(s) 220, via encoder_messages 612, to encode the mixed video data 232 into transcoded video data 154 for streaming by the video streaming server 134 into streaming video data 146. User device 104 displays the streaming video data 146 in Tab4 view 730 of user device display 404 as the combined video data image 722.
In this example, no new streams were created for the combined video data image 722, and the only thing that has changed in the image processing system 110 between FIG. 7A and FIG. 7B is the contents of the mixer(s) 218.
With respect to the earlier analogy for three-dimensional graphics systems, mixer 218 maintains different memory spaces for each of the FIG. 7A four panes 604 of Tab1 view 602, positioned next to each other so as to form a full-view virtual singular rectangular surface. The FIG. 7B background image 706 also has its own graphics plane or memory space, positioned behind the other image planes. The background image 706 fills its memory space within the mixer 218. Using the analogy of a mixer 218 as a virtual video camera that focuses on different regions of mixer memory, in response to the combine operation upon selection 710, the mixer 218 first hides or makes invisible the non-selected memory spaces for FIG. 7A pane3 604 and pane4 604. Then, the mixer 218 moves the memory spaces for FIG. 7A pane1 604 and pane2 604 to the center of the image plane, resulting in the FIG. 7B combined video data image 722.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.