FIELD

The present invention relates to systems and methods for controlling cameras. Some embodiments relate to systems and methods to control cameras using historical or predicted event data.
BACKGROUND

A wide variety of events are captured by video cameras for broadcast or other viewing. Many types of events require cameras to be panned, tilted, or zoomed during the event to properly capture the action. Live operators are required to physically operate these cameras. For example, track and field events, such as a 100 meter race, require one or more video cameras to be panned and zoomed across at least a 100 meter field of view. A 400 meter race requires one or more video cameras to track the runners as they circle the track. Manning live cameras for live events can be expensive. Some systems provide joysticks or other remote operator-controlled devices which allow individual cameras to be remotely controlled; however, these systems require that a live operator manipulate the joystick to control each camera. It would be desirable to provide camera controls which allow certain types of events to be captured without live operators.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of a system in accordance with some embodiments.
FIG. 2 is a flow chart of a method in accordance with some embodiments of the present invention.
FIG. 3 is a block diagram of a camera control system in accordance with some embodiments of the present invention.
FIG. 4 is a tabular representation of a portion of a historical event data table in accordance with some embodiments of the present invention.
FIG. 5 is a tabular representation of a portion of an athlete or participant data table in accordance with some embodiments of the present invention.
FIG. 6 is an illustration of a system in accordance with another embodiment.
DETAILED DESCRIPTION

Applicants have recognized that there is a need for methods, systems, apparatus, means and computer program products to control video cameras using historical or predicted event data. According to some embodiments, systems, methods, apparatus, and computer program code for controlling a video camera to capture video associated with an event are provided which use historical or predictive data. In some embodiments, event characteristic information and participant information is received. Video camera configuration and position data is received, as well as current event data (such as a signal indicating the start of a race, etc.). The current event data may be received substantially in real time from the event. Video camera control signals are generated based on some (or all) of the information received, including the event characteristic information, the participant information, and the current event data. The control signals are transmitted to a video camera to control the operation of the video camera.
Features according to some embodiments will be described by first referring to FIG. 1, which illustrates a system 100 in which a video camera 110 is positioned to capture a video feed associated with an event 102 (such as a live sporting event or the like). The video camera 110 is in communication with a camera control system 120 which provides dynamic adjustment and control data to control the operation and orientation of the video camera 110. For example, the video camera 110 may be a robotically operated video camera which is capable of receiving and responding to control data, including data to pan, tilt, zoom, focus, or otherwise control operation of the video camera 110.
Pursuant to some embodiments, the camera control system 120 receives historical data 130 including, for example, event characteristic data and athlete or participant data. For example, the event characteristic data may include data associated with the specific event to be captured by the video camera 110. As an illustrative example, provided to describe, but not limit, features of some embodiments, the event 102 is a track and field event—the Prefontaine Classic held in Eugene, Oreg., and the race to be broadcast is the 100 meter men's finals. Athletes participating in the finals include Usain Bolt, as well as a number of other world-class athletes. Pursuant to some embodiments, historical event characteristic information associated with prior 100 meter men's races may be used to identify a relevant set of camera control parameters to use to control the video camera 110. Further, athlete or participant information may also be used to further refine the camera control parameters.
Continuing the specific illustrative example, event characteristic information about the event includes data classifying the type of the event, including the fact that it is an outdoor track event with a distance of 100 meters. Further, the event is typically run in under 10 seconds. Further, athlete data associated with Usain Bolt indicates that he is capable of running the event in under 10 seconds, and could possibly run the race in under 9.58 seconds. Historical event characteristic data about the Prefontaine Classic may include information about the relative direction of the race (it is typically run from left to right when facing the track from the stands), the fact that the race is run outdoors, and other location-specific or event-specific information. Using this historical data, embodiments control the operation of the video camera 110 to capture all or portions of the event, without need for a human operator to man the camera during the event. Instead, the video camera 110 can be controlled under direction of the camera control system 120 during the event. In some embodiments, operation of the video camera 110 from the start of the race until the completion of the race is entirely automated and under the control of the camera control system 120.
As shown in FIG. 1, data associated with the actual event 102 may also be received by the camera control system 120. For example, the data may include a start of the race triggered by a starter's timer, the completion of a race, or the like. As a specific illustrative example, a video camera 110 located to capture the 100 meter men's final at the Prefontaine Classic may be triggered or caused to start panning and zooming along the expected path of the runners, at the expected pace, as soon as information identifying the start of the race is received. The camera control system 120 may compute the expected acceleration and velocity of the lead runners based on the historical data 130, and may start the panning and zooming (and other controls) based on the computed expected acceleration and velocity once the race starts.
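As a rough sketch of how such an expected trajectory might be precomputed (an illustration only, not part of the disclosure), the following assumes a simple two-phase model: constant acceleration for roughly the first two seconds, then a constant top speed solved from the athlete's expected finish time. All function and parameter names are hypothetical.

```python
import math

def pan_schedule(track_length_m, expected_time_s, camera_distance_m,
                 start_offset_m=0.0, steps=10):
    """Compute (time, pan angle) pairs for a straight sprint, assuming the
    runner accelerates for ~2 s and then holds a constant top speed."""
    accel_time = 2.0
    # Solve for top speed v so total distance matches the track length:
    # 0.5 * v * accel_time + v * (expected_time - accel_time) = track_length
    v = track_length_m / (expected_time_s - accel_time / 2.0)
    schedule = []
    for i in range(steps + 1):
        t = expected_time_s * i / steps
        if t < accel_time:
            x = 0.5 * (v / accel_time) * t * t       # acceleration phase
        else:
            x = 0.5 * v * accel_time + v * (t - accel_time)  # cruise phase
        # Pan angle relative to a camera set back from the track
        angle = math.degrees(math.atan2(x - start_offset_m, camera_distance_m))
        schedule.append((round(t, 2), round(angle, 1)))
    return schedule
```

For a 100 meter race with an expected time of 9.8 seconds and a camera 50 meters from the track, this yields a monotonically increasing pan sweep from 0° to roughly 63°.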
In some embodiments, the camera control system 120 may include video or audio recognition software to identify an event or action that is used to trigger the start of camera control. For example, in the case of video recognition, the camera control system 120 may include (or receive data from) a motion detection unit which compares video frame data to identify an action or movement that signals or can be used to start the camera control of the present invention. The motion detection unit may compare the current camera video frame data to the previous video frame to identify a movement, action, or other change in the video frame data which should be used to start the camera control of the present invention. As an illustrative example where the system of the present invention is used to capture video data associated with a Nordic Combined ski event, the start of camera control may be triggered when a motion detection unit identifies that a skier has entered the zone to be covered by a video camera 110. Other event detection devices or approaches may be used to signal or trigger the start of a camera control action pursuant to the present invention. Pursuant to some embodiments, the start of the camera control action is performed based on an automated detection of the start of an event or portion of an event to be captured (e.g., via a motion detection unit, an audio detection unit, or the like).
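A frame-differencing approach of the kind described above might be sketched as follows. This is a minimal illustration, assuming grayscale frames represented as nested lists of 0-255 intensity values; the threshold values and names are assumptions, not details of the disclosed motion detection unit.

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=30,
                    trigger_fraction=0.01):
    """Return True when enough pixels changed between two grayscale frames
    (lists of rows of 0-255 values) to signal motion in the covered zone."""
    changed = total = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            total += 1
            if abs(p - c) > pixel_threshold:
                changed += 1
    # Trigger when at least trigger_fraction of pixels changed noticeably
    return total > 0 and changed / total >= trigger_fraction
```

A production system would typically also blur frames and suppress noise before differencing; this sketch keeps only the core comparison.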
In this way, the video camera 110 will generate a video feed (e.g., a video feed to be broadcast to viewers) that matches the expected or calculated path of selected athletes. The video feed is produced at significantly lower cost than would be possible with a manned video camera. As used herein, the phrases “video feed” and “received image” may refer to any signal conveying information about a moving or still image, such as a High Definition-Serial Data Interface (“HD-SDI”) signal transmitted in accordance with the Society of Motion Picture and Television Engineers 292M standard. Although HD signals may be described in some examples presented herein, note that embodiments may be associated with any other type of video feed, including a standard broadcast feed and/or a 3D image feed. Moreover, video feeds and/or received images might comprise, for example, an HD-SDI signal exchanged through a fiber cable and/or a satellite transmission. The video feed data output from the video camera 110 may be transmitted to a production facility (not shown) for cutting and editing prior to broadcast to viewers or other use.
Note that the video camera 110 may be any device capable of generating a video feed, such as a Sony® broadcast camera with a pan, tilt and zoom head that is capable of being robotically or remotely controlled, and that is capable of receiving and responding, substantially in real-time, to control signals causing dynamic adjustments to be made to the video camera 110. As used herein, the phrase “dynamic adjustments” might refer to, for example, a panning motion, a tilting motion, a focal change, and/or a zooming adjustment being made to a video camera (e.g., zooming the camera in or out). In some embodiments, the robotically operated video camera 110 may be adapted to provide information to the camera control system 120. This information may be provided substantially in real-time, and may include information about the current state or orientation of the video camera 110 based on the dynamic adjustments made to the video camera, such as a panning motion, a tilting motion, a focal change, and/or a zooming adjustment.
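Purely for illustration, the dynamic adjustments and the camera-state feedback described above could be modeled as simple message types. The field names and relative/absolute conventions here are assumptions, not any actual camera protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DynamicAdjustment:
    """One control message for a robotic camera head (illustrative fields)."""
    pan_deg: float = 0.0             # relative pan move
    tilt_deg: float = 0.0            # relative tilt move
    zoom_level: float = 1.0          # absolute zoom level
    focus_m: Optional[float] = None  # None = leave autofocus engaged

@dataclass
class CameraState:
    """Orientation state the camera reports back to the control system."""
    pan_deg: float
    tilt_deg: float
    zoom_level: float

def apply_adjustment(state: CameraState, adj: DynamicAdjustment) -> CameraState:
    # Pan and tilt are applied as relative moves; zoom is absolute.
    return CameraState(state.pan_deg + adj.pan_deg,
                       state.tilt_deg + adj.tilt_deg,
                       adj.zoom_level)
```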
The camera control system 120 could be implemented using a Personal Computer (PC) running a Windows® Operating System (“OS”) or an Apple® computing platform, or other computing device. In some embodiments, the camera control system 120 is remotely located from the video camera 110, and further, in some embodiments, the camera control system 120 may be remotely located from the event 102 (e.g., such as in a production control facility or the like). Communication between the camera control system 120 and the video camera 110 may be via a wired or wireless communication network. The result is systems and methods which allow one or more video cameras to be controlled in an automated fashion, without need for live operators for each camera.
FIG. 2 illustrates a method that might be performed, for example, by some or all of the elements described herein. The flow charts described herein do not imply a fixed order to the steps, and embodiments of the present invention may be practiced in any order that is practicable. Note that any of the methods described herein may be performed by hardware, software, or any combination of these approaches. For example, a computer-readable storage medium may store thereon instructions that when executed by a machine result in performance according to any of the embodiments described herein. In some embodiments, the process of FIG. 2 is executed under control of a camera control system such as the system 120 of FIG. 1.
Processing begins at 202, where information may be received about an event and event participants. For example, the information may include information received from an operator of the camera control system 120 during event planning or setup, while the camera(s) are being positioned. Such data may include information identifying the specific event for which video is to be captured (such as the “Prefontaine Classic”), information identifying event characteristic data (e.g., the event is a track and field event, of the type “100 meter men's”) as well as information identifying the participants (such as the name of one or more athletes who will be participating in the specific event, as well as their expected pace and completion time). Alternatively, or in addition, some or all of the event and athlete data may be retrieved from a database in an automated or semi-automated fashion. In some embodiments, the data received at 202 may be used in camera setup as well as for later control of the video camera 110 during the event. Some or all of the data received at 202 may be received from one or more external or internal data sources (such as historical data 130 of FIG. 1). Further, some or all of the data may be received from operator input (e.g., via an operator interface or input device as described below in conjunction with FIG. 3). For example, in some embodiments, each video camera 110 may be set up or configured so that the video camera's range of motion is oriented to include a starting line or home position.
At 204, processing continues where camera configuration and position information is received. For example, when one or more video cameras 110 are set up for use in capturing video data at an event 102, they may be connected (via a wired or wireless network) to the camera control system 120. Information about the position and orientation of each camera may be captured when the cameras are initially configured so that the camera control system 120 can track the orientation and configuration of each camera during the event.
Processing at 206 includes receiving current event data. For example, a sensor or other device located at the event may be triggered to transmit current event data to the camera control system 120 when the event starts (e.g., it may be triggered by a starter's gun at a track and field event or swim meet). Current event data may also include data identifying a “restart” or “reset” of the processing. In some embodiments, the current event data may be provided by user input and transmitted to the camera control system 120. The current event data may be used by the camera control system 120 to cause the operation and adjusting of video cameras 110 pursuant to the present invention.
Processing continues at 208, where the one or more video cameras 110 are dynamically adjusted based on historical or predictive event data received or generated at 202 and based on the current event data received at 206. For example, in a track and field event such as the men's 100 meter race described above, processing at 208 may include generating a series of control signals to cause one or more video cameras to pan, zoom, and focus on the expected path of the runners, at the expected pace. The camera control signals generated at 208 may be generated in real time during the event, or they may be generated prior to the start of an event based on the historical data. In the example, since Usain Bolt is racing, and because of the data associated with his past performances, an expected trajectory of the race may be pre-computed by the camera control system 120, and camera control signals may be pre-established. The trigger causing the execution or transmission of the pre-established control signals may be receipt of the current event data (e.g., the exact time of the start of the race). In situations where a “restart” or “reset” of the pre-established camera control signals is required (e.g., such as when an athlete has a false start, etc.), the pre-established control signals may be reset and restarted from the beginning.
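A pre-established, triggerable, and resettable control-signal sequence of the kind described at 208 might be sketched as follows. The class, method names, and command strings are hypothetical, and real control signals would be structured camera commands rather than strings.

```python
class PrecomputedCameraScript:
    """Replay a pre-established list of (time_offset_s, command) control
    signals once a start signal arrives; a false start resets the script."""

    def __init__(self, commands):
        self.commands = sorted(commands)  # [(offset_s, command), ...]
        self.start_time = None
        self.next_index = 0

    def on_start_signal(self, now):
        """Current event data indicating the exact start time of the race."""
        self.start_time = now
        self.next_index = 0

    def on_false_start(self):
        """Reset the script and await a fresh start signal."""
        self.start_time = None
        self.next_index = 0

    def due_commands(self, now):
        """Return every pre-established command whose offset has elapsed."""
        if self.start_time is None:
            return []
        due = []
        while (self.next_index < len(self.commands)
               and self.commands[self.next_index][0] <= now - self.start_time):
            due.append(self.commands[self.next_index][1])
            self.next_index += 1
        return due
```

In use, the control loop would poll `due_commands` and transmit each returned command to the robotic camera head.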
FIG. 3 is a block diagram of a camera control system 300 that might be associated with, for example, the system 100 of FIG. 1 in accordance with some embodiments of the present invention. The camera control system 300 comprises a processor 310, such as one or more INTEL® Pentium® processors, coupled to communication devices 320 configured to communicate with remote devices (not shown in FIG. 3). The communication devices 320 may be used, for example, to receive current event data from sensors or devices at the event 102 (such as the start of a race, etc.) as well as data from one or more robotic video cameras 110 at the event, and to transmit control data to dynamically adjust the orientation and operation of one or more video cameras 110 at the event.
The processor 310 is also in communication with an input device 340. The input device 340 may comprise, for example, a keyboard, a mouse, or a computer media reader. Such an input device 340 may be used, for example, to enter information about an event for which video feed data is to be captured and/or to set up one or more video cameras 110 for use in capturing video from an event. The processor 310 is also in communication with an output device 350. The output device 350 may comprise, for example, a display screen or printer. Such an output device 350 may be used, for example, to provide information about an event, about video camera 110 setup, or the like, to an operator.
The processor 310 is also in communication with a storage device 330. The storage device 330 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., hard disk drives), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices.
The storage device 330 stores a camera control application 335 for controlling the processor 310. The processor 310 performs instructions of the application 335, and thereby operates in accordance with any embodiments of the present invention described herein. For example, the processor 310 may receive information about one or more video cameras 110 (including their orientation and operational status) as well as current event data associated with an event 102 (such as, for example, the start of a race, or the like). The camera control application 335 causes the operation of the camera control system 300 as described herein (e.g., such as described in conjunction with FIG. 2, to remotely control the usage and operation of one or more video cameras 110 based on historical event and participant data).
As used herein, information may be “received” by or “transmitted” to, for example: (i) the camera control system 300 from other devices; or (ii) a software application or module within the camera control system 300 from another software application, module, or any other source.
As shown in FIG. 3, the storage device 330 also stores (or, in some embodiments, has access to) historical data for use in controlling the operation of one or more video cameras 110 pursuant to the present invention. As depicted in FIG. 3, the historical data may be stored in datasets identified as sport characteristic data 400 and athlete data 500 (together, generally referred to herein as “historical” data). One example of such a database 400 that may be used in connection with the camera control system 300 will now be described in detail with respect to FIG. 4. The illustration and accompanying descriptions of the database presented herein are exemplary, and any number of other database arrangements could be employed besides those suggested by the figures.
FIG. 4 is a tabular representation of a portion of a sport characteristic data table 400 in accordance with some embodiments of the present invention. The table 400 includes entries associated with different sporting events. The table 400 also defines fields for each of the entries. For example, the sport characteristic data table 400 may include fields specifying, for each event, an event identifier (used to uniquely identify a specific event type), an event description (e.g., such as information classifying the type of event), an event duration (e.g., specifying the historical event duration), etc. The fields shown in FIG. 4 are for illustrative purposes and may include any of a number of different types of data to allow an event to be characterized and to predict how the event may unfold. For example, additional data may be included for specific events that are held on a regular basis (such as the Prefontaine Classic), that are at a specific location, or that have a known path or region for a camera to traverse. This data is used, for example, by the camera control system to match a current event with camera control parameters that allow one or more video cameras to be controlled in an automated fashion.
FIG. 5 is a tabular representation of a portion of an athlete data table 500 in accordance with some embodiments of the present invention. The table 500 includes entries associated with different athletes or event participants. The table 500 also defines fields for each of the entries. For example, the table 500 may store information about individual participants in different events, such as a unique athlete or participant identifier, the athlete's name, the type of event(s) the athlete participates in, the athlete's most recent time in each event, the athlete's average time in the event, etc. This information may be used by the camera control system, along with other historical and event-specific data, to generate camera control signals allowing remote, automated control of one or more video cameras to capture a video feed of events the athlete is participating in. In some embodiments, information about a group of athletes in an event is used to generate the camera control signals. For example, in the Prefontaine Classic example introduced above, there may be 10 athletes participating in the 100 meter men's finals. Analysis of the athlete data may indicate that three of those athletes may be expected to run the race in 10 seconds or less, while the average time will be 10.2 seconds. This data may be used to control one or more video cameras to ensure that at least one camera follows the expected trajectory of the top three athletes, while another camera may be controlled to follow the expected trajectory of the rest of the pack of athletes. Those skilled in the art, upon reading this disclosure, will appreciate that other types and uses of athlete or participant data may be used in conjunction with the present invention to allow automated and accurate remote control of one or more video cameras.
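The grouping of athletes into a lead group and a chase pack, as in the example above, might be sketched as follows; the 10-second cutoff and the record field names are illustrative assumptions about the athlete data table, not its actual schema.

```python
def assign_camera_groups(athletes, fast_cutoff_s=10.0):
    """Split a field of athletes into a lead group and a chase pack based on
    average historical times, so one camera can track each group."""
    lead = [a["name"] for a in athletes if a["avg_time_s"] <= fast_cutoff_s]
    pack = [a["name"] for a in athletes if a["avg_time_s"] > fast_cutoff_s]
    return {"camera_1": lead, "camera_2": pack}
```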
Pursuant to some embodiments, the camera control system 300 may store, or have access to, additional data not shown in FIG. 3 or FIGS. 4-5. For example, the camera control system 300 may store camera data associated with the one or more video cameras 110 positioned at an event. The storage and use of such data may be particularly beneficial in situations such as the embodiment of FIG. 6 (discussed further below) where multiple robotic video cameras 110 are deployed at an event 102. For example, the camera control system 300 may store data identifying each video camera (such as by a camera identifier), a location of each video camera (such as a distance between a camera and the field or event), an orientation of each video camera, as well as current operational data for each video camera (such as information identifying tilt data, zoom data, focus data, field of view data, etc.). The storage of or access to such camera data may allow embodiments to perform finer control of each camera, including, for example, cutting between different video cameras 110 in an automated fashion based on a location (or expected location) of athletes, or the like.
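An automated cut based on an athlete's expected location might be sketched as follows; the camera records, coverage zones, and identifiers here are hypothetical examples of the camera data described above.

```python
def select_camera(cameras, athlete_position_m):
    """Pick the camera whose coverage zone contains the athlete's expected
    position; fall back to the nearest zone when none contains it."""
    for cam in cameras:
        start, end = cam["zone_m"]
        if start <= athlete_position_m <= end:
            return cam["camera_id"]
    # Fall back to whichever zone's midpoint is closest to the athlete
    return min(cameras,
               key=lambda c: abs(sum(c["zone_m"]) / 2 - athlete_position_m)
               )["camera_id"]
```

Polling this function against the precomputed trajectory would produce the automated cut points between cameras.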
FIG. 6 is a block diagram of a system 600 in accordance with some embodiments of the present invention. The system 600 includes multiple video cameras 610, 612 at an event location 602. Although two video cameras are shown, those skilled in the art will appreciate, upon reading this disclosure, that more than two cameras may be deployed and operated using features of the present invention. Each of the video cameras 610, 612 might comprise, for example, an instrumented hard camera that can be dynamically adjusted (e.g., via pan, tilt, zoom, and other motions). Each video camera 610, 612 is in communication with a camera control system 620 to provide and receive data to perform dynamic adjustments of each video camera 610, 612 during the capture of an event. The camera control system 620 may be as described above (e.g., in conjunction with FIGS. 1 and 3).
Multiple camera embodiments may be used in conjunction with a number of different types of events that benefit from different camera angles and different shots. For example, the system may be used to capture a 400 meter track and field race, where a different camera angle is desired when the runners round the track. Pursuant to some embodiments, control signals may be generated to automatically switch from one camera to another at an expected time in the event based on the historical and other data associated with the event.
The following illustrates various additional embodiments of the invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
While embodiments have been described with respect to sporting events and athletes, those skilled in the art, upon reading this disclosure, will appreciate that features of the present invention may be used with desirable results in conjunction with the control of one or more video cameras at other types of events. For example, embodiments may be used to capture video data: at amusement parks (e.g., to capture the action and facial expressions of people riding a roller coaster), at airports (e.g., to capture and track airplane takeoffs and landings, using historical or predictive data about individual plane types, etc.), or the like. Further, while embodiments have been described with respect to robotically controlled video cameras, those skilled in the art, upon reading this disclosure, will appreciate that embodiments may be used with other types of cameras (e.g., to control the taking of still shots of wildlife, or the like).
The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.