CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation application of U.S. patent application Ser. No. 12/794,773, filed Jun. 6, 2010, which claims the benefit of Provisional Application Ser. No. 61/321,871, filed Apr. 7, 2010, both of which are incorporated herein by reference in their entireties.
This application is related to the following applications: U.S. patent application Ser. No. 12/794,766, filed Jun. 6, 2010; U.S. patent application Ser. No. 12/794,768, filed Jun. 6, 2010; U.S. patent application Ser. No. 12/794,771, filed Jun. 6, 2010; U.S. patent application Ser. No. 12/794,772, filed Jun. 6, 2010; U.S. patent application Ser. No. 12/794,774, filed Jun. 6, 2010; and U.S. patent application Ser. No. 12/794,775, filed Jun. 6, 2010.
BACKGROUND OF THE INVENTION
Many of today's portable devices, such as smartphones, provide video capture functionality. A user of the portable device can capture both still images and video through a camera on the phone. However, to transmit captured video to another party, the user must generally either send the video directly to the other party or upload the video to another location (e.g., an Internet video hosting site) after the video capture has been completed. Unfortunately, this does not allow the other party to view the live video stream as it is captured by the portable device.
In addition, standard portable devices are only equipped with one camera, and processing information from this one camera is difficult enough. An ideal device would have multiple cameras and could send out live video that is a composition of video from at least two cameras. This is an especially difficult problem in light of the limited resources available for portable devices, both in terms of the device processing multiple captured video streams and a network to which the device is connected handling the transmission of the live video streams.
SUMMARY OF THE INVENTION
Some embodiments of the invention provide a mobile device with two cameras that can take pictures and videos. The mobile device of some embodiments has a display screen for displaying the captured picture images and video images. It also includes a storage for storing the captured images for later transmission to another device. The device further has a network interface that allows the device to transmit the captured images to one or more devices during a real-time communication session between the users of the devices. The device also includes an encoder that it can use to encode the captured images for local storage or for transmission to another device. The mobile device further includes a decoder that allows the device to decode images captured by another device during a real-time communication session or to decode images stored locally.
One example of a real-time communication session that involves the transmission of the captured video images is a video conference. In some embodiments, the mobile device can only transmit one camera's captured video images at any given time during a video conference. In other embodiments, however, the mobile device can transmit captured video images from both of its cameras simultaneously during a video conference or other real-time communication session.
During a video conference with another device, the mobile device of some embodiments can transmit other types of content along with the video captured by one or both of its cameras. One example of such other content includes low or high resolution picture images that are captured by one of the device's cameras, while the device's other camera is capturing a video that is used in the video conference. Other examples of such other content include (1) files and other content stored on the device, (2) the screen display of the device (i.e., the content that is displayed on the device's screen), (3) content received from another device during a video conference or other real-time communication session, etc.
The mobile devices of some embodiments employ novel in-conference adjustment techniques for making adjustments during a video conference. For instance, while transmitting only one camera's captured video during a video conference, the mobile device of some embodiments can dynamically switch to transmitting a video captured by its other camera. In such situations, the mobile device of some embodiments notifies any other device participating in the video conference of this switch so that this other device can provide a smooth transition on its end between the videos captured by the two cameras.
In some embodiments, the request to switch cameras not only can originate on the “local” device that switches between its cameras during the video conference, but also can originate from the other “remote” device that is receiving the video captured by the local device. Moreover, allowing one device to direct another device to switch cameras is just one example of a remote control capability of the devices of some embodiments. Examples of other operations that can be directed to a device remotely in some embodiments include exposure adjustment operations (e.g., auto-exposure), focus adjustment operations (e.g., auto-focus), etc. Another example of a novel in-conference adjustment that can be specified locally or remotely is the identification of a region of interest (ROI) in a captured video, and the use of this ROI identification to modify the behavior of the capturing camera, to modify the image processing operation of the device with the capturing camera, or to modify the encoding operation of the device with the capturing camera.
Yet another example of a novel in-conference adjustment of some embodiments involves real-time modifications of composite video displays that are generated by the devices. Specifically, in some embodiments, the mobile devices generate composite displays that simultaneously display multiple videos captured by multiple cameras of one or more devices. In some cases, the composite displays place the videos in adjacent display areas (e.g., in adjacent windows). In other cases, the composite display is a picture-in-picture (PIP) display that includes at least two display areas that show two different videos where one of the display areas is a background main display area and the other is a foreground inset display area that overlaps the background main display area.
The real-time modifications of the composite video displays in some embodiments involve moving one or more of the display areas within a composite display in response to a user's selection and movement of the display areas. Some embodiments also rotate the composite display during a video conference, when the screen of the device that provides this composite display rotates. Also, the mobile device of some embodiments allows the user of the device to swap the videos in a PIP display (i.e., to make the video in the foreground inset display appear in the background main display while making the video in the background main display appear in the foreground inset display).
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
FIG. 1 illustrates a composite display of some embodiments.
FIG. 2 illustrates another composite display of some embodiments.
FIG. 3 conceptually illustrates a software architecture for a video processing and encoding module of a dual camera mobile device of some embodiments.
FIG. 4 conceptually illustrates a captured image processing unit of some embodiments.
FIG. 5 conceptually illustrates examples of different frame rates based on different vertical blanking intervals (VBIs).
FIG. 6 conceptually illustrates a software architecture for a video conferencing and processing module of a dual camera mobile device of some embodiments.
FIG. 7 conceptually illustrates an example video conference request messaging sequence of some embodiments.
FIG. 8 illustrates a user interface of some embodiments for a video conference setup operation.
FIG. 9 illustrates a user interface of some embodiments for accepting an invitation to a video conference.
FIG. 10 illustrates another user interface of some embodiments for accepting an invitation to a video conference.
FIG. 11 illustrates another user interface of some embodiments for a video conference setup operation.
FIG. 12 conceptually illustrates another software architecture for a video conferencing and processing module of a dual camera mobile device of some embodiments.
FIG. 13 conceptually illustrates another software architecture for a dual camera mobile device of some embodiments.
FIG. 14 conceptually illustrates a process performed by a video conference manager of some embodiments such as that illustrated in FIG. 12.
FIG. 15 conceptually illustrates a software architecture for a temporal noise reduction module of some embodiments.
FIG. 16 conceptually illustrates a process of some embodiments for reducing temporal noise of images of a video.
FIG. 17 conceptually illustrates a process performed by an image processing manager of some embodiments such as that illustrated in FIG. 6.
FIG. 18 illustrates a user interface of some embodiments for an exposure adjustment operation.
FIG. 19 illustrates a user interface of some embodiments for a focus adjustment operation.
FIG. 20 conceptually illustrates a perspective correction process performed by an image processing manager of some embodiments such as that illustrated in FIG. 12.
FIG. 21 conceptually illustrates example perspective correction operations of some embodiments.
FIG. 22 conceptually illustrates a software architecture for an encoder driver of some embodiments such as that illustrated in FIG. 12.
FIG. 23 conceptually illustrates an image resizing process performed by an encoder driver of some embodiments such as that illustrated in FIG. 22.
FIG. 24 conceptually illustrates a software architecture for a decoder driver of some embodiments such as that illustrated in FIG. 12.
FIG. 25 conceptually illustrates an image extraction process performed by a decoder driver of some embodiments such as that illustrated in FIG. 24.
FIG. 26 illustrates an encoder driver of some embodiments that includes two rate controllers.
FIG. 27 conceptually illustrates a software architecture for a networking manager of some embodiments such as that illustrated in FIG. 12.
FIG. 28 illustrates a user interface of some embodiments for a snap-to-corner operation.
FIG. 29 illustrates another user interface of some embodiments for a snap-to-corner operation.
FIG. 30 illustrates a user interface of some embodiments for a PIP display rotation operation.
FIG. 31 illustrates another user interface of some embodiments for a PIP display rotation operation.
FIG. 32 illustrates another user interface of some embodiments for a PIP display rotation operation.
FIG. 33 illustrates another user interface of some embodiments for a PIP display rotation operation.
FIG. 34 illustrates a user interface of some embodiments for resizing a foreground inset display area in a PIP display.
FIG. 35 illustrates another user interface of some embodiments for resizing an inset display area in a PIP display.
FIG. 36 illustrates another user interface of some embodiments for resizing an inset display area in a PIP display.
FIG. 37 illustrates another user interface of some embodiments for resizing an inset display area in a PIP display.
FIG. 38 illustrates a user interface of some embodiments for identifying a region of interest in a display.
FIG. 39 illustrates another user interface of some embodiments for identifying a region of interest in a display.
FIG. 40 illustrates another user interface of some embodiments for identifying a region of interest in a display.
FIG. 41 illustrates a process of some embodiments for performing a local switch camera operation on a dual camera mobile device.
FIG. 42 illustrates a user interface of some embodiments for a switch camera operation.
FIG. 43 illustrates another user interface of some embodiments for a switch camera operation.
FIG. 44 illustrates another user interface of some embodiments for a switch camera operation.
FIG. 45 illustrates another user interface of some embodiments for a switch camera operation.
FIG. 46 illustrates a process of some embodiments for performing a remote switch camera operation on a dual camera mobile device.
FIG. 47 illustrates a user interface of some embodiments for a remote control switch camera operation.
FIG. 48 illustrates another user interface of some embodiments for a remote control switch camera operation.
FIG. 49 illustrates another user interface of some embodiments for a remote control switch camera operation.
FIG. 50 illustrates another user interface of some embodiments for a remote control switch camera operation.
FIG. 51 conceptually illustrates a process of some embodiments for performing an exposure adjustment operation.
FIG. 52 illustrates a user interface of some embodiments for an exposure adjustment operation.
FIG. 53 illustrates another user interface of some embodiments for an exposure adjustment operation.
FIG. 54 illustrates another user interface of some embodiments for an exposure adjustment operation.
FIG. 55 conceptually illustrates an exposure adjustment process performed by an image processing manager of some embodiments such as that illustrated in FIG. 12.
FIG. 56 conceptually illustrates exposure adjustment operations of some embodiments.
FIG. 57 conceptually illustrates a process of some embodiments for performing a focus adjustment operation.
FIG. 58 illustrates a user interface of some embodiments for a focus adjustment operation.
FIG. 59 illustrates another user interface of some embodiments for a focus adjustment operation.
FIG. 60 illustrates another user interface of some embodiments for a focus adjustment operation.
FIG. 61 conceptually illustrates an application programming interface (API) architecture of some embodiments.
FIG. 62 illustrates an architecture for a dual camera mobile computing device of some embodiments.
FIG. 63 conceptually illustrates a touch input/output (I/O) device of some embodiments.
FIG. 64 conceptually illustrates an example communication system of some embodiments.
FIG. 65 conceptually illustrates another example communication system of some embodiments.
DETAILED DESCRIPTION
In the following description, numerous details are set forth for purpose of explanation. However, one of ordinary skill in the art will realize that the invention may be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the invention with unnecessary detail.
Some embodiments of the invention provide a mobile device with two cameras that can take pictures and videos. Examples of mobile devices include mobile phones, smartphones, personal digital assistants (PDAs), laptops, tablet personal computers, or any other type of mobile computing device. As used in this document, pictures refer to still picture images that are taken by the camera one at a time in a single-picture mode, or several at a time in a fast-action mode. Video, on the other hand, refers to a sequence of video images that are captured by a camera at a particular rate, which is often referred to as a frame rate. Typical frame rates for capturing video are 25 frames per second (fps), 30 fps, and 60 fps. The cameras of the mobile device of some embodiments can capture video images (i.e., video frames) at these and other frame rates.
The mobile device of some embodiments (1) can display the captured picture images and video images, (2) can store the captured images for later transmission to another device, (3) can transmit the captured images to one or more devices during a real-time communication session between the users of the devices, and (4) can encode the captured images for local storage or for transmission to another device.
One example of a real-time communication session that involves the transmission of the captured video images is a video conference. In some embodiments, the mobile device can only transmit one camera's captured video images at any given time during a video conference. In other embodiments, however, the mobile device can transmit captured video images from both of its cameras simultaneously during a video conference or other real-time communication session.
The mobile devices of some embodiments generate composite displays that include simultaneous display of multiple videos captured by multiple cameras of one or more devices. In some cases, the composite displays place the videos in adjacent display areas (e.g., in adjacent windows). FIG. 1 illustrates one such example of a composite display 100 that includes two adjacent display areas 105 and 110 that simultaneously display two videos captured by two cameras of one device or captured by two cameras of two different devices that are in a video conference.
In other cases, the composite display is a PIP display that includes at least two display areas that show two different videos, where one of the display areas is a background main display area and the other is a foreground inset display area that overlaps the background main display area. FIG. 2 illustrates one such example of a composite PIP display 200. This composite PIP display 200 includes a background main display area 205 and a foreground inset display area 210 that overlaps the background main display area. The two display areas 205 and 210 simultaneously display two videos captured by two cameras of one device, or captured by two cameras of two different devices that are in a video conference. While the example composite PIP displays illustrated and discussed in this document are similar to the composite PIP display 200, which shows the entire foreground inset display area 210 within the background main display area 205, other composite PIP displays that have the foreground inset display area 210 overlapping, but not entirely inside, the background main display area 205 are possible.
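To make the composition concrete, the following is a minimal sketch of how a PIP composite frame could be assembled from two video frames, assuming each frame is a NumPy array of shape (height, width, 3). The inset scale factor, corner placement, and margin are illustrative assumptions, not values taken from this description.

```python
# Minimal sketch of assembling a PIP composite from two video frames.
import numpy as np


def compose_pip(background, foreground, scale=0.3, margin=16):
    """Overlay a shrunken copy of `foreground` onto `background`."""
    bg = background.copy()
    bg_h, bg_w, _ = bg.shape

    # Nearest-neighbor downscale of the foreground to the inset size.
    inset_h, inset_w = int(bg_h * scale), int(bg_w * scale)
    rows = np.arange(inset_h) * foreground.shape[0] // inset_h
    cols = np.arange(inset_w) * foreground.shape[1] // inset_w
    inset = foreground[rows][:, cols]

    # Place the inset in the lower-right corner of the background.
    y0, x0 = bg_h - inset_h - margin, bg_w - inset_w - margin
    bg[y0:y0 + inset_h, x0:x0 + inset_w] = inset
    return bg


if __name__ == "__main__":
    back = np.zeros((480, 640, 3), dtype=np.uint8)       # main camera frame
    front = np.full((480, 640, 3), 255, dtype=np.uint8)  # inset camera frame
    pip = compose_pip(back, front)
    print(pip.shape)  # (480, 640, 3)
```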
In addition to transmitting video content during a video conference with another device, the mobile device of some embodiments can transmit other types of content along with the conference's video content. One example of such other content includes low or high resolution picture images that are captured by one of the device's cameras, while the device's other camera is capturing a video that is used in the video conference. Other examples of such other content include (1) files and other content stored on the device, (2) the screen display of the device (i.e., the content that is displayed on the device's screen), (3) content received from another device during a video conference or other real-time communication session, etc.
The mobile devices of some embodiments employ novel in-conference adjustment techniques for making adjustments during a video conference. For instance, while transmitting only one camera's captured video during a video conference, the mobile device of some embodiments can dynamically switch to transmitting the video captured by its other camera. In such situations, the mobile device of some embodiments notifies any other device participating in the video conference of this switch so that this other device can provide a smooth transition on its end between the videos captured by the two cameras.
In some embodiments, the request to switch cameras not only can originate on the “local” device that switches between its cameras during the video conference, but also can originate from the other “remote” device that is receiving the video captured by the local device. Moreover, allowing one device to direct another device to switch cameras is just one example of a remote control capability of the devices of some embodiments. Examples of other operations that can be directed to a device remotely in some embodiments include exposure adjustment operations (e.g., auto-exposure), focus adjustment operations (e.g., auto-focus), etc. Another example of a novel in-conference adjustment that can be specified locally or remotely is the identification of a region of interest (ROI) in a captured video, and the use of this ROI identification to modify the behavior of the capturing camera, to modify the image processing operation of the device with the capturing camera, or to modify the encoding operation of the device with the capturing camera.
Yet another example of a novel in-conference adjustment of some embodiments involves real-time modifications of composite video displays that are generated by the devices. Specifically, in some embodiments, the real-time modifications of the composite video displays involve moving one or more of the display areas within a composite display in response to a user's selection and movement of the display areas. Some embodiments also rotate the composite display during a video conference, when the screen of the device that provides this composite display rotates. Also, the mobile device of some embodiments allows the user of the device to swap the videos in a PIP display (i.e., to make the video in the foreground inset display appear in the background main display, while making the video in the background main display appear in the foreground inset display).
Several more detailed embodiments are described below. Section I provides a description of the video processing architecture of some embodiments. Section II then describes the captured image processing unit of some embodiments. In some embodiments, this unit is the component of the device that is responsible for processing raw images captured by the cameras of the device.
Next, Section III describes the video conferencing architecture of some embodiments. This section also describes the video conference module of some embodiments, as well as several manners for setting up a single camera video conference. Section IV then describes in-conference adjustment and control operations of some embodiments. Lastly, Section V describes the hardware architecture of the dual camera device of some embodiments.
I. Video Capture and Processing
FIG. 3 conceptually illustrates a video processing and encoding module 300 of a dual camera mobile device of some embodiments. In some embodiments, the module 300 processes images and encodes videos that are captured by the cameras of the dual camera mobile device. As shown in FIG. 3, this module 300 includes a captured image processing unit (CIPU) driver 305, a media exchange module 310, an encoder driver 320, and a video processing module 325.
In some embodiments, the media exchange module 310 allows programs on the device that are consumers and producers of media content to exchange media content and instructions regarding the processing of the media content. In the video processing and encoding module 300, the media exchange module 310 of some embodiments routes instructions and media content between the video processing module 325 and the CIPU driver 305, and between the video processing module 325 and the encoder driver 320. To facilitate the routing of such instructions and media content, the media exchange module 310 of some embodiments provides a set of application programming interfaces (APIs) for the consumers and producers of media content to use. In some of such embodiments, the media exchange module 310 is a set of one or more frameworks that is part of an operating system running on the dual camera mobile device. One example of such a media exchange module 310 is the Core Media framework provided by Apple Inc.
The video processing module 325 performs image processing on the images and/or the videos captured by the cameras of the device. Examples of such operations include exposure adjustment operations, focus adjustment operations, perspective correction, dynamic range adjustment, image resizing, image compositing, etc. In some embodiments, some image processing operations can also be performed by the media exchange module 310. For instance, as shown in FIG. 3, the media exchange module 310 of some embodiments performs a temporal noise reduction (TNR) operation (e.g., by TNR 315) that reduces noise in video images captured by the cameras of the device. Further examples of such image processing operations of the video processing module 325 and the media exchange module 310 will be provided below.
Through the media exchange module 310, the video processing module 325 interfaces with the CIPU driver 305 and the encoder driver 320, as mentioned above. The CIPU driver 305 serves as a communication interface between a captured image processing unit (CIPU) 330 and the media exchange module 310. As further described below, the CIPU 330 is the component of the dual camera device that is responsible for processing images captured during image capture or video capture operations of the device's cameras. From the video processing module 325 through the media exchange module 310, the CIPU driver 305 receives requests for images and/or videos from one or both of the device's cameras. The CIPU driver 305 relays such requests to the CIPU 330, and in response receives the requested images and/or videos from the CIPU 330, which the CIPU driver 305 then sends to the video processing module 325 through the media exchange module 310. Through the CIPU driver 305 and the media exchange module 310, the video processing module 325 of some embodiments also sends instructions to the CIPU 330 in order to modify some of its operations (e.g., to modify a camera's frame rate, exposure adjustment operation, focus adjustment operation, etc.).
The encoder driver 320 serves as a communication interface between the media exchange module 310 and an encoder hardware 335 (e.g., an encoder chip, an encoding component on a system on chip, etc.). In some embodiments, the encoder driver 320 receives images and requests to encode the images from the video processing module 325 through the media exchange module 310. The encoder driver 320 sends the images to be encoded to the encoder 335, which then performs picture encoding or video encoding on the images. When the encoder driver 320 receives encoded images from the encoder 335, the encoder driver 320 sends the encoded images back to the video processing module 325 through the media exchange module 310.
In some embodiments, the video processing module325 can perform different operations on the encoded images that it receives from the encoder. Examples of such operations include storing the encoded images in a storage of the device, transmitting the encoded images in a video conference through a network interface of the device, etc.
In some embodiments, some or all of the modules of the video processing and encoding module 300 are implemented as part of an operating system. For example, some embodiments implement all four components 305, 310, 320, and 325 of this module 300 as part of the operating system of the device. Other embodiments implement the media exchange module 310, the CIPU driver 305, and the encoder driver 320 as part of the operating system of the device, while having the video processing module 325 as an application that runs on the operating system. Still, other implementations of the module 300 are possible.
The operation of the video processing and encoding module 300 during a video capture session will now be described. To start a video capture session, the video processing module 325 initializes several components that are needed for the video capture session. In some embodiments, these components include (1) the CIPU 330, (2) a scaling and compositing module (not shown) of the video processing module 325, (3) an image processing module (not shown) of the video processing module 325, and (4) the encoder 335. Also, the video processing module 325 of some embodiments initializes a network manager (not shown) when it is participating in a video conference.
Through the media exchange module 310 and the CIPU driver 305, the video processing module sends its initialization request to the CIPU 330, in order to have one or both of the cameras of the device start video capturing. In some embodiments, this request specifies a particular frame rate, exposure level, and scaling size for each camera that needs to capture a video. In response to this request, the CIPU 330 starts to return video images from the requested cameras at the specified rate(s), exposure level(s), and scaling size(s). These video images are returned to the video processing module 325 through the CIPU driver 305 and the media exchange module 310, which, as mentioned above, performs TNR operations on the video images before supplying them to the video processing module 325. At the video processing module 325, the video images are stored in a buffer (not shown) for additional image processing.
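The per-camera parameters named in this request can be pictured as a small record per camera. The sketch below is an illustrative assumption of such a record; the field names and example values are hypothetical, and only the presence of a frame rate, exposure level, and scaling size per camera comes from the description above.

```python
# Hypothetical sketch of a per-camera capture request.
from dataclasses import dataclass


@dataclass
class CaptureRequest:
    camera_id: int         # 0 = front camera, 1 = back camera (assumed ids)
    frame_rate: int        # frames per second
    exposure_level: float  # normalized exposure setting
    scaling_size: tuple    # (width, height) to which frames are scaled


# Example: request video from both cameras with different settings.
requests = [
    CaptureRequest(camera_id=0, frame_rate=30, exposure_level=0.5,
                   scaling_size=(640, 480)),
    CaptureRequest(camera_id=1, frame_rate=24, exposure_level=0.4,
                   scaling_size=(1280, 720)),
]
```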
The image processing module of the video processing module 325 retrieves the video images stored in the buffer for additional video processing. The scaling and compositing module then retrieves the processed video images in order to scale them if necessary for real time display on the display screen of the device. In some embodiments, this module creates composite images from the images captured by two cameras of the device or from images captured by the camera(s) of the device along with the camera(s) of another device during a video conference in order to provide a real-time display of the captured video images on the device or to create a composite video image for encoding.
The processed and/or composited video images are supplied to the encoder 335 through the encoder driver 320 and the media exchange module 310. The encoder 335 then encodes the video images. The encoded images are then returned to the video processing module 325 (again through the encoder driver 320 and the media exchange module 310) for storage on the device or for transmission during a video conference. When the device is participating in a video conference, the network manager (that was initialized by the video processing module 325) then retrieves these encoded images, packetizes them and transmits them to one or more other devices through a network interface (not shown) of the device.
II. Captured Image Processing
The images captured by cameras of the dual camera mobile device of some embodiments are raw, unprocessed images. These images require conversion to a particular color space before the images can be used for other operations such as transmitting the images to another device (e.g., during a video conference), storing the images, or displaying the images. In addition, the images captured by the cameras may need to be processed to correct errors and/or distortions and to adjust the images' color, size, etc. Accordingly, some embodiments perform several processing operations on the images before storing, transmitting, and displaying such images. Part of the processing of such images is performed by the CIPU 330.
One example of such a CIPU is illustrated in FIG. 4. Specifically, this figure conceptually illustrates a captured image processing unit (CIPU) 400 of some embodiments. This CIPU 400 includes a single processing pipeline 485 that either processes images from only one of the device's cameras at a time, or processes images from both of the device's cameras simultaneously in a time-division multiplex fashion (i.e., in a time interleaved manner). The CIPU 400's processing pipeline 485 can be configured differently to address differing characteristics and/or operational settings of the different cameras. Examples of different camera characteristics in some embodiments include different resolutions, noise sensors, lens types (fixed or zoom lens), etc. Also, examples of different operational settings under which the device can operate the cameras in some embodiments include image resolution size, frame rate, zoom level, exposure level, etc.
As shown in FIG. 4, the CIPU 400 includes a sensor module 415, a line/frame buffer 417, a bad pixel correction (BPC) module 420, a lens shading (LS) module 425, a demosaicing module 430, a white balance (WB) module 435, a gamma module 440, a color space conversion (CSC) module 445, a hue, saturation, and contrast (HSC) module 450, a scaler module 455, a filter module 460, a statistics engine 465, two sets of registers 470, and a controller module 475. In some embodiments, all of the modules of the CIPU 400 are implemented in hardware (e.g., an ASIC, FPGA, a SOC with a microcontroller, etc.), while in other embodiments, some or all of the modules of the CIPU 400 are implemented in software.
As shown in FIG. 4, the sensor module 415 communicatively couples to two pixel arrays 410a and 410b and two sets of sensors 405a and 405b of two cameras of the device. In some embodiments, this communicative coupling is facilitated through each camera sensor's mobile industry processor interface (MIPI).
Through this communicative coupling, the sensor module 415 can forward instructions to the cameras to control various aspects of each camera's operations such as its power level, zoom level, focus, exposure level, etc. In some embodiments, each camera has four operational power modes. In the first operational power mode, the camera is powered off. For the second operational power mode, the camera is powered on, but it is not yet configured. In the third operational power mode, the camera is powered on, the camera's sensor is configured, and the camera sensor's pixels are collecting photons and converting the collected photons to digital values. However, the camera sensor is not yet sending images to the sensor module 415. Finally, in the fourth operational power mode, the camera is in the same operational power mode as the third power mode except the camera is now sending images to the sensor module 415.
During the operation of the device, the cameras may switch from one operational power mode to another any number of times. When switching operational power modes, some embodiments require the cameras to switch operational power modes in the order described above. Therefore, in those embodiments, a camera in the first operational power mode can only switch to the second operational power mode. When the camera is in the second operational power mode, it can switch to the first operational power mode or to the third operational power mode. Similarly, the camera can switch from the third operational power mode to the second operational power mode or the fourth operational power mode. When the camera is in the fourth operational power mode, it can only switch back to the third operational power mode.
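The ordering constraint described above amounts to a simple rule that a camera may only step to an adjacent operational power mode. A minimal sketch of that rule follows; the mode labels are illustrative, not terminology from this description.

```python
# Sketch of the adjacency rule for the four operational power modes:
# a camera may only step to the immediately higher or lower mode.
POWER_MODES = ["off", "powered_on", "sensor_configured", "streaming"]


def can_switch(current, target):
    """Return True if the camera may move directly from current to target."""
    i, j = POWER_MODES.index(current), POWER_MODES.index(target)
    return abs(i - j) == 1


assert can_switch("off", "powered_on")               # first -> second: allowed
assert not can_switch("off", "streaming")            # first -> fourth: must step through
assert can_switch("streaming", "sensor_configured")  # fourth -> third: allowed
```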
Moreover, switching from one operational power mode to the next or the previous operational power mode takes a particular amount of time. Thus, switching across two or three operational power modes (e.g., from the first operational power mode to the third or fourth) takes longer than switching between adjacent operational power modes. The different operational power modes also consume different amounts of power. For instance, the fourth operational power mode consumes the most power, the third operational power mode consumes more power than the first and second, and the second operational power mode consumes more power than the first. In some embodiments, the first operational power mode does not consume any power.
When a camera is not in the fourth operational power mode capturing images, the camera may be left in one of the other operational power modes. Determining the operational mode in which to leave the unused camera depends on how much power the camera is allowed to consume and how fast the camera may need to respond to a request to start capturing images. For example, a camera configured to operate in the third operational power mode (e.g., standby mode) consumes more power than a camera configured to be in the first operational power mode (i.e., powered off). However, when the camera is instructed to capture images, the camera operating in the third operational power mode can switch to the fourth operational power mode faster than the camera operating in the first operational power mode. As such, the cameras can be configured to operate in the different operational power modes when not capturing images based on different requirements (e.g., response time to a request to capture images, power consumption).
Through its communicative coupling with each camera, the sensor module 415 can direct one or both sets of camera sensors to start capturing images when the video processing module 325 requests one or both cameras to start capturing images and the sensor module 415 receives this request through the controller module 475, as further described below. Bayer filters are superimposed over each of the camera sensors and thus each camera sensor outputs Bayer pattern images, which are stored in the pixel array associated with each camera sensor. A Bayer pattern image is an image where each pixel only stores one color value: red, blue, or green.
Through its coupling with the pixel arrays 410a and 410b, the sensor module 415 retrieves raw Bayer pattern images stored in the camera pixel arrays 410a and 410b. By controlling the rate at which the sensor module 415 retrieves images from a camera's pixel array, the sensor module 415 can control the frame rate of the video images that are being captured by a particular camera. By controlling the rate of its image retrieval, the sensor module 415 can also interleave the fetching of images captured by the different cameras in order to interleave the CIPU processing pipeline 485's image processing of the captured images from the different cameras. The sensor module 415's control of its image retrieval is further described below and in U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled "Establishing Video Conference During a Phone Call." This U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled "Establishing Video Conference During a Phone Call," is incorporated herein by reference.
The sensor module 415 stores the image lines (i.e., rows of pixels of an image) that it retrieves from the pixel arrays 410a and 410b in the line/frame buffer 417. Each image line in the line/frame buffer 417 is processed through the CIPU processing pipeline 485. As shown in FIG. 4, the CIPU processing pipeline 485 is formed by the BPC module 420, the LS module 425, the demosaicing module 430, the WB module 435, the gamma module 440, the CSC module 445, the HSC module 450, the scaler module 455, and the filter module 460. In some embodiments, the CIPU processing pipeline 485 processes images from the line/frame buffer 417 on a line-by-line (i.e., row-by-row) basis while in other embodiments the CIPU processing pipeline 485 processes entire images from the line/frame buffer 417 on a frame-by-frame basis.
In the exemplary pipeline illustrated in FIG. 4, the BPC module 420 is the module that retrieves the images from the line/frame buffer 417. This module performs a bad-pixel removal operation that attempts to correct bad pixels in the retrieved images that might have resulted from one or more of the camera sensors being defective (e.g., the defective photo sensors do not sense light at all, sense light incorrectly, etc.). In some embodiments, the BPC module 420 detects bad pixels by comparing a particular pixel in an image with one or more neighboring pixels in the image. If the difference between the value of the particular pixel and the values of the neighboring pixels is greater than a threshold amount, the particular pixel's value is replaced by the average of the values of several neighboring pixels that are of the same color (i.e., red, green, or blue) as the particular pixel.
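A minimal sketch of this bad-pixel test on a raw Bayer image follows. In a Bayer mosaic the nearest same-color neighbors lie two pixels away, and the threshold value used here is an illustrative assumption.

```python
# Sketch of bad-pixel correction on a 2-D Bayer image (NumPy integer array).
import numpy as np


def correct_bad_pixel(bayer, row, col, threshold=64):
    """Replace the pixel at (row, col) with the mean of its same-color
    neighbors if it deviates from them by more than `threshold`."""
    h, w = bayer.shape
    # Same-color neighbors sit two rows/columns away in a Bayer mosaic.
    neighbors = [bayer[r, c]
                 for r, c in ((row - 2, col), (row + 2, col),
                              (row, col - 2), (row, col + 2))
                 if 0 <= r < h and 0 <= c < w]
    mean = sum(int(v) for v in neighbors) / len(neighbors)
    if abs(int(bayer[row, col]) - mean) > threshold:
        bayer[row, col] = int(mean)
    return bayer
```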
The operation of the BPC module 420 is in part controlled by the values stored for this module in the two sets of registers 470 of the CIPU 400. Specifically, to process the images captured by the two different cameras of the device, some embodiments configure the CIPU processing pipeline 485 differently for each camera, as mentioned above. The CIPU processing pipeline 485 is configured for the two different cameras by storing two different sets of values in the two different sets of registers 470a (Ra) and 470b (Rb) of the CIPU 400. Each set of registers 470 includes one register (Ra or Rb) for each of the modules 420-460 within the CIPU processing pipeline 485. Each register in each register set stores a set of values that defines one processing pipeline module's operation. Accordingly, as shown in FIG. 4, the register set 470a is for indicating the mode of operation of each processing pipeline module for one camera (camera A) of the dual camera mobile device, while the register set 470b is for indicating the mode of operation of each module for the other camera (camera B) of the dual camera mobile device.
One example of configuring the CIPU processing pipeline 485 differently for each camera is to configure the modules of the CIPU processing pipeline 485 to process different sized images. For instance, if the camera sensor 405a is 640×480 pixels and the camera sensor 405b is 2048×1536 pixels, the set of registers 470a is configured to store values that instruct the modules of the CIPU processing pipeline 485 to process 640×480 pixel images and the set of registers 470b is configured to store values that instruct the modules of the CIPU processing pipeline 485 to process 2048×1536 pixel images.
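The two register sets can be pictured as two configuration tables that program the same pipeline modules differently for each camera. The sketch below is illustrative; the module settings shown are assumptions, with only the per-camera image sizes taken from the example above.

```python
# Sketch of per-camera register sets Ra (camera A) and Rb (camera B).
register_sets = {
    "Ra": {"scaler": {"width": 640, "height": 480},
           "bpc": {"threshold": 64}},     # settings for camera A
    "Rb": {"scaler": {"width": 2048, "height": 1536},
           "bpc": {"threshold": 32}},     # settings for camera B
}


def configure_pipeline(active_camera):
    """Select the register set for the camera whose images are processed next."""
    return register_sets["Ra" if active_camera == "A" else "Rb"]


settings = configure_pipeline("B")
print(settings["scaler"])  # {'width': 2048, 'height': 1536}
```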
In some embodiments, different processing pipeline configurations (i.e., register values) are stored in different profile settings. In some of such embodiments, a user of the mobile device is allowed to select one of the profile settings (e.g., through a user interface displayed on the mobile device) to set the operation of a camera(s). For example, the user may select a profile setting for configuring a camera to capture high resolution video, a profile setting for configuring the same camera to capture low resolution video, or a profile setting for configuring both cameras to capture high resolution still images. Different configurations are possible, which can be stored in many different profile settings. In other of such embodiments, instead of allowing the user to select a profile setting, a profile setting is automatically selected based on which application or activity the user selects. For instance, if the user selects a video conferencing application, a profile that configures both cameras to capture video is automatically selected, if the user selects a photo application, a profile that configures one of the cameras to capture still images is automatically selected, etc.
After the BPC module 420, the LS module 425 receives the bad-pixel-corrected images. The LS module 425 performs a lens shading correction operation to correct for image defects that are caused by camera lenses that produce light falloff effects (i.e., light is reduced towards the edges of the camera sensor). Such effects cause images to be unevenly illuminated (e.g., darker at corners and/or edges). To correct these image defects, the LS module 425 of some embodiments estimates a mathematical model of a lens' illumination fall-off. The estimated model is then used to compensate for the lens fall-off of the image so as to evenly illuminate the unevenly illuminated portions of the image. For example, if a corner of the image is half the brightness of the center of the image, the LS module 425 of some embodiments multiplies the corner pixels' values by two in order to produce an even image.
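A minimal sketch of such a correction under a simple radial falloff model follows. The quadratic falloff and the corner gain of two are illustrative assumptions; the description above only states that a mathematical model of the falloff is estimated and applied.

```python
# Sketch of lens-shading correction: multiply pixels by a gain that grows
# with distance from the image center. Image is assumed (H, W, 3), uint8.
import numpy as np


def lens_shading_correct(image, max_gain=2.0):
    h, w = image.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)  # 0 at center, 1 at corners
    gain = 1.0 + (max_gain - 1.0) * r ** 2           # e.g., corner values doubled
    corrected = image.astype(np.float32) * gain[..., None]
    return np.clip(corrected, 0, 255).astype(np.uint8)
```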
The demosaicing module 430 performs a demosaicing operation to generate full color images from images of sampled colors. As noted above, the camera sensors output Bayer pattern images, which are incomplete because each pixel of a Bayer pattern image stores only one color value. The demosaicing module 430 reconstructs a red, green, blue (RGB) image from a Bayer pattern image by interpolating the color values for each set of colors in the Bayer pattern image.
The WB module 435 performs a white balance operation on the RGB images received from the demosaicing module 430 so that the colors of the content of the images are similar to the colors of such content perceived by the human eye in real life. The WB module 435 adjusts the white balance by adjusting colors of the images to render neutral colors (e.g., gray, white, etc.) correctly. For example, an image of a piece of white paper under an incandescent light may appear yellow whereas the human eye perceives the piece of paper as white. To account for the difference between the color of the images that the sensor captures and what the human eye perceives, the WB module 435 adjusts the color values of the image so that the captured image properly reflects the colors perceived by the human eye.
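One common way to compute such white-balance gains is the gray-world heuristic, sketched below for illustration; the description above does not mandate any particular white-balance algorithm.

```python
# Sketch of white balancing under the gray-world assumption: scale each
# channel so the channel means match.
import numpy as np


def gray_world_white_balance(rgb):
    """rgb: float array of shape (H, W, 3). Returns a balanced copy."""
    means = rgb.reshape(-1, 3).mean(axis=0)  # per-channel means
    gains = means.mean() / means             # push each channel mean to gray
    return np.clip(rgb * gains, 0, 255)
```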
The statistics engine 465 collects image data at various stages of the CIPU processing pipeline 485. For example, FIG. 4 shows that the statistics engine 465 collects image data after the LS module 425, the demosaicing module 430, and the WB module 435. Different embodiments collect data from any number of different stages of the CIPU processing pipeline 485. The statistics engine 465 processes the collected data, and, based on the processed data, adjusts the operations of the camera sensors 405a and 405b through the controller module 475 and the sensor module 415. Examples of such operations include exposure and focus. Although FIG. 4 shows the statistics engine 465 controlling the camera sensors 405a and 405b through the controller module 475, other embodiments of the statistics engine 465 control the camera sensors through just the sensor module 415.
The processed data can also be used to adjust the operations of various modules of the CIPU 400. For instance, the statistics engine 465 of some embodiments adjusts the operations of the WB module 435 based on data collected after the WB module 435. In some of such embodiments, the statistics engine 465 provides an automatic white balance (AWB) function by using the processed data to adjust the white balancing operation of the WB module 435. Other embodiments can use processed data collected from any number of stages of the CIPU processing pipeline 485 to adjust the operations of any number of modules within the CIPU processing pipeline 485. Further, the statistics engine 465 can also receive instructions from the controller module 475 to adjust the operations of one or more modules of the CIPU processing pipeline 485.
After receiving the images from the WB module 435, the gamma module 440 performs a gamma correction operation on the image to code and decode luminance or tristimulus values of the camera system. The gamma module 440 of some embodiments corrects gamma by converting a 10-12 bit linear signal into an 8 bit non-linear encoding in order to correct the gamma of the image. Some embodiments correct gamma by using a lookup table.
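A minimal sketch of a lookup-table implementation that maps a 10-bit linear value to an 8-bit non-linear code follows; the 1/2.2 gamma exponent is an illustrative choice of curve.

```python
# Sketch of gamma correction via a lookup table: 10-bit linear in, 8-bit out.
import numpy as np

GAMMA_LUT = np.round(255.0 * (np.arange(1024) / 1023.0) ** (1 / 2.2)).astype(np.uint8)


def apply_gamma(linear_10bit):
    """linear_10bit: integer array with values in [0, 1023]."""
    return GAMMA_LUT[linear_10bit]


print(apply_gamma(np.array([0, 512, 1023])))  # dark, mid, and full-scale samples
```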
The CSC module 445 converts the image received from the gamma module 440 from one color space to another color space. Specifically, the CSC module 445 converts the image from an RGB color space to a luminance and chrominance (YUV) color space. However, other embodiments of the CSC module 445 can convert images from and to any number of color spaces.
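A minimal sketch of an RGB-to-YUV conversion follows, using the BT.601 matrix as one common choice; the description above does not fix a particular conversion matrix.

```python
# Sketch of RGB-to-YUV color space conversion (BT.601 coefficients).
import numpy as np

RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])


def rgb_to_yuv(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    return rgb @ RGB_TO_YUV.T
```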
The HSC module 450 may adjust the hue, saturation, contrast, or any combination thereof of the images received from the CSC module 445. The HSC module 450 may adjust these properties to reduce the noise or enhance the images, for example. For instance, the saturation of images captured by a low-noise camera sensor can be increased to make the images appear more vivid. In contrast, the saturation of images captured by a high-noise camera sensor can be decreased to reduce the color noise of such images.
After the HSC module 450, the scaler module 455 may resize images to adjust the pixel resolution of the image or to adjust the data size of the image. The scaler module 455 may also reduce the size of the image in order to fit a smaller display, for example. The scaler module 455 can scale the image a number of different ways. For example, the scaler module 455 can scale images up (i.e., enlarge) and down (i.e., shrink). The scaler module 455 can also scale images proportionally or scale images anamorphically.
The filter module 460 applies one or more filter operations to images received from the scaler module 455 to change one or more attributes of some or all pixels of an image. Examples of filters include a low-pass filter, a high-pass filter, a band-pass filter, a bilateral filter, and a Gaussian filter, among others. As such, the filter module 460 can apply any number of different filters to the images.
The controller module 475 of some embodiments is a microcontroller that controls the operation of the CIPU 400. In some embodiments, the controller module 475 controls (1) the operation of the camera sensors (e.g., exposure level) through the sensor module 415, (2) the operation of the CIPU processing pipeline 485, (3) the timing of the CIPU processing pipeline 485 (e.g., when to switch camera sensors, when to switch registers, etc.), and (4) a flash/strobe (not shown), which is part of the dual camera mobile device of some embodiments.
Some embodiments of the controller module 475 process instructions received from the statistics engine 465 and the CIPU driver 480. In some embodiments, the instructions received from the CIPU driver 480 are instructions from the dual camera mobile device (i.e., received from the local device) while in other embodiments the instructions received from the CIPU driver 480 are instructions from another device (e.g., remote control during a video conference). Based on the processed instructions, the controller module 475 can adjust the operation of the CIPU 400 by programming the values of the registers 470. Moreover, the controller module 475 can dynamically reprogram the values of the registers 470 during the operation of the CIPU 400.
As shown in FIG. 4, the CIPU 400 includes a number of modules in the CIPU processing pipeline 485. However, one of ordinary skill will realize that the CIPU 400 can be implemented with just a few of the illustrated modules or with additional and different modules. In addition, the processing performed by the different modules can be applied to images in sequences different from the sequence illustrated in FIG. 4.
An example operation of the CIPU 400 will now be described by reference to FIG. 4. For purposes of explanation, the set of registers Ra is used for processing images captured by camera sensor 405a of the dual camera mobile device and the set of registers Rb is used for processing images captured by camera sensor 405b of the dual camera mobile device. The controller module 475 receives instructions from the CIPU driver 480 to produce images captured by one of the cameras of the dual camera mobile device.
The controller module 475 then initializes various modules of the CIPU processing pipeline 485 to process images captured by one of the cameras of the dual camera mobile device. In some embodiments, this includes the controller module 475 checking that the correct set of the registers 470 is used. For example, if the CIPU driver 480 instructs the controller module 475 to produce images captured by the camera sensor 405a, the controller module 475 checks that the set of registers Ra is the set of registers from which the modules of the CIPU 400 read. If not, the controller module 475 switches between the sets of registers so that the set of registers Ra is the set that is read by the modules of the CIPU 400.
For each module in the CIPU processing pipeline 485, the mode of operation is indicated by the values stored in the set of registers Ra. As previously mentioned, the values in the set of registers 470 can be dynamically reprogrammed during the operation of the CIPU 400. Thus, the processing of one image can differ from the processing of the next image. While the discussion of this example operation of the CIPU 400 describes each module in the CIPU 400 reading values stored in registers to indicate the mode of operation of the modules, in some software-implemented embodiments, parameters are instead passed to the various modules of the CIPU 400.
In some embodiments, the controller module 475 initializes the sensor module 415 by instructing the sensor module 415 to delay a particular amount of time after retrieving an image from the pixel array 410a. In other words, the controller module 475 instructs the sensor module 415 to retrieve the images from the pixel array 410a at a particular rate.
Next, the controller module 475 instructs the camera sensor 405a through the sensor module 415 to capture images. In some embodiments, the controller module 475 also provides exposure and other camera operation parameters to the camera sensor 405a. In other embodiments, the camera sensor 405a uses default values for the camera sensor operation parameters. Based on the parameters, the camera sensor 405a captures a raw image, which is stored in the pixel array 410a. The sensor module 415 retrieves the raw image from the pixel array 410a and sends the image to the line/frame buffer 417 for storage before the CIPU processing pipeline 485 processes the image.
Under certain circumstances, images may be dropped by the line/frame buffer 417. When the camera sensors 405a and/or 405b are capturing images at a high rate, the sensor module 415 may receive and store images in the line/frame buffer 417 faster than the BPC module 420 can retrieve the images from the line/frame buffer 417 (e.g., when capturing high frame-rate video), and the line/frame buffer 417 will become full. When this happens, the line/frame buffer 417 of some embodiments drops images (i.e., frames) on a first in, first out basis. That is, when the line/frame buffer 417 drops an image, it drops the image that was received before all the other images in the line/frame buffer 417.
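The drop policy can be pictured as a bounded first-in, first-out queue that discards its oldest frame when full. The sketch below is illustrative; the buffer capacity is an assumption.

```python
# Sketch of a bounded buffer that drops the oldest frame when full.
from collections import deque


class LineFrameBuffer:
    def __init__(self, capacity=8):
        self.frames = deque()
        self.capacity = capacity

    def push(self, frame):
        if len(self.frames) >= self.capacity:
            self.frames.popleft()  # drop the frame received before all others
        self.frames.append(frame)

    def pop(self):
        return self.frames.popleft() if self.frames else None
```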
The processing of the image by the CIPU processing pipeline 485 starts with the BPC module 420 retrieving the image from the line/frame buffer 417 to correct any bad pixels in the image. The BPC module 420 then sends the image to the LS module 425 to correct for any uneven illumination in the image. After the illumination of the image is corrected, the LS module 425 sends the image to the demosaicing module 430, which processes the raw image to generate an RGB image from the raw image. Next, the WB module 435 receives the RGB image from the demosaicing module 430 and adjusts the white balance of the RGB image.
As noted above, the statistics engine 465 may have collected some data at various points of the CIPU processing pipeline 485. For example, the statistics engine 465 collects data after the LS module 425, the demosaicing module 430, and the WB module 435 as illustrated in FIG. 4. Based on the collected data, the statistics engine 465 may adjust the operation of the camera sensor 405a, the operation of one or more modules in the CIPU processing pipeline 485, or both, in order to adjust the capturing of subsequent images from the camera sensor 405a. For instance, based on the collected data, the statistics engine 465 may determine that the exposure level of the current image is too low and thus instruct the camera sensor 405a through the sensor module 415 to increase the exposure level for subsequently captured images. Thus, the statistics engine 465 of some embodiments operates as a feedback loop for some processing operations.
After the WB module 435 adjusts the white balance of the image, it sends the image to the gamma module 440 for gamma correction (e.g., adjusting the gamma curve of the image). The CSC module 445 receives the gamma-corrected image from the gamma module 440 and performs color space conversion. In this example, the CSC module 445 converts the RGB image to a YUV image. In other words, the CSC module 445 converts an image that is represented in an RGB color space to an image that is represented in a YUV color space. The HSC module 450 receives the YUV image from the CSC module 445 and adjusts the hue, saturation, and contrast attributes of various pixels in the image. After the HSC module 450, the scaler module 455 resizes the image (e.g., enlarging or shrinking the image). The filter module 460 applies one or more filters on the image after receiving the image from the scaler module 455. Finally, the filter module 460 sends the processed image to the CIPU driver 480.
In this example of the operation of the CIPU 400 described above, each module in the CIPU processing pipeline 485 processed the image in some manner. However, other images processed by the CIPU 400 may not require processing by all the modules of the CIPU processing pipeline 485. For example, an image may not require white balance adjustment, gamma correction, scaling, or filtering. As such, the CIPU 400 can process images in any number of ways based on a variety of received input such as instructions from the CIPU driver 480 or data collected by the statistics engine 465, for example.
Different embodiments control the rate at which images are processed (i.e., the frame rate) differently. One manner of controlling the frame rate is through manipulation of vertical blanking intervals (VBIs). For some embodiments that retrieve image lines for processing images on a line-by-line basis, a VBI is the time difference between retrieving the last line of an image of a video captured by a camera of the dual camera mobile device from a pixel array and retrieving the first line of the next image of the video from the pixel array. In other embodiments, a VBI is the time difference between retrieving one image of a video captured by a camera of the dual camera mobile device from a pixel array and retrieving the next image of the video from the pixel array.
One example where VBI can be used is between the sensor module 415 and the pixel arrays 410a and 410b. For example, some embodiments of the sensor module 415 retrieve images from the pixel arrays 410a and 410b on a line-by-line basis and other embodiments of the sensor module 415 retrieve images from the pixel arrays 410a and 410b on an image-by-image basis. Thus, the frame rate can be controlled by adjusting the VBI of the sensor module 415: increasing the VBI reduces the frame rate and decreasing the VBI increases the frame rate.
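The inverse relationship between the VBI and the frame rate can be shown numerically. The following Python sketch assumes a hypothetical line-by-line readout; the line count, line time, and VBI values are made up for illustration only.

def frame_rate_from_vbi(lines_per_image, line_time_us, vbi_us):
    """Approximate frames per second for a sensor read out line by line:
    one frame occupies the active line readout time plus the VBI."""
    frame_time_us = lines_per_image * line_time_us + vbi_us
    return 1e6 / frame_time_us

# Shrinking the VBI raises the frame rate; enlarging it lowers the frame rate.
print(round(frame_rate_from_vbi(720, 20.0, 19_200), 1))   # ~29.8 fps
print(round(frame_rate_from_vbi(720, 20.0, 52_600), 1))   # ~14.9 fps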
FIG. 5 conceptually illustrates examples of different frame rates 505, 510, and 515 based on different VBIs. Each sequence shows an image, which is captured by one of the cameras of the dual camera mobile device, of a person holding a guitar at various time instances 525-555 along a timeline 520. In addition, the time between each time instance 525-555 is the same and will be referred to as one time unit. For purposes of explanation, FIG. 5 will now be described by reference to the sensor module 415 and the pixel array 410a of FIG. 4. As such, each image represents a time instance along the timeline 520 at which the sensor module 415 retrieves an image from the pixel array 410a.
In the example frame rate 505, the VBI of the sensor module 415 for the pixel array 410a is set to three time units (e.g., by the controller module 475). That is, the sensor module 415 retrieves an image from the pixel array 410a every third time instance along the timeline 520. As shown in the example frame rate 505, the sensor module 415 retrieves an image at the time instances 525, 540, and 555. Thus, the example frame rate 505 has a frame rate of one image per three time units.
The example frame rate 510 is similar to the example frame rate 505 except that the VBI is set to two time units. Thus, the sensor module 415 retrieves an image from the pixel array 410a every second time instance along the timeline 520. The example frame rate 510 shows the sensor module 415 retrieving an image from the pixel array 410a at the time instances 525, 535, 545, and 555. Since the VBI of the example frame rate 510 is less than the VBI of the example frame rate 505, the frame rate of the example frame rate 510 is higher than the frame rate of the example frame rate 505.
The example frame rate 515 is also similar to the example frame rate 505 except that the VBI of the sensor module 415 for the pixel array 410a is set to one time unit. Therefore, the sensor module 415 is instructed to retrieve an image from the pixel array 410a every time instance along the timeline 520. As illustrated, the sensor module 415 retrieves an image from the pixel array 410a at the time instances 525-555. The VBI of the example frame rate 515 is less than the VBIs of the example frame rates 505 and 510. Therefore, the frame rate of the example frame rate 515 is higher than the frame rates of the example frame rates 505 and 510.
III. Video Conferencing
A. Video Conference Architecture
FIG. 6 conceptually illustrates a software architecture for a video conferencing and processing module 600 of a dual camera mobile device of some embodiments. The video conferencing and processing module 600 includes a CIPU driver 605, a media exchange module 610, and an encoder driver 620 that are similar to the corresponding modules and drivers 305, 310, and 320 described above by reference to FIG. 3. The video conferencing and processing module 600 also includes a video conference module 625, a video conference client 645, and a network interface 650 for performing a variety of video conferencing functions. Like the video processing and encoding module 300, the video conferencing and processing module 600 processes and encodes images that are captured from cameras of the dual camera mobile device.
As described above by reference to FIG. 3, the media exchange module 610 allows consumers and producers of media content in the device to exchange media content and instructions regarding the processing of the media content, the CIPU driver 605 serves as a communication interface with the captured image processing unit (CIPU) 655, and the encoder driver 620 serves as a communication interface with the encoder hardware 660 (e.g., an encoder chip, an encoding component on a system on chip, etc.).
The video conference module 625 of some embodiments handles various video conferencing functions such as image processing, video conference management, and networking. As shown, the video conference module 625 interacts with the media exchange module 610, the video conference client 645, and the network interface 650. In some embodiments, the video conference module 625 receives instructions from and sends instructions to the video conference client 645. The video conference module 625 of some embodiments also sends data to and receives data from networks (e.g., a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a network of networks, a code division multiple access (CDMA) network, a GSM network, etc.) through the network interface 650.
The video conference module 625 includes an image processing layer 630, a management layer 635, and a network layer 640. In some embodiments, the image processing layer 630 performs image processing operations on images for video conferencing. For example, the image processing layer 630 of some embodiments performs exposure adjustment, image resizing, perspective correction, and dynamic range adjustment, as described in further detail below. The image processing layer 630 of some embodiments sends requests through the media exchange module 610 for images from the CIPU 655.
The management layer 635 of some embodiments controls the operation of the video conference module 625. For instance, in some embodiments, the management layer 635 initializes a camera/cameras of the dual camera mobile device, processes images and audio to transmit to a remote device, and processes images and audio received from the remote device. In some embodiments, the management layer 635 generates composite (e.g., PIP) displays for the device. Moreover, the management layer 635 may change the operation of the video conference module 625 based on networking reports received from the network layer 640.
In some embodiments, the network layer 640 performs some or all of the networking functionalities for video conferencing. For instance, the network layer 640 of some embodiments establishes a network connection (not shown) between the dual camera mobile device and a remote device of a video conference, transmits images to the remote device, and receives images from the remote device, among other functionalities, as described below and in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled "Establishing Video Conference During a Phone Call." In addition, the network layer 640 receives networking data such as packet loss, one-way latency, and roundtrip delay time, among other types of data, processes such data, and reports the data to the management layer 635.
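A highly simplified Python sketch of this three-layer organization is given below. The class and method names are hypothetical and the bodies are placeholders; the sketch only shows how a management layer might coordinate the image processing and network layers and react to a networking report.

class ImageProcessingLayer:
    def process(self, image):
        # exposure adjustment, resizing, perspective correction, etc.
        return {"processed": image}

class NetworkLayer:
    def transmit(self, image):
        # would send the processed image to the remote device here
        return True

class ManagementLayer:
    """Coordinates the other layers and reacts to networking reports."""
    def __init__(self):
        self.images = ImageProcessingLayer()
        self.network = NetworkLayer()

    def handle_local_frame(self, frame):
        self.network.transmit(self.images.process(frame))

    def on_network_report(self, report):
        # e.g., lower the transmitted frame rate when packet loss is high
        if report.get("packet_loss", 0.0) > 0.05:
            print("reducing frame rate")

conference = ManagementLayer()
conference.handle_local_frame("camera frame")
conference.on_network_report({"packet_loss": 0.10})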
The video conference client 645 of some embodiments is an application that may use the video conferencing functions of the video conference module 625, such as a video conferencing application, a voice-over-IP (VOIP) application (e.g., Skype), or an instant messaging application. In some embodiments, the video conference client 645 is a stand-alone application, while in other embodiments the video conference client 645 is integrated into another application.
In some embodiments, the network interface 650 is a communication interface that allows the video conference module 625 and the video conference client 645 to send and receive data over a network (e.g., a cellular network, a local area network, a wireless network, a network of networks, the Internet, etc.). For instance, if the video conference module 625 wants to send data (e.g., images captured by cameras of the dual camera mobile device) to another device on the Internet, the video conference module 625 sends the images to the other device through the network interface 650.
B. Video Conference Set Up
FIG. 7 conceptually illustrates an example video conferencerequest messaging sequence700 of some embodiments. This figure shows the video conferencerequest messaging sequence700 among avideo conference client710 running on adevice705, avideo conference server715, and a video conference client725 running on adevice720. In some embodiments, thevideo conference clients710 and725 are the same as thevideo conference client645 shown inFIG. 6. As shown inFIG. 7, one device (i.e., the device705) requests a video conference and another device (i.e., the device720) responds to such request. The dual camera mobile device described in the present application can perform both operations (i.e., make a request and respond to a request).
Thevideo conference server715 of some embodiments routes messages among video conference clients. While some embodiments implement thevideo conference server715 on one computing device, other embodiments implement thevideo conference server715 on multiple computing devices. In some embodiments, the video conference server is a publicly accessible server that can handle and route messages for numerous conferences at once. Each of thevideo conference clients710 and725 of some embodiments communicates with thevideo conference server715 over a network (e.g., a cellular network, a local area network, a wireless network, a network of networks, the Internet etc.) through a network interface such as thenetwork interface650 described above.
The video conferencerequest messaging sequence700 of some embodiments starts when thevideo conference client710 receives (at operation 1) a request from a user of thedevice705 to start a video conference with thedevice720. Thevideo conference client710 of some embodiments receives the request to start the video conference when the user of thedevice705 selects a user interface (UI) item of a user interface displayed on thedevice705. Examples of such user interfaces are illustrated inFIG. 8 andFIG. 11, which are described below.
After thevideo conference client710 receives the request, thevideo conference client710 sends (at operation 2) a video conference request, which indicates thedevice720 as the recipient based on input from the user, to thevideo conference server715. Thevideo conference server715 forwards (at operation 3) the video conference request to the video conference client725 of thedevice720. In some embodiments, thevideo conference server715 forwards the video conference request to the video conference client725 using push technology. That is, thevideo conference server715 initiates the transmission of the video conference request to the video conference client725 upon receipt from thevideo conference client710, rather than waiting for the client725 to send a request for any messages.
When the video conference client725 of some embodiments receives the video conference request, a user interface is displayed on thedevice720 to indicate to the user of thedevice720 that the user of thedevice705 sent a request to start a video conference and to prompt the user of thedevice720 to accept or reject the video conference request. An example of such a user interface is illustrated inFIG. 9, which is described below. In some embodiments, when the video conference client725 receives (at operation 4) a request to accept the video conference request from the user of thedevice705, the video conference client725 sends (at operation 5) a video conference acceptance to thevideo conference server715. The video conference client725 of some embodiments receives the request to accept the video conference request when the user of thedevice720 selects a user interface item of a user interface as illustrated inFIG. 9, for example.
After thevideo conference server715 receives the video conference acceptance from the video conference client725, thevideo conference server715 forwards (at operation 6) the video conference acceptance to thevideo conference client710. Some embodiments of thevideo conference server715 forward the video conference acceptance to thevideo conference client710 using the push technology described above.
Upon receiving the video conference acceptance, some embodiments establish (at operation 7) a video conference between thedevice705 and thedevice720. Different embodiments establish the video conference differently. For example, the video conference establishment of some embodiments includes negotiating a connection between thedevice705 and thedevice720, determining a bit rate at which to encode video, and exchanging video between thedevice705 and thedevice720.
In the above example, the user of the device 720 accepts the video conference request. In some embodiments, the device 720 can be configured (e.g., through the preference settings of the device) to automatically accept incoming video conference requests without displaying a UI. Moreover, the user of the device 720 can also reject (at operation 4) the video conference request (e.g., by selecting a user interface item of a user interface displayed on the device 720). Instead of sending a video conference acceptance, the video conference client 725 sends a video conference rejection to the video conference server 715, which forwards the video conference rejection to the video conference client 710. The video conference is then never established.
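The request and reply routing of FIG. 7 can be summarized with the following Python sketch. The classes and method names are hypothetical, and the auto-accept flag stands in for the user interaction; a real client would display the user interfaces described below rather than auto-accepting.

class VideoConferenceServer:
    """Routes conference requests and replies between clients (sketch only)."""
    def __init__(self):
        self.clients = {}

    def register(self, name, client):
        self.clients[name] = client

    def forward_request(self, sender, recipient):
        # push the request to the recipient immediately (operation 3)
        return self.clients[recipient].on_request(sender)

    def forward_reply(self, recipient, sender, accepted):
        # push the acceptance or rejection back to the requestor (operation 6)
        self.clients[sender].on_reply(recipient, accepted)

class VideoConferenceClient:
    def __init__(self, name, auto_accept=False):
        self.name, self.auto_accept = name, auto_accept

    def on_request(self, sender):
        # a UI would normally prompt the user to accept or deny here
        return self.auto_accept

    def on_reply(self, recipient, accepted):
        print("conference established" if accepted else "conference rejected")

server = VideoConferenceServer()
requestor = VideoConferenceClient("device_705")
recipient = VideoConferenceClient("device_720", auto_accept=True)
server.register("device_705", requestor)
server.register("device_720", recipient)
accepted = server.forward_request("device_705", "device_720")
server.forward_reply("device_720", "device_705", accepted)   # "conference established"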
In some embodiments, a video conference is initiated based on an ongoing phone call. That is, while the user of a mobile device is engaged in a phone call with a second user, the user can turn the phone call into a video conference with the permission of the other party. For some embodiments of the invention,FIG. 8 illustrates the start of such a video conference by a dual camera handheldmobile device800. This figure illustrates the start of the video conference in terms of fiveoperational stages810,815,820,825, and830 of a user interface (“UI”)805 of thedevice800.
As shown inFIG. 8, theUI805 includes aname field835, aselection menu840, and aselectable UI item845. Thename field835 displays the name of the person on the other end of the phone call, with whom a user would like to request a video conference. In this example, the selectable UI item845 (which can be implemented as a selectable button) provides a selectable End Call option for the user to end the phone call. Theselection menu840 displays a menu of selectable UI items, such as aSpeakerphone item842, aMute item844, a Keypad item846, aPhonebook item848, aHold item852, aVideo Conference item854, etc. Different embodiments display the selection menu differently. For the embodiments illustrated byFIG. 8, theselection menu840 includes several equally sized icons, each of which represents a different operation. Other embodiments provide a scrollable menu, or give priority to particular items (e.g., by making the items larger).
The operation of theUI805 will now be described by reference to the state of this UI during the five stages,810,815,820,825, and830 that are illustrated inFIG. 8. In thefirst stage810, a phone call has been established between the handheld mobile device user and Nancy Jones. Thesecond stage815 displays theUI805 after the user selects the selectable Video Conference option854 (e.g., through a single finger tap by finger850) to activate a video conference tool. In this example, the Video Conference option854 (which can be implemented as a selectable icon) allows the user to start a video conference during the phone call. In the second stage, theVideo Conference option854 is highlighted to indicate that the video conference tool has been activated. Different embodiments may indicate such a selection in different ways (e.g., by highlighting the border or the text of the item).
Thethird stage820 displays theUI805 after thedevice800 has started the video conference process with the selection of theVideo Conference option854. The third stage is a transitional hold stage while the device waits for the video conference to be established (e.g., while the device waits for the device on the other end of the call to accept or reject the video conference). In thethird stage820, the user of thedevice800 can still talk to the user of the other device (i.e., Nancy Jones) while the video conference connection is being established. In addition, some embodiments allow the user of thedevice800 to cancel the video conference request in thethird stage820 by selecting a selectable UI item displayed on the UI805 (not shown) for canceling the video conference request. During this hold stage, different embodiments use different displays in theUI805 to indicate the wait state.
As shown in FIG. 8, in some embodiments the wait state of the third stage is illustrated in terms of a full screen display of a video being captured by the device 800 along with a "Preview" notation at the bottom of this video. Specifically, in FIG. 8, the third stage 820 illustrates the start of the video conference process by displaying, in a display area 860 of the UI 805, a full screen presentation of the video being captured by the device's camera. In some embodiments, the front camera is the default camera selected by the device at the start of a video conference. Often, this front camera points to the user of the device at the start of the video conference. Accordingly, in the example illustrated in FIG. 8, the third stage 820 illustrates the device 800 as presenting a full screen video of the user of the device 800. The wait state of the device is further highlighted by the "Preview" designation 865 below the video appearing in the display area 860 during the third stage 820.
The transitionalthird hold stage820 can be represented differently in some embodiments. For instance, some embodiments allow the user of thedevice800 to select the back camera as the camera for starting the video conference. To allow for this selection, some embodiments allow the user to specify (e.g., through a menu preference setting) the back camera as the default camera for the start of a video conference, and/or allow the user to select the back camera from a menu that displays the back and front cameras after the user selects theVideo Conference option854. In either of these situations, the UI805 (e.g., display area860) displays a video captured by the back camera during thethird hold stage820.
Also, other embodiments might indicate the activation of the video conference tool by displaying the smaller version of the video captured by thedevice800, by displaying a still image that is stored on thedevice800, by providing a message to highlight the wait state of the device (e.g., by showing “Conference Being Established”), by not displaying the “Preview” designation, etc. Also, in thethird stage820, theUI805 of some embodiments provides an End button (not shown) to allow the user to cancel entering the video conference and revert back to the phone call if he decides not to enter the video conference at this stage (e.g., while the user is waiting for the remote user to respond to his request).
Thefourth stage825 illustrates theUI805 in a transitional state after the remote user has accepted the video conference request and a video conference connection has been established. In this transitional state, thedisplay area860 that displays the video of the local user (that is being captured by the front camera in this example) gradually decreases in size (i.e., gradually shrinks), as indicated by thearrows875. The display area860 (i.e., the local user's video) shrinks so that theUI805 can display a display area870 (e.g., a display window870) that contains the video from a camera of the remote device behind thedisplay area860. In other words, the shrinking of the local user'svideo860 creates aPIP display880 that has aforeground inset display860 of the local user's video and a backgroundmain display870 of the remote user. In this example, the backgroundmain display870 presents a video of a lady whose video is being captured by the remote device's front camera (e.g., Nancy Jones, the user of the remote device) or a lady whose video is being captured by the remote device's back camera (e.g., a lady whose video is being captured by Nancy Jones). One of ordinary skill will realize that the transitional fourth stage shown inFIG. 8 is simply one exemplary approach used by some embodiments, and that other embodiments might animate the transitional fourth stage differently.
The fourth stage 825 also illustrates a selectable UI item 832 in a lower display area 855. The selectable UI item 832 (which can be implemented as a selectable button) provides a selectable End Conference option 832 below the PIP display 880. The user may select this End Conference option 832 to end the video conference (e.g., through a single finger tap). Different embodiments may allow the user to end the conference in different ways, such as by toggling a switch on the mobile device, by giving voice commands, etc. Moreover, different embodiments may allow the End Conference option 832 to fade away during the video conference, thereby allowing the PIP display 880 to take up the entire display area 885. The End Conference option 832 may then reappear at a single finger tap at the bottom of the display area 885, giving the user access to the End Conference option 832. In some embodiments, the layout of the display area 855 is the same as that of the display area 855 described in further detail below.
Thefifth stage830 illustrates theUI805 after the animation of the fourthtransitional state825 has ended. Specifically, thefifth stage830 illustrates aPIP display880 that is presented by theUI805 during the video conference. As mentioned above, thisPIP display880 includes two video displays: alarger background display870 from the remote camera and a smallerforeground inset display860 from the local camera.
ThisPIP display880 is only one manner of presenting a composite view of the videos being captured by the remote and local devices. In addition to this composite view, the devices of some embodiments provide other composite views. For example, instead of having alarger background display870 of the remote user, thelarger background display870 can be of the local user and the smallerforeground inset display860 of the remote user. As further described below, some embodiments allow a user to switch during a video conference between the local cameras and/or remote cameras as the cameras for the inset and main views in thePIP display880.
Also, some embodiments allow the local and remote videos to appear in theUI805 in two side-by-side display areas (e.g., left and right display windows, or top and bottom display windows) or two diagonally aligned display areas. The manner of the PIP display or a default display mode may be specified by the user in some embodiments through the preference settings of the device or through controls that the user can select during a video conference, as further described below and in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.”
When the user of thedevice800 ofFIG. 8 invites the remote user to a video conference, the remote user may accept or reject the invitation.FIG. 9 illustrates a UI905 of the remote user'sdevice900 at sixdifferent stages910,915,920,925,930, and935 that show the sequence of operations for presenting and accepting a video conference invitation at the remote user's device. The description of the UI905 below refers to the user of the device900 (i.e., the device that receives the video conference request) as the invite recipient, and the user of the device800 (i.e., the device that sends the video conference request) as the invite requestor. Also, in this example, it is assumed that the invite recipient'sdevice900 is a dual camera device, like that of the invite requestor. However, in other examples, one or both of these devices are single camera devices.
Thefirst stage910 illustrates the UI905 when the invite recipient receives an invitation to a video conference from the invite requestor, John Smith. As shown inFIG. 9, the UI905 in this stage includes aname field995, amessage field940, and twoselectable UI items945 and950. Thename field995 displays the name of a person who is requesting a video conference. In some embodiments, thename field995 displays a phone number of the person who is requesting a video conference instead of the name of the person. Themessage field940 displays an invite from the invite requestor to the invite recipient. In this example, the “Video Conference Invitation” in thefield940 indicates that the invite requestor is requesting a video conference with the invite recipient. Theselectable UI items945 and950 (which can be implemented as selectable buttons) provide selectable Deny Request and AcceptRequest options945 and950 for the invite recipient to use to reject or accept the invitation. Different embodiments may display these options differently and/or display other options.
Upon seeing the “Video Conference Invitation” notation displayed in themessage field940, the invite recipient may deny or accept the request by selecting the DenyRequest option945 or AcceptRequest option950 in the UI, respectively. Thesecond stage915 illustrates that in the example shown inFIG. 9, the user selects the AcceptRequest option950. In this example, this selection is made by the user's finger tapping on the AcceptRequest option950, and this selection is indicated through the highlighting of thisoption950. Other techniques are provided in some embodiments to select the Accept or DenyRequest options945 and950 (e.g., double-tapping, etc.) to indicate the selection (e.g., highlighting the border or text of the UI item).
Thethird stage920 displays the UI905 after the invite recipient has agreed to join the video conference. In this stage, the UI905 enters into a preview mode that shows a full screen presentation of the video from the remote device's front camera in adisplay area944. The front camera in this case is pointed to the user of the remote device (i.e., Nancy Jones in this example). Accordingly, her image is shown in this preview mode. This preview mode allows the invite recipient to make sure that her video is displayed properly and that she is happy with her appearance before the video conference begins (e.g., before actual transmission of the video begins). In some embodiments, a notation, such as a “Preview” notation, may be displayed below thedisplay area944 to indicate that the invite recipient is in the preview mode.
Some embodiments allow the invite recipient to select the back camera as the default camera for the start of the video conference, or to select the front or back camera at the beginning of the video conference, as further described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.” Also, other embodiments display the preview display of the invite recipient differently (e.g., in a smaller image placed in the corner of the display area944). Yet other embodiments do not include this preview mode, but rather start the video conference immediately after the invite recipient accepts the request.
In the third stage, the UI905 shows twoselectable UI items975 and946, one of which overlaps thedisplay area944 while the other is below thisdisplay area944. Theselectable UI item975 is an Acceptbutton975 that the user may select to start video conferencing. The selectable UI item946 is an End button946 that the invite recipient can select if she decides not to join the video conference at this stage.
The fourth stage925 displays the UI905 after the invite recipient selects the Acceptbutton975. In this example, the Acceptbutton975 is highlighted to indicate that the invite recipient is ready to start the video conference. Such a selection may be indicated in different ways in other embodiments.
Thefifth stage930 illustrates the UI905 in a transitional state after the invite recipient has accepted the video conference request. In this transitional stage, thedisplay area944 that displays the video of the invite recipient (that is being captured by the front camera in this example) gradually decreases in size (i.e., gradually shrinks), as indicated by thearrows960. The invite recipient's video shrinks so that the UI905 can display a display area965 (e.g., a display window965) that contains the video from a camera of the invite requestor behind thedisplay area944. In other words, the shrinking of the invite recipient's video creates aPIP display980 that has a foregroundinset display area944 of the invite recipient's video and a background main display965 of the invite requestor.
In this example, the background main display965 presents a video of a man whose video is being captured by the local device's front camera (i.e., John Smith, the user of the local device800). In another example, this video could have been that of a man whose video is being captured by the local device's back camera (e.g., a man whose video is being captured by John Smith). Different embodiments may animate this transitional fifth stage differently.
The UI at thefifth stage930 also displays a display area855 (e.g., a tool bar or a menu bar) that includes selectable UI item985 (e.g., mute button985) for muting the audio of the other user during the video conference, selectable UI item987 (e.g., end conference button987) for ending the video conference, and selectable UI item989 (e.g., switch camera button989) for switching cameras, which is described in further detail below. As such, the invite recipient may select any of the selectable UI items985-989 (e.g., through a single finger tap) to perform the desired operation during the video conference. Different embodiments may allow the invite recipient to perform any of the operations in different ways, e.g., by toggling a switch on the mobile device, by giving voice commands, etc.
AlthoughFIG. 9 shows an example layout for thedisplay area855, some embodiments provide different layouts of thedisplay area855 such as the layout ofdisplay area855 ofFIG. 8, which includes just a selectable EndConference UI item832 for ending the video conference. Other layouts ofdisplay area855 can include any number of different selectable UI items for performing different functions. Moreover, thefifth stage930 shows thedisplay area855 displayed at the bottom of the UI905. Different embodiments of thedisplay area855 can be displayed at different locations within the UI905 and/or defined as different shapes.
FIG. 9 shows the display area 855 as a static display area (i.e., the display area 855 is always displayed). However, in some embodiments the display area 855 is a dynamic display area. In some such embodiments, the display area 855 is not ordinarily displayed. Rather, the display area 855 is displayed when a triggering event is received (e.g., a user selection such as tapping the display area 980 once, a voice command, etc.). The display area 855 disappears after a user selection is received (e.g., selecting the selectable mute UI item 985) or after a defined amount of time (e.g., 3 seconds), which can be specified by the user through the preference settings of the mobile device or the video conference application. In some such embodiments, the display area 855 is automatically displayed after the video conference starts and disappears in the same manner mentioned above.
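The dynamic behavior of such a display area can be sketched as a small state holder with a timeout, as in the hypothetical Python fragment below. The 3-second default mirrors the example above; everything else (names, the polling-style tick method) is an assumption made for illustration.

import time

class DynamicDisplayArea:
    """Overlay that appears on a trigger and hides after a timeout or a selection."""
    def __init__(self, timeout_seconds=3.0):
        self.timeout = timeout_seconds
        self.visible = False
        self.shown_at = None

    def on_trigger(self):               # e.g., a single tap on the PIP display
        self.visible = True
        self.shown_at = time.monotonic()

    def on_selection(self):             # e.g., the mute item was selected
        self.visible = False

    def tick(self):                     # called periodically by the UI loop
        if self.visible and time.monotonic() - self.shown_at >= self.timeout:
            self.visible = False

bar = DynamicDisplayArea()
bar.on_trigger()
print(bar.visible)    # True
bar.on_selection()
print(bar.visible)    # False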
Thesixth stage935 illustrates the UI905 after the animation of the fifth transitional stage has ended. Specifically, the sixth stage illustrates aPIP display980 that is presented by the UI905 during the video conference. As mentioned above, thisPIP display980 includes two video displays: a larger background display965 from the local camera and a smallerforeground inset display944 from the remote camera. ThisPIP display980 is only one manner of presenting a composite view of the videos being captured by the remote and local devices. In addition to this composite view, the devices of some embodiments provide other composite views. For example, instead of having a larger background display of the invite recipient, the larger background display can be of the invite requestor and the smaller foreground inset display of the invite recipient. As further described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call,” some embodiments allow a user to control the inset and main views in a PIP display to switchably display the local and remote cameras. Also, some embodiments allow the local and remote videos to appear in the UI905 in two side-by-side display areas (e.g., left and right display windows, or top and bottom display windows) or two diagonally aligned display areas. The manner of PIP display or a default display mode may be specified by the user in some embodiments through the preference settings of the device or through controls that the user can select during a video conference, as further described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.”
Although FIG. 9 shows the sequence of operations for presenting and accepting a video conference invitation in terms of six different operational stages, some embodiments may implement the operation in fewer stages. For instance, some of such embodiments may omit presenting the third and fourth stages 920 and 925 and go from the second stage 915 to the fifth stage 930 after the user selects the Accept Request option 950. Other embodiments that implement the operation (i.e., presenting and accepting a video conference invitation) in fewer stages may omit the first and second stages 910 and 915 and present the user with the third stage 920 when the invite recipient receives an invitation to a video conference from the invite requestor.
FIG. 10 illustrates an example of performing the operation illustrated in FIG. 9 in fewer stages by combining the first and third stages into one stage and the second and fourth stages into one stage. In particular, this figure illustrates a UI 905 of the remote user's device 900 at five different stages 1090, 1092, 1094, 930, and 935. The first stage 1090 is similar to the stage 810 except that the name field 995 displays the name "John Smith" to indicate the name of the person on the other end of the telephone call. That is, a phone call has been established between the user of the remote mobile device and the user of the local device (i.e., John Smith in this example). The second and third stages 1092 and 1094 are similar to the first and second stages 910 and 915 of FIG. 9 except that the second and third stages 1092 and 1094 also show a preview of the user of the remote mobile device (i.e., Nancy Jones in this example). The fourth and fifth stages 930 and 935 are the same as the fifth and sixth stages 930 and 935 of FIG. 9.
In addition to activating the video conference tool through a selectable option during a phone call, some embodiments allow a user of a dual camera device to initiate a video conference directly without having to make a phone call first.FIG. 11 illustrates another such alternative method to initiate a video conference. This figure illustrates theUI1105 at sevendifferent stages1110,1115,1120,1125,1130,1135, and1140 that show an alternative sequence of operations for starting a video conference.
In thefirst stage1110, a user is looking through a contacts list on this mobile device for the person with whom he wants to engage in a video conference, similar to how he would find a contact to call. In thesecond stage1115, the user selects theperson1155 with whom he would like to have a video conference (e.g., through asingle finger tap1160 on the person's name1155). This selection triggers theUI1105 to display the contact's information and various user selectable options. In this example, Jason'sname1155 is highlighted to indicate that this is the person with whom the user would like to have a video conference. Different embodiments may indicate such a selection in different ways. While thesecond stage1115 allows the user of thedevice1100 to select a person with whom the user would like to have a video conference through a contact list, some embodiments allow the user to select the person through a “Recents” call history that lists a particular number or name of a person with whom the user of thedevice1100 recently had a video conference or a phone call.
In thethird stage1120, theUI1105 displays the selected person'sinformation1162 and variousselectable UI items1168,1172, and1170 after the person'sname1155 has been selected. In this example, one of the various selectable UI items1172 (which can be implemented as a selectable icon or button) provides a video conference tool. TheVideo Conference option1172 allows the user to invite the person identified by thecontact1166 to a video conference. Different embodiments display theinformation1162 andselectable UI items1168,1172, and1170 differently (e.g., in a different arrangement).
Thefourth stage1125 shows the user selecting the Video Conference option1172 (e.g., through a single finger tap). In this example, theVideo Conference option1172 is highlighted to indicate that thevideo conference tool1172 has been activated. Such selections may be indicated differently in different embodiments (e.g., by highlighting the text or border of the selected icon).
The fifth, sixth andseventh stages1130,1135, and1140 are similar to the third, fourth andfifth stages820,825, and830 illustrated inFIG. 8 and may be understood by reference to the discussion of those stages. In brief, thefifth stage1130 illustrates a transitional holding stage that waits for the remote user to respond to the invitation to a video conference. Thesixth stage1135 illustrates that after the remote user has accepted the video conference request, the display area1180 (that displays the video of the local user) gradually decreases in size so theUI1105 can show adisplay area1192 that contains the video from a camera of the remote user behind thedisplay area1180. In theseventh stage1140, thePIP display1147 is presented by theUI1105 during the video conference. In some embodiments, the layout ofdisplay area855 in thesixth stage1135 and theseventh stage1140 is like the layout of thedisplay area855 ofFIG. 9, described above.
FIGS. 7, 8, 9, 10, and 11 show several ways of establishing a video conference. In some embodiments, during a telephone call, audio data (e.g., voice) is transmitted through one communication channel (over a communication network like a circuit-switched communication network or a packet-switched communication network) and, during a video conference, audio data is transmitted through another communication channel. Thus, in such embodiments, audio data (e.g., voice) is transmitted through a communication channel before the video conference is established, and once the video conference is established, audio is transmitted through a different communication channel (instead of the communication channel used during the telephone call).
In order to provide a seamless transition (e.g., handoff) of audio data from the telephone call to the video conference, some embodiments do not terminate the telephone call before establishing the video conference. For instance, some embodiments establish a peer-to-peer video conference connection (e.g., after completing the message sequence illustrated in FIG. 7) before terminating the phone call and starting to transmit audio/video data through the peer-to-peer communication session. Alternatively, other embodiments establish a peer-to-peer video conference connection (e.g., after completing the message sequence illustrated in FIG. 7) and start transmitting audio/video data through the peer-to-peer communication session before terminating the phone call and starting to present the received audio/video data.
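The two orderings described above can be contrasted in a short Python sketch; the classes are stubs and the method names are hypothetical, but the sequence of calls mirrors the two described hand-off variants, and in both the conference connection exists before the call ends.

class PhoneCall:
    def terminate(self):            print("phone call ended")

class Conference:
    def connect(self):              print("peer-to-peer connection established")
    def start_sending_audio(self):  print("audio/video sent over conference channel")
    def start_presenting(self):     print("presenting received audio/video")

def hand_off_audio(call, conference, transmit_before_call_ends=False):
    conference.connect()                     # connection exists before the call ends
    if transmit_before_call_ends:
        conference.start_sending_audio()     # variant: transmit first ...
        call.terminate()                     # ... then end the call and present
        conference.start_presenting()
    else:
        call.terminate()                     # variant: end the call first ...
        conference.start_sending_audio()     # ... then transmit and present
        conference.start_presenting()

hand_off_audio(PhoneCall(), Conference(), transmit_before_call_ends=True)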
A peer-to-peer video conference connection of some embodiments allows the mobile devices in the video conference to directly communicate with each other (instead of communicating through a central server, for example). Some embodiments of a peer-to-peer video conference allow the mobile devices in the video conferences to share resources with each other. For instance, through a control communication channel of a video conference, one mobile device can remotely control operations of another mobile device in the video conference by sending instructions from the one mobile device to the other mobile device to direct the other mobile device to process images differently (i.e., share its image processing resource) such as an exposure adjustment operation, a focus adjustment operation, and/or a switch camera operation, described in further detail below.
C. Video Conference Architecture
As mentioned above, FIG. 12 conceptually illustrates a software architecture for a video conferencing and processing module 1200 of a dual camera mobile device of some embodiments. As shown, the video conferencing and processing module 1200 includes a client application 1265, a video conference module 1202, a media exchange module 1220, a buffer 1225, a captured image processing unit (CIPU) driver 1230, an encoder driver 1235, and a decoder driver 1240. In some embodiments, the buffer 1225 is a frame buffer that stores images of a video for display on a display 1245 of the dual camera mobile device.
In some embodiments, the client application 1265 is the same as the video conference client 645 of FIG. 6. As mentioned above, the client application 1265 may be integrated into another application or implemented as a stand-alone application. The client application 1265 may be an application that uses the video conferencing functions of the video conference module 1202, such as a video conferencing application, a voice-over-IP (VOIP) application (e.g., Skype), or an instant messaging application.
The client application 1265 of some embodiments sends instructions to the video conference module 1202, such as instructions to start a conference and end a conference, receives instructions from the video conference module 1202, routes instructions from a user of the dual camera mobile device to the video conference module 1202, and generates user interfaces that are displayed on the dual camera mobile device and allow a user to interact with the application.
D. Video Conference Manager
As shown in FIG. 12, the video conference module 1202 includes a video conference manager 1204, an image processing manager 1208, a networking manager 1214, and buffers 1206, 1210, 1212, 1216, and 1218. In some embodiments, the video conference module 1202 is the same as the video conference module 625 illustrated in FIG. 6 and thus performs some or all of the same functions described above for the video conference module 625.
In some embodiments, the video conference manager 1204 is responsible for initializing some or all of the other modules of the video conference module 1202 (e.g., the image processing manager 1208 and the networking manager 1214) when a video conference is starting, controlling the operation of the video conference module 1202 during the video conference, and ceasing the operation of some or all of the other modules of the video conference module 1202 when the video conference is ending.
The video conference manager 1204 of some embodiments also processes images received from one or more devices in the video conference and images captured by one or both cameras of the dual camera mobile device for display on the dual camera mobile device. For instance, the video conference manager 1204 of some embodiments retrieves decoded images, which were received from another device participating in the video conference, from the buffer 1218 and retrieves images processed by the CIPU 1250 (i.e., images captured by the dual camera mobile device) from the buffer 1206. In some embodiments, the video conference manager 1204 also scales and composites the images before displaying the images on the dual camera mobile device. That is, the video conference manager 1204 generates the PIP or other composite views to display on the mobile device in some embodiments. Some embodiments scale the images retrieved from the buffers 1206 and 1218, while other embodiments just scale images retrieved from one of the buffers 1206 and 1218.
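As an illustration of the scale-and-composite step, the following Python sketch builds a simple PIP frame from two images represented as 2-D lists of pixel values. The nearest-neighbor scaling and the lower-right inset placement are arbitrary choices made for the sketch, not the compositing used by any embodiment.

def scale_nearest(image, new_w, new_h):
    """Nearest-neighbor scaling of a 2-D list of pixel values."""
    old_h, old_w = len(image), len(image[0])
    return [[image[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)] for y in range(new_h)]

def composite_pip(remote, local, inset_scale=0.25):
    """Paste a scaled-down local image into the lower-right corner of the
    remote image, producing a simple PIP frame."""
    frame = [row[:] for row in remote]                     # background: remote video
    inset_h = max(1, int(len(remote) * inset_scale))
    inset_w = max(1, int(len(remote[0]) * inset_scale))
    inset = scale_nearest(local, inset_w, inset_h)         # foreground: local video
    y0, x0 = len(frame) - inset_h, len(frame[0]) - inset_w
    for y in range(inset_h):
        for x in range(inset_w):
            frame[y0 + y][x0 + x] = inset[y][x]
    return frame

remote = [["R"] * 8 for _ in range(8)]
local = [["L"] * 4 for _ in range(4)]
for row in composite_pip(remote, local):
    print("".join(row))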
Although FIG. 12 illustrates the video conference manager 1204 as part of the video conference module 1202, some embodiments of the video conference manager 1204 are implemented as a component separate from the video conference module 1202. As such, a single video conference manager 1204 can be used to manage and control several video conference modules 1202. For instance, some embodiments will run a separate video conference module on the local device to interact with each party in a multi-party conference, and each of these video conference modules on the local device is managed and controlled by the one video conference manager.
The image processing manager 1208 of some embodiments processes images captured by the cameras of the dual camera mobile device before the images are encoded by the encoder 1255. For example, some embodiments of the image processing manager 1208 perform one or more of exposure adjustment, focus adjustment, perspective correction, dynamic range adjustment, and image resizing on images processed by the CIPU 1250. In some embodiments, the image processing manager 1208 controls the frame rate of encoded images that are transmitted to the other device in the video conference.
Some embodiments of the networking manager 1214 manage one or more connections between the dual camera mobile device and the other device participating in the video conference. For example, the networking manager 1214 of some embodiments establishes the connections between the dual camera mobile device and the other device of the video conference at the start of the video conference and tears down these connections at the end of the video conference.
During the video conference, the networking manager 1214 transmits images encoded by the encoder 1255 to the other device of the video conference and routes images received from the other device of the video conference to the decoder 1260 for decoding. In some embodiments, the networking manager 1214, rather than the image processing manager 1208, controls the frame rate of the images that are transmitted to the other device of the video conference. For example, some such embodiments of the networking manager 1214 control the frame rate by dropping (i.e., not transmitting) some of the encoded frames that are supposed to be transmitted to the other device of the video conference.
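Dropping encoded frames to approximate a target transmission rate can be sketched as follows. This is a hypothetical Python illustration of one simple way to skip frames at a regular interval, not the rate control mechanism of any embodiment.

def throttle_frames(encoded_frames, capture_fps, target_fps):
    """Keep only enough encoded frames to approximate the target frame rate
    by skipping frames at a regular interval before transmission."""
    if target_fps >= capture_fps:
        return list(encoded_frames)
    keep, sent, acc = [], 0, 0.0
    step = target_fps / float(capture_fps)
    for frame in encoded_frames:
        acc += step
        if acc >= sent + 1:
            keep.append(frame)     # transmit this frame
            sent += 1
        # otherwise the frame is dropped (not transmitted)
    return keep

frames = ["f%d" % i for i in range(30)]           # 30 frames captured in one second
print(len(throttle_frames(frames, 30, 15)))       # 15 frames actually transmitted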
As shown, the media exchange module 1220 of some embodiments includes a camera source module 1222, a video compressor module 1224, and a video decompressor module 1226. The media exchange module 1220 is the same as the media exchange module 310 shown in FIG. 3, with more detail provided. The camera source module 1222 routes messages and media content between the video conference module 1202 and the CIPU 1250 through the CIPU driver 1230, the video compressor module 1224 routes messages and media content between the video conference module 1202 and the encoder 1255 through the encoder driver 1235, and the video decompressor module 1226 routes messages and media content between the video conference module 1202 and the decoder 1260 through the decoder driver 1240. Some embodiments implement the TNR module 315 included in the media exchange module 310 (not shown in FIG. 12) as part of the camera source module 1222, while other embodiments implement the TNR module 315 as part of the video compressor module 1224.
In some embodiments, the CIPU driver 1230 and the encoder driver 1235 are the same as the CIPU driver 305 and the encoder driver 320 illustrated in FIG. 3. The decoder driver 1240 of some embodiments acts as a communication interface between the video decompressor module 1226 and the decoder 1260. In such embodiments, the decoder 1260 decodes images received from the other device of the video conference through the networking manager 1214 and routed through the video decompressor module 1226. After the images are decoded, they are sent back to the video conference module 1202 through the decoder driver 1240 and the video decompressor module 1226.
In addition to performing video processing during a video conference, the video conferencing andprocessing module1200 for the dual camera mobile device of some embodiments also performs audio processing operations during the video conference.FIG. 13 illustrates such a software architecture. As shown, the video conferencing andprocessing module1200 includes the video conference module1202 (which includes thevideo conference manager1204, theimage processing manager1208, and the networking manager1214), themedia exchange module1220, and theclient application1265. Other components and modules of the video conferencing andprocessing module1200 shown inFIG. 12 are omitted inFIG. 13 to simplify the description. The video conferencing andprocessing module1200 also includesframe buffers1305 and1310,audio processing manager1315, andaudio driver1320. In some embodiments, theaudio processing manager1315 is implemented as a separate software module while in other embodiments theaudio processing manager1315 is implemented as part of themedia exchange module1220.
The audio processing manager 1315 processes audio data captured by the dual camera mobile device for transmission to the other device in the video conference. For example, the audio processing manager 1315 receives audio data through the audio driver 1320, which is captured by the microphone 1325, and encodes the audio data before storing the encoded audio data in the buffer 1305 for transmission to the other device. The audio processing manager 1315 also processes audio data captured by and received from the other device in the video conference. For instance, the audio processing manager 1315 retrieves audio data from the buffer 1310 and decodes the audio data, which is then output through the audio driver 1320 to the speaker 1330.
In some embodiments, the video conference module 1202 along with the audio processing manager 1315 and its associated buffers are part of a larger conference module. When a multi-participant audio conference is conducted between several devices without exchange of video content, this video conferencing and processing module 1200 only uses the networking manager 1214 and the audio processing manager 1315 to facilitate the exchange of audio over an Internet Protocol (IP) layer.
The operation of the video conference manager 1204 of some embodiments will now be described by reference to FIG. 14. FIG. 14 conceptually illustrates a process 1400 performed by a video conference manager of some embodiments, such as the video conference manager 1204 illustrated in FIG. 12. The process 1400 can equivalently be viewed as being performed by the management layer 635 of FIG. 6. In some embodiments, the video conference manager 1204 performs the process 1400 when a user of the dual camera mobile device accepts (e.g., through a user interface displayed on the dual camera mobile device) a video conference request or when a user of another device accepts a request sent by the user of the dual camera mobile device.
Theprocess1400 begins by receiving (at1405) instructions to start a video conference. In some embodiments, the instructions are received from theclient application1265 or are received from a user through a user interface displayed on the dual camera mobile device and forwarded to thevideo conference manager1204 by theclient application1265. For example, in some embodiments, when a user of the dual camera mobile device accepts a video conference request, the instructions are received through the user interface and forwarded by the client application. On the other hand, when a user of the other device accepts a request sent from the local device, some embodiments receive the instructions from the client application without user interface interaction (although there may have been previous user interface interaction to send out the initial request).
Next, the process 1400 initializes (at 1410) a first module that interacts with the video conference manager 1204. The modules of some embodiments that interact with the video conference manager 1204 include the CIPU 1250, the image processing manager 1208, the audio processing manager 1315, and the networking manager 1214.
In some embodiments, initializing theCIPU1250 includes instructing theCIPU1250 to start processing images captured by one or both cameras of the dual camera mobile device. Some embodiments initialize theimage processing manager1208 by instructing theimage processing manager1208 to start retrieving images from thebuffer1210 and processing and encoding the retrieved images. To initialize theaudio processing manager1315, some embodiments instruct theaudio processing manager1315 to begin encoding audio data captured by themicrophone1325 and decoding audio data stored in the buffer1310 (which was received from the other device) in order to output to thespeaker1330. The initializing of thenetworking manager1214 of some embodiments includes instructing thenetworking manager1214 to establish a network connection with the other device in the video conference.
Theprocess1400 then determines (at1415) whether there are any modules left to initialize. When there are modules left to initialize, theprocess1400 returns tooperation1410 to initialize another of the modules. When all of the required modules have been initialized, theprocess1400 generates (at1420) composite images for displaying on the dual camera mobile device (i.e., local display). These composite images may include those illustrated inFIG. 65 in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call,” and can include various combinations of images from the cameras of the local dual camera mobile device and images from cameras of the other device participating in the video conference.
Next, theprocess1400 determines (at1425) whether a change has been made to the video conference. Some embodiments receive changes to the video conference through user interactions with a user interface displayed on the dual camera mobile device while other embodiments receive changes to the video conference from the other device through the networking manager1214 (i.e., remote control). The changes to video conference settings may also be received from theclient application1265 or other modules in thevideo conference module1202 in some embodiments. The video conference settings may also change due to changes in the network condition.
When a change has been made, theprocess1400 determines (at1430) whether the change to the video conference is a change to a network setting. In some embodiments, the changes are either network setting changes or image capture setting changes. When the change to the video conference is a change to a network setting, the process modifies (at1440) the network setting and then proceeds tooperation1445. Network setting changes of some embodiments include changing the bit rate at which images are encoded or the frame rate at which the images are transmitted to the other device.
When the change to the video conference is not a change to a network setting, the process 1400 determines that the change is a change to an image capture setting and then proceeds to operation 1435. The process 1400 then performs (at 1435) the change to the image capture setting. In some embodiments, changes to the image capture settings may include switching cameras (i.e., switching which camera on the dual camera mobile device will capture video), focus adjustment, exposure adjustment, displaying or not displaying images from one or both cameras of the dual camera mobile device, and zooming in or out of images displayed on the dual camera mobile device, among other setting changes.
At operation 1445, the process 1400 determines whether to end the video conference. When the process 1400 determines not to end the video conference, the process 1400 returns to operation 1420. When the process 1400 determines that the video conference will end, the process 1400 ends. Some embodiments of the process 1400 determine to end the video conference when the process 1400 receives instructions from the client application 1265 to end the video conference (i.e., due to instructions received through the user interface of the local dual camera mobile device or received from the other device participating in the video conference).
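The overall flow of the process 1400 (initialize the modules, generate composite displays, react to network or image capture setting changes, and end) can be summarized with the hypothetical Python sketch below. The manager, module, and change structures are stand-ins invented purely for illustration.

def run_video_conference(manager, modules, poll_change, should_end):
    """Main loop loosely following process 1400."""
    for module in modules:                       # operations 1410/1415
        module.initialize()
    while not should_end():                      # operation 1445
        manager.generate_composite_display()     # operation 1420
        change = poll_change()                   # operation 1425
        if change is None:
            continue
        if change["kind"] == "network":          # operation 1440
            manager.apply_network_setting(change)
        else:                                    # operation 1435
            manager.apply_image_capture_setting(change)

class StubModule:
    def initialize(self): print("module initialized")

class StubManager:
    def generate_composite_display(self):      print("composite frame displayed")
    def apply_network_setting(self, change):   print("bit rate set to", change["bit_rate"])
    def apply_image_capture_setting(self, c):  print("camera switched")

changes = iter([{"kind": "network", "bit_rate": 500_000}, None])
ticks = iter([False, False, True])
run_video_conference(StubManager(), [StubModule()],
                     poll_change=lambda: next(changes),
                     should_end=lambda: next(ticks))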
In some embodiments, thevideo conference manager1204 performs various operations when the video conference ends that are not shown inprocess1400. Some embodiments instruct theCIPU1250 to stop producing images, thenetworking manager1214 to tear down the network connection with the other device in the video conference, and theimage processing manager1208 to stop processing and encoding images.
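The control flow of the process 1400 can be sketched in simplified form as follows. The module names, the change representation, and the stubbed operations in this sketch are illustrative placeholders only; they are not the actual interfaces of the video conference module described above.

class StubModule:
    def __init__(self, name):
        self.name = name
    def initialize(self):
        print("initializing " + self.name)
    def shutdown(self):
        print("shutting down " + self.name)

def run_conference(modules, changes, max_iterations=3):
    # Operations 1410-1415: initialize every required module in turn.
    for module in modules:
        module.initialize()
    for _ in range(max_iterations):
        # Operation 1420: generate composite images for the local display.
        print("generate composite images for local display")
        # Operation 1425: check for a change to the video conference.
        change = changes.pop(0) if changes else None
        if change is not None:
            kind, setting = change
            if kind == "network":
                # Operations 1430/1440: modify the network setting (e.g., bit rate).
                print("modify network setting: " + setting)
            else:
                # Operation 1435: modify the image capture setting.
                print("modify image capture setting: " + setting)
        # Operation 1445: decide whether to end; here the iteration limit stands in
        # for an end-of-conference instruction from the client application.
    # Teardown (not part of process 1400): stop the CIPU, tear down the connection, etc.
    for module in modules:
        module.shutdown()

run_conference([StubModule("CIPU"), StubModule("networking manager")],
               [("network", "bit rate"), ("capture", "switch camera")])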
E. Temporal Noise Reduction
Some embodiments include a specific temporal noise reduction module for processing video images to reduce noise in the video. The temporal noise reduction module of some embodiments compares subsequent images in a video sequence to identify and eliminate unwanted noise from the video.
FIG. 15 conceptually illustrates a software architecture for such a temporal noise reduction (TNR)module1500 of some embodiments. Some embodiments implement theTNR module1500 as part of an application (e.g., as part of the media exchange module as shown inFIG. 3) while other embodiments implement theTNR module1500 as a stand-alone application that is used by other applications. Yet other embodiments implement theTNR module1500 as part of an operating system running on the dual camera mobile device. In some embodiments, theTNR module1500 is implemented by a set of APIs that provide some or all of the functionalities of theTNR module1500 to other applications.
As shown inFIG. 15, theTNR module1500 includes aTNR manager1505, adifference module1510, apixel averaging module1515, and amotion history module1520. WhileFIG. 15 shows the threemodules1510,1515, and1520 as separate modules, some embodiments implement the functionalities of these modules, described below, in a single module. TheTNR module1500 of some embodiments receives as input an input image, a reference image, and a motion history. In some embodiments, the input image is the image presently being processed while the reference image is the previous image in the video sequence, to which the input image is compared. TheTNR module1500 outputs an output image (a version of the input image with reduced noise) and an output motion history.
TheTNR manager1505 of some embodiments directs the flow of data through theTNR module1500. As shown, theTNR manager1505 receives the input image, the reference image, and the motion history. TheTNR manager1505 also outputs the output image and the output motion history. TheTNR manager1505 sends the input image and the reference image to thedifference module1510 and receives a difference image from thedifference module1510.
In some embodiments, thedifference module1510 processes the data received from theTNR manager1505 and sends the processed data to theTNR manager1505. As shown, thedifference module1510 receives the input image and the reference image from theTNR manager1505. Thedifference module1510 of some embodiments generates a difference image by subtracting the pixel values of one image from the pixel values of the other image. The difference image is sent to theTNR manager1505. The difference image of some embodiments indicates the difference between the two images in order to identify sections of the input image that have changed and sections of the input image that have stayed the same as compared to the previous image.
The TNR manager 1505 also sends the input image and the reference image to the pixel averaging module 1515. As shown, some embodiments send the motion history to the pixel averaging module 1515 as well. Other embodiments, however, might send only the input image and the reference image without the motion history. In either case, the TNR manager 1505 receives a processed image from the pixel averaging module 1515.
Thepixel averaging module1515 of some embodiments uses the motion history to determine whether to take an average of the pixels from the input and reference images for a particular location in the image. In some embodiments, the motion history includes a probability value for each pixel in the input image. A particular probability value represents the probability that the corresponding pixel in the input image has changed (i.e., a dynamic pixel) with respect to the corresponding pixel in the reference image. For instance, if the probability value of a particular pixel in the input image is 20, that indicates a probability of 20% that the particular pixel in the input image has changed with respect to the corresponding pixel in the reference image. As another example, if the probability value of a particular pixel in the input image is 0, that indicates that the particular pixel in the input image has not changed (i.e., a static pixel) with respect to the corresponding pixel in the reference image.
Different embodiments store the probability values of the input image differently. Some embodiments might store the probability values of each pixel of the input image in one array of data. Other embodiments might store the probability values in a matrix (e.g., an array of arrays) with the same dimensions as the resolution of the images of the video. For example, if the resolution of the images of the video is 320×240, then the matrix is also 320×240.
When thepixel averaging module1515 receives the motion history in addition to the input image and reference image from theTNR manager1505, thepixel averaging module1515 reads the probability values of each pixel in the input image. If the probability value for a particular pixel in the input image is below a defined threshold (e.g., 5%, 20%), thepixel averaging module1515 averages the particular pixel value with the corresponding pixel value in the reference image based on the premise that there is not likely to be motion at the particular pixel, and thus differences between the images at that pixel may be attributable to noise.
If the probability for the particular pixel in the input image is not below the defined threshold, the pixel averaging module 1515 does not modify the particular pixel of the input image (i.e., the pixel value at that pixel stays the same as in the input image). This is because motion is more likely at the particular pixel, so differences between the images at that pixel are less likely to be the result of noise. In some embodiments, when the motion history is not sent to the pixel averaging module 1515, the pixel averaging module 1515 averages each pixel in the input image with the corresponding pixel in the reference image. The processed image that is output by the pixel averaging module 1515 and sent to the TNR manager 1505 includes the input image pixel values for any pixels that were not averaged and the averaged pixel values for any pixels that were averaged by the pixel averaging module 1515.
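For illustration, the per-pixel decision described above can be sketched with a few lines of array code. The 0-100 probability scale and the 20% threshold are carried over from the examples above, while the array layout and function names are assumptions for this sketch only.

import numpy as np

def average_pixels(input_img, reference_img, motion_history=None, threshold=20):
    """Return a noise-reduced image and a mask of the pixels that were averaged."""
    input_img = input_img.astype(np.float32)
    reference_img = reference_img.astype(np.float32)
    if motion_history is None:
        # Without a motion history, every pixel is averaged with the reference image.
        averaged = (input_img + reference_img) / 2.0
        return averaged, np.ones(input_img.shape, dtype=bool)
    # Average only where the probability of motion is below the threshold,
    # i.e., where a difference is more likely to be noise than motion.
    static_mask = motion_history < threshold
    output = input_img.copy()
    output[static_mask] = (input_img[static_mask] + reference_img[static_mask]) / 2.0
    return output, static_mask

# Example: a 4x4 grayscale frame pair with likely motion only in the lower-right corner.
inp = np.random.randint(0, 256, (4, 4))
ref = np.random.randint(0, 256, (4, 4))
history = np.zeros((4, 4))
history[2:, 2:] = 80          # 80% probability of motion -> those pixels are left unmodified
out, mask = average_pixels(inp, ref, history)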
In some embodiments, themotion history module1520 processes data received from theTNR manager1505 and sends the result data back to theTNR manager1505. Themotion history module1520 of some embodiments receives the input image and the motion history from theTNR manager1505. Some embodiments input this data into a Bayes estimator in order to generate a new motion history (i.e., a set of probability values) that can be used in the pixel averaging for the next input image. Other embodiments use other estimators to generate the new motion history.
The operation of theTNR module1500 will now be described by reference toFIG. 16. This figure conceptually illustrates aprocess1600 of some embodiments for reducing temporal noise of images of a video. Theprocess1600 starts by theTNR manager1505 receiving (at1605) an input image, a reference image, and a motion history. The input image is the image presently being processed for noise reduction. In some embodiments, the reference image is the previous image of a sequence of images of the video as received from the CIPU. In other embodiments, however, the reference image is the output image generated from the processing of the previous input image (i.e., the output of TNR module1500). The motion history is the output motion history generated from the processing of the previous input image.
When the input image is a first image of the video, theTNR module1500 of some embodiments does not process (i.e., apply TNR to) the first image. In other words, theTNR manager1505 receives the first image and just outputs the first image. In other embodiments, when the input image is the first image of the video, the first image is used as the input image and the reference image and theTNR module1500 processes the image as described below. Further, when the input image is the first image of the video, the motion history is empty (e.g., null, full of zeros, etc.) and theTNR manager1505 just outputs an empty motion history as the output motion history.
TheTNR manager1505 then determines (at1610) whether the input image is static. In order to make this determination, some embodiments send the input image and the reference image to thedifference module1510 and receive a difference image from thedifference module1510. When the difference between the two images is below a defined threshold (e.g., 5% difference, 10% difference, etc.), some embodiments classify the input image as static.
When the input image is a static image, theTNR manager1505 sends the input image and the reference image to thepixel averaging module1515 to average (at1615) the pixels of the input image with the pixels of the reference image in order to reduce any noise from the static image. The process then proceeds to1640, which is described below.
When the input image is not a static image, the TNR manager sends the input image, reference image, and motion history to thepixel averaging module1515 for processing. Thepixel averaging module1515 selects (at1620) a pixel in the input image. Using the motion history, thepixel averaging module1515 determines (at1625) whether the pixel's probability of motion is below a particular threshold, as described above.
If the selected pixel's probability is below the particular threshold, thepixel averaging module1515 averages (at1630) the pixel of the input image with the corresponding pixel in the reference image. Otherwise, the pixel is not averaged and the output image will be the same as the input image at that particular pixel. Thepixel averaging module1515 then determines (at1635) whether there are any unselected pixels left in the input image. If any pixels have not yet been processed, the process returns tooperation1620 to select the next pixel. Thepixel averaging module1515 performs the operations1620-1630 until all pixels have been evaluated.
The process then updates (at1640) the motion history. As shown inFIG. 15 and described above, themotion history module1520 updates the motion history based on the input image. The new motion history is output by the TNR manager along with the processed image from the pixel averaging module.
F. Image Processing Manager & Encoder
In addition to temporal noise reduction and image processing operations performed by the CIPU and/or CIPU driver, some embodiments perform a variety of image processing operations at theimage processing layer630 of thevideo conference module625. These image processing operations may include exposure adjustment, focus adjustment, perspective correction, adjustment of dynamic range, and image resizing, among others.
FIG. 17 conceptually illustrates a process1700 for performing such image processing operations. In some embodiments, some or all of the operations of the process1700 are performed by a combination of theimage processing manager1208 and theencoder driver1235 ofFIG. 12. In some of such embodiments, theimage processing manager1208 performs the pixel-based processing (e.g., resizing, dynamic range adjustment, perspective correction, etc.). Some embodiments perform process1700 during a video conference on images that are to be transmitted to another device participating in the video conference.
The process1700 will now be described by reference toFIG. 12. The process starts by retrieving (at1705) an image from thebuffer1206. In some embodiments, the retrieved image is an image of a video (i.e., an image in a sequence of images). This video may have been captured by a camera of a device on which the process1700 is performed.
Next, the process1700 performs (at1710) exposure adjustment on the retrieved image. Some embodiments perform exposure adjustments through a user interface that is displayed on the dual camera mobile device.FIG. 18 illustrates an example exposure adjustment operation of such embodiments.
This figure illustrates the exposure adjustment operation by reference to three stages 1810, 1815, and 1820 of a UI 1805 of a device 1800. The first stage 1810 illustrates the UI 1805, which includes a display area 1825 and a display area 855. As shown, the display area 1825 displays an image 1830 of a sun and a man with a dark face and body. The dark face and body indicate that the man is not properly exposed. The image 1830 could be a video image captured by a camera of the device 1800. As shown, the display area 855 includes a selectable UI item 1850 for ending the video conference. In some embodiments, the layout of the display area 855 is the same as the layout of the display area 855 of FIG. 9, described above.
Thesecond stage1815 illustrates a user of thedevice1800 initiating an exposure adjustment operation by selecting an area of thedisplay area1825. In this example, a selection is made by placing afinger1835 anywhere within thedisplay area1825. In some embodiments, a user selects exposure adjustment from a menu of possible image setting adjustments.
Thethird stage1820 shows animage1840 of the man after the exposure adjustment operation is completed. As shown, theimage1840 is similar to theimage1830, but the man in theimage1840 is properly exposed. In some embodiments, the properly exposed image is an image that is captured after the improperly exposed image. The exposure adjustment operation initiated in thesecond stage1815 adjusts the exposure of subsequent images captured by the camera of thedevice1800.
Returning toFIG. 17, the process1700 next performs (at1715) focus adjustment on the image. Some embodiments perform focus adjustment through a user interface that is displayed on the dual camera mobile device.FIG. 19 conceptually illustrates an example of such focus adjustment operations.
FIG. 19 illustrates a focus adjustment operation by reference to threedifferent stages1910,1915, and1920 of a UI1905 of a device1900. Thefirst stage1910 illustrates the UI1905 including adisplay area1925 and adisplay area855. Thedisplay area1925 presents ablurry image1930 of a man captured by a camera of the device1900. The blurriness indicates that theimage1930 of the man is out of focus. That is, the lens of the camera was not focused on the man when theimage1930 of the man was captured by the camera. Also, theimage1930 could be a video image captured by a camera of the device1900. As shown, thedisplay area855 includes aselectable UI item1950 for ending the video conference. In some embodiments, the layout of thedisplay area855 is the same as the layout of thedisplay area855 ofFIG. 9, described above.
Thesecond stage1915 illustrates a user of the device1900 initiating a focus adjustment operation by selecting an area of thedisplay area1925. In this example, a selection is made by placing afinger1935 anywhere within thedisplay area1925. In some embodiments, a user selects focus adjustment from a menu of possible image setting adjustments.
Thethird stage1920 shows animage1940 of the man after the focus adjustment operation is completed. As shown, theimage1940 is the same as theimage1930, but the man in theimage1940 appears sharper. This indicates that the lens of the camera is properly focused on the man. In some embodiments, the properly focused image is an image that is captured after the improperly focused image. The focus adjustment operation initiated in thesecond stage1915 adjusts the focus of subsequent images captured by the camera of the device1900.
Back toFIG. 17, the process1700 performs (at1720) image resizing on the image. Some embodiments perform image resizing on the image to reduce the number of bits used to encode the image (i.e., lower the bit rate). In some embodiments, the process1700 performs image resizing as described below by reference toFIG. 22.
The process1700 next performs (at1725) perspective correction on the image. In some embodiments, the process1700 performs perspective correction as described inFIG. 20 below. Such perspective correction involves using data taken by one or more accelerometer and/or gyroscope sensors that identifies orientation and movement of the dual camera mobile device. This data is then used to modify the image to correct for the perspective being off.
After perspective correction is performed on the image, the process1700 adjusts (at1730) the dynamic range of the image. In some embodiments, the dynamic range of an image is the range of possible values that each pixel in the image can have. For example, an image with a dynamic range of 0-255 can be adjusted to a range of 0-128 or any other range of values. Adjusting the dynamic range of an image can reduce the amount of bits that will be used to encode the image (i.e., lower the bit rate) and thereby smooth out the image.
Adjusting the dynamic range of an image can also be used for various other purposes. One purpose is to reduce image noise (e.g., the image was captured by a noisy camera sensor). To reduce noise, the dynamic range of the image can be adjusted so that the black levels are redefined to include lighter blacks (i.e., crush blacks). In this manner, the noise of the image is reduced. Another purpose of dynamic range adjustment is to adjust one or more colors or range of colors in order to enhance the image. For instance, some embodiments may assume that the image captured by the front camera is an image of a person's face. Accordingly, the dynamic range of the image can be adjusted to increase the red and pink colors to make the person's cheeks appear rosy/rosier. The dynamic range adjustment operation can be used for other purposes as well.
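As a rough illustration of such an adjustment, the sketch below linearly remaps an 8-bit image into the 0-128 range mentioned above and crushes near-black values to hide dark-region noise. The specific mapping and the crush level are assumptions for the example, not the adjustment actually used.

import numpy as np

def adjust_dynamic_range(image, new_min=0, new_max=128, black_crush=16):
    """Compress an 8-bit image into [new_min, new_max] and crush near-black noise."""
    img = image.astype(np.float32)
    # Crush blacks: anything at or below `black_crush` is treated as pure black,
    # which hides low-level sensor noise in dark regions.
    img = np.clip(img - black_crush, 0, None)
    old_max = 255.0 - black_crush
    scaled = new_min + (img / old_max) * (new_max - new_min)
    return scaled.round().astype(np.uint8)

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
compressed = adjust_dynamic_range(frame)     # pixel values now fall in the range 0-128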
Finally, the process1700 determines (at1735) one or more rate controller parameters that are used to encode the image. Such rate controller parameters may include a quantization parameter and a frame type (e.g., predictive, bi-directional, intra-coded) in some embodiments. The process then ends.
While the various operations of process1700 are illustrated as being performed in a specific order, one of ordinary skill will recognize that many of these operations (exposure adjustment, focus adjustment, perspective correction, etc.) can be performed in any order and are not dependent on one another. That is, the process of some embodiments could perform focus adjustment before exposure adjustment, or similar modifications to the process illustrated inFIG. 17.
1. Perspective Correction
As mentioned above, some embodiments perform perspective correction on an image before displaying or transmitting the image. In some cases, one or more of the cameras on a dual camera mobile device will not be oriented properly with its subject and the subject will appear distorted in an uncorrected image. Perspective correction may be used to process the images so that the images will closely reflect how the objects in the images appear in person.
FIG. 20 conceptually illustrates aperspective correction process2000 performed by an image processing manager of some embodiments such as that illustrated inFIG. 12. Theprocess2000 of some embodiments is performed by theimage processing layer630 shown inFIG. 6 (which may contain an image processing manager1208). Some embodiments perform theprocess2000 atoperation1725 of process1700, in order to correct the perspective of recently captured video images before displaying or transmitting the images.
Theprocess2000 starts by receiving (at2005) data from an accelerometer sensor, which is a part of the dual camera mobile device in some embodiments. The accelerometer sensor of some embodiments measures the rate of change of the velocity of the device (i.e., the device's acceleration) along one or more axes. The process also receives (at2010) data from a gyroscope sensor, which may also be a part of the dual camera mobile device in some embodiments. The gyroscope and accelerometer sensors of some embodiments can be used individually or in combination to identify the orientation of the dual camera mobile device.
Next, theprocess2000 determines (at2015) the amount of perspective correction to perform based on the data obtained from the accelerometer and gyroscope sensors. Generally, when the orientation is further off axis, more perspective correction will be required to produce an optimal image. Some embodiments calculate a warp parameter to represent the amount of perspective correction based on the orientation of the device.
After determining the amount of perspective correction to perform, theprocess2000 receives (at2020) an image captured by a camera of the dual camera mobile device. This process may be performed for each image in the video sequence captured by the camera. Some embodiments may perform separate calculations for images coming from each of the two cameras on the dual camera mobile device.
The process then modifies (at2025) the image based on the determined amount of perspective correction. Some embodiments also use a baseline image or other information (e.g., a user-entered point about which the correction should be performed) in addition to the warp parameter or other representation of the amount of perspective correction. After modifying the image,process2000 ends.
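A much-simplified sketch of operations 2005-2025 appears below. It assumes that the accelerometer's gravity vector alone determines the camera's tilt and that the correction is a simple vertical keystone warp about the image center; the actual sensor fusion with the gyroscope and the warp model used by the device are not specified above. The resulting 3x3 matrix could be applied with any standard image-warping routine.

import numpy as np

def tilt_angle(accel):
    """Angle (radians) between the device's up axis and gravity, from raw accelerometer data."""
    a = np.asarray(accel, dtype=float)
    return float(np.arccos(a[1] / np.linalg.norm(a)))

def keystone_homography(theta, width, height, strength=0.5):
    """Build a perspective matrix that counteracts a vertical keystone proportional
    to the tilt angle. `strength` is an illustrative tuning knob (the warp parameter
    of operation 2015 would be derived from device calibration in practice)."""
    warp = strength * np.tan(theta) / height
    to_center = np.array([[1, 0, -width / 2], [0, 1, -height / 2], [0, 0, 1]], float)
    keystone = np.array([[1, 0, 0], [0, 1, 0], [0, -warp, 1]], float)
    from_center = np.linalg.inv(to_center)
    return from_center @ keystone @ to_center

# Operation 2005/2015: tilt from accelerometer data, then the correction matrix.
H = keystone_homography(tilt_angle([0.0, 9.0, 3.0]), width=320, height=240)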
FIG. 21 conceptually illustrates example image processing operations of some embodiments. This figure illustrates a firstimage processing operation2105 performed by a firstimage processing module2120 that does not use perspective correction and a secondimage processing operation2150 performed by a secondimage processing module2165 that uses perspective correction.
As shown, the firstimage processing operation2105 is performed on afirst image2110 of ablock2115 from an aerial perspective looking downwards at an angle towards the block. From that perspective, the top of theblock2115 is closer than the bottom of the block. As such, theblock2115 appears to be leaning towards the camera that captured thefirst image2110.FIG. 21 also shows the processedfirst image2125 after processing by the firstimage processing module2120. As shown, theblock2115 in the processedfirst image2125 appears the same post-processing, as the firstimage processing module2120 did not perform any perspective correction.
The secondimage processing operation2150 is performed on a second image2155 of ablock2160. Theblock2160 is the same as theblock2115 in thefirst image2110.FIG. 21 also shows a processedsecond image2175 after processing of the second image2155 by theperspective corrector2170 of the secondimage processing module2165. Theperspective corrector2170 may useprocess2000 in order to correct the perspective of the second image2155. Based on data from an accelerometer and gyroscope indicating that the camera that captured the second image2155 is tilting at a downward angle (and possibly based on other data), theperspective corrector2170 is able to correct the second image so that the block appears to be viewed straight-on in the processedsecond image2175.
2. Resizing and Bit Stream Manipulation
Among the functions described above by reference to FIG. 17 that are performed by the image processing layer 630 of some embodiments are image resizing and bitstream manipulation. Image resizing (performed at operation 1720) involves scaling an image up or down in some embodiments (i.e., modifying the number of pixels used to represent the image). In some embodiments, the bitstream manipulation involves inserting data into the bitstream that indicates the size of the image after resizing. This resizing and bitstream manipulation are performed by an encoder driver (e.g., driver 1235) in some embodiments.
FIG. 22 conceptually illustrates a software architecture for such anencoder driver2200 of some embodiments and shows an example resizing and bitstream manipulation operations performed by theencoder driver2200 on anexample image2205. In some embodiments, theimage2205 is an image of a video captured by a camera of the dual camera mobile device for transmission to another device(s) in a video conference. Referring toFIG. 12, in some embodiments the video image will have traveled from theCIPU1250 through theCIPU driver1230 and camera source module1222 to buffer1206, from which it is retrieved byimage processing manager1208. After undergoing image processing (e.g., focus adjustment, exposure adjustment, perspective correction) in theimage processing manager1208, the image is sent throughbuffer1210 andvideo compressor module1224 to theencoder driver1235.
As shown, theencoder driver2200 includes a processing layer2210 and arate controller2245. Examples of the rate controller of some embodiments are illustrated inFIG. 26, described below. The processing layer2210 includes animage resizer2215 and abitstream manager2225. In some embodiments, these modules perform various operations on images both before and after the images are encoded. While in this example the image resizer is shown as part of the processing layer2210 of theencoder driver2200, some embodiments implement the image resizer as part of theimage processing manager1208 rather than the encoder driver2200 (i.e., the image resizing is done before sending the image and the size data to the encoder driver).
As shown, theimage resizer2215 resizes the images before the images are sent to theencoder2250 through therate controller2245. Theimage2205 is sent throughresizer2215 and scaled down intoimage2230. In addition to scaling down an image, some embodiments can also scale up an image.
As shown inFIG. 22, some embodiments scale down the incoming image (e.g., image2205) and then superimpose the scaled down image (e.g., image2230) onto a spatially redundant image (e.g., image2235) that is the same size (in pixels) as the incoming image (i.e., the number of rows and columns of pixels of theimage2205 are the same as the number of rows and columns of pixels of the spatially redundant image2235). Some embodiments superimpose the scaled downimage2230 into the upper left corner of the spatially redundant image (as shown, to produce composite image2240), while other embodiments superimpose the scaled down image into a different section of the spatially redundant image (e.g., the center, upper right, upper center, lower center, lower right, etc.).
In some embodiments, a spatially redundant image is an image that is substantially all one color (e.g., black, blue, red, white, etc.) or has a repetitive pattern (e.g., checkers, stripes, etc.). For instance, the spatially redundant image 2235 shown in FIG. 22 has a repetitive crisscross pattern. Because of this repetitive nature, the encoder can compress the spatially redundant portion of the composite image 2240 into a small amount of data. Furthermore, if all of the images in a sequence are scaled down and the same spatially redundant image is used for each image in the sequence, then temporal compression can be used to further reduce the amount of data needed to represent the encoded images.
The image resizer 2215 of some embodiments also generates size data 2220 that indicates the size of the resized image (e.g., the size of the scaled down image 2230) and sends this generated size data 2220 to the bitstream manager 2225. The size data 2220 of some embodiments indicates the size of the resized image 2230 in terms of the number of rows of pixels and the number of columns of pixels (i.e., height and width) of the resized image 2230. In some embodiments, the size data 2220 also indicates the location of the resized image 2230 in the composite image 2240.
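A minimal sketch of this resizing and compositing is shown below, assuming a factor-of-two nearest-neighbor downscale and a solid black canvas as the spatially redundant image; the actual scaling method, canvas contents, and size-data fields may differ.

import numpy as np

def resize_and_composite(image, factor=2):
    """Scale `image` down and paste it into the upper-left corner of a same-sized,
    spatially redundant canvas. Returns the composite image and the size data."""
    scaled = image[::factor, ::factor]          # nearest-neighbor downscale
    canvas = np.zeros_like(image)               # spatially redundant (all-black) image
    h, w = scaled.shape[:2]
    canvas[:h, :w] = scaled                     # superimpose in the upper-left corner
    size_data = {"rows": h, "cols": w, "row_offset": 0, "col_offset": 0}
    return canvas, size_data

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
composite, size_data = resize_and_composite(frame)   # 120x160 image on a 240x320 canvas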
After the image is resized, thecomposite image2240 is sent through therate controller2245 to theencoder2250. Therate controller2245, as described in further detail below, controls the bit rate (i.e., the data size) of the images output by theencoder2250 in some embodiments. Theencoder2250 of some embodiments compresses and encodes the image. Theencoder2250 may use H.264 encoding or another encoding method.
Thebitstream manager2225 of some embodiments receives a bitstream of one or more encoded images from theencoder2250 and inserts size data into the bitstream. For instance, in some embodiments, thebitstream manager2225 receives thesize data2220 from theimage resizer2215 and inserts thesize data2220 into abitstream2255 of the encodedcomposite image2240 that is received from theencoder2250. The output of thebitstream manager2225 in this case is a modifiedbitstream2260 that includes thesize data2220. Different embodiments insert thesize data2220 in different positions of thebitstream2255. For example, thebitstream2260 shows thesize data2220 inserted at the beginning of thebitstream2260. However, other embodiments insert thesize data2220 at the end of thebitstream2255, in the middle of thebitstream2255, or any other position within thebitstream2255.
In some embodiments, thebitstream2255 is a bitstream of a sequence of one or more encoded images that includes thecomposite image2240. In some of such embodiments, the images in the sequence are all resized to the same size and thesize data2220 indicates the size of those resized images. After the images are transmitted to a device on the other end of the video conference, the receiving device can extract the size information from the bitstream and use the size information to properly decode the received images.
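The insertion step can be illustrated as follows, assuming for the example that the size data is packed as four 16-bit integers and placed at the beginning of the bitstream (one of the positions mentioned above); the real on-the-wire format is not specified here.

import struct

# rows, columns, row offset, column offset of the resized sub-image (big-endian).
SIZE_HEADER = struct.Struct(">HHHH")

def insert_size_data(encoded_bitstream, size_data):
    header = SIZE_HEADER.pack(size_data["rows"], size_data["cols"],
                              size_data["row_offset"], size_data["col_offset"])
    return header + encoded_bitstream     # size data inserted at the beginning of the bitstream

modified = insert_size_data(b"encoded-image-bytes",
                            {"rows": 120, "cols": 160, "row_offset": 0, "col_offset": 0})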
FIG. 23 conceptually illustrates animage resizing process2300 performed by an encoder driver of a dual camera mobile device, such asdriver2200. Theprocess2300 begins by receiving (at2305) an image (e.g., image2205) captured by a camera of the dual camera mobile device. When the dual camera device is capturing images with both cameras, some embodiments performprocess2300 on images from both cameras.
Next, theprocess2300 resizes (at2310) the received image. As noted above, different embodiments resize theimage2205 differently. For instance, theimage2205 inFIG. 22 is scaled down and superimposed onto the spatiallyredundant image2235 to produce thecomposite image2240.
Theprocess2300 then sends (at2315) the resized image (e.g., thecomposite image2240, which includes the resized image2230) to theencoder2250 for encoding. Some embodiments of theprocess2300 send the resized image2230 (included in the composite image2240) to theencoder2250 through a rate controller that determines a bit rate for the encoder to encode the image. Theencoder2250 of some embodiments compresses and encodes the image (e.g., using discrete cosine transform, quantization, entropy encoding, etc.) and returns a bitstream with the encoded image to theencoder driver2200.
Next, theprocess2300 sends (at2320) the data indicating the size of the resized image (e.g., the size data2220) to a bitstream manager. As shown inFIG. 22, this operation is performed within theencoder driver2200 in some embodiments (i.e., one module in theencoder driver2200 sends the size data to another module in the encoder driver2200).
After the resized image is encoded by theencoder2250, theprocess2300 receives (at2325) the bitstream from the encoder. As shown, some embodiments receive the bitstream at the bitstream manager, which also has received size data. The received bitstream includes the encoded composite image and may also include one or more additional images in a video sequence.
Theprocess2300 then inserts (at2330) the data indicating the size of the resized image (e.g., the size data2220) into the bitstream, and ends. As shown inFIG. 22, this operation is also performed by the bitstream manager in some embodiments. As mentioned above, different embodiments insert the size data into different parts of the bitstream. In the illustrated example, thesize data2220 is inserted at the beginning of thebitstream2255 as shown in the resultingbitstream2260. This bitstream can now be transmitted to another device that is participating in the video conference, where it can be decoded and viewed.
In some embodiments, the decoder driver (e.g., driver1240) performs the opposite functions of the encoder driver. That is, the decoder driver extracts size data from a received bitstream, passes the bitstream to a decoder, and resizes a decoded image using the size data.FIG. 24 conceptually illustrates a software architecture for such adecoder driver2400 of some embodiments and shows example bitstream manipulation and resizing operations performed by thedecoder driver2400 on anexample bitstream2425.
In some embodiments, thebitstream2425 is a bitstream that includes an encoded image of a video captured by a camera of a device in a video conference (e.g., a bitstream from an encoder driver such as driver2200) and transmitted to the device on which thedecoder driver2400 operates. Referring toFIG. 12, in some embodiments the bitstream will have been received by thenetworking manager1214 and sent to buffer1216, from which it is retrieved by thevideo decompressor module1226 and sent to thedecoder driver1240.
As shown, thedecoder driver2400 includes aprocessing layer2405. Theprocessing layer2405 includes animage resizer2410 and abitstream manager2420. In some embodiments, thesemodules2410 and2420 perform various operations on received images both before and after the images are decoded. While in this example theimage resizer2410 is shown as part of theprocessing layer2405 of thedecoder driver2400, some embodiments implement the image resizer as part of theimage processing manager1208 rather than the decoder driver (i.e., the image resizing is done after sending the image from the decoder driver2400).
As shown, thebitstream manager2420 of some embodiments receives a bitstream of one or more encoded images (i.e., images in a video sequence) and extracts size data from the bitstream before sending the bitstream to thedecoder2435 for decoding. For example, as illustrated inFIG. 24, thebitstream manager2420 receives abitstream2425 of an encoded image, extracts asize data2415 from thebitstream2425, and sends the resulting bitstream2430 (without the size data2415) to thedecoder2435 for decoding. As shown, thebitstream manager2420 sends the extractedsize data2415 to theimage resizer2410 in some embodiments.
Thesize data2415 of some embodiments is the same as thesize data2220 inserted into the bitstream by theencoder driver2200. As described above in the description ofFIG. 22, thesize data2415 of some embodiments indicates the size of a sub-image2445 in terms of the number of rows of pixels and the number of columns of pixels of the sub-image2445. Thesize data2415 may also indicate the location of the sub-image2445 within the larger spatiallyredundant image2440. In this example, thebitstream2425 shows thesize data2415 inserted at the beginning of thebitstream2425. However, as noted above, different embodiments insert thesize data2415 in different positions of thebitstream2425.
Theimage resizer2410 of some embodiments extracts sub-images from images using size data received from thebitstream manager2420. For instance,FIG. 24 illustrates theimage resizer2410 receiving animage2440 that includes a sub-image2445 from thedecoder2435. As shown, theimage resizer2410 of some embodiments extracts the sub-image2445 from theimage2440. This extracted image can then be displayed on the dual camera mobile device.
FIG. 25 conceptually illustrates an image extraction process 2500 of some embodiments performed by a decoder driver of a device participating in a video conference, such as driver 2400. The process begins by receiving (at 2505) a bitstream (e.g., bitstream 2425) of an encoded image. The bitstream may be sent from a device participating in a video conference with the device on which the decoder driver is operating or may be stored in a storage of the device. When the device is receiving images from multiple sources, some embodiments perform process 2500 on images from each source.
Next, theprocess2500 extracts (at2510) size data from the bitstream. As noted above, this size data may be found in different locations in the bitstream. Some embodiments know where to look for the size data, while other embodiments look for a particular signature that indicates where in the received bitstream the size data is located. In some embodiments, the size data indicates the size (e.g., the number of pixels in each row and number of pixels in each column) and the location of a sub-image in the encoded image.
Theprocess2500 then sends (at2515) the extracted size data to an image resizer. As shown inFIG. 24, this operation is performed within the decoder driver in some embodiments (i.e., one module in the decoder driver sends the size data to another module in the decoder driver).
Theprocess2500 also sends (at2520) the bitstream to the decoder for decoding. The decoder, in some embodiments decompresses and decodes the bitstream (e.g., using inverse discrete cosine transform, inverse quantization, etc.) and returns a reconstructed image to the decoder driver.
After the bitstream is decoded by the decoder, theprocess2500 receives (at2525) the decoded image from the decoder. As shown, some embodiments receive the image at the image resizer, which also has received size data from the bitstream manager. The process then extracts (at2530) a sub-image from the decoded image using the received size data. As shown, the sub-image2445 is extracted from the upper left of decodedimage2440, as indicated insize data2415. This extracted sub-image can now be displayed on a display device (e.g., a screen of the dual camera mobile device).
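The decoder-driver side of the two sketches above might look like the following, again assuming the hypothetical eight-byte size header; the all-zero array below merely stands in for the decoder's reconstructed image.

import struct
import numpy as np

SIZE_HEADER = struct.Struct(">HHHH")

def extract_size_data(bitstream):
    """Pull the size header off the front of the bitstream."""
    rows, cols, row_off, col_off = SIZE_HEADER.unpack_from(bitstream, 0)
    size_data = {"rows": rows, "cols": cols, "row_offset": row_off, "col_offset": col_off}
    return size_data, bitstream[SIZE_HEADER.size:]    # the remainder goes to the decoder

def extract_sub_image(decoded_image, size_data):
    """Crop the sub-image out of the decoded spatially redundant image."""
    r, c = size_data["row_offset"], size_data["col_offset"]
    return decoded_image[r:r + size_data["rows"], c:c + size_data["cols"]]

size_data, remainder = extract_size_data(SIZE_HEADER.pack(120, 160, 0, 0) + b"encoded-image-bytes")
decoded = np.zeros((240, 320), dtype=np.uint8)        # stand-in for the decoder output
sub_image = extract_sub_image(decoded, size_data)     # 120x160 image ready for display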
3. Rate Controllers
In some embodiments, the two cameras of the device have different sets of characteristics. For example, in some embodiments, the front camera is a lower resolution camera optimized for the capture of motion video images while the back camera is a higher resolution camera optimized for the capture of still images. For reasons such as cost, functionality, and/or geometry of the device, other embodiments may use different combinations of cameras of different characteristics.
Cameras with different characteristics can introduce different artifacts. For example, higher resolution cameras may reveal more noise than lower resolution cameras. Images captured by higher resolution cameras may exhibit higher levels of spatial or temporal complexities than images captured by lower resolution cameras. Also, different cameras with different optical properties may introduce different gamma values to the captured images. Different light sensing mechanisms used by different cameras to capture images may also introduce different artifacts.
Some of these camera-specific artifacts conceal artifacts generated from other sources. For example, in an image captured by a high resolution camera with a high level of noise, artifacts that are a byproduct of the video encoding process become less visible. Because encoding noise (such as quantization distortion) can hide behind these camera-specific artifacts, the video encoding process can use larger quantization step sizes to achieve lower bit rates. On the other hand, when a camera introduces fewer artifacts (as in the case of a lower resolution camera), the video encoding process can use finer quantization step sizes in order to avoid unacceptable levels of visual distortion due to quantization. Thus, a video encoding process that is optimized to take advantage of or to compensate for these camera-specific characteristics can accomplish a better rate-distortion trade-off than a video encoding process that is oblivious to these camera-specific characteristics.
In order to utilize these camera-specific characteristics for performing rate-distortion trade-offs, some embodiments implement two video encoding processes, each process optimized to each of the two cameras.FIG. 26 illustrates an example of a system with two video encoding processes for twocameras2660 and2670. As shown inFIG. 26, the system2600 includesencoder driver2610,rate controllers2620 and2640, and avideo encoder2630. Theencoder2630 encodes video images captured fromvideo cameras2660 and2670 intobitstreams2680 and2690.
In some embodiments, thevideo encoder driver2610 is a software module running on one or more processing units. It provides an interface between thevideo encoder2630 and other components of the system, such as video cameras, image processing modules, network management modules and storage buffers. Theencoder driver2610 controls the flow of captured video image from the cameras and the image processing modules to thevideo encoder2630, and it also provides the conduit for the encodedbitstreams2680 and2690 to storage buffers and network management modules.
As shown inFIG. 26, theencoder driver2610 includes twodifferent instances2620 and2640 of rate controllers. These multiple instances can be two different rate controllers for the two different cameras, or one rate controller that is configured in two different manners for two different cameras. Specifically, in some embodiments, the tworate controllers2620 and2640 represent two separate rate controllers. Alternatively, in other embodiments, the tworate controllers2620 and2640 are two different configurations of a single rate controller.
FIG. 26 also shows theencoder driver2610 to include astate buffer2615 that stores encoding state information for the rate controlling operations to use during a video conference. Specifically, in some embodiments, the two different rate controllers, or the two different configurations of the same rate controller, share during a video conference the same encoding state information that is stored in thestate buffer2615. Such sharing of state information allows uniform rate controller operations in dual video capture video conferences, which are described in further detail in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.” This sharing also allows optimal video encoding during a switch camera operation in a single video capture video conference (i.e., allows the rate controlling operation for the encoding of video captured by the current camera to use encoding state information that was maintained by the rate controlling operation for the encoding of the video captured by the previous camera).FIG. 26 shows thestate buffer2615 as being part of theencoder driver2610, but other embodiments implement thestate buffer2615 outside theencoder driver2610.
In thestate buffer2615, different embodiments store different types of data (e.g., different types of encoding parameters) to represent the encoding state information. One example of such encoding state information is the current target bit rate for the video conference. One manner for identifying the target bit rate is described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.” Other examples of such encoding state information include buffer fullness, maximum buffer fullness, bit rates of one or more recently encoded frames, among other encoding state information.
A rate controller can then use the target bit rate (or another encoding state parameter stored in the state buffer) to calculate one or more parameters used in its rate controlling operation. For instance, as further described below, a rate controller of some embodiments uses the current target bit rate to calculate a quantization parameter QP for a macroblock or a frame. By way of example, some embodiments use the current target bit rate to compute a quantization adjustment parameter from which they derive the quantization parameter QP for the macroblock and/or the frame. Accordingly, during a camera switch operation in a video conference, sharing the target bit rate between the two rate controlling operations (of two rate controllers or of two different configurations of one rate controller) allows the rate controlling operation for encoding the video captured by the current camera to get the benefit of the encoding state data from the previous rate controlling operation for encoding the video captured by the previous camera.
FIG. 26 illustrates theencoder driver2610 to include the two different rate-controller instances2620 and2640. However, in other embodiments, theserate controller instances2620 and2640 are built intovideo encoder2630. Thevideo encoder2630 encodes video images captured by thecameras2660 and2670 intodigital bitstreams2680 and2690. In some embodiments, the video encoder produces bitstreams that are compliant with conventional video coding standards (e.g., H.264 MPEG-4). In some of these embodiments, the video encoder performs encoding operations that include motion estimation, discrete cosine transform (“DCT”), quantization, and entropy encoding. The video encoder also performs decoding operations that are the inverse functions of the encoding operations.
In some embodiments, the encoder 2630 includes a quantizer module 2632 for performing quantization. The quantizer module is controlled by a quantization parameter 2622 or 2642 from a rate controller 2620 or 2640. In some embodiments, each quantization parameter is set by a corresponding rate controller and is a function of one or more attributes of the camera associated with the rate controller, as further described below. The rate controller can reduce the number of bits used for encoding by setting coarser quantization step sizes or increase the number of bits used by setting finer quantization step sizes. By controlling the quantization step size, the rate controller also determines how much distortion is introduced into the encoded video image. Thus the rate controller can perform trade-offs between bit rate and image quality. In performing the rate-distortion trade-off, the rate controller monitors the bit rate in order not to overflow memory buffers, underflow memory buffers, or exceed the transmission channel capacity. The rate controller must also control the bit rate in order to provide the best possible image quality and to avoid unacceptable distortion of image quality due to quantization. In some embodiments, each rate controller stores the monitored data in terms of a set of state data values in the state buffer 2615. In some embodiments, the rate controllers 2620 and 2640 use camera-specific attributes to optimize the rate-distortion trade-off.
In some embodiments, each rate controller optimizes rate-distortion trade off by directly applying a modification factor to its quantization parameter. In some of these embodiments, the modification factors are pre-determined and built into the device along with the camera; the device does not need to dynamically compute these modification factors. In other embodiments, the system uses the incoming image captured by the camera to dynamically determine the appropriate modification factor specific to the camera. In some of these embodiments, the system analyzes a sequence of incoming video images captured by the camera in multiple encoding passes in order to collect certain statistics about the camera. The system then uses these statistics to derive modification factors to the quantization parameter that is optimized for the camera.
In some embodiments, these camera-specific modification factors are applied to the quantization parameter via visual masking attributes of the video images. The visual masking attribute of an image or a portion of the image is an indication of how many coding artifacts can be tolerated in the image or image portion. Some embodiments compute a visual masking attribute that quantifies the brightness energy of the image or the image portion, while other embodiments compute a visual masking attribute that quantifies the activity energy or complexity of the image or the image portion. Regardless of how a visual masking attribute is calculated, some embodiments use visual masking attributes to calculate a modified or masked quantization parameter for a video frame. Some of these embodiments calculate the masked quantization parameter as a function of a frame-level visual masking attribute φ_frame and a reference visual masking attribute φ_R. In some embodiments, the quantization parameter modified by the visual masking attributes φ_frame and φ_R is expressed as:

MQP_frame = QP_nom + β_frame * (φ_frame − φ_R)/φ_R   (1)

where MQP_frame is the masked or modified quantization parameter for the frame, QP_nom is an initial or nominal quantization value, and β_frame is a constant adapted to local statistics. In some embodiments, the reference visual masking attribute φ_R and the nominal quantization parameter QP_nom are pre-determined from an initial or periodic assessment of network conditions.
In some embodiments, the visual masking attribute φ_frame in equation (1) is calculated as

φ_frame = C * (E · avgFrameLuma)^β · (D · avgFrameSAD)^α   (2)

where avgFrameLuma is the average luminance value of the frame and avgFrameSAD is the average sum of absolute difference of the frame. The constants α, β, C, D, and E are adapted to local statistics. These constants are adapted to camera-specific characteristics in some embodiments.
Some embodiments also calculate a masked quantization parameter for a portion of a video image, such as a macroblock. In those instances, the masked quantization parameter is calculated as a function of the macroblock visual masking attribute φ_MB:

MQP_MB = MQP_frame + β_MB * (φ_MB − φ_frame)/φ_frame   (3)

where β_MB is a constant adapted to local statistics, and MQP_frame is calculated using equations (1) and (2) in some embodiments. In some embodiments, the visual masking attribute φ_MB in equation (3) is calculated as

φ_MB = A · (C · avgMBLuma)^β · (B · avgMBSAD)^α   (4)

where avgMBLuma is the average luminance value of the macroblock and avgMBSAD is the average sum of absolute difference of the macroblock. The constants α, β, A, B, and C are adapted to local statistics. These constants are adapted to camera-specific characteristics in some embodiments.
Rather than using multiple camera-specific constants to compute the modified quantization parameters as discussed above, some embodiments perform camera-specific rate control by computing quantization parameters using only a single camera-specific coefficient. For example, given visual masking attributes φ_frame and φ_MB and quantization parameter QP_frame, some embodiments use a single camera-specific coefficient μ to calculate the quantization parameter of a macroblock as:

QP_MB = μ · (φ_frame − φ_MB) + QP_frame   (5)

To compute equation (5), some embodiments use complexity measures of the frame and of the macroblock as the visual masking attributes φ_frame and φ_MB, respectively.

Some embodiments apply a different camera-specific coefficient in the calculation of QP_MB. For example, in some embodiments, QP_MB is calculated as

QP_MB = ρ · (1 − φ_MB/φ_frame) · QP_frame + QP_frame   (6)
where ρ is a coefficient tuned to camera-specific characteristics.
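The sketch below evaluates equations (1), (3), and (5) numerically. All of the constant values (β_frame, β_MB, μ) and the example visual masking attributes are illustrative only, since the camera-specific constants are tuned per device.

def masked_frame_qp(qp_nom, phi_frame, phi_ref, beta_frame=1.0):
    """Equation (1): frame-level masked quantization parameter."""
    return qp_nom + beta_frame * (phi_frame - phi_ref) / phi_ref

def masked_macroblock_qp(mqp_frame, phi_mb, phi_frame, beta_mb=1.0):
    """Equation (3): macroblock-level masked quantization parameter."""
    return mqp_frame + beta_mb * (phi_mb - phi_frame) / phi_frame

def single_coefficient_mb_qp(qp_frame, phi_frame, phi_mb, mu=0.1):
    """Equation (5): macroblock QP from a single camera-specific coefficient."""
    return mu * (phi_frame - phi_mb) + qp_frame

mqp_f = masked_frame_qp(qp_nom=26, phi_frame=1.2, phi_ref=1.0)     # busier frame -> higher masked QP
mqp_mb = masked_macroblock_qp(mqp_f, phi_mb=0.8, phi_frame=1.2)    # flatter macroblock -> lower masked QP
qp_mb = single_coefficient_mb_qp(qp_frame=26, phi_frame=1.2, phi_mb=0.8)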
As mentioned above, the state buffer 2615 stores encoding state information that the two different rate controller instances 2620 and 2640 can share during a video conference in order to obtain better encoding results from their rate controlling operations. The target bit rate R_T is one example of such shared state information in some embodiments. This rate is a desired bit rate for encoding a sequence of frames. Typically, this bit rate is expressed in units of bits/second, and is determined based on processes like those described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call.”
As described above, a rate controller of some embodiments uses the target bit rate to calculate the frame and/or macroblock quantization parameter(s) QP that it outputs to the video encoder 2630. For example, some embodiments use the current target bit rate to compute a quantization adjustment parameter from which they derive the quantization parameter QP for the macroblock and/or the frame. In some embodiments, the quantization adjustment parameter is expressed in terms of a fraction that is computed by dividing either the previous frame's bit rate or a running average of the previous frames' bit rate by the current target bit rate. In other embodiments, this adjustment parameter is not exactly computed in this manner, but rather is more generally (1) proportional to either the previous frame's bit rate or a running average of the previous frames' bit rate, and (2) inversely proportional to the current target bit rate.
After computing such a quantization adjustment parameter, the rate controller of some embodiments uses this parameter to adjust the macroblock and/or frame quantization parameter(s) that it computes. One manner of making such an adjustment is to multiply the computed macroblock and/or frame quantization parameter(s) by the quantization adjustment parameter. Another manner of making this adjustment is to compute an offset quantization parameter value from the quantization adjustment parameter and then apply (e.g., subtract) this offset parameter to the computed macroblock and/or frame quantization parameter(s). The rate controller of these embodiments then outputs the adjusted macroblock and/or frame quantization parameter(s) to thevideo encoder2630.
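As a concrete illustration, the sketch below computes the quantization adjustment parameter as a running average of recent frame bit rates divided by the current target bit rate, and then applies it multiplicatively; the clamp to a maximum QP of 51 is an assumption borrowed from common codecs rather than from the description above.

def quantization_adjustment(recent_frame_bit_rates, target_bit_rate):
    """Proportional to the running average of recent frame bit rates,
    inversely proportional to the current target bit rate."""
    running_average = sum(recent_frame_bit_rates) / len(recent_frame_bit_rates)
    return running_average / target_bit_rate

def adjusted_qp(computed_qp, adjustment, max_qp=51):
    # Multiplicative variant; an alternative is to subtract an offset derived
    # from the adjustment parameter.
    return min(max_qp, computed_qp * adjustment)

adj = quantization_adjustment([480_000, 520_000, 500_000], target_bit_rate=400_000)
qp = adjusted_qp(28, adj)     # overshooting the target raises QP, which lowers the bit rate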
In other embodiments, the rate controller uses the target bit rate to calculate other parameters that are used in its rate controlling operation. For instance, in some embodiments, the rate controller uses this bit rate to modify the visual masking strength for a macroblock or a frame.
G. Networking Manager
FIG. 27 conceptually illustrates the software architecture of anetworking manager2700 of some embodiments such as thenetworking manager1214 illustrated inFIG. 12. As described above, thenetworking manager2700 manages network connections (e.g., connection establishment, connection monitoring, connection adjustments, connection tear down, etc.) between a dual camera mobile device on which it operates and a remote device in a video conference. During the video conference, thenetworking manager2700 of some embodiments also processes data for transmission to the remote device and processes data received from the remote device.
As shown inFIG. 27, thenetworking manager2700 includes asession negotiating manager2705, atransmitter module2715, auniversal transmission buffer2720, a universaltransmission buffer manager2722, a virtual transport protocol (VTP)manager2725, areceiver module2730, and amedia transport manager2735.
Thesession negotiating manager2705 includes aprotocol manager2710. Theprotocol manager2710 ensures that thetransmitter module2715 uses a correct communication protocol to transmit data to a remote device during the video conference and enforces rules of the communication protocol that is used. Some embodiments of theprotocol manager2710 support a number of communication protocols, such as a real-time transport protocol (RTP), a transmission control protocol (TCP), a user datagram protocol (UDP), and a hypertext transfer protocol (HTTP), among others.
Thesession negotiating manager2705 is responsible for establishing connections between the dual camera mobile device and one or more remote devices participating in the video conference, as well as tearing down these connections after the conference. In some embodiments, thesession negotiating manager2705 is also responsible for establishing multimedia communication sessions (e.g., to transmit and receive video and/or audio streams) between the dual camera mobile device and the remote devices in the video conference (e.g., using a session initiation protocol (SIP)).
Thesession negotiating manager2705 also receives feedback data from themedia transport manager2735 and, based on the feedback data, determines the operation of the universal transmission buffer2720 (e.g., whether to transmit or drop packets/frames) through the universaltransmission buffer manager2722. This feedback, in some embodiments, may include one-way latency and a bandwidth estimation bit rate. In other embodiments, the feedback includes packet loss information and roundtrip delay time (e.g., determined based on packets sent to the remote device in the video conference and the receipt of acknowledgements from that device). Based on the information from themedia transport manager2735, thesession negotiating manager2705 can determine whether too many packets are being sent and instruct the universaltransmission buffer manager2722 to have theuniversal transmission buffer2720 transmit fewer packets (i.e., to adjust the bit rate).
Thetransmitter module2715 retrieves encoded images (e.g., as a bitstream) from a video buffer (e.g., thebuffer1212 ofFIG. 12) and packetizes the images for transmission to a remote device in the video conference through theuniversal transmission buffer2720 and the virtualtransport protocol manager2725. The manner in which the encoded images are created and sent to thetransmitter module2715 can be based on instructions or data received from themedia transport manager2735 and/or thesession negotiating manager2705. In some embodiments, packetizing the images involves breaking the received bitstream into a group of packets each having a particular size (i.e., a size specified by thesession negotiating manager2705 according to a particular protocol), and adding any required headers (e.g., address headers, protocol specification headers, etc.).
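A rough illustration of the packetizing step is given below: the encoded bitstream is split into payloads no larger than a specified size and a minimal header is prepended to each. The header layout and the default payload size are assumptions for the sketch and do not correspond to any particular protocol named in this description.

```python
import struct


def packetize(bitstream: bytes, max_payload: int = 1200, frame_id: int = 0) -> list:
    """Break an encoded frame into packets, each carrying a small assumed header.

    Assumed header layout: frame id (4 bytes), packet index (2 bytes),
    packet count (2 bytes), followed by the payload bytes.
    """
    payloads = [bitstream[i:i + max_payload]
                for i in range(0, len(bitstream), max_payload)] or [b""]
    packets = []
    for index, payload in enumerate(payloads):
        header = struct.pack("!IHH", frame_id, index, len(payloads))
        packets.append(header + payload)
    return packets
```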
The universaltransmission buffer manager2722 controls the operation of theuniversal transmission buffer2720 based on data and/or instructions received from thesession negotiating manager2705. For example, the universaltransmission buffer manager2722 may be instructed to direct theuniversal transmission buffer2720 to transmit data, stop transmitting data, drop data, etc. As described above, in some embodiments when a remote device participating in the conference appears to be dropping packets, this will be recognized based on acknowledgements received from the remote device. To reduce the packet dropping, the universaltransmission buffer manager2722 may be instructed to transmit packets at a slower rate to the remote device.
Theuniversal transmission buffer2720 stores data received from thetransmitter module2715 and transmits the data to the remote device through theVTP manager2725. As noted above, theuniversal transmission buffer2720 may drop data (e.g., images of the video) based on instructions received from the universaltransmission buffer manager2722.
In some embodiments, RTP is used to communicate data packets (e.g., audio packets and video packets) over UDP during a video conference. Other embodiments use RTP to communicate data packets over TCP during the video conference. Other transport layer protocols can be used as well in different embodiments.
Some embodiments define a particular communication channel between two mobile devices by a pair of port numbers (i.e., source port number and destination port number). For instance, one communication channel between the mobile devices can be defined by one pair of port numbers (e.g., source port50 and destination port100) and another different communication channel between the mobile devices can be defined by another different pair of port numbers (e.g., source port75 and destination port150). Some embodiments also use a pair of internet protocol (IP) addresses in defining communication channels. Different communication channels are used to transmit different types of data packets in some embodiments. For example, video data packets, audio data packets, and control signaling data packets can be transmitted in separate communication channels. As such, a video communication channel transports video data packets and an audio communication channel transports audio data packets.
In some embodiments, a control communication channel is for messaging between the local mobile device and a remote device during a video conference. Examples of such messaging include sending and receiving requests, notifications, and acknowledgements to such requests and notifications. Another example of messaging includes sending remote control instruction messages from one device to another. For instance, the remote control operations described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call,” (e.g., instructing a device to only send images from one particular camera or to only capture images with a particular camera) can be performed by sending instructions from a local device to a remote device through the control communication channel for the local device to remotely control operations of the remote device. Different embodiments implement the control communication using different protocols like a real-time transport control protocol (RTCP), an RTP extension, SIP, etc. For instance, some embodiments use RTP extension to relay one set of control messages between two mobile devices in a video conference and use SIP packets to relay another set of control messages between the mobile devices during the video conference.
TheVTP manager2725 of some embodiments allows different types of data packets that are specified to be transmitted through different communication channels (e.g., using different pairs of port numbers) to be transmitted through a single communication channel (e.g., using the same pair of port numbers). One technique for doing this involves identifying the data packet types, identifying the communication channel through which data packets are specified to be transmitted by extracting the specified pair of port numbers of the data packets, and specifying the data packets to be transmitted through the single communication channel by modifying the pair of port numbers of the data packets to be the pair of port numbers of the single communication channel (i.e., all the data packets are transmitted through the same pair of port numbers).
To keep track of the original pair of port numbers for each type of data packet, some embodiments store a mapping from the data packet type to its original pair of port numbers. Some of these embodiments then use the packet type field of the protocol to differentiate the different packets that are being multiplexed into one communication channel. For instance, some embodiments that have the VTP manager multiplex audio, video and control packets into one RTP stream use the RTP packet type field to differentiate between the audio, video and control packets that are transmitted in the one RTP channel to the other device in the video conference. In some of these embodiments, the VTP manager also routes control messaging in SIP packets to the other device.
Some embodiments examine the data packet signatures (i.e., packet header formats) to distinguish between different packets that are communicated using different protocols (e.g., to differentiate between packets transported using RTP and packets transported using SIP). In such embodiments, after the data packets of the different protocols are determined, the fields of the data packets that use the same protocol (e.g., audio data and video data using RTP) are examined as described above to identify the different data types. In this manner, the VTP manager 2725 transmits different data packets, which are intended to be transmitted through different communication channels, through a single communication channel.
Although one way of combining different types of data through a single communication channel is described above, other embodiments utilize other techniques to multiplex different packet types into one communication stream. For example, one technique of some embodiments involves keeping track of the original pair of port numbers of the data packets and storing the original pair of port numbers in the data packet itself to be later extracted. Still other ways exist for combining different types of data between two video conference participants into one port pair channel.
When theVTP manager2725 receives data packets from the remote device through a virtualized communication channel, theVTP manager2725 examines the signatures of the data packets to identify the different packets that are sent using the different protocols. Such signatures can be used to differentiate SIP packets from RTP packets. The VTP manager of some embodiments also uses the packet type field of some or all of the packets to demultiplex the various different types of packets (e.g., audio, video and control packets) that were multiplexed into a single virtualized channel. After identifying these different types of packets, the VTP manager associates each different type of packet with its corresponding port pair numbers based on a mapping of port pair numbers and packet types that it keeps. TheVTP manager2725 then modifies the pair of port numbers of the data packets with the identified pair of port numbers and forwards the data packets to be depacketized. In other embodiments that use different techniques for multiplexing different packet types into the single channel, the VTP manager uses different techniques for parsing out the packets.
By using such techniques for multiplexing and de-multiplexing the different packets, theVTP manager2725 creates a single virtualized communication channel (e.g., a single pair of port numbers), transmits the video data, audio data, and control signaling data through the single virtualized communication channel, and receives audio, video, and control packets from the remote device through the single virtualized communication channel. Thus, from the perspective of the network, data is transmitted through this single virtualized communication channel, while, from the perspective of thesession negotiating manager2705 and theprotocol manager2710, the video data, audio data, and control signaling data are transmitted through different communication channels.
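The multiplexing and de-multiplexing behavior summarized above can be sketched compactly as follows. This is an illustration under several assumptions: packets are modeled as plain dictionaries, a 'type' key stands in for the RTP packet type field, SIP signature checks are omitted, and the port numbers reuse the example values given earlier in this section.

```python
class VirtualChannelManager:
    """Maps per-type port pairs onto a single virtualized port pair and back."""

    def __init__(self):
        # Assumed mapping of data type to its original (source, destination) ports.
        self.channel_ports = {
            "audio":   (50, 100),
            "video":   (75, 150),
            "control": (80, 160),
        }
        self.virtual_ports = (90, 190)  # the single virtualized channel (assumed values)

    def multiplex(self, packet: dict) -> dict:
        """Rewrite an outgoing packet's port pair so it travels on the virtual channel."""
        out = dict(packet)
        out["ports"] = self.virtual_ports  # the type tag still identifies the stream
        return out

    def demultiplex(self, packet: dict) -> dict:
        """Restore an incoming packet's original port pair from its type tag."""
        out = dict(packet)
        out["ports"] = self.channel_ports[packet["type"]]
        return out


# Example: an audio packet travels on the virtual port pair and is routed back to
# the audio channel's port pair on the receiving side.
vtp = VirtualChannelManager()
sent = vtp.multiplex({"type": "audio", "ports": (50, 100), "payload": b"..."})
restored = vtp.demultiplex(sent)   # restored["ports"] == (50, 100)
```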
Similar to the images that are transmitted to the remote device in the video conference, images transmitted from the remote device in the video conference are received in packet format. Thereceiver module2730 receives the packets and depacketizes them in order to reconstruct the images before storing the images in a video buffer (e.g., thebuffer1216 ofFIG. 12) to be decoded. In some embodiments, depacketizing the images involves removing any headers and reconstructing a bitstream that only has image data (and potentially size data) from the packets.
Themedia transport manager2735 processes feedback data (e.g., one-way latency, bandwidth estimation bit rate, packet loss data, roundtrip delay time data, etc.) received from the network to dynamically and adaptively adjust the rate of data transmission (i.e., bit rate). Themedia transport manager2735 also controls error resilience based on the processed feedback data in some other embodiments, and may also send the feedback data to thevideo conference manager1204 in order to adjust other operations of thevideo conference module1202 such as scaling, resizing, and encoding. In addition to having the universal transmission buffer drop packets when a remote device in the conference is not able to process all of the packets, the video conference module and encoder can use a lower bit rate for encoding the images so that fewer packets will be sent for each image.
In some embodiments, themedia transport manager2735 may also monitor other variables of the device such as power consumption and thermal levels that may affect how the operational power modes of the cameras are configured, as discussed above. This data may also be used as additional inputs into the feedback data (e.g., if the device is getting too hot, themedia transport manager2735 may try to have the processing slowed down).
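A hedged sketch of the adaptive bit rate adjustment follows; it lowers the target encoding bit rate when the feedback indicates congestion (or, per the preceding paragraph, when the device runs hot), and probes upward cautiously otherwise. The step sizes, thresholds, and feedback field names are assumptions, as the description only states that the bit rate is adjusted dynamically based on such feedback.

```python
def adjust_bit_rate(current_bps: int, feedback: dict,
                    min_bps: int = 64_000, max_bps: int = 2_000_000) -> int:
    """Return a new target bit rate based on processed network and device feedback."""
    loss = feedback.get("packet_loss_ratio", 0.0)
    latency_ms = feedback.get("one_way_latency_ms", 0.0)
    too_hot = feedback.get("thermal_level", 0) >= 2   # assumed thermal scale

    if loss > 0.05 or latency_ms > 400 or too_hot:
        new_bps = int(current_bps * 0.75)   # back off sharply under congestion or heat
    elif loss < 0.01 and latency_ms < 150:
        new_bps = int(current_bps * 1.05)   # probe upward slowly when the path is clean
    else:
        new_bps = current_bps
    return max(min_bps, min(max_bps, new_bps))
```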
Several example operations of thenetworking manager2700 will now be described by reference toFIG. 12. The transmission of images captured by a camera of the dual camera mobile device to a remote device in the video conference will be described first, followed by the description of receiving images from the remote device. Thetransmitter module2715 retrieves encoded images from thebuffer1212, which are to be transmitted to the remote device in the video conference.
Theprotocol manager2710 determines the appropriate protocol to use (e.g., RTP to transmit audio and video) and thesession negotiating manager2705 informs thetransmitter module2715 of such protocol. Next, thetransmitter module2715 packetizes the images and sends the packetized images to theuniversal transmission buffer2720. The universaltransmission buffer manager2722 receives instructions from thesession negotiating manager2705 to direct theuniversal transmission buffer2720 to transmit or drop the images. TheVTP manager2725 receives the packets from theuniversal transmission buffer2720 and processes the packets in order to transmit the packets through a single communication channel to the remote device.
When receiving images from the remote device, theVTP manager2725 receives packetized images from the remote device through the virtualized single communication channel and processes the packets in order to direct the images to thereceiver module2730 through a communication channel that is assigned to receive the images (e.g., a video communication channel).
Thereceiver module2730 depacketizes the packets to reconstruct the images and sends the images to thebuffer1216 for decoding by thedecoder1260. Thereceiver module2730 also forwards control signaling messages to the media transport manager2735 (e.g., acknowledgements of received packets from the remote device in the video conference).
Several example operations of thenetworking manager2700 were described above. These are only illustrative examples, as various other embodiments will perform these or different operations using different modules or with functionalities spread differently between the modules. Furthermore, additional operations such as dynamic bit rate adjustment may be performed by the modules ofnetworking manager2700 or other modules.
IV. In-Conference Adjustment and Control Operations
A. Picture-in-Picture Modifications
1. Snap-to-Corner
Some embodiments of the invention allow a user of a dual camera mobile device to modify a composite display displayed on the device by moving around one or more display areas that form the composite display. One such example is moving around an inset display area of a PIP display.FIG. 28 illustrates such an example that is performed during a video conference. In a video conference, the user may want to move a foreground inset display area for a variety of reasons, such as when this area is blocking an area of interest of the background display area.
FIG. 28 illustrates the moving of aninset display area2840 in a UI2805 of a device, by reference to fivedifferent stages2810,2815,2820,2825, and2830 of this UI. Thefirst stage2810 illustrates the UI2805 during a video conference between the local user of the device and a remote user of a remote device. The UI2805 inFIG. 28 shows a PIP display that is the same PIP display shown in the fifth stage ofFIG. 8 after the video conference has started. In this example, the video captured by the local user's device is displayed in theinset display area2840 and the video captured by the remote user's device is displayed in thebackground display area2835. As shown, thedisplay area855 includes aselectable UI item2845 for ending the video conference. In some embodiments, the layout of thedisplay area855 is the same as the layout of thedisplay area855 ofFIG. 9, described above.
Thesecond stage2815 illustrates the user starting a snap-to-corner operation by selecting theinset display area2840. In this example, a selection is made by placing afinger2855 anywhere within theinset display area2840. As shown, this selection is displayed in terms of a thick border2860 for theinset display area2840. Different embodiments may indicate such a selection in different ways, such as by highlighting thedisplay area2840, by causing thedisplay area2840 to vibrate, etc.
Thethird stage2820 illustrates the UI2805 after the user begins to move theinset display area2840 of thePIP display2850 from one area in thePIP display2850 to another area in this display. In this example, theinset display area2840 has started to move from the lower left corner of thePIP display2850 to the lower right corner of this display, as indicated by thearrow2865. In this example, theinset display area2840 is moved by the user dragging hisfinger2855 towards the lower right corner of thePIP display2850 after selecting the inset display in thesecond stage2815. Some embodiments provide other techniques for moving theinset display area2840 around in thePIP display2850.
Thefourth stage2825 illustrates the UI2805 in a state after the user has removed hisfinger2855 from the screen of thedevice2800. In this state, theinset display area2840 is still moving towards the lower right corner of thePIP display2850 that was identified based on the user's finger movement in thethird stage2820. In other words, after thefinger2855 starts the movement of theinset display area2840 towards the lower right corner of thePIP display2850, the UI2805 maintains this movement even after thefinger2855 is removed. To maintain this movement, the UI2805 of some embodiments requires the user's drag operation to be larger than a particular threshold amount (e.g., longer than a particular distance or longer than a particular length of time) before the user removes hisfinger2855; otherwise, these embodiments keep theinset display area2840 in its original left corner position after moving thisdisplay area2840 slightly or not moving it at all.
However, while some embodiments allow the inset display area to move even after the user stops his drag operation before the inset display area has reached its new location, other embodiments require the user to maintain his drag operation until the inset display area reaches its new location. Some embodiments provide still other techniques for moving the inset display area. For example, some embodiments may require the user to specify where to direct theinset display area2840 before theinset display area2840 actually starts to move, etc. Some embodiments may also allow display areas to slide and snap-to-corners by simply tilting the mobile device at different angles.
Thefifth stage2830 illustrates the UI2805 after theinset display area2840 has reached its new location at the bottom right corner of thePIP display2850. The removal of the thick border2860 in thefifth stage2830 indicates that the snap-to-corner operation is completed.
To facilitate the movement illustrated in the above-described third, fourth and fifth stages 2820, 2825 and 2830, the UI 2805 of some embodiments employs snapping rules that allow the inset display area to quickly snap to a corner of the PIP display 2850 once the user causes the inset display area to move towards that corner. For instance, when the user drags the inset display area 2840 by more than a threshold amount towards a particular corner, the UI 2805 of some embodiments identifies the direction of motion of the inset display area 2840, determines that the motion has exceeded a threshold amount, and then moves the inset display area 2840 automatically, without further user input, to the next grid point in the UI 2805 to which the inset display area 2840 can be snapped. In some embodiments, the only grid points that are provided for snapping the inset display area 2840 are grid points at the four corners of the PIP display 2850. Other embodiments provide other grid points in the UI 2805 (e.g., in the PIP display 2850) to which the inset display area 2840 can snap (i.e., on or with which the sides or vertices of the area 2840 can be placed or aligned).
Still other embodiments may not employ grid points so that the inset display area can be positioned at any point in thePIP display2850. Yet other embodiments provide a feature that allows the user to turn on or off the snap to grid point feature of the UI. Moreover, in addition to the video captured from the devices, different embodiments may allow the user to perform the snap-to-corner operation with various items, such as icons, etc.
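A simplified version of the snapping rules described above is sketched below. It assumes a coordinate system with the origin at the top-left of the PIP display, grid points only at the four corners, and a single drag-distance threshold; all of these, along with the specific threshold value, are assumptions made for the example.

```python
def snap_target(inset_rect, drag_vector, pip_size, threshold: float = 40.0):
    """Return the corner (x, y) the inset display area should snap to, or None to revert.

    inset_rect: (x, y, width, height) of the inset before the drag.
    drag_vector: (dx, dy) total finger movement in pixels.
    pip_size: (width, height) of the PIP display.
    """
    x, y, w, h = inset_rect
    dx, dy = drag_vector
    if (dx * dx + dy * dy) ** 0.5 < threshold:
        return None  # drag shorter than the threshold: keep the original corner

    pip_w, pip_h = pip_size

    def pick(low_pos, high_pos, current_pos, delta):
        # Stay on the current side of an axis unless the drag clearly moves away from it.
        if abs(delta) < threshold / 2:
            return current_pos
        return high_pos if delta > 0 else low_pos

    current_x = 0 if x < (pip_w - w) / 2 else pip_w - w
    current_y = 0 if y < (pip_h - h) / 2 else pip_h - h
    target_x = pick(0, pip_w - w, current_x, dx)
    target_y = pick(0, pip_h - h, current_y, dy)
    return (target_x, target_y)
```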
FIG. 29 illustrates two other examples2930 and2935 of a snap-to-corner operation in the UI2805. These other snap-to-corner operations show theinset display area2840 being moved vertically or diagonally in thePIP display2850, based on vertical or diagonal dragging operations of the user.
Even thoughFIGS. 28 and 29 illustrate the movement of the inset display area within a PIP display, one of ordinary skill will realize that other embodiments utilize similar techniques to move display areas in other types of PIP displays or other types of composite displays. For instance, as further described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call,” the PIP display of some embodiments has two or more foreground inset displays and these inset displays can be moved in the PIP display using techniques similar to those described above by reference toFIGS. 28 and 29. Also, some embodiments use similar techniques to move around display areas in composite displays (e.g., to move one display area from a left side of the screen to the right side of the screen through a user drag movement). Furthermore, the moving of a display area(s) of a composite display can cause changes to the image processing operations of the dual camera mobile device such as causing thevideo conference manager1204 to re-composite the display area in the composite display in response to the user's input. As further described in the above-incorporated U.S. patent application Ser. No. 12/794,766, now U.S. Pat. No. 8,744,420, entitled “Establishing Video Conference During a Phone Call,” some embodiments employ snap and push techniques that push a first display area from a first location when a second display area is moved to the first location from a third location.
2. Rotate
Some embodiments rotate the PIP display that is presented during a video conference when a user of the mobile device used for the video conference rotates the device during the conference.FIG. 30 illustrates the rotation of aUI805 of adevice3000 when the device is rotated from a vertical position to a horizontal position. Thedevice3000 is held vertically when the long side of the screen is vertical whereas thedevice3000 is held horizontally when the long side of the screen is horizontal. In the example illustrated inFIG. 30, theUI805 rotates from a portrait view that is optimized for a vertical holding of the device to a landscape view that is optimized for horizontal holding of thedevice3000. This rotation functionality allows the user to view theUI805 displayed in an upright position when themobile device3000 is held either vertically or horizontally.
FIG. 30 illustrates the rotation of the UI 805 in terms of six different operational stages 3010, 3015, 3020, 3025, 3030 and 3035. The first stage 3010 illustrates the UI 805 during a video conference between the local user of the device and a remote user of a remote device. The UI 805 in FIG. 30 shows a PIP display 880 that is the same PIP display shown in the fifth stage of FIG. 8 after the video conference has been established. In this example, the video captured by the local user's device is displayed in the inset display area 860 and the video captured by the remote user's device is displayed in the background display area 870. The display area 855 below the PIP display 880 includes a selectable UI item 3085 (e.g., an End Conference button 3085), which the user may select to end the video conference (e.g., through a single finger tap).
Thesecond stage3015 illustrates theUI805 after the user begins to tilt thedevice3000 sideways. In this example, the user has started to tilt thedevice3000 from being held vertically to being held horizontally, as indicated by thearrow3060. The appearance of theUI805 has not changed. In other situations, the user may want to tilt thedevice3000 from being held horizontally to being held vertically instead, and, in these situations, theUI805 switches from a horizontally optimized view to a vertically optimized view.
The third stage 3020 illustrates the UI 805 in a state after the device 3000 has been tilted from being held vertically to being held horizontally. In this state, the appearance of the UI 805 still has not changed. In some embodiments, the rotation operation is triggered after the device 3000 is tilted past a threshold amount and is kept past this point for a duration of time. In the example illustrated in FIG. 30, it is assumed that the threshold amount and the speed of the rotation do not cause the UI 805 to rotate until a short time interval after the device has been placed in the horizontal position. Different embodiments have different threshold amounts and waiting periods for triggering the rotation operation. For example, some embodiments may have such a low threshold for triggering the rotation operation as to make the UI 805 appear as if it were always displayed in an upright position, notwithstanding the orientation of the device 3000. In other embodiments, the user of the device 3000 may specify when the rotation operation may be triggered (e.g., through a menu preference setting). Also, some embodiments may not delay the rotation after the device is tilted past the threshold amount. Moreover, different embodiments may allow the rotation operation to be triggered in different ways, such as by toggling a switch on the mobile device, by giving voice commands, upon selection through a menu, etc.
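The trigger condition described above (tilt past a threshold and hold it there for a waiting period) can be sketched as a small stateful check. The 60-degree threshold and 0.3-second hold time are arbitrary assumed values, since, as noted, different embodiments use different thresholds and waiting periods.

```python
import time
from typing import Optional


class RotationTrigger:
    """Fires once the device has been tilted past a threshold for a minimum duration."""

    def __init__(self, angle_threshold_deg: float = 60.0, hold_seconds: float = 0.3):
        self.angle_threshold = angle_threshold_deg
        self.hold_seconds = hold_seconds
        self._past_threshold_since: Optional[float] = None

    def update(self, tilt_deg: float, now: Optional[float] = None) -> bool:
        """Feed the latest tilt angle; returns True when the rotation should be triggered."""
        now = time.monotonic() if now is None else now
        if tilt_deg < self.angle_threshold:
            self._past_threshold_since = None   # dropped back under the threshold
            return False
        if self._past_threshold_since is None:
            self._past_threshold_since = now    # just crossed the threshold
        return (now - self._past_threshold_since) >= self.hold_seconds
```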
The fourth stage 3025 illustrates the UI 805 after the rotation operation has started. Some embodiments animate the rotation of the display areas to provide feedback to the user regarding the rotation operation. FIG. 30 illustrates an example of one such animation. Specifically, it shows in its fourth stage 3025 the start of the rotation of the display areas 880 and 855 together. The display areas 880 and 855 rotate around an axis 3065 going through the center of the UI 805 (i.e., the z-axis). The display areas 880 and 855 are rotated the same amount but in the opposite direction of the rotation of the device 3000 (e.g., through the tilting of the device 3000). In this example, since the device 3000 has rotated ninety degrees in a clockwise direction (by going from being held vertically to being held horizontally), the rotation operation causes the display areas 880 and 855 to rotate ninety degrees in a counter-clockwise direction. As the display areas 880 and 855 rotate, they shrink proportionally to fit the UI 805 so that the display areas 880 and 855 may still appear entirely on the UI 805. Some embodiments may provide a message to indicate the state of this device 3000 (e.g., by displaying the word “Rotating”).
Thefifth stage3030 illustrates theUI805 after thedisplay areas880 and855 have rotated ninety degrees counter clockwise from portrait view to landscape view. In this stage, thedisplay areas880 and855 have been rotated but have not yet expanded across the full width of theUI805. Thearrows3075 indicate that at the end of the fifth stage, thedisplay areas880 and855 will start to laterally expand to fit the full width of theUI805. Different embodiments may not include this stage since the expansion could be performed simultaneously with the rotation in thefourth stage3025.
Thesixth stage3035 illustrates theUI805 after thedisplay areas880 and855 have been expanded to occupy the full display of theUI805. As mentioned above, other embodiments may implement this rotation differently. For some embodiments, simply rotating the screen of a device past a threshold amount may trigger the rotation operation, notwithstanding the orientation of thedevice3000.
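The rotate-and-shrink portion of the animation in the fourth and fifth stages amounts to interpolating an angle while scaling the display areas so their rotated bounding box still fits on the screen. The sketch below computes that intermediate transform; the linear interpolation of the angle and the bounding-box fit are assumptions, and the final lateral expansion of the sixth stage is not modeled.

```python
import math


def rotate_and_fit(progress: float, area_w: float, area_h: float,
                   screen_w: float, screen_h: float):
    """Return (angle_degrees, scale) for a display area rotating ninety degrees CCW.

    progress runs from 0.0 (rotation starts) to 1.0 (rotation finished). The scale
    shrinks the area just enough that its rotated bounding box fits on the screen.
    """
    progress = max(0.0, min(1.0, progress))
    angle = -90.0 * progress                      # opposite direction to the device tilt
    theta = math.radians(angle)
    bbox_w = abs(area_w * math.cos(theta)) + abs(area_h * math.sin(theta))
    bbox_h = abs(area_w * math.sin(theta)) + abs(area_h * math.cos(theta))
    scale = min(screen_w / bbox_w, screen_h / bbox_h, 1.0)
    return angle, scale
```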
Also, other embodiments might provide a different animation for indicating the rotation operation. The rotation operation performed inFIG. 30 involves thedisplay areas880 and855 rotating about the center of theUI805. Alternatively, the display areas may be individually rotated about the center axis of their individual display areas. One such approach is shown inFIG. 31.FIG. 31 shows an alternative method to animating the rotation of thedisplay areas870 and860 ofPIP display880 of aUI805. ThePIP display880 illustrated inFIG. 31 is thesame PIP display880 illustrated inFIG. 8.
FIG. 31 illustrates the rotation of thePIP display880 in terms of six differentoperational stages3010,3015,3020,3125,3130, and3135. The first three stages of operation of theUI805 are identical to the first three stages of operation as described in theUI805 inFIG. 30. At the third stage for bothFIGS. 30 and 31, thedevice3100 has gone from being held vertically to being held horizontally and the rotation of theUI805 has not yet begun.
The fourth stage 3125 illustrates the alternative method of animating the rotation. In this stage, the rotation operation has started. Specifically, the fourth stage 3125 shows the start of the rotation of the display areas 870 and 860. The display areas 870 and 860 each rotate around axes 3167 and 3165, respectively, going through the center of each of the display areas (i.e., the z-axis). The display areas 870 and 860 are rotated the same amount but in the opposite direction of the rotation of the device 3100 (e.g., through the tilting of the device 3100). Similar to that illustrated in the fourth stage 3025 of FIG. 30 above, since the device 3100 has rotated ninety degrees in a clockwise direction (by going from being held vertically to being held horizontally), the rotation operation causes the display areas 870 and 860 to rotate ninety degrees in a counter-clockwise direction. As the display areas 870 and 860 rotate, they shrink proportionally to fit the UI 805 so that the display areas 870 and 860 may still appear entirely on the UI 805.
Thefifth stage3130 illustrates theUI805 after each of thedisplay areas870 and860 have rotated ninety degrees counter clockwise from portrait view to landscape view. In this stage, thedisplay areas870 and860 have been rotated but have not yet expanded across the full width of theUI805. Moreover, thedisplay area860 has not moved into its final position. The final position of theinset display area860 in thePIP display880 is determined by the position of theinset display area860 in thePIP display880 as shown in the first stage3010 (e.g., theinset display area860 in the lower left corner of the PIP display880). In this stage, theinset display area860 is still in the upper left corner of theUI805.
Thearrows3180 indicate that at the end of thefifth stage3130, thedisplay areas870 and860 will start to laterally expand until themain display area870 fits the full width of theUI805 for a device that is held horizontally. Moreover, thearrow3175 indicates that theinset display area860 will slide to the lower left corner of thePIP display880.
Different embodiments may implement this differently. In some embodiments, the moving of theinset display area860 may occur simultaneously as the expansion of themain display area870 or sequentially. Moreover, some embodiments may resize theinset display areas860 before, during or after the expansion of themain display area870 to create thenew PIP display880. In this example, thedisplay area855 disappears while thedisplay areas860 and870 are rotating. However, thedisplay area855 may remain on theUI805 during the rotation and rotate along with thedisplay areas860 and870 in some embodiments.
Thesixth stage3135 illustrates theUI805 after theinset display area860 has reached its new location and thedisplay areas860 and870 have been properly expanded to fit the full width of theUI805. In this example, theinset display area860 is now in the lower left corner of thePIP display880, overlapping themain display area870. ThePIP display880 now has the same display arrangement as thePIP display880 from thefirst stage3010. The appearance of thedisplay area855 below thePIP display880 in the sixth stage indicates that the rotation operation is completed. As noted above, simply rotating the screen of a device past a threshold amount may trigger the rotation operation, notwithstanding the orientation of thedevice3100.
In the examples described above by reference toFIGS. 30 and 31, the orientation of thedisplay area870 also changes (i.e., from portrait to landscape). That is, after thedisplay area870 is rotated in thethird stage3020, the orientation of thedisplay area870 changes from portrait to landscape by horizontally expanding thePIP display880 so that it fills theentire UI805. In some embodiments, when thedevice3100 is rotated, video captured by the remote device rotates but the orientation of the display area that displays the video captured by the remote device remains unchanged. One such example is illustrated inFIG. 32. This figure is similar toFIG. 31 except that video displayed in thedisplay area870 rotates but thedisplay area870 remains displayed in portrait orientation.
FIG. 32 also illustrates an example of a rotation operation in which thedisplay area855 remains in the same position (instead of rotating and expanding horizontally to fill thePIP display880 as shown inFIG. 31). Moreover, this figure includes a layout of thedisplay area855 that is the same as the layout of thedisplay area855, described above inFIG. 9. As shown, thedisplay area855 remains in the same position as thedevice3100 rotates in thestages3240,3245,3250,3255,3285, and3290.
Some embodiments provide a rotation operation in which the orientation of the display area that displays video captured by the local device changes (instead of remaining in the same orientation as shown inFIG. 31) to reflect the orientation of the local device after the rotation operation is performed on the local device.FIG. 32 illustrates an example of such a rotation operation of aUI805 by reference to sixdifferent stages3240,3245,3250,3255,3285, and3290. In this figure, thefirst stage3240 shows theinset display area860, which displays video captured by a camera of thedevice3100, in a portrait orientation. The second andthird stages3245 and3250 are similar to the second andthird stages3015 and3020 ofFIG. 31 as they show the tilting of thedevice3100 at various stages of the rotation operation. At this point, the camera of thedevice3100 is capturing images in a landscape orientation. To indicate this transition, some embodiments provide an animation as shown in fourth andfifth stages3255 and3285 while other embodiments do not provide any animation at all.
In the fourth stage 3255, the image displayed in the inset display area 860 is rotated, but not the inset display area 860 itself, since the tilting of the device 3100 in the second and third stages 3245 and 3250 has rotated the inset display area 860 to a landscape orientation. In the fifth stage 3285, the rotated image in the inset display area 860 is horizontally expanded to fill the inset display area 860, and the inset display area 860 starts to move towards the lower left area of the PIP display 880 to position the inset display area 860 in the same relative position as the inset display area 860 in the PIP display of the first stage 3240.
In some embodiments, the orientation of the display area that displays the video captured by the remote device also changes to reflect the orientation of the remote device after a rotation operation is performed on the remote device.FIG. 33 illustrates four different stages of aUI805 of thedevice3100 in which (1) the orientation of the display area that displays the video captured by the local device (display area860 in this example) changes to reflect the orientation of the local device after a rotation operation is performed on the local device and (2) the orientation of the display area that displays video captured by the remote device (display area870 in this example) changes to reflect the orientation of the remote device after a rotation operation is performed on the remote device.
In thefirst stage3305, theUI805 is the same as theUI805 inFIG. 32. Specifically, thefirst stage3305 shows thedisplay areas860 and870 in a portrait orientation because thedevice3100 is shown in a portrait orientation and the remote device is in a portrait orientation (not shown). From thefirst stage3305 to thesecond stage3310, a rotation operation is performed on the local device by rotating thedevice3100 ninety degrees from an upright position to a sideways position. Thesecond stage3310 shows theUI805 after the rotation operation of thedevice3100 is completed. In this stage, the videos displayed in thedisplay areas870 and860 have rotated to an upright position. However, only thedisplay area860 of the locally captured video has rotated from a portrait orientation to a landscape orientation since the rotation operation is only performed on the local device (i.e., the device3100). Thedisplay area870 remains in the portrait orientation.
From the second stage 3310 to the third stage 3315, a rotation operation is performed on the remote device by rotating the remote device from an upright position to a sideways position (not shown). The third stage 3315 shows the UI 805 after the rotation operation of the remote device is completed. In this stage, both the video displayed in the display area 870 and the display area 870 of the remotely captured video have rotated from a portrait orientation to a landscape orientation, since the rotation operation is performed only on the remote device. Thus, this stage of the UI 805 displays the display areas 870 and 860 of the locally and remotely captured videos both in landscape orientation.
From thethird stage3315 to thefourth stage3320, a rotation operation is performed on the local device by rotating thedevice3100 ninety degrees from a sideways position to an upright position. Thefourth stage3320 shows theUI805 after the completion of this rotation operation. In thisfourth stage3320, the videos displayed in thedisplay areas860 and870 have rotated to an upright position. However, only thedisplay area860 of the locally captured video has rotated from a landscape orientation to a portrait orientation since the rotation operation is only performed on the local device (i.e., the device3100). Thedisplay area870 remains in the landscape orientation.
From thefourth stage3320 to thefirst stage3305, a rotation operation is performed on the remote device by rotating the remote device ninety degrees from a sideways position to an upright position (not shown). In this case, thefirst stage3305 shows thedisplay area870 after the completion of this rotation operation. Therefore, theUI805 of this stage shows thedisplay areas860 and870 in a portrait orientation. AlthoughFIG. 33 illustrates a sequence of different rotation operations, other embodiments can perform any number of rotation operations in any number of different sequences.
FIGS. 30, 31, 32, and 33 describe rotate operations performed on local and remote devices during a video conference. When a rotate operation is performed on the local mobile device, some embodiments notify the remote device of the rotate operation in order for the remote device to perform any modifications to the local device's video (such as rotating the display area that is displaying the local device's video). Similarly, when a rotate operation is performed on the remote device, the remote device notifies the local device of this operation to allow the local device to perform any modifications to the remote device's video. Some embodiments provide a control communication channel for communicating the notification of rotate operations between the local and remote devices during the video conference.
Even thoughFIGS. 30,31,32, and33 illustrate different manners in which the animation of a rotation can be performed, one of ordinary skill will realize that other embodiments may display the animation of the rotation in other different ways. In addition, the animation of the rotation operation can cause changes to the image processing operations of the local mobile device such as causing thevideo conference manager1204 to re-composite the display area(s) at different angles in theUI805 and scale the images displayed in the display area(s).
3. Window Size Adjustment
Some embodiments allow a user of a mobile device to adjust the size of an inset display area of a PIP display presented during a video conference. Different embodiments provide different techniques for resizing an inset display area.FIG. 34 illustrates one approach for resizing the inset display area. In this approach, the user of the mobile device adjusts the size of the inset display area by selecting a corner of the inset display area and then expanding or shrinking the inset display area.
InFIG. 34, aUI3400 of amobile device3425 presents aPIP display3465 during a video conference with a remote user of another mobile device. ThisPIP display3465 includes two video displays: a backgroundmain display area3430 and a foregroundinset display area3435. The backgroundmain display area3430 takes up a majority of thePIP display3465 while the foregroundinset display area3435 is smaller and overlaps the backgroundmain display area3430. In this example, the backgroundmain display area3430 presents a video of a person holding a guitar, which is assumed to be a person whose video is being captured by the remote device's front camera or a person whose video is being captured by the remote device's back camera. The foregroundinset display area3435 presents a video of a person with a hat, which, in this example, is assumed to be a person whose video is being captured by the local device's front camera or a person whose video is being captured by the local device's back camera. Below thePIP display3465 is adisplay area855 that includes aselectable UI item3460 labeled “End Conference” (e.g. a button3460) that allows the user to end the video conference by selecting the item.
ThisPIP display3465 is only one manner of presenting a composite view of the videos being captured by the remote and local devices. Some embodiments may provide other composite views. For instance, instead of having a larger background display for the video from the remote device, the larger background display can be of the video from the local device and the smaller foreground inset display can be of the video from the remote device. Also, some embodiments allow the local and remote videos to appear in theUI3400 in two side-by-side display areas (e.g. left and right display windows, or top and bottom display windows) or two diagonally aligned display areas. The manner of the PIP display or a default display mode may be specified by the user in some embodiments. In other embodiments, the PIP display may also contain a larger background display and two smaller foreground inset displays.
FIG. 34 illustrates the resize operation in terms of four operational stages of theUI3400. In thefirst stage3405, theforeground inset display3435 is substantially smaller than the backgroundmain display area3430. Also in this example, the foregroundinset display area3435 is located at the lower right corner of thePIP display3465. In other examples, the foregroundinset display area3435 may be a different size or located in a different area in thePIP display3465.
In thesecond stage3410, the resizing operation is initiated. In this example, the operation is initiated by selecting a corner of theinset display area3435 that the user wants to resize (e.g., by holding afinger3440 down on the upper left corner of the inset display area3435). Thesecond stage3410 of theUI3400 indicates this selection in terms of athick border3445 for theinset display area3435. At this stage, the user can expand or shrink the inset display area3435 (e.g., by dragging hisfinger3440 on thePIP display3465 away from theinset display area3435 or toward the inset display area3435).
Thethird stage3415 illustrates theUI3400 after the user has started to expand theinset display area3435 by moving hisfinger3440 away from the inset display area3435 (i.e., by moving his finger diagonally towards the upper left corner of theUI3400 in this example), as indicated by anarrow3450. Also as indicated byarrow3455, the movement of thefinger3440 has expanded theinset display area3435 proportionally in both height and width. In other examples, the user can shrink theinset display area3435 using the same technique (i.e., by dragging the finger toward the inset display area3435).
Thefourth stage3420 displays theUI3400 after the resizing of theinset display area3435 has been completed. In this example, the user completes the resize of theinset display area3435 by stopping the dragging of hisfinger3440 and removing his finger from thePIP display3465 once theinset display area3435 has reached the desired size. As a result of this procedure, the resizedinset display area3435 is larger than its original size in thefirst stage3405. The removal of thethick border3445 indicates that the inset display area resize operation is now completed.
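A minimal sketch of the corner-drag resize just described: the drag is projected onto the corner's outward diagonal and the inset's width and height change proportionally. The sign convention (dragging away from the inset enlarges it, toward it shrinks it), the projection, and the size limits are assumptions for the example, in which the inset sits at the lower right and its upper-left corner is selected.

```python
def resize_by_corner(inset_w: float, inset_h: float,
                     drag_dx: float, drag_dy: float,
                     min_w: float = 60.0, max_w: float = 480.0):
    """Resize an inset proportionally from a drag that started on its upper-left corner.

    Dragging up and to the left (negative dx and dy) moves away from the inset and
    enlarges it; dragging toward the inset shrinks it. Aspect ratio is preserved.
    """
    outward = -(drag_dx + drag_dy) / 2.0          # projection onto the outward diagonal
    new_w = max(min_w, min(max_w, inset_w + outward))
    scale = new_w / inset_w
    return new_w, inset_h * scale
```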
Some embodiments provide other techniques for allowing a user to resize an inset display area 3435 in a PIP display 3465 during a video conference. FIG. 35 illustrates one such other technique. This figure illustrates a technique for resizing the inset display area 3435 by selecting an edge of the inset display area 3435 (i.e., one of the sides of this display area 3435) and then expanding or shrinking the inset display area 3435.
FIG. 35 illustrates this resizing operation in terms of four operational stages of theUI3400 ofFIG. 34. Thefirst stage3405 inFIG. 35 is the same as thefirst stage3405 inFIG. 34. Specifically, in this stage, theUI3400 ofdevice3525 illustrates thePIP display3465 with a larger backgroundmain display area3430 and a smaller foregroundinset display area3435 at the bottom right corner of thePIP display3465. Even thoughFIGS. 34 and 35 illustrate two different techniques for resizing aninset display area3435 in thesame UI3400, one of ordinary skill will realize that some embodiments will not offer both these techniques in the same UI.
Thesecond stage3510 illustrates the start of a resizing operation. In this example, the user initiates the operation by selecting a side of theinset display area3435 that the user wants to resize (e.g., by placing afinger3440 down on the top edge or the side edge of the inset display area3435). In this example, a user places hisfinger3440 on the top edge of theinset display area3435 in order to make this selection. Thesecond stage3510 indicates this selection in terms of athick border3445 for theinset display area3435.
Thethird stage3515 illustrates theUI3400 after the user has started to expand theinset display area3435 by moving hisfinger3440 away from the inset display area3435 (i.e., vertically toward the top of the PIP display3465), as indicated by anarrow3550. Also as indicated byarrow3555, the movement of thefinger3440 has expanded theinset display area3435 proportionally in both height and width. In other examples, the user can shrink thedisplay area3435 using the same technique (e.g., by dragging thefinger3440 toward the inset display area3435).
Thefourth stage3520 displays theUI3400 after the resizing of theinset display area3435 has been completed. In this example, the user completes the resize of theinset display area3435 by stopping the dragging of hisfinger3440 and removing hisfinger3440 from the device's display screen once theinset display area3435 has reached the desired size. As a result of this procedure, the resizedinset display area3435 is larger than its original size in thefirst stage3405. The removal of thethick border3445 indicates that the inset display area resize operation is now completed.
In response to a drag operation, some embodiments adjust the size of theinset display area3435 proportionally in height and width, as illustrated byFIGS. 34 and 35. Other embodiments may allow the user to adjust the height and/or width of aninset display area3435 without affecting the other attribute.FIG. 36 illustrates an example of one such resizing approach.
Specifically,FIG. 36 illustrates aUI3400 of a mobile device3625 that is similar to theUI3400 ofFIG. 34 except theUI3400 ofFIG. 36 allows theinset display area3435 to be expanded in the horizontal direction and/or vertical direction when one of the edges of theinset display area3435 is selected and moved horizontally or vertically. To simplify the description of theUI3400,FIG. 36 illustrates aPIP display3465 in theUI3400 that is similar to thePIP display3465 ofFIG. 34 except now theinset display area3435 is in the upper right corner of thePIP display3465. ThePIP display3465 includes two video displays: a backgroundmain display area3430 and a foregroundinset display area3435. In this example, the backgroundmain display area3430 presents a video that is being captured by the remote device's front camera or back camera. The foregroundinset display area3435 presents a video that is being captured by the local device's front camera or back camera.
LikeFIG. 34,FIG. 36 illustrates the resizing operation in terms of four operational stages of theUI3400. Thefirst stage3605 is similar to thefirst stage3405 ofFIG. 34 except now theinset display area3435 is in the upper right corner. The other threestages3610,3615 and3620 are similar to the threestages3510,3515 and3520 except that the selection and movement of the bottom edge of theinset display area3435 has caused theinset display area3435 to only expand in the vertical direction without affecting the width of theinset display area3435.
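For the FIG. 36 variant, in which dragging an edge changes only the corresponding dimension, the same idea applies per axis. The edge names, the sign convention, and the size limits below are again assumptions.

```python
def resize_by_edge(inset_w: float, inset_h: float, edge: str, drag_delta: float,
                   min_size: float = 60.0, max_size: float = 480.0):
    """Resize only one dimension of the inset, based on which edge is being dragged.

    edge is 'top', 'bottom', 'left', or 'right'; drag_delta is the movement away from
    the inset along that edge's outward direction (positive enlarges, negative shrinks).
    """
    if edge in ("top", "bottom"):
        new_h = max(min_size, min(max_size, inset_h + drag_delta))
        return inset_w, new_h                     # width is left unchanged
    new_w = max(min_size, min(max_size, inset_w + drag_delta))
    return new_w, inset_h                         # height is left unchanged
```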
FIGS. 34, 35, and 36 provide example embodiments that allow the user to resize an inset display area 3435 of a PIP display 3465 by selecting a corner or a side of the inset display area 3435. Some embodiments provide other techniques for resizing an inset window 3435. For instance, FIG. 37 illustrates that some embodiments allow the inset display area 3435 to be resized by selecting the interior of the inset display area 3435. In this approach, the user adjusts the size of the inset display area 3435 by placing two fingers 3755 and 3756 on the screen and dragging the fingers either away from or toward each other.
InFIG. 37, aUI3400 of amobile device3740 provides aPIP display3465 during a video conference with a remote user of another mobile device. To simplify the description of theUI3400,FIG. 37 illustrates aPIP display3465 in thisUI3400 that is similar to thePIP display3465 ofFIG. 34.
FIG. 37 illustrates the resizing operation in terms of seven operational stages of theUI3400. The first fourstages3405,3710,3715, and3720 show the expansion of aninset display area3435 while the last three stages show the shrinking of theinset display area3435. Thefirst stage3405 inFIG. 37 is the same as thefirst stage3405 inFIG. 34. Specifically, in this stage, theUI3400 illustrates thePIP display3465 with a larger backgroundmain display area3430 and a smaller foregroundinset display area3435. In this example, the backgroundmain display area3430 presents a video that is being captured by the remote device's front camera or back camera. The foregroundinset display area3435 presents a video that is being captured by the local device's front camera or back camera.
Thesecond stage3710 illustrates theUI3400 after the resizing operation is initiated. In this example, the user initiates the operation by selecting theinset display area3435 that the user wants to resize (e.g., by placing twofingers3755 and3756 down within the inset display area3435). Thesecond stage3710 of theUI3400 indicates this selection in terms of athick border3790 for theinset display area3435.
Thethird stage3715 illustrates theUI3400 after the user has started to expand theinset display area3435 by moving hisfingers3755 and3756 away from each other (i.e., movingfinger3755 toward the upper left corner of thePIP display3465 and movingfinger3756 toward the lower right corner of the PIP display3465), as indicated byarrows3760. As indicated by anarrow3765, the movement of thefingers3755 and3756 has expanded theinset display area3435 proportionally in both height and width.
Thefourth stage3720 displays theUI3400 after the resizing of theinset display area3435 has been completed. In this example, the user completes the resize of theinset display area3435 by stopping the dragging of hisfingers3755 and3756 and removing hisfingers3755 and3756 from the device's display screen. As a result of this procedure, the resizedinset display area3435 is larger than its original size in thefirst stage3405. The removal of thethick border3790 indicates that the inset display area resize operation is now completed.
In thefifth stage3725, the user re-selects theinset display area3435 by placing down twofingers3755 and3756 on theinset display area3435. Thesixth stage3730 illustrates theUI3400 after the user has started to shrink theinset display area3435 by moving hisfingers3755 and3756 closer together, as indicated byarrows3770. As indicated by an arrow3775, the movement of thefingers3755 and3756 has shrunk theinset display3435 proportionally in both height and width.
Theseventh stage3735 is similar to thefourth stage3720 inFIG. 37, except that theinset display area3435 has shrunk in size as a result of the operation. The removal of thethick border3790 indicates that the inset display area resize operation is now completed.
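The two-finger resize of FIG. 37 can be modeled as scaling the inset by the ratio of the current finger separation to the separation when the gesture began. The tuple-based touch representation and the scale limits in the sketch below are assumptions; nothing about an actual gesture recognizer is specified in the description.

```python
import math


def pinch_resize(start_touches, current_touches, start_w: float, start_h: float,
                 min_scale: float = 0.5, max_scale: float = 3.0):
    """Return the inset size implied by a two-finger pinch or spread gesture.

    start_touches and current_touches are ((x1, y1), (x2, y2)) finger positions.
    Moving the fingers apart enlarges the inset; moving them together shrinks it.
    """
    def separation(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    start_dist = separation(start_touches)
    if start_dist == 0:
        return start_w, start_h                   # degenerate gesture: leave size unchanged
    scale = separation(current_touches) / start_dist
    scale = max(min_scale, min(max_scale, scale))
    return start_w * scale, start_h * scale
```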
The above description of FIGS. 34-37 illustrates several example user interfaces that allow a user to resize an inset display area of a PIP display. In some embodiments, the resizing of an inset display area causes changes to the image processing operations of the dual camera mobile device, such as causing the video conference manager 1204 to change the scaling and compositing of the inset display area in the PIP display in response to the user's input. In addition, in some embodiments the layout of the display area 855 in FIGS. 34-37 is the same as the layout of the display area 855 of FIG. 9, described above.
4. Identifying Regions of Interest
Some embodiments allow a user to identify a region of interest (ROI) in a displayed video during a video conference in order to modify the image processing (e.g., theimage processing manager1208 inFIG. 12), the encoding (e.g., theencoder1255 inFIG. 12), the behavior of the mobile devices and their cameras during the video conference, or a combination thereof. Different embodiments provide different techniques for identifying such a region of interest in a video.FIG. 38 illustrates a user interface of some embodiments for identifying a region of interest in a video in order to improve the image quality of the video.
InFIG. 38, aUI3800 of amobile device3825 presents aPIP display3865 during a video conference with a remote user of another mobile device. The PIP display inFIG. 38 is substantially similar to the one inFIG. 37. Specifically, the PIP display inFIG. 38 includes two video displays: a backgroundmain display3830 and aforeground inset display3835. In this example, the backgroundmain display3830 presents a video of a tree and a person with a hat, which are assumed to be a tree and a person whose video is being captured by the remote device's front camera or a tree and a person whose video is being captured by the remote device's back camera. Theforeground inset display3835 presents a video of a man, which in this example is assumed to be a man whose video is being captured by the local device's front camera or a person whose video is being captured by the local device's back camera. Below the PIP display is adisplay area855 that includes aselectable UI item3860 labeled “End Conference” (e.g. a button3860) that allows the user to end the video conference by selecting the item.
This PIP display is only one manner of presenting a composite view of the videos being captured by the remote and local devices. Some embodiments may provide other composite views. For instance, instead of having a larger background display for the video from the remote device, the larger background display can be of the video from the local device and the smaller foreground inset display can be of the video from the remote device. Also, some embodiments allow the local and remote videos to appear in the UI in two side-by-side display areas (e.g. left and right display windows, or top and bottom display windows) or two diagonally aligned display areas. In other embodiments, the PIP display may also contain a larger background display and two smaller foreground inset displays. The manner of the PIP display or a default display mode may be specified by the user in some embodiments.
FIG. 38 illustrates the ROI identification operation in terms of four operational stages of theUI3800. As shown in thefirst stage3805, the video presented in thebackground display3830 has very low quality (i.e., the video images are fuzzy). In this example, a user of amobile device3825 would like to identify the area in thebackground display3830 where the person'sface3870 appears as the region of interest.
In thesecond stage3810, the operation of identifying a region of interest is initiated. In this example, the operation is initiated by selecting an area in the video presented in thebackground display3830 that the user wants to identify as the region of interest (e.g., by tapping afinger3850 on the device's screen at a location about the displayed person'sface3870 in the background display3830).
As shown in thethird stage3815, the user's selection of the area causes theUI3800 to draw an enclosure3875 (e.g., a dotted square3875) surrounding the area of the user's selection. Thefourth stage3820 displays theUI3800 after the identification of the region of interest has been completed. As a result of this process, the quality of the video within the region of interest has been substantially improved from that in thefirst stage3805. The removal of theenclosure3875 indicates that the ROI selection operation is now completed. In some embodiments, the ROI identification process also causes the same changes to the same video displayed on the remote device as it does to thelocal device3825. In this example for instance, the picture quality within the region of interest of the same video displayed on the remote device is also substantially improved.
In some embodiments, the user may enlarge or shrink theenclosure3875 in the third stage3815 (e.g., by holding thefinger3850 down on the display and moving thefinger3850 toward the upper right corner of the screen to enlarge theenclosure3875 or moving thefinger3850 toward the lower left corner of the screen to shrink the enclosure3875). Some embodiments also allow the user to relocate theenclosure3875 in the third stage3815 (e.g., by holding thefinger3850 down on the display and moving thefinger3850 horizontally or vertically on the display). In some other embodiments, the selection of the area may not cause theUI3800 to draw theenclosure3875 at all in thethird stage3815.
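The tap-and-adjust interaction described above can be summarized with a short sketch. The following Python fragment is only an illustrative approximation under assumed names and conventions (normalized display coordinates and a default enclosure size); it is not the implementation of the UI 3800.

# Illustrative sketch only: derive an ROI "enclosure" from a tap location,
# then resize or relocate it in response to later finger movement.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float      # left edge, in normalized display coordinates (0..1)
    y: float      # top edge
    w: float      # width
    h: float      # height

def enclosure_from_tap(tap_x, tap_y, size=0.2):
    """Center a square enclosure (e.g., a dotted square) on the tap location."""
    half = size / 2.0
    return Rect(max(0.0, tap_x - half), max(0.0, tap_y - half), size, size)

def resize_enclosure(rect, scale):
    """Enlarge (scale > 1) or shrink (scale < 1) the enclosure about its center."""
    cx, cy = rect.x + rect.w / 2, rect.y + rect.h / 2
    w, h = rect.w * scale, rect.h * scale
    return Rect(cx - w / 2, cy - h / 2, w, h)

def move_enclosure(rect, dx, dy):
    """Relocate the enclosure by a horizontal or vertical finger drag."""
    return Rect(rect.x + dx, rect.y + dy, rect.w, rect.h)

# Example: tap near the displayed face, then enlarge the enclosure slightly.
roi = enclosure_from_tap(0.45, 0.30)
roi = resize_enclosure(roi, 1.25)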
Other embodiments provide different techniques for allowing a user to identify a region of interest in a video.FIG. 39 illustrates one such other technique. InFIG. 39, the user identifies a region of interest by drawing a shape that bounds the region. The shape in this example is a rectangle, but it can be other shapes (e.g., any other polygon, a circle, an ellipse, etc.). Some embodiments provide this alternative technique ofFIG. 39 in a device UI that also provides the technique illustrated inFIG. 38. Other embodiments, however, do not provide both these techniques in the same UI.
FIG. 39 illustrates this ROI identification operation in terms of five operational stages of aUI3800. Thefirst stage3805 inFIG. 39 is identical to thefirst stage3805 inFIG. 38. Specifically, in thisfirst stage3805, theUI3800 illustrates aPIP display3865 with a larger backgroundmain display3830 and a smallerforeground inset display3835 at the bottom left corner of thePIP display3865.
In thesecond stage3910, the operation of identifying a region of interest is initiated. In this example, the operation is initiated by selecting for a duration of time a first position for defining the region of interest in a video presented in the background display area3830 (e.g., by holding afinger3950 down on the device's screen at a location about the displayed person'sface3870 in thebackground display3830 for a duration of time). In thethird stage3915, theUI3800 indicates that thefirst position3970 has been selected in terms of adot3955 next to the selected first position on thebackground display area3830.
Thefourth stage3920 illustrates theUI3800 after the user has selected asecond position3975 for defining the region of interest. In this example, the user selects thissecond position3975 by dragging thefinger3950 across the device's screen from the first location after thedot3955 appears and stopping at a location between the displayed hat and the displayed tree in thebackground display area3830, as indicated by anarrow3960. As shown in the fourth stage, this dragging caused theUI3800 to draw a rectangular border3965 for the region of interest area that has the first andsecond positions3970 and3975 at its opposing vertices.
Thefifth stage3925 illustrates theUI3800 after identification of the region of interest has been completed. In this example, the user completes identification of the region of interest by stopping the dragging of thefinger3950 and removing thefinger3950 from the device's display screen once the desired region of interest area has been identified. Thefifth stage3925 illustrates that as a result of the drawing process, the quality of the video within the region of interest has been substantially improved from that in thefirst stage3805. In some embodiments, the drawing process also causes the same changes to the display on the remote device as it does to thelocal device3825. In this example for instance, the picture quality within the region of interest of the same video displayed on the remote device will be substantially improved.
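As a rough sketch of the two-position technique of FIG. 39, the region of interest can be built from the press point and the release point treated as opposing vertices of a rectangle. The coordinate handling below is an assumption made purely for illustration.

# Illustrative sketch: build the ROI rectangle from two drag endpoints that
# serve as opposing vertices (the first and second positions of FIG. 39).
def roi_from_two_points(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    left, top = min(x1, x2), min(y1, y2)
    width, height = abs(x2 - x1), abs(y2 - y1)
    return (left, top, width, height)

# Example: press near the face, drag to a point between the hat and the tree.
print(roi_from_two_points((0.40, 0.25), (0.75, 0.60)))  # (0.40, 0.25, 0.35, 0.35)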
The description ofFIGS. 38 and 39, above, illustrates different manners of identifying a region of interest in a video in order to improve the picture quality of the identified region. In some embodiments, improving the picture quality of the identified region of interest causes changes to the encoding operations of the dual camera mobile device such as allocating more bits to the identified region when encoding the video.
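One common way to allocate more bits to an identified region is to lower the quantization parameter for the blocks that overlap that region. The sketch below is a hypothetical illustration of this general idea, not the behavior of the encoder 1255; the block size and quantization offsets are assumed values.

# Illustrative sketch: assign a lower quantization parameter (QP) to blocks
# that overlap the region of interest so that more bits are spent there
# when the video is encoded.
def qp_map(frame_w, frame_h, roi, base_qp=32, roi_qp_offset=-6, block=16):
    rx, ry, rw, rh = roi  # ROI in pixel coordinates
    qps = []
    for by in range(0, frame_h, block):
        row = []
        for bx in range(0, frame_w, block):
            overlaps = (bx < rx + rw and bx + block > rx and
                        by < ry + rh and by + block > ry)
            row.append(base_qp + roi_qp_offset if overlaps else base_qp)
        qps.append(row)
    return qps

# Example: a 640x480 frame with the ROI around the person's face.
qmap = qp_map(640, 480, roi=(200, 100, 160, 160))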
Some embodiments allow the user to identify a region of interest in a video to make different changes to the mobile devices or their cameras. For instance,FIG. 40 illustrates an example of identifying a region of interest in a video to expand or shrink the region of interest area on the display. In this approach, the user identifies a region of interest in a video by selecting an area on the display as the center of the region of interest and then expanding or shrinking the region of interest area.
InFIG. 40, aUI4000 of amobile device4025 presents aPIP display3865 during a video conference with a remote user of another mobile device. ThePIP display3865 inFIG. 40 is substantially similar to thePIP display3865 ofFIG. 38, but theforeground inset display3835 ofFIG. 40 is located in the lower left corner of thePIP display3865.
FIG. 40 illustrates the ROI selection operation in terms of four operational stages of theUI4000. As shown in thefirst stage4005, thebackground display4030 presents a video with a man on the left and atree4040 on the right of thedisplay4030. Moreover, thetree4040 is relatively small and occupies only the right side of thebackground display area4030. In this example, a user of amobile device4025 would like to identify the area where thetree4040 appears on thedisplay4030 as the region of interest.
In thesecond stage4010, the operation of identifying a region of interest is initiated. In this example, the operation is initiated by selecting anarea4040 in the video presented in thebackground display4030 that the user wants to identify as the region of interest (e.g., by holding twofingers4045 and4046 down on thebackground display area4030 where thetree4040 is displayed). At thisstage4010, the user can make the region ofinterest area4040 expand and take a larger portion of thebackground display area4030 by dragging hisfingers4045 and4046 farther away from each other. The user can also make the region ofinterest4040 shrink to take a smaller portion of thebackground display area4030 by dragging hisfingers4045 and4046 closer together.
Thethird stage4015 illustrates theUI4000 after the user has started to make the region ofinterest4040 expand to take up a larger portion of thebackground display area4030 by moving hisfingers4045 and4046 farther away from each other (i.e., thefinger4045 moves toward the upper left corner of thebackground display area4030 and thefinger4046 moves toward the lower right corner of the display4030), as indicated byarrows4050. In some embodiments, the finger movement also causes the same changes to the display of the remote device as it does to the local device. In this example for instance, the region of interest of the same video will expand and take up a larger portion of thebackground display area4030 of the remote device. In some embodiments, the expansion of the region of interest in the local display and/or remote display causes one or both of the mobile devices or their cameras to modify one or more of their other operations, as further described below.
Thefourth stage4020 displays theUI4000 after the identification of the region of interest has been completed. In this example, the user completes the identification of the region of interest by stopping the dragging of hisfingers4045 and4046 and removing thefingers4045 and4046 from the device's display screen once the region of interest has reached the desired proportion in thebackground display area4030. As a result of this process, the region of interest has taken up a majority of thebackground display4030. The identification of the region of interest operation is now completed.
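The pinch interaction of FIG. 40 can be approximated by scaling the region of interest about its center in proportion to the change in distance between the two fingers. The sketch below is an illustrative assumption, not the actual gesture handling of the UI 4000.

# Illustrative sketch: expand or shrink a region of interest in proportion to
# the change in distance between two fingers (the pinch gesture of FIG. 40).
import math

def finger_distance(f1, f2):
    return math.hypot(f1[0] - f2[0], f1[1] - f2[1])

def scale_roi(roi, start_fingers, current_fingers):
    (x, y, w, h) = roi
    scale = finger_distance(*current_fingers) / finger_distance(*start_fingers)
    cx, cy = x + w / 2, y + h / 2
    new_w, new_h = w * scale, h * scale
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)

# Example: the fingers move apart, so the ROI around the tree grows.
start = ((0.60, 0.50), (0.70, 0.55))
now = ((0.45, 0.40), (0.85, 0.70))
print(scale_roi((0.55, 0.35, 0.30, 0.30), start, now))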
Some of the examples above illustrate how a user may identify a region of interest in a video for improving the image quality within the selected region of interest in the video (e.g., by increasing the bit rate for encoding the region of interest portion of the video). In some embodiments, identifying a region of interest in the video causes changes to the image processing operations of the mobile device such as exposure, scaling, focus, etc. For example, identifying a region of interest in the video can cause the video conference manager 1204 to scale and composite the images of the video differently (e.g., zooming in on the identified region of interest).
In other embodiments, identifying a region of interest in the video causes changes to the operation of the mobile device's camera(s) (e.g., frame rate, zoom, exposure, scaling, focus, etc.). In yet other embodiments, identifying a region of interest in the video causes changes to the encoding operations of the mobile device like allocating more bits to the identified region, scaling, etc. In addition, while the example ROI identification operations described above may cause only one of the above-described modifications to the mobile device or its cameras, in some other embodiments the ROI identification operation may cause more than one of the modifications to the operation of the mobile device or its cameras. In addition, in some embodiments, the layout of thedisplay area855 inFIGS. 38-40 is the same as the layout of thedisplay area855 ofFIG. 9, described above.
B. Switch Camera
Some embodiments provide procedures to switch cameras (i.e., change the camera by which images are captured) during a video conference. Different embodiments provide different procedures for performing the switch camera operation. Some embodiments provide procedures performed by a dual camera mobile device for switching cameras of the device (i.e., local switch) while other embodiments provide procedures for the dual camera mobile device to instruct another dual camera mobile device in the video conference to switch cameras of the other device (i.e., remote switch). Yet other embodiments provide procedures for both. Section IV.B.1 will describe a process for performing a local switch camera operation on a dual camera mobile device. Section IV.B.2 will describe a process for performing a remote switch camera operation on the dual camera mobile device.
1. Local Switch Camera
FIG. 41 illustrates aprocess4100 that some embodiments perform on a local dual camera mobile device to switch between the two cameras of the device during a video conference with a remote mobile device that includes at least one camera. In some embodiments, theprocess4100 is performed by thevideo conference manager1204 shown inFIG. 12. For purposes of explanation, the discussion will refer to one camera of the local dual camera mobile device ascamera 1 and the other camera of the local dual camera mobile device ascamera 2.
Theprocess4100 begins by starting (at4105) a video conference between the local dual camera mobile device and the remote mobile device. Next, theprocess4100 sends (at4110) a video image from the currently selected camera (e.g., the camera 1) of the local dual camera mobile device to the remote mobile device for display on the remote mobile device. At4110, the process also generates and displays a composite display based on this video image and the video image that it receives from the remote mobile device.
Theprocess4100 then determines (at4115) whether a request to end the video conference is received. As described above, a video conference can end in some embodiments at the request of a user of the local dual camera mobile device (e.g., through a user interface of the local dual camera mobile device) or a user of the remote mobile device (e.g., through a user interface of the remote mobile device). When theprocess4100 receives a request to end the video conference, theprocess4100 ends.
When theprocess4100 does not receive a request to end the video conference, theprocess4100 then determines (at4120) whether the user of the local dual camera mobile device has directed the device to switch cameras for the video conference. Theprocess4100 returns tooperation4110 when theprocess4100 determines (at4120) that it has not been directed to switch cameras. However, when theprocess4100 determines (at4120) that it has been so directed, theprocess4100 transitions to4125.
At4125, theprocess4100 sends a notification to the remote mobile device to indicate that the local dual camera mobile device is switching cameras. In some embodiments, theprocess4100 sends the notification through the video conference control channel that is multiplexed with the audio and video channels by theVTP Manager2725 as described above.
After sending its notification, the process 4100 performs (at 4130) a switch camera operation. In some embodiments, performing (at 4130) the switch camera operation includes instructing the CIPU to stop capturing video images with the camera 1 and to start capturing video images with the camera 2. These instructions can simply direct the CIPU to switch to capturing images from the pixel array associated with the camera 2 and to start processing these images. Alternatively, in some embodiments, the instructions to the CIPU are accompanied by a set of initialization parameters that direct the CIPU (1) to operate the camera 2 based on a particular set of settings, (2) to capture video generated by the camera 2 at a particular frame rate, and/or (3) to process video images from the camera 2 based on a particular set of settings (e.g., resolution, etc.).
In some embodiments, the switch camera instruction (at4130) also includes instructions for switching the unused camera to the fourth operational power mode as described above. In this example, the switch camera instructions include instructions for thecamera 2 to switch to its fourth operational power mode. In addition, the switch camera instructions also include instructions for thecamera 1 to switch from its fourth operational power mode to another operational power mode such as the first operational power mode to conserve power or to the third operational power mode so it can quickly switch to the fourth operational power mode and start capturing images when requested to do so. Theswitch camera operation4130 also involves compositing images captured by thecamera 2 of the local dual camera mobile device (instead of images captured by the camera 1) with images received from the remote mobile device for display on the local dual camera mobile device.
After directing the switch camera at4130, theprocess4100 performs (at4135) a switch camera animation on the local dual camera mobile device to display a transition between the display of images from thecamera 1 and the display of images from thecamera 2. Following the switch camera animation on the local dual camera mobile device, theprocess4100 loops back through operations4110-4120 until an end video conference request or a new switch camera request is received.
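The control flow of the process 4100 can be restated as a simple loop. The sketch below uses assumed placeholder helpers (standing in for the CIPU, the network layer, and the user interface) and is only a schematic restatement of operations 4105-4135, not the actual video conference manager code.

# Illustrative restatement of process 4100 (local switch camera) as a loop.
# All helper methods are assumed placeholders.
def local_switch_camera_loop(conference):
    current = 1                                        # start with camera 1
    conference.start()                                 # 4105: start the video conference
    while True:
        frame = conference.capture(current)            # 4110: capture from the current camera,
        conference.send_to_remote(frame)               #       send it to the remote device,
        conference.display_composite(frame)            #       and composite it with the remote video
        if conference.end_requested():                 # 4115: end if requested
            return
        if conference.switch_requested():              # 4120: did the user direct a switch?
            conference.notify_remote("switch-camera")  # 4125: notify over the control channel
            current = 2 if current == 1 else 1         # 4130: switch cameras
            conference.play_switch_animation()         # 4135: animate the transition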
FIG. 42 illustrates one example of how some embodiments allow a switch camera operation to be requested through aUI805 of a dual camera device and how these embodiments animate the switch camera operation. This figure illustrates the switch camera operation in terms of eight differentoperational stages4210,4215,4220,4225,4230,4235,4240, and4245 of theUI805 of the device. The first fourstages4210,4215,4220, and4225 of theUI805 illustrate an example of receiving a user's request to switch cameras. The user of the device has other mechanisms to make such a request in some embodiments of the invention.
Thefirst stage4210 is the same as thefifth stage830 of theUI805 ofFIG. 8, which shows theUI805 after a video conference is set up. At this stage, theUI805 displays a PIP display that includes two video displays: a larger background display from the remote camera and a smaller foreground inset display from the local camera. In this example, the backgroundmain display area870 presents a video of a lady, which in this example is assumed to be a lady whose video is being captured by the remote device, while the foregroundinset display area860 presents a video of a man, which in this example is assumed to be a man whose video is being captured by the local device's front camera.
Thesecond stage4215 then shows the initiation of the switch camera operation through the selection of thePIP display area880 of theUI805. As shown, a selection is made by placing the user'sfinger4270 on thePIP display880. Thethird stage4220 shows theUI805 that includes a selectable UI item4275 (e.g., switch camera button4275) for requesting a switch between the cameras of thelocal device4200 during the video conference. Thefourth stage4225 illustrates theUI805 after the user of thelocal device4200 selects (e.g., through a single finger tap) theselectable UI item4275, and after this selection is indicated through the highlighting of theselectable UI item4275. By selecting thisselectable UI item4275, the user is directing thedevice4200 to switch from the front camera of thedevice4200 to the back camera of thedevice4200 during the video conference. In other examples where the back camera of thedevice4200 is capturing video, the user's selection of theselectable UI item4275 directs thedevice4200 to switch from the back camera of thedevice4200 to the front camera of thedevice4200. After the fourth stage, the video conference manager sends instructions to the CIPU and the remote device to start the switch camera operation.
The last four stages 4230, 4235, 4240, and 4245 of the UI 805 illustrate an example of a switch camera animation on the local device. This animation is intended to provide an impression that the videos captured from the front and the back cameras of the local device are being concurrently displayed on two opposing sides of a viewing pane that can have only one of its sides viewed by the user at any given time. When a switch camera is requested in the middle of a video conference, this viewing pane is made to appear to rotate around the vertical axis such that the side of the viewing pane that was previously showing one camera's video to the user rotates away from the user until it is replaced by the other side of the viewing pane, which shows the video of the other camera. This animation and appearance of the perceived viewing pane's rotation is achieved by (1) gradually shrinking and applying perspective correction operations to the video image from one camera in the display area for that camera, followed by (2) gradually expanding and reducing the perspective correction operations applied to the video image from the other camera in that display area.
Accordingly, thefifth stage4230 illustrates the start of the “rotation of the viewing pane” about thevertical axis4282. To give an appearance of the rotation of the viewing pane, theUI805 has reduced the size of the front camera's video image in thevideo display area860, and has applied perspective operations to make it appear that the right side of the video image is farther from the user than the left side of the video image.
Thesixth stage4235 illustrates that the viewing pane has rotated by 90 degrees such that the user can only view the edge of this pane, as represented by thethin line4286 displayed in the middle of thedisplay area860. Theseventh stage4240 illustrates that the viewing pane has continued to rotate such that the backside of theviewing pane4288 is now gradually appearing to the user in order to show the video captured from the user's back camera. Again, this representation of the rotation animation is achieved in some embodiments by reducing the size of the back camera's video image in thevideo display area4288, and applying perspective operations to make it appear that the left side of the video image is farther from the user than the right side of the video image.
Theeighth stage4245 illustrates the completion of the animation that shows the switch camera operation. Specifically, this stage displays in thedisplay area860 the video image of a car that is being captured by the back camera of thedevice4200.
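The rotation just described can be approximated by scaling the visible width of the displayed image with the cosine of the rotation angle and swapping the front camera's image for the back camera's image once the pane passes 90 degrees. The sketch below is a geometric illustration only; the perspective correction applied in some embodiments may differ.

# Illustrative sketch: approximate the "rotating viewing pane" animation by
# shrinking the visible width of the image with the cosine of the rotation
# angle and swapping images at the 90-degree point.
import math

def pane_frame(angle_deg):
    """Return (which_image, width_scale) for a rotation angle from 0 to 180."""
    width_scale = abs(math.cos(math.radians(angle_deg)))
    which = "front-camera image" if angle_deg < 90 else "back-camera image"
    return which, width_scale

for angle in (0, 45, 90, 135, 180):
    image, scale = pane_frame(angle)
    print(f"{angle:3d} deg: show {image} at {scale:.2f} of full width")
# At 90 degrees the width scale is 0, which corresponds to the thin line 4286.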
The example described above by reference to FIG. 42 invokes a switch camera operation through a switch camera user interface. Other embodiments invoke a switch camera operation differently. For example, some embodiments invoke the switch camera operation by having a switch camera selectable UI item permanently displayed on a UI during a video conference, such as the UI 805 of FIG. 43. In FIG. 43, a switch camera button 989 is shown in a display area 855 along with a mute button 985 and an end conference button 987. The layout of the display area 855 is the same as the layout of the display area 855 described above by reference to FIG. 9.
FIG. 43 illustrates the switch camera operation of aUI805 in terms of six stages:4210,4390,4230,4235,4240, and4245. Thefirst stage4210 ofFIG. 43 is similar to thefirst stage4210 ofFIG. 42 except that the layout of thedisplay area855 shows amute button985, anend conference button987, and aswitch camera button989 instead of a single end conference button. Thesecond stage4390 illustrates theUI805 after the user of thelocal device4200 selects (e.g., through a single finger tap using a finger4270) the switch cameraselectable UI item989. In this example, by selecting thisselectable UI item989, the user directs thedevice4200 to switch from the front camera of thedevice4200 to the back camera of thedevice4200 during the video conference. The last four stages ofFIG. 43 are similar to the last four stages ofFIG. 42 except the layout of thedisplay area855 is the same as the layout described above in thefirst stage4210 and therefore will not be further described in order to not obscure the description of the invention with unnecessary detail.
In some embodiments, when the remote mobile device receives images from a different camera of the local dual camera mobile device (i.e., the local dual camera mobile device switched cameras), the remote mobile device also performs a switch camera animation to display a transition between the display of images from one camera of the local dual camera mobile device and the display of images from the other camera of the local dual camera mobile device. FIG. 44 illustrates an example of one such switch camera animation in terms of five operational stages 4410, 4415, 4420, 4425, and 4430 of a UI 4405. This figure shows an example switch camera animation on the remote mobile device 4400. The operational stages are the same as the example animation of FIG. 42 except the animation is performed on images displayed in the display area 4435, which is where images from the local dual camera mobile device are displayed on the remote mobile device 4400. As such, the image of the man displayed in the display area 4435 is animated to appear to rotate 180 degrees on a vertical axis 4455 located in the middle of the display area 4450 to show the transition between the display of the image of the man in the display area 4435 and the display of the image of a car 4470. The implementation of the switch camera animation of some embodiments is the same as the implementation of the animation described above.
The above example illustrates a switch camera animation on a remote device with a particular user interface layout. Other embodiments might perform this switch camera animation on a remote device with a different user interface layout. For instance,FIG. 45 illustrates one such example of aremote device4400 that has a differentuser interface layout4405. In particular,UI4405 ofFIG. 45 has amute button985, anend conference button987, and aswitch camera button989 included in adisplay area855, which is permanently displayed on one side of thecomposite display4450 during a video conference. The layout of the three buttons is described above by reference toFIG. 44. Other than the different user interface layout, the fivestages4410,4415,4420,4425, and4430 ofFIG. 45 are identical to the fivestages4410,4415,4420,4425, and4430 ofFIG. 44.
2. Remote Switch Camera
FIG. 46 illustrates aprocess4600 for switching between two cameras of a remote dual camera device during a video conference. Thisprocess4600 is performed by a video conference manager of a device that includes at least one camera. In the following discussion, the device through which a user directs a remote switch camera is referred to as the local device while the device that switches between its two cameras is referred to as the remote device. Also, in the discussion below, the remote device is said to switch between its front camera (or camera 1) and its back camera (or camera 2).
Theprocess4600 ofFIG. 46 will be described by reference toFIGS. 47,48,49, and50.FIG. 47 illustrates aUI4705 of alocal device4700 through which a user requests that a remote device switch between its two cameras during a video conference. This figure illustrates eight differentoperational stages4710,4715,4720,4725,4730,4735,4740, and4745 of thisUI4705.FIG. 50 illustrates aUI5005 of a remote device5000 that receives the switch camera request from thelocal device4700.FIG. 50 illustrates six differentoperational stages5010,5015,5020,5025,5030, and5035 of theUI5005.
As shown inFIG. 46, theprocess4600 begins by starting (at4605) a video conference between the local and remote devices. Theprocess4600 then (at4610) receives images from one camera of each device (e.g., from the front camera of each device) and generates a composite view for the video conference based on these images. At4610, theprocess4600 also sends a video image from the local device to the remote device.
Next, theprocess4600 determines (at4615) whether a request to end the video conference has been received. As described above, a video conference can end in some embodiments at the request of a user of the local or remote device. When theprocess4600 receives a request to end the video conference, theprocess4600 ends.
When theprocess4600 does not receive a request to end the video conference, theprocess4600 then determines (at4620) whether the user of the device on which theprocess4600 is executing (i.e., the user of the local device) has directed the device to request that the remote device switch between its cameras for the video conference. Theprocess4600 returns tooperation4610 when theprocess4600 determines (at4620) that it has not been directed to initiate a remote switch camera. When theprocess4600 determines (at4620) that it has been so directed, theprocess4600 transitions to4625, which will be described further below.
The first four stages 4710, 4715, 4720, and 4725 of the UI 4705 of FIG. 47 illustrate an example of receiving a user's request to switch cameras of the remote device. The first and second stages 4710 and 4715 are the same as the first and second stages 4210 and 4215 of FIG. 42. The third stage 4720 is the same as the third stage 4220 except the third stage 4720 includes a selectable UI item 4780 for requesting the remote device to switch cameras in addition to the selectable UI item 4775 for requesting the local device 4700 to switch cameras. The fourth stage 4725 illustrates the user of the local device 4700 selecting the UI item 4780 (e.g., through a single finger tap 4770 of the selectable UI item 4780) for requesting the remote device to switch cameras. The selection is indicated by the highlighting of the selectable UI item 4780. FIG. 47 shows one example of performing this operation, but other embodiments may differently perform the operation for requesting the remote device to switch cameras.
The example described above by reference toFIG. 47 invokes a remote switch camera operation through a remote switch camera user interface. Other embodiments invoke a remote switch camera operation differently. For instance, some embodiments invoke the switch camera operation by having a switch camera selectable UI item permanently displayed on a UI during a video conference such as theUI4705 ofFIG. 48. InFIG. 48, a remoteswitch camera button4888 is shown in adisplay area855 along with amute button4882, anend conference button4884, and a localswitch camera button4886.
FIG. 48 illustrates the remote switch camera operation of the UI 4705 of the device 4700 in terms of six different stages 4710, 4890, 4730, 4735, 4740, and 4745. The first stage 4710 of FIG. 48 is similar to the first stage 4710 of FIG. 47 except that the layout of the display area 855 shows a mute button 4882, a local switch camera button 4886, a remote switch camera button 4888, and an end conference button 4884. The second stage 4890 illustrates the UI 4705 after the user of the local device 4700 selects (e.g., through a single finger tap 4770) the remote switch camera selectable UI item 4888. The last four stages of FIG. 48 are similar to the last four stages of FIG. 47 except the layout of the display area 855 is the same as the layout described above in the first stage 4710 and therefore will not be further described in order to not obscure the description of the invention with unnecessary detail.
Some embodiments provide a similar layout as the one illustrated inFIG. 48 except the remote switch camera selectable UI item is displayed inPIP display4765 instead of thedisplay area855.FIG. 49 illustrates such alayout4705. In particular, the figure shows the PIP display with the remote switch cameraselectable UI item4780 and thedisplay area855 with only amute button4882, a localswitch camera button4886, and anend conference button4884.
As mentioned above, theprocess4600 transitions to4625 when the user requests a remote switch camera. At4625, theprocess4600 sends the request to switch cameras to the remote device. In some embodiments, this request is sent through the video conference control channel that is multiplexed with the audio and video channels by theVTP Manager2725 as described above.
After the request to switch cameras is received, theprocess4600 determines (at4630) whether the remote device has responded to the request to switch cameras. In some embodiments, the remote device automatically sends an accept response (i.e., sends an acknowledgement) to the local device through the video-conference control channel. In other embodiments, however, the user of the remote device has to accept this request through the user interface of the remote device.
The first twostages5010 and5015 of theUI5005 ofFIG. 50 illustrate an example of the remote user accepting a request to switch cameras of the remote device5000. Thefirst stage5010 shows (1) adisplay area5040 for displaying text that notifies the remote user of the request, (2) a selectable UI item5065 (e.g., allow button5065) for accepting the request to switch cameras of the remote device, and (3) a selectable UI item5070 (e.g., reject button5070) for rejecting the request to switch cameras of the remote device. Thesecond stage5015 then illustrates theUI5005 after the user of the remote device has selected (e.g., through a single finger tap5080) theUI item5065 for accepting the request to switch cameras, as indicated by the highlighting of theselectable UI item5065.
When the process 4600 determines (at 4630) that it has not yet received a response from the remote device, the process 4600 determines (at 4635) whether a request to end the video conference has been received. If so, the process 4600 ends. Otherwise, the process receives (at 4640) images from the currently used cameras of the remote and local devices, generates a composite view for the video conference based on these images, transmits the local device's video image to the remote device, and then transitions back to 4630.
When theprocess4600 determines (at4630) that it has received a response from the remote device, it determines (at4645) whether the remote device accepted the request to switch cameras. If not, theprocess4600 ends. Otherwise, the process receives (at4650) images from the other camera of the remote device and then performs (at4655) a switch camera animation on the local device to display a transition between the video of the previously utilized remote camera and the video of the currently utilized remote camera (i.e., the received images at operation4650). After4655, the process transitions back to4610, which was described above.
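The flow of the process 4600 (operations 4605-4655) can likewise be restated as a loop. The helpers below are assumed placeholders for the control channel, the compositing pipeline, and the UI; this is a schematic sketch, not the actual process.

# Illustrative restatement of process 4600 (requesting a remote switch camera)
# with assumed placeholder helpers.
def remote_switch_camera_loop(conference):
    conference.start()                                    # 4605
    while True:
        conference.exchange_and_composite_video()         # 4610
        if conference.end_requested():                    # 4615
            return
        if not conference.remote_switch_requested():      # 4620
            continue
        conference.send_control_message("switch-camera")  # 4625
        while not conference.response_received():         # 4630
            if conference.end_requested():                # 4635
                return
            conference.exchange_and_composite_video()     # 4640
        if not conference.remote_accepted():              # 4645
            return
        conference.receive_from_other_remote_camera()     # 4650
        conference.play_switch_animation()                # 4655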
The last fouroperational stages4730,4735,4740, and4745 that are illustrated for theUI4705 inFIG. 47 illustrate one example of such a remote switch camera animation on thelocal device4700. The example animation is similar to the example animation illustrated in thestages4415,4420,4425, and4430 ofFIG. 44 exceptFIG. 47 shows in thedisplay area4750 an animation that replaces the video of a woman that is captured by the front camera of the remote device with the video of a tree that is captured by the back camera of the remote device. The last four stages ofFIG. 48 andFIG. 49 illustrate the same animation as the one inFIG. 47 except thedisplay area855 ofFIGS. 48 and 49 contains different selectable UI items than thedisplay area855 inFIG. 47.
In some embodiments, when the remote device switches cameras, the UI of the remote device also performs a switch camera animation to display a transition between the two cameras. The last fouroperational stages5020,5025,5030, and5035 that are illustrated for theUI5005 inFIG. 50 illustrate an example of a switch camera animation that is displayed on the remote device5000 when the remote device5000 switches between cameras. This animation is similar to the animation illustrated in thestages4230,4235,4240, and4245 ofFIG. 42 except that the animation in the display area5045 replaces the video of a woman that is captured by the front camera of the remote device5000 with the video of a tree that is captured by the back camera of the remote device5000.
As noted above,FIGS. 42,43,44,45,47,48,49, and50 show various examples of switch camera animations performed on a user interface. In some embodiments, the switch camera animation causes changes to the image processing operations of the respective dual camera mobile device such as scaling, compositing, and perspective distortion, which can be performed by thevideo conference manager1204 and theimage processing manager1208, for example.
C. Exposure Adjustment
During a video conference between a dual camera mobile device and another mobile device, different embodiments provide different techniques for adjusting the exposure of images captured by cameras of either mobile device. Some embodiments provide techniques for a user of the dual camera mobile device to adjust the exposure of images captured by a camera of the other device while other embodiments provide techniques for the user to adjust the exposure of images captured by a camera of the dual camera mobile device. Several example techniques will be described in detail below.
FIG. 51 illustrates a process5100 for performing a remote exposure adjustment operation on a dual camera mobile device of some embodiments during a video conference. In the following discussion, the device through which a user directs a remote device to adjust its exposure level is referred to as the local device. In some embodiments, the process5100 is performed by the video conference manager of the local device. In addition, the process5100 will be described by reference toFIGS. 52,53, and54 which illustrate various ways for the user of the local device to request the remote device to perform an exposure adjustment operation.
As shown inFIG. 51, the process5100 begins by starting (at5105) a video conference between the local and remote devices. The process5100 then receives (at5110) a video from the remote device for display on the display screen of the local device. Next, the process5100 determines (at5115) whether a request to end the video conference has been received. As described above, some embodiments can receive a request to end the video conference from a user of the local or remote device. When the process5100 receives a request to end the video conference, the process5100 ends.
However, when the process 5100 does not receive a request to end the video conference, the process 5100 then determines (at 5120) whether a request for adjusting the exposure of the remote device's camera has been received. When the process 5100 determines that a request for adjusting the exposure of the remote device's camera has not been received, the process 5100 returns to operation 5110 to receive additional video captured from the remote device. FIGS. 52, 53, and 54 illustrate three different examples of providing a way for a user to make such a request. In FIGS. 52, 53, and 54, the first stages 5210, 5310, and 5410 all show PIP displays 5225, 5350, and 5435 of the local devices 5200, 5300, and 5400 that display two videos: one captured by a camera of the local device and the other captured by a camera of the remote device. In the first stages 5210, 5310, and 5410, the man in the background displays 5235, 5360, and 5445 is dark, indicating that the man is not properly exposed.
The second stage5215 ofFIG. 52 illustrates one way for the user of thelocal device5200 to request the remote device to perform an exposure adjustment by selecting the remote device's video (e.g., through a single tap on the background display5235). In this way, theUI5205 automatically associates the user's selection of a region of interest defined by a box5245 with the user's desire to direct the remote device to perform an exposure adjustment on the region of interest and thus directs the video conference manager of the local device to contact the remote device to perform an exposure adjustment operation. The defined region of interest is used by the remote device in the calculation of the exposure adjustment.
Like the second stage5215 ofFIG. 52, the second stage5315 ofFIG. 53 shows the local user's selection of the remote device's video except this selection directs theUI5305 to display aselectable UI item5370 as shown in the third stage5320. Thefourth stage5325 illustrates the user of the local device selecting theselectable UI item5370 to direct the remote device to perform an exposure adjustment operation as described above.
Thesecond stage5415 ofFIG. 54 is similar to the second stage5315 ofFIG. 53, but instead of the user's selection of the remote device's video directing the UI to display a single selectable UI item, the user's selection directs the UI5405 to display a menu ofselectable UI items5455,5460,5465, and5470, as shown in thethird stage5420. The selectable UI items include an Auto Focus item5455, anAuto Exposure item5460, aSwitch Camera item5465, and a Cancelitem5470. In some embodiments, the Switch Cameraselectable UI item5465 is used to request a local switch camera operation while in other embodiments the Switch Cameraselectable UI item5465 is used to request a remote switch camera operation. Thefourth stage5425 illustrates the user selecting theAuto Exposure item5460 to direct the remote device to perform an exposure adjustment operation as described above.
When the process5100 determines (at5120) that the local user directed the local device to request an exposure adjustment operation, the process5100 sends (at5125) a command to the remote device through the video conference control channel to adjust the exposure of the video captured by the camera that is currently capturing and transmitting video to the local device. After operation5125, the process5100 transitions back to operation5110, which is described above.
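For illustration only, the request of operation 5125 might carry the user-defined region of interest in a form similar to the following sketch; the normalized coordinates, the box size, and the message format are assumptions, since the specification does not define how the command is encoded on the video conference control channel.

# Illustrative sketch: translate a tap on the remote device's video into a
# normalized region of interest and build an exposure-adjustment command to
# send over the video conference control channel. The format is an assumption.
import json

def exposure_request(tap_x, tap_y, display_rect, box_size=0.25):
    dx, dy, dw, dh = display_rect          # background display area, in pixels
    nx = (tap_x - dx) / dw                 # normalize the tap into 0..1
    ny = (tap_y - dy) / dh
    roi = {"x": max(0.0, nx - box_size / 2), "y": max(0.0, ny - box_size / 2),
           "w": box_size, "h": box_size}
    return json.dumps({"type": "adjust-exposure", "roi": roi})

# Example: a tap near the center of a 320x480 background display.
print(exposure_request(150, 230, (0, 0, 320, 480)))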
In some embodiments, the user of the remote device is required to provide permission before the remote device performs an exposure adjustment operation, while in other embodiments the remote device performs the exposure adjustment operation automatically upon receiving the request from the local device. Moreover, in some embodiments, some of the video conference functionalities are implemented by thevideo conference manager1204. In some of these embodiments, thevideo conference manager1204 performs the exposure adjustment operation by instructing theCIPU1250 to adjust the exposure setting of the sensor of the remote device camera being used.
The last stages 5220, 5330, and 5430 of FIGS. 52, 53, and 54 show the remote device's video as lighter, which indicates that the man is properly exposed. Although FIGS. 52, 53, and 54 provide examples of receiving an exposure adjustment request to correct the exposure of a remote device, some embodiments provide ways for the user of the local device to request that the local device adjust the exposure of a camera of the local device. Such a request can be made in a manner similar to the ways illustrated in FIGS. 52, 53, and 54 for requesting a remote device to adjust its camera's exposure.
FIGS. 52-54 described above show several user interfaces for performing exposure adjustment operations. In some embodiments, the exposure adjustment operation can cause changes to the image processing operations of the dual camera mobile device such as invoking theexposure adjustment process5500, which is described in further detail below. The exposure adjustment operation can also cause changes to the operation of the camera of the dual camera mobile device that is capturing the video like changing the exposure level setting of the camera, for example.
FIG. 55 conceptually illustrates anexposure adjustment process5500 performed by an image processing manager of some embodiments such as that illustrated inFIG. 12. In some embodiments, theprocess5500 is part of the exposure adjustment operations described above by reference toFIGS. 51,52,53, and54. In some of such embodiments, theimage processing manager1208 performs theprocess5500 and adjusts a camera's exposure setting by sending instructions to thevideo conference manager1204, which instructs theCIPU1250 to adjust thecamera sensor405aor405b, as mentioned above.
In some embodiments, the process 5500 is performed by the image processing layer 630 shown in FIG. 6 while in other embodiments the process 5500 is performed by the statistics engine 465 shown in FIG. 4. Some embodiments perform the process 5500 on images captured by cameras of (local or remote) devices in a video conference while other embodiments perform the process 5500 as part of the process 1700 (e.g., operation 1710) illustrated in FIG. 17. Some embodiments perform an exposure adjustment operation so that the images captured by the cameras of the dual camera mobile device are neither too light nor too dark. In other words, the process 5500 is performed to capture images in a manner that maximizes the amount of detail in the images.
Theprocess5500 begins by receiving (at5505) an image captured by a camera of the dual camera mobile device. In some embodiments, when the received image is a first image captured by a camera of a device in a video conference, theprocess5500 is not performed on the first image (i.e., there was no image before the first image from which to determine an exposure value). Theprocess5500 then reads (at5510) pixel values of a defined region in the received image. Different embodiments define regions differently. Some of such embodiments define differently shaped regions such as a square, a rectangle, a triangle, a circle, etc. while other of such embodiments define regions in different locations in the image such as center, upper center, lower center, etc.
Next, the process 5500 calculates (at 5515) an average of the pixel values in the defined region of the image. The process 5500 determines (at 5520) whether the calculated average of the pixel values is equal to a particular defined value. Different embodiments define different particular values. For example, some embodiments define the particular value as the median pixel value of the image's dynamic range. In some embodiments, a range of values is defined instead of a single value. In such embodiments, the process 5500 determines (at 5520) whether the calculated average of the pixel values is within the defined range of values.
When the calculated average of the pixel values is not equal to the particular defined value, theprocess5500 adjusts (at5525) the exposure value based on the calculated average. When the calculated average of the pixel values is equal to the particular defined value, theprocess5500 ends. In some embodiments, an exposure value represents an amount of time that a camera sensor is exposed to light. In some embodiments, the adjusted exposure value is used to expose the next image to be captured by the camera that captured the received image. After the exposure value is adjusted based on the calculated average, theprocess5500 ends.
In some embodiments, theprocess5500 is repeatedly performed until the calculated average of pixel values is equal to the particular defined value (or falls within the defined range of values). Some embodiments constantly perform theprocess5500 during a video conference while other embodiments perform theprocess5500 at defined intervals (e.g., 5 seconds, 10 seconds, 30 seconds, etc.) during the video conference. Furthermore, during the video conference, theprocess5500 of some embodiments dynamically re-defines the particular pixel value before performing theprocess5500.
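The core of the process 5500 (operations 5505-5525) can be sketched as follows. The target average, tolerance, and adjustment step below are assumed example values, and the multiplicative update is just one plausible way to adjust an exposure value.

# Illustrative sketch of process 5500: average the pixel values in a defined
# region and nudge the exposure value toward a target average. The target,
# tolerance, and step size are assumptions, not values from the specification.
def adjust_exposure(image, region, exposure, target=128, tolerance=8, step=0.1):
    """image: 2D list of luma values 0..255; region: (x, y, w, h) in pixels."""
    x, y, w, h = region
    pixels = [image[row][col] for row in range(y, y + h) for col in range(x, x + w)]
    average = sum(pixels) / len(pixels)                  # 5515: average pixel value
    if abs(average - target) <= tolerance:               # 5520: properly exposed?
        return exposure
    if average < target:                                 # too dark:
        return exposure * (1 + step)                     # expose the sensor longer
    return exposure * (1 - step)                         # too bright: expose less

# Example: a uniformly dark 8x8 region raises the exposure value.
dark = [[40] * 8 for _ in range(8)]
print(adjust_exposure(dark, (0, 0, 8, 8), exposure=1.0))  # -> 1.1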
FIG. 56 conceptually illustrates examples of exposure adjustment operations of some embodiments. Each of the examples5600,5610, and5615 shows animage5620 captured by a camera of the dual camera mobile device on the left side. Specifically, theimage5620 shows a dark person in front of a sun. The dark person indicates that the exposure level of the image is not high enough to expose the person's face or body. The right side of each example5600,5610, and5615 shows animage5625,5630, and5635, respectively, captured after theimage5620. In some embodiments, theimage5620 and the images on the right side are images of a video captured by the camera of the dual camera mobile device. In other embodiments, theimage5620 and the image on the right side are still images captured by the camera of the dual camera mobile device at different instances in time.
The first example5600 illustrates an operation with no exposure adjustment. As such, theimage5625 appears the same as theimage5620. Since no exposure adjustment was performed, the person in theimage5625 remains dark like the person in theimage5620.
In the second example 5610, an exposure adjustment operation is performed on the image 5620. In some embodiments, the exposure adjustment operation is performed by the process 5500 using the defined region 5640. Based on the exposure adjustment operation, the exposure level of the camera is adjusted and the camera captures the image 5630 using the adjusted exposure level. As shown in FIG. 56, the person in the image 5630 is not as dark as the person in the image 5625. However, the person's face and body in the image 5630 are still not clear.
The third example 5615 shows an exposure adjustment operation performed on the image 5620. Similar to the second example 5610, the exposure adjustment operation of the example 5615 of some embodiments is performed by the process 5500 using the defined region 5645. Based on the exposure adjustment operation, the exposure level of the camera is adjusted and the camera captures the image 5635 using the adjusted exposure level. As seen in FIG. 56, the person in the image 5635 is properly exposed since the person's face and body are visible.
In some embodiments, the selection of the defined region may be made by the user of the dual camera mobile device. The device itself may also automatically adjust its defined region for the exposure adjustment operation through the feedback loop for exposure adjustment mentioned above in the CIPU 400. The statistics engine 465 in FIG. 4 may collect data to determine whether the exposure level is appropriate for the images captured and adjust the camera sensors (e.g., through a direct connection to the sensor module 415) accordingly.
D. Focus Adjustment
FIG. 57 illustrates a process5700 for adjusting the focus of a dual camera mobile device during a video conference. In the following discussion, the device through which a user directs a remote device to adjust its camera focus is referred to as the local device. The process5700 ofFIG. 57 is in some embodiments performed by thevideo conference manager1204 of the local device. Also, this process will be described below by reference toFIGS. 58 and 59, which provide two exemplary manners for the user of the local device to request a focus adjustment operation to be performed by the remote device.
As shown inFIG. 57, the process5700 begins by starting (at5705) a video conference between the local and remote devices. The process5700 then receives (at5710) a video from the remote device for display on the display screen of the local device. Next, at5715, the process5700 determines whether a request to end the video conference has been received. As described above, a video conference can end in some embodiments at the request of a user of the local or remote device. When the process5700 receives a request to end the video conference, the process5700 ends.
Otherwise, the process determines (at 5720) whether it has received a request for adjusting the focus of the remote camera of the remote device. When the process 5700 determines that it has not received a request for adjusting the focus of the remote camera of the remote device, the process 5700 returns to operation 5710 to receive additional video from the remote device. FIGS. 58, 59, and 60 illustrate three different ways that different embodiments provide to a user to make such a request. In FIGS. 58, 59, and 60, the first stages 5810, 5910, and 6072 all show PIP displays 5825, 5935, and 6082 of the local devices 5800, 5900, and 6071 that display two videos, one captured by the local device, and the other captured by the remote device. The display area 855 in FIGS. 58 and 59 shows an end conference button. However, in FIG. 60, the layout of the display area 855 is the same as the layout of the display area 855 of FIG. 9, described above. Moreover, the switch camera button 6088 shown in the display area 855 can be selected to invoke a local switch camera operation in some embodiments or a remote switch camera operation in other embodiments. As shown in the first stages 5810, 5910, and 6072, the video of the remote device that is displayed in the background displays 5835, 5945, and 6080 is blurry.
The second stage 5815 of FIG. 58 illustrates an approach whereby the user of the local device requests a focus adjustment from the remote device by simply selecting the remote device's video (e.g., through a single tap 5840 on the remote device's video). Under this approach, the UI 5805 automatically associates the user's selection of a region of interest defined by a box 5845 with the user's desire to direct the remote device to perform an operation (such as focus) on the region of interest and therefore directs the video conference manager 1204 of the local device 5800 to contact the remote device to perform an adjustment operation (such as a focus adjustment operation). The defined region of interest is used by the remote device in the calculation of the focus adjustment.
Thesecond stage5915 ofFIG. 59 similarly shows the local user's selection of the remote video (e.g., through the user's tapping of the remote device's video). However, unlike the example illustrated inFIG. 58, this selection inFIG. 59 directs theUI5905 to display a menu ofselectable UI items5955,5960,5965 and5970 (which can be implemented as selectable buttons), as shown in thethird stage5920. These selectable UI items include anAuto Focus item5960, an Auto Exposure item5965, a Switch Camera item5970 and a Cancel item5955. In some embodiments, the Switch Camera selectable UI item5970 is used to request a local switch camera operation while in other embodiments the Switch Camera selectable UI item5970 is used to request a remote switch camera operation. Thefourth stage5925 then illustrates the local user selecting the auto-focus item5960.
Thesecond stage6074 ofFIG. 60 again similarly shows the local user's selection of the remote video (e.g., through the user's tapping of the remote device's video). However, unlike the example illustrated inFIG. 59, this selection inFIG. 60 directs theUI6078 to request a focus adjustment operation (i.e., in second stage6074). After the focus adjustment operation is completed, theUI6078 displays a menu ofselectable UI items6084 and6086 (i.e., in third stage6076), which can be implemented as selectable buttons. These selectable UI items include anAuto Exposure item6086 and a Cancelitem6084.
When the process determines (at5720) that the local user directed the local device to request a focus adjustment operation, the process5700 sends (at5740) a command to the remote device through the video conference control channel to adjust the focus of the camera whose video the remote device is currently capturing and transmitting. After5740, the process transitions back to5710, which was described above.
In some embodiments, the user of the remote device has to provide permission before the remote device performs this operation, while in other embodiments the remote device performs this operation automatically upon receiving the request from the local device. Also, in some embodiments, the focus adjustment operation adjusts the focus settings of the remote device's camera that is being used during the video conference. In some of such embodiments, some of the video conference functionalities are implemented by the video conference module 1202 as discussed above. In these embodiments, the video conference manager 1204 instructs the CIPU 1250 to adjust the sensor of the remote device camera being used.
The last stages 5820, 5930, and 6076 of FIGS. 58, 59, and 60 show the remote device's video properly focused. Although FIGS. 58, 59, and 60 provide examples of receiving a focus adjustment request to correct the focus of a remote device, some embodiments allow the local device's user to request that the local device adjust the focus of a camera of the local device. Such a request can be made in a manner similar to the approaches shown in FIGS. 58, 59, and 60 for requesting a remote device to adjust its camera's focus.
FIGS. 58,59, and60 illustrate three example user interfaces that allow a user to perform a focus adjustment operation. In some embodiments, the focus adjustment operation causes changes to the operation of the camera of the dual camera mobile device that is capturing the video displayed in the UIs such as changing the focus of the camera.
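The specification does not state how the remote device computes a focus adjustment from the defined region of interest. One common technique, shown here purely as an assumed illustration, is contrast maximization: sweep candidate lens positions and keep the one whose image of the region has the highest sharpness score.

# Illustrative sketch of contrast-based refocusing over the defined region of
# interest. This technique and all names are assumptions for illustration.
def sharpness(image, region):
    """Sum of absolute horizontal luma differences inside the region."""
    x, y, w, h = region
    return sum(abs(image[r][c + 1] - image[r][c])
               for r in range(y, y + h) for c in range(x, x + w - 1))

def best_focus(capture_at, lens_positions, region):
    """capture_at(pos) returns a 2D luma image captured at that lens position."""
    return max(lens_positions, key=lambda pos: sharpness(capture_at(pos), region))

# Example with a fake capture function that is sharpest at lens position 3.
fake = lambda pos: [[(c * 10 if pos == 3 else c) % 50 for c in range(8)] for _ in range(8)]
print(best_focus(fake, lens_positions=range(6), region=(0, 0, 8, 8)))  # -> 3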
As discussed above inFIGS. 52 and 58, the defined region of interest was used by the remote mobile device in the computation for exposure adjustment and focus adjustment of the videos, respectively. However, in some other embodiments, the user's selection of a region of interest may be used to direct the remote device to perform one or more operations. For example, in some embodiments, both exposure adjustment and focus adjustment may be performed based on the defined region of interest, thereby directing the remote device to perform both operations.
E. Frame Rate Control
During a video conference, some embodiments may wish to adjust or maintain the rate at which images of a video captured by a camera of the dual camera mobile device are transmitted (i.e., frame rate) to the other device in the video conference. For example, assuming a fixed bandwidth, some of such embodiments reduce the frame rate of the video to increase the picture quality of the images of the video while other of such embodiments increase the frame rate of the video to smooth out the video (i.e., reduce jitter).
Different embodiments provide different techniques for controlling the frame rate of images of a video during the video conference. One example previously described above adjusts the VBI of thesensor module415 for a camera in order to control the rate at which images captured by the camera are processed. As another example, some embodiments of themanagement layer635 of thevideo conference module625 shown inFIG. 6 control the frame rate by dropping images. Similarly, some embodiments of theimage processing layer630 control the frame rate by dropping images. Some embodiments provide yet other techniques for controlling frame rates such as dropping frames in theuniversal transmission buffer2720.
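Dropping frames to hold a target frame rate can be sketched as follows; the timestamp-based limiter below is an assumed illustration rather than the actual behavior of the universal transmission buffer 2720.

# Illustrative sketch: reduce the transmitted frame rate by dropping frames so
# that no more than target_fps frames are forwarded per second.
class FrameRateLimiter:
    def __init__(self, target_fps):
        self.interval = 1.0 / target_fps
        self.next_time = 0.0

    def should_send(self, timestamp):
        """Return True to forward the frame, False to drop it."""
        if timestamp >= self.next_time:
            self.next_time = timestamp + self.interval
            return True
        return False

# Example: a 30 fps capture limited to roughly 10 fps for transmission.
limiter = FrameRateLimiter(target_fps=10)
sent = [t for t in (i / 30.0 for i in range(30)) if limiter.should_send(t)]
print(len(sent))  # 10 frames forwarded out of 30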
V. Electronic System
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
Some embodiments are implemented as software processes that include one or more application programming interfaces (APIs) in an environment with calling program code interacting with other program code being called through the one or more interfaces. Various function calls, messages or other types of invocations, which further may include various kinds of parameters, can be transferred via the APIs between the calling program and the code being called. In addition, an API may provide the calling program code the ability to use data types or classes defined in the API and implemented in the called program code.
At least certain embodiments include an environment with a calling software component interacting with a called software component through an API. A method for operating through an API in this environment includes transferring one or more function calls, messages, other types of invocations or parameters via the API.
One or more Application Programming Interfaces (APIs) may be used in some embodiments. For example, some embodiments of the media exchange module 310 (or 910) provide a set of APIs to other software components for accessing various video processing and encoding functionalities described in FIGS. 3 and 6, such as the functionalities of the TNR module 1500 described in FIG. 15.
An API is an interface implemented by a program code component or hardware component (hereinafter “API-implementing component”) that allows a different program code component or hardware component (hereinafter “API-calling component”) to access and use one or more functions, methods, procedures, data structures, classes, and/or other services provided by the API-implementing component. An API can define one or more parameters that are passed between the API-calling component and the API-implementing component.
An API allows a developer of an API-calling component (which may be a third party developer) to leverage specified features provided by an API-implementing component. There may be one API-calling component or there may be more than one such component. An API can be a source code interface that a computer system or program library provides in order to support requests for services from an application. An operating system (OS) can have multiple APIs to allow applications running on the OS to call one or more of those APIs, and a service (such as a program library) can have multiple APIs to allow an application that uses the service to call one or more of those APIs. An API can be specified in terms of a programming language that can be interpreted or compiled when an application is built.
In some embodiments the API-implementing component may provide more than one API, each providing a different view of, or exposing different aspects of, the functionality implemented by the API-implementing component. For example, one API of an API-implementing component can provide a first set of functions and can be exposed to third party developers, and another API of the API-implementing component can be hidden (not exposed) and provide a subset of the first set of functions and also provide another set of functions, such as testing or debugging functions which are not in the first set of functions. In other embodiments the API-implementing component may itself call one or more other components via an underlying API and thus be both an API-calling component and an API-implementing component.
An API defines the language and parameters that API-calling components use when accessing and using specified features of the API-implementing component. For example, an API-calling component accesses the specified features of the API-implementing component through one or more API calls or invocations (embodied for example by function or method calls) exposed by the API and passes data and control information using parameters via the API calls or invocations. The API-implementing component may return a value through the API in response to an API call from an API-calling component. While the API defines the syntax and result of an API call (e.g., how to invoke the API call and what the API call does), the API may not reveal how the API call accomplishes the function specified by the API call. Various API calls are transferred via the one or more application programming interfaces between the calling component (the API-calling component) and an API-implementing component. Transferring the API calls may include issuing, initiating, invoking, calling, receiving, returning, or responding to the function calls or messages; in other words, transferring can describe actions by either of the API-calling component or the API-implementing component. The function calls or other invocations of the API may send or receive one or more parameters through a parameter list or other structure. A parameter can be a constant, key, data structure, object, object class, variable, data type, pointer, array, list or a pointer to a function or method or another way to reference a data item or other item to be passed via the API.
Furthermore, data types or classes may be provided by the API and implemented by the API-implementing component. Thus, the API-calling component may declare variables, use pointers to, use or instantiate constant values of such types or classes by using definitions provided in the API.
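The following sketch, using hypothetical names, illustrates the relationship described above: an API-implementing component that exposes a function and a data type through an API, and an API-calling component that passes parameters and receives a return value without seeing the implementation. It is not the actual API of any module described in this specification.

# Minimal sketch of the API relationship described above. VideoToolkit plays
# the role of the API-implementing component; the code at the bottom plays
# the role of the API-calling component. All names are hypothetical.
class Frame:
    # A data type defined by the API and implemented by the API-implementing
    # component; the caller may instantiate it via the API.
    def __init__(self, width, height, pixels):
        self.width, self.height, self.pixels = width, height, pixels

class VideoToolkit:
    """API-implementing component: exposes functions, data types, and results."""
    def scale(self, frame, factor):
        # The API defines the call's syntax (parameters) and its result (a new
        # Frame), but not how the scaling is accomplished internally.
        new_w, new_h = int(frame.width * factor), int(frame.height * factor)
        return Frame(new_w, new_h, frame.pixels)  # pixel resampling omitted

# API-calling component: passes parameters through the API and receives a
# return value, without seeing the implementation.
api = VideoToolkit()
thumbnail = api.scale(Frame(640, 480, pixels=b""), factor=0.25)
print(thumbnail.width, thumbnail.height)  # 160 120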
Generally, an API can be used to access a service or data provided by the API-implementing component or to initiate performance of an operation or computation provided by the API-implementing component. By way of example, the API-implementing component and the API-calling component may each be any one of an operating system, a library, a device driver, an API, an application program, or other module (it should be understood that the API-implementing component and the API-calling component may be the same or different type of module from each other). API-implementing components may in some cases be embodied at least in part in firmware, microcode, or other hardware logic. In some embodiments, an API may allow a client program to use the services provided by a Software Development Kit (SDK) library. In other embodiments an application or other client program may use an API provided by an Application Framework. In these embodiments the application or client program may incorporate calls to functions or methods provided by the SDK and provided by the API or use data types or objects defined in the SDK and provided by the API. An Application Framework may in these embodiments provide a main event loop for a program that responds to various events defined by the Framework. The API allows the application to specify the events and the responses to the events using the Application Framework. In some implementations, an API call can report to an application the capabilities or state of a hardware device, including those related to aspects such as input capabilities and state, output capabilities and state, processing capability, power state, storage capacity and state, communications capability, etc., and the API may be implemented in part by firmware, microcode, or other low level logic that executes in part on the hardware component.
The API-calling component may be a local component (i.e., on the same data processing system as the API-implementing component) or a remote component (i.e., on a different data processing system from the API-implementing component) that communicates with the API-implementing component through the API over a network. It should be understood that an API-implementing component may also act as an API-calling component (i.e., it may make API calls to an API exposed by a different API-implementing component) and an API-calling component may also act as an API-implementing component by implementing an API that is exposed to a different API-calling component.
The API may allow multiple API-calling components written in different programming languages to communicate with the API-implementing component (thus the API may include features for translating calls and returns between the API-implementing component and the API-calling component); however the API may be implemented in terms of a specific programming language. An API-calling component can, in one embodiment, call APIs from different providers such as a set of APIs from an OS provider and another set of APIs from a plug-in provider and another set of APIs from another provider (e.g., the provider of a software library) or the creator of another set of APIs.
FIG. 61 is a block diagram illustrating an exemplary API architecture, which may be used in some embodiments of the invention. As shown in FIG. 61, the API architecture 6100 includes the API-implementing component 6110 (e.g., an operating system, a library, a device driver, an API, an application program, software or other module) that implements the API 6120. The API 6120 specifies one or more functions, methods, classes, objects, protocols, data structures, formats and/or other features of the API-implementing component that may be used by the API-calling component 6130. The API 6120 can specify at least one calling convention that specifies how a function in the API-implementing component 6110 receives parameters from the API-calling component 6130 and how the function returns a result to the API-calling component. The API-calling component 6130 (e.g., an operating system, a library, a device driver, an API, an application program, software or other module) makes API calls through the API 6120 to access and use the features of the API-implementing component 6110 that are specified by the API 6120. The API-implementing component 6110 may return a value through the API 6120 to the API-calling component 6130 in response to an API call.
It will be appreciated that the API-implementing component 6110 may include additional functions, methods, classes, data structures, and/or other features that are not specified through the API 6120 and are not available to the API-calling component 6130. It should be understood that the API-calling component 6130 may be on the same system as the API-implementing component 6110 or may be located remotely and access the API-implementing component 6110 using the API 6120 over a network. While FIG. 61 illustrates a single API-calling component 6130 interacting with the API 6120, it should be understood that other API-calling components, which may be written in a different language (or the same language) than the API-calling component 6130, may use the API 6120.
The API-implementing component 6110, the API 6120, and the API-calling component 6130 may be stored in a machine-readable medium, which includes any mechanism for storing information in a form readable by a machine (e.g., a computer or other data processing system). For example, a machine-readable medium includes magnetic disks, optical disks, random access memory, read only memory, flash memory devices, etc.
FIG. 62 is an example of a dual camera mobile computing device architecture 6200. The implementation of a mobile computing device can include one or more processing units 6205, a memory interface 6210 and a peripherals interface 6215. Each of these components that make up the computing device architecture can be separate components or integrated in one or more integrated circuits. These various components can also be coupled together by one or more communication buses or signal lines.
The peripherals interface 6215 can be coupled to various sensors and subsystems, including a camera subsystem 6220, a wireless communication subsystem(s) 6225, an audio subsystem 6230, an I/O subsystem 6235, etc. The peripherals interface 6215 enables communication between processors and peripherals. Peripherals such as an orientation sensor 6245 or an acceleration sensor 6250 can be coupled to the peripherals interface 6215 to facilitate the orientation and acceleration functions.
The camera subsystem 6220 can be coupled to one or more optical sensors 6240, e.g., a charge-coupled device (CCD) optical sensor or a complementary metal-oxide-semiconductor (CMOS) optical sensor. The camera subsystem 6220 coupled with the sensors may facilitate camera functions, such as image and/or video data capturing. Wireless communication subsystems 6225 may serve to facilitate communication functions. Wireless communication subsystems 6225 may include radio frequency receivers and transmitters, and optical receivers and transmitters. They may be implemented to operate over one or more communication networks such as a GSM network, a Wi-Fi network, a Bluetooth network, etc. The audio subsystem 6230 is coupled to a speaker and a microphone to facilitate voice-enabled functions, such as voice recognition, digital recording, etc.
The I/O subsystem 6235 involves the transfer between input/output peripheral devices, such as a display, a touch screen, etc., and the data bus of the CPU through the peripherals interface. The I/O subsystem 6235 can include a touch-screen controller 6255 and other input controllers 6260 to facilitate these functions. The touch-screen controller 6255 can be coupled to the touch screen 6265 and detect contact and movement on the screen using any of multiple touch sensitivity technologies. The other input controllers 6260 can be coupled to other input/control devices, such as one or more buttons.
The memory interface 6210 can be coupled to memory 6270, which can include high-speed random access memory and/or non-volatile memory such as flash memory. The memory can store an operating system (OS) 6272. The OS 6272 can include instructions for handling basic system services and for performing hardware dependent tasks.
The memory can also include communication instructions 6274 to facilitate communicating with one or more additional devices; graphical user interface instructions 6276 to facilitate graphic user interface processing; image/video processing instructions 6278 to facilitate image/video-related processing and functions; phone instructions 6280 to facilitate phone-related processes and functions; media exchange and processing instructions 6282 to facilitate media communication and processing-related processes and functions; camera instructions 6284 to facilitate camera-related processes and functions; and video conferencing instructions 6286 to facilitate video conferencing processes and functions. The above identified instructions need not be implemented as separate software programs or modules. Various functions of the mobile computing device can be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
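Purely as a schematic, hedged sketch of the architecture just described, the following code models a peripherals interface to which subsystems and sensors are attached and a memory holding the listed instruction modules. All class and field names are illustrative assumptions rather than an actual implementation.

# Schematic model of the FIG. 62 architecture; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PeripheralsInterface:
    subsystems: dict = field(default_factory=dict)
    def attach(self, name, subsystem):
        # e.g., camera, wireless, audio, and I/O subsystems, plus orientation
        # and acceleration sensors
        self.subsystems[name] = subsystem

@dataclass
class Memory:
    # The instruction modules need not be separate programs in practice.
    modules: tuple = ("OS", "communication", "GUI", "image/video processing",
                      "phone", "media exchange", "camera", "video conferencing")

peripherals = PeripheralsInterface()
for name in ("camera", "wireless", "audio", "io", "orientation", "acceleration"):
    peripherals.attach(name, object())
print(sorted(peripherals.subsystems), Memory().modules[0])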
The above-described embodiments may include a touch I/O device 6301 that can receive touch input for interacting with a computing system 6303, as shown in FIG. 63, via a wired or wireless communication channel 6302. The touch I/O device 6301 may be used to provide user input to the computing system 6303 in lieu of or in combination with other input devices such as a keyboard, mouse, etc. One or more touch I/O devices 6301 may be used for providing user input to the computing system 6303. The touch I/O device 6301 may be an integral part of the computing system 6303 (e.g., a touch screen on a laptop) or may be separate from the computing system 6303.
The touch I/O device 6301 may include a touch sensitive panel which is wholly or partially transparent, semitransparent, non-transparent, opaque or any combination thereof. The touch I/O device 6301 may be embodied as a touch screen, touch pad, a touch screen functioning as a touch pad (e.g., a touch screen replacing the touchpad of a laptop), a touch screen or touchpad combined or incorporated with any other input device (e.g., a touch screen or touchpad disposed on a keyboard) or any multi-dimensional object having a touch sensitive surface for receiving touch input.
In one example, the touch I/O device 6301 embodied as a touch screen may include a transparent and/or semitransparent touch sensitive panel partially or wholly positioned over at least a portion of a display. According to this embodiment, the touch I/O device 6301 functions to display graphical data transmitted from the computing system 6303 (and/or another source) and also functions to receive user input. In other embodiments, the touch I/O device 6301 may be embodied as an integrated touch screen where touch sensitive components/devices are integral with display components/devices. In still other embodiments a touch screen may be used as a supplemental or additional display screen for displaying supplemental or the same graphical data as a primary display and receiving touch input.
The touch I/O device 6301 may be configured to detect the location of one or more touches or near touches on the device 6301 based on capacitive, resistive, optical, acoustic, inductive, mechanical, chemical measurements, or any phenomena that can be measured with respect to the occurrences of the one or more touches or near touches in proximity to the device 6301. Software, hardware, firmware or any combination thereof may be used to process the measurements of the detected touches to identify and track one or more gestures. A gesture may correspond to stationary or non-stationary, single or multiple, touches or near touches on the touch I/O device 6301. A gesture may be performed by moving one or more fingers or other objects in a particular manner on the touch I/O device 6301 such as tapping, pressing, rocking, scrubbing, twisting, changing orientation, pressing with varying pressure and the like at essentially the same time, contiguously, or consecutively. A gesture may be characterized by, but is not limited to, a pinching, sliding, swiping, rotating, flexing, dragging, or tapping motion between or with any other finger or fingers. A single gesture may be performed with one or more hands, by one or more users, or any combination thereof.
The computing system 6303 may drive a display with graphical data to display a graphical user interface (GUI). The GUI may be configured to receive touch input via the touch I/O device 6301. Embodied as a touch screen, the touch I/O device 6301 may display the GUI. Alternatively, the GUI may be displayed on a display separate from the touch I/O device 6301. The GUI may include graphical elements displayed at particular locations within the interface. Graphical elements may include but are not limited to a variety of displayed virtual input devices including virtual scroll wheels, a virtual keyboard, virtual knobs, virtual buttons, any virtual UI, and the like. A user may perform gestures at one or more particular locations on the touch I/O device 6301 which may be associated with the graphical elements of the GUI. In other embodiments, the user may perform gestures at one or more locations that are independent of the locations of graphical elements of the GUI. Gestures performed on the touch I/O device 6301 may directly or indirectly manipulate, control, modify, move, actuate, initiate or generally affect graphical elements such as cursors, icons, media files, lists, text, all or portions of images, or the like within the GUI. For instance, in the case of a touch screen, a user may directly interact with a graphical element by performing a gesture over the graphical element on the touch screen. Alternatively, a touch pad generally provides indirect interaction. Gestures may also affect non-displayed GUI elements (e.g., causing user interfaces to appear) or may affect other actions within the computing system 6303 (e.g., affect a state or mode of a GUI, application, or operating system). Gestures may or may not be performed on the touch I/O device 6301 in conjunction with a displayed cursor. For instance, in the case in which gestures are performed on a touchpad, a cursor (or pointer) may be displayed on a display screen or touch screen and the cursor may be controlled via touch input on the touchpad to interact with graphical objects on the display screen. In other embodiments in which gestures are performed directly on a touch screen, a user may interact directly with objects on the touch screen, with or without a cursor or pointer being displayed on the touch screen.
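As a simplified, hypothetical sketch of the gesture processing described above, the following code classifies a single-finger touch track as a tap or a swipe based on how far the contact moved. Real gesture recognition on the touch I/O device would consider multiple contacts, timing, pressure, and many more gesture types.

# Illustrative sketch of turning raw touch measurements into a gesture,
# assuming a simple tap-versus-swipe classifier with hypothetical names.
import math

def classify_gesture(touch_points, max_tap_movement=10.0):
    # touch_points: list of (x, y) samples for a single finger, in pixels,
    # ordered from touch-down to lift-off.
    (x0, y0), (x1, y1) = touch_points[0], touch_points[-1]
    distance = math.hypot(x1 - x0, y1 - y0)
    if distance <= max_tap_movement:
        return "tap"
    return "swipe right" if x1 > x0 else "swipe left"

print(classify_gesture([(100, 200), (102, 201)]))               # tap
print(classify_gesture([(100, 200), (180, 205), (260, 210)]))   # swipe right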
Feedback may be provided to the user via the communication channel 6302 in response to or based on the touch or near touches on the touch I/O device 6301. Feedback may be transmitted optically, mechanically, electrically, olfactorily, acoustically, or the like or any combination thereof and in a variable or non-variable manner.
These functions described above can be implemented in digital electronic circuitry, in computer software, firmware or hardware. The techniques can be implemented using one or more computer program products. Programmable processors and computers can be included in or packaged as mobile devices. The processes and logic flows may be performed by one or more programmable processors and by one or more sets of programmable logic circuitry. General and special purpose computing devices and storage devices can be interconnected through communication networks.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
FIG. 64 conceptually illustrates an example communication system 6400 used for connecting some participants of a video conference according to some embodiments. As shown, the communication system 6400 includes several mobile devices 6415, several cellular base stations (or Node Bs) 6410, several radio network controllers (RNCs) 6405, and a core network 6425. Cellular base stations and RNCs are collectively referred to as a Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (UTRAN) 6430. Each RNC 6405 is connected to one or more cellular base stations 6410 that, together, are referred to as a radio access network (RAN).
Each cellular base station 6410 covers a service region 6420. As shown, the mobile devices 6415 in each service region are wirelessly connected to the serving cellular base station 6410 of the service region 6420 through a Uu interface. The Uu interface uses a protocol stack that has two planes: a control plane and a user plane. The user plane supports circuit-switched, packet-switched and broadcast data streams. The control plane carries the network's signaling messages.
Each cellular base station is connected to an RNC through an Iub interface. Each RNC 6405 is connected to the core network 6425 by an Iu-cs interface and an Iu-ps interface. The Iu-cs interface is used for circuit switched services (e.g., voice) while the Iu-ps interface is used for packet switched services (e.g., data). The Iur interface is used for connecting two RNCs together.
Accordingly, the communication system 6400 supports both circuit-switched services and packet-switched services. For example, circuit-switched services allow a telephone call to be conducted by transmitting the telephone call data (e.g., voice) through circuit-switched equipment of the communication system 6400. Packet-switched services allow a video conference to be conducted by using a transport protocol layer such as UDP or TCP over an internet layer protocol like IP to transmit video conference data through packet-switched equipment of the communication system 6400. In some embodiments, the telephone call to video conference transition (e.g., handoff) previously described in the Video Conference Setup section uses the circuit-switched and packet-switched services supported by a communication system like the communication system 6400. That is, in such embodiments, the telephone call is conducted through the circuit-switched equipment of the communication system 6400 and the video conference is conducted through the packet-switched equipment of the communication system 6400.
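As an illustration only, the sketch below sends a block of video conference data over a packet-switched path using UDP over IP, as the paragraph above describes. The address, port, and payload are placeholders; the described embodiments do not mandate this particular code.

# Minimal sketch of carrying video conference data over UDP/IP; the address,
# port, and payload format are hypothetical placeholders.
import socket

def send_conference_packet(payload: bytes, remote_addr=("127.0.0.1", 5004)):
    # UDP is often chosen for real-time media because a lost packet is cheaper
    # than a retransmission delay; TCP could be used instead.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, remote_addr)
    finally:
        sock.close()

send_conference_packet(b"\x00" * 1200)  # one encoded-video packet's worth of bytes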
Although the example communication system in FIG. 64 illustrates a third generation (3G) technology UTRAN wireless mobile communication system, it should be noted that second generation (2G) communication systems, other 3G communication systems such as 3GPP2 Evolution-Data Optimized or Evolution-Data only (EV-DO) and 3rd generation partnership project 2 (3GPP2) Code Division Multiple Access 1X (CDMA 1X), fourth generation (4G) communication systems, wireless local area network (WLAN), and Worldwide Interoperability for Microwave Access (WiMAX) communication systems can be used for connecting some of the participants of a conference in some embodiments. Examples of 2G systems include Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE). A 2G communication system architecture is similar to the architecture shown in FIG. 64 except the 2G communication system architecture uses base transceiver stations (BTSs) instead of Node Bs 6410 and base station controllers (BSCs) instead of RNCs 6405. In a 2G communication system, an A interface between the BSC and the core network is used for circuit switched services and a Gb interface between the BSC and the core network is used for packet switched services.
In some embodiments, the communication system 6400 is operated by a service carrier who initially provisions a mobile device 6415 to allow the mobile device 6415 to use the communication system 6400. Some embodiments provision a mobile device 6415 by configuring and registering a subscriber identity module (SIM) card in the mobile device 6415. In other embodiments, the mobile device 6415 is instead configured and registered using the mobile device 6415's memory. Moreover, additional services can be provisioned (after a customer purchases the mobile device 6415) such as data services like GPRS, multimedia messaging service (MMS), and instant messaging. Once provisioned, the mobile device 6415 is activated and is thereby allowed by the service carrier to use the communication system 6400.
The communication system 6400 is a private communication network in some embodiments. In such embodiments, the mobile devices 6415 can communicate (e.g., conduct voice calls, exchange data) among each other (e.g., mobile devices 6415 that are provisioned for the communication system 6400). In other embodiments, the communication system 6400 is a public communication network. Thus, the mobile devices 6415 can communicate with other devices outside of the communication system 6400 in addition to the mobile devices 6415 provisioned for the communication system 6400. Some of the other devices outside of the communication system 6400 include phones, computers, and other devices that connect to the communication system 6400 through other networks such as a public switched telephone network or another wireless communication network.
The Long-Term Evolution (LTE) specification is used to define 4G communication systems. FIG. 65 conceptually illustrates an example of a 4G communication system 6500 that is used for connecting some participants of a video conference in some embodiments. As shown, the communication system 6500 includes several mobile devices 6415, several Evolved Node Bs (eNBs) 6505, a Mobility Management Entity (MME) 6515, a Serving Gateway (S-GW) 6520, a Packet Data Network (PDN) Gateway 6525, and a Home Subscriber Server (HSS) 6535. In some embodiments, the communication system 6500 includes one or more MMEs 6515, one or more S-GWs 6520, one or more PDN Gateways 6525, and one or more HSSs 6535.
The eNBs 6505 provide an air interface for the mobile devices 6415. As shown, each eNB 6505 covers a service region 6510. The mobile devices 6415 in each service region 6510 are wirelessly connected to the eNB 6505 of the service region 6510 through an LTE-Uu interface. FIG. 65 also shows the eNBs 6505 connected to each other through an X2 interface. In addition, the eNBs 6505 are connected to the MME 6515 through an S1-MME interface and to the S-GW 6520 through an S1-U interface. The eNBs 6505 are collectively referred to as an Evolved UTRAN (E-UTRAN) 6530.
The eNBs 6505 provide functions such as radio resource management (e.g., radio bearer control, connection mobility control, etc.), routing of user plane data towards the S-GW 6520, signal measurement and measurement reporting, MME selection at the time of mobile device attachment, etc. The MME 6515 functions include idle mode mobile device tracking and paging, activation and deactivation of radio bearers, selection of the S-GW 6520 at the time of mobile device attachment, Non-Access Stratum (NAS) signaling termination, user authentication by interacting with the HSS 6535, etc.
The S-GW 6520 functions include (1) routing and forwarding user data packets and (2) managing and storing mobile device contexts such as parameters of the IP bearer service and network internal routing information. The PDN Gateway 6525 functions include providing connectivity from the mobile devices to external packet data networks (not shown) by being the point of exit and entry of traffic for the mobile devices. A mobile station may have simultaneous connectivity with more than one PDN Gateway for accessing multiple packet data networks. The PDN Gateway 6525 also acts as the anchor for mobility between 3GPP and non-3GPP technologies such as WiMAX and 3GPP2 (e.g., CDMA 1X and EV-DO).
As shown, the MME 6515 is connected to the S-GW 6520 through an S11 interface and to the HSS 6535 through an S6a interface. The S-GW 6520 and the PDN Gateway 6525 are connected through an S8 interface. The MME 6515, S-GW 6520, and PDN Gateway 6525 are collectively referred to as an Evolved Packet Core (EPC). The EPC is the main component of a System Architecture Evolution (SAE) architecture, which is the core network architecture of the 3GPP LTE wireless communication standard. The EPC is a pure packet system. For example, the EPC does not have a voice media gateway. Services, like voice and SMS, are routed as packet-switched traffic and are provided by application functions that make use of the EPC service. Using the telephone call to video conference transition previously described above as an example, both the telephone call and the video conference are conducted through packet-switched equipment of the communication system 6500 in some embodiments. In some such embodiments, the packet-switched channel used for the telephone call continues to be used for the audio data of the video conference after the telephone call terminates. However, in other such embodiments, a different packet-switched channel is created (e.g., when the video conference is established) and audio data is transmitted through the newly created packet-switched channel instead of the packet-switched channel of the telephone call when the telephone call terminates.
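The audio-channel behavior just described can be pictured with the following hedged sketch: depending on the embodiment, the conference audio either stays on the telephone call's packet-switched channel or moves to a newly created channel when the call terminates. The channel class and function names are hypothetical.

# Schematic sketch of selecting the packet-switched channel that carries the
# conference audio; names are illustrative assumptions only.
class PacketSwitchedChannel:
    def __init__(self, name):
        self.name = name
    def send_audio(self, data):
        print(f"audio ({len(data)} bytes) over {self.name}")

def select_audio_channel(call_channel, conference_channel, reuse_call_channel):
    # Some embodiments reuse the call's channel; others switch to the newly
    # created conference channel when the telephone call terminates.
    return call_channel if reuse_call_channel else conference_channel

call = PacketSwitchedChannel("telephone-call channel")
conference = PacketSwitchedChannel("video-conference channel")
select_audio_channel(call, conference, reuse_call_channel=False).send_audio(b"\x00" * 160)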
Moreover, the amount of bandwidth provided by these different technologies ranges from 44 kilobits per second (kbps) for GPRS to over 10 megabits per second (Mbps) for LTE. Download rates of 100 Mbps and upload rates of 65 Mbps are predicted in the future for LTE.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process.
Also, many embodiments were described above by reference to a video conference between two dual camera mobile devices. However, one of ordinary skill in the art will realize that many of these embodiments are used in cases involving a video conference between a dual camera mobile device and another device, such as a single camera mobile device, a computer, a phone with video conference capability, etc. Moreover, many of the embodiments described above can be used in single camera mobile devices and other computing devices with video conference capabilities. Thus, one of ordinary skill in the art would understand that the invention is not limited by the foregoing illustrative details, but rather is to be defined by the appended claims.