BACKGROUND
Mobile devices, such as cellular phones, are typically equipped with one or more cameras. In some instances, device-mounted cameras can be activated or launched in a quick manner, such as by a physical gesture. This relieves the user of having to unlock the device, navigate to the camera functionality, activate the camera functionality, and begin to take pictures. This can be quite useful in situations where a user wishes to take a picture of an event that is unfolding quickly.
Yet, a mobile device may be prone to misidentifying such gestures, such as recognizing normal motion as a camera invocation gesture. When this happens, unintended camera clicks can occur which, in turn, can cause unintended pictures to be taken. This can be problematic for a number of reasons. First, on high-definition phones, captured images can be quite large, e.g. 5-7 MB. If twenty or thirty images are captured, such as in a burst mode, a great deal of memory can be consumed. Second, if the images are also automatically saved to a network location, i.e. a “cloud” location, the user must take time to search for and delete the unintended images on both the mobile device and in the network location. Third, because the camera is operating, power is consumed, unnecessarily taxing the mobile device's battery.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of managing unintended camera clicks are described with reference to the following Figures. The same numbers may be used throughout to reference like features and components that are shown in the Figures:
FIG. 1 illustrates an example system in which embodiments of managing unintended camera clicks can be implemented.
FIG. 2 illustrates an example mobile device in which embodiments of managing unintended camera clicks can be implemented.
FIG. 3 illustrates example components that can be used to implement managing unintended camera clicks in accordance with one or more embodiments.
FIG. 4 is a flow diagram that describes operations in a method in accordance with one or more embodiments.
FIG. 5 illustrates example components that can be used to implement managing unintended camera clicks in accordance with one or more embodiments.
FIG. 6 illustrates an example of how confidence levels can be assigned in accordance with one or more embodiments.
FIG. 7 is a flow diagram that describes operations in a method in accordance with one or more embodiments.
FIG. 8 illustrates various components of an example device that can implement embodiments of managing unintended camera clicks.
DETAILED DESCRIPTION
Overview
Techniques for managing unintended camera clicks are described. In this document, the term “click” is used to characterize the action that results in a picture being taken by a camera. Traditionally, the click was a physical click in which a button on a camera was pressed down. As used here, a click can involve any type of button or hard key, such as a volume key, and the like. In modern devices, a click can also be a soft click, such as a touch input received on a mobile device's touchscreen.
In at least some embodiments, a camera invocation gesture to access a first camera on a mobile device is received by the mobile device. Responsive to receiving the camera invocation gesture, a second camera on the mobile device is utilized to ascertain a context associated with whether a user likely intends to take a picture. If the mobile device ascertains that the user likely intends to take a picture, the mobile device can be placed in an operational picture mode in which pictures can be or can continue to be taken and saved on the mobile device. If the mobile device ascertains, from the context associated with the second camera, that the user likely does not intend to take a picture, then any pictures that are or were taken can be saved and the user can be prompted as to whether they wish to delete the pictures.
In at least some other embodiments, a camera invocation gesture to access a first camera on a mobile device is received by the mobile device. Responsive to receiving the camera invocation gesture, the first camera is accessed and one or more pictures are taken by the first camera. The camera invocation gesture is evaluated relative to a confidence spectrum having multiple confidence levels in order to ascertain a confidence level of the camera invocation gesture. For a defined confidence level, a second camera on the mobile device is utilized to ascertain a context associated with whether a user likely intended to take the picture or pictures. For the defined confidence level, if the context ascertained by the second camera indicates that the user likely intended to take the picture or pictures, the pictures can be designated as “intended” and saved on the mobile device. If, on the other hand, the context ascertained by the second camera indicates that the user likely did not intend to take a picture, any picture taken is designated as “unintended.” The user is then prompted to save or delete the pictures that were designated as “unintended.”
In the discussion that follows, an operating environment is described in which the inventive embodiments can be employed. Following this, various embodiments for managing unintended camera clicks are described.
Operating Environment
FIG. 1 illustrates a handheld, mobile device generally at 100, that can be used in connection with the inventive embodiments described herein. Typical mobile devices include cellular telephones, push-to-talk (PTT) devices, smart phones, PDAs, and others. The mobile device 100 includes a body or housing 102 that contains a display 104 that can display content such as phone numbers, text messages, internet information, images, the user and background, etc. The mobile device 100 also includes one or more switches 106 and multiple cameras, an example of which is shown at 108. The cameras can be disposed at any location on the front and back of the mobile device 100. Other input/output (I/O) devices such as microphones, wheels, joysticks, soft (software defined) or hard keys, touchscreens, speakers, antennas, and assorted I/O connections may be present in the mobile device but are not shown here.
The mobile device 100 also contains internal components and circuitry that control and process information and elements of the mobile device 100. For example, as shown generally at 200 in FIG. 2, the mobile device 100 contains a processor 120, a memory 122, transmit/receive circuitry 124, input/output circuitry 126, and a sensor 128, among other components that are not shown for clarity, that are connected by a system bus 130 that operatively couples various components to the processor 120. The I/O circuitry 126 contains circuitry that is connected to the display 104, keys 132, cameras 134, microphone 136, speaker 138, etc. The sensor 128 may be an accelerometer and/or a gyroscope, for example. The accelerometer and gyroscope can be used to determine attitude and motion of the device which, in turn, can be processed to recognize gestures, as will become apparent below.
In some embodiments, the sensor 128 is formed from a conventional microelectromechanical systems (MEMS) device. In other embodiments, the sensor 128 and one or more of the cameras 134 may be the same element. Multiple gyroscopes and/or accelerometers, or a combination thereof, may be used to obtain more accurate sensor results. Further, the sensor 128 (as other elements) may provide different functionalities dependent on the device mode (e.g., game, camera, navigation device, internet browser, etc.).
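By way of illustration only, the following sketch shows how raw gyroscope output might be turned into a roll-angle trace that a gesture engine can examine for motion gestures. The sample format, the single roll axis, and the function names are assumptions made for this example; actual sensor APIs vary by platform.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GyroSample:
    timestamp: float   # seconds
    roll_rate: float   # angular velocity about the device's long axis, degrees/second

def integrate_roll(samples: List[GyroSample]) -> List[float]:
    """Integrate roll rate over time to produce a roll-angle trace in degrees.

    The trace can later be inspected for the double-twist camera
    invocation gesture described in this document.
    """
    angle = 0.0
    trace = []
    for prev, curr in zip(samples, samples[1:]):
        dt = curr.timestamp - prev.timestamp
        angle += curr.roll_rate * dt
        trace.append(angle)
    return trace
```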
The memory 122 may be a conventional memory that is commercially available. The memory 122 may include random-access memory (RAM), read-only memory (ROM), flash memory and the like, that contain, for example, non-transitory computer-accessible media. The processor 120 executes computer programs stored on the computer-accessible media.
Mobile devices can be communicatively connected to the Internet via a wired or wireless connection in a manner well known in the art. Wired connections can be provided using, for example, a modem or Ethernet or similar network card connected to a local-area network (LAN) or a wide-area network (WAN) that itself is connected to the Internet via, for example, a T1 line. Wireless connections can be provided using WiFi or some other connection. Mobile devices typically run an operating system stored on the device.
Having considered an example operating environment, consider now embodiments in which multiple cameras on a mobile device can be used to manage unintended camera clicks.
Managing Unintended Camera Clicks
In the discussion below, techniques for managing unintended camera clicks on a mobile device are described. A first of the techniques utilizes a first camera and a second camera on the mobile device to manage unintended camera clicks through a contextual analysis that is designed to ascertain whether a user likely intended to take a picture or not. A second of the techniques utilizes multiple cameras to manage unintended camera clicks, but also employs a confidence spectrum in addition to the contextual analysis to manage unintended camera clicks.
As noted above, many mobile devices that include cameras enable users to access the cameras through the use of camera invocation gestures. This relieves the users of having to unlock their mobile device, navigate to the camera functionality, activate the camera functionality, and begin taking pictures. Camera invocation gestures can include a wide variety of gestures including, by way of example and not limitation, touch gestures such as taps and swipes, and various motion gestures.
One specific camera invocation gesture, utilized by the assignee of this document, can involve holding the mobile device and twisting it two times, similar to the motion of turning a doorknob. By doing so, the camera can be automatically invoked and then, by touching anywhere on the device's display, a picture or pictures can be taken. This can enable a user to rapidly deploy the camera and begin taking pictures. Sometimes, however, input to a mobile device can be mistaken as a camera invocation gesture and can result in unintended pictures being taken. For example, consider the following scenario.
Ajay went for a walk and took his phone along. He has a habit of keeping his phone in his hand while he walks. While on his walk he saw a group of ducks gathered on the lake and took a photo of the ducks. He then continued on his walk and went back home after completing 30 minutes of the walk. Later, he decided to share the picture of the ducks he took on his walk with his friends. He opened his photo gallery to find the photo and expected it to be the last photo taken. However, much to his surprise, there were a number of other photos of pavement, grass, fingers, and more. He did not remember taking those photos and started selecting all the unintended photos to delete. Apparently, while he was on his walk, the motion of his hand holding his phone was mistakenly identified as a camera invocation gesture and, because his thumb was on the display of his device, unintended photos were taken. After spending more than 10 minutes selecting the unintended photos one by one, he deleted them from his phone. He finally found the photo he was looking for and was able to share it with his friends. Later that day, he received a message from his cloud photo storage provider that his account was running out of storage. Subsequently, he logged into his cloud account to find that all of the unintended photos had been synced to his cloud account as well. He again had to spend a considerable amount of time deleting the unintended photos from the cloud.
Consider now how situations like this and others can be mitigated by using multiple cameras to manage unintended camera clicks.
Using Multiple Cameras to Manage Unintended Camera Clicks
FIG. 3 illustrates example components that can be utilized to manage unintended camera clicks in accordance with one or more embodiments. In this particular example, memory 122 (FIG. 2) includes a gesture engine 300, an unintended picture management module 302, picture storage 304, and a user interface module 306.
Gesture engine 300 is representative of functionality that is used to receive input and process the input to identify a gesture. Any suitable type of gestures can be processed by gesture engine 300 including, by way of example and not limitation, taps, swipes, motion gestures and the like. Motion gestures can be identified through the use of one or more sensors, such as sensor 128 (FIG. 2), which can include an accelerometer and/or a gyroscope.
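A minimal sketch of such a gesture engine appears below. The recognizer callables and the event format are hypothetical and platform-dependent; this is illustrative only, not a prescribed implementation.

```python
class GestureEngine:
    """Routes batches of input events to registered gesture recognizers."""

    def __init__(self):
        # name -> callable that returns True if the events match that gesture
        self._recognizers = {}

    def register(self, name, recognizer):
        self._recognizers[name] = recognizer

    def identify(self, events):
        """Return the name of the first matching gesture, or None."""
        for name, recognizer in self._recognizers.items():
            if recognizer(events):
                return name
        return None

# Example usage with a hypothetical double-twist recognizer:
#   engine = GestureEngine()
#   engine.register("camera_invocation", is_double_twist)
#   gesture = engine.identify(sensor_events)
```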
Unintended picture management module 302 is representative of functionality that is used to ascertain whether a user likely intends to take a picture. In at least some embodiments, when a gesture is received, gesture engine 300 determines whether the gesture is a camera invocation gesture to access a first camera on a mobile device. If the gesture is identified as a camera invocation gesture, the first camera (as indicated by arrow 308 representing the direction of a rear-facing camera) can be invoked and the unintended picture management module 302 can cause a second camera on the mobile device to be invoked (as indicated by arrow 310 representing the direction of a front-facing camera). In this example, the front-facing camera is utilized to ascertain a context associated with whether a user likely intends to take a picture. Any suitable type of context can be utilized. For example, the front-facing camera can take an image or video and the unintended picture management module 302 can process the image or video to ascertain whether a head appears in it. That is, typically when a user takes a picture, the user will look through the viewfinder displayed on their display screen to see in the direction of arrow 308. If the unintended picture management module 302 ascertains that a head appears in the image or video captured by the front-facing camera, an inference can be made that the user likely intends to take a picture. Other contexts can be utilized as well. For example, the unintended picture management module 302 can analyze the image or video captured by the front-facing camera to ascertain the current position of the user's eyes. That is, typically when the user takes a picture, the user will look through the viewfinder displayed on their display screen. This means that the user's eyes will be looking directly or near directly at the device's display. If this is the case, an inference can be made that the user likely intends to take a picture. If, on the other hand, the user's eyes are not looking directly or near directly at the device's display, then an inference can be made that the user likely does not intend to take a picture.
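One possible way to implement the head and eye checks is sketched below using OpenCV's stock Haar-cascade detectors. The document does not prescribe a particular detector, so the choice of OpenCV, the single-frame input, and the function name are assumptions made for illustration.

```python
import cv2

# Stock OpenCV detectors; any head/eye detector could be substituted.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_eye_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def user_likely_intends_picture(front_frame) -> bool:
    """Infer intent from one frame captured by the front-facing camera (BGR image)."""
    gray = cv2.cvtColor(front_frame, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                      # no head facing the display
    x, y, w, h = faces[0]
    eyes = _eye_detector.detectMultiScale(gray[y:y + h, x:x + w])
    return len(eyes) > 0                  # eyes visible, likely looking at the viewfinder
```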
If the mobile device ascertains that the user likely intends to take a picture, the mobile device can be placed in or remain in an operational picture mode in which pictures can be or can continue to be taken and saved in picture storage 304 on the mobile device.
If the mobile device ascertains, from the context associated with the second camera, that the user likely does not intend to take a picture, then any pictures that are taken can be saved in picture storage 304 and the user can be prompted, by way of user interface module 306, as to whether they wish to delete the pictures. In at least some embodiments, when the pictures are saved, the pictures can be categorized as either “intended” or “unintended”, as represented by the table adjacent picture storage 304. In this particular example, the table has two columns, designated “Picture ID” and “Intended/Unintended.”
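The categorization can be represented with a simple record per picture, as in the sketch below. The class and field names are hypothetical stand-ins for picture storage 304 and its two-column table.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StoredPicture:
    picture_id: str    # "Picture ID" column
    intended: bool     # "Intended/Unintended" column

class PictureStorage:
    """Illustrative stand-in for picture storage 304."""

    def __init__(self):
        self._table: List[StoredPicture] = []

    def save(self, picture_id: str, intended: bool) -> None:
        self._table.append(StoredPicture(picture_id, intended))

    def unintended_ids(self) -> List[str]:
        """Picture IDs the user interface module can offer for deletion."""
        return [p.picture_id for p in self._table if not p.intended]
```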
Some gestures may invoke the front-facing camera directly. In this case, if a gesture is received to invoke the front-facing camera, the techniques described herein can be used to ascertain whether the user likely intends to take a picture with the front-facing camera. In yet other situations, if a user is taking a group “selfie” picture, using the techniques described herein to detect a face, head, or eyes of an individual other than the user can still evidence the user's intent to take a picture.
Example method 400 is described with reference to FIG. 4. Generally, any services, components, modules, methods, and/or operations described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), manual processing, or any combination thereof. Some operations of the example methods may be described in the general context of executable instructions stored on computer-readable storage memory that is local and/or remote to a computer processing system, and implementations can include software applications, programs, functions, and the like. Alternatively or in addition, any of the functionality described herein can be performed, at least in part, by one or more hardware logic components, such as, and without limitation, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SoCs), Complex Programmable Logic Devices (CPLDs), and the like.
FIG. 4 illustrates example method(s)400 of managing unintended camera clicks as described herein. The order in which the method is described is not intended to be construed as a limitation, and any number or combination of the described method operations can be performed in any order to perform a method, or an alternate method.
At 402, a camera invocation gesture to access a first camera on a mobile device is received by the mobile device. In the illustrated and described example, the first camera is a rear-facing camera that faces away from the user. Responsive to receiving the camera invocation gesture, a second camera on the mobile device is utilized, at 404, to ascertain a context associated with whether a user likely intends to take a picture. If the mobile device ascertains that the user likely intends to take a picture at 406, the mobile device can be placed in an operational picture mode, at 408, in which pictures can be taken or can continue to be taken and saved on the mobile device. Any suitable context can be utilized to ascertain whether the user likely intends to take a picture. For example, if the second camera captures a picture of a head in its viewfinder, then it can be inferred that the user is taking a picture. Similarly, if the second camera captures a picture of the user's eyes looking at the viewfinder, then it can be inferred that the user is taking a picture. If the mobile device ascertains from the context associated with the second camera, at 406, that the user likely does not intend to take a picture, then any pictures that are taken can be saved on the mobile device, at 410, and the user can be prompted, at 412, as to whether they wish to delete any saved pictures.
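For illustration only, the operations of method 400 might be arranged in code roughly as follows. Every helper on the device object (wait_for_gesture, second_camera, and so on) is a hypothetical name introduced for this sketch.

```python
def handle_camera_invocation(device):
    """Sketch of the control flow of method 400 (reference numbers in comments)."""
    gesture = device.wait_for_gesture()                   # 402: gesture received
    if gesture != "camera_invocation":
        return
    context_frame = device.second_camera.capture_frame()  # 404: consult second camera
    if device.user_likely_intends_picture(context_frame):
        device.enter_operational_picture_mode()           # 406/408: keep taking and saving pictures
    else:
        picture_ids = device.save_pictures(intended=False)  # 410: save, flagged as unintended
        device.prompt_to_delete(picture_ids)                 # 412: ask the user
```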
In this manner, the mobile device automatically determines whether any of the pictures that are taken are possibly unintended. The mobile device can then prompt the user to delete the pictures, thus saving the user time and effort to search through a storage album and find the unintended pictures that they wish to delete.
Having considered the above-described embodiment, consider now embodiments that utilize multiple cameras and a confidence spectrum to manage unintended camera clicks.
Using Multiple Cameras and a Confidence Spectrum to Manage Unintended Camera Clicks
FIG. 5 illustrates example components that can be utilized to manage unintended camera clicks in accordance with one or more embodiments. In this particular example, memory 122 (FIG. 2) includes a gesture engine 500, a confidence evaluation module 501, an unintended picture management module 502, picture storage 504, and a user interface module 506.
Gesture engine 500 is representative of functionality that is used to receive input and process the input to identify a gesture, such as a camera invocation gesture. Any suitable type of gestures can be processed by gesture engine 500 including, by way of example and not limitation, taps, swipes, motion gestures and the like. Motion gestures can be identified through the use of one or more sensors, such as sensor 128 (FIG. 2), which can include an accelerometer and/or a gyroscope.
Confidence evaluation module 501 is representative of functionality that ascertains a confidence level associated with a gesture being a camera invocation gesture. Multiple confidence levels can be distributed along a confidence spectrum. For example, one end of the spectrum may have a low value while the other end of the spectrum has a high value, with varying values in between. Thus, the confidence levels may vary from a low confidence level, up through moderate levels, and terminate at a high level. So, for example, if a gesture is received and corresponding data is processed by the confidence evaluation module 501 to assign a low confidence level of the gesture being a camera invocation gesture, the camera will not be invoked. If, on the other hand, the gesture is evaluated and assigned a high confidence level of the gesture being a camera invocation gesture, the camera will be invoked and will operate as usual. If the gesture is evaluated and assigned a moderate confidence level, the pictures taken during this time will be processed, as described below, using the unintended picture management module 502.
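One simple way to represent the spectrum is as discrete levels with an associated action, as sketched below. The three-level split and the names are assumptions drawn from the example just described.

```python
from enum import Enum

class Confidence(Enum):
    LOW = 0        # camera is not invoked
    MODERATE = 1   # camera is invoked, but pictures are routed through the
                   # unintended picture management module
    HIGH = 2       # camera is invoked and operates as usual

def action_for(level: Confidence) -> str:
    """Describe the device behavior associated with each confidence level."""
    return {
        Confidence.LOW: "ignore gesture",
        Confidence.MODERATE: "invoke camera and verify intent with second camera",
        Confidence.HIGH: "invoke camera normally",
    }[level]
```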
Unintended picture management module 502 is representative of functionality that is used to ascertain whether a user likely intends to take a picture. In at least some embodiments, when a gesture is received, gesture engine 500 processes the input to produce gesture data. The gesture data is used by the confidence evaluation module 501 to assign a confidence level to the gesture as to whether the gesture is a camera invocation gesture to invoke a first camera on a mobile device. One example of how this can be done is provided below in a section entitled “Assigning Confidence Levels”. If the gesture is assigned a moderate confidence level as being a camera invocation gesture, the first camera (as indicated by arrow 508 representing the direction of a rear-facing camera) can be invoked and the unintended picture management module 502 can cause a second camera on the mobile device to be invoked (as indicated by arrow 510 representing the direction of a front-facing camera). In this example, the front-facing camera is utilized to ascertain a context associated with whether a user likely intends to take a picture. Any suitable type of context can be utilized. For example, the front-facing camera can take an image or video and the unintended picture management module 502 can process the image or video to ascertain whether a head appears in it. That is, typically when a user takes a picture, the user will look through the viewfinder displayed on their display screen to see in the direction of arrow 508. If the unintended picture management module 502 ascertains that a head appears in the image or video captured by the front-facing camera, an inference can be made that the user likely intends to take a picture. Other contexts can be utilized as well. For example, the unintended picture management module 502 can analyze the image or video captured by the front-facing camera to ascertain the current position of the user's eyes. That is, typically when the user takes a picture, the user will look through the viewfinder displayed on their display screen. This means that the user's eyes will be looking directly or near directly at the device's display. If this is the case, an inference can be made that the user likely intends to take a picture. If, on the other hand, the user's eyes are not looking directly or near directly at the device's display, then an inference can be made that the user likely does not intend to take a picture.
If the mobile device ascertains that the user likely intends to take a picture, the mobile device can be placed in or remain in an operational picture mode in which pictures can be or can continue to be taken and saved in picture storage 504 on the mobile device.
If the mobile device ascertains, from the context associated with the second camera, that the user likely does not intend to take a picture, then any pictures that are taken can be saved in picture storage 504 and the user can be prompted, by way of user interface module 506, as to whether they wish to delete the pictures. In at least some embodiments, when the pictures are saved, the pictures can be categorized as either “intended” or “unintended”, as represented by the table adjacent picture storage 504.
Consider now one example of how confidence levels can be assigned in accordance with one or more embodiments.
Assigning Confidence Levels
As noted above, camera invocation gestures can vary between mobile devices. In some instances, the camera invocation gesture may be a touch gesture such as a swipe or tap. In other instances, the camera invocation gesture may be a motion-based gesture. For example, one camera invocation gesture is the twist gesture mentioned above in which the mobile device is twisted twice in a manner similar to the motion used to turn a doorknob. If the mobile device is twisted twice through a high range of angles, then a high confidence level may be assigned. If, on the other hand, the mobile device is twisted twice through a low range of angles, then a low confidence level can be assigned. Similarly, if the mobile device is twisted through a moderate range of angles, a moderate confidence level may be assigned. As an example, considerFIG. 6.
There, a mobile device is shown generally at 600. The mobile device is also depicted at 600′, as viewed along line 6-6. The mobile device 600′ appears within a circle that has a range of angles depicted including 0 degrees, +/−45 degrees, and +/−90 degrees. When the mobile device is twisted, sensors within the device, such as an accelerometer and gyroscope, can produce gesture data which can then be analyzed. Various threshold angles can be set and used to determine whether a gesture or motion is a camera invocation gesture. For example, if the mobile device 600′ is rotated twice in the direction of the indicated arrows between −45 degrees and +45 degrees or greater, a high confidence level may be assigned that the gesture is a camera invocation gesture. If, on the other hand, the device is rotated between −35 degrees and +20 degrees, then a moderate confidence level may be assigned that the gesture is a camera invocation gesture. If the device is rotated between −20 degrees and +10 degrees, then a low confidence level may be assigned.
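A sketch of this mapping, using the example thresholds from FIG. 6, appears below. The input is assumed to be a roll-angle trace in degrees, such as one produced by integrating gyroscope output as illustrated earlier; real devices would tune the thresholds, and counting the two twists of the double-twist gesture is omitted for brevity.

```python
def confidence_from_twist(roll_trace):
    """Map the peak roll excursions of a twist gesture to a confidence level.

    roll_trace: list of roll angles in degrees sampled over the gesture.
    Returns "high", "moderate", or "low", per the FIG. 6 example thresholds.
    """
    if not roll_trace:
        return "low"
    neg_peak = min(roll_trace)   # most negative rotation reached
    pos_peak = max(roll_trace)   # most positive rotation reached
    if neg_peak <= -45 and pos_peak >= 45:
        return "high"
    if neg_peak <= -35 and pos_peak >= 20:
        return "moderate"
    return "low"
```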
It is to be appreciated and understood, however, that this constitutes but one example of how confidence levels can be assigned to a particular gesture to ascertain whether the gesture is a camera invocation gesture. As such, confidence levels can be assigned in different ways and in connection with different gestures without departing from the spirit and scope of the claimed subject matter.
Example method 700 is described with reference to FIG. 7. Generally, any services, components, modules, methods, and/or operations described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), manual processing, or any combination thereof. Some operations of the example methods may be described in the general context of executable instructions stored on computer-readable storage memory that is local and/or remote to a computer processing system, and implementations can include software applications, programs, functions, and the like. Alternatively or in addition, any of the functionality described herein can be performed, at least in part, by one or more hardware logic components, such as, and without limitation, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SoCs), Complex Programmable Logic Devices (CPLDs), and the like.
FIG. 7 illustrates example method(s)700 of managing unintended camera clicks as described herein. The order in which the method is described is not intended to be construed as a limitation, and any number or combination of the described method operations can be performed in any order to perform a method, or an alternate method.
At 702, a camera invocation gesture to access a first camera on a mobile device is received by the mobile device. In the illustrated and described example, the first camera is a rear-facing camera that faces away from the user. In at least some embodiments, when the first camera is invoked, it can immediately begin taking pictures. At 704, a confidence level associated with the camera invocation gesture is detected. At 706, the confidence level is determined to be a certain defined confidence level. The certain defined confidence level is one in which it is uncertain whether the camera invocation gesture that invoked the camera was actually intended to invoke the camera. In the example above, the certain defined confidence level was a moderate confidence level that lay between a low confidence level and a high confidence level. Responsive to determining that the confidence level is the certain defined confidence level, a second camera on the mobile device is utilized, at 708, to ascertain a context associated with whether a user likely intends to take a picture. If the mobile device ascertains that the user likely intends to take a picture at 710, the pictures that are taken can be classified as “intended” and saved at 712. Any suitable context can be utilized to ascertain whether the user likely intends to take a picture. For example, if the second camera captures a picture of a head in its viewfinder, then it can be inferred that the user is taking a picture. Similarly, if the second camera captures a picture of the user's eyes looking at the viewfinder, then it can be inferred that the user is taking a picture. If, on the other hand, the mobile device ascertains, at 710, from the context associated with the second camera that the user likely does not intend to take a picture, then any pictures that are taken are classified as “unintended” and saved on the mobile device, at 714. The user can then be prompted, at 716, as to whether they wish to delete any saved pictures that were classified as “unintended”.
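As with method 400, the operations of method 700 might be arranged roughly as follows. The device helpers and the "moderate" string are hypothetical names standing in for the certain defined confidence level described above.

```python
def handle_gesture_with_confidence(device):
    """Sketch of the control flow of method 700 (reference numbers in comments)."""
    gesture_data = device.wait_for_gesture()                # 702: gesture received; first camera
    pictures = device.first_camera.pictures_taken()         #      may already be taking pictures
    level = device.confidence_level(gesture_data)           # 704: evaluate the gesture
    if level != "moderate":                                  # 706: only the uncertain case
        return                                               #      needs further verification
    context_frame = device.second_camera.capture_frame()    # 708: consult second camera
    if device.user_likely_intends_picture(context_frame):   # 710
        device.save_pictures(pictures, intended=True)       # 712: classified "intended"
    else:
        device.save_pictures(pictures, intended=False)      # 714: classified "unintended"
        device.prompt_to_delete_unintended()                 # 716: ask the user
```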
In this manner, the mobile device automatically determines whether any of the pictures that are taken are possibly unintended. The mobile device can then prompt the user to delete the pictures, thus saving the user time and effort to search through a storage album and find the unintended pictures that they wish to delete.
Having considered various embodiments described above, consider now an example device that can be utilized to implement the described embodiments.
Example Device
FIG. 8 illustrates various components of an example device 800 in which embodiments of managing unintended camera clicks can be implemented. The example device 800 can be implemented as any of the devices described with reference to the previous figures, such as any type of client device, mobile phone, tablet, computing, communication, entertainment, gaming, media playback, and/or other type of electronic device.
The device 800 includes communication transceivers 802 that enable wired and/or wireless communication of device data 804 with other devices. Additionally, the device data can include any type of audio, video, and/or image data. Example transceivers include wireless personal area network (WPAN) radios compliant with various IEEE 802.15 (Bluetooth™) standards, wireless local area network (WLAN) radios compliant with any of the various IEEE 802.11 (WiFi™) standards, wireless wide area network (WWAN) radios for cellular phone communication, wireless metropolitan area network (WMAN) radios compliant with various IEEE 802.16 (WiMAX™) standards, and wired local area network (LAN) Ethernet transceivers for network data communication.
The device 800 may also include one or more data input ports 806 via which any type of data, media content, and/or inputs can be received, such as user-selectable inputs to the device, messages, music, television content, recorded content, and any other type of audio, video, and/or image data received from any content and/or data source. The data input ports may include USB ports, coaxial cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, DVDs, CDs, and the like. These data input ports may be used to couple the device to any type of components, peripherals, or accessories such as microphones and/or cameras.
The device 800 includes a processing system 808 of one or more processors (e.g., any of microprocessors, controllers, and the like) and/or a processor and memory system implemented as a system-on-chip (SoC) that processes computer-executable instructions. The processor system may be implemented at least partially in hardware, which can include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon and/or other hardware. Alternately or in addition, the device can be implemented with any one or combination of software, hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits, which are generally identified at 810. The device 800 may further include any type of a system bus or other data and command transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures and architectures, as well as control and data lines.
The device 800 also includes computer-readable storage memory 812 that enables data storage, such as data storage devices that can be accessed by a computing device, and that provide persistent storage of data and executable instructions (e.g., software applications, programs, functions, and the like). Examples of the computer-readable storage memory 812 include volatile memory and non-volatile memory, fixed and removable media devices, and any suitable memory device or electronic data storage that maintains data for computing device access. The computer-readable storage memory can include various implementations of random access memory (RAM), read-only memory (ROM), flash memory, and other types of storage media in various memory device configurations. The device 800 may also include a mass storage media device.
The computer-readable storage memory 812 provides data storage mechanisms to store the device data 804, other types of information and/or data, and various device applications 814 (e.g., software applications). For example, an operating system 816 can be maintained as software instructions with a memory device and executed by the processing system 808. The device applications may also include a device manager, such as any form of a control application, software application, signal-processing and control module, code that is native to a particular device, a hardware abstraction layer for a particular device, and so on. In this example, the device 800 includes a confidence evaluation module 818, an unintended picture management module 820, and a gesture engine 821 that operate as described above.
The device 800 can also include one or more device sensors 822, such as any one or more of an ambient light sensor, a proximity sensor, a touch sensor, an infrared (IR) sensor, accelerometer, gyroscope, and the like. The device 800 can also include one or more power sources 824, such as when the device is implemented as a mobile device. The power sources may include a charging and/or power system, and can be implemented as a flexible strip battery, a rechargeable battery, a charged super-capacitor, and/or any other type of active or passive power source.
The device 800 also includes an audio and/or video processing system 826 that generates audio data for an audio system 828 and/or generates display data for a display system 830, and multiple cameras 827. The audio system and/or the display system may include any devices that process, display, and/or otherwise render audio, video, display, and/or image data. Display data and audio signals can be communicated to an audio component and/or to a display component via an RF (radio frequency) link, S-video link, HDMI (high-definition multimedia interface), composite video link, component video link, DVI (digital video interface), analog audio connection, or other similar communication link, such as media data port 832. In implementations, the audio system and/or the display system are integrated components of the example device. Alternatively, the audio system and/or the display system are external, peripheral components to the example device.
Although the embodiments described above have been described in language specific to features and/or methods, the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations, and other equivalent features and methods are intended to be within the scope of the appended claims. Further, various different embodiments are described and it is to be appreciated that each described embodiment can be implemented independently or in connection with one or more other described embodiments.