Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
With the development of technology and the progress of electronic devices, users' requirements on the shooting performance of everyday electronic devices have gradually increased. In the related art, the sharpness of an image photographed by an electronic device may be improved through a multi-frame super-resolution algorithm and an Auto Focus (AF) technique. The multi-frame super-resolution algorithm reconstructs an image from multiple acquired frames by exploiting the complementary information of pixel points in adjacent frames, thereby improving image sharpness. In the auto-focus technique, for a sensor that is not fixed-focus, AF may determine how the focal length of the lens currently needs to be adjusted by using Phase Detection (PD) technology, Time of Flight (TOF) technology, and the sharpness of the previous frame, so as to obtain a clear image.
However, in the related art, the multi-frame super-resolution algorithm performs reconstruction on already-acquired images and is therefore limited by the sharpness of its input. It can be understood that the sharpness of an image obtained by the multi-frame super-resolution algorithm is determined by the sharpness of the input frames; if the input frames are of poor sharpness, an image of higher sharpness cannot be obtained. On the other hand, the auto-focus technique obtains a locally focused image by moving the lens, so that a clear image is obtained at the focal point, but it cannot obtain a globally clear image. For example, when an electronic device captures a scene containing a closer person and a farther building, the auto-focus technique can adjust the camera lens so that the focal point falls either on the person or on the building, but not both. Therefore, only a locally clear image can be acquired by the auto-focus technique, and a globally clear image cannot be acquired.
Accordingly, the present disclosure provides an image photographing method which acquires the distance between the photographing device and the object to be photographed, matches the acquired distance against a calibration gradient matrix to determine position information of the photosensitive elements in the photographing device, and adjusts the photosensitive elements based on the determined position information. An image is then shot based on the adjusted photosensitive elements, so that a globally clear image is obtained, bringing the user a shooting experience of higher clarity.
Fig. 1 is a flowchart illustrating an image photographing method according to an exemplary embodiment. As shown in Fig. 1, the method is used in an electronic device and includes the following steps.
In step S11, in response to detecting that the photographing device in the electronic apparatus is turned on, a distance between the photographing device and the object to be photographed is acquired.
In the embodiment of the disclosure, the distance to the object to be photographed can be obtained through a TOF sensor in the photographing device.
The TOF sensor may be a pixel-level TOF sensor, and distance information of each pixel point in the object to be photographed may be acquired based on the pixel-level TOF sensor.
In step S12, the position of the photosensitive element in the photographing device is adjusted based on the distance and the calibration gradient matrix.
The calibration gradient matrix is used for representing the corresponding relation between the distance and the adjustment position of the photosensitive element.
In the embodiment of the disclosure, the obtained distance is matched with the calibration gradient matrix, so that the position of the photosensitive element corresponding to the distance is obtained.
In the embodiment of the present disclosure, the adjustment position of the photosensitive element may be a position of the photosensitive element where a clear image can be obtained.
In embodiments of the present disclosure, the photosensitive element may be a pixel-level sensor, for example, a pixel sensor. In one example, the photosensitive element in the electronic device may employ a Dual Photodiode (Dual PD). Fig. 2 is a schematic diagram showing the structure of a Dual PD according to an exemplary embodiment. As shown in Fig. 2, the two photodiodes of the Dual PD are located at different positions at the bottom of the same pixel; by receiving light signals from different directions and evaluating the intensity of those signals and their time difference, the focal length, that is, the position at which the Dual PD can acquire a clear image, is determined. It can be understood that, in the embodiments of the disclosure, the refraction of the transparent lens over the Dual PD is assumed to be ideal, with no scattering or other such phenomena.
In the embodiment of the disclosure, based on the obtained distance of each pixel point of the object to be photographed, the distances are matched against the calibration gradient matrix, so that the position of the photosensitive element corresponding to each pixel point can be obtained.
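As a minimal sketch of this matching step, assuming the calibration gradient matrix is stored as a set of sampled target distances together with, for every pixel, the calibrated element position at each sample (the function and array names here are hypothetical, not taken from the disclosure), the per-pixel lookup might look like:

```python
import numpy as np

def lookup_element_positions(distance_map, sample_distances, sample_positions):
    """Match per-pixel TOF distances against the calibration gradient matrix.

    distance_map:     (H, W) per-pixel distances from the pixel-level TOF sensor.
    sample_distances: (K,) increasing target distances used during calibration.
    sample_positions: (K, H, W) calibrated element position for every pixel at
                      every target distance.
    Returns an (H, W) map of element positions, interpolating between the
    calibrated target distances.
    """
    height, width = distance_map.shape
    positions = np.empty((height, width))
    for y in range(height):
        for x in range(width):
            positions[y, x] = np.interp(distance_map[y, x],
                                        sample_distances,
                                        sample_positions[:, y, x])
    return positions
```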
In step S13, an image is captured based on the photosensitive element after the position adjustment.
In the embodiment of the disclosure, the photosensitive elements are adjusted to the positions determined based on the calibration gradient matrix, and an image is taken. That is, imaging may be performed after the photosensitive element corresponding to each pixel point has been moved to its determined position.
In the embodiment of the disclosure, the distance between the shooting device and the object to be shot is acquired and matched against the calibration gradient matrix, so that the position information of the photosensitive elements in the shooting device is determined. The photosensitive elements are adjusted based on the determined position information, and the image is shot based on the adjusted photosensitive elements. A globally clear image is thereby acquired, bringing the user a shooting experience of higher clarity.
Fig. 3 is a flowchart illustrating a method of determining a calibration gradient matrix according to an exemplary embodiment. As shown in Fig. 3, the method is used in an electronic device and includes the following steps.
In step S21, the focusing range of the imaging device is divided into a plurality of equal-division points, and each equal-division point is determined as a target distance.
In the embodiment of the disclosure, the focusing range of the image capturing apparatus may be a range from a closest point to a farthest point at which the image capturing apparatus can focus.
In the embodiment of the disclosure, the focusing range of the image capturing apparatus is divided, for example equally, thereby determining a plurality of target distances.
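For illustration, the equal division might be sketched as follows; the focusing-range endpoints and the number of equal-division points are hypothetical values, since the disclosure does not fix them:

```python
import numpy as np

near, far = 0.1, 5.0   # hypothetical closest and farthest focusable distances, in meters
num_points = 50        # hypothetical number of equal-division points

# Each equal-division point of the focusing range becomes one target distance.
target_distances = np.linspace(near, far, num_points)
```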
In step S22, the photographing object is moved to the ith target distance, and the photosensitive element in the photographing device is moved until the brightness value obtained by the photosensitive element at the pixel position corresponding to the calibration template equals the preset brightness threshold; a first correspondence between the position of the photosensitive element and the ith target distance is then recorded.
The calibration template is determined based on the number of pixels of an image shot by the image pickup device and comprises a plurality of pixel positions.
In the embodiment of the disclosure, the shooting object may be moved to a target distance; when the shooting object is determined to be at that target distance, the photosensitive element in the image capturing device is moved so that the brightness value obtained at the calibration template equals the preset brightness value, and a first correspondence between the current photosensitive element position and the target distance is recorded. It is understood that, when the same shooting object is at the same target distance, the positions of the photosensitive elements corresponding to different pixels of the shooting object may differ.
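A schematic sketch of this sweep for a single photosensitive element is given below; move_element, read_brightness, and the travel and step values are stand-ins for the actual calibration rig, and the comparison with the preset threshold uses a tolerance rather than exact equality:

```python
def calibrate_element(move_element, read_brightness, threshold,
                      travel=1.0, step=1e-3, tol=1.0):
    """Move one photosensitive element along its travel until the brightness
    measured at the calibration-template pixel position matches the preset
    brightness threshold, then return that element position.

    move_element(pos) and read_brightness() abstract the hardware; positions
    are in arbitrary units along the element's travel range.
    """
    pos = 0.0
    while pos <= travel:
        move_element(pos)
        if abs(read_brightness() - threshold) <= tol:
            # The pair (pos, current target distance) is the recorded
            # first correspondence for this element.
            return pos
        pos += step
    raise RuntimeError("brightness threshold not matched within travel range")
```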
In embodiments of the present disclosure, the focal position, i.e., the position of the photosensitive element, may be determined by the sensor's phase focusing technique. The principle of phase focusing can be understood as finding the point where the phase difference is smallest in order to determine the focus position. Fig. 4 is a schematic diagram illustrating the phase focusing principle according to an exemplary embodiment. As shown in Fig. 4, the same pixel point is mapped onto the sensor by different light rays, forming similar triangles, where P1, P2, and d1 correspond to P1', P2', and d2, respectively. Whether P1, P2 and P1', P2' lie before or after the focal point can be determined from the positional relationship between the rays received at P1, P2, P1', P2' and the original pixel point. P1, P2, and d1 may be factory calibration data of the sensor. Therefore, the distance between P2' and P1' is calculated from the positional relationship of the PD points, and d2 can then be calculated from the similar triangles, thereby determining the focal position.
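Under this similar-triangle reading of Fig. 4, d2 follows from d1 scaled by the ratio of the PD-point separations. A one-line sketch, treating the positions as scalar coordinates, might be:

```python
def focal_offset(p1, p2, d1, p1_prime, p2_prime):
    """Similar triangles of Fig. 4: d2 / d1 = |P2' - P1'| / |P2 - P1|.

    p1, p2, and d1 are the sensor's factory data; p1_prime and p2_prime are
    the measured PD point positions. Returns d2, from which the focal
    position of the element is determined.
    """
    return d1 * abs(p2_prime - p1_prime) / abs(p2 - p1)
```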
In step S23, the photographing object is moved to the (i+1)th target distance, and the above process is repeated until a first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element is obtained.
In the embodiment of the disclosure, the shooting object is moved to another target distance, for example the next target distance. When the shooting object is located at the next target distance, the photosensitive element in the image pickup device is moved again so that the brightness value obtained at the calibration template equals the preset brightness value, and a first correspondence between the current photosensitive element position and that target distance is recorded. It is understood that the brightness values corresponding to different pixels at different target distances may differ, and the brightness values may be determined by a skilled technician.
In step S24, the calibration gradient matrix is determined based on the first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element.
In the embodiment of the disclosure, the first correspondence between the target distances obtained by dividing the focusing range and the positions of the photosensitive element is acquired, so that the calibration gradient matrix can be determined.
In the embodiment of the disclosure, a plurality of target distances are determined by dividing the focusing range of the image pickup device, and the calibration gradient matrix is determined by acquiring the first correspondence between the plurality of target distances and the positions of the photosensitive element. Because the calibration gradient matrix is determined in advance, the position of the photosensitive element can be rapidly determined when the shooting device shoots an image, so that a globally clear image is obtained and the user experience is improved.
In embodiments of the present disclosure, since the target distances do not cover the entire focusing range, it is necessary to fit the first correspondence between the target distances and the positions of the photosensitive element, so as to determine the correspondence between every distance within the focusing range and the photosensitive element.
Fig. 5 is a flowchart illustrating a method of determining a calibration gradient matrix according to an exemplary embodiment. As shown in Fig. 5, the method is used in an electronic device and includes the following steps.
In step S31, the focusing range of the imaging device is divided into a plurality of equal-division points, and each equal-division point is determined as a target distance.
In step S32, the photographing object is moved to the ith target distance, and the photosensitive element in the photographing device is moved until the brightness value obtained by the photosensitive element at the pixel position corresponding to the calibration template equals the preset brightness threshold; a first correspondence between the position of the photosensitive element and the ith target distance is then recorded.
In step S33, the photographing object is moved to the (i+1)th target distance, and the above process is repeated until a first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element is obtained.
In step S34, the first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element is fitted, so as to obtain a second correspondence between each focusing distance within the focusing range of the photographing device and the position of the photosensitive element, and the second correspondence is used as the calibration gradient matrix.
In the embodiment of the present disclosure, steps S31, S32, and S33 are identical to steps S21, S22, and S23, respectively, and are not described in detail herein.
In the embodiment of the disclosure, the acquired first correspondence between the target distances and the positions of the photosensitive element is fitted, so as to acquire a second correspondence between all focusing distances in the focusing range and the positions of the photosensitive element, and the second correspondence is taken as the calibration gradient matrix.
In one example, a one-dimensional vector may be constructed from the movement positions of each photosensitive element. It is understood that this one-dimensional vector is not continuous. Therefore, so that distances of objects that fall between the target distances can still be handled, the constructed one-dimensional vector data is converted into a two-dimensional function template, making it convenient to obtain the movement position of the photosensitive element at any distance. Two-dimensional function templates are constructed for all positions, finally yielding the constructed gradient matrix.
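One way to read this fitting step, assuming simple linear interpolation between the sampled target distances (the disclosure does not mandate a particular fitting method), is sketched below for a single photosensitive element:

```python
import numpy as np

def position_at(distance, target_distances, positions):
    """Second correspondence for one element: interpolate the discrete
    one-dimensional vector of (target distance, movement position) samples so
    that a movement position can be read off at any focusing distance.
    np.interp clamps to the endpoint positions outside the sampled range."""
    return np.interp(distance, target_distances, positions)

# Repeating this for every photosensitive element (every pixel position)
# yields the calibration gradient matrix over the whole focusing range.
```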
In the embodiment of the disclosure, the first correspondence between the obtained target distances and the positions of the photosensitive element is fitted to obtain the second correspondence between all focusing distances in the focusing range and the positions of the photosensitive element, and the second correspondence is used as the calibration gradient matrix. The calibration gradient matrix thus contains the correspondence between every focusing distance of the photographing device and the photosensitive element, so that when the photographing device shoots an image, the position of the photosensitive element can be determined accurately and comprehensively, and the shot image is clearer.
In the embodiment of the disclosure, the sizes of the images shot by different image capturing devices may differ, or the same shooting device may shoot images of different sizes, and images of different sizes may contain different numbers of pixels. It will be appreciated that each photosensitive element receives the light signal corresponding to a pixel, so images of different sizes may correspond to different photosensitive elements.
In the embodiment of the disclosure, the calibration template can be used for acquiring the brightness values of different photosensitive elements, and the calibration template may differ for different image pickup devices. Moreover, the calibration template may be determined based on the size of the image that the image pickup device can capture.
Fig. 6 is a flowchart illustrating a method of determining a calibration template according to an exemplary embodiment. As shown in Fig. 6, the method is used in an electronic device and includes the following steps.
In step S41, based on the width of the image captured by the capturing device and the number of pixels included in the row direction of a preset target template, the number X of target templates contained across the width of the image is determined, where X is the value obtained by rounding down the ratio between the width and the number of pixels included in the row direction.
In the embodiment of the disclosure, the target template may be a matrix of a preset size.
In the embodiment of the disclosure, the number of pixels included in the width of the image that the shooting device can shoot, and the number of pixels included in the row direction of the target template, can be determined, so that the number X of target templates contained across the width of the image is determined, X being the value obtained by rounding down the ratio between the width and the number of pixel points included in the row direction.
In step S42, based on the height of the image captured by the capturing device and the number of pixels included in the column direction of the preset target template, the number Y of target templates contained across the height of the image is determined, where Y is the value obtained by rounding down the ratio between the height and the number of pixels included in the column direction.
In the embodiment of the disclosure, the number of pixels included in the height of the image that the shooting device can shoot, and the number of pixels included in the column direction of the target template, can be determined, so that the number Y of target templates contained across the height of the image is determined, Y being the value obtained by rounding down the ratio between the height and the number of pixel points included in the column direction.
In step S43, the X×Y target templates are used as the calibration template.
In the embodiment of the disclosure, the product of X and Y may be determined as the number of target templates contained in the calibration template.
In the embodiment of the present disclosure, the target template may be set as an m×n matrix. For example, each pixel in the image to be photographed needs its own identity information, and brightness values generally range from 0 to 255, so a matrix of size 256×256 can be established as the target template.
In one example, Fig. 7 is a schematic diagram of a calibration template according to an exemplary embodiment. As shown in Fig. 7, suppose the size of the image captured by the image capturing device is determined to be M×N, where M and N respectively denote the numbers of pixels the image contains in the width and height directions; for example, the size of the image may be 512×512. If the target template is determined to be a 256×256 matrix, the calibration template comprises 4 target templates, arranged as shown in Fig. 7.
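The arithmetic of this example, applying the floor-division rule of steps S41 to S43 to the 512×512 image and 256×256 target template given above, works out as follows:

```python
M, N = 512, 512   # width and height of the captured image, in pixels
m, n = 256, 256   # rows and columns of the preset target template

X = M // m        # target templates across the width:  512 // 256 = 2
Y = N // n        # target templates across the height: 512 // 256 = 2

num_templates = X * Y   # the calibration template comprises 2 * 2 = 4 target templates
```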
In the embodiment of the disclosure, the calibration template is determined based on the image that the shooting device can shoot, so that a calibration template meeting the functional requirements of the shooting device is obtained. The calibration gradient matrix can then be obtained based on the determined calibration template, so that a globally clear image is shot and the user experience is improved.
In the embodiment of the disclosure, the position of the photosensitive element may be adjusted by a motor, for example, the position of the photosensitive element may be adjusted by a voice coil motor.
In the embodiment of the disclosure, the positions of the photosensitive elements are adjusted through the voice coil motor, so that multi-focus shooting of images is realized and the sharpness of shot images is improved.
In the embodiment of the disclosure, the distance from the shooting device to each pixel point of the object to be shot is obtained through the pixel-level distance sensor, so that the position of the photosensitive element corresponding to each pixel point can be determined according to that pixel point's distance and adjusted accordingly, realizing the shooting of a globally clear image and improving the user's shooting experience.
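Putting the pieces together, a schematic per-pixel adjustment loop might look like the following; move_motor stands in for the voice coil motor control, and lookup_element_positions is the hypothetical matching helper sketched earlier:

```python
def adjust_elements(distance_map, sample_distances, sample_positions, move_motor):
    """For each pixel: read its TOF distance, look up the corresponding element
    position in the calibration gradient matrix, and drive that element's
    voice coil motor to the position. move_motor(y, x, pos) is a stand-in
    for the actual per-element motor interface."""
    positions = lookup_element_positions(distance_map,
                                         sample_distances,
                                         sample_positions)
    height, width = positions.shape
    for y in range(height):
        for x in range(width):
            move_motor(y, x, positions[y, x])
```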
In the embodiments of the present disclosure, an image capturing method is described with reference to the following examples.
In an embodiment of the present disclosure, Fig. 8 is a flowchart illustrating image capturing according to an exemplary embodiment. As shown in Fig. 8, a single-frame globally clear image is generated through a calibration module comprising template construction and multi-dimensional calibration, construction of the calibration gradient matrix, pixel-level TOF distance estimation, and pixel-level sensor adjustment. When the sensor is powered on, that is, when the user opens the image pickup device in the electronic equipment, a calibration template matching the size of the picture that the image pickup device can shoot is constructed. The calibration template may include a plurality of preset target templates, where the target template may be a matrix determined based on brightness values, for example a 256×256 matrix. Once the calibration template is determined, the focusing range may be equally divided according to the interval between the minimum and maximum focusing distances supported by the image pickup apparatus; for example, an interval of 1 meter may be divided into N equal parts. At each distance, the constructed template is used to adjust each pixel point of the sensor so that each pixel point obtains the same brightness value as the template. Each pixel point has left and right PD points. According to the principle of light transmission, the same pixel point traversed by different light rays obtains the same brightness value if the pixel sensor is at the in-focus imaging position. Therefore, based on the calibration template, the focal distance can be conveniently calculated, and the whole image can be presented clearly by appropriately adjusting the position of the pixel sensor. Moreover, since the imaging distances presented when the object is at different positions all differ, simulation over a plurality of divided distances is required.
Further, a correspondence between the distances over the entire focusing range of the imaging device and the positions of the pixel sensor may be established based on the acquired pixel sensor positions at the respective equal-division points. A one-dimensional vector is first constructed from the position of each equal-division point and the movement position of the pixel sensor, but this one-dimensional vector is not continuous. So that object distances falling between the simulated division distances can still be handled, a two-dimensional function template is constructed from the one-dimensional vector data, making it convenient to obtain the movement position of the pixel sensor at any distance. Two-dimensional function templates are constructed for all positions, finally yielding the constructed gradient matrix.
In the embodiment of the disclosure, the above process may be understood as the process of constructing the calibration gradient matrix. This process may be completed before the electronic device leaves the factory, so that the calibration module parameters can be obtained quickly in the electronic device in order to calculate the pixel sensor movement distance.
In the embodiment of the disclosure, when the user turns on an image pickup device such as a camera, the pixel-level TOF sensor emits infrared light, obtains the distance information of each pixel in the scene, and records the distance information of the pixels in real time. According to the distance information obtained from the pixel-level TOF, the position to which each sensor pixel point needs to move is calculated from the constructed gradient matrix, and the position information of the sensor is adjusted at the pixel level, so that finally every pixel point can be presented clearly. After the position of the sensor is adjusted, according to the pixel information obtained by the Dual PD, the left PD data and the right PD data are added to obtain the brightness value of the pixel point at that position; the sensor then sets exposure parameters according to the exposure information and completes the raw data map.
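The brightness computation described here, adding the left and right PD data per pixel once the sensor position is adjusted, can be sketched directly; the array shapes are hypothetical:

```python
import numpy as np

def dual_pd_brightness(left_pd, right_pd):
    """Per the description above, after position adjustment the brightness of
    each pixel is the sum of its left and right PD readings; the sensor then
    sets exposure parameters based on this brightness map."""
    return np.asarray(left_pd, dtype=np.float64) + np.asarray(right_pd, dtype=np.float64)
```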
In the embodiment of the disclosure, the distance information from the camera to each pixel point in the shooting scene is obtained, and the position to which each sensor pixel point needs to move is calculated based on the constructed gradient matrix, so that the position of each pixel sensor is adjusted and each pixel point can be presented clearly, bringing the user a shooting experience of higher clarity.
Based on the same conception, the embodiment of the disclosure also provides an image shooting device.
It can be appreciated that, in order to implement the above-mentioned functions, the image capturing device provided in the embodiments of the present disclosure includes hardware structures and/or software modules that perform the respective functions. In combination with the example units and algorithm steps disclosed in the embodiments of the disclosure, the embodiments of the disclosure may be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends upon the particular application and the design constraints imposed on the technical solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementations are not to be considered beyond the scope of the embodiments of the present disclosure.
Fig. 9 is a block diagram of an image photographing device according to an exemplary embodiment. Referring to Fig. 9, the apparatus 100 includes an acquisition unit 101, an adjustment unit 102, and a photographing unit 103.
An obtaining unit 101 is configured to obtain, in response to detecting that the photographing device in the electronic apparatus is turned on, the distance between the photographing device and the object to be photographed, where the object to be photographed is located within the focusing range of the photographing device.
The adjusting unit 102 is configured to adjust a position of the photosensitive element in the photographing device based on the distance and a calibration gradient matrix, where the calibration gradient matrix is used to characterize a correspondence between the distance and the adjustment position of the photosensitive element.
A photographing unit 103 is configured to photograph an image based on the photosensitive element after the position adjustment.
In one embodiment, the adjustment unit 102 determines the calibration gradient matrix as follows. The focusing range of the imaging device is divided into a plurality of equal-division points, and each equal-division point is determined as a target distance. The shooting object is moved to the ith target distance, and the photosensitive element in the shooting device is moved until the brightness value obtained by the photosensitive element at the pixel position corresponding to the calibration template equals the preset brightness threshold; a first correspondence between the position of the photosensitive element and the ith target distance is recorded. The calibration template is determined based on the number of pixels of an image shot by the image pickup device and comprises a plurality of pixel positions. The object to be shot is then moved to the (i+1)th target distance, and the process is repeated until a first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element is obtained. The calibration gradient matrix is determined based on these first correspondences.
In one embodiment, the adjustment unit 102 determines the calibration gradient matrix by fitting the first correspondence between the target distance corresponding to each of the plurality of equal-division points and the position of the photosensitive element, so as to obtain a second correspondence between each focusing distance within the focusing range of the photographing device and the position of the photosensitive element, and uses the second correspondence as the calibration gradient matrix.
In one embodiment, the calibration template is determined based on the number of pixels of an image captured by the image capturing device as follows. Based on the width of the image and the number of pixels included in the row direction of the preset target template, the number X of target templates contained across the width of the image is determined, where X is the value obtained by rounding down the ratio between the width and the number of pixels in the row direction. Based on the height of the image and the number of pixels included in the column direction of the preset target template, the number Y of target templates contained across the height of the image is determined, where Y is the value obtained by rounding down the ratio between the height and the number of pixels in the column direction. The X×Y target templates are used as the calibration template.
In one embodiment, the positions of the photosensitive elements in the photographing device are adjusted based on a voice coil motor, and the photographing device includes a plurality of photosensitive elements.
In one embodiment, the acquiring unit 101 acquires the distance between the photographing device and the object to be photographed by acquiring the distance between the photographing device and each pixel point of the object to be photographed based on the pixel-level distance sensor.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be elaborated here.
Fig. 10 is a block diagram illustrating an apparatus 200 for image capturing according to an exemplary embodiment. For example, apparatus 200 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to Fig. 10, the apparatus 200 may include one or more of a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the apparatus 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interactions between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and the like. The memory 204 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 206 provides power to the various components of the device 200. The power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen providing an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 200 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive external audio signals when the device 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 further includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing assembly 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, an activate button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing status assessment of various aspects of the apparatus 200. For example, the sensor assembly 214 may detect the on/off state of the device 200, the relative positioning of the components, such as the display and keypad of the device 200, the sensor assembly 214 may also detect a change in position of the device 200 or a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in temperature of the device 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate communication between the apparatus 200 and other devices in a wired or wireless manner. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 216 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 204 including instructions executable by the processor 220 of the apparatus 200 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It is understood that the term "plurality" in this disclosure means two or more, and other quantifiers are similar thereto. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates that the associated objects are in an "or" relationship. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that the terms "center," "longitudinal," "transverse," "front," "rear," "upper," "lower," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, as used herein, refer to an orientation or positional relationship based on that shown in the drawings, merely for convenience in describing the present embodiments and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operate in a particular orientation.
It will be further understood that "connected" includes both direct connection where no other member is present and indirect connection where other element is present, unless specifically stated otherwise.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the scope of the appended claims.