CN113727016A - Shooting method and electronic equipment - Google Patents

Shooting method and electronic equipment

Info

Publication number
CN113727016A
Authority
CN
China
Prior art keywords
image
electronic device
magnification
pixel signal
preview image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110662935.6A
Other languages
Chinese (zh)
Inventor
崔瀚涛
于广财
商亚洲
冯思悦
赵玉霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110662935.6A
Publication of CN113727016A
Priority to PCT/CN2021/143949 (published as WO2022262260A1)
Legal status: Pending


Abstract

Translated from Chinese


Embodiments of the present application provide a shooting method and an electronic device, which relate to the technical field of optical imaging and can improve the user experience of the electronic device in a zoom-magnification shooting scene. The method may include: in response to an operation triggering a camera function, launching a camera application of the electronic device; displaying a first preview image whose zoom magnification is a first magnification; receiving a first operation, the first operation instructing the electronic device to adjust the zoom magnification to a second magnification, the second magnification being less than a preset magnification; generating, in a first pixel signal combination mode, a second preview image whose zoom magnification is the second magnification, and displaying the second preview image; in response to an operation triggering the shooting key, determining the brightness of the environment in which the electronic device is shooting; if the brightness is greater than a first preset threshold, generating the captured image in a second pixel signal combination mode; and if the brightness is less than or equal to the first preset threshold, generating the captured image in the first pixel signal combination mode.


Description

Shooting method and electronic equipment
Technical Field
Embodiments of this application relate to the technical field of optical imaging, and in particular to a shooting method and an electronic device.
Background
At present, to meet photographing requirements in different scenes, electronic devices are generally configured with a plurality of cameras. These may include cameras of multiple focal lengths, e.g., short-focus wide-angle cameras, mid-focus cameras, and telephoto cameras. Cameras with different focal lengths correspond to different viewing ranges and zoom magnifications.
When a user takes a picture with an electronic device that has multiple cameras, the electronic device may switch between cameras of different focal lengths (i.e., optical zoom) to capture the image. In addition, the electronic device can adjust the viewing range through digital zoom, a software-based processing mode, to cover more shooting scenarios.
It should be understood that when the electronic device applies digital zoom, as the zoom factor increases, the viewing range displayed by the electronic device becomes a smaller part of the photographed scene, and the field of view (FOV) of the camera gradually decreases. The electronic device therefore needs to crop more pixels from the original-resolution image, and the remaining pixels are scaled back to the original resolution through a software interpolation algorithm. In this process, the image finally generated by the electronic device inevitably loses definition.
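As a rough, hypothetical illustration of where that definition loss comes from, the following Python sketch crops the central 1/zoom region of a frame and scales it back to the original resolution with nearest-neighbor interpolation; the function name, the use of numpy, and the interpolation choice are assumptions for illustration only, not the processing claimed by this application.

```python
import numpy as np

def digital_zoom(image: np.ndarray, zoom: float) -> np.ndarray:
    """Crop the central 1/zoom region and scale it back to the original size.

    A minimal nearest-neighbor sketch of digital zoom; real ISPs use more
    sophisticated interpolation, but the definition loss from discarding
    pixels is the same in principle.
    """
    h, w = image.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)          # size of the cropped region
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    # Map every output pixel back to a source pixel in the crop (nearest neighbor).
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]

# Example: a 3x digital zoom keeps only about 1/9 of the original pixels.
frame = np.random.randint(0, 255, (1080, 1920), dtype=np.uint8)
zoomed = digital_zoom(frame, 3.0)
print(zoomed.shape)  # (1080, 1920), but built from a 360x640 crop
```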
Disclosure of Invention
The present application provides a shooting method and an electronic device, so that the electronic device can reduce the definition loss of a captured image in a zoom-magnification shooting scene, thereby improving the user experience of the electronic device in that scene.
In order to achieve the technical purpose, the following technical scheme is adopted in the application:
in a first aspect, the present application provides a shooting method, which may include: in response to an operation triggering the camera function, a camera application of the electronic device is launched. At this time, the electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. Wherein the first magnification may be a preset magnification. If the preset magnification is 1 time (1 ×), the electronic device displays the first preview image, which is a 1 × image.
After the electronic device receives the first operation, the electronic device may determine a pixel signal combination mode according to a preset rule and generate a second preview image in that mode (e.g., the first pixel signal combination mode or the second pixel signal combination mode), and the electronic device then displays the second preview image. The first operation instructs the electronic device to adjust the zoom magnification to a target magnification (e.g., a second magnification). Thus, the second preview image is an image whose zoom magnification is the target magnification.
For example, the preset rule may be: when the target magnification is a zoom magnification value smaller than 3x, the electronic device forms the preview image (i.e., the second preview image) in the first pixel signal combination mode. When the shooting key is triggered, the electronic device identifies the brightness of its shooting environment; if the brightness is higher than a first preset threshold, the electronic device generates the captured image in the second pixel signal combination mode, and if the brightness is lower than or equal to the first preset threshold, the electronic device generates the captured image in the first pixel signal combination mode.
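The preset rule and the brightness check above amount to a small decision function. The sketch below is a hypothetical summary of that logic; the threshold value, the 3x preset magnification, and the mode names are placeholders taken from the example above, not an actual implementation.

```python
PRESET_MAGNIFICATION = 3.0      # assumed preset magnification (3x)
BRIGHTNESS_THRESHOLD = 100.0    # assumed first preset threshold (arbitrary units)

def preview_mode(zoom: float) -> str:
    """Preview below the preset magnification always uses binning."""
    return "binning" if zoom < PRESET_MAGNIFICATION else "by_brightness"

def capture_mode(zoom: float, brightness: float) -> str:
    """Pixel signal combination mode used when the shooting key is triggered."""
    if zoom < PRESET_MAGNIFICATION and brightness > BRIGHTNESS_THRESHOLD:
        return "remosaic"   # bright scene: full-resolution readout, then crop
    return "binning"        # dim scene: merge pixels for better SNR

print(preview_mode(2.0))                          # binning
print(capture_mode(zoom=2.0, brightness=150.0))   # remosaic
print(capture_mode(zoom=2.0, brightness=60.0))    # binning
```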
In a second aspect, the present application further provides a shooting method, which may include: in response to an operation triggering the camera function, a camera application of the electronic device is launched. The electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. When the electronic device displays the first preview image, if the electronic device receives a first operation, the first operation instructs the electronic device to adjust the zoom magnification to a second magnification, and the second magnification is smaller than a preset magnification. That is, the first operation received by the electronic device indicates a zoom magnification less than the preset magnification; for example, the second magnification is less than 3×.
In this case, the electronic device generates a preview image (i.e., a second preview image) in the first pixel signal combination mode, and the display screen of the electronic device displays the second preview image so that the user can view the real-time preview through the display screen. The zoom magnification corresponding to the second preview image is the second magnification. In response to an operation in which the shooting key is triggered, the electronic device determines the brightness of the shooting environment in which it is located. If the brightness is greater than a first preset threshold, the electronic device generates the captured image in the second pixel signal combination mode; if the brightness is less than or equal to the first preset threshold, the electronic device generates the captured image in the first pixel signal combination mode.
It can be understood that different pixel signal combination modes have different shooting characteristics. Generally, in a zoom shooting scene with insufficient ambient brightness, generating the image in the first pixel signal combination mode improves the light sensitivity of the image sensor and increases the signal-to-noise ratio; in a scene with sufficient ambient brightness, generating the image in the second pixel signal combination mode improves the definition of the zoomed image. That is, in an environment with insufficient brightness it is appropriate to generate the image in the first pixel signal combination mode, and in an environment with sufficient brightness it is appropriate to generate the image in the second pixel signal combination mode. Therefore, when the shooting key of the electronic device is triggered, the brightness of the shooting environment is determined and the pixel signal combination mode is chosen according to that brightness, so that the loss of image definition at zoom magnification is reduced, the generated image is clearer, and the user experience of the electronic device in a zoom-magnification shooting scene is improved.
With reference to the second aspect, in a possible implementation, when the electronic device generates the captured image in the second pixel signal combination mode, the method may specifically include: the electronic device outputs an original image in the second pixel signal combination mode, where the zoom magnification of the original image is a first magnification and the first magnification is greater than the second magnification; the original image is then cropped to generate the captured image, which is an image whose zoom magnification is the second magnification.
In the second pixel signal combination mode, the resolution of the directly output image does not match the resolution corresponding to the second magnification; it is larger than the image corresponding to the second magnification. Therefore, the electronic device crops the original image output in the second pixel signal combination mode, so that the cropped image is an image whose zoom magnification is the second magnification.
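As an illustration of this cropping step, the following sketch cuts a centered window out of the higher-resolution output so that the result matches the resolution expected for the target magnification; the frame sizes and the numpy-based implementation are assumptions for illustration, not the claimed processing.

```python
import numpy as np

def center_crop(raw: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Cut a centered out_h x out_w window out of the full-resolution raw frame.

    The full-resolution readout is larger than the resolution required for the
    target zoom magnification, so the surplus border pixels are discarded here.
    """
    h, w = raw.shape[:2]
    if out_h > h or out_w > w:
        raise ValueError("requested window is larger than the raw frame")
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return raw[top:top + out_h, left:left + out_w]

# Example: a 12000x9000 full-resolution raw cropped to a 4000x3000 capture window.
raw = np.zeros((9000, 12000), dtype=np.uint16)
shot = center_crop(raw, 3000, 4000)
print(shot.shape)  # (3000, 4000)
```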
With reference to the second aspect, in another possible implementation, before the electronic device generates the captured image in the second pixel signal combination mode (i.e., when the brightness is greater than the first preset threshold), the method may further include: the electronic device stops outputting image frames in the first pixel signal combination mode; the electronic device stops displaying the second preview image; and the electronic device switches from the first pixel signal combination mode to the second pixel signal combination mode.
It can be understood that, when the electronic device needs to switch the pixel signal combination mode, it switches from the first pixel signal combination mode to the second pixel signal combination mode. In this process, the electronic device undergoes a stop-and-restart switch: the current first pixel signal combination mode is stopped, and at this point the electronic device stops displaying the second preview image. After the electronic device completes the switch and outputs image frames in the second pixel signal combination mode, the electronic device generates the captured image.
With reference to the second aspect, in another possible implementation, after the electronic device generates the captured image (in the second pixel signal combination mode if the brightness is greater than the first preset threshold, or in the first pixel signal combination mode if the brightness is less than or equal to the first preset threshold), the method may further include: the electronic device generates a third preview image in the first pixel signal combination mode and displays the third preview image.
The electronic device generates the captured image after the shooting key is triggered. After the shooting action is finished, if the electronic device is still in the preview stage, it continues to generate a preview image (namely, the third preview image) in the first pixel signal combination mode.
With reference to the second aspect, in another possible implementation, the electronic device generating the captured image in the first pixel signal combination mode may specifically include: the electronic device outputs multiple image frames in the first pixel signal combination mode; based on a first image fusion algorithm, the electronic device generates the captured image from the multiple image frames. The first image fusion algorithm includes at least one of: a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.
With reference to the second aspect, in another possible implementation, cropping the original image to generate the captured image may specifically include: cropping each original image frame to obtain preprocessed image frames; based on a second image fusion algorithm, the electronic device generates the captured image from the multiple preprocessed image frames. The second image fusion algorithm includes at least one of: a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.
It is understood that, to improve the quality of the image generated in zoom-magnification shooting, the first image fusion algorithm (or the second image fusion algorithm) is employed so that the image quality is higher: for example, the definition of the image is improved and its dynamic range is increased, which improves the user experience of the electronic device in a zoom-magnification shooting scene.
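The simplest member of the algorithm families listed above is multi-frame noise reduction by averaging already-aligned frames; the sketch below illustrates only that idea (no alignment, HDR, or face-based super-resolution) and is a simplified assumption rather than the fusion algorithms actually used.

```python
import numpy as np

def fuse_frames(frames):
    """Average several already-aligned frames: the simplest multi-frame
    noise reduction. Noise shrinks roughly with the square root of the
    number of frames, while static scene content is preserved."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Example: six noisy captures of the same (flat grey) scene.
base = np.full((480, 640), 120, dtype=np.uint8)
frames = [np.clip(base + np.random.normal(0, 10, base.shape), 0, 255).astype(np.uint8)
          for _ in range(6)]
fused = fuse_frames(frames)
print(fused.std() < frames[0].std())  # True: noise in the fused frame is lower
```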
With reference to the second aspect, in another possible implementation, the electronic device includes one or more front cameras and implements the shooting function through the one or more front cameras; the second magnification is a magnification value in a first magnification interval, where the first magnification interval is 0.8× to 1.9×.
Alternatively, the electronic device includes one or more rear cameras and implements the shooting function through the one or more rear cameras; the second magnification is a magnification value in a second magnification interval, where the second magnification interval is 1.0× to 2.9×.
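The two magnification intervals can be read as a small lookup table; the sketch below encodes them for the front and rear cameras. The interval endpoints come from the text above, while the inclusive boundary handling and the names are assumptions.

```python
# Zoom intervals in which the second-aspect method applies
# (values taken from the text above; boundary handling is assumed inclusive).
SECOND_MAGNIFICATION_INTERVALS = {
    "front": (0.8, 1.9),   # first magnification interval, front camera(s)
    "rear":  (1.0, 2.9),   # second magnification interval, rear camera(s)
}

def in_second_magnification_interval(camera: str, zoom: float) -> bool:
    lo, hi = SECOND_MAGNIFICATION_INTERVALS[camera]
    return lo <= zoom <= hi

print(in_second_magnification_interval("rear", 2.0))   # True
print(in_second_magnification_interval("rear", 3.0))   # False
print(in_second_magnification_interval("front", 1.5))  # True
```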
In a third aspect, the present application further provides a shooting method, where the method may include: in response to an operation triggering the camera function, a camera application of the electronic device is launched. The electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. When the electronic device displays the first preview image, if the electronic device receives a first operation, the first operation instructs the electronic device to adjust the zoom magnification to a third magnification, and the third magnification is greater than or equal to a preset magnification. That is, the first operation received by the electronic device indicates a zoom magnification greater than or equal to the preset magnification; for example, the zoom magnification of the electronic device is 3× or more.
The brightness of the shooting environment of the electronic device is then determined. If the brightness is less than or equal to a first preset threshold, a second preview image is generated in the first pixel signal combination mode; if the brightness is greater than the first preset threshold, the second preview image is generated in the second pixel signal combination mode. The zoom magnification corresponding to the second preview image is the third magnification.
When the electronic device displays the preview interface, it determines the brightness of the current shooting environment in real time: if the current shooting environment is determined to be a bright shooting environment, the preview image (i.e., the second preview image) is generated in the second pixel signal combination mode, and if it is determined to be a dim shooting environment, the second preview image is generated in the first pixel signal combination mode. Because the electronic device does this in real time, the loss of image definition at zoom magnification is reduced, the generated image is clearer, and the user experience of the electronic device in a zoom-magnification shooting scene is improved.
In a possible implementation, after the second preview image (whose zoom magnification is the third magnification) is generated in the first pixel signal combination mode because the brightness is less than or equal to the first preset threshold, the method may further include: determining the current brightness of the shooting environment of the electronic device; if the current brightness is greater than the first preset threshold, switching the electronic device from the first pixel signal combination mode to the second pixel signal combination mode; and generating a third preview image in the second pixel signal combination mode, where the zoom magnification corresponding to the third preview image is the third magnification.
While the electronic device displays the preview image (i.e., the second preview image), it can adjust the pixel signal combination mode in real time according to the brightness of the shooting environment. Specifically, if the brightness is greater than the first preset threshold, the electronic device outputs image frames in the second pixel signal combination mode; if the brightness is less than or equal to the first preset threshold, the electronic device outputs image frames in the first pixel signal combination mode. Because the pixel signal combination mode is adjusted in real time according to the brightness, the preview image that the user sees in real time has the same effect as the captured image, which provides a good user experience.
In a third aspect, in another possible implementation, after the second preview image (whose zoom magnification is the third magnification) is generated in the second pixel signal combination mode because the brightness is greater than the first preset threshold, the method may further include: determining the current brightness of the shooting environment of the electronic device; if the current brightness is less than or equal to a second preset threshold, switching the electronic device from the second pixel signal combination mode to the first pixel signal combination mode, where the second preset threshold is smaller than the first preset threshold; and generating a fourth preview image in the first pixel signal combination mode, where the zoom magnification corresponding to the fourth preview image is the third magnification.
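Because the second preset threshold is lower than the first, the two thresholds form a hysteresis band that keeps the preview from flapping between modes when the ambient brightness hovers around a single value. A hypothetical sketch of that state machine follows; the numeric thresholds and the initial mode are assumptions.

```python
HIGH_THRESHOLD = 100.0  # first preset threshold: switch binning -> remosaic above it
LOW_THRESHOLD = 80.0    # second preset threshold (< first): switch back at or below it

class PreviewModeController:
    """Hysteresis between the two pixel signal combination modes for the preview."""

    def __init__(self) -> None:
        self.mode = "binning"   # assumed starting mode for this illustration

    def update(self, brightness: float) -> str:
        if self.mode == "binning" and brightness > HIGH_THRESHOLD:
            self.mode = "remosaic"
        elif self.mode == "remosaic" and brightness <= LOW_THRESHOLD:
            self.mode = "binning"
        return self.mode

ctrl = PreviewModeController()
for b in (90, 120, 95, 85, 70):              # brightness samples over time
    print(b, ctrl.update(b))
# 90 binning, 120 remosaic, 95 remosaic, 85 remosaic, 70 binning
```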
In a third aspect, in another possible implementation, when the electronic device generates the second preview image in the first pixel signal combination mode, the method may further include: in response to the shooting key being triggered, the electronic device generates the captured image in the first pixel signal combination mode. When the electronic device generates the second preview image in the second pixel signal combination mode, the method may further include: in response to the shooting key being triggered, the electronic device generates the captured image in the second pixel signal combination mode.
It can be understood that, when the electronic device generates the preview image in the corresponding pixel signal combination mode, the shooting key is triggered, and the electronic device generates the shot image in the corresponding pixel signal combination mode. That is, the electronic device does not modify the pixel signal combination method at the instant the capture key is triggered.
In a third aspect, in another possible implementation, the method may further include: when the electronic equipment generates a second preview image in a first pixel signal combination mode, the electronic equipment outputs a preview image frame according to a first rate; and when the electronic equipment generates a third preview image by adopting a second pixel signal combination mode, the electronic equipment outputs a preview image frame according to the first rate.
In a third aspect, in another possible implementation, the method may further include: when the electronic equipment generates a second preview image in a second pixel signal combination mode, the electronic equipment outputs a preview image frame according to a first rate; when the electronic equipment generates a fourth preview image by adopting the first pixel signal combination mode, the electronic equipment outputs the preview image frame according to the first rate.
It can be understood that, while the electronic device displays the preview image, it may switch the pixel signal combination mode. If the mode is switched, the rate at which the electronic device outputs preview image frames is the same before and after the switch. In this way, the preview displayed by the electronic device remains stable.
In a third aspect, in another possible implementation, when the third preview image is generated by using the first pixel signal combination method, the method may specifically include: the electronic equipment outputs a first initial image in a first pixel signal combination mode, and generates a third preview image from the first initial image according to a preset image algorithm, so that the brightness of the third preview image is approximate to that of the second preview image.
Because the pixel signal combination mode is switched, the third preview image is generated so that its brightness is close to that of the second preview image; in this way, the brightness of the preview image displayed by the electronic device remains stable and does not change abruptly. If the brightness of the shooting environment has changed substantially, the brightness of the displayed preview image is then raised gradually after the pixel signal combination mode is switched.
In another possible implementation of the third aspect, when generating the fourth preview image by using the second pixel signal combination method, the method includes: and the electronic equipment outputs a second initial image in a second pixel signal combination mode, and generates a fourth preview image from the second initial image according to a preset image algorithm, so that the brightness of the fourth preview image is approximate to that of the second preview image.
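One simple way to keep the displayed brightness stable across such a switch is to apply a digital gain so that the first frame after the switch matches the mean luminance of the last frame before it. The sketch below illustrates that idea only; the actual "preset image algorithm" is not specified in this text, so the approach and names here are assumptions.

```python
import numpy as np

def match_brightness(new_frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale the first frame produced after a mode switch so that its mean
    luminance is approximately that of the last frame before the switch,
    avoiding a sudden jump in the displayed preview."""
    ref_mean = float(reference.mean())
    new_mean = float(new_frame.mean()) or 1.0   # avoid division by zero
    gain = ref_mean / new_mean
    return np.clip(new_frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

before = np.full((480, 640), 140, dtype=np.uint8)   # last preview frame before the switch
after = np.full((480, 640), 90, dtype=np.uint8)     # first frame after the switch, darker
print(match_brightness(after, before).mean())        # ~140
```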
In a third aspect, in another possible implementation, the resolution of the first initial image is a third resolution, which corresponds to the third magnification.
In a possible implementation of the third aspect, the resolution of the second initial image is a fourth resolution, and the fourth resolution is greater than the third resolution; after the electronic device outputs the second initial image in the second pixel signal combination mode, before generating a fourth preview image from the second initial image according to a preset image algorithm, the method may further include: the electronic equipment performs cropping processing on the second initial image so that the resolution of the second initial image is the third resolution.
In a third aspect, in another possible implementation, the electronic device generates the captured image in the first pixel signal combination mode as follows: the electronic device outputs multiple image frames in the first pixel signal combination mode and, based on the first image fusion algorithm, generates the captured image from the multiple image frames. The electronic device generating the captured image in the second pixel signal combination mode may specifically include: the electronic device outputs multiple original image frames in the second pixel signal combination mode, crops each original image frame to obtain multiple cropped image frames, and, based on the second image fusion algorithm, generates the captured image from the multiple cropped image frames.
In a third aspect, in another possible implementation, the first image fusion algorithm includes at least one of: a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm; the second image fusion algorithm likewise includes at least one of: a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.
In a third aspect, in another possible implementation, if the electronic device includes one or more front cameras and implements the shooting function through them, the third magnification is 2.0×; if the electronic device includes one or more rear cameras and implements the shooting function through them, the third magnification is a magnification value in a third magnification interval or a fourth magnification interval.
In a third aspect, in another possible implementation, the third magnification interval is 3.0× to 3.4×, and the fourth magnification interval is 7.0× to 50.0×.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a camera used for collecting images; a display screen used for displaying an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory. When the one or more processors execute the computer programs, the electronic device may perform the following steps: in response to an operation triggering the camera function, a camera application of the electronic device is launched. At this time, the electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. The first magnification may be a preset magnification; if the preset magnification is 1 time (1×), the first preview image displayed by the electronic device is a 1× image.
After the electronic device receives the first operation, the electronic device may determine a pixel signal combination mode according to a preset rule and generate a second preview image in that mode (e.g., the first pixel signal combination mode or the second pixel signal combination mode), and the electronic device then displays the second preview image. The first operation instructs the electronic device to adjust the zoom magnification to a target magnification (e.g., a second magnification). Thus, the second preview image is an image whose zoom magnification is the target magnification.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a camera used for collecting images; a display screen used for displaying an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory. When the one or more processors execute the computer programs, the electronic device may perform the following steps: in response to an operation triggering the camera function, a camera application of the electronic device is launched. The electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. When the electronic device displays the first preview image, if the electronic device receives a first operation, the first operation instructs the electronic device to adjust the zoom magnification to a second magnification, and the second magnification is smaller than a preset magnification; that is, the first operation indicates a zoom magnification less than the preset magnification. The electronic device generates a preview image (i.e., a second preview image) in the first pixel signal combination mode, and the display screen of the electronic device displays the second preview image so that the user can view the real-time preview through the display screen. The zoom magnification corresponding to the second preview image is the second magnification. In response to an operation in which the shooting key is triggered, the electronic device determines the brightness of the shooting environment in which it is located. If the brightness is greater than a first preset threshold, the electronic device generates the captured image in the second pixel signal combination mode; if the brightness is less than or equal to the first preset threshold, the electronic device generates the captured image in the first pixel signal combination mode.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a camera used for collecting images; a display screen used for displaying an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory. When the one or more processors execute the computer programs, the electronic device may perform the following steps: in response to an operation triggering the camera function, a camera application of the electronic device is launched. The electronic device displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification. When the electronic device displays the first preview image, if the electronic device receives a first operation, the first operation instructs the electronic device to adjust the zoom magnification to a third magnification, and the third magnification is greater than or equal to a preset magnification; that is, the first operation indicates a zoom magnification greater than or equal to the preset magnification.
The brightness of the shooting environment of the electronic device is then determined. If the brightness is less than or equal to a first preset threshold, a second preview image is generated in the first pixel signal combination mode; if the brightness is greater than the first preset threshold, the second preview image is generated in the second pixel signal combination mode. The zoom magnification corresponding to the second preview image is the third magnification.
In a seventh aspect, the present application further provides an electronic device, including: the camera is used for collecting images; the display screen is used for displaying an interface; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the method of taking a picture in the first aspect and any of its possible designs.
In an eighth aspect, the present application further provides an electronic device, including: the camera is used for collecting images; the display screen is used for displaying an interface; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the method of taking a picture in the second aspect and any of its possible designs.
In a ninth aspect, the present application further provides an electronic device, comprising: the camera is used for collecting images; the display screen is used for displaying an interface; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the method of taking a picture in the third aspect and any of its possible designs.
In a tenth aspect, the present application further provides a computer-readable storage medium including computer instructions which, when run on a computer, cause the computer to perform the photographing method in the first aspect, the second aspect, the third aspect, or any possible design manner thereof.
In an eleventh aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform the method performed by the electronic device in the first aspect, the second aspect, the third aspect, and any possible design thereof.
In a twelfth aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is configured to receive signals from a memory of the electronic device and send the signals to the processor, the signals including computer instructions stored in the memory. When the computer instructions are executed by the processor, the electronic device performs the method of the first aspect, the second aspect, the third aspect, or any possible design thereof described above.
It is to be understood that, for the advantageous effects achieved by the electronic device of the fourth aspect, the electronic device of the fifth aspect, the electronic device of the sixth aspect, the electronic device of the seventh aspect, the electronic device of the eighth aspect, the electronic device of the ninth aspect, the computer-readable storage medium of the tenth aspect, the computer program product of the eleventh aspect, and the chip system of the twelfth aspect provided by the present application, reference may be made to the advantageous effects of the first aspect, the second aspect, the third aspect, and any possible design manner thereof; details are not described herein again.
Drawings
Fig. 1A is a schematic diagram of a pixel signal combination method according to an embodiment of the present disclosure;
fig. 1B is a schematic diagram of another pixel signal combination method according to an embodiment of the present disclosure;
fig. 2 is an interface schematic diagram of a shooting method according to an embodiment of the present disclosure;
fig. 3 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic interface diagram of another shooting method provided in the embodiment of the present application;
fig. 5 is a schematic interface diagram of another shooting method provided in the embodiment of the present application;
fig. 6 is a schematic diagram of a method for switching the pixel signal combination mode of an image sensor according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of another method for switching the pixel signal combination mode of an image sensor according to an embodiment of the present disclosure;
Fig. 8 is a schematic view of a camera structure of an electronic device according to an embodiment of the present application;
fig. 9 is a flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 10 is a flowchart of another shooting method provided in the embodiment of the present application;
fig. 11 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
In order to facilitate understanding of the schemes provided by the embodiments of the present application, some terms related to the embodiments of the present application will be explained below:
first pixel signal combination method (Binning): in the process of shooting an image by the electronic equipment, light reflected by a target object is collected by the camera, so that the reflected light is transmitted to the image sensor. The image sensor comprises a plurality of photosensitive elements, each photosensitive element collects charges as a pixel, and analog merging (Binning) operation is carried out on pixel information. Specifically, Binning can merge n × n pixels into one pixel. For example, Binning can synthesize adjacent 3 × 3 pixels into one pixel, that is, the color of the adjacent 3 × 3 pixels is presented in the form of one pixel.
For ease of understanding, the first pixel signal combination method may be referred to as a "first pixel arrangement method", a "first pixel combination method", a "first image readout mode", and the like.
Exemplarily, fig. 1A is a schematic diagram illustrating the process of reading out an image in the Binning manner after the electronic device acquires the image. Fig. 1A (a) is a schematic diagram of 6 × 6 pixels in which adjacent 3 × 3 pixels are synthesized into one pixel, and fig. 1A (b) is a schematic diagram of the pixels read out by Binning. For example, the 3 × 3 pixels in the 01 region in fig. 1A (a) are formed by Binning into the pixel G in fig. 1A (b); the 3 × 3 pixels in the 02 region are formed into the pixel B; the 3 × 3 pixels in the 03 region are formed into the pixel R; and the 3 × 3 pixels in the 04 region are formed into the pixel G.
Here, take as an example that the output image is in Bayer format; a Bayer-format image is an image that includes only red, blue, and green (i.e., the three primary colors). For example, a pixel A formed by the 3 × 3 pixels in the 01 region is red, a pixel B formed by the 3 × 3 pixels in the 02 region is green, a pixel C formed by the 3 × 3 pixels in the 03 region is green, and a pixel D formed by the 3 × 3 pixels in the 04 region is blue.
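The 3 × 3 merge described above can be pictured as block-wise combination over one colour plane of the raw array. The numpy sketch below uses simple averaging, which is an assumption; real sensors may sum or weight the photosite charges differently.

```python
import numpy as np

def binning_3x3(raw: np.ndarray) -> np.ndarray:
    """Merge every 3x3 block of same-colour pixels into one output pixel.

    `raw` is assumed to be a single colour plane whose height and width are
    multiples of 3; averaging the block is one simple way to combine the
    charges of the nine photosites.
    """
    h, w = raw.shape
    blocks = raw.reshape(h // 3, 3, w // 3, 3).astype(np.float32)
    return blocks.mean(axis=(1, 3)).astype(raw.dtype)

raw = np.arange(36, dtype=np.uint16).reshape(6, 6)   # the 6x6 example of fig. 1A
print(binning_3x3(raw).shape)  # (2, 2): four merged pixels, as in fig. 1A (b)
```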
Second pixel signal combination mode (Remosaic): when the image is read out in a Remosaic mode, pixels are rearranged into a Bayer pattern image. For example, assuming that a pixel in an image is composed of n × n pixels, n × n pixels can be rearranged in the image using Remosaic. For convenience of understanding, the second pixel signal combination method may also be referred to as a "second pixel arrangement method", a "second pixel combination method", a "second image reading mode", and the like.
Illustratively, as shown in fig. 1B, (a) is a schematic diagram of pixels, each pixel being synthesized by adjacent 3 × 3 pixels. Fig. 1B (B) is an image schematic of a Bayer pattern read out in a Remosaic method. Specifically, in fig. 1B (a), the pixel a is red, the pixels B and C are green, and the pixel D is blue. Each pixel in (a) in fig. 1B is divided into 3 × 3 pixels and rearranged respectively. That is, the readout is performed by using Remosaic method, and the read image is an image in Bayer format shown in fig. 1B (B).
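A faithful Remosaic interpolates missing colour samples; the crude sketch below only illustrates the rearrangement idea by copying, for every output position, the nearest photosite that sensed the colour required by a standard RGGB Bayer pattern. The nona-cell layout, the nearest-neighbour shortcut, and all names are assumptions for illustration, not the actual Remosaic algorithm.

```python
import numpy as np

BAYER = np.array([["R", "G"],
                  ["G", "B"]])                      # target 2x2 Bayer tile (RGGB)

def nona_color_map(h, w):
    """Colour actually sensed at each photosite of a nona-cell sensor:
    3x3 blocks of a single colour, with the blocks arranged in an RGGB layout."""
    return BAYER[(np.arange(h) // 3 % 2)[:, None], (np.arange(w) // 3 % 2)[None, :]]

def remosaic(raw: np.ndarray) -> np.ndarray:
    """Crude remosaic sketch: for every output position, copy the value of the
    nearest photosite that sensed the colour required by the standard Bayer
    pattern. Real remosaic interpolates; this is only an illustration."""
    h, w = raw.shape
    sensed = nona_color_map(h, w)
    target = BAYER[(np.arange(h) % 2)[:, None], (np.arange(w) % 2)[None, :]]
    out = np.zeros_like(raw)
    offsets = sorted(((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
                     key=lambda o: o[0] ** 2 + o[1] ** 2)
    for y in range(h):
        for x in range(w):
            for dy, dx in offsets:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w and sensed[yy, xx] == target[y, x]:
                    out[y, x] = raw[yy, xx]
                    break
    return out

raw = np.arange(36, dtype=np.uint16).reshape(6, 6)   # the 6x6 example of fig. 1B
print(remosaic(raw))                                  # per-pixel RGGB Bayer image
```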
It can be understood that increasing the zoom factor while shooting an image may affect the definition of the image. In a medium- or low-brightness shooting scene, synthesizing a plurality of pixels into one pixel in the Binning manner improves the light-sensing performance of the image sensor and increases the signal-to-noise ratio. In a high-brightness shooting scene, rearranging the pixels into a Bayer-format image in the Remosaic manner improves the definition of the image.
In some implementations, during the capturing of the image by the electronic device, the display screen of the electronic device displays the preview image in real time in response to the zoom operation. The preview image is a target object image captured by the electronic device after the zoom operation.
For example, when the electronic device photographs a target object, the electronic device displays a preview image 201 as shown in fig. 2 (a). At this time, the zoom magnification of the electronic device is "1×" (1 time), i.e., no zoom. The electronic device receives a zoom operation, which is a gesture of the user's two fingers sliding outward, as shown in fig. 2 (a). In response to receiving the zoom operation, the electronic device displays a preview interface 202 as shown in fig. 2 (b). At this time, the electronic device displays the zoom magnification "3×". The preview image 202 is the portion of the un-zoomed preview image 201 that remains in view after the electronic device responds to the zoom operation.
Note that the above process is an example of increasing the zoom factor. If the zoom operation instead decreases the magnification, the preview image 202 may include more content than the preview image 201.
Specifically, when the electronic device displays a preview image, the image can be read out by Binning. When the shooting key is triggered, the electronic device determines the current shooting brightness: if the electronic device determines that the brightness of the shooting scene is greater than a preset threshold, it switches to the Remosaic manner to read out the image; if the electronic device determines that the brightness of the shooting scene is less than or equal to the preset threshold, it reads out the image in the Binning manner and displays the captured image.
When the shooting brightness of the scene is greater than the preset threshold and the electronic device switches the pixel signal combination mode, the electronic device needs to change the image sensor configuration (sensor setting). During the switch, the image sensor must first stop the current pixel signal combination, so the preview image displayed by the electronic device freezes; only after the switch is finished can the electronic device display the captured image. In this process the electronic device stutters, which degrades the user experience.
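The stop-and-restart sequence described above can be pictured with the hypothetical control flow below; all class and method names are illustrative placeholders rather than a real camera or sensor API. Between stop_stream() and the first frame after start_stream(), no preview frames arrive, which is what the user perceives as a stall.

```python
import time

class ImageSensor:
    """Toy stand-in for an image sensor driver; names are illustrative only."""

    def __init__(self):
        self.mode = "binning"
        self.streaming = True

    def stop_stream(self):
        self.streaming = False            # preview frames stop arriving here

    def apply_sensor_setting(self, mode):
        time.sleep(0.1)                   # reconfiguration takes a noticeable time
        self.mode = mode

    def start_stream(self):
        self.streaming = True             # frames resume in the new mode

def switch_to_remosaic(sensor: ImageSensor):
    sensor.stop_stream()                  # preview freezes from this point ...
    sensor.apply_sensor_setting("remosaic")
    sensor.start_stream()                 # ... until streaming resumes here

sensor = ImageSensor()
switch_to_remosaic(sensor)
print(sensor.mode, sensor.streaming)      # remosaic True
```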
The embodiment of the application provides a shooting method that can improve the definition of a captured image during zoom shooting and, while doing so, increase the signal-to-noise ratio of the image and improve the image quality.
Embodiments of the present application will be described below with reference to the drawings.
The shooting method provided by the embodiment of the present application can be applied to an electronic device having a plurality of cameras. The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), a professional camera (e.g., a single-lens reflex camera, a card-type camera, etc.), a sports camera (GoPro), an augmented reality (AR)/virtual reality (VR) device, etc.; the embodiment of the present application does not specifically limit the form of the electronic device.
Please refer to fig. 3, which is a schematic diagram of a hardware structure of the electronic device 100 according to an embodiment of the present disclosure. As shown in fig. 3, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. Wherein the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect earphones and play audio through them. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A converts an audio electric signal into a sound signal. The receiver 170B is configured to convert the audio electrical signal into a sound signal. The microphone 170C converts a sound signal into an electric signal. The electronic device 100 may be provided with at least one microphone 170C.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor is used for sensing a pressure signal and converting the pressure signal into an electric signal. In some embodiments, the pressure sensor may be disposed on the display screen 194. There are many types of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position based on the detection signal of the pressure sensor. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions.
The gyro sensor may be used to determine the motion pose of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by a gyroscope sensor. The gyro sensor may be used for photographing anti-shake. For example, when the shutter is pressed, the gyroscope sensor detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor can also be used for navigation and body feeling game scenes. If the motion postures of the electronic device 100 are different, the images captured by the cameras on the electronic device 100 are also different.
The air pressure sensor is used for measuring air pressure.
The magnetic sensor includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using a magnetic sensor. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor. Then, features such as automatic unlocking upon flipping open can be set according to the detected opening or closing state of the holster or of the flip cover.
The acceleration sensor may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor is used for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may use the distance sensor to measure distance so as to achieve fast focusing.
The proximity light sensor may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there are no objects near the electronic device 100. The electronic device 100 can utilize the proximity light sensor to detect that the user holds the electronic device 100 close to the ear for talking, so as to automatically turn off the screen to save power. The proximity light sensor can also be used in a holster mode and a pocket mode to automatically unlock and lock the screen.
The ambient light sensor is used for sensing the ambient light brightness. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor can also be used to automatically adjust the white balance when taking a picture. The ambient light sensor may also cooperate with the proximity light sensor to detect whether the electronic device 100 is in a pocket to prevent inadvertent contact.
The fingerprint sensor is used for collecting fingerprints. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor is used for detecting temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
Touch sensors, also known as "touch devices". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor may acquire a vibration signal. In some embodiments, the bone conduction sensor may acquire a vibration signal of a human voice vibrating a bone mass. The bone conduction sensor can also contact the pulse of the human body to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor may also be disposed in a headset, integrated into a bone conduction headset. Theaudio module 170 may analyze a voice signal based on the vibration signal of the bone block vibrated by the sound part obtained by the bone conduction sensor, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signals acquired by the bone conduction sensor, and a heart rate detection function is realized.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication.
In the embodiment of the present application, the electronic device 100 may include cameras with different focal lengths, and the image sensor included in the electronic device 100 may be a Quadra sensor (a four-in-one image sensor, that is, an image sensor with a larger pixel array). When capturing an image, the electronic apparatus 100 determines the zoom factor according to a user operation, and at the same time, the electronic apparatus 100 may determine the brightness of the current capturing environment according to a brightness sensor (e.g., an ambient light sensor). In this way, the electronic device 100 can adjust the pixel signal combination mode of the image sensor according to the zoom multiple and the brightness of the current shooting scene, for example, reading out the image by Binning or by Remosaic. In this manner, the sharpness of a captured image can be improved during zoom capturing.
The shooting method provided by the embodiment of the present application will be described below by taking the electronic device with the structure shown in fig. 3 as a mobile phone, where the mobile phone includes a touch screen. The touch screen of the mobile phone can comprise a display panel and a touch panel. The display panel can display an interface, and the touch panel can detect a touch operation of a user and report the touch operation to the mobile phone processor for corresponding processing.
In some embodiments, when the user uses the mobile phone to capture an image, the user may instruct the mobile phone to start the camera application by a touch operation, a key operation, a gesture operation, or a voice operation. After the camera application is started, the mobile phone can automatically enter a shooting mode such as a shooting mode and a video recording mode, and a preview interface is displayed.
Illustratively, as shown in fig. 4 (a), the user may instruct the mobile phone to open the camera application by clicking the "camera" application icon 401, and the mobile phone runs the camera application, displaying a shooting interface as shown in fig. 4 (b). Or, when the mobile phone is in the screen lock state, the user may instruct the mobile phone to start the camera application through a gesture of sliding rightward on the screen of the mobile phone, and the mobile phone may also display the shooting interface shown in fig. 4 (b). Or, the mobile phone is in a screen locking state, the screen locking interface includes a camera application icon, and the user instructs the mobile phone to open the camera application by clicking the camera application icon, so that the mobile phone displays the shooting interface shown in (b) in fig. 4. Or when the mobile phone runs other applications, the application has the authority of calling the camera application. The user instructs the mobile phone to start the camera application by clicking the corresponding control, and then the mobile phone displays the shooting interface shown in (b) in fig. 4. For example, when the mobile phone is running an instant messaging application, the user may instruct the mobile phone to start the camera application by selecting the control of the camera function.
Specifically, as shown in fig. 4 (b), the preview interface of the camera includes a finder frame 402, a shooting control, and function controls. The function controls include: large aperture, portrait, photograph, video, and the like. In some implementations, a zoom magnification indication 403 may also be included in the shooting interface. Generally, the default zoom factor of the mobile phone is a basic multiple, namely "1 ×". The zoom factor can be understood as the zoom/magnification factor of the focal length of the current camera relative to the reference focal length. When the mobile phone displays the preview interface, at least one camera may be in a working state. The reference focal length is usually the focal length of the main camera of the mobile phone.
Illustratively, the mobile phone includes a wide-angle camera, a middle-focus camera and a long-focus camera. Assuming that the relative position of the target object and the mobile phone is unchanged, the focal length of the wide-angle camera is the minimum, the field angle is the maximum, and the size of the target object in the shot image is the minimum. The focal length of the middle-focus camera is larger than that of the wide-angle camera, the field angle is smaller than that of the wide-angle camera, and the size of a target object in a shot image is larger than that of the wide-angle camera. The long-focus camera has the largest focal length and the smallest field angle, and the size of the target object in the shot image is the largest.
The field angle is used for indicating the maximum angle which can be shot by the camera in the process of shooting the target object by the mobile phone. If the target object is in the maximum angle range which can be shot by the camera, the illumination reflected by the target object can be collected by the camera, so that the image of the target object is displayed in the preview image displayed by the mobile phone. If the target object is out of the maximum angle range which can be shot by the camera, the illumination reflected by the target object cannot be collected by the camera, and thus, the image of the target object cannot appear in the preview image displayed by the mobile phone. Generally, the larger the angle of view of a camera, the larger the shooting range when using the camera. The smaller the angle of view of the camera, the smaller the shooting range when using the camera. It is understood that the field angle may also be referred to as a "field of view range," "field of view region," or the like.
It should be noted that, in the process of taking an image, the mobile phone may use any one of the cameras as a main camera, and use the focal length of the main camera as a reference focal length.
For example, when a mobile phone is running a camera application, the mobile phone uses the middle-focus camera as the main camera, the focal length of the main camera is set as the reference focal length, and the zoom magnification is "1 ×". If the mobile phone simultaneously uses the long-focus camera and the middle-focus camera to acquire images, the multiple of the focal length of the long-focus camera relative to the focal length of the main camera (namely the middle-focus camera) is used as the zoom magnification of the long-focus camera. Assume that the focal length of the tele camera is 5 times the focal length of the main camera, i.e., the zoom factor of the tele camera is "5 ×". When the mobile phone is capturing an image, the captured image can be digitally zoomed according to the user operation, and thus the image captured using the telephoto camera can correspond to a range of zoom magnifications, such as "5 ×" to "25 ×".
For another example, the mobile phone simultaneously uses the wide-angle camera (short-focus camera) and the main camera (i.e., the middle-focus camera) to obtain an image, and the multiple of the focal length of the short-focus camera relative to the focal length of the main camera can be used as the zoom magnification of the short-focus camera. Assume that the focal length of the short-focus camera may be 0.5 times the focal length of the main camera, i.e., the zoom magnification of the short-focus camera is "0.5 ×". Based on this, when an image is captured by using the short-focus camera, digital zooming is performed, so that the image captured by using the short-focus camera can correspond to a range of zoom magnifications, such as "0.5 ×" to "1 ×".
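As a minimal illustration of the relationship described above, the following Python sketch derives a camera's zoom magnification from the ratio of its focal length to the main camera's focal length. The specific focal-length values are assumptions chosen only for illustration and are not values defined in this application.

```python
def zoom_magnification(camera_focal_length_mm: float, main_focal_length_mm: float) -> float:
    """Zoom magnification of a camera relative to the main (reference) camera."""
    return camera_focal_length_mm / main_focal_length_mm

# Assumed 35 mm equivalent focal lengths, for illustration only.
main_focal = 27.0              # middle-focus (main) camera -> "1x"
tele_focal = 5 * main_focal    # long-focus camera -> "5x"
wide_focal = 0.5 * main_focal  # short-focus (wide-angle) camera -> "0.5x"

print(zoom_magnification(tele_focal, main_focal))   # 5.0
print(zoom_magnification(wide_focal, main_focal))   # 0.5
```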
In some embodiments, as shown in fig. 5 (a), when the mobile phone runs the camera application to display a shooting interface, the user may reduce the zoom magnification used by the mobile phone by making a gesture of double-finger (or three-finger) pinch on the display screen of the mobile phone; or, the user can make a gesture that two fingers (or three fingers) slide outwards on the display screen of the mobile phone, namely the gesture is opposite to the direction of pinching, so that the distance between the fingers is increased, and the zoom magnification used by the mobile phone is increased.
As shown in fig. 5 (b), the mobile phone includes a scale 501 for indicating the current zoom magnification when displaying the shooting interface. The user can drag an arrow in the scale in the shooting interface to adjust the zoom magnification used by the mobile phone.
In addition, the user can change the zoom magnification of the mobile phone by switching the current shooting mode in the shooting interface. For example, the mobile phone is currently in a photographing mode, the mobile phone can capture an image by using a middle-focus camera, and if the user selects to switch to a portrait mode, the mobile phone is switched to a short-focus camera, and the zoom magnification is increased.
Illustratively, a mobile phone runs a camera application, and the mobile phone reads out an image in a Binning manner, and takes the read-out image as a preview image. The mobile phone receives zoom operation input by a user, and in response to the zoom operation, the mobile phone displays a preview image corresponding to the zoom magnification. Assuming that the zoom factor of the cell phone is "3 x" in response to the zoom operation, the cell phone image sensor setting is as shown in fig. 6. The mobile phone reads out the image in a Binning mode, and the processed image is presented on a display screen as a preview image. The zoom factor is "3 ×", the image pixels read out by the cell phone are 4000 × 3000, i.e., the preview image is 4000 × 3000, and read out in Binning. When a shooting key of the mobile phone is triggered, the mobile phone captures an image and reads the image in a Remosaic mode. As shown in fig. 6, the first switching process of the image sensor: during image capturing, the image sensor of the mobile phone can read out 12000 × 9000 images in a Remosaic manner. Since the mobile phone outputs an image in a scene with a zoom factor of "3 ×", the mobile phone performs cropping (i.e., crop operation) on the image read out in the Remosaic mode, so that the image becomes an image read out in the Remosaic mode with pixels 4000 × 3000. Further, the second switching of the image sensor: an image of pixel 4000 × 3000, read out in Remosaic, can be converted to an image of 4000 × 3000, read out in Binning. Thus, the 4000 × 3000 image can be displayed on the display of the mobile phone and read out by Binning.
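The pixel arithmetic behind the two readout paths in fig. 6 can be sketched as follows. This is an illustrative approximation only: real Binning and Remosaic operate on the Quadra Bayer color mosaic inside the sensor, whereas the sketch below simply averages 3 × 3 blocks and takes a centered crop on a single-channel array, to show why both paths end at 4000 × 3000 when the zoom magnification is "3 ×". The array sizes and function names are assumptions for illustration.

```python
import numpy as np

SENSOR_H, SENSOR_W = 9000, 12000   # full-resolution sensor size used in this example

def binning_readout(raw: np.ndarray, factor: int = 3) -> np.ndarray:
    """Approximate Binning: merge each factor x factor pixel block into one pixel."""
    h, w = raw.shape
    return raw.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def remosaic_crop_readout(raw: np.ndarray, zoom: float) -> np.ndarray:
    """Approximate Remosaic + crop: keep full resolution, then cut the centered
    region whose field of view corresponds to the requested zoom magnification."""
    h, w = raw.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return raw[top:top + ch, left:left + cw]

raw = np.zeros((SENSOR_H, SENSOR_W))
print(binning_readout(raw).shape)           # (3000, 4000): preview-sized image
print(remosaic_crop_readout(raw, 3).shape)  # (3000, 4000): full-detail crop at 3x
```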
In some embodiments, after the mobile phone receives the zoom operation, the mobile phone may monitor the brightness in the current shooting environment in real time during the preview. And adjusting the pixel signal combination mode according to the current zoom multiple and the brightness value in the environment. Take an example in which the mobile phone receives a zoom operation, and the zoom operation instructs the mobile phone to display an image at a zoom magnification of "3 ×".
As shown in fig. 7, the mobile phone currently reads out an image by Binning at the time of preview, taking as an example that the mobile phone reads out an image of 4000 × 3000 by Binning in response to a zoom operation. If it is determined that the brightness in the current shooting environment exceeds the preset threshold, the mobile phone switches the current pixel signal combination mode from Binning to Remosaic. Certainly, when the image is read out in the Remosaic manner, the image needs to be cropped to obtain an image corresponding to the zoom multiple, that is, the mobile phone obtains the 4000 × 3000 image in the Remosaic + crop manner. If it is then determined that the brightness in the shooting environment has changed and become less than the preset threshold, the mobile phone switches the current pixel signal combination mode from Remosaic + crop back to Binning.
It should be mentioned that, since the mobile phone can adjust the pixel signal combination mode in real time according to the brightness in the current shooting environment, when the shooting key is triggered, the mobile phone still obtains the shot image according to the corresponding pixel signal combination mode. As shown in fig. 7, when the mobile phone reads a 4000 × 3000 image by Binning and presents the image as a preview image on the display of the mobile phone, the shooting key is triggered, and the mobile phone still reads the image by Binning to generate a 4000 × 3000 shot image. When the mobile phone reads out the 4000 x 3000 image in the mode of Remosaic + crop and displays the image on the display screen of the mobile phone as a preview image, the shooting key is triggered, and the mobile phone still reads out the image in the mode of Remosaic + crop to generate the 4000 x 3000 shooting image.
The brightness in the current shooting environment exceeds the preset threshold, which may indicate that the brightness in the current shooting environment is greater than the preset threshold, or indicate that the brightness in the current shooting environment is greater than or equal to the preset threshold. The brightness in the current shooting environment is less than the preset threshold, which may indicate that the brightness in the current shooting environment is less than or equal to the preset threshold, or that the brightness in the current shooting environment is less than the preset threshold.
It should be noted that the brightness threshold of the mobile phone from the high-brightness environment to the dark-light environment may be a first threshold, that is, the mobile phone determines that the brightness of the environment is less than the first threshold, and the mobile phone switches from the Remosaic + crop mode to the Binning mode to read out the image. The brightness threshold of the mobile phone from the dark light environment to the highlight environment can be a second threshold, namely the mobile phone determines that the ambient brightness exceeds the second threshold, and the mobile phone switches from the Binning mode to the Remosaic + crop mode to read out the image.
For example, the second threshold may be greater than the first threshold. That is, the second threshold is different from the first threshold: when the mobile phone determines that the brightness of the shooting environment exceeds the second threshold, the mobile phone switches the pixel signal combination mode; and only when the mobile phone determines that the brightness of the shooting environment falls below the first threshold (which is lower than the second threshold) does the mobile phone switch the pixel signal combination mode back. Therefore, the image sensor of the mobile phone can be protected, the pixel signal combination mode is prevented from being switched back and forth (also called ping-pong switching of the pixel signal combination mode), and stable operation of the mobile phone photographing system is maintained.
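A minimal sketch of this two-threshold ("anti ping-pong") switching logic is given below. The threshold values and the brightness unit are assumptions for illustration, not values defined in this application.

```python
class ReadoutModeSelector:
    """Hysteresis between Binning and Remosaic + crop, assuming second_threshold > first_threshold."""

    def __init__(self, first_threshold: float = 80.0, second_threshold: float = 120.0):
        self.first_threshold = first_threshold    # bright -> dark switch point
        self.second_threshold = second_threshold  # dark -> bright switch point
        self.mode = "binning"                     # start in the dark-light mode

    def update(self, brightness: float) -> str:
        if self.mode == "binning" and brightness > self.second_threshold:
            self.mode = "remosaic_crop"
        elif self.mode == "remosaic_crop" and brightness < self.first_threshold:
            self.mode = "binning"
        # Brightness values between the two thresholds keep the current mode,
        # so small fluctuations do not toggle the sensor back and forth.
        return self.mode
```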
In this embodiment, when the mobile phone switches the sensor setting, the mobile phone may keep the rate of the image sensor outputting the image frames unchanged. For example, the image sensor outputs images at a rate of 30 frames/sec, and the mobile phone may always keep outputting images at a rate of 30 frames/sec regardless of the manner in which the pixel signals are combined or whether the photographing key is activated. In this case, when the sensor setting is performed, the mobile phone reads a 4000 × 3000 image by Binning, and after the switching of the pixel signal combination method is completed, the mobile phone switches to read a 4000 × 3000 image by Remosaic + crop.
It is worth noting that the mobile phone collects the illumination intensity in the current shooting environment in real time, and after the zoom multiple is determined, the pixel signal combination mode of the image sensor is adjusted according to the illumination intensity. That is, whether the image sensor adopts Binning or Remosaic, the light-sensing behavior differs between the pixel signal combination modes. Therefore, when the image sensor switches the pixel signal combination mode, the mobile phone can adopt a preset image algorithm to ensure that the color reproduction of the image remains high and the brightness of the previewed image remains uniform during the switch.
Illustratively, the preset algorithm may be the 3A technique, where 3A refers collectively to auto focus (AF), auto exposure (AE), and auto white balance (AWB). Specifically, when the image sensor switches the pixel signal combination mode, the AF, AE, and AWB algorithms of the 3A technique can be used to maximize image contrast, improve overexposure or underexposure of the main shot object, and compensate for the chromatic aberration of the picture under different illumination. Therefore, the problem of flickering of the preview image that may occur while the image sensor switches the pixel signal combination mode is alleviated. By adopting the 3A technique, the brightness of the preview picture displayed on the screen changes more smoothly and naturally during the switching of the pixel signal combination mode.
It is worth mentioning that when the mobile phone reads out the image in the Remosaic manner, the definition of the output image can be improved. In this way, the mobile phone reads out an image in the Remosaic manner and cuts out the read-out image so that the image conforms to the zoom magnification. When the shooting key is triggered in this pixel signal combination mode, the mobile phone obtains images in the Remosaic + crop manner, can acquire multiple frames of images, generates a shot image according to the multiple frames of images, and displays the shot image on the mobile phone. The multiple frames of images can improve the signal-to-noise ratio of the shot image, so that the details in the shot image are enhanced and the shot image has a High-Dynamic Range (HDR) effect.
That is, images of a plurality of frames are captured at different exposure times, and a captured image is obtained from the images of the plurality of frames. The final HDR image (i.e. the shot image) is synthesized by using the detail image corresponding to each exposure time, so that the visual effect in the real environment can be better reflected. For example, an image obtained by a Remosaic + crop mode is a clearer image relative to an image obtained by a Binning mode, and a mobile phone synthesizes three images obtained by continuous exposure shooting into one image, so that the detailed expression of the image can be improved.
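The multi-frame HDR idea can be sketched as a per-pixel weighted blend of frames captured at different exposure times, where each pixel is weighted by how well exposed it is. The weighting function below is a common, generic choice and is an assumption on our part; the application does not specify the fusion algorithm, so this is only an illustrative sketch.

```python
import numpy as np

def fuse_exposures(frames: list[np.ndarray]) -> np.ndarray:
    """Blend frames (float arrays in [0, 1]) so each pixel favors well-exposed frames."""
    stack = np.stack(frames).astype(np.float64)
    # Well-exposedness weight: highest near mid-gray, low for blown or crushed pixels.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Three illustrative exposures of the same scene (short, normal, long).
short, normal, long_exp = (np.random.rand(8, 8) * s for s in (0.4, 0.8, 1.0))
hdr = fuse_exposures([short, normal, long_exp])
```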
Specifically, different sensor setting strategies may be set at different zoom magnifications. After the mobile phone acquires the image, the image can be processed by using different algorithms. After the mobile phone acquires the zooming operation, the front camera and the rear camera in the mobile phone can set different sensor setting strategies and image processing algorithms.
In one possible implementation, the handset includes three rear cameras and two front cameras. As shown in fig. 8 (a), the front cameras include a first front camera 801 and a second front camera 802; as shown in fig. 8 (b), the rear cameras include a first rear camera 803, a second rear camera 804, and a third rear camera 805.
For example, the configuration of the three rear cameras may be: the first rear camera is the main camera (middle-focus camera), with a 108M specification (108 million pixels), and adopts a 3 × 3 Quadra sensor; the second rear camera is an ultra-wide-angle camera with an 8M specification (8 million pixels); the third rear camera is a tele camera, with a 64M specification (64 million pixels), and adopts a 2 × 2 Quadra sensor with 3.5 × tele. The optical zoom magnification of the tele camera is 3.5 ×; that is, at the 35 mm equivalent focal length, the focal length of the third rear camera is 3.5 times the focal length of the main camera (the first rear camera).
The configuration of the two front cameras may be: the first front camera is a main camera, the specification is 32M, and the camera adopts a 2x2 quad sensor; the second front camera is a wide-angle camera with the specification of 12M.
The preset image processing algorithm in the mobile phone comprises the following steps: multi-frame noise reduction (MFNR), High Dynamic Range (HDR), High Dynamic Super Resolution (HDSR), Face-based super resolution (Face super resolution), and the like.
MFNR: and realizing image noise reduction according to the multi-frame images. Specifically, similar pixel points of a current pixel are searched in a frame of image, and weighted average operation is performed on the pixel points, so that the similar pixels have the same display effect, and image noise is reduced. HDR: when multi-frame image fusion can be realized, the pixel contrast of the generated image is improved, so that the dynamic range of the image is improved. HDSR: and realizing a super-resolution algorithm on the basis of high dynamic range fusion. FACSERR: the super-resolution algorithm based on the human face can enable the human face on the image to be clearer by enhancing the human face boundary, so that the definition of the whole image is improved.
When the mobile phone uses a rear camera to collect images, different zoom multiples correspond to different image algorithms and image sensor settings. Table 1 below lists the zoom scenes for the mobile phone when images are acquired with the rear cameras.
Table 1: zoom range and image processing algorithm corresponding relation for collecting image by adopting rear camera
(The body of Table 1 is provided as an image in the original publication and is not reproduced here.)
When the mobile phone uses a front camera to collect images, different zoom multiples likewise correspond to different image algorithms and image sensor settings. Table 2 below lists the zoom scenes for the mobile phone when images are acquired with the front cameras.
Table 2: corresponding relation between zooming range for acquiring image by adopting front-facing camera and image processing algorithm
(The body of Table 2 is provided as an image in the original publication and is not reproduced here.)
An embodiment of the present application further provides a shooting method, please refer to fig. 9, where the method may include:
Here, the shooting method provided in the embodiment of the present application is described by taking the case where the zoom magnification of the electronic device is within the preset magnification range (that is, the zoom magnification is the second magnification, which is smaller than the preset magnification) as an example. For example, the zoom magnification is less than 3 ×. The electronic device realizes the shooting function through the ISP (image signal processor), the camera, the video encoder, the GPU, the display screen, the processor, and the like. If the electronic device uses the front camera to realize the shooting function, the zoom range is 0.8-1.9 ×. If the electronic device uses the rear camera to realize the shooting function, the zoom range includes 1.0-2.9 × and 3.5 ×-6.9 ×.
Step 901: the electronic device starts the camera.
Step 902: the electronic equipment displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification.
Illustratively, the electronic device starts up the camera, and the electronic device displays the preview image corresponding to the zoom magnification of 1 ×.
Step 903: the electronic equipment receives a first operation input by a user, and the first operation instructs the electronic equipment to adjust the zoom magnification of the camera to a second magnification.
Wherein the second magnification is different from the first magnification. The first operation may include instructing the electronic device to increase a zoom magnification of the camera or to decrease the zoom magnification of the camera.
For example, the first operation may be instructing the electronic device to increase a zoom magnification of the camera to a second magnification; alternatively, the first operation may be instructing the electronic device to zoom out the zoom magnification of the camera to the second magnification.
Step 904: and responding to the first operation, the electronic equipment generates a second preview image in a first pixel signal combination mode, and displays the second preview image.
And the zooming magnification corresponding to the second preview image is a second magnification.
Step 905: and determining the brightness of the shooting environment in which the electronic equipment is positioned in response to the shooting key being triggered.
Specifically, the electronic device may recognize the brightness of the shooting environment in which the electronic device is currently located through a plurality of sensors (e.g., an ambient light sensor, a light sensor, etc.) and the display screen.
Step 906: if the brightness is larger than a first preset threshold value, the electronic equipment generates a shot image in a second pixel signal combination mode; and if the brightness is less than or equal to a first preset threshold value, the electronic equipment generates a shot image in a first pixel signal combination mode.
The electronic equipment determines the brightness of the current shooting environment at the moment when the shooting key is triggered, and adjusts the pixel signal combination mode of the image sensor according to the brightness.
In some implementations, when the electronic device generates the captured image by using the second pixel signal combination manner, the electronic device needs to switch from the first pixel signal combination manner to the second pixel signal combination manner when capturing the image. The electronic device stops outputting image frames in the first pixel signal combination manner, and therefore the electronic device stops displaying the preview image. At this time, the electronic device performs the sensor setting for switching from the first pixel signal combination manner to the second pixel signal combination manner, and when the image sensor completes the switching, the electronic device generates the shot image by adopting the second pixel signal combination manner.
After the electronic device generates the shot image, if the electronic device is still in the shooting state, the electronic device may switch the second pixel signal combination mode to the first pixel signal combination mode. The electronic equipment continues to output the image frames in the first pixel signal combination mode and continues to provide preview images for the electronic equipment.
It should be noted that, when the captured image is generated by using the second pixel signal combination method, the electronic device needs to crop (crop) the image output by the image sensor, and reference may be made to the method for generating an image by using the second pixel signal combination method in the foregoing embodiment specifically, and details are not repeated herein to avoid redundancy.
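Putting steps 901-906 together, the capture-time decision for a zoom magnification below the preset magnification can be sketched as below. The function names, the `sensor` object, the brightness unit, and the threshold value are hypothetical assumptions introduced only for illustration; the sketch merely mirrors the control flow described in this embodiment.

```python
PRESET_MAGNIFICATION = 3.0
FIRST_PRESET_THRESHOLD = 100.0   # assumed brightness threshold (arbitrary unit)

def center_crop(image, zoom: float):
    """Cut the centered region corresponding to the requested zoom magnification."""
    h, w = image.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

def generate_captured_image(zoom: float, brightness: float, sensor):
    """Capture path of fig. 9: the zoom magnification is below the preset magnification,
    and the preview was produced with the first (Binning) pixel signal combination."""
    assert zoom < PRESET_MAGNIFICATION
    if brightness > FIRST_PRESET_THRESHOLD:
        # Bright scene: switch to the second combination mode, read a full-resolution
        # frame, then crop it back to the field of view of the requested zoom.
        sensor.set_mode("remosaic")
        full_resolution_frame = sensor.read_frame()
        return center_crop(full_resolution_frame, zoom)
    # Dim scene: keep the first combination mode used for the preview.
    sensor.set_mode("binning")
    return sensor.read_frame()
```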
An embodiment of the present application further provides a shooting method, please refer to fig. 10, where the method may include:
The photographing method provided by the embodiment of the present application is described here by taking the case where the zoom magnification of the electronic device is a third magnification that is greater than or equal to the preset magnification as an example; for example, the zoom magnification is 3 × or more. As in tables 1 and 2, when the photographing function is realized by the rear camera, the zoom ranges include 3.0x-3.4x and 7.0 x-50.0 x; when the photographing function is realized by the front camera, the third magnification can be 2.0 x.
Step 1001: the electronic device starts the camera.
Step 1002: the electronic equipment displays a first preview image, and the zoom magnification corresponding to the first preview image is a first magnification.
Step 1003: the electronic equipment receives a first operation, and the first operation instructs the electronic equipment to adjust the zoom magnification to a third magnification.
The third magnification is different from the first magnification, and the first operation may include instructing the electronic device to increase the zoom magnification of the camera or to decrease the zoom magnification of the camera.
The steps 1001 to 1003 are the same as the steps 901 to 903, and the specific implementation may refer to the related description, which is not described herein again.
Step 1004: the electronic equipment determines the brightness of the shooting environment and judges whether the brightness is larger than a first preset threshold value. If the brightness is less than or equal to the first preset threshold, executing step 1005-step 1006; if the brightness is larger than the first preset threshold, steps 1007-1008 are executed.
The manner of determining the brightness of the shooting environment has been described above, and is not described herein again.
Step 1005: and generating a second preview image by adopting a first pixel signal combination mode, wherein the zoom magnification corresponding to the second preview image is a third magnification.
Step 1006: in response to the shooting key being triggered, the electronic device generates a shot image in a first pixel signal combination mode.
Step 1007: and generating a second preview image by adopting a second pixel signal combination mode, wherein the zoom magnification corresponding to the second preview image is a third magnification.
Step 1008: and in response to the shooting key being triggered, the electronic equipment generates a shot image in a second pixel signal combination mode.
It can be understood that in the method, when the electronic device generates the second preview image by adopting the first pixel signal combination mode, the shooting key is triggered, and the electronic device generates the shot image by adopting the first pixel signal combination mode. When the electronic equipment generates a second preview image by adopting a second pixel signal combination mode, the shooting key is triggered, and the electronic equipment generates a shot image by adopting the second pixel signal combination mode.
It is worth mentioning that in the above method, the electronic device identifies the brightness of the current shooting environment in real time. If the brightness changes, the electronic device can also adjust the pixel signal combination mode in real time. Here, "real time" means, for example, that the electronic device identifies the brightness of the current shooting environment once every 5 s, 10 s, or 15 s.
The pixel signal combination mode of the image sensor is then set according to the zoom magnification and the brightness of the current shooting environment.
The electronic device may refer to the zoom magnifications in tables 1 and 2 described above, and the corresponding pixel signal combination modes of the image sensor at those zoom magnifications. In particular, for example, when the mobile phone acquires an image by using a rear camera and the zoom magnification is 3.3 ×-3.4 ×, or the zoom magnification is 7.0 ×-50.0 ×, the mobile phone may adjust the image readout mode of the image sensor according to the brightness of the current shooting environment. If the shooting environment is a highlight scene, the image is read out in the Remosaic + crop manner to generate the shot image. If the shooting environment is a medium- or low-brightness scene, the image is read out in the Binning manner.
In addition, the electronic device adjusts the pixel signal combination mode in real time according to the brightness of the current shooting environment. In order to avoid frequent switching from affecting shooting, when the brightness is greater than the first preset threshold, the electronic device switches to the second pixel signal combination mode to output image frames. When the brightness is less than the second preset threshold, the electronic device switches from the second pixel signal combination mode back to the first pixel signal combination mode. The second preset threshold is smaller than the first preset threshold.
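For the embodiment of fig. 10 (zoom magnification at or above the preset magnification), both the preview and the captured image follow the pixel signal combination mode chosen from the ambient brightness, with two thresholds to avoid frequent switching. The sketch below is a hedged illustration; the function name and the threshold values are assumptions, and only the control flow follows the description above.

```python
PRESET_MAGNIFICATION = 3.0
FIRST_PRESET_THRESHOLD = 120.0    # switch up to the second combination mode above this
SECOND_PRESET_THRESHOLD = 80.0    # switch back to the first combination mode below this

def select_preview_mode(current_mode: str, zoom: float, brightness: float) -> str:
    """Preview-time mode selection of fig. 10; the captured image then uses
    whatever mode the preview is currently using."""
    assert zoom >= PRESET_MAGNIFICATION
    if current_mode == "binning" and brightness > FIRST_PRESET_THRESHOLD:
        return "remosaic_crop"
    if current_mode == "remosaic_crop" and brightness < SECOND_PRESET_THRESHOLD:
        return "binning"
    return current_mode
```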
It can be understood that the electronic device displays the second preview image with better definition. Therefore, during zoom shooting, the electronic device can improve the definition of the zoomed image, improve the signal-to-noise ratio of the image, and improve the image quality.
The method provided by the embodiment of the present application is described above by taking the electronic device as a mobile phone as an example, and when the electronic device is another device, the method can also be adopted. And will not be described in detail herein.
It is understood that the electronic device provided in the embodiments of the present application includes a hardware structure and/or a software module for performing the above functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Embodiments of the present application further provide a chip system, as shown in fig. 11, where the chip system includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and the interface circuit 1102 may be interconnected by wires. For example, the interface circuit 1102 may be used to receive signals from other devices (e.g., a memory of an electronic device). As another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). Illustratively, the interface circuit 1102 may read instructions stored in the memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is enabled to execute each function or step executed by the mobile phone in the foregoing method embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute each function or step executed by the mobile phone in the above method embodiments.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part thereof that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (23)

1. A photographing method, comprising:
in response to an operation triggering a camera function, launching a camera application of an electronic device;
displaying, by the electronic device, a first preview image, wherein a zoom magnification corresponding to the first preview image is a first magnification;
receiving, by the electronic device, a first operation, wherein the first operation instructs the electronic device to adjust the zoom magnification to a second magnification, and the second magnification is less than a preset magnification;
generating, by the electronic device, a second preview image by using a first pixel signal combination method, and displaying the second preview image, wherein a zoom magnification corresponding to the second preview image is the second magnification;
in response to an operation of triggering a shooting key, determining a brightness of a shooting environment in which the electronic device is located; and
if the brightness is greater than a first preset threshold, generating, by the electronic device, a captured image by using a second pixel signal combination method; or if the brightness is less than or equal to the first preset threshold, generating, by the electronic device, the captured image by using the first pixel signal combination method.

2. The method according to claim 1, wherein the generating, by the electronic device, of the captured image by using the second pixel signal combination method comprises:
outputting, by the electronic device, an original image by using the second pixel signal combination method, wherein a zoom magnification of the original image is the first magnification, and the first magnification is greater than the second magnification; and
cropping the original image to generate the captured image, wherein the captured image is an image whose zoom magnification is the second magnification.

3. The method according to claim 1 or 2, wherein after the brightness is determined to be greater than the first preset threshold and before the electronic device generates the captured image by using the second pixel signal combination method, the method further comprises:
stopping, by the electronic device, outputting image frames by using the first pixel signal combination method;
stopping, by the electronic device, displaying the second preview image; and
switching, by the electronic device, from the first pixel signal combination method to the second pixel signal combination method.

4. The method according to any one of claims 1 to 3, wherein after the captured image is generated by using the second pixel signal combination method when the brightness is greater than the first preset threshold, or by using the first pixel signal combination method when the brightness is less than or equal to the first preset threshold, the method further comprises:
generating, by the electronic device, a third preview image by using the first pixel signal combination method, and displaying the third preview image.

5. The method according to any one of claims 1 to 4, wherein the generating, by the electronic device, of the captured image by using the first pixel signal combination method comprises:
outputting, by the electronic device, a plurality of image frames by using the first pixel signal combination method; and
generating, by the electronic device, the captured image from the plurality of image frames based on a first image fusion algorithm,
wherein the first image fusion algorithm comprises at least one of a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.

6. The method according to claim 2, wherein the cropping of the original image to generate the captured image comprises:
obtaining pre-processed image frames after the original image is cropped; and
generating, by the electronic device, the captured image from a plurality of the pre-processed image frames based on a second image fusion algorithm,
wherein the second image fusion algorithm comprises at least one of a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.

7. The method according to any one of claims 1 to 6, wherein:
the electronic device comprises one or more front-facing cameras, the electronic device implements a shooting function through the one or more front-facing cameras, and the second magnification is a magnification value in a first magnification interval, the first magnification interval being 0.8× to 1.9×; or
the electronic device comprises one or more rear-facing cameras, the electronic device implements a shooting function through the one or more rear-facing cameras, and the second magnification is a magnification value in a second magnification interval, the second magnification interval being 1.0× to 2.9×.

8. A photographing method, comprising:
in response to an operation triggering a camera function, launching a camera application of an electronic device;
displaying, by the electronic device, a first preview image, wherein a zoom magnification corresponding to the first preview image is a first magnification;
receiving, by the electronic device, a first operation, wherein the first operation instructs the electronic device to adjust the zoom magnification to a third magnification, and the third magnification is greater than or equal to a preset magnification;
determining a brightness of a shooting environment in which the electronic device is located; and
if the brightness is less than or equal to a first preset threshold, generating a second preview image by using a first pixel signal combination method, wherein a zoom magnification corresponding to the second preview image is the third magnification; or
if the brightness is greater than the first preset threshold, generating a second preview image by using a second pixel signal combination method, wherein the zoom magnification corresponding to the second preview image is the third magnification.

9. The method according to claim 8, wherein after the second preview image is generated by using the first pixel signal combination method, the method further comprises:
determining a current brightness of the shooting environment of the electronic device;
if the current brightness is greater than the first preset threshold, switching, by the electronic device, from the first pixel signal combination method to the second pixel signal combination method; and
generating a third preview image by using the second pixel signal combination method, wherein a zoom magnification corresponding to the third preview image is the third magnification.

10. The method according to claim 8 or 9, wherein after the second preview image is generated by using the second pixel signal combination method, the method further comprises:
determining a current brightness of the shooting environment in which the electronic device is located;
if the current brightness is less than or equal to a second preset threshold, switching, by the electronic device, from the second pixel signal combination method to the first pixel signal combination method, wherein the second preset threshold is less than the first preset threshold; and
generating a fourth preview image by using the first pixel signal combination method, wherein a zoom magnification corresponding to the fourth preview image is the third magnification.

11. The method according to any one of claims 8 to 10, wherein:
when the electronic device generates the second preview image by using the first pixel signal combination method, the method further comprises: in response to a shooting key being triggered, generating, by the electronic device, a captured image by using the first pixel signal combination method; and
when the electronic device generates the second preview image by using the second pixel signal combination method, the method further comprises: in response to the shooting key being triggered, generating, by the electronic device, a captured image by using the second pixel signal combination method.

12. The method according to claim 9, further comprising:
when the electronic device generates the second preview image by using the first pixel signal combination method, outputting, by the electronic device, preview image frames at a first rate; and
when the electronic device generates the third preview image by using the second pixel signal combination method, outputting, by the electronic device, preview image frames at the first rate.

13. The method according to claim 10, further comprising:
when the electronic device generates the second preview image by using the second pixel signal combination method, outputting, by the electronic device, preview image frames at a first rate; and
when the electronic device generates the fourth preview image by using the first pixel signal combination method, outputting, by the electronic device, preview image frames at the first rate.

14. The method according to claim 9 or 12, wherein the generating of the third preview image by using the first pixel signal combination method comprises:
outputting, by the electronic device, a first initial image by using the first pixel signal combination method, and generating the third preview image from the first initial image according to a preset image algorithm, so that a brightness of the third preview image approximates a brightness of the second preview image.

15. The method according to claim 10 or 13, wherein the generating of the fourth preview image by using the second pixel signal combination method comprises:
outputting, by the electronic device, a second initial image by using the second pixel signal combination method, and generating the fourth preview image from the second initial image according to a preset image algorithm, so that a brightness of the fourth preview image approximates the brightness of the second preview image.

16. The method according to claim 14, wherein a resolution of the first initial image corresponds to the third magnification.

17. The method according to claim 15, wherein a resolution of the second initial image is a fourth resolution, and the fourth resolution is greater than a third resolution; and
after the electronic device outputs the second initial image by using the second pixel signal combination method and before the fourth preview image is generated from the second initial image according to the preset image algorithm, the method further comprises:
cropping, by the electronic device, the second initial image so that the resolution of the second initial image is the third resolution.

18. The method according to claim 11, wherein:
the generating, by the electronic device, of the captured image by using the first pixel signal combination method comprises: outputting, by the electronic device, a plurality of image frames by using the first pixel signal combination method, and generating the captured image from the plurality of image frames based on a first image fusion algorithm; and
the generating, by the electronic device, of the captured image by using the second pixel signal combination method comprises: outputting, by the electronic device, a plurality of original image frames by using the second pixel signal combination method, cropping each original image frame to obtain a plurality of cropped image frames, and generating the captured image from the plurality of cropped image frames based on a second image fusion algorithm.

19. The method according to claim 17, wherein the first image fusion algorithm comprises at least one of a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm; and
the second image fusion algorithm comprises at least one of a multi-frame noise reduction algorithm, a high dynamic range algorithm, a high-dynamic super-resolution algorithm, and a face-based super-resolution algorithm.

20. The method according to any one of claims 8 to 19, wherein:
if the electronic device comprises one or more front-facing cameras and implements a shooting function through the one or more front-facing cameras, the third magnification is 2.0×; or
if the electronic device comprises one or more rear-facing cameras and implements a shooting function through the one or more rear-facing cameras, the third magnification is a magnification value in a third magnification interval or a fourth magnification interval.

21. The method according to claim 19, wherein the third magnification interval is 3.0× to 3.4×, and the fourth magnification interval is 7.0× to 50.0×.

22. An electronic device, comprising:
one or more processors;
a memory storing code; and
a touchscreen configured to detect touch operations and display an interface,
wherein when the code is executed by the one or more processors, the electronic device is caused to perform the method according to any one of claims 1 to 21.

23. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the photographing method according to any one of claims 1 to 21.
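To make the claimed decision flow easier to follow, the sketch below restates it as short Python functions. It is an illustrative reading of the claims, not the applicant's implementation: the mapping of the "first pixel signal combination method" to pixel binning and of the "second" to a full-resolution readout, the mode names, the numeric values, and the crop arithmetic are all assumptions introduced here.

# Illustrative sketch only -- not taken from the patent. Mode names, threshold
# values, crop arithmetic, and function names are assumptions added for clarity;
# the claims only recite a first/second pixel signal combination method, a first
# and a second preset threshold, and magnification-based cropping.

from dataclasses import dataclass
from enum import Enum, auto

import numpy as np


class PixelMode(Enum):
    BINNING = auto()      # assumed reading of the "first pixel signal combination method"
    FULL_PIXEL = auto()   # assumed reading of the "second pixel signal combination method"


@dataclass
class ZoomConfig:
    base_magnification: float    # magnification of the uncropped sensor frame (assumption)
    target_magnification: float  # zoom magnification requested by the user


def choose_capture_mode(brightness: float, first_threshold: float) -> PixelMode:
    """Claim 1: bright scene -> second (full-pixel) method, otherwise first (binning) method."""
    return PixelMode.FULL_PIXEL if brightness > first_threshold else PixelMode.BINNING


def crop_to_magnification(raw: np.ndarray, cfg: ZoomConfig) -> np.ndarray:
    """Claim 2 recites cropping the raw frame so the result corresponds to the requested
    zoom magnification; the centre-crop fraction used here is an assumed geometry."""
    keep = min(1.0, cfg.base_magnification / cfg.target_magnification)
    h, w = raw.shape[:2]
    ch, cw = max(1, int(h * keep)), max(1, int(w * keep))
    top, left = (h - ch) // 2, (w - cw) // 2
    return raw[top:top + ch, left:left + cw]


def update_preview_mode(mode: PixelMode, brightness: float,
                        first_threshold: float, second_threshold: float) -> PixelMode:
    """Claims 9 and 10: switch the preview readout with hysteresis
    (second_threshold < first_threshold) so the sensor does not oscillate
    when the ambient brightness hovers around a single threshold."""
    if mode is PixelMode.BINNING and brightness > first_threshold:
        return PixelMode.FULL_PIXEL
    if mode is PixelMode.FULL_PIXEL and brightness <= second_threshold:
        return PixelMode.BINNING
    return mode


if __name__ == "__main__":
    cfg = ZoomConfig(base_magnification=1.0, target_magnification=2.0)
    raw_frame = np.zeros((3000, 4000, 3), dtype=np.uint8)   # placeholder full-pixel frame
    mode = choose_capture_mode(brightness=320.0, first_threshold=250.0)  # made-up values
    shot = crop_to_magnification(raw_frame, cfg) if mode is PixelMode.FULL_PIXEL else raw_frame
    print(mode, shot.shape)

The hysteresis in update_preview_mode reflects claims 9 and 10, where the threshold for switching back to the first method (the second preset threshold) is deliberately lower than the threshold for switching away from it, so that small brightness fluctuations near a single threshold do not cause repeated mode switches.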

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202110662935.6A (CN113727016A) | 2021-06-15 | 2021-06-15 | Shooting method and electronic equipment
PCT/CN2021/143949 (WO2022262260A1) | 2021-06-15 | 2021-12-31 | Photographing method and electronic device

Publications (1)

Publication Number | Publication Date
CN113727016A | 2021-11-30

Family

Family ID: 78672952

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110662935.6A (CN113727016A, pending) | Shooting method and electronic equipment | 2021-06-15 | 2021-06-15

Country Status (2)

Country | Link
CN | CN113727016A (en)
WO | WO2022262260A1 (en)


Also Published As

Publication Number | Publication Date
WO2022262260A1 (en) | 2022-12-22


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-11-30)
