CN111369482B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium
Download PDF

Info

Publication number
CN111369482B
CN111369482B (application CN202010140777.3A)
Authority
CN
China
Prior art keywords
image
processed
ith
repair
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010140777.3A
Other languages
Chinese (zh)
Other versions
CN111369482A (en)
Inventor
林松楠
张佳维
任思捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202010140777.3A
Publication of CN111369482A
Application granted
Publication of CN111369482B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: determining brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of the (i-1)th image to be processed, where the event information is acquired by an event acquisition device and i is an integer greater than 1; and determining a repair image of the ith image to be processed according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, where the sharpness of the repair image is greater than that of the image to be processed. Embodiments of the disclosure can improve the image deblurring effect.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Conventional image capture devices capture images that conform to people's viewing habits, such as RGB images. Event acquisition devices (e.g., event cameras), by contrast, capture asynchronous brightness changes (i.e., events) at high temporal frequency. In the related art, a blurred image may be deblurred using the events corresponding to it; however, the image processing effect of this approach is poor.
Disclosure of Invention
The present disclosure proposes an image processing technique.
According to an aspect of the present disclosure, there is provided an image processing method including: determining brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of the (i-1)th image to be processed, where the event information is acquired by an event acquisition device and i is an integer greater than 1; and determining a repair image of the ith image to be processed according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, where the sharpness of the repair image is greater than that of the image to be processed.
In one possible implementation, the determining the brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1)th image to be processed includes: performing feature extraction on the event information of the ith image to be processed and the event information of the (i-1)th image to be processed to obtain a first event feature of the ith image to be processed; convolving the first event feature according to a convolution kernel tensor of the first event feature to obtain a second event feature of the ith image to be processed, where the convolution kernel tensor includes a convolution kernel for each channel of the first event feature; and performing brightness increment prediction on the second event feature to obtain the brightness increment information of the ith image to be processed.
In one possible implementation, the determining the brightness increment information of the ith image to be processed according to the event information of the ith image to be processed and the event information of the (i-1)th image to be processed further includes: performing convolution kernel prediction on each channel of the first event feature according to reference information corresponding to the ith image to be processed to obtain the convolution kernel tensor of the first event feature, where the number of channels of the convolution kernel tensor is the same as the number of channels of the first event feature, and the reference information includes the ith image to be processed and/or event information of the ith image to be processed.
In one possible implementation, the reference information further includes at least one of the (i-1)th image to be processed, event information of the (i-1)th image to be processed, and a repair image of the (i-1)th image to be processed.
In one possible implementation, the brightness increment information includes first brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed, and second brightness increment information of the ith image to be processed relative to the repair image of the (i-1)th image to be processed. The determining the repair image of the ith image to be processed according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed includes the following steps: multiplying the ith image to be processed by the first brightness increment information to obtain a first repair image of the ith image to be processed; multiplying the repair image of the (i-1)th image to be processed by the second brightness increment information to obtain a second repair image of the ith image to be processed; performing weight prediction on the first repair image and the second repair image according to the ith image to be processed and the event information of the ith image to be processed to obtain a first weight for the first repair image and a second weight for the second repair image; and weighted-adding the first repair image and the second repair image according to the first weight and the second weight to obtain the repair image of the ith image to be processed.
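The multiply-and-fuse steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the weight-prediction network is replaced by externally supplied per-pixel weights, and the function name `fuse_repair` is an assumption.

```python
import numpy as np

def fuse_repair(img_i, repair_prev, first_inc, second_inc, w1, w2):
    """Fuse two preliminary repair images into the repair image of frame i.

    img_i:       blurred ith image to be processed, shape (H, W)
    repair_prev: repair image of the (i-1)th image, shape (H, W)
    first_inc:   first brightness increment information, shape (H, W)
    second_inc:  second brightness increment information, shape (H, W)
    w1, w2:      predicted per-pixel weights (assumed to sum to 1)
    """
    first_repair = img_i * first_inc          # repair estimate from the blurred frame
    second_repair = repair_prev * second_inc  # repair estimate propagated from frame i-1
    return w1 * first_repair + w2 * second_repair  # weighted addition

# Toy example with constant weights and increments.
img = np.full((2, 2), 0.5)
prev = np.full((2, 2), 0.8)
out = fuse_repair(img, prev,
                  np.full((2, 2), 1.2), np.full((2, 2), 1.0),
                  np.full((2, 2), 0.5), np.full((2, 2), 0.5))
```

Each pixel of `out` is 0.5 * (0.5 * 1.2) + 0.5 * (0.8 * 1.0) = 0.7, i.e. a blend of the two preliminary repairs.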
In one possible implementation, the repair image of each image to be processed includes 2n+1 repair images, where n is a positive integer, and the brightness increment information further includes third brightness increment information of the 2n+1 repair images of the ith image to be processed relative to the 2n+1 repair images of the (i-1)th image to be processed. The determining the repair image of the ith image to be processed according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed further includes: multiplying the first repair image and the second repair image by the third brightness increment information respectively to obtain 2n+1 groups of third repair images; performing weight prediction on each third repair image of each group in the 2n+1 groups according to the ith image to be processed and the event information of the ith image to be processed to obtain third weights for the 2n+1 groups of third repair images; and weighted-adding, according to the third weights, the third repair images within each of the 2n+1 groups to obtain the 2n+1 repair images of the ith image to be processed.
In one possible implementation, the second brightness increment information includes 2n+1 pieces of second brightness increment information, and the multiplying the repair image of the (i-1)th image to be processed by the second brightness increment information to obtain the second repair image includes: multiplying the 2n+1 repair images of the (i-1)th image to be processed by the corresponding pieces of second brightness increment information respectively, to obtain 2n+1 second repair images.
In a possible implementation, the number of images to be processed is N, where N is an integer and 1 < i ≤ N, and the method further includes: determining a repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
an incremental information determining module, configured to determine brightness increment information of an ith image to be processed according to event information of the ith image to be processed and event information of the (i-1)th image to be processed, where the event information is acquired by an event acquisition device and i is an integer greater than 1; and an image repair module, configured to determine a repair image of the ith image to be processed according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, where the sharpness of the repair image is greater than that of the image to be processed.
In one possible implementation, the incremental information determining module includes: a feature extraction submodule, configured to perform feature extraction on the event information of the ith image to be processed and the event information of the (i-1)th image to be processed, to obtain a first event feature of the ith image to be processed; a convolution submodule, configured to convolve the first event feature according to a convolution kernel tensor of the first event feature, to obtain a second event feature of the ith image to be processed, where the convolution kernel tensor includes a convolution kernel for each channel of the first event feature; and an increment prediction submodule, configured to perform brightness increment prediction on the second event feature, to obtain the brightness increment information of the ith image to be processed.
In one possible implementation manner, the incremental information determining module further includes: and the convolution kernel prediction sub-module is used for carrying out convolution kernel prediction on each channel of the first event feature according to the reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event feature, wherein the number of channels of the convolution kernel tensor is the same as that of the first event feature, and the reference information comprises the ith image to be processed and/or event information of the ith image to be processed.
In one possible implementation, the reference information further includes at least one of the (i-1)th image to be processed, event information of the (i-1)th image to be processed, and a repair image of the (i-1)th image to be processed.
In one possible implementation, the brightness increment information includes first brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed, and second brightness increment information of the ith image to be processed relative to the repair image of the (i-1)th image to be processed, and the image repair module includes:
a first repair submodule, configured to multiply the ith image to be processed by the first brightness increment information, to obtain a first repair image of the ith image to be processed; a second repair submodule, configured to multiply the repair image of the (i-1)th image to be processed by the second brightness increment information, to obtain a second repair image of the ith image to be processed; a first weight prediction submodule, configured to perform weight prediction on the first repair image and the second repair image according to the ith image to be processed and the event information of the ith image to be processed, to obtain a first weight for the first repair image and a second weight for the second repair image; and a first weighted-addition submodule, configured to weighted-add the first repair image and the second repair image according to the first weight and the second weight, to obtain the repair image of the ith image to be processed.
In one possible implementation, the repair image of each image to be processed includes 2n+1 repair images, where n is a positive integer, the brightness increment information further includes third brightness increment information of the 2n+1 repair images of the ith image to be processed relative to the 2n+1 repair images of the (i-1)th image to be processed, and the image repair module further includes:
a third repair submodule, configured to multiply the first repair image and the second repair image by the third brightness increment information respectively, to obtain 2n+1 groups of third repair images; a second weight prediction submodule, configured to perform weight prediction on each third repair image of each group in the 2n+1 groups according to the ith image to be processed and the event information of the ith image to be processed, to obtain third weights for the 2n+1 groups of third repair images; and a second weighted-addition submodule, configured to weighted-add, according to the third weights, the third repair images within each of the 2n+1 groups, to obtain the 2n+1 repair images of the ith image to be processed.
In one possible implementation, the second brightness increment information includes 2n+1 pieces of second brightness increment information, and the second repair submodule is configured to: multiply the 2n+1 repair images of the (i-1)th image to be processed by the corresponding pieces of second brightness increment information respectively, to obtain 2n+1 second repair images.
In a possible implementation, the number of images to be processed is N, where N is an integer and 1 < i ≤ N, and the apparatus further includes: a video repair module, configured to determine a repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In embodiments of the disclosure, the brightness increment of the current image can be determined from the event information of the current image and of the previous image; the repair image of the current image is then determined from the current image, the repair image of the previous image, and the brightness increment information. The image is thus repaired using both the event information and the repair result of the previous image, so that details in the image are preserved and the image deblurring effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2a, 2b and 2c show schematic diagrams of a neural network according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a processing procedure of an image processing method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Fig. 6 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, as shown in fig. 1, the method including:
In step S11, brightness increment information of the ith image to be processed is determined according to event information of the ith image to be processed and event information of the (i-1)th image to be processed, where the event information is acquired by an event acquisition device and i is an integer greater than 1;
in step S12, a repair image of the ith image to be processed is determined according to the ith image to be processed, the repair image of the (i-1)th image to be processed, and the brightness increment information of the ith image to be processed, where the sharpness of the repair image is greater than that of the image to be processed.
In a possible implementation, the image processing method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor invoking computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
In one possible implementation, the image to be processed may be, for example, a video frame acquired by an image acquisition device (such as a camera). Such an image may have low sharpness, and may suffer from blur, a small dynamic range, and the like. In this case, the image to be processed may be repaired, i.e., deblurred, using event information acquired by an event acquisition device (e.g., an event camera). Event information corresponds to an image to be processed when the acquisition time of the image falls within the preset time period over which the event information is collected. The event information represents the brightness change of each pixel of the image within that time period; its value may be, for example, positive for brightening, negative for darkening, and zero for unchanged brightness. The present disclosure is not limited in this regard.
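As a toy illustration of such event information (not taken from the patent), the signed brightness changes of an event stream can be accumulated into a per-pixel frame; the event tuple layout `(x, y, polarity)` is an assumption for this sketch.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a list of events into a per-pixel event frame.

    events: iterable of (x, y, polarity) tuples, polarity being +1
            (brightening) or -1 (darkening), collected over the
            preset time period of one image to be processed.
    Returns an (height, width) array that is positive where the pixel
    brightened overall, negative where it darkened, and zero otherwise.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, polarity in events:
        frame[y, x] += polarity
    return frame

# Two brightening events at (0, 0) and one darkening event at (1, 1).
frame = events_to_frame([(0, 0, +1), (0, 0, +1), (1, 1, -1)], 2, 2)
```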
In one possible implementation, the number of images to be processed is N, where N is an integer greater than 1. For the current ith image to be processed (1 < i ≤ N), in step S11 the brightness increment information of the ith image to be processed may be determined according to the event information of the ith image to be processed and the event information of the (i-1)th image to be processed. The brightness increment information represents the brightness difference indicated by the event information between the ith and the (i-1)th images to be processed.
The event information can be processed, for example, by a convolutional neural network: the event information of the ith image to be processed and that of the (i-1)th image to be processed are superimposed, event features are extracted from the superimposed event information, and the brightness increment information of the ith image to be processed is then determined from these event features. The present disclosure does not limit the network architecture of the convolutional neural network.
In one possible implementation, after the brightness increment information of the ith image to be processed is obtained, in step S12 the brightness increment information is multiplied respectively by the ith image to be processed and by the repair image of the (i-1)th image to be processed, to obtain a plurality of preliminary repair images; these preliminary repair images are then fused to obtain the repair image of the ith image to be processed.
The event information may represent the difference between the blurred image and the sharp image, and the information in the repair image of the previous image helps preserve details. Therefore, repairing the image to be processed using both the brightness increment information and the repair image of the previous image makes the repair image sharper than the image to be processed while reducing the loss of image detail. That is, after the processing of steps S11-S12, deblurring of the image is achieved.
In one possible implementation, for the 1st of the N images to be processed, an initial 0th image to be processed (for example, a preset gray-scale image, a random gray-scale image, or the 1st image to be processed itself) may be set together with its event information and a repair image of the 0th image (for example, one deblurred by a related-art method), and the repair image of the 1st image to be processed is obtained by the processing of steps S11-S12. The present disclosure does not limit the image content of the 0th image to be processed and its repair image, or the specific repair method.
In one possible implementation, the N images to be processed are processed sequentially through steps S11-S12 to obtain the repair images of the N images to be processed. The repair images may be output individually, or assembled into a repair video, thereby completing the deblurring of the whole video.
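The frame-by-frame procedure of steps S11-S12 can be sketched as a simple recurrence. This is a structural sketch only: `estimate_increments` and `fuse` are hypothetical stand-ins for the networks described in the disclosure.

```python
def deblur_sequence(frames, events, repair0, estimate_increments, fuse):
    """Sequentially repair N frames (steps S11-S12 applied per frame).

    frames:  list of N images to be processed (index 0 = 1st frame)
    events:  list of N event-information items for those frames
    repair0: repair image of the initial 0th frame (see above)
    """
    repairs = []
    prev_repair = repair0
    prev_events = None
    for frame, ev in zip(frames, events):
        # S11: brightness increments from current and previous event info.
        increments = estimate_increments(ev, prev_events)
        # S12: fuse current frame, previous repair, and increments.
        cur_repair = fuse(frame, prev_repair, increments)
        repairs.append(cur_repair)
        prev_repair, prev_events = cur_repair, ev
    return repairs  # may then be assembled into a repair video

# Toy usage with scalar "images" and stand-in functions.
inc = lambda ev, prev: 1.0
avg_fuse = lambda frame, prev_repair, k: 0.5 * (frame + prev_repair) * k
repairs = deblur_sequence([2.0, 4.0], [None, None], 0.0, inc, avg_fuse)
```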
According to embodiments of the disclosure, the brightness increment of the current image can be determined from the event information of the current image and of the previous image; the repair image of the current image is then determined from the current image, the repair image of the previous image, and the brightness increment information. The image is thus repaired using the event information and the repair result of the previous image, preserving details in the image and improving the deblurring effect.
In one possible implementation, step S11 may include:
performing feature extraction on the event information of the ith image to be processed and the event information of the (i-1)th image to be processed, to obtain a first event feature of the ith image to be processed;
convolving the first event feature according to a convolution kernel tensor of the first event feature to obtain a second event feature of the ith image to be processed, wherein the convolution kernel tensor comprises convolution kernels of all channels of the first event feature;
and carrying out brightness increment prediction on the second event feature to obtain brightness increment information of the ith image to be processed.
For example, the event information of the ith image to be processed and the event information of the (i-1)th image to be processed may be superimposed, the superimposed event information input into a preset feature extraction network for processing, and the first event feature output. The feature extraction network may include a plurality of convolution layers, a plurality of residual blocks, and the like, which is not limited by this disclosure.
In one possible implementation, a three-dimensional convolution kernel tensor of the first event feature may be provided, where the convolution kernel tensor includes a convolution kernel for each channel of the first event feature and each convolution kernel has a preset size K×K, where K is a positive integer. For example, if the first event feature includes 128 channels and the convolution kernel size is 3×3, the size of the convolution kernel tensor is 3×3×128. A fixed convolution kernel may be provided for each channel of the first event feature, or a dynamic convolution kernel may be provided for each channel; this is not limited by the present disclosure.
In one possible implementation, the first event feature may be convolved with the convolution kernel tensor to obtain the second event feature. Thus, the enhancement of event characteristics can be realized, and the precision of subsequent image processing is improved.
In one possible implementation, the second event feature may be input into a preset increment prediction network, and brightness increment prediction performed on it to obtain the brightness increment information. There may be multiple pieces of brightness increment information, for example brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed (obtained by event-based double integral (EDI)) and brightness increment information of the ith image to be processed relative to the repair image of the (i-1)th image to be processed (obtained by event-based single integral), and the like. The increment prediction network may include a plurality of deconvolution layers, a plurality of residual blocks, and the like, which is not limited by this disclosure. In this way, the brightness increment information of the ith image to be processed can be obtained for subsequent processing.
During the acquisition of event information, an event is triggered whenever the brightness change at a pixel exceeds a preset threshold, so the sum of events captured over a certain time interval can represent the proportion of brightness change. In a physical model of event-based video reconstruction, a blurred image frame (i.e., an image to be processed) may be approximated as the average of a plurality of latent image frames discretized over the time interval. In this case, the brightness increment information of the image to be processed may be obtained from the preset brightness-change threshold and the average of the proportions of brightness change over the time interval. That is, the first brightness increment information of the ith image to be processed relative to the (i-1)th image to be processed is obtained through the event-based double integral principle.
In one possible implementation, the brightness increment information of the blurred image frame relative to each latent image frame may be obtained from the proportion of brightness change between the blurred image frame and that latent image frame. That is, the second brightness increment information relative to the repair image of the (i-1)th image to be processed is obtained through the event-based single integral principle.
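The double-integral relation referenced above can be written out explicitly. The following is a standard formulation from the event-based deblurring literature, with symbols chosen here for illustration rather than taken from the patent: B is the blurred frame, L(t) the latent frame at time t, L(f) the sharp frame at reference time f, E(t) the signed sum of events between f and t, c the contrast threshold, and T the exposure time.

```latex
% Latent frame at time t, propagated from the sharp frame L(f)
% by the accumulated events E(t) with contrast threshold c:
L(t) = L(f)\, \exp\!\big(c\, E(t)\big)

% Blurred frame as the temporal average of latent frames over T:
B = \frac{1}{T} \int_{f - T/2}^{f + T/2} L(t)\, dt
  = L(f) \cdot \frac{1}{T} \int_{f - T/2}^{f + T/2} \exp\!\big(c\, E(t)\big)\, dt

% Hence the double-integral term acts as a brightness increment
% mapping the blurred frame to the sharp frame:
L(f) = B \Big/ \left( \frac{1}{T} \int_{f - T/2}^{f + T/2}
       \exp\!\big(c\, E(t)\big)\, dt \right)
```

The first brightness increment information described in the text plays the role of this (inverted) double-integral factor, while the single-integral factor exp(c E(t)) relates two individual frames.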
Fig. 2a, 2b and 2c show schematic diagrams of a neural network according to an embodiment of the present disclosure. As shown in fig. 2a, the neural network includes a feature extraction network 21 and an increment prediction network 22. The event information E_i of the ith image to be processed and the event information E_{i-1} of the (i-1)th image to be processed are superimposed; the superimposed information is input into the feature extraction network 21 and processed by a plurality of network blocks (each including a convolution layer and a residual block) with channel numbers 64, 96 and 128 in turn, and the last network block outputs the first event feature Q_i (with 128 channels). The first event feature Q_i is convolved with the convolution kernel tensor H_i to obtain the second event feature G_i. The second event feature G_i is input into the increment prediction network 22 and processed by a plurality of network blocks (each including a deconvolution layer and a residual block) with channel numbers 96, 64 and 32 in turn; the last three network blocks output three pieces of brightness increment information: the first brightness increment information C_i, the second brightness increment information P_i, and the third brightness increment information I_i.
In one possible implementation, step S11 may further include:
according to the reference information corresponding to the ith image to be processed, carrying out convolution kernel prediction on each channel of the first event feature to obtain a convolution kernel tensor of the first event feature, wherein the number of channels of the convolution kernel tensor is the same as that of the first event feature,
wherein the reference information comprises the ith to-be-processed image and/or event information of the ith to-be-processed image.
For example, convolution kernel prediction may be performed on each channel of the first event feature according to information corresponding to the image to be processed (which may be referred to as reference information), thereby generating a dynamic convolution kernel from the information of the image to be processed to enhance the event feature. The reference information may include at least the ith to-be-processed image itself and/or the event information of the ith to-be-processed image, thereby improving the accuracy of the predicted convolution kernel.
In one possible implementation, the reference information may further include: at least one of the i-1 th image to be processed, event information of the i-1 th image to be processed and a repair image of the i-1 th image to be processed. That is, the convolution kernels of the respective channels of the first event feature may be predicted using more information associated with the image to be processed as reference information.
In one possible implementation manner, the ith to-be-processed image, the (i-1)th to-be-processed image, the event information of the (i-1)th to-be-processed image and the repair image of the (i-1)th to-be-processed image may be superimposed; the superimposed information is input into a convolution kernel prediction network for processing to obtain a convolution kernel matrix (with a size of 128×K²); the convolution kernel matrix is reshaped (Reshape) to obtain the convolution kernel tensor of the first event feature (with a scale of K×K×128), where the number of channels of the convolution kernel tensor is the same as the number of channels of the first event feature.
As shown in fig. 2b, the neural network according to an embodiment of the present disclosure further comprises a convolution kernel prediction network 23. The ith to-be-processed image Bi, the (i-1)th to-be-processed image Bi-1, the event information Ei of the ith to-be-processed image, the event information Ei-1 of the (i-1)th to-be-processed image and the repair image Si-1 of the (i-1)th to-be-processed image are superimposed; the superimposed information is input into the convolution kernel prediction network 23 and processed by a plurality of network blocks (each network block including a convolution layer and a residual block) whose channel numbers are 64, 96 and 128 in order, and the last network block outputs a convolution kernel matrix (size 128×K²); the convolution kernel matrix is reshaped (Reshape) to obtain the convolution kernel tensor Hi of the first event feature (scale K×K×128).
By the method, the current image, the event of the current image and the information of the previous image can be introduced to participate in the convolution kernel prediction, so that the accuracy of the obtained dynamic convolution kernel is further improved, and the effect of enhancing the event characteristics is improved.
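The reshaping and per-channel convolution described above can be sketched as a depthwise dynamic convolution. The following is an illustrative numpy sketch under assumed shapes (C channels, K×K kernels predicted as a C×K² matrix), not the disclosed network implementation:

```python
import numpy as np

def depthwise_dynamic_conv(feature, kernel_matrix, K):
    """Apply a predicted per-channel convolution kernel to each channel.

    feature: (C, H, W) first event feature.
    kernel_matrix: (C, K*K) predicted kernels, reshaped to (C, K, K) so the
    number of kernel channels matches the number of feature channels."""
    C, H, W = feature.shape
    kernels = kernel_matrix.reshape(C, K, K)
    pad = K // 2
    padded = np.pad(feature, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(feature)
    for c in range(C):                      # each channel uses its own kernel
        for y in range(H):
            for x in range(W):
                out[c, y, x] = (padded[c, y:y+K, x:x+K] * kernels[c]).sum()
    return out
```

A kernel matrix whose rows are delta kernels (1 at the center, 0 elsewhere) leaves the feature unchanged, which is a convenient sanity check for the reshaping.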
In one possible implementation, the brightness increment information obtained in step S11 may include first brightness increment information of the i-th to-be-processed image with respect to the i-1-th to-be-processed image (e.g., double-integral brightness increment information obtained based on a double-integral principle), and second brightness increment information of the i-th to-be-processed image with respect to the repair image of the i-1-th to-be-processed image (e.g., single-integral brightness increment information obtained based on a single-integral principle).
After the brightness increment information is obtained, deblurring processing may be performed on the i-th image to be processed in step S12. Wherein, step S12 may include:
multiplying the ith image to be processed with the first brightness increment information to obtain a first repair image of the ith image to be processed;
multiplying the repair image of the (i-1)th image to be processed with the second brightness increment information to obtain a second repair image of the ith image to be processed;
According to the ith to-be-processed image and the event information of the ith to-be-processed image, carrying out weight prediction on the first repair image and the second repair image to obtain a first weight of the first repair image and a second weight of the second repair image;
and according to the first weight and the second weight, the first repair image and the second repair image are added in a weighted mode, and the repair image of the ith image to be processed is obtained.
For example, the ith image to be processed may be subjected to preliminary restoration in step S12: multiplying the ith to-be-processed image by the first brightness increment information yields the first repair image of the ith to-be-processed image, which is a preliminary repair image; meanwhile, multiplying the repair image of the (i-1)th to-be-processed image by the second brightness increment information yields the second repair image of the ith to-be-processed image, which is also a preliminary repair image.
In one possible implementation manner, the ith to-be-processed image, event information of the ith to-be-processed image, the first repair image and the second repair image may be superimposed; and inputting the superimposed information into a weight prediction network to perform weight prediction to obtain a first weight of the first repair image and a second weight of the second repair image. The weight prediction network may be a convolutional neural network, including a plurality of convolutional layers, an active layer, etc., which is not limiting of the present disclosure.
As shown in fig. 2c, the neural network according to an embodiment of the present disclosure further includes a weight prediction network 24. The ith to-be-processed image Bi, the event information Ei of the ith to-be-processed image and the preliminary repair images Fi (including the first repair image and the second repair image) are superimposed; the superimposed information is input into the weight prediction network 24 and processed by a plurality of 3D convolution layers (with 64 channels), and the obtained features are activated by Sigmoid to obtain the weight map Mi of the first repair image and the second repair image, from which the first weight and the second weight of the first repair image and the second repair image can be determined. Processing by 3D convolution layers can improve the accuracy of weight prediction.
In one possible implementation manner, according to the first weight and the second weight, the first repair image and the second repair image may be added in a weighted manner, so as to obtain a repair image of the ith image to be processed. In this way, a repair image with higher quality can be obtained, and the deblurring effect of the image is improved.
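The preliminary restoration and weighted fusion described above can be sketched as follows. This is an illustrative numpy sketch with hypothetical shapes; the weight map stands in for the output of the weight prediction network:

```python
import numpy as np

def fuse_preliminary_repairs(blurred_i, first_delta, prev_repair, second_delta, weight_map):
    """Weighted fusion of the two preliminary repairs of the i-th frame.

    weight_map: per-pixel weights in [0, 1]; in the disclosure these come from
    the weight prediction network, here they are supplied directly."""
    first_repair = blurred_i * first_delta      # B_i multiplied by C_i
    second_repair = prev_repair * second_delta  # S_{i-1} multiplied by P_i
    # weighted addition of the two preliminary repair images
    return weight_map * first_repair + (1.0 - weight_map) * second_repair
```

With a weight map of all ones the result equals the first repair image, and with all zeros it equals the second, so the network can interpolate per pixel between the two restorations.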
In one possible implementation, after the brightness increment information of the ith to-be-processed image is obtained in step S11, deblurring and frame insertion processing may be performed on the ith to-be-processed image in step S12. Each image to be processed may be interpolated, for example, n images (n is a positive integer) are inserted before and after each image to be processed, so as to increase the frame rate of the image to be processed. In this case, the repair image of the image to be processed includes 2n+1 repair images. For example, when n=3, the repair image of the image to be processed includes 7 repair images.
In one possible implementation, the (i-1)th to-be-processed image has 2n+1 repair images, and thus the second brightness increment information of the ith to-be-processed image relative to the repair images of the (i-1)th to-be-processed image also comprises 2n+1 pieces. In this case, the step of multiplying the repair image of the (i-1)th image to be processed by the second brightness increment information to obtain a second repair image of the ith image to be processed may include:
multiplying the 2n+1 repair images of the (i-1)th image to be processed by the corresponding second brightness increment information among the 2n+1 pieces, respectively, to obtain 2n+1 second repair images of the ith image to be processed.
That is, 2n+1 pieces of second luminance increment information are predicted in step S11, and in step S12 the 2n+1 repair images of the (i-1)th image to be processed may be multiplied by the corresponding second luminance increment information, respectively, to obtain 2n+1 second repair images, i.e., the 2n+1 repair images of the (i-1)th image preliminarily restored toward the ith image to be processed. In this way, the restoration effect of the image can be improved.
In one possible implementation, the brightness increment information further includes third brightness increment information (for example, single-integral brightness increment information obtained based on a single-integral principle) of 2n+1 repair images of the ith to-be-processed image relative to 2n+1 repair images of the ith to-1 to-be-processed image. That is, each repair image has a corresponding third brightness increment information, and 2n+1 third brightness increment information is obtained in total.
In one possible implementation, step S12 may further include:
multiplying the first repair image and the second repair image with the third brightness increment information respectively to obtain 2n+1 groups of third repair images;
respectively carrying out weight prediction on each third restoration image of each group in the 2n+1 group of third restoration images according to the ith to-be-processed image and event information of the ith to-be-processed image to obtain a third weight of the 2n+1 group of third restoration images;
and according to the third weight, weighting and adding all the third repair images in each group of the 2n+1 groups of third repair images to obtain 2n+1 repair images of the ith image to be processed.
For example, the first repair image (1 image) and the second repair images (2n+1 images) may be multiplied by each of the 2n+1 pieces of third luminance increment information, respectively, to obtain 2n+1 sets of third repair images. Each piece of third brightness increment information yields one set of third repair images, and each set contains 2n+2 images. For example, when n=3, 7 sets of 8 third repair images each can be obtained, totaling 56 images.
In one possible implementation, the weight prediction may be performed separately for each group of the respective third repair images. For any one of the 2n+1 groups of third repair images, the ith image to be processed, event information of the ith image to be processed and 2n+2 third repair images of the group can be overlapped, and the overlapped information is input into a weight prediction network to perform weight prediction, so that third weights of the 2n+2 third repair images of the group are obtained. Thus, the third weight of the 2n+1 group of third repair images can be obtained by processing the 2n+1 group of third repair image groups. The weight prediction network may be a convolutional neural network, including a plurality of convolutional layers, an active layer, etc., which is not limiting of the present disclosure.
In one possible implementation manner, for any one of 2n+1 sets of third repair images, the 2n+2 third repair images of the set may be weighted and added according to the third weights of the 2n+2 third repair images of the set, to obtain 1 repair image of the ith to-be-processed image. Thus, 2n+1 repair images of the ith image to be processed can be obtained by weighted addition of the 2n+1 group of third repair image groups.
By the method, deblurring and frame inserting processes of the image to be processed can be realized, the deblurring effect of the image is improved, and the frame rate of the image is improved.
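The grouped multiplication and weighted addition described above can be sketched as follows. This is an illustrative numpy sketch with hypothetical shapes; the per-group weights stand in for the output of the weight prediction network:

```python
import numpy as np

def fuse_third_repairs(prelim, third_deltas, weights):
    """Fuse each group of third repair images into one repair image.

    prelim: (2n+2, H, W) preliminary repairs (1 first + 2n+1 second repair images).
    third_deltas: (2n+1, H, W) third brightness increment maps.
    weights: (2n+1, 2n+2) per-group weights (in the disclosure these come from
    the weight prediction network; here they are supplied directly).
    Returns: (2n+1, H, W) repair images of the i-th frame."""
    outs = []
    for k in range(third_deltas.shape[0]):
        group = prelim * third_deltas[k]      # (2n+2, H, W) third repair images
        w = weights[k][:, None, None]         # broadcast one weight per image
        outs.append((w * group).sum(axis=0))  # weighted addition within the group
    return np.stack(outs)
```

Each of the 2n+1 groups collapses its 2n+2 third repair images into one output frame, giving the 2n+1 repair images of the ith image to be processed.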
FIG. 3 is a schematic diagram showing the processing procedure of the image processing method according to the embodiment of the present disclosure. As shown in FIG. 3, in step 31, the ith to-be-processed image Bi, the (i-1)th to-be-processed image Bi-1, the event information Ei of the ith to-be-processed image, the event information Ei-1 of the (i-1)th to-be-processed image and the repair image Si-1 of the (i-1)th to-be-processed image may be input into the integration network 311 (which may be referred to as an integrator net and includes the feature extraction network, the incremental prediction network, and the convolution kernel prediction network described above) to obtain the first brightness increment information Ci (1 piece), the second brightness increment information Pi (2n+1 pieces) and the third brightness increment information Ii (2n+1 pieces) of the ith image to be processed.
In step 32, the ith to-be-processed image Bi may be multiplied by the first brightness increment information Ci to obtain the first repair image (1 image); the repair images Si-1 of the (i-1)th to-be-processed image are respectively multiplied by the corresponding second brightness increment information Pi to obtain the second repair images (2n+1 images), resulting in a total of 2n+2 preliminary repair images 321.
In step 33, the 2n+2 preliminary repair images 321 can be respectively multiplied by the 2n+1 pieces of third brightness increment information Ii to obtain 2n+1 sets of third repair images 331, each set comprising 2n+2 images.
In step 34, the ith to-be-processed image Bi, the event information Ei of the ith to-be-processed image and a set of third repair images 331 may be superimposed, and the superimposed information is input into the weight prediction network 341 (which may be referred to as a gate net) to obtain the weight map Mi of the set of third repair images; according to the weight of each third repair image of the set, the 2n+2 third repair images of the set are weighted and added to obtain 1 repair image of the ith to-be-processed image. Thus, the 2n+1 sets of third repair images are processed to obtain the 2n+1 repair images Si of the ith to-be-processed image, thereby completing the deblurring and frame interpolation of the ith image to be processed.
In one possible implementation, the method may further include: and determining repair videos corresponding to the N images to be processed according to the repair images of the N images to be processed.
That is, the N images to be processed may be N image frames of a blurred, low-frame-rate video to be processed, and the N images may be processed in sequence to obtain their repair images, where each repair result may be a single image (without frame interpolation) or multiple images (with frame interpolation). The repair video corresponding to the N images to be processed is generated from the repair images. The definition of the repair video is higher than that of the video to be processed, realizing deblurring of the video; in the case of frame interpolation, the frame rate of the video is also increased, yielding a clear, high-frame-rate video.
In one possible implementation, before the neural network is deployed, the neural network may be trained, and the image processing method according to an embodiment of the disclosure further includes:
and training the neural network according to a preset training set, wherein the training set comprises a plurality of blurred sample images and a plurality of clear images corresponding to each sample image. For example, frame extraction and blurring processing can be performed on a clear video with high frame rate to obtain a plurality of blurred sample images; the original clear video is taken as a clear image corresponding to the sample image.
In one possible implementation, the sample images in the training set may be input into the neural network for processing to obtain a restored image of the sample images; determining a loss of the neural network according to the difference between the restored image and the clear image of the sample image; reversely adjusting network parameters of the neural network according to the loss; after multiple iterations, when the training condition (such as network convergence) is satisfied, a trained neural network is obtained. In this way, a training process of the neural network can be achieved.
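The training loop described above can be sketched with a toy stand-in model. The linear map, L1 loss and learning rate below are illustrative assumptions for showing the loop structure (forward pass, loss, reverse adjustment of parameters); the actual disclosure uses the feature extraction, increment prediction, convolution kernel prediction and weight prediction networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deblurring network: a single linear map.
W = rng.normal(size=(4, 4)) * 0.1

def forward(blurred):
    return blurred @ W

def l1_loss(pred, sharp):
    # difference between the restored image and the clear image
    return np.abs(pred - sharp).mean()

# Hypothetical training set: (blurred sample, sharp target) pairs.
pairs = [(rng.normal(size=(4, 4)), rng.normal(size=(4, 4))) for _ in range(8)]

lr = 0.05
for _ in range(5):  # iterate until a training condition would be satisfied
    for x, y in pairs:
        pred = forward(x)
        # subgradient of the L1 loss w.r.t. W, applied as the reverse adjustment
        grad = x.T @ np.sign(pred - y) / y.size
        W -= lr * grad
```

In practice the loss would be computed between the network's repair images and the sharp frames extracted from the original high-frame-rate video, and iteration would stop at convergence.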
According to the image processing method of the embodiments of the present disclosure, the events corresponding to the image frames can be processed in the feature domain based on the triggering principle of the event camera, the brightness difference between image frames can be determined, and deblurring and frame interpolation of the image frames can be realized using the double-integration and single-integration principles. The method utilizes the temporal information of the video: by propagating the deblurring result of the previous image frame to the current image frame, more details are preserved in the image and the deblurring effect is improved. Through an end-to-end convolutional neural network, the method fuses deep learning with event-camera deblurring theory, recovers clear video images, achieves higher precision and improves the visual effect.
The image processing method can be applied to scenarios such as film shooting, security monitoring and video processing, and deployed in electronic equipment such as mobile terminals and robots, to realize high-frame-rate video recording, synthesis and the like, effectively improving the frame rate of videos and the dynamic range of video image frames.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which are not repeated herein for brevity. It will be appreciated by those skilled in the art that, in the above-described methods of the embodiments, the particular order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the disclosure further provides an image processing apparatus, an electronic device, a computer readable storage medium, and a program, where the foregoing may be used to implement any one of the image processing methods provided in the disclosure, and corresponding technical schemes and descriptions and corresponding descriptions referring to method parts are not repeated.
Fig. 4 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
the incrementalinformation determining module 41 is configured to determine brightness incremental information of an ith to-be-processed image according to event information of the ith to-be-processed image and event information of an ith-1 to-be-processed image, where the event information is acquired by an event acquisition device, and i is an integer greater than 1; theimage restoration module 42 is configured to determine a restoration image of the ith to-be-processed image according to the ith to-be-processed image, the restoration image of the ith to-be-processed image, and the brightness increment information of the ith to-be-processed image, where the definition of the restoration image is greater than the definition of the to-be-processed image.
In one possible implementation manner, the incremental information determining module includes: the feature extraction sub-module is used for extracting features of the event information of the ith to-be-processed image and the event information of the ith to-be-processed image to obtain a first event feature of the ith to-be-processed image; a convolution sub-module, configured to convolve the first event feature according to a convolution kernel tensor of the first event feature, to obtain a second event feature of the ith image to be processed, where the convolution kernel tensor includes a convolution kernel of each channel of the first event feature; and the increment prediction sub-module is used for carrying out brightness increment prediction on the second event feature to obtain brightness increment information of the ith image to be processed.
In one possible implementation manner, the incremental information determining module further includes: and the convolution kernel prediction sub-module is used for carrying out convolution kernel prediction on each channel of the first event feature according to the reference information corresponding to the ith image to be processed to obtain a convolution kernel tensor of the first event feature, wherein the number of channels of the convolution kernel tensor is the same as that of the first event feature, and the reference information comprises the ith image to be processed and/or event information of the ith image to be processed.
In one possible implementation manner, the reference information further includes at least one of the i-1 th to-be-processed image, event information of the i-1 th to-be-processed image, and a repair image of the i-1 th to-be-processed image.
In one possible implementation, the brightness increment information includes: first brightness increment information of the ith to-be-processed image relative to the (i-1)th to-be-processed image, and second brightness increment information of the ith to-be-processed image relative to the repair image of the (i-1)th to-be-processed image; the image repair module comprises:
the first restoration submodule is used for multiplying the ith image to be processed with the first brightness increment information to obtain a first restoration image of the ith image to be processed; the second restoration submodule is used for multiplying the restoration image of the ith-1 to-be-processed image with the second brightness increment information to obtain a second restoration image of the ith to-be-processed image; the first weight predicting sub-module is used for predicting the weights of the first repair image and the second repair image according to the ith to-be-processed image and the event information of the ith to-be-processed image to obtain a first weight of the first repair image and a second weight of the second repair image; and the first weighted addition sub-module is used for weighted addition of the first repair image and the second repair image according to the first weight and the second weight to obtain the repair image of the ith to-be-processed image.
In one possible implementation manner, the repair image of the image to be processed includes 2n+1 repair images, n is a positive integer, the brightness increment information further includes third brightness increment information of 2n+1 repair images of the i-th image to be processed relative to 2n+1 repair images of the i-1-th image to be processed, and the image repair module further includes:
the third restoration submodule is used for multiplying the first restoration image and the second restoration image with the third brightness increment information respectively to obtain 2n+1 groups of third restoration images; the second weight predicting sub-module is used for respectively predicting the weight of each third repair image in each group of the 2n+1 group of third repair images according to the ith to-be-processed image and the event information of the ith to-be-processed image to obtain the third weight of the 2n+1 group of third repair images; and the second weighted addition sub-module is used for weighted addition of all the third repair images in each group of the 2n+1 groups of the third repair images according to the third weight value to obtain 2n+1 repair images of the ith to-be-processed image.
In one possible implementation, the second brightness increment information includes 2n+1 pieces of second brightness increment information, and the second repair submodule is configured to: multiply the 2n+1 repair images of the (i-1)th image to be processed by the corresponding second brightness increment information among the 2n+1 pieces, respectively, to obtain 2n+1 second repair images of the ith image to be processed.
In a possible implementation manner, there are N images to be processed, where N is an integer and 1 < i ≤ N, and the apparatus further includes: a video restoration module, configured to determine the repair video corresponding to the N images to be processed according to the repair images of the N images to be processed.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
The disclosed embodiments also provide a computer program product comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the image processing method as provided in any of the embodiments above.
The disclosed embodiments also provide another computer program product for storing computer readable instructions that, when executed, cause a computer to perform the operations of the image processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.

Referring to fig. 5, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.

The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.

The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or nonvolatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.

The power component 806 provides power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.

The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 804 including computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 6, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the methods described above.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 1932 including computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

wherein the determining the brightness increment information of the ith to-be-processed image according to the event information of the ith to-be-processed image and the event information of the (i-1)th to-be-processed image comprises: performing feature extraction on the event information of the ith to-be-processed image and the event information of the (i-1)th to-be-processed image to obtain a first event feature of the ith to-be-processed image; convolving the first event feature according to a convolution kernel tensor of the first event feature to obtain a second event feature of the ith to-be-processed image, wherein the convolution kernel tensor comprises a convolution kernel for each channel of the first event feature; and performing brightness increment prediction on the second event feature to obtain the brightness increment information of the ith to-be-processed image;
Wherein the increment information determining module includes: a feature extraction sub-module, configured to perform feature extraction on the event information of the ith to-be-processed image and the event information of the (i-1)th to-be-processed image to obtain a first event feature of the ith to-be-processed image; a convolution sub-module, configured to convolve the first event feature according to a convolution kernel tensor of the first event feature to obtain a second event feature of the ith to-be-processed image, wherein the convolution kernel tensor comprises a convolution kernel for each channel of the first event feature; and an increment prediction sub-module, configured to perform brightness increment prediction on the second event feature to obtain the brightness increment information of the ith to-be-processed image;
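The per-channel convolution recited in the claims above (convolving the first event feature with a convolution kernel tensor that holds one kernel per channel) can be sketched as follows. The function name `channelwise_conv`, the array shapes, and the plain NumPy loops are illustrative assumptions for exposition; the patent does not specify an implementation.

```python
import numpy as np

def channelwise_conv(first_event_feature, kernel_tensor):
    """Convolve each channel of the first event feature with its own kernel.

    first_event_feature: (C, H, W) array -- the first event feature.
    kernel_tensor:       (C, k, k) array -- the "convolution kernel tensor",
                         one k-by-k kernel per channel.
    Returns a (C, H, W) second event feature, with "same" zero padding.
    """
    C, H, W = first_event_feature.shape
    k = kernel_tensor.shape[-1]
    pad = k // 2
    padded = np.pad(first_event_feature, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(first_event_feature)
    for c in range(C):  # each channel uses its own kernel
        for y in range(H):
            for x in range(W):
                out[c, y, x] = np.sum(padded[c, y:y + k, x:x + k] * kernel_tensor[c])
    return out

# With an identity kernel (a 1 at the center), each channel passes through unchanged.
feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
kernels = np.zeros((2, 3, 3))
kernels[:, 1, 1] = 1.0
assert np.allclose(channelwise_conv(feat, kernels), feat)
```

In deep-learning frameworks this operation corresponds to a depthwise (grouped) convolution; the explicit loops above are only meant to make the per-channel structure visible.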
CN202010140777.3A | 2020-03-03 (priority) | 2020-03-03 (filed) | Image processing method and device, electronic equipment and storage medium | Active | CN111369482B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010140777.3A | 2020-03-03 | 2020-03-03 | Image processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number | Publication Date
CN111369482A (en) | 2020-07-03
CN111369482B (en) | 2023-06-23

Family

ID=71211189

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN202010140777.3A | Active | CN111369482B (en) | 2020-03-03 | 2020-03-03

Country Status (1)

Country | Link
CN (1) | CN111369482B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112738442B (en)* | 2020-12-24 | 2021-10-08 | 中标慧安信息技术股份有限公司 | Intelligent monitoring video storage method and system
CN116134829B (en)* | 2020-12-31 | 2025-10-03 | 华为技术有限公司 | Image processing method and device
US20240177485A1 (en)* | 2021-04-02 | 2024-05-30 | Sony Semiconductor Solutions Corporation | Sensor device and semiconductor device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106991650A (en)* | 2016-01-21 | 2017-07-28 | 北京三星通信技术研究有限公司 | Image deblurring method and apparatus
CN107330867A (en)* | 2017-06-16 | 2017-11-07 | 广东欧珀移动通信有限公司 | Image synthesis method, device, computer readable storage medium and computer equipment
WO2019105305A1 (en)* | 2017-11-28 | 2019-06-06 | Oppo广东移动通信有限公司 | Image brightness processing method, computer readable storage medium and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8654848B2 (en)* | 2005-10-17 | 2014-02-18 | Qualcomm Incorporated | Method and apparatus for shot detection in video streaming
FR3033973A1 (en)* | 2015-03-16 | 2016-09-23 | Univ Pierre Et Marie Curie Paris 6 | Method for 3D reconstruction of a scene
US10062151B2 (en)* | 2016-01-21 | 2018-08-28 | Samsung Electronics Co., Ltd. | Image deblurring method and apparatus


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Liyuan Pan et al.; "Bringing a blurry frame alive at high frame-rate with an event camera"; 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition; pp. 6813-6822 *
Jinshan Pan et al.; "Deblurring Images via Dark Channel Prior"; IEEE Transactions on Pattern Analysis and Machine Intelligence; pp. 2315-2328 *
Meiguang Jin et al.; "Learning to extract a video sequence from a single motion-blurred image"; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; pp. 6334-6342 *
Jiang Meng et al.; "Event camera denoising algorithm under a low-dimensional manifold constraint"; Journal of Signal Processing (《信号处理》); Vol. 35, No. 10; pp. 1753-1761 *


Similar Documents

Publication | Title
CN111462268B (en) | Image reconstruction method and device, electronic equipment and storage medium
CN109118430B (en) | Super-resolution image reconstruction method and device, electronic equipment and storage medium
CN110060215B (en) | Image processing method and device, electronic equipment and storage medium
CN110675355B (en) | Image reconstruction method and device, electronic device and storage medium
CN111340731B (en) | Image processing method and device, electronic equipment and storage medium
CN111340733B (en) | Image processing method and device, electronic equipment and storage medium
CN113139947B (en) | Image processing method and device, electronic equipment and storage medium
CN111445415B (en) | Image restoration method and device, electronic equipment and storage medium
CN111369482B (en) | Image processing method and device, electronic equipment and storage medium
CN109635926B (en) | Attention feature acquisition method and device for neural network and storage medium
CN110415258B (en) | Image processing method and device, electronic equipment and storage medium
CN110633715B (en) | Image processing method, network training method and device and electronic equipment
CN109903252B (en) | Image processing method and device, electronic equipment and storage medium
CN113506229A (en) | Neural network training and image generation method and device
CN109840890B (en) | Image processing method and device, electronic equipment and storage medium
CN111488964B (en) | Image processing method and device, and neural network training method and device
CN109816620B (en) | Image processing method and device, electronic equipment and storage medium
CN113034407B (en) | Image processing method and device, electronic equipment and storage medium
CN113660531B (en) | Video processing method and device, electronic equipment and storage medium
CN113177890B (en) | Image processing method and device, electronic equipment and storage medium
CN112734015B (en) | Network generation method and device, electronic equipment and storage medium
CN111553865B (en) | Image restoration method and device, electronic equipment and storage medium
CN111507131B (en) | Living body detection method and device, electronic equipment and storage medium
CN112837237A (en) | Video repair method and device, electronic device and storage medium
CN113177889B (en) | Image processing method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
