Disclosure of Invention
Embodiments of the present application provide a method, apparatus, electronic device, and readable storage medium for fundus image processing to reduce information loss of fundus images when the fundus images are brightened.
In a first aspect, an embodiment of the present application provides a method for image processing, including:
acquiring a fundus image to be processed; and
performing brightness enhancement processing on the fundus image to be processed by using a trained brightness enhancement model to obtain a processed target fundus image, wherein the brightness enhancement model is obtained by training a countermeasure generation network based on an image quality loss function.
In the above implementation process, the trained brightness enhancement model is used to perform brightness enhancement processing on fundus images to be processed, and the method can perform brightness enhancement on fundus images to be processed in batches, which improves image processing efficiency, reduces the workload of manual image processing, and saves image processing cost. Moreover, the countermeasure generation network is trained with the image quality loss function to obtain the brightness enhancement model, which improves the precision of parameter adjustment during model training, so that the loss of texture detail features of the fundus image after brightness enhancement can be reduced and the image quality of the brightened fundus image is further improved.
With reference to the first aspect, in one implementation manner, before performing brightness enhancement processing on the fundus image to be processed by using the trained brightness enhancement model, the method further includes:
a sample fundus image set is acquired.
And inputting the sample fundus images in the sample fundus image set into a countermeasure generation network to obtain corresponding reconstructed sample fundus images.
Using the target loss function, an overall loss value between a sample fundus image in the set of sample fundus images and a corresponding reconstructed sample fundus image is determined.
And adjusting parameters in the countermeasure generation network according to the total loss value to obtain a trained brightness enhancement model.
In the implementation process, the countermeasure generation network is trained through the sample fundus image and the target loss function, so that a trained brightness enhancement model is obtained, and training of the brightness enhancement model is achieved.
With reference to the first aspect, in one implementation manner, acquiring a sample fundus image set includes:
an initial image set is acquired, wherein the initial image set comprises a first initial image set and a second initial image set, the first initial image set comprises at least one first initial image, and the second initial image set comprises at least one second initial image.
The brightness value of each first initial image in the first initial image set is located in a first brightness value range, the brightness value of each second initial image in the second initial image set is located in a second brightness value range, and the maximum brightness value of the first brightness value range is smaller than the minimum brightness value of the second brightness value range.
Cutting black edges, extracting features and adjusting the size of each first initial image in the first initial image set to obtain a first preprocessing initial image set, wherein the first preprocessing initial image set comprises first preprocessing initial images corresponding to each first initial image.
And cutting black edges, extracting features and adjusting the size of each second initial image in the second initial image set to obtain a second preprocessing initial image set, wherein the second preprocessing initial image set comprises second preprocessing initial images corresponding to each second initial image.
At least one first preprocessing initial image in the first preprocessing initial image set is randomly cut and turned over to obtain a first sample fundus image set, wherein the first sample fundus image set comprises a plurality of first sample fundus images.
And randomly cutting and overturning at least one second preprocessing initial image in the second preprocessing initial image set to obtain a second sample fundus image set, wherein the second sample fundus image set comprises a plurality of second sample fundus images.
The first sample fundus image set and the second sample fundus image set are sample fundus image sets, and the plurality of first sample fundus images and the plurality of second sample fundus images are sample fundus images.
In the above implementation process, the obtained initial image is preprocessed and data is expanded, so that a sample fundus image for training the brightness enhancement model is obtained.
With reference to the first aspect, in one implementation, the countermeasure generation network includes a first generator and a second generator.
Inputting the sample fundus images in the sample fundus image set into a countermeasure generation network to obtain corresponding reconstructed sample fundus images, including:
A first sample fundus image in the first sample fundus image set is input into a first generator to obtain a first reconstructed bright image.
Inputting the first reconstructed bright image into a second generator to obtain a first reconstructed dark image;
And inputting a second sample fundus image in the second sample fundus image set into a second generator to obtain a second reconstructed dark image.
Inputting the second reconstructed dark image into a first generator to obtain a second reconstructed bright image;
The first reconstructed dark image and the second reconstructed bright image are the reconstructed sample fundus images.
In the implementation process, the first sample fundus image and the second sample fundus image are respectively reconstructed through the first generator and the second generator, so that parameter adjustment is conveniently carried out on the first generator and the second generator according to the reconstructed image and the original sample fundus image, and the first generator and the second generator are trained so as to improve the precision of the brightness enhancement model.
With reference to the first aspect, in one implementation manner, the target loss function includes an adversarial loss function, a cyclic consistency loss function, and an image quality loss function, and the countermeasure generation network further includes a first discriminator and a second discriminator;
Determining an overall loss value between a sample fundus image in the set of sample fundus images and a corresponding reconstructed sample fundus image using the target loss function, comprising:
Determining a first image quality loss value between a first sample fundus image in the first sample fundus image set and a corresponding first reconstructed dark image by using the image quality loss function;
determining a first cyclic loss value between a first sample fundus image in the first sample fundus image set and a corresponding first reconstructed dark image by using the cyclic consistency loss function;
determining a first discrimination value between a first sample fundus image in the first sample fundus image set and the second reconstructed dark image by using the first discriminator and the adversarial loss function;
determining a second image quality loss value between a second sample fundus image in the second sample fundus image set and a corresponding second reconstructed bright image by using the image quality loss function;
determining a second cyclic loss value between a second sample fundus image in the second sample fundus image set and a corresponding second reconstructed bright image by using the cyclic consistency loss function;
determining a second discrimination value between a second sample fundus image in the second sample fundus image set and the first reconstructed bright image by using the second discriminator and the adversarial loss function;
The total loss value is the sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the second image quality loss value, the second cyclic loss value, and the second discrimination value.
In the above implementation process, the overall loss value between the sample fundus image and the reconstructed image is calculated through the image quality loss function, the cyclic consistency loss function, and the adversarial loss function, so that parameters of the countermeasure generation network can be conveniently adjusted according to the overall loss value, thereby improving the accuracy of the images generated by the countermeasure generation network.
With reference to the first aspect, in one embodiment, the target loss function is characterized as in formula (1):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λL_cyc(G, F) + L_identity(G, F) + L_ssim(G, F)   (1)
Wherein L_ssim(G, F) = 1 - |SSIM(X, F(G(X)))| + 1 - |SSIM(Y, G(F(Y)))| = 2 - |SSIM(X, F(G(X)))| - |SSIM(Y, G(F(Y)))|, G represents the first generator, F represents the second generator, D_X represents the first discriminator, D_Y represents the second discriminator, X represents the first sample fundus image, Y represents the second sample fundus image, L(G, F, D_X, D_Y) represents the target loss function, L_GAN(G, D_Y, X, Y) represents the adversarial loss function of the first generator and the second discriminator, L_GAN(F, D_X, Y, X) represents the adversarial loss function of the second generator and the first discriminator, L_cyc(G, F) represents the cyclic consistency loss function between the first generator and the second generator, λ is the coefficient of the cyclic consistency loss function, L_identity(G, F) represents the near-identity loss function between the first generator and the second generator, and L_ssim(G, F) represents the image quality loss function between the first generator and the second generator.
With reference to the first aspect, in one implementation manner, adjusting parameters in the countermeasure generation network according to the total loss value to obtain a trained brightness enhancement model includes:
Judging whether the total loss value accords with a preset training condition or not;
If yes, determining the first generator as a trained brightness enhancement model;
If not, the parameters of the first generator and the parameters of the second generator are adjusted through the Ranger optimizer until the total loss value meets the preset training condition.
In the implementation process, the optimizer adjusts parameters in the first generator and the second generator, so that stability of the countermeasure generation network can be enhanced, and convergence speed of the countermeasure generation network can be improved.
In a second aspect, an embodiment of the present application provides an apparatus for image processing, including:
The acquisition module is used for acquiring the fundus image to be processed.
The processing module is used for carrying out brightness enhancement processing on the fundus image to be processed by adopting a trained brightness enhancement model to obtain a processed target fundus image, wherein the brightness enhancement model is obtained by training the countermeasure generation network based on an image quality loss function.
With reference to the second aspect, in one embodiment, the processing module is further configured to:
a sample fundus image set is acquired.
And inputting the sample fundus images in the sample fundus image set into a countermeasure generation network to obtain corresponding reconstructed sample fundus images.
Using the target loss function, an overall loss value between a sample fundus image in the set of sample fundus images and a corresponding reconstructed sample fundus image is determined.
And adjusting parameters in the countermeasure generation network according to the total loss value to obtain a trained brightness enhancement model.
With reference to the second aspect, in one embodiment, the processing module is specifically configured to:
an initial image set is acquired, wherein the initial image set comprises a first initial image set and a second initial image set, the first initial image set comprises at least one first initial image, and the second initial image set comprises at least one second initial image.
The brightness value of each first initial image in the first initial image set is located in a first brightness value range, the brightness value of each second initial image in the second initial image set is located in a second brightness value range, and the maximum brightness value of the first brightness value range is smaller than the minimum brightness value of the second brightness value range.
Cutting black edges, extracting features and adjusting the size of each first initial image in the first initial image set to obtain a first preprocessing initial image set, wherein the first preprocessing initial image set comprises first preprocessing initial images corresponding to each first initial image.
And cutting black edges, extracting features and adjusting the size of each second initial image in the second initial image set to obtain a second preprocessing initial image set, wherein the second preprocessing initial image set comprises second preprocessing initial images corresponding to each second initial image.
At least one first preprocessing initial image in the first preprocessing initial image set is randomly cut and turned over to obtain a first sample fundus image set, wherein the first sample fundus image set comprises a plurality of first sample fundus images.
And randomly cutting and overturning at least one second preprocessing initial image in the second preprocessing initial image set to obtain a second sample fundus image set, wherein the second sample fundus image set comprises a plurality of second sample fundus images.
The first sample fundus image set and the second sample fundus image set are sample fundus image sets, and the plurality of first sample fundus images and the plurality of second sample fundus images are sample fundus images.
With reference to the second aspect, in one embodiment, the countermeasure generation network includes a first generator and a second generator.
The processing module is specifically used for inputting a first sample fundus image in the first sample fundus image set into the first generator to obtain a first reconstructed bright image.
And inputting the first reconstructed bright image into a second generator to obtain a first reconstructed dark image.
And inputting a second sample fundus image in the second sample fundus image set into a second generator to obtain a second reconstructed dark image.
And inputting the second reconstructed dark image into a first generator to obtain a second reconstructed bright image.
The first reconstructed dark image and the second reconstructed bright image are the reconstructed sample fundus images.
With reference to the second aspect, in one embodiment, the target loss function includes an adversarial loss function, a cyclic consistency loss function, and an image quality loss function, and the countermeasure generation network further includes a first discriminator and a second discriminator;
The processing module is specifically used for:
Determining an overall loss value between a sample fundus image in the set of sample fundus images and a corresponding reconstructed sample fundus image using the target loss function, comprising:
A first image quality loss value between a first sample fundus image in the first sample fundus image set and a corresponding first reconstructed dark image is determined using the image quality loss function.
A first cyclic loss value between a first sample fundus image in the first sample fundus image set and a corresponding first reconstructed dark image is determined using the cyclic consistency loss function.
A first discrimination value between a first sample fundus image in the first sample fundus image set and the second reconstructed dark image is determined using the first discriminator and the adversarial loss function.
A second image quality loss value between a second sample fundus image in the second sample fundus image set and a corresponding second reconstructed bright image is determined using the image quality loss function.
A second cyclic loss value between a second sample fundus image in the second sample fundus image set and a corresponding second reconstructed bright image is determined using the cyclic consistency loss function.
A second discrimination value between a second sample fundus image in the second sample fundus image set and the first reconstructed bright image is determined using the second discriminator and the adversarial loss function.
The total loss value is the sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the second image quality loss value, the second cyclic loss value, and the second discrimination value.
With reference to the second aspect, in one embodiment, the objective loss function is characterized as in formula (1):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λL_cyc(G, F) + L_identity(G, F) + L_ssim(G, F)   (1)
Wherein L_ssim(G, F) = 1 - |SSIM(X, F(G(X)))| + 1 - |SSIM(Y, G(F(Y)))| = 2 - |SSIM(X, F(G(X)))| - |SSIM(Y, G(F(Y)))|, G represents the first generator, F represents the second generator, D_X represents the first discriminator, D_Y represents the second discriminator, X represents the first sample fundus image, Y represents the second sample fundus image, L(G, F, D_X, D_Y) represents the target loss function, L_GAN(G, D_Y, X, Y) represents the adversarial loss function of the first generator and the second discriminator, L_GAN(F, D_X, Y, X) represents the adversarial loss function of the second generator and the first discriminator, L_cyc(G, F) represents the cyclic consistency loss function between the first generator and the second generator, λ is the coefficient of the cyclic consistency loss function, L_identity(G, F) represents the near-identity loss function between the first generator and the second generator, and L_ssim(G, F) represents the image quality loss function between the first generator and the second generator.
With reference to the second aspect, in one embodiment, the processing module is specifically configured to:
And judging whether the total loss value accords with a preset training condition.
If yes, the first generator is determined to be a trained brightness enhancement model.
If not, the parameters of the first generator and the parameters of the second generator are adjusted through the Ranger optimizer until the total loss value meets the preset training condition.
In a third aspect, an embodiment of the present application provides an electronic device, including:
A processor, a memory and a bus, the processor being connected to the memory by the bus, the memory storing computer readable instructions which, when executed by the processor, are adapted to carry out the method provided by any of the embodiments of the first aspect described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs steps in a method as provided by any of the embodiments of the first aspect described above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Some of the terms involved in the embodiments of the present application will be described first to facilitate understanding by those skilled in the art.
The terminal device may be a mobile terminal, a fixed terminal or a portable terminal, such as a mobile handset, a station, a unit, a device, a multimedia computer, a multimedia tablet, an internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a personal communication system device, a personal navigation device, a personal digital assistant, an audio/video player, a digital camera/camcorder, a positioning device, a television receiver, a radio broadcast receiver, an electronic book device, a game device, or any combination thereof, including the accessories and peripherals of these devices or any combination thereof. It is also contemplated that the terminal device can support any type of interface for the user (e.g., a wearable device), and the like.
The server can be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms.
The fundus is composed of retina, fundus blood vessel, optic nerve head, optic nerve fiber, macula on retina, choroid behind retina, etc. The fundus image is an image of a fundus region acquired using a fundus camera. In the medical field, fundus images are generally used for screening and detecting fundus lesions, but due to the reasons of retina pathology or imaging configuration and the like, the brightness of some fundus images obtained by a fundus camera is very low, and the fundus lesions cannot be accurately screened and detected through the fundus images, so that the accuracy of fundus lesion detection is affected.
In the related art, brightness enhancement is performed on fundus images through a deep learning network model, but due to lower accuracy of training parameters in the training process of the deep learning network model, resolution of generated brightness enhancement fundus images is lower, and accurate screening and detection of fundus lesions are affected.
Therefore, improving the parameter training accuracy of the deep learning network model to obtain a fundus image with high resolution through the trained deep learning network model is a problem to be solved.
Referring to fig. 1, fig. 1 is a flowchart of a fundus image processing method according to an embodiment of the present application, in which an execution subject of the method may be an electronic device, and optionally, the electronic device may be a server or a terminal device, but the present application is not limited thereto.
As an embodiment, the method shown in fig. 1 is implemented as follows:
Step 101, acquiring a fundus image to be processed.
Specifically, in performing step 101, the following steps may be employed:
step one, acquiring an original fundus image to be processed.
As an embodiment, the original fundus image to be processed is an original low-luminance fundus image, and the electronic apparatus further acquires the original low-luminance fundus image.
And secondly, cutting black edges, extracting features and adjusting the size of the original fundus image to be processed to obtain the fundus image to be processed.
Further, the original fundus image to be processed is preprocessed, and the fundus image to be processed is obtained.
Specifically, the original low-brightness fundus image is subjected to black edge cutting, feature extraction and size adjustment, so that a preprocessed low-brightness fundus image is obtained, namely, the fundus image to be processed is the preprocessed low-brightness fundus image.
The feature extraction may be to add a field mask to the image, or may extract structural features of the image, which is not limited herein.
In the implementation process, the original fundus image is preprocessed by cutting black edges, extracting features and adjusting the size, so that the resolution of the generated brightness-enhanced image can be improved.
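As an illustrative sketch only (the application does not prescribe a particular implementation), the black-edge cropping and size adjustment could be performed as follows; the black-pixel threshold of 10, the output size of 512 x 512, and the use of OpenCV are assumptions, and the feature extraction step (e.g. adding a field mask) is omitted:

```python
import cv2
import numpy as np

def preprocess_fundus_image(path, out_size=512, black_threshold=10):
    """Crop the black borders of a fundus image and resize it (illustrative only)."""
    image = cv2.imread(path)                      # BGR image, H x W x 3
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mask = gray > black_threshold                 # non-black (fundus) pixels
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]           # first/last non-black row
    c0, c1 = np.where(cols)[0][[0, -1]]           # first/last non-black column
    cropped = image[r0:r1 + 1, c0:c1 + 1]
    return cv2.resize(cropped, (out_size, out_size))
```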
Step 102, performing brightness enhancement processing on the fundus image to be processed by using the trained brightness enhancement model to obtain a processed target fundus image.
Wherein the brightness enhancement model is obtained by training the countermeasure generation network based on an image quality loss function.
Specifically, the image quality loss function is characterized as in formula (2):
ssim_loss(x, y) = 1 - SSIM(x, y)   (2)

Wherein SSIM(x, y) = l(x, y)^α · c(x, y)^β · s(x, y)^γ, x denotes a sample fundus image, y denotes the reconstructed sample fundus image corresponding to the sample fundus image, ssim_loss(x, y) denotes the image quality loss value of the image quality loss function, l(x, y) denotes the luminance difference value between the sample fundus image and the corresponding reconstructed sample fundus image, c(x, y) denotes the contrast difference value between the sample fundus image and the corresponding reconstructed sample fundus image, s(x, y) denotes the feature difference value between the sample fundus image and the corresponding reconstructed sample fundus image, and α, β, and γ denote the exponents of the luminance difference value, the contrast difference value, and the feature difference value, respectively; α, β, and γ are arbitrary constants.
The luminance difference between the sample fundus image x and the corresponding reconstructed sample fundus image y is expressed by the formula (3):
l(x, y) = (2μ_x μ_y + c_1) / (μ_x^2 + μ_y^2 + c_1)   (3)

Where l(x, y) denotes the luminance difference value between the sample fundus image x and the corresponding reconstructed sample fundus image y, μ_x denotes the average gray level of the sample fundus image x, μ_y denotes the average gray level of the reconstructed sample fundus image y corresponding to the sample fundus image, and c_1 denotes a constant for maintaining stability, for example c_1 = (k_1 L)^2, where k_1 may be 0.01 or 0.02 and L denotes the dynamic range of the pixel values, which is not limited herein.
The contrast difference between the sample fundus image x and the corresponding reconstructed sample fundus image y is expressed by the formula (4):
c(x, y) = (2σ_x σ_y + c_2) / (σ_x^2 + σ_y^2 + c_2)   (4)

Where c(x, y) denotes the contrast difference value between the sample fundus image x and the corresponding reconstructed sample fundus image y, σ_x denotes the standard deviation of the pixel values of the sample fundus image x, σ_y denotes the standard deviation of the pixel values of the reconstructed sample fundus image y, and c_2 denotes a constant for maintaining stability, for example c_2 = (k_2 L)^2, where k_2 may be 0.02 or 0.03 and L denotes the dynamic range of the pixel values, which is not limited herein.
The characteristic difference value between the sample fundus image x and the corresponding reconstructed sample fundus image y is expressed by the formula (5):

s(x, y) = (σ_xy + c_3) / (σ_x σ_y + c_3)   (5)

Where s(x, y) represents the feature difference value between the sample fundus image x and the corresponding reconstructed sample fundus image y, σ_xy represents the pixel covariance between the sample fundus image x and the corresponding reconstructed sample fundus image y, and c_3 is a constant; for example, c_3 may take the value c_3 = c_2 / 2.
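The following minimal Python sketch illustrates how the image quality loss of formula (2) could be computed globally over a pair of grayscale images with α = β = γ = 1; the constants k_1 = 0.01 and k_2 = 0.03 and the global (non-windowed) computation are illustrative assumptions rather than values fixed by the application:

```python
import numpy as np

def ssim_loss(x, y, L=255.0, k1=0.01, k2=0.03):
    """Image quality loss of formula (2): 1 - SSIM(x, y), computed globally."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    c3 = c2 / 2.0
    mu_x, mu_y = x.mean(), y.mean()                    # average gray levels
    sigma_x, sigma_y = x.std(), y.std()                # pixel standard deviations
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()        # pixel covariance
    l = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)              # luminance term
    c = (2 * sigma_x * sigma_y + c2) / (sigma_x ** 2 + sigma_y ** 2 + c2)  # contrast term
    s = (sigma_xy + c3) / (sigma_x * sigma_y + c3)                         # structure term
    return 1.0 - l * c * s
```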
Specifically, before step 102 is performed, a trained brightness enhancement model is obtained, as shown in fig. 2, fig. 2 is a training flowchart of the brightness enhancement model provided by the embodiment of the present application, and specifically, when the brightness enhancement model is trained, the following steps may be adopted:
S1021, acquiring a sample fundus image set.
Specifically, in executing S1021, the following steps may be adopted:
and a step a of acquiring an initial image set.
The initial image set comprises a first initial image set and a second initial image set, wherein the first initial image set comprises at least one first initial image, and the second initial image set comprises at least one second initial image.
The brightness value of each first initial image in the first initial image set is located in a first brightness value range, the brightness value of each second initial image in the second initial image set is located in a second brightness value range, and the maximum brightness value of the first brightness value range is smaller than the minimum brightness value of the second brightness value range.
Specifically, a plurality of fundus images are collected, the fundus images are screened according to a first brightness value range and a second brightness value range, a low-brightness fundus image set and a high-brightness fundus image set are obtained, wherein the low-brightness fundus image set comprises at least one low-brightness fundus image, the high-brightness fundus image set comprises at least one high-brightness fundus image, and the low-brightness fundus image set and the high-brightness fundus image set form an initial image set.
As one embodiment, the first luminance value range is [25,40], the second luminance value range is [85,100], the luminance value of each fundus image in the acquired plurality of fundus images is determined, the fundus image with the luminance value in the first luminance value range is taken as an image in the low-luminance fundus image set, the fundus image with the luminance value in the second luminance value range is taken as an image in the high-luminance fundus image set, and the fundus image with the luminance value in neither the first luminance value range nor the second luminance value range is discarded, so that the low-luminance fundus image set and the high-luminance fundus image set are formed.
That is, the low-luminance fundus image set is a first initial image set, and the high-luminance fundus image set is a second initial image set.
In the embodiment of the present application, the first luminance value range of [25,40] and the second luminance value range of [85,100] are described only as examples; in practical applications, the first luminance value range may be [30,60] and the second luminance value range may be [70,90], and the first luminance value range and the second luminance value range may be set according to the actual situation, which is not limited herein.
In the implementation process, the collected multiple fundus images are screened according to the first brightness value range and the second brightness value range to obtain a low-brightness fundus image set and a high-brightness fundus image set, so that fundus image classification is realized.
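As a purely illustrative sketch of the screening described above, the collected images could be partitioned by brightness as follows; measuring brightness as the mean gray level of the image is an assumption, since the application does not specify how the brightness value is computed:

```python
import cv2

def partition_by_brightness(paths, low_range=(25, 40), high_range=(85, 100)):
    """Split fundus images into a low-brightness set and a high-brightness set."""
    low_set, high_set = [], []
    for path in paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        brightness = float(gray.mean())            # assumed brightness measure
        if low_range[0] <= brightness <= low_range[1]:
            low_set.append(path)                   # first initial image set
        elif high_range[0] <= brightness <= high_range[1]:
            high_set.append(path)                  # second initial image set
        # images outside both ranges are discarded
    return low_set, high_set
```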
And b, cutting black edges, extracting features and adjusting the size of each first initial image in the first initial image set to obtain a first preprocessed initial image set, wherein the first preprocessed initial image set comprises first preprocessed initial images corresponding to each first initial image, and cutting black edges, extracting features and adjusting the size of each second initial image in the second initial image set to obtain a second preprocessed initial image set, and the second preprocessed initial image set comprises second preprocessed initial images corresponding to each second initial image.
Further, the images in the first initial image set and the images in the second initial image set are preprocessed respectively.
Specifically, each low-brightness fundus image in the low-brightness fundus image set is subjected to cutting black edge, feature extraction and size adjustment to generate a preprocessed low-brightness fundus image set, and each high-brightness fundus image in the high-brightness fundus image set is subjected to cutting black edge, feature extraction and size adjustment to generate a preprocessed high-brightness fundus image set.
And c, randomly cutting and overturning at least one first preprocessing initial image in the first preprocessing initial image set to obtain a first sample fundus image set, wherein the first sample fundus image set comprises a plurality of first sample fundus images, and randomly cutting and overturning at least one second preprocessing initial image in the second preprocessing initial image set to obtain a second sample fundus image set, wherein the second sample fundus image set comprises a plurality of second sample fundus images.
The first sample fundus image set and the second sample fundus image set are sample fundus image sets, and the plurality of first sample fundus images and the plurality of second sample fundus images are sample fundus images.
Further, the low-brightness fundus image and the high-brightness fundus image after the pretreatment are respectively subjected to data expansion.
As one embodiment, the method comprises the steps of randomly cutting and turning images in a preprocessed low-brightness fundus image set to generate a low-brightness fundus image sample set, and randomly cutting and turning images in a preprocessed high-brightness fundus image set to generate a high-brightness fundus image sample set.
Wherein, the low-brightness sample fundus images in the low-brightness fundus image sample set and the high-brightness sample fundus images in the high-brightness fundus image sample set are both sample fundus images for training the brightness enhancement model.
In the embodiment of the present application, only random cutting and flipping processes are described as data expansion.
As an embodiment, if the size of a fundus image in the preprocessed low-brightness fundus image set or in the preprocessed high-brightness fundus image set does not meet the size after random cropping, the corresponding image needs to be randomly cropped so that it meets the cropped size. Further, after an image in the preprocessed low-brightness fundus image set or in the preprocessed high-brightness fundus image set is randomly cropped, the randomly cropped image may be further flipped to obtain a sample fundus image, or, according to the practical application, the randomly cropped image may be used directly as the sample fundus image without flipping.
As an embodiment, if the size of an image in the preprocessed low-brightness fundus image set or in the preprocessed high-brightness fundus image set already meets the size after random cropping, the image may be flipped directly to obtain a sample fundus image, or, according to the practical application, the image meeting the cropped size may be used directly as the sample fundus image.
The number of images in the high-brightness fundus image set after the data expansion is the same as the number of images in the low-brightness fundus image set.
It should be noted that the size of the fundus image after random cropping is 1/4 of the size of the original fundus image, and the size after random cropping may be set to other sizes according to practical applications, which is not limited herein.
In the above implementation process, the obtained initial image is preprocessed and data is expanded, so that a sample fundus image for training the brightness enhancement model is obtained.
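An illustrative sketch of the random cropping and flipping used for data expansion is given below; interpreting "1/4 of the size" as one quarter of the image area (half of each side) and the use of a horizontal flip with probability 0.5 are assumptions:

```python
import random
import numpy as np

def random_crop_and_flip(image, flip_prob=0.5):
    """Randomly crop an image to 1/4 of its area (half of each side) and optionally flip it."""
    h, w = image.shape[:2]
    ch, cw = h // 2, w // 2                        # assumed: 1/4 area = half per side
    top = random.randint(0, h - ch)
    left = random.randint(0, w - cw)
    patch = image[top:top + ch, left:left + cw]
    if random.random() < flip_prob:                # optional horizontal flip
        patch = np.ascontiguousarray(patch[:, ::-1])
    return patch
```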
S1022, inputting the sample fundus images in the sample fundus image set into the countermeasure generation network to obtain corresponding reconstructed sample fundus images.
As shown in fig. 3, fig. 3 is a schematic diagram of a countermeasure generation network according to an embodiment of the present application, where the countermeasure generation network shown in fig. 3 includes a first generator 303, a second generator 306, a first discriminator 305, and a second discriminator 310.
As an embodiment, the first generator is an initial brightness enhancement generator and the second generator is an initial brightness reduction generator.
It should be noted that, in the embodiment of the present application, the first generator and the second generator adopt an attention-mechanism network structure composed of residual blocks and convolutional block attention modules (ResBlock + CBAM).
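One possible PyTorch sketch of a residual block with a convolutional block attention module (ResBlock + CBAM) of the kind mentioned above is shown below; the channel count, reduction ratio, and use of instance normalization are assumptions for illustration, not details fixed by the application:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = self.channel_mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.channel_mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                           # channel attention
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(spatial))      # spatial attention

class ResBlockCBAM(nn.Module):
    """Residual block followed by CBAM attention, as assumed for the generator backbone."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )
        self.cbam = CBAM(channels)

    def forward(self, x):
        return x + self.cbam(self.body(x))
```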
Specifically, in executing S1022, the following steps may be adopted:
Step one, inputting a first sample fundus image in a first sample fundus image set into a first generator to obtain a first reconstructed bright image, and inputting a second sample fundus image in a second sample fundus image set into a second generator to obtain a second reconstructed dark image.
And secondly, inputting the first reconstructed bright image into a second generator to obtain a first reconstructed dark image, and inputting the second reconstructed dark image into the first generator to obtain a second reconstructed bright image.
The first reconstructed dark image and the second reconstructed light image are reconstructed sample fundus images.
As an embodiment, as shown in fig. 3, a low-luminance sample fundus image 301 in a low-luminance fundus image sample set is input into a first generator 303 to obtain a first reconstructed bright map 304, and a high-luminance sample fundus image 309 in a high-luminance fundus image sample set is input into a second generator 306 to obtain a second reconstructed dark map 308.
The first reconstructed bright map 304 is input into a second generator 306 to obtain a first reconstructed dark map 307, and the second reconstructed dark map 308 is input into the first generator 303 to obtain a second reconstructed bright map 302.
In the above implementation process, the low-brightness sample fundus image and the high-brightness sample fundus image are respectively reconstructed through the first generator and the second generator, so that parameter adjustment can be conveniently performed on the first generator and the second generator according to the reconstructed images and the original sample fundus images, thereby training the first generator and the second generator.
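The two reconstruction paths of fig. 3 can be summarized by the following sketch, assuming G and F are the first and second generators and the sample batches are tensors; the function name forward_cycle is purely illustrative:

```python
def forward_cycle(G, F, low_batch, high_batch):
    """Run both reconstruction paths of the countermeasure generation network.

    G: first generator (dark -> bright); F: second generator (bright -> dark);
    low_batch / high_batch: batches of first / second sample fundus images.
    """
    fake_bright = G(low_batch)     # first reconstructed bright image, G(x)
    rec_dark = F(fake_bright)      # first reconstructed dark image, F(G(x))
    fake_dark = F(high_batch)      # second reconstructed dark image, F(y)
    rec_bright = G(fake_dark)      # second reconstructed bright image, G(F(y))
    return fake_bright, rec_dark, fake_dark, rec_bright
```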
S1023, determining the total loss value between the sample fundus image in the sample fundus image set and the corresponding reconstructed sample fundus image by using the target loss function.
Specifically, the target loss function is characterized by the following formula (1):
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λL_cyc(G, F) + L_identity(G, F) + L_ssim(G, F)   (1)
Wherein L_ssim(G, F) = 1 - |SSIM(X, F(G(X)))| + 1 - |SSIM(Y, G(F(Y)))| = 2 - |SSIM(X, F(G(X)))| - |SSIM(Y, G(F(Y)))|, G denotes the first generator, F denotes the second generator, D_X denotes the first discriminator, D_Y denotes the second discriminator, X denotes the first sample fundus image, Y denotes the second sample fundus image, L(G, F, D_X, D_Y) denotes the target loss function, L_GAN(G, D_Y, X, Y) denotes the adversarial loss function of the first generator and the second discriminator, L_GAN(F, D_X, Y, X) denotes the adversarial loss function of the second generator and the first discriminator, L_cyc(G, F) denotes the cyclic consistency loss function between the first generator and the second generator, λ is the coefficient of the cyclic consistency loss function, L_identity(G, F) denotes the near-identity loss function between the first generator and the second generator, and L_ssim(G, F) denotes the image quality loss function between the first generator and the second generator.
Wherein L_GAN(G, D_Y, X, Y) is used to make the image generated from the input X more realistic, and L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]; L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]. The first generator G takes the input x and generates a fake image in the Y domain, and the second generator F takes the input y and generates a fake image in the X domain. When x is fed into G, a fake y image is obtained, and when this fake y image is then fed into F, a reconstructed x image is obtained, which ideally should be close to the original x image. L_identity(G, F) = E_{y~p_data(y)}[||G(y) - y||_1] + E_{x~p_data(x)}[||F(x) - x||_1]; L_identity(G, F) requires that when a y-style image is fed into the first generator G, an image that is still y-style should be generated, which proves that G has the ability to generate y-style images.
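A hedged PyTorch sketch of the generator-side loss terms listed above is given below; it uses the non-saturating binary cross-entropy form of the adversarial loss and L1 norms for the cycle consistency and near-identity terms, and the weighting and reduction choices are assumptions:

```python
import torch
import torch.nn.functional as F_nn

def generator_losses(G, F, D_x, D_y, x, y, lam=10.0):
    """Adversarial, cycle-consistency and near-identity terms for one batch.

    x: batch of first (low-brightness) sample images, y: batch of second
    (high-brightness) sample images; D_x and D_y return patch logits.
    """
    fake_y = G(x)                               # first reconstructed bright image G(x)
    fake_x = F(y)                               # second reconstructed dark image F(y)
    pred_fake_y = D_y(fake_y)
    pred_fake_x = D_x(fake_x)
    # adversarial terms: generators try to make the discriminators predict "real"
    adv = (F_nn.binary_cross_entropy_with_logits(pred_fake_y, torch.ones_like(pred_fake_y))
           + F_nn.binary_cross_entropy_with_logits(pred_fake_x, torch.ones_like(pred_fake_x)))
    # cycle-consistency terms: F(G(x)) should match x and G(F(y)) should match y
    cyc = F_nn.l1_loss(F(fake_y), x) + F_nn.l1_loss(G(fake_x), y)
    # near-identity terms: G(y) should stay close to y and F(x) close to x
    idt = F_nn.l1_loss(G(y), y) + F_nn.l1_loss(F(x), x)
    return adv + lam * cyc + idt
```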
The target loss function includes an adversarial loss function, a cyclic consistency loss function, and an image quality loss function, and the countermeasure generation network further includes a first discriminator and a second discriminator.
As an example, the target loss function may also be composed of an adversarial loss function, a cyclic consistency loss function, an image quality loss function, and a near-identity loss function.
It should be noted that, in the embodiment of the present application, the first discriminator and the second discriminator may be Markov discriminators (PatchGAN).
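A possible PyTorch sketch of such a Markov (PatchGAN) discriminator is shown below; the channel widths and layer counts follow the common 70 x 70 PatchGAN design and are assumptions rather than values fixed by the application:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator producing a grid of real/fake logits."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        layers, ch = [], base
        layers += [nn.Conv2d(in_channels, ch, 4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        for _ in range(2):                       # downsample and widen twice
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch *= 2
        layers += [nn.Conv2d(ch, ch * 2, 4, stride=1, padding=1),
                   nn.InstanceNorm2d(ch * 2),
                   nn.LeakyReLU(0.2, inplace=True),
                   nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1)]   # per-patch logits
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```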
Specifically, in executing S1023, the following steps may be adopted:
Determining a first image quality loss value between a first sample fundus image in a first sample fundus image set and a corresponding first reconstructed dark image by adopting an image quality loss function, and determining a second image quality loss value between a second sample fundus image in a second sample fundus image set and a corresponding second reconstructed bright image by adopting the image quality loss function.
Specifically, the first luminance difference value between the low-luminance sample fundus image 301 and the corresponding first reconstructed dark map 307 is determined according to the average gray level of the low-luminance sample fundus image 301 and the average gray level of the corresponding first reconstructed dark map 307, respectively. The first contrast difference between the low-luminance sample fundus image 301 and the corresponding first reconstructed dark map 307 is determined according to the standard deviation of each pixel value of the low-luminance sample fundus image 301 and the standard deviation of each pixel value of the corresponding first reconstructed dark map 307, respectively. The first feature difference value of the low-luminance sample fundus image 301 and the corresponding first reconstructed dark map 307 is determined according to the covariance of the low-luminance sample fundus image 301 and the corresponding first reconstructed dark map 307, respectively.
And determining a first image quality loss value according to the first brightness difference value, the first contrast difference value and the first characteristic difference value.
As an embodiment, assume that x1 is a low-luminance sample fundus image and y1 is the first reconstructed dark map corresponding to the low-luminance sample fundus image x1.
The first luminance difference value between the low-luminance sample fundus image x1 and the corresponding first reconstructed dark map y1 is expressed as:
l(x_1, y_1) = (2μ_x1 μ_y1 + c_1) / (μ_x1^2 + μ_y1^2 + c_1)   (6)

Where l(x_1, y_1) represents the first luminance difference value between the low-luminance sample fundus image x_1 and the corresponding first reconstructed dark map y_1, μ_x1 represents the average gray level of the low-luminance sample fundus image x_1, and μ_y1 represents the average gray level of the first reconstructed dark map y_1 corresponding to the low-luminance sample fundus image.
The first contrast difference between the low-luminance sample fundus image and the corresponding first reconstructed dark map is expressed as:
c(x_1, y_1) = (2σ_x1 σ_y1 + c_2) / (σ_x1^2 + σ_y1^2 + c_2)   (7)

Wherein c(x_1, y_1) represents the first contrast difference value between the low-luminance sample fundus image x_1 and the corresponding first reconstructed dark map y_1, σ_x1 represents the standard deviation of the pixel values of the low-luminance sample fundus image x_1, and σ_y1 represents the standard deviation of the pixel values of the first reconstructed dark map y_1 corresponding to the low-luminance sample fundus image.
The first characteristic difference value between the low-brightness sample fundus image and the corresponding first reconstructed dark image is expressed as:
s(x_1, y_1) = (σ_x1y1 + c_3) / (σ_x1 σ_y1 + c_3)   (8)

Wherein s(x_1, y_1) represents the first feature difference value between the low-luminance sample fundus image x_1 and the corresponding first reconstructed dark map y_1, σ_x1y1 represents the pixel covariance between the low-luminance sample fundus image x_1 and the corresponding first reconstructed dark map y_1, and c_3 is a constant; for example, c_3 may take the value c_3 = c_2 / 2.
Further, the first image quality loss value 311 between the low-luminance sample fundus image 301 and the corresponding first reconstructed dark map 307 is:

ssim_loss(x_1, y_1) = 1 - SSIM(x_1, y_1)   (9)

Wherein SSIM(x_1, y_1) = l(x_1, y_1)^α · c(x_1, y_1)^β · s(x_1, y_1)^γ; α, β, and γ represent the exponents of the first luminance difference value, the first contrast difference value, and the first feature difference value, respectively, and may be arbitrary constants. When α = β = γ = 1, SSIM(x_1, y_1) = l(x_1, y_1) · c(x_1, y_1) · s(x_1, y_1). ssim_loss(x_1, y_1) represents the image quality loss value of the first image quality loss function.
Similarly, the process of determining a second image quality loss value between the high-luminance sample fundus image 309 in the high-luminance fundus image sample set and the corresponding second reconstructed bright map 302 using the image quality loss function is as follows:
The second luminance difference value between the high-luminance sample fundus image 309 and the corresponding second reconstructed bright map 302 is determined according to the average gray level of the high-luminance sample fundus image 309 and the average gray level of the corresponding second reconstructed bright map 302.
The second contrast difference value between the high-luminance sample fundus image 309 and the corresponding second reconstructed bright map 302 is determined according to the standard deviation of the pixel values of the high-luminance sample fundus image 309 and the standard deviation of the pixel values of the corresponding second reconstructed bright map 302.
The second feature difference value between the high-luminance sample fundus image 309 and the corresponding second reconstructed bright map 302 is determined according to the pixel covariance between the high-luminance sample fundus image 309 and the corresponding second reconstructed bright map 302.
And determining a second image quality loss value according to the second brightness difference value, the second contrast difference value and the second characteristic difference value.
It should be noted that the feature difference may be a structural difference between the fundus image of the sample and the corresponding reconstructed image, but the present application is not limited thereto.
As one example, assume that x2 is a high-luminance sample fundus image, and y2 is the second reconstructed bright map corresponding to the high-luminance sample fundus image.
The second luminance difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2 is expressed as:

l(x_2, y_2) = (2μ_x2 μ_y2 + c_1) / (μ_x2^2 + μ_y2^2 + c_1)   (10)

Where l(x_2, y_2) represents the second luminance difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2, μ_x2 represents the average gray level of the high-luminance sample fundus image x_2, and μ_y2 represents the average gray level of the second reconstructed bright map y_2 corresponding to the high-luminance sample fundus image.
The second contrast difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2 is expressed as:

c(x_2, y_2) = (2σ_x2 σ_y2 + c_2) / (σ_x2^2 + σ_y2^2 + c_2)   (11)

Wherein c(x_2, y_2) represents the second contrast difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2, σ_x2 represents the standard deviation of the pixel values of the high-luminance sample fundus image x_2, and σ_y2 represents the standard deviation of the pixel values of the second reconstructed bright map y_2 corresponding to the high-luminance sample fundus image.
The second feature difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2 is expressed as:

s(x_2, y_2) = (σ_x2y2 + c_3) / (σ_x2 σ_y2 + c_3)   (12)

Wherein s(x_2, y_2) represents the second feature difference value between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2, and σ_x2y2 represents the pixel covariance between the high-luminance sample fundus image x_2 and the corresponding second reconstructed bright map y_2.
Further, the second image quality loss value 312 between the high-brightness sample fundus image 309 and the corresponding second reconstructed bright map 302 is:

ssim_loss(x_2, y_2) = 1 - SSIM(x_2, y_2)   (13)

Wherein SSIM(x_2, y_2) = l(x_2, y_2)^α · c(x_2, y_2)^β · s(x_2, y_2)^γ; α, β, and γ represent the exponents of the second luminance difference value, the second contrast difference value, and the second feature difference value, respectively, and may be arbitrary constants. When α = β = γ = 1, SSIM(x_2, y_2) = l(x_2, y_2) · c(x_2, y_2) · s(x_2, y_2). ssim_loss(x_2, y_2) represents the image quality loss value of the second image quality loss function.
Step two, determining a first cyclic loss value between a first sample fundus image in a first sample fundus image set and a corresponding first reconstructed dark image by adopting a cyclic consistency loss function; and determining a second cyclic loss value between a second sample fundus image in the second sample fundus image set and a corresponding second reconstructed bright map by adopting the cyclic consistency loss function.
Specifically, a first cyclical loss value 313 between the low-luminance sample fundus image 301 in the low-luminance fundus image sample set and the corresponding first reconstructed dark map 307 is determined using a cyclical consistency loss function. A second cyclical loss value 314 between a high-intensity fundus image 309 in the high-intensity fundus image sample set and a corresponding second reconstructed luminance map 302 is determined using a cyclical consistency loss function.
The cyclic consistency loss function is an L1-norm loss function.
Step three, determining a first discrimination value between a first sample fundus image in the first sample fundus image set and the second reconstructed dark map by using the first discriminator and the adversarial loss function; and determining a second discrimination value between a second sample fundus image in the second sample fundus image set and the first reconstructed bright map by using the second discriminator and the adversarial loss function.
As an embodiment, the first discriminator 310 and the adversarial loss function are used to determine a first discrimination value 315 between the low-luminance sample fundus image 301 in the low-luminance fundus image sample set and the corresponding second reconstructed dark map 308, while the second discriminator 305 and the adversarial loss function are used to determine a second discrimination value 316 between the high-luminance sample fundus image 309 in the high-luminance fundus image sample set and the corresponding first reconstructed bright map 304.
It should be noted that the adversarial loss function may be one or any combination of a square error loss function, a cross-entropy loss function, a perceptual loss function, a 0-1 loss function, and a regularization loss function, which is not limited herein.
As one embodiment, a first near-identity loss value between the low-luminance sample fundus image 301 in the low-luminance fundus image sample set and the corresponding first reconstructed dark map 307 is calculated by the near-identity loss function, and a second near-identity loss value between the high-luminance sample fundus image 309 and the corresponding second reconstructed bright map 302 is calculated by the near-identity loss function.
And step four, determining the sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the second image quality loss value, the second cyclic loss value, and the second discrimination value as the overall loss value.
That is, the overall loss value is the sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the second image quality loss value, the second cyclic loss value, and the second discrimination value.
As one embodiment, a sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the first near-identity loss value, the second image quality loss value, the second cyclic loss value, the second discrimination value, and the second near-identity loss value is determined as an overall loss value.
In the above implementation process, the overall loss value between the sample fundus image and the reconstructed image is calculated through the image quality loss function, the cyclic consistency loss function, and the adversarial loss function, so that parameter adjustment can be conveniently performed on the countermeasure generation network according to the overall loss value, and the loss of texture detail features in the brightness-enhanced image obtained by the trained network model can be reduced.
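Assembling the overall loss value described above can be sketched as follows; the helper simply sums the per-direction loss values, and the optional near-identity terms are included as defaults of zero:

```python
def overall_loss_value(quality_1, cycle_1, disc_1,
                       quality_2, cycle_2, disc_2,
                       identity_1=0.0, identity_2=0.0):
    """Overall loss: the sum of the six (or eight, with near-identity) loss values."""
    return (quality_1 + cycle_1 + disc_1
            + quality_2 + cycle_2 + disc_2
            + identity_1 + identity_2)
```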
S1024, adjusting parameters in the countermeasure generation network according to the total loss value to obtain a trained brightness enhancement model.
Specifically, in executing S1024, the following manner may be adopted:
Judging whether the total loss value meets a preset training condition; if yes, determining the first generator as the trained brightness enhancement model; if not, adjusting the parameters of the first generator and the parameters of the second generator through the Ranger optimizer until the total loss value meets the preset training condition.
Specifically, it is determined whether the overall loss value is not greater than a preset threshold.
And if the total loss value is not greater than the preset threshold value, determining the first generator as a trained brightness enhancement model.
And if the total loss value is larger than the preset threshold value, adjusting the parameters of the first generator and the parameters of the second generator through the Ranger optimizer until the total loss value is not larger than the preset threshold value.
It should be noted that, in the embodiment of the present application, the Ranger optimizer is a combination of LookAhead and RAdam.
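The stopping condition and parameter update described above could be sketched as follows; build_ranger stands in for whichever Ranger (LookAhead wrapped around RAdam) implementation is actually used and is assumed to be supplied externally, and the discriminator updates are omitted for brevity:

```python
import itertools

def train_until_converged(G, F, D_x, D_y, data_loader, compute_overall_loss,
                          build_ranger, threshold=0.1, max_epochs=200):
    """Adjust generator parameters until the overall loss meets the preset condition.

    compute_overall_loss(G, F, D_x, D_y, x, y) is assumed to return a scalar tensor,
    and build_ranger(params) is assumed to return a Ranger (LookAhead + RAdam) optimizer.
    """
    optimizer = build_ranger(itertools.chain(G.parameters(), F.parameters()))
    for _ in range(max_epochs):
        for x, y in data_loader:
            loss = compute_overall_loss(G, F, D_x, D_y, x, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() <= threshold:      # preset training condition met
                return G                      # the first generator is the trained model
    return G
```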
In the above implementation process, when parameter adjustment is performed on the first generator, the parameters of the first generator are adjusted through the Ranger optimizer, which can improve the stability of model training of the first generator and increase the speed and accuracy of model convergence during training of the first generator. Furthermore, when a low-brightness fundus image is input into the trained first generator to generate a high-brightness fundus image, the texture structure of the image can be preserved, which improves the accuracy of the fundus images generated by the countermeasure generation network.
After the trained luminance enhancement model is obtained, step 102 is performed.
Specifically, a trained brightness enhancement model is adopted to carry out brightness enhancement processing on the preprocessed low-brightness fundus image, so as to obtain a high-brightness fundus image.
As an embodiment, referring to fig. 4, fig. 4 is a schematic view of fundus image brightness enhancement provided by the embodiment of the present application, and a low-brightness fundus image 401 is input into a ResBlock + CBAM network 402 of a first generator, so as to generate a high-brightness fundus image 403.
During image processing, the low-brightness fundus image retains its original size; that is, the size of the fundus images input into the countermeasure generation network during training is 1/4 of the size of the low-brightness fundus images used in image processing.
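For illustration only, the following is a minimal sketch of a ResBlock + CBAM generator, assuming PyTorch; the channel counts, number of blocks, normalization layers, and final Tanh activation are hypothetical choices, since the embodiment only names the ResBlock + CBAM network 402 without specifying its internal structure.

```python
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)      # channel attention
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(spatial))  # spatial attention


class ResBlockCBAM(nn.Module):
    """Residual block with a CBAM attention module on its branch."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            CBAM(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class BrightnessGenerator(nn.Module):
    """Toy ResBlock + CBAM generator; expects images normalized to [-1, 1]."""

    def __init__(self, channels=64, num_blocks=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 7, padding=3),
            nn.ReLU(inplace=True),
            *[ResBlockCBAM(channels) for _ in range(num_blocks)],
            nn.Conv2d(channels, 3, 7, padding=3),
            nn.Tanh(),
        )

    def forward(self, low_brightness_image):
        return self.net(low_brightness_image)
```

Because such a generator is fully convolutional, it can accept fundus images whose resolution differs from the training resolution, which is consistent with feeding the low-brightness fundus image at its original size during image processing.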
In the implementation process, the brightness enhancement processing is performed on the preprocessed low-brightness fundus image by using the trained brightness enhancement model, namely the first generator, so that the enhancement of the brightness of the low-brightness fundus image is realized.
In the implementation process, the trained brightness enhancement model is used to perform brightness enhancement processing on the fundus images to be processed, and the method can perform brightness enhancement on batches of fundus images to be processed, so that the processing efficiency of the images is improved, the workload of manual image processing is reduced, and the cost of image processing is saved. Moreover, since the brightness enhancement model is obtained by training the countermeasure generation network with the image quality loss function, the precision of parameter adjustment in the model training process is enhanced, and the loss of texture detail characteristics of the images after brightness enhancement can be reduced.
In the implementation process, parameter adjustment is performed on the countermeasure generation network during training through the target loss function consisting of the adversarial loss function, the cycle consistency loss function and the image quality loss function, so that the training precision of the countermeasure generation network is improved. The stability of the countermeasure generation network and its convergence efficiency during training can be improved by adjusting the parameters of the first generator and the second generator with a Ranger optimizer combining LookAhead and RAdam, so that the image quality loss of the brightened fundus image can be reduced; in addition, high-resolution fundus images can be processed by the trained countermeasure generation network, improving its applicability.
Referring to fig. 5, fig. 5 is a block diagram of an apparatus for fundus image processing according to an embodiment of the present application, and an apparatus 500 shown in fig. 5 corresponds to the method of fig. 1, and includes functional modules capable of implementing the method of fig. 1.
In one embodiment, the apparatus 500 shown in fig. 5 includes:
an acquisition module 501 is configured to acquire a fundus image to be processed.
The processing module 502 is configured to perform brightness enhancement processing on a fundus image to be processed by using a trained brightness enhancement model, so as to obtain a processed target fundus image, where the brightness enhancement model is obtained by training based on a countermeasure generation network and an image quality loss function.
In one embodiment, the obtaining module 501 is specifically configured to:
And acquiring an original fundus image to be processed.
And cutting black edges, extracting features and adjusting the size of the original fundus image to be processed to obtain the fundus image to be processed.
In one embodiment, the processing module 502 is further configured to:
a sample fundus image set is acquired.
And inputting the sample fundus images in the sample fundus image set into a countermeasure generation network to obtain corresponding reconstructed sample fundus images.
Using the target loss function, an overall loss value between a sample fundus image in the set of sample fundus images and a corresponding reconstructed sample fundus image is determined.
And adjusting parameters in the countermeasure generation network according to the total loss value to obtain a trained brightness enhancement model.
In one embodiment, the target loss function is characterized as in equation (1):
$L(G, F, D_x, D_y) = L_{GAN}(G, D_y, X, Y) + L_{GAN}(F, D_x, Y, X) + \lambda L_{cyc}(G, F) + L_{identity}(G, F) + L_{ssim}(G, F)$ (1)
Wherein $L_{ssim}(G, F) = 1 - |\mathrm{SSIM}(X, F(G(X)))| + 1 - |\mathrm{SSIM}(Y, G(F(Y)))| = 2 - |\mathrm{SSIM}(X, F(G(X)))| - |\mathrm{SSIM}(Y, G(F(Y)))|$, $G$ represents the first generator, $F$ represents the second generator, $D_x$ represents the first discriminator, $D_y$ represents the second discriminator, $X$ represents the first sample fundus image, $Y$ represents the second sample fundus image, $L(G, F, D_x, D_y)$ represents the target loss function, $L_{GAN}(G, D_y, X, Y)$ represents the adversarial loss function of the first generator and the second discriminator, $L_{GAN}(F, D_x, Y, X)$ represents the adversarial loss function of the second generator and the first discriminator, $L_{cyc}(G, F)$ represents the cycle consistency loss function between the first generator and the second generator, $\lambda$ represents the weight of the cycle consistency loss function, $L_{identity}(G, F)$ represents the near-identity loss function between the first generator and the second generator, and $L_{ssim}(G, F)$ represents the image quality loss function between the first generator and the second generator.
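For illustration only, the following is a minimal sketch of the image quality loss term $L_{ssim}(G, F)$ in formula (1), assuming PyTorch tensors scaled to [0, 1]; a single global SSIM statistic is used here for brevity, whereas windowed SSIM is more common in practice, and the function names are hypothetical.

```python
import torch


def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed over the whole tensor rather than local windows."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))


def image_quality_loss(x, y, first_generator, second_generator):
    """Lssim(G, F) = 2 - |SSIM(X, F(G(X)))| - |SSIM(Y, G(F(Y)))|."""
    recon_dark = second_generator(first_generator(x))    # F(G(X))
    recon_bright = first_generator(second_generator(y))  # G(F(Y))
    return 2 - ssim_global(x, recon_dark).abs() - ssim_global(y, recon_bright).abs()
```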
In one embodiment, the processing module 502 is specifically configured to:
an initial image set is acquired, wherein the initial image set comprises a first initial image set and a second initial image set, the first initial image set comprises at least one first initial image, and the second initial image set comprises at least one second initial image.
The brightness value of each first initial image in the first initial image set is located in a first brightness value range, the brightness value of each second initial image in the second initial image set is located in a second brightness value range, and the maximum brightness value of the first brightness value range is smaller than the minimum brightness value of the second brightness value range.
Cutting black edges, extracting features and adjusting the size of each first initial image in the first initial image set to obtain a first preprocessing initial image set, wherein the first preprocessing initial image set comprises first preprocessing initial images corresponding to each first initial image.
And cutting black edges, extracting features and adjusting the size of each second initial image in the second initial image set to obtain a second preprocessing initial image set, wherein the second preprocessing initial image set comprises second preprocessing initial images corresponding to each second initial image.
At least one first preprocessing initial image in the first preprocessing initial image set is randomly cut and turned over to obtain a first sample fundus image set, wherein the first sample fundus image set comprises a plurality of first sample fundus images.
And randomly cutting and overturning at least one second preprocessing initial image in the second preprocessing initial image set to obtain a second sample fundus image set, wherein the second sample fundus image set comprises a plurality of second sample fundus images.
The first sample fundus image set and the second sample fundus image set are sample fundus image sets, and the plurality of first sample fundus images and the plurality of second sample fundus images are sample fundus images.
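For illustration only, the following is a minimal sketch of the preprocessing and augmentation steps above, assuming OpenCV and NumPy; the threshold, target size, and crop size are hypothetical values, and the feature extraction step is omitted because its details are not specified here.

```python
import cv2
import numpy as np


def cut_black_edges(image, threshold=10):
    """Crop the black border around the circular fundus region."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > threshold)
    if len(ys) == 0:
        return image
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]


def preprocess(image, size=(512, 512)):
    """Black-edge cutting followed by resizing to a fixed size."""
    return cv2.resize(cut_black_edges(image), size)


def augment(image, crop_size=448, rng=np.random.default_rng()):
    """Random crop followed by a random horizontal flip."""
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - crop_size + 1))
    left = int(rng.integers(0, w - crop_size + 1))
    patch = image[top:top + crop_size, left:left + crop_size]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]
    return patch
```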
In one embodiment, the countermeasure generation network includes a first generator and a second generator.
The processing module 502 is specifically configured to input a first sample fundus image in the first sample fundus image set into the first generator to obtain a first reconstructed bright image.
And inputting the first reconstructed bright image into a second generator to obtain a first reconstructed dark image.
And inputting a second sample fundus image in the second sample fundus image set into a second generator to obtain a second reconstructed dark image.
And inputting the second reconstructed dark image into a first generator to obtain a second reconstructed bright image.
The first reconstructed dark image and the second reconstructed bright image are reconstructed sample fundus images.
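For illustration only, the following is a minimal sketch of the two reconstruction cycles described above, assuming PyTorch modules for the first and second generators; the function and variable names are hypothetical.

```python
def forward_cycles(first_generator, second_generator, first_sample, second_sample):
    """Runs the two reconstruction cycles described above."""
    first_recon_bright = first_generator(first_sample)        # first reconstructed bright image
    first_recon_dark = second_generator(first_recon_bright)   # first reconstructed dark image
    second_recon_dark = second_generator(second_sample)       # second reconstructed dark image
    second_recon_bright = first_generator(second_recon_dark)  # second reconstructed bright image
    return first_recon_dark, second_recon_bright
```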
In one embodiment, the target loss function comprises an adversarial loss function, a cycle consistency loss function, and an image quality loss function, and the countermeasure generation network further comprises a first discriminator and a second discriminator.
The processing module 502 is specifically configured to determine a first image quality loss value between a first sample fundus image in the first sample fundus image set and the corresponding first reconstructed dark image using the image quality loss function.
A first cyclic loss value between the first sample fundus image in the first sample fundus image set and the corresponding first reconstructed dark image is determined using the cycle consistency loss function.
A first discrimination value between the first sample fundus image in the first sample fundus image set and the second reconstructed dark image is determined using the first discriminator and the adversarial loss function.
And determining a second image quality loss value between a second sample fundus image in the second sample fundus image set and the corresponding second reconstructed bright image by adopting the image quality loss function.
And determining a second cyclic loss value between the second sample fundus image in the second sample fundus image set and the corresponding second reconstructed bright image by adopting the cycle consistency loss function.
And determining a second discrimination value between the second sample fundus image in the second sample fundus image set and the first reconstructed bright image by adopting the second discriminator and the adversarial loss function.
The total loss value is the sum of the first image quality loss value, the first cyclic loss value, the first discrimination value, the second image quality loss value, the second cyclic loss value and the second discrimination value.
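For illustration only, the following is a minimal sketch of how the six loss values above could be summed into the total loss value, assuming PyTorch; an L1 cycle consistency loss and a least-squares adversarial loss are assumed since their exact forms are not specified, ssim_global is the simplified SSIM helper from the earlier sketch, and cycle_weight corresponds to $\lambda$ in formula (1).

```python
import torch
import torch.nn.functional as nnf

# ssim_global is the simplified SSIM helper defined in the earlier sketch.


def total_loss_value(first_generator, second_generator, first_discriminator,
                     second_discriminator, x, y, cycle_weight=10.0):
    """Sums the six loss values into the total loss value."""
    first_recon_bright = first_generator(x)
    first_recon_dark = second_generator(first_recon_bright)
    second_recon_dark = second_generator(y)
    second_recon_bright = first_generator(second_recon_dark)

    # Image quality (SSIM) loss values.
    quality_1 = 1 - ssim_global(x, first_recon_dark).abs()
    quality_2 = 1 - ssim_global(y, second_recon_bright).abs()
    # Cycle consistency loss values (L1 form assumed).
    cycle_1 = nnf.l1_loss(first_recon_dark, x)
    cycle_2 = nnf.l1_loss(second_recon_bright, y)
    # Discrimination values (least-squares adversarial loss assumed).
    pred_dark = first_discriminator(second_recon_dark)
    pred_bright = second_discriminator(first_recon_bright)
    disc_1 = nnf.mse_loss(pred_dark, torch.ones_like(pred_dark))
    disc_2 = nnf.mse_loss(pred_bright, torch.ones_like(pred_bright))

    return (quality_1 + quality_2
            + cycle_weight * (cycle_1 + cycle_2)
            + disc_1 + disc_2)
```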
In one embodiment, the processing module 502 is specifically configured to:
And judging whether the total loss value accords with a preset training condition.
If yes, the first generator is determined to be a trained brightness enhancement model.
If not, the parameters of the first generator and the parameters of the second generator are adjusted through the Ranger optimizer until the total loss value meets the preset training condition.
It should be noted that, the apparatus 500 shown in fig. 5 can implement each process of the image processing method in the embodiment of the method of fig. 1. The operation and/or function of the various modules in the apparatus 500 are respectively for implementing the corresponding flows in the method embodiment in fig. 1. Reference is specifically made to the description in the above method embodiments, and detailed descriptions are omitted here as appropriate to avoid repetition.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and the electronic device 600 shown in fig. 6 may include at least one processor 610, such as a CPU, at least one communication interface 620, at least one memory 630, and at least one communication bus 640. Wherein communication bus 640 is used to enable direct connection communications for these components. The communication interface 620 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 630 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. Memory 630 may also optionally be at least one storage device located remotely from the aforementioned processor. The memory 630 has stored therein computer readable instructions which, when executed by the processor 610, perform the method process described above in fig. 1.
Embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon, which when executed by a server, implements the method process shown in fig. 1.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other manners. The system embodiments described above are merely illustrative; for example, the division of the system devices is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple devices or components may be combined or integrated into another system, or some features may be omitted or not performed.
Further, the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.