Blockchain copyright determining method based on video genes

Technical Field
The invention relates to the fields of video clip searching, video copyright confirmation and video content tracing, and in particular to a blockchain copyright confirmation method based on video genes.
Background
Copyright refers to the rights that authors of literary, artistic and scientific works enjoy in their works. Copyright is a type of intellectual property right covering works of natural science, social science, literature, music, drama, painting, sculpture, photography and cinematography. Copyright is obtained in two ways: automatic acquisition and registration acquisition. In China, a work is automatically copyrighted upon completion according to the copyright law; completion is relative, and as long as the authored object satisfies the legal requirements of a work, it can be protected as a work by copyright law. For registered copyright, the applicant, in person or through an agent, applies to a copyright agency for the copyright of a digitized work; the agency uses the Digital Content Identifier (DCI), a unique digital copyright identifier, to automatically locate the registered right on the web.
Decentralization, distributed collective maintenance, chronological data, programmability, and high security and reliability are significant features of blockchain technology. Existing digital work rights determination uses decentralized blockchain rights-determination platforms, which avoid human intervention and greatly improve the reliability of rights determination. However, when such a blockchain copyright platform processes video works, its storage consumption is excessive. The present invention addresses this problem by reducing storage costs through video gene extraction.
A video gene is a data string that expresses the features of a video, much as a DNA sequence expresses a gene. Searching a video library for similar videos using a video gene is analogous to using a set of keywords to find similar text. Existing video genes generally describe features such as frames, shapes and vector directions in video images; the image features they describe are suited to searching a known video library for deformed or partially quoted video targets, meeting needs such as content theft prevention and copyright protection. For tasks that search for identical video content, such as video clip searching, video attribution confirmation and video content source tracing, the comparison efficiency of existing video gene algorithms is too low. Therefore, a video gene algorithm that is faster to generate and more efficient in characterization, and a faster video rights determination method, are needed.
Disclosure of Invention
To overcome the above shortcomings, the invention provides a blockchain copyright determining method based on video genes that improves the speed of video content searching and comparison.
The technical scheme adopted for overcoming the technical problems is as follows:
a blockchain copyright determining method based on video genes comprises the following steps:
a) The applicant applies for an account on the video work copyright blockchain and protects the account with a password and a real-name system;
b) The applicant uses a smart contract program of the video work copyright blockchain to encode an original video work with a public one-way video gene generation algorithm, generating a video gene coding identification file;
c) The applicant's account uploads the generated video gene coding identification file to a video copyright server, and the video copyright server stores the file, the applicant's account and the application date and time in the video copyright blockchain database;
d) The digital copyright application website system searches the copyright blockchain database for similar genes according to the video gene; if no similar gene exists and the correctness of the digital code is confirmed using the public key and the digital signature in the record, the rights of the video gene coding identification file are confirmed;
e) The video copyright server packages the rights-confirmed video gene coding identification file and stores it in the blockchain;
f) The digital rights application website system confirms the rights attribution according to the rights-confirmed video gene coding identification file, and issues a rights confirmation certificate or a rights confirmation electronic certificate to the applicant.
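As a rough illustration of steps c) to f), the sketch below models the uniqueness check of step d) with an in-memory registry and a Hamming-distance similarity test. The names `REGISTRY` and `register_gene` and the placement of the 10% threshold are illustrative assumptions, not parts of the claimed system; signature verification and block packaging are omitted.

```python
import time

# Hypothetical in-memory stand-in for the copyright blockchain database;
# the claimed system stores entries in blocks with signatures (steps c)-e)).
REGISTRY = []  # each entry: (gene, account, timestamp)

def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length feature values stored as ints."""
    return bin(a ^ b).count("1")

def register_gene(gene: int, gene_bits: int, account: str):
    """Step d): accept the gene only if no stored gene lies within 10% of
    its bit length; otherwise report the existing rights holder."""
    for stored, owner, _ in REGISTRY:
        if hamming(stored, gene) <= 0.1 * gene_bits:
            return ("rejected", owner)  # a similar gene already holds the rights
    REGISTRY.append((gene, account, time.time()))
    return ("confirmed", account)
```

A rejected application returns the account that already holds the similar gene, which is the information step f) needs to confirm rights attribution.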
Further, step b) comprises the steps of:
b-1) performing frame-dropping processing on the original video;
b-2) extracting a binarized feature value from each frame image of the frame-dropped original video;
b-3) calculating the Hamming distance between the feature values of adjacent frames of the encoded video work, cutting at the frame whose distance exceeds 10% of the feature length, splitting the video into shots, and generating a shot descriptor for each split shot;
b-4) splicing all shot descriptors in sequence to generate the video gene.
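The cut criterion of step b-3) can be sketched as follows; for the 64-bit feature values used later, a distance of more than 10% of the feature length is effectively the "greater than 6" test of the detailed description. Function names are illustrative.

```python
def hamming(a: int, b: int) -> int:
    # distance between two frame feature values stored as unsigned ints
    return bin(a ^ b).count("1")

def is_shot_cut(prev_feat: int, feat: int, feat_bits: int = 64) -> bool:
    """Cut a new shot where the Hamming distance between adjacent frame
    features exceeds 10% of the feature length (> 6 for 64-bit features)."""
    return hamming(prev_feat, feat) > 0.1 * feat_bits
```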
Further, step b-2) comprises the steps of:
b-2.1) removing the black borders around each frame image of the frame-dropped original video;
b-2.2) reducing the image to a size of 8×8 pixels;
b-2.3) converting the reduced image into a grayscale image;
b-2.4) performing a 16×16 resolution discrete cosine transform on the grayscale image;
b-2.5) retaining the 8×8 matrix in the upper-left corner of the transformed image and discarding the other, high-frequency content;
b-2.6) calculating the mean gray value of all 64 pixels in the image and binarizing against it, setting pixels greater than or equal to the mean to 1 and pixels less than the mean to 0;
b-2.7) quantizing the binarized result and arranging the binary values in order from top to bottom and left to right to form a 64-bit feature value.
Further, the shot descriptor in step b-3) includes the position of the video shot in the video, the number of frames contained in the shot, the feature value of the first frame image of the shot, and the Hamming distances between the feature values of the frame images.
Further, step b-4) comprises the steps of:
b-4.1) calculating the binarized feature value of the first frame of the video to be split;
b-4.2) calculating the binarized feature value of the next frame and its Hamming distance to the feature value of the previous frame;
b-4.3) if the Hamming distance is less than or equal to 6, determining that the shot has not switched and returning to step b-4.2);
b-4.4) if the Hamming distance is greater than 6, determining that the shot has switched, and executing step b-4.5);
b-4.5) judging whether the shot contains more than 10 frames;
b-4.6) if the shot contains 10 frames or fewer, discarding it and executing step b-4.8);
b-4.7) if the shot contains more than 10 frames, generating a descriptor for it;
b-4.8) taking the currently processed frame as the beginning of a new shot and repeating from step b-4.2).
The beneficial effects of the invention are as follows: the method of blockchain copyright determination using video genes is suited to scenarios and fields such as video clip searching, video copyright determination and video content tracing. Videos or video clips similar to target video content can be found quickly in a massive video library, while differences caused by video resolution, watermarks, corner marks, subtitles, gamma correction, color histogram adjustment and the like can be ignored. The method features low computation for video gene generation, small storage for video gene data, fast video rights determination and tamper-proof blockchain determination results; it is suited to scenarios such as video copyright determination, video copyright trading and video content tracing, and has good popularization and application value in video services.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of an image feature value extraction process according to the present invention;
FIG. 3 is a diagram illustrating the process of generating a shot descriptor according to the present invention;
FIG. 4 is a flow chart showing the generation of a video gene according to the present invention.
Detailed Description
The invention is further described with reference to fig. 1 to 4.
As shown in fig. 1, a blockchain copyright determining method based on video genes includes:
a) The applicant applies for an account on the video work copyright blockchain and protects the account with a password and a real-name system;
b) The applicant uses a smart contract program of the video work copyright blockchain to encode an original video work with a public one-way video gene generation algorithm, generating a video gene coding identification file;
c) The applicant's account uploads the generated video gene coding identification file to a video copyright server, and the video copyright server stores the file, the applicant's account and the application date and time in the video copyright blockchain database;
d) The digital copyright application website system searches the copyright blockchain database for similar genes according to the video gene; if no similar gene exists and the correctness of the digital code is confirmed using the public key and the digital signature in the record, the rights of the video gene coding identification file are confirmed, that is, a final rights determination conclusion is given;
e) The video copyright server packages the rights-confirmed video gene coding identification file and stores it in the blockchain;
f) The digital rights application website system confirms the rights attribution according to the rights-confirmed video gene coding identification file, and issues a rights confirmation certificate or a rights confirmation electronic certificate to the applicant.
The algorithm requires a description of the features of the pictures and shots of the video. To describe the video's features with as short a description as possible, the video is first segmented into shots of similar images, short descriptors are then generated for those shots, and the descriptors are finally spliced in sequence to form the feature description of the video, i.e. the video gene. The method of blockchain copyright determination using video genes is suited to scenarios and fields such as video clip searching, video copyright determination and video content tracing. Videos or video clips similar to target video content can be found quickly in a massive video library, while differences caused by video resolution, watermarks, corner marks, subtitles, gamma correction, color histogram adjustment and the like can be ignored. The method features low computation for video gene generation, small storage for video gene data, fast video rights determination and tamper-proof blockchain determination results; it is suited to scenarios such as video copyright determination, video copyright trading and video content tracing, and has good popularization and application value in video services.
Example 1:
step b) comprises the steps of:
b-1) performing frame-dropping processing on the original video, firstly to unify the video gene frame rate across the video work library, and secondly to reduce the computational cost of video gene coding.
b-2) extracting a binarized feature value from each frame image of the frame-dropped original video. The binarized feature value is derived from the low-frequency content of the image, so the resulting value is unchanged as long as the overall structure of the picture remains unchanged; this processing avoids the influence of content adjustments such as image scaling, resolution, watermarks, corner marks, subtitles, gamma correction and color histogram adjustment.
b-3) calculating the Hamming distance between the feature values of adjacent frames of the encoded video work, cutting at the frame whose distance exceeds 10% of the feature length, splitting the video into shots, and generating a shot descriptor for each split shot. A shot descriptor describes the features of one shot in the video and should include: the position of the shot in the video, the number of frames contained in the shot, the feature value of the first frame image of the shot, and the Hamming distances between the feature values of the frame images. The effect of generating a shot descriptor is shown schematically in fig. 3. With this processing, a video shot of several seconds can generally be reduced to a descriptor of only a few tens of bytes.
b-4) splicing all shot descriptors in sequence to generate the video gene.
Example 2:
as shown in fig. 2, step b-2) includes the steps of:
b-2.1) removing the black borders around each frame image of the frame-dropped original video, discarding differences caused by videos with different aspect ratios.
b-2.2) reducing the image to a size of 8×8 pixels, discarding differences caused by different sizes.
b-2.3) converting the reduced image into a grayscale image.
b-2.4) performing a Discrete Cosine Transform (DCT) of 16×16 resolution on the grayscale image. After the DCT, the low-frequency part of the image is concentrated in the upper-left corner and contains the low-frequency content of the image; the rest is the high-frequency content of the image details.
b-2.5) retaining the 8×8 matrix in the upper-left corner of the transformed image and discarding the other, high-frequency content.
b-2.6) calculating the mean gray value of all 64 (8×8) pixels in the image and binarizing against it, setting pixels greater than or equal to the mean to 1 and pixels less than the mean to 0.
b-2.7) quantizing the binarized result and arranging the binary values in order from top to bottom and left to right to form a 64-bit feature value.
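A minimal sketch of steps b-2.4) to b-2.7) in Python/NumPy follows. Note that steps b-2.2) and b-2.4) state different sizes (an 8×8 image but a 16×16 DCT); the sketch assumes a 16×16 grayscale input so the 16×16 DCT is well defined, and `dct2` and `frame_feature` are illustrative names.

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II built from the 1-D transform matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m @ block @ m.T

def frame_feature(gray16: np.ndarray) -> int:
    """Steps b-2.4)-b-2.7): DCT, keep the low-frequency 8x8 corner,
    threshold against its mean, pack the bits row by row into 64 bits."""
    low = dct2(gray16.astype(float))[:8, :8]           # b-2.5) low frequencies
    bits = (low >= low.mean()).astype(int)             # b-2.6) binarize on mean
    return int("".join(map(str, bits.flatten())), 2)   # b-2.7) 64-bit value
```

For a uniform frame only the DC coefficient survives, so only the first bit is set; structurally similar frames yield feature values with a small Hamming distance.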
Example 3:
The shot descriptor in step b-3) includes the position of the video shot in the video, the number of frames contained in the shot, the feature value of the first frame image of the shot, and the Hamming distances between the feature values of the frame images.
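One possible layout for the four descriptor fields follows, reading the per-frame Hamming distances as distances between consecutive frames; field names and byte widths are assumptions, chosen so that a shot of a few dozen frames serializes to a few tens of bytes as the description states.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ShotDescriptor:
    """The four fields named in step b-3); names and widths are assumptions."""
    start_frame: int      # position of the shot in the video
    frame_count: int      # number of frames contained in the shot
    first_feature: int    # 64-bit feature value of the shot's first frame
    distances: List[int]  # Hamming distance of each frame to the previous one

    def to_bytes(self) -> bytes:
        """Serialize compactly: 4 + 2 + 8 bytes plus one byte per distance."""
        return (self.start_frame.to_bytes(4, "big")
                + self.frame_count.to_bytes(2, "big")
                + self.first_feature.to_bytes(8, "big")
                + bytes(self.distances))
```

A 30-frame shot serializes to 4 + 2 + 8 + 29 = 43 bytes, consistent with the "few tens of bytes" claim.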
Example 4:
as shown in fig. 4, step b-4) includes the steps of:
b-4.1) calculating the binarized feature value of the first frame of the video to be split.
b-4.2) calculating the binarized feature value of the next frame and its Hamming distance to the feature value of the previous frame.
b-4.3) judging whether the Hamming distance between the feature values of the two frames is greater than 10% of the feature length of the current frame. For the 64-bit feature of the 8×8 binary image, a Hamming distance greater than 6 means the features of the two images differ significantly, and the next frame can be judged to be the beginning of a new shot. Thus, if the Hamming distance is less than or equal to 6, the shot is determined not to have switched, and step b-4.2) is executed again.
b-4.4) if the Hamming distance is greater than 6, determining that the shot has switched, and executing step b-4.5).
b-4.5) judging whether the number of frames contained in the video shot is greater than 10 frames.
b-4.6) if the shot contains 10 frames or fewer, discarding it, so as to discard feature recognition errors caused by gradual transitions or continuous fast cuts, and executing step b-4.8).
b-4.7) if the shot contains more than 10 frames, generating a descriptor for it.
b-4.8) taking the currently processed frame as the beginning of a new shot and repeating from step b-4.2).
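The loop of steps b-4.1) to b-4.8) can be sketched as follows, operating on precomputed 64-bit frame feature values; function and parameter names are illustrative.

```python
def hamming(a: int, b: int) -> int:
    # distance between two 64-bit frame feature values
    return bin(a ^ b).count("1")

def split_shots(features, cut=6, min_frames=10):
    """Walk per-frame features, cut a shot where the distance to the previous
    frame exceeds `cut` (b-4.3)/b-4.4)), and keep only shots longer than
    `min_frames` (b-4.5)-b-4.7)). Returns (start_index, frame_count) pairs."""
    shots, start = [], 0
    for i in range(1, len(features)):
        if hamming(features[i - 1], features[i]) > cut:  # shot switch
            if i - start > min_frames:                   # keep long shots only
                shots.append((start, i - start))
            start = i                                    # b-4.8) new shot
    if len(features) - start > min_frames:               # flush the final shot
        shots.append((start, len(features) - start))
    return shots
```

Shots of 10 frames or fewer are silently dropped, matching the handling of gradual transitions and fast cuts in step b-4.6).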
Finally, it should be noted that the foregoing describes only preferred embodiments of the present invention and that the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.