Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
In the description of the present application, the terms first, second, etc. are used only to distinguish between technical features and should not be construed as indicating or implying relative importance, the number of the technical features indicated, or the precedence of the technical features indicated.
In the description of the present application, it should be understood that the direction or positional relationship indicated with respect to the description of the orientation, such as up, down, etc., is based on the direction or positional relationship shown in the drawings, is merely for convenience of describing the present application and simplifying the description, and does not indicate or imply that the apparatus or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application.
In the description of the present application, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present application can be determined reasonably by a person skilled in the art in combination with the specific content of the technical solution.
With the development of multimedia information technology, video shooting can record a scene more completely, so recording life and work on video has become an important part of people's daily life. Especially in critical fields such as judicial evidence collection, news reporting and commercial promotion, the information a video can convey is more complete and clear, so guaranteeing the authenticity and integrity of the video is important.
In the prior art, the authenticity and integrity of a video are ensured by adding a watermark or an anti-counterfeiting mark to the original video. However, a visible watermark is easily deleted by editing software, and similar anti-counterfeiting marks can be created to mislead users, so the video content cannot be effectively protected against counterfeiting: the video is easily forged, information is lost, and with it the authenticity and integrity of the video.
Based on the above, the embodiment of the application provides a video encryption anti-counterfeiting method, a video encryption anti-counterfeiting system, electronic equipment and a video encryption anti-counterfeiting medium, aiming at encrypting and protecting videos and improving the accuracy and stability of video anti-counterfeiting.
The video encryption anti-counterfeiting method, the system, the electronic equipment and the medium provided by the embodiment of the application are specifically described through the following embodiments, and the video encryption anti-counterfeiting method in the embodiment of the application is described first.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes directions such as computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
The embodiment of the application provides a video encryption anti-counterfeiting method, and relates to the technical field of video encryption. The method can be applied to a terminal, a server, or software running in the terminal or the server. In some embodiments, the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The server may be configured as an independent physical server, as a server cluster or distributed system formed by a plurality of physical servers, or as a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The software may be an application implementing the video encryption anti-counterfeiting method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. Such as a personal computer, a server computer, a hand-held or portable device, a tablet device, a multiprocessor system, a microprocessor-based system, a set top box, a programmable consumer electronics, a network PC, a minicomputer, a mainframe computer, a distributed computing environment that includes any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be noted that, in each specific embodiment of the present application, when processing is required on user information, user behavior data, user history data, user location information, or other data related to user identity or characteristics, the permission or consent of the user is obtained first, and the collection, use, and processing of the data comply with the relevant laws, regulations, and standards. In addition, when an embodiment of the application needs to acquire sensitive personal information of the user, the separate permission or separate consent of the user is acquired through a popup window or a jump to a confirmation page, and only after the user's separate permission or consent is explicitly obtained are the user data necessary for the normal operation of the embodiment acquired.
For this reason, referring to fig. 1, an embodiment of the present application provides a video encryption anti-counterfeiting method, which is applied to a user terminal, and includes the following steps S110 to S150.
Step S110, responding to a video shooting request of a server side, shooting a video, obtaining a first video file, and obtaining parameters to be encrypted generated during video shooting.
In this step, the user side responds to the video shooting request of the server side and obtains the first video file, either by shooting the video itself or by receiving a video file sent to it.
Specifically, the first video file is not limited to video directly captured by a video processing device (such as a visual intelligent monitor or an intelligent terminal device) on which the video encryption anti-counterfeiting system is mounted, but includes video captured by other external devices (such as a web camera, a camcorder, etc.) and then imported or synchronized into the video processing device. The video can also originate from a cloud storage platform and reach the video processing device directly through network transmission. Likewise, the video may be captured by a separate imaging device, such as a digital single-lens camera or a professional video camera, in different environments, and then uploaded to the video processing device by wireless transmission (Wi-Fi, Bluetooth, etc.), data cable transmission, and the like.
In this embodiment, the first video file may be a video captured by an imaging system (for example, a camera), where the imaging system may be configured for the video processing device itself or configured for other devices. For example, the first video file is a video file which is shot by an intelligent visual device with a video acquisition device or shot by a fixed network camera and sent to a video processing device, etc.
Further, the parameters to be encrypted generated during the shooting of the first video file are obtained, wherein the parameters to be encrypted include the shooting time, longitude and latitude, photographer, shooting purpose, loan application number, and other information related to the first video file at the time of video shooting.
Specifically, the shooting time can be obtained through a built-in timestamp function of the terminal device carrying the video encryption anti-counterfeiting system, or from the information of the video file itself. The longitude and latitude can be obtained through the built-in GPS module of the terminal device. The photographer can be obtained from the user login information, ensuring identity traceability. The video purpose is an image category selected by the user on the user side and is used to mark what the video is used for. The loan application form number is the number of a loan application form created on the user side, and the image data corresponding to that number carries the application form number.
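Before encryption, the parameters described above can be collected into a single structured payload. The following is a minimal standard-library sketch; the field names and the JSON encoding are illustrative assumptions, not mandated by the embodiment:

```python
import json
from datetime import datetime, timezone

def build_parameters_to_encrypt(photographer, longitude, latitude,
                                purpose, application_number):
    """Assemble the shooting metadata into one JSON payload for encryption.

    Field names are hypothetical; the embodiment only requires that the
    shooting time, location, photographer, purpose, and loan application
    number travel together as the parameters to be encrypted.
    """
    params = {
        "shooting_time": datetime.now(timezone.utc).isoformat(),
        "longitude": longitude,
        "latitude": latitude,
        "photographer": photographer,
        "purpose": purpose,
        "application_number": application_number,
    }
    # A canonical (sorted, compact) encoding keeps the later byte-for-byte
    # comparison on the server side deterministic.
    return json.dumps(params, sort_keys=True, separators=(",", ":"))
```

Encoding the payload canonically matters because the server later compares two independently decrypted copies of it for equality.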
And step S120, encrypting the parameters to be encrypted according to the encryption key sent by the server side to obtain the encryption parameters.
In the step, an encryption key from a server side is received, and parameters to be encrypted are encrypted through the encryption key to obtain the encryption parameters, so that the security of video information is ensured.
Specifically, these parameters to be encrypted are encrypted using a symmetric encryption algorithm such as AES (Advanced Encryption Standard), which converts plaintext data into ciphertext data by means of a key.
In some embodiments, the encryption key used in the encryption process may be embedded in a particular picture for secure transmission and concealment. The AES algorithm is used to encrypt the service requirement information (shooting time, longitude and latitude, photographer, video purpose, and application number) to generate the video anti-counterfeiting information corresponding to the video, which is subsequently used to determine whether the video is genuine.
Specifically, a source picture file (e.g., com_mark_bg.png) is first prepared, and a Base64-encoded AES key (e.g., the string "emhueGp1R2ZSNDVhdVJ5bkZ5aw=") is hidden in the picture. An array named hashNum defines how the byte data of the key is distributed into the byte stream of the picture, ensuring that the key can be securely and implicitly embedded.
Further, the first element of the hashNum array (here 16) specifies the byte block size that is read from the picture at a time, and the subsequent elements (7, 4, 6, 0, 15, 10, 9, 8) specify which positions (indices) in the read byte blocks are used to store the bytes of the encryption key. The index values must be smaller than the byte block size read each time and cannot be repeated, so that each byte of the encryption key can be uniquely embedded into the picture data and the security of the encryption key is ensured.
Further, the byte data of the source picture is read block by block, and for each block of data, the bytes of the encryption key are placed one by one into the positions specified by the hashNum index array. After all bytes of the encryption key have been embedded in the picture data in this way, the modified data is written to a new picture file (e.g., target.png) for storage.
Further, since a specific index array hashNum is used to embed the key, the same index array is also used to accurately recover the byte sequence of the key from the picture data when extracting the key, thereby realizing the secure and hidden storage of the encryption key.
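The hashNum embedding and the matching extraction can be sketched as follows. This is a simplified illustration that operates on a raw byte buffer standing in for the picture's byte stream; a production implementation would confine the substitutions to a region of the file that does not corrupt the image format:

```python
# hashNum[0] is the block size read from the picture at a time; the remaining
# elements are the in-block indices that receive successive key bytes.
HASH_NUM = [16, 7, 4, 6, 0, 15, 10, 9, 8]

def embed_key(picture_bytes: bytes, key: bytes) -> bytes:
    """Scatter the key bytes into the buffer at the hashNum positions."""
    block_size, indices = HASH_NUM[0], HASH_NUM[1:]
    # Indices must be unique and smaller than the block size (see above).
    assert len(set(indices)) == len(indices)
    assert all(i < block_size for i in indices)
    data = bytearray(picture_bytes)
    pos = 0  # next key byte to place
    for block_start in range(0, len(data) - block_size + 1, block_size):
        for idx in indices:
            if pos == len(key):
                return bytes(data)
            data[block_start + idx] = key[pos]
            pos += 1
    raise ValueError("picture too small to hold the key")

def extract_key(picture_bytes: bytes, key_len: int) -> bytes:
    """Recover the key bytes by walking the same hashNum positions."""
    block_size, indices = HASH_NUM[0], HASH_NUM[1:]
    out = bytearray()
    for block_start in range(0, len(picture_bytes) - block_size + 1, block_size):
        for idx in indices:
            if len(out) == key_len:
                return bytes(out)
            out.append(picture_bytes[block_start + idx])
    return bytes(out)
```

Because both sides walk the identical index array, extraction is the exact inverse of embedding, which is the property the paragraph above relies on.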
Step S130, generating a coding pattern according to the encryption parameters, and synthesizing a second video file according to the coding pattern and the first video file.
In this step, the encryption parameters are encoded into the coding pattern according to the anti-counterfeiting requirement, and the second video file is synthesized from the coding pattern and the first video file.
In some embodiments, by converting the encryption parameters into a two-dimensional code pattern, not only can high-density information be stored, but also scan verification is easy.
Specifically, the encryption parameters are converted into a two-dimensional code image using a suitable two-dimensional code generation library, API, or algorithm.
Further, embedding the two-dimensional code image into the video file, namely synthesizing a second video file according to the two-dimensional code pattern and the first video file.
In particular, complex audio and video editing tasks are preferably performed using a multimedia processing framework, with the video composition operations invoked through a command line or a programming interface. The two-dimensional code image may be embedded into a specific frame of the video; the first frame, or another location that does not easily affect the viewing experience, can generally be selected. The image may also be embedded into every frame of the video, for example the same two-dimensional code into each key frame, which improves the success rate of verification. At the same time, the quality of the two-dimensional code image must be high enough that the information in it can still be clearly scanned when the video is played or checked.
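Assuming FFmpeg as the multimedia processing framework, the overlay step can be driven from code by constructing a command line such as the one sketched below. Only the argument list is built here, and the file names are illustrative:

```python
def ffmpeg_overlay_cmd(video_in, qr_png, video_out, x=16, y=16, seconds=None):
    """Build an FFmpeg command that overlays a QR-code image on the video.

    With seconds=None the code is burned into every frame; otherwise only
    into the first `seconds` seconds (e.g. the opening frames), which is
    less intrusive for the viewer.
    """
    overlay = f"overlay={x}:{y}"
    if seconds is not None:
        # FFmpeg's per-filter enable expression limits the overlay in time.
        overlay += f":enable='between(t,0,{seconds})'"
    return [
        "ffmpeg", "-y",
        "-i", video_in,          # first video file
        "-i", qr_png,            # QR-code image carrying the encrypted parameters
        "-filter_complex", overlay,
        "-codec:a", "copy",      # audio is passed through untouched
        video_out,               # second video file
    ]
```

The list would then be handed to `subprocess.run(...)` on a machine where FFmpeg is installed.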
In some embodiments, the encryption parameters are converted into a two-dimensional code picture through a specific two-dimensional code generation algorithm. A two-dimensional code (QR code) is a matrix two-dimensional bar code: a graphical data representation in which data is recorded by a pattern of alternating black and white square modules on a plane. The encryption parameters are encoded into a two-dimensional code pattern by the two-dimensional code generation algorithm.
And step S140, embedding the encryption parameters into metadata of the second video file to obtain a third video file.
In this step, the encryption parameters are embedded into the metadata of the second video file to form a new video file (the third video file), consisting of the second video file and the metadata embedded with the encryption parameters. This ensures that even if the two-dimensional code image is lost or invisible, the anti-counterfeiting information can still be obtained by parsing the metadata of the video file itself.
In particular, the metadata may include, but is not limited to, encryption parameters, as well as any other auxiliary information that helps to verify the authenticity of the video, not only enhancing the security and authenticity of the video content, but also facilitating subsequent verification.
And step S150, transmitting the third video file to a server side so that the server side extracts the encryption parameters and the coding patterns from the third video file, adopts a decryption key corresponding to the encryption key to decrypt the encryption parameters and the coding patterns, and judges the authenticity of the third video file according to the consistency of the decrypted encryption parameters and the decrypted coding patterns.
In this step, after the third video file is transmitted to the server, the server extracts the second video file and metadata of the third video file from the third video file, further extracts the coding pattern from the second video file, and extracts the encryption parameter from the metadata of the third video file.
Further, the encryption parameters and the coding patterns are decrypted by adopting decryption keys corresponding to the encryption keys, so that decrypted encryption parameters and decrypted coding patterns are obtained, and authenticity of the third video file is judged according to consistency of the decrypted encryption parameters and the decrypted coding patterns.
In some embodiments, after the third video file is transmitted to the server, the server uses a corresponding library or tool (such as FFmpeg, MediaInfo, etc.) to parse the metadata of the third video file and extract the encryption parameters from it. The server also extracts the second video file from the third video file, further extracts the two-dimensional code image from the first frame or another preset key frame of the second video file, and scans and decodes the two-dimensional code image using a two-dimensional code reading library (such as ZXing, QRCode.js, etc.) to obtain the encryption parameters it carries.
Further, the two extracted encryption parameters are decrypted by an AES decryption algorithm respectively to obtain two decrypted encryption parameters, and authenticity of the third video file is determined by comparing consistency of the two decrypted encryption parameters.
Specifically, if the comparison results are consistent, the video file has not been tampered with and is real; if the comparison results are inconsistent, the video file may have been modified, is not the originally shot version, and is false.
Preferably, the consistency of other metadata, such as shooting time and geographical position, can be further verified to increase the reliability of verification, or the quality and integrity of the two-dimensional code image can be checked to ensure that it has not been affected by physical damage or malicious tampering.
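Such an auxiliary consistency check might look like the following sketch, where the field names and tolerances are illustrative assumptions rather than values fixed by the embodiment:

```python
from datetime import datetime

def metadata_consistent(meta_a: dict, meta_b: dict,
                        max_time_skew_s=2.0, max_coord_delta=1e-4) -> bool:
    """Cross-check auxiliary fields (shooting time, coordinates) recovered
    from the two channels, allowing small tolerances for clock granularity
    and coordinate rounding."""
    ta = datetime.fromisoformat(meta_a["shooting_time"])
    tb = datetime.fromisoformat(meta_b["shooting_time"])
    if abs((ta - tb).total_seconds()) > max_time_skew_s:
        return False
    return (abs(meta_a["longitude"] - meta_b["longitude"]) <= max_coord_delta
            and abs(meta_a["latitude"] - meta_b["latitude"]) <= max_coord_delta)
```

A failed auxiliary check would not by itself prove forgery, but it lowers confidence in the file and can trigger manual review.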
Further, according to the video verification result, clear feedback information is provided for the user, and the anti-counterfeiting verification result of the video file is informed.
In some embodiments, in step S120, the parameters to be encrypted are encrypted according to the encryption key sent by the server side to obtain the encryption parameters, which includes the following steps:
Step S210, obtaining a picture sent by a server side, wherein the picture contains key bytes for forming an encryption key;
Step S220, extracting key bytes from the picture, and combining the key bytes into an encryption key;
Step S230, the parameters to be encrypted are encrypted according to the encryption key to obtain the encryption parameters.
In some embodiments, the client receives a picture sent by the server, wherein the picture contains key bytes that make up the encryption key.
Specifically, before the picture is sent to the user side, a source picture file needs to be first prepared at the server side, and the encrypted AES key is hidden in the picture.
Specifically, the embedding is performed by using an array called hashNum, which defines how the byte data of the key is dispersed into the byte stream of the picture. The first element of the hashNum array specifies the byte block size read from the picture at a time, and the subsequent elements specify which positions (indices) in these byte blocks are used to store the bytes of the key. These index values must be smaller than the byte block size of each read and cannot be repeated, ensuring that each byte of the key is uniquely embedded in the picture data and resulting in a picture that contains the key bytes constituting the encryption key.
Further, byte data of the source picture is read block by block, and for each block of data, bytes of the encryption key are replaced one by one to these positions according to index positions specified by hashNum arrays. When all bytes of the encryption key are embedded in the picture data in this way, the modified data is written into a new picture file.
Further, since the specific index array hashNum is used to embed the key, the same index array is also used to accurately recover the byte sequence of the key from the picture data when extracting the key, thereby ensuring both security and concealment.
Specifically, the encryption key is generated based on the AES encryption algorithm and is used to encrypt the parameters to be encrypted; the decryption key is generated according to the AES encryption algorithm and the encryption key. Since AES supports multiple modes of operation, an encryption mode must first be selected according to the encryption requirements, such as electronic codebook mode (Electronic Codebook, ECB), cipher block chaining mode (Cipher Block Chaining, CBC), cipher feedback mode (Cipher Feedback, CFB), or output feedback mode (Output Feedback, OFB). These modes provide confidentiality; integrity protection can additionally be obtained by using an authenticated mode or a separate authentication tag.
Specifically, an initialization vector (IV) is randomly generated with the same length as the AES block size (128 bits, i.e., 16 bytes). AES is a block cipher and requires the length of the input data to be an integer multiple of the block size; when the data is not, a padding scheme must be employed.
Further, a secure library or API provided by the programming language is used to initialize the AES encryptor and set the required parameters including encryption Key (Key), initialization Vector (IV) and encryption Mode (Mode). And then the encryption function is called, and the parameters to be encrypted are transmitted as plaintext data and the previously configured encryption context for encryption.
Further, AES encrypts the parameters to be encrypted according to the selected operation mode to generate a ciphertext output; the encrypted data may be kept as a byte array or converted into another format, such as a Base64-encoded string, for storage or transmission.
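The block alignment, IV generation, and Base64 handling described above can be sketched with the standard library alone. The AES call itself would come from a cryptography library and is deliberately left out of this fragment:

```python
import base64
import os

BLOCK_SIZE = 16  # AES block size: 128 bits

def pkcs7_pad(data: bytes) -> bytes:
    """Pad to a multiple of the block size (PKCS#7): n bytes of value n."""
    n = BLOCK_SIZE - len(data) % BLOCK_SIZE
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip PKCS#7 padding, rejecting malformed trailers."""
    n = data[-1]
    if not 1 <= n <= BLOCK_SIZE or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def make_iv() -> bytes:
    # Random IV with the same length as the AES block size, as described above.
    return os.urandom(BLOCK_SIZE)

def to_base64(ciphertext: bytes) -> str:
    # Ciphertext bytes are commonly Base64-encoded for storage or transmission.
    return base64.b64encode(ciphertext).decode("ascii")
```

The padded plaintext, key, IV, and mode would then be passed to the encryptor provided by the chosen security library.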
In some embodiments, embedding the encryption parameter in the metadata of the second video file in step S140 results in a third video file, comprising the steps of:
Step S310, analyzing the metadata of the second video file to embed the encryption parameters into the metadata of the second video file, thereby obtaining a third video file.
In this step, the existing metadata of the video file is read using a metadata analysis tool such as MediaInfo, or directly using a related library in a programming language (e.g., the mutagen library for Python), which allows the tags or metadata of an audio/video file to be accessed and modified programmatically. MediaInfo is a tool that provides detailed information about media files, including technical and non-technical metadata.
Further, one or more suitable metadata fields are selected to store the encryption parameters as required, so that the third video file, consisting of the second video file and the metadata embedded with the encryption parameters, is obtained, enhancing the security and authenticity of the video content.
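One way to realize this step is again via FFmpeg's `-metadata` option; the sketch below only builds the command, and the `comment` field is an illustrative choice rather than the field the embodiment prescribes:

```python
def ffmpeg_metadata_cmd(video_in, video_out, encrypted_b64):
    """Build an FFmpeg command that copies the streams unchanged and writes
    the Base64 ciphertext into a container-level metadata field."""
    return [
        "ffmpeg", "-y",
        "-i", video_in,
        "-codec", "copy",                      # no re-encoding: bit-exact streams
        "-metadata", f"comment={encrypted_b64}",
        video_out,                             # third video file
    ]
```

Copying the streams without re-encoding keeps the embedded two-dimensional code frames bit-identical, which matters for the later scan on the server side.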
The embodiment of the application provides a video encryption anti-counterfeiting method. A video is shot in response to a video shooting request of the server side to obtain a first video file, and the parameters to be encrypted generated during video shooting are obtained. The parameters to be encrypted are encrypted according to an encryption key sent by the server side to obtain the encryption parameters. A coding pattern is generated according to the encryption parameters, and a second video file is synthesized from the coding pattern and the first video file. The encryption parameters are embedded into the metadata of the second video file to obtain a third video file, which is transmitted to the server side. The server side extracts the encryption parameters and the coding pattern from the third video file, decrypts them with a decryption key corresponding to the encryption key, and judges the authenticity of the third video file according to the consistency of the decrypted encryption parameters and the decrypted coding pattern. This effectively prevents the video from being counterfeited and guarantees the authenticity and integrity of the video.
Referring to fig. 2, an embodiment of the present application provides a video encryption anti-counterfeiting method, which is applied to a server, and includes the following steps S410 to S430.
Step S410, an encryption key is sent to the user terminal, so that the user terminal encrypts the parameters to be encrypted according to the encryption key to obtain encryption parameters, the user terminal generates a coding pattern according to the encryption parameters, synthesizes a second video file according to the coding pattern and the first video file, and embeds the encryption parameters into metadata of the second video file to obtain a third video file.
In this step, an encryption key is generated by the server side and sent to the client side for encrypting the video file or other files to be encrypted.
Specifically, the encryption key is sent to the user side, so that after receiving it the user side performs the encryption operation on the parameters to be encrypted and generates a coding pattern according to the encryption parameters, thereby realizing a second layer of protection used for subsequent video anti-counterfeiting verification.
Step S420, receiving a third video file sent by the user terminal.
In this step, the server receives the third video file sent from the user terminal, and is used for verifying the authenticity of the video through the server terminal.
Step S430, extracting the encryption parameters and the coding patterns from the third video file, decrypting the encryption parameters and the coding patterns by adopting a decryption key corresponding to the encryption key, and judging the authenticity of the third video file according to the consistency of the decrypted encryption parameters and the decrypted coding patterns.
In this step, after receiving the third video file sent from the user terminal, the server extracts the encryption parameter and the coding pattern for determining authenticity from the third video file, and further decrypts the encryption parameter and the coding pattern to determine authenticity of the third video file.
Preferably, the metadata of the second video file in the third video file, together with the second video file itself, is extracted through a tool such as MediaInfo or a related library in a programming language (for example, the mutagen library for Python).
Further, the previously embedded encryption parameters are extracted from the metadata of the second video file, and the corresponding Initialization Vector (IV) and/or authentication Tag (Tag) are also extracted for different AES encryption modes.
Further, specific frames can be intercepted by using a tool such as FFmpeg and saved as picture files, so that the two-dimensional code image is extracted from the first frame of the video or another preset key frame; the two-dimensional code image is then scanned and decoded by using a two-dimensional code reading library (such as ZXing, QRCode.js, etc.) to acquire the encryption information in it.
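Grabbing the frame that carries the two-dimensional code can likewise be delegated to FFmpeg. A minimal command builder is sketched below (file names are illustrative; decoding the extracted PNG would then be handed to a reading library such as ZXing):

```python
def ffmpeg_first_frame_cmd(video_in, frame_png):
    """Build an FFmpeg command that saves the first video frame as a PNG.

    -frames:v 1 stops after one decoded frame, i.e. the frame where the
    QR code was embedded in this embodiment's default placement.
    """
    return ["ffmpeg", "-y", "-i", video_in, "-frames:v", "1", frame_png]
```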
Further, the encryption parameters and the coding patterns are decrypted by adopting a decryption key corresponding to the encryption key, and then the two extracted encryption parameters are decrypted by using an AES decryption algorithm respectively. And determining whether the video file is true or false by comparing whether the decrypted result is consistent.
If the two decrypted encryption parameters are consistent, the video file has not been tampered with and is real; if they are inconsistent, the video file may have been modified, is not the originally photographed version, and is false.
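The final comparison can be sketched as follows. `hmac.compare_digest` is used here as a defensive choice for constant-time comparison; the embodiment itself only requires an equality check:

```python
import hmac

def verify_video(params_from_metadata: bytes, params_from_qr: bytes) -> bool:
    """Compare the two independently recovered parameter payloads.

    hmac.compare_digest runs in constant time, so the check leaks no
    timing information about where a forged payload first differs.
    """
    return hmac.compare_digest(params_from_metadata, params_from_qr)

def verdict(params_from_metadata: bytes, params_from_qr: bytes) -> str:
    if verify_video(params_from_metadata, params_from_qr):
        return "authentic"   # both channels agree: not tampered with
    return "forged"          # mismatch: file was modified after shooting
```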
In some embodiments, before the step S430 decrypts the encryption parameter and the encoding pattern using the decryption key corresponding to the encryption key, the method includes the following steps:
step S510, extracting a second video file and metadata of the second video file from the third video file;
Step S520, extracting the coding pattern from the second video file;
Step S530, parsing the metadata of the second video file to extract the encryption parameters from the metadata of the second video file.
In this embodiment, the encryption parameters and the encoding patterns are extracted from the third video file, and the second video file and the metadata of the second video file are extracted from the third video file, so that the encoding patterns are extracted from the second video file, and the metadata of the second video file is parsed to extract the encryption parameters from the metadata of the second video file.
In some embodiments, in step S430, the authenticity of the third video file is determined according to the decrypted encryption parameter and the consistency of the encoding pattern, including the following steps:
Step S610, when the decrypted encryption parameter is consistent with the decrypted encoding pattern, the third video file is true;
Step S620, when the decrypted encryption parameter and the decrypted encoding pattern are inconsistent, the third video file is false.
In this step, the decrypted encryption parameter and the decrypted encoding pattern are compared, and the authenticity of the video file is determined by whether the comparison result is consistent.
Specifically, when the decrypted encryption parameter and the decrypted encoding pattern are consistent, the third video file is true, which indicates that the video file is not tampered, and when the decrypted encryption parameter and the decrypted encoding pattern are inconsistent, the third video file is false, which indicates that the video may have been modified.
The embodiment of the application provides a video encryption anti-counterfeiting method. An encryption key is sent to the user side, so that the user side encrypts the parameters to be encrypted according to the encryption key to obtain the encryption parameters, generates a coding pattern according to the encryption parameters, synthesizes a second video file from the coding pattern and the first video file, and embeds the encryption parameters into the metadata of the second video file to obtain a third video file. The third video file sent by the user side is received; the encryption parameters and the coding pattern are extracted from it and decrypted with a decryption key corresponding to the encryption key, and the authenticity of the third video file is judged according to the consistency of the decrypted encryption parameters and the decrypted coding pattern. This effectively prevents the video from being forged and guarantees the authenticity and integrity of the video.
For the convenience of understanding of those skilled in the art, the embodiments are described below from the perspectives of the two participating sides, namely the user side and the server side, as follows:
For the server side, a source picture file (e.g., com_mark_bg.png) is first prepared, and a Base64-encoded AES key (e.g., the string "emhueGp1R2ZSNDVhdVJ5bkZ5aw=") is hidden in the picture.
In particular, secure and hidden embedding of the key is achieved using an array called hashNum, which defines how the byte data of the key is distributed into the byte stream of the picture. The first element of the hashNum array (here, 16) specifies the size of the byte block read from the picture at a time, and the subsequent elements (7, 4, 6, 0, 15, 10, 9, 8) specify which positions (indices) within each block are used to store the bytes of the key. These index values must be smaller than the block size and must not repeat, so that each byte of the key is embedded at a unique position in the picture data.
Further, the byte data of the source picture is read block by block, and for each block the bytes of the encryption key are written one by one into the positions specified by the hashNum indices. Once all bytes of the encryption key have been embedded in the picture data in this way, the modified data is written to a new picture file (e.g., target.png).
Further, because the specific index array hashNum was used to embed the key, the same index array is used when extracting the key to accurately recover the byte sequence of the key from the picture data, thereby ensuring both security and concealment.
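The embedding procedure described above can be sketched as follows. This is a simplified illustration that, like the description, treats the picture as an opaque byte stream; a real implementation would need to take care not to corrupt the PNG container structure:

```python
def embed_key(src_bytes: bytes, key: bytes,
              hash_num=(16, 7, 4, 6, 0, 15, 10, 9, 8)) -> bytes:
    """Embed `key` into a copy of `src_bytes`: one key byte per index listed in
    hash_num[1:], within successive blocks of hash_num[0] bytes."""
    block_size, indices = hash_num[0], hash_num[1:]
    data = bytearray(src_bytes)
    k = 0  # index of the next key byte to embed
    for start in range(0, len(data) - block_size + 1, block_size):
        for idx in indices:
            if k == len(key):          # whole key embedded
                return bytes(data)
            data[start + idx] = key[k]
            k += 1
    if k < len(key):
        raise ValueError("picture too small to hold the whole key")
    return bytes(data)
```

With the default hashNum, each 16-byte block carries up to eight key bytes, at offsets 7, 4, 6, 0, 15, 10, 9, 8 within the block.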
For the user side, the encryption parameters are first obtained. Specifically, the shooting time is obtained through the mobile device's built-in timestamp function, the longitude and latitude are obtained through the GPS module, the identity of the photographer is obtained from the user's login information, and the intended use of the video is selected or entered by the user.
Further, the picture file generated by the server side (e.g., target.png) is obtained, along with the specific index array hashNum used to embed the key, where hashNum defines the exact locations of the key bytes in the picture byte stream.
Further, the data of the picture file is read. Specifically, a byte array bytes is allocated to temporarily hold each data block read from the picture, sized to the predetermined block size per read (i.e., the first element of the hashNum array, here 16 bytes). A second byte array readBytes, large enough to hold the byte data of the entire picture file, is also allocated.
Further, the picture data is read block by block, and each read block is stored into the readBytes array. After each block is read, the key bytes are extracted one by one from the positions specified by the hashNum array (iterating from index 1, since index 0 holds the block size) and stored into a new byte array, eventually forming an array containing the complete AES key.
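The extraction step mirrors the embedding. A minimal sketch follows; the function signature is illustrative, and in particular the key length is assumed to be known to the extractor:

```python
def extract_key(pic_bytes: bytes, key_len: int,
                hash_num=(16, 7, 4, 6, 0, 15, 10, 9, 8)) -> bytes:
    """Recover key_len key bytes from pic_bytes using the same hashNum layout
    as the embedding (hash_num[0] = block size, remaining elements = indices)."""
    block_size, indices = hash_num[0], hash_num[1:]
    out = bytearray()
    for start in range(0, len(pic_bytes) - block_size + 1, block_size):
        for idx in indices:
            if len(out) == key_len:
                return bytes(out)
            out.append(pic_bytes[start + idx])
    if len(out) == key_len:
        return bytes(out)
    raise ValueError("picture data exhausted before the whole key was recovered")
```

Because the same index array is used on both sides, extraction is the exact inverse of embedding, byte for byte.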
Further, the encryption parameters are encrypted with the AES encryption algorithm using the key extracted from the array, producing the video anti-counterfeiting information VID_Sign, which is then converted into a two-dimensional code picture IMG.jpg by a two-dimensional code generation algorithm.
Specifically, the AES encryption algorithm converts plaintext data into ciphertext data through the encryption key, guaranteeing the confidentiality and integrity of the information, and the two-dimensional code generation algorithm encodes the video anti-counterfeiting information VID_Sign into a two-dimensional code pattern, facilitating the subsequent video verification process.
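The assembly of VID_Sign can be sketched as below. The AES step is represented by a caller-supplied `encrypt` callable, shown here with an identity stand-in so the sketch runs without third-party libraries; a real deployment would pass an AES encryption function (e.g., from PyCryptodome) and feed the resulting string to a QR-code library to produce IMG.jpg. All field names are illustrative assumptions:

```python
import base64
import json


def build_vid_sign(fields: dict, encrypt) -> str:
    """Serialize the parameters to be encrypted into a canonical byte string,
    apply the supplied cipher (AES in the described method), and Base64-encode
    the ciphertext to obtain the anti-counterfeiting string VID_Sign."""
    payload = json.dumps(fields, sort_keys=True,
                         separators=(",", ":")).encode("utf-8")
    return base64.b64encode(encrypt(payload)).decode("ascii")


# Identity stand-in for the AES cipher -- illustration only, not encryption.
vid_sign = build_vid_sign(
    {"time": "2024-05-01T10:00:00", "lat": 30.5, "lon": 114.3,
     "photographer": "user01", "purpose": "evidence"},
    encrypt=lambda plaintext: plaintext,
)
```

Canonical serialization (sorted keys, fixed separators) matters here: both sides must produce byte-identical plaintext for the later consistency comparison to succeed.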
Further, IMG.jpg is synthesized with the original video file using a multimedia processing tool such as FFmpeg, generating a new video file. At the same time, VID_Sign is embedded into the metadata of the new video file, tightly binding the video anti-counterfeiting information to the video file.
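As one plausible FFmpeg invocation for this synthesis step (the overlay position and the choice of the `comment` metadata field are assumptions; the description only names FFmpeg as an example tool):

```python
def ffmpeg_synthesis_cmd(video_in: str, qr_img: str,
                         vid_sign: str, video_out: str) -> list:
    """Build (but do not run) an FFmpeg command that overlays the QR picture
    on the video and writes VID_Sign into the output file's metadata."""
    return [
        "ffmpeg", "-y",
        "-i", video_in,                      # the first video file
        "-i", qr_img,                        # the two-dimensional code picture
        "-filter_complex", "overlay=10:10",  # draw the QR code near the top-left
        "-metadata", f"comment={vid_sign}",  # embed the anti-counterfeiting string
        "-codec:a", "copy",                  # keep the audio stream unchanged
        video_out,
    ]
```

The returned argument list can be handed directly to `subprocess.run` on a machine with FFmpeg installed.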
Further, after completing the above operations, the user side uploads the processed video file to the server.
For the server side, the server receives the video file to be verified, parses its metadata, and extracts and decrypts the video anti-counterfeiting information VID_Sign_Rec1. The decryption process uses the same AES algorithm and key as the encryption, restoring the ciphertext data to plaintext data.
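On the extraction side, one way to read the embedded string back out of the metadata (an assumption; the description does not prescribe a tool) is via ffprobe, FFmpeg's companion inspector:

```python
def ffprobe_metadata_cmd(video_in: str) -> list:
    """Build an ffprobe command that prints only the `comment` metadata tag,
    i.e. the embedded VID_Sign, assuming it was stored under that tag."""
    return [
        "ffprobe", "-v", "quiet",
        "-show_entries", "format_tags=comment",       # select just this tag
        "-of", "default=noprint_wrappers=1:nokey=1",  # print the bare value
        video_in,
    ]
```

Running the command via `subprocess.run(..., capture_output=True)` would yield the VID_Sign_Rec1 ciphertext string on stdout, ready for decryption.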
Further, the first frame of the video is captured, and the video anti-counterfeiting information VID_Sign_Rec2 in the two-dimensional code picture is parsed and decrypted, again using the AES algorithm and key.
Further, the authenticity of the video is judged by comparing whether VID_Sign_Rec1 and VID_Sign_Rec2 are consistent: if they are consistent, the video is legitimate; if they are inconsistent, the video is forged.
In some embodiments, anti-counterfeiting processing is applied to video shot during a credit agency's loan application process, with the following specific steps:
First, the client manager opens the video encryption software and selects the video shooting function. The software automatically obtains the shooting time, the longitude and latitude, the purpose of the video (e.g., interior and exterior views of the borrower's business premises), the identity of the photographer (the client manager), and the loan application number.
Further, the video encryption software parses the AES key stored in the picture, encrypts the above information using the AES encryption algorithm, and generates the video anti-counterfeiting information VID_Sign, which is then converted into a two-dimensional code picture IMG.jpg.
Further, IMG.jpg is synthesized with the original video file using FFmpeg or a similar tool to generate a new video file, and VID_Sign is embedded into its metadata. The client manager then uploads the processed video file to the credit agency's server.
On the server side, after receiving the to-be-verified video of the interior and exterior views of the borrower's premises, the credit agency's server parses the metadata of the video file and extracts and decrypts the video anti-counterfeiting information VID_Sign_Rec1.
Specifically, the first frame of the video is captured; since every such video carries the anti-counterfeiting information, capturing the first frame allows the watermark to be parsed quickly, and the video anti-counterfeiting information VID_Sign_Rec2 in the two-dimensional code picture is parsed and decrypted. The authenticity of the video is then verified by comparing VID_Sign_Rec1 with VID_Sign_Rec2. When the two are consistent, the video has not been tampered with and is a legitimate video of the borrower's business premises; when they are inconsistent, the video has been tampered with and is not the original video file. The video is thereby effectively protected against forgery, and its authenticity and integrity are ensured.
As shown in Fig. 3, some embodiments of the present application provide a video encryption anti-counterfeiting system, which includes a response module 310, an encryption module 320, an encoding module 330, an embedding module 340, and a checking module 350. Specifically:
the response module 310 is configured to respond to a video shooting request from the server side, shoot a video to obtain a first video file, and obtain the parameters to be encrypted that are generated during video shooting;
the encryption module 320 is configured to encrypt the parameters to be encrypted according to the encryption key sent by the server side to obtain the encryption parameters;
the encoding module 330 is configured to generate an encoding pattern according to the encryption parameters, and to synthesize a second video file according to the encoding pattern and the first video file;
the embedding module 340 is configured to embed the encryption parameters into the metadata of the second video file to obtain a third video file;
the checking module 350 is configured to transmit the third video file to the server side, so that the server side extracts the encryption parameters and the encoding pattern from the third video file, decrypts them using a decryption key corresponding to the encryption key, and determines the authenticity of the third video file according to the consistency of the decrypted encryption parameters and the decrypted encoding pattern.
In some embodiments, the encryption module 320 may be configured to obtain a picture sent by the server side, where the picture contains the key bytes that form the encryption key.
In some embodiments, the encryption module 320 may be configured to extract the key bytes from the picture and combine them into the encryption key.
In some embodiments, the encryption module 320 may be configured to encrypt the parameters to be encrypted according to the encryption key to obtain the encryption parameters.
In some embodiments, the embedding module 340 may be configured to parse the metadata of the second video file and embed the encryption parameters in that metadata to obtain the third video file.
In some embodiments, the encryption key is generated based on the AES encryption algorithm, the parameters to be encrypted are encrypted according to that key, and the decryption key used by the checking module 350 is generated according to the AES encryption algorithm and the encryption key.
It should be noted that, the video encryption anti-counterfeiting system provided in this embodiment and the video encryption anti-counterfeiting method described above are based on the same inventive concept, so the related content of the video encryption anti-counterfeiting method described above is also applicable to the content of the video encryption anti-counterfeiting system, and therefore, the description thereof is omitted herein.
The system responds to a video shooting request from the server side by shooting a video to obtain a first video file and obtaining the parameters to be encrypted that are generated during shooting. It encrypts those parameters according to the encryption key sent by the server side to obtain the encryption parameters, generates a coding pattern according to the encryption parameters, synthesizes a second video file according to the coding pattern and the first video file, embeds the encryption parameters into the metadata of the second video file to obtain a third video file, and transmits the third video file to the server side. The server side then extracts the encryption parameters and the coding pattern from the third video file, decrypts them using a decryption key corresponding to the encryption key, and judges the authenticity of the third video file according to the consistency of the decrypted encryption parameters and the decrypted coding pattern. The system thus effectively prevents the video from being forged and ensures the authenticity and integrity of the video.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the video encryption anti-counterfeiting method when executing the computer program.
As shown in Fig. 4, which is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application, the electronic device includes:
at least one battery;
at least one memory;
at least one processor;
at least one program;
The program is stored in the memory, and the processor executes at least one program to implement the video encryption anti-counterfeiting method according to the present disclosure.
The electronic device may be any intelligent terminal, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, and the like.
The electronic device according to the embodiment of the application is described in detail below.
The processor 1600 may be implemented by a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to implement the technical solutions provided by the embodiments of the present disclosure;
the memory 1700 may be implemented in the form of Read-Only Memory (ROM), static storage, dynamic storage, or Random Access Memory (RAM). The memory 1700 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented by software or firmware, the relevant program code is stored in the memory 1700 and invoked by the processor 1600 to perform the video encryption anti-counterfeiting method of the embodiments of the present disclosure.
an input/output interface 1800 for implementing information input and output;
a communication interface 1900 for implementing communication interaction between this device and other devices, either in a wired manner (e.g., USB, network cable) or in a wireless manner (e.g., mobile network, Wi-Fi, Bluetooth);
a bus 2000 that transfers information between the various components of the device (e.g., the processor 1600, the memory 1700, the input/output interface 1800, and the communication interface 1900);
wherein the processor 1600, the memory 1700, the input/output interface 1800, and the communication interface 1900 are communicatively connected to one another within the device via the bus 2000.
The disclosed embodiments also provide a storage medium that is a computer-readable storage medium storing computer-executable instructions for causing a computer to perform a video encryption anti-counterfeiting method as described above.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present disclosure are for more clearly describing the technical solutions of the embodiments of the present disclosure, and do not constitute a limitation on the technical solutions provided by the embodiments of the present disclosure, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present disclosure are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not limit the embodiments of the present disclosure, and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that only A exists, that only B exists, or that both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b or c" may represent a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be singular or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The storage medium includes various media capable of storing programs, such as a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
While the preferred embodiments of the present application have been described in detail, the embodiments of the present application are not limited to the above-described embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the embodiments of the present application, and these equivalent modifications or substitutions are included in the scope of the embodiments of the present application as defined in the appended claims.
The embodiments of the present application have been described in detail with reference to the accompanying drawings, but the present application is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present application.