CN113395475A - Data processing method and device, electronic equipment and storage equipment - Google Patents

Data processing method and device, electronic equipment and storage equipment

Info

Publication number
CN113395475A
Authority
CN
China
Prior art keywords
information
embedded
channel
embedding
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010166869.9A
Other languages
Chinese (zh)
Other versions
CN113395475B (en)
Inventor
Liu Yongliang (刘永亮)
Hui Chen (惠晨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010166869.9A
Publication of CN113395475A
Application granted
Publication of CN113395475B
Legal status: Active
Anticipated expiration: status pending

Abstract

The application discloses a data processing method comprising the following steps: acquiring a carrier video file and information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information in the brightness channel and the embedded information in the chrominance channel satisfy a preset correlation condition. By embedding the information into both the brightness channel and the chrominance channel of the carrier video file, and enforcing the preset correlation condition between the two embedded copies, the method improves the accuracy with which the embedded information can later be extracted from the target video file.

Description

Data processing method and device, electronic equipment and storage equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage device.
Background
With the development of digital media and internet technology, and especially the gradual adoption of fifth-generation mobile communication (5G), the copyright protection of digital data such as video, audio, and images has attracted increasing attention. Digital watermarking is the technique commonly used for this purpose: watermark information is embedded in an implicit manner into resources such as video files, audio files, and images to provide copyright protection.
In current digital watermarking practice, when information to be embedded (such as watermark information) is embedded into a carrier object (a video file, audio file, image, or similar resource), and especially when watermark information is embedded into a video file, a common embedding strategy is as follows: since the luminance of several consecutive frames in a video file is generally consistent, a block embedding method embeds the watermark into the luminance channel of the video file, representing each watermark bit by modifying the ascending or descending order of the luminance of several consecutive frames.
Although this embedding strategy can embed watermark information into a video file, the luminance of the consecutive frames carrying a watermark bit is not always consistent; for example, the luminance of those frames may already be continuously increasing or continuously decreasing. The strategy modifies the luminance order by subtracting a small just noticeable difference from the first few frames and adding it to the last few frames (or vice versa). However, when the existing luminance trend across the consecutive frames is strong, adding or subtracting such a small amount cannot flip the trend from ascending to descending or from descending to ascending. As a result, when the watermark is later extracted from the watermarked video file by reading the ascending/descending order of the luminance of consecutive frames, accurate watermark information cannot be obtained. The watermark embedding strategy in the prior art therefore suffers from the problem that the embedded watermark information cannot be accurately extracted from the watermark video file.
The same problem arises when information other than watermark information is embedded into a carrier object other than a video file, such as a video in another format or an object, like an image sequence, comprising several consecutive frames: because of the embedding strategy, the embedded information cannot be accurately extracted from the object to be detected that corresponds to the carrier object.
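The block embedding idea described above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: frames are flat lists of luma pixel values, the group size and the `delta` step (standing in for a just-noticeable difference) are assumed values, and a bit is represented purely by whether the later frames end up brighter than the earlier ones.

```python
def embed_bit_luma(frames, bit, delta=2.0):
    """Embed one bit into an odd-length run of consecutive frames by forcing
    an ascending (bit 1) or descending (bit 0) luminance trend.
    frames: list of flat per-frame luma pixel lists."""
    mid = len(frames) // 2
    sign = 1.0 if bit == 1 else -1.0
    out = []
    for i, frame in enumerate(frames):
        if i < mid:
            out.append([p - sign * delta for p in frame])  # frames before the middle
        elif i > mid:
            out.append([p + sign * delta for p in frame])  # frames after the middle
        else:
            out.append(list(frame))                        # middle frame untouched
    return out

def extract_bit_luma(frames):
    """Recover the bit from the ascending/descending order of mean luma."""
    mid = len(frames) // 2
    mean = lambda fs: sum(sum(f) for f in fs) / sum(len(f) for f in fs)
    return 1 if mean(frames[mid + 1:]) > mean(frames[:mid]) else 0
```

The failure mode the Background describes is visible here: if the frames already carry a strong monotone trend, a small `delta` cannot reverse the comparison in `extract_bit_luma`, so the wrong bit is read.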
Disclosure of Invention
The embodiments of the present application provide a data processing method to solve the prior-art problem that embedded information cannot be accurately extracted from a video file to be detected because of the information embedding strategy.
An embodiment of the present application provides a data processing method, including: acquiring a carrier video file and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
Optionally, the information to be embedded is embedded into the luminance channel and the chrominance channel of the carrier video file by the following steps, so that the correlation between the embedded information in the luminance channel and the embedded information in the chrominance channel is not less than a preset correlation threshold: embedding the information to be embedded into the luminance channel of the carrier video file in sequential order; embedding the information to be embedded into the color component channel of the chrominance channel of the carrier video file in reverse order; and embedding the information from the start bit to the middle bit of the information to be embedded, and the information from the middle bit to the end bit, each in reverse order, into the saturation component channel of the chrominance channel of the carrier video file.
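The three channel orderings can be illustrated as pure bit-sequence transforms. This is one plausible reading of the clause above, not a definitive implementation: the saturation channel is assumed to carry each half of the payload reversed independently, and checking mutual consistency after undoing each ordering is what realizes the correlation condition.

```python
def channel_orders(bits):
    """Produce the three correlated bit orderings: luma in sequential order,
    the color component reversed, and the saturation component with each
    half reversed independently (an assumed interpretation)."""
    mid = len(bits) // 2
    luma = list(bits)
    color = list(reversed(bits))
    sat = list(reversed(bits[:mid])) + list(reversed(bits[mid:]))
    return luma, color, sat

def correlated(luma, color, sat):
    """Undo each channel's ordering and check all three agree on one payload."""
    mid = len(luma) // 2
    from_color = list(reversed(color))
    from_sat = list(reversed(sat[:mid])) + list(reversed(sat[mid:]))
    return luma == from_color == from_sat
```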
optionally, the embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier video file to obtain a target video file includes: grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded; and embedding the at least one group of information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain the target video file.
Optionally, the embedding the at least one group of information to be embedded into the luminance channel and the chrominance channel of the carrier video file to obtain the target video file includes: acquiring first packet information to be embedded and second packet information to be embedded from the at least one group of packet information to be embedded, wherein the second packet information to be embedded is information after the first packet information to be embedded; and embedding the first group of information to be embedded into a first video frame of the carrier video file, and embedding the second group of information to be embedded into a second video frame of the carrier video file to obtain a target video file, wherein the number of video frames contained in the first video frame corresponds to the length of the first group of information to be embedded, and the second video frame is a video frame after the first video frame.
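The correspondence between packet length and frame count can be sketched as a simple partition of the frame index range. The frames-per-bit figure is an assumption introduced for illustration; the patent only states that the number of frames corresponds to the packet length.

```python
def assign_frames(packets, frames_per_bit=5):
    """Map each packet (a list of bits) to a consecutive span of frame
    indices, so the second packet's frames follow the first packet's frames."""
    start = 0
    spans = []
    for pkt in packets:
        n = len(pkt) * frames_per_bit  # frame count corresponds to packet length
        spans.append((start, start + n))
        start += n
    return spans
```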
Optionally, the embedding the information to be embedded in the first packet into the first video frame of the carrier video file includes: embedding the first group of information to be embedded into a brightness channel of the first video frame according to a sequential embedding sequence; embedding the first group of information to be embedded into a color component channel of a chrominance channel of the first video frame according to the embedding sequence of the reverse order; and embedding the information from the start bit to the middle bit of the first packet of information to be embedded and the information from the middle bit to the end bit of the first packet of information to be embedded into the saturation component channel of the chrominance channel of the first video frame in the reverse embedding sequence.
Optionally, the embedding the information to be embedded in the first packet into the first video frame of the carrier video file includes: acquiring a video frame to be processed from the first video frame, wherein the video frame to be processed is a preset odd number of continuous video frames; and embedding at least one bit of information in the first group of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed.
Optionally, the embedding at least one bit of information in the first packet of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed includes: adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed, and establishing a mapping relation between the video frames to be processed and the at least one bit of information; and embedding the at least one bit of information into a brightness channel and a chroma channel of the video frame to be processed through the mapping relation.
Optionally, the mapping relationship includes: mapping first element information in an ascending relationship of brightness information between video frames in the video frames to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frames to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frames to be processed; mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed; wherein the first element information and the second element information are basic elements constituting the information to be embedded, and the first element information is different from the second element information.
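Since the mapping above assigns the same trend to all three channels for a given element, a natural way to exploit it at extraction time is a majority vote over the per-channel trends. The voting rule is a robustness sketch of my own, not the patent's stated decision procedure; '1' and '0' stand in for the first and second element information.

```python
def bit_from_trends(luma_trend, color_trend, sat_trend):
    """Decode one element from the three channel trends: ascending maps to
    the first element (here 1), descending to the second (here 0), decided
    by majority vote across luma, color, and saturation."""
    votes = [1 if t == "ascending" else 0
             for t in (luma_trend, color_trend, sat_trend)]
    return 1 if sum(votes) >= 2 else 0
```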
Optionally, the mapping relationship is established by the following steps: acquiring, for each video frame in the video frames to be processed, a block to be modified in the luminance channel, in the color component channel of the chrominance channel, and in the saturation component channel; acquiring the just noticeable difference corresponding to the luminance channel, the color component channel, and the saturation component channel of each video frame; and adjusting luminance information, color information, and saturation information between at least two video frames of the video frames to be processed according to the block to be modified and the just noticeable difference, so as to establish the mapping relationship.
Optionally, the adjusting, according to the block to be modified and the just noticeable difference, luminance information, color information, and saturation information between at least two video frames of the video frames to be processed to establish the mapping relationship includes: if the information to be processed in the first group of information to be embedded is the first element information, subtracting the just noticeable difference corresponding to the block to be modified from at least one pixel in the block to be modified in each video frame before the middle video frame of the video frames to be processed, and adding the just noticeable difference to at least one pixel in the block to be modified in each video frame after the middle video frame; and if the information to be processed is the second element information, adding the just noticeable difference to at least one pixel in the block to be modified in each video frame before the middle video frame, and subtracting the just noticeable difference from at least one pixel in the block to be modified in each video frame after the middle video frame.
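The block adjustment step can be sketched for a single channel. This is a minimal illustration under stated assumptions: frames are 2-D lists of pixel values for one channel, a single 2×2 block per frame is modified, and the just-noticeable difference is passed in as a constant rather than computed from a perceptual model.

```python
def adjust_blocks(frames, block_top_left, jnd, element_is_first):
    """Shift one 2x2 block per frame by the just-noticeable difference so an
    odd run of frames encodes an ascending trend (first element) or a
    descending trend (second element). Modifies `frames` in place."""
    mid = len(frames) // 2
    r, c = block_top_left
    sign = 1 if element_is_first else -1
    for i, frame in enumerate(frames):
        if i == mid:
            continue  # the middle frame is left unmodified
        step = -sign * jnd if i < mid else sign * jnd
        for dr in range(2):          # 2x2 block, as in the patent's example
            for dc in range(2):
                frame[r + dr][c + dc] += step
    return frames
```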
Optionally, the block to be modified is a 2 × 2 data block.
Optionally, the grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded includes: grouping the information to be embedded by a preset first length to obtain at least one group of information to be processed; and adding error correction information into the at least one group of information of the packets to be processed to obtain at least one group of information of the packets to be embedded, wherein the length of the information of the packets to be embedded is a preset second length, and the preset second length is greater than the preset first length.
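The packetization step can be sketched as follows. The patent does not name the error-correction code, so a toy parity tail is used purely as a placeholder; a real system would use something like BCH or Reed-Solomon, and the 8/12-bit lengths are assumed defaults.

```python
def make_packets(bits, first_len=8, second_len=12):
    """Split the payload into first_len-bit groups, then pad each group to
    second_len bits with a toy parity-based error-correction tail
    (placeholder for a real code such as BCH)."""
    packets = []
    for i in range(0, len(bits), first_len):
        chunk = bits[i:i + first_len]
        parity = [sum(chunk) % 2] * (second_len - len(chunk))
        packets.append(chunk + parity)
    return packets
```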
An embodiment of the present application further provides a data processing method, including: acquiring a video file to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
Optionally, the method includes: acquiring at least one group of to-be-processed packet embedding information from the brightness channel and the chrominance channel of the to-be-detected video file by judging whether the embedding information in the brightness channel of the to-be-detected video file and the embedding information in the chrominance channel of the to-be-detected video file meet a preset correlation condition; and obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information.
Optionally, the obtaining at least one group of to-be-processed packet embedding information includes: acquiring at least one group of original packet embedding information from the video file to be detected, wherein the original packet embedding information comprises first component information acquired from a brightness channel of the video file to be detected, second component information acquired from a color component channel of a chromaticity channel of the video file to be detected and third component information acquired from a saturation component channel of the chromaticity channel of the video file to be detected; and determining at least one group of to-be-processed packet embedding information from the at least one group of original packet embedding information by judging whether the correlation among the first component information, the second component information and the third component information is not less than a preset correlation threshold value.
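The correlation test between the three component sequences can be sketched as a pairwise agreement check. It assumes each channel's embedding order has already been undone, and the 0.9 threshold is illustrative; the patent only requires the correlation to be not less than a preset threshold.

```python
def agreement(a, b):
    """Fraction of positions at which two extracted bit sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def is_valid_packet(luma_bits, color_bits, sat_bits, threshold=0.9):
    """Accept a candidate packet only when the first, second, and third
    component information are mutually consistent above the threshold."""
    return (agreement(luma_bits, color_bits) >= threshold
            and agreement(luma_bits, sat_bits) >= threshold)
```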
Optionally, the obtaining at least one group of original packet embedding information from the video file to be detected includes: traversing a brightness channel and a chrominance channel of the video file to be detected by a sliding window to obtain at least one group of original packet embedding information; the step length of the sliding window is a preset step length value, the length of the sliding window is a preset window length, and the preset window length corresponds to the length of a packet used when the embedded information packet is embedded into the carrier video file.
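The sliding-window traversal amounts to enumerating candidate frame ranges, with the window length matching the per-packet frame count used at embedding time. A minimal sketch:

```python
def sliding_windows(n_frames, window_len, step=1):
    """Yield (start, end) frame ranges scanned for candidate packets; `step`
    is the preset step value and `window_len` the preset window length."""
    for start in range(0, n_frames - window_len + 1, step):
        yield (start, start + window_len)
```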
Optionally, traversing the luminance channel and the chrominance channel of the video file to be detected with a sliding window to obtain at least one group of original packet embedding information, including: according to the sliding window, acquiring a first candidate video frame from the video file to be detected; acquiring first original packet embedding information from the first candidate video frame according to a preset mapping relation; and acquiring at least one group of original packet embedding information according to the first original packet embedding information.
Optionally, the obtaining, according to a preset mapping relationship, first original packet embedding information from the first candidate video frame includes: acquiring a first video frame to be processed from the first candidate video frame, wherein the first video frame to be processed is a preset odd number of continuous video frames; acquiring first to-be-processed component embedded information containing at least one bit of information from a brightness channel of the first to-be-processed video frame, a color component channel and a saturation component channel of a chrominance channel according to the preset mapping relation; and acquiring the first original packet embedding information according to the first to-be-processed component embedding information.
Optionally, the preset mapping relationship includes: mapping first element information in an ascending relationship of luminance information between video frames in the first video frame to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frame to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frame to be processed; mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed; wherein the first element information and the second element information are basic elements constituting the target embedded information, and the first element information is different from the second element information.
Optionally, the obtaining, according to the preset mapping relationship, first to-be-processed component embedding information including at least one bit of information from a luminance channel and a chrominance channel of the first to-be-processed video frame includes: calculating the pixel sum of a brightness channel, the pixel sum of a color component channel of a chrominance channel and the pixel sum of a saturation component channel of the chrominance channel of each frame of the first video frame to be processed to obtain the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed; and acquiring first to-be-processed component embedded information containing at least one bit of information according to the ascending and descending order relation and the preset mapping relation.
Optionally, the method includes: acquiring a video frame between a start frame and an intermediate frame of the first video frame to be processed as a first video frame to be calculated; acquiring a video frame between an intermediate frame and an end frame of the first video frame to be processed as a second video frame to be calculated; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the first video frame to be calculated as a first brightness pixel sum, a first color pixel sum and a first saturation pixel sum; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the second video frame to be calculated as a second brightness pixel sum, a second color pixel sum and a second saturation pixel sum; and obtaining the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed by comparing the first brightness pixel with the second brightness pixel, comparing the first color pixel with the second color pixel, and comparing the first saturation pixel with the second saturation pixel.
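The pixel-sum comparison described above reduces, for one channel, to summing the frames before the middle frame and the frames after it. A minimal sketch with frames as flat pixel lists:

```python
def frame_trend(frames):
    """Classify an odd run of frames as 'ascending' or 'descending' by
    comparing the total pixel sum before the middle frame with the total
    pixel sum after it (ties are read as descending here, an assumption)."""
    mid = len(frames) // 2
    first = sum(sum(f) for f in frames[:mid])
    second = sum(sum(f) for f in frames[mid + 1:])
    return "ascending" if second > first else "descending"
```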
Optionally, the obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information includes: acquiring the overlapping relation and the arrangement relation between the video frames corresponding to the at least one group of packet embedding information to be processed, and taking the packet embedding information to be processed, which has no overlapping relation between the corresponding video frames and has a continuous arrangement relation, as target packet embedding information; and obtaining the target embedded information according to the target grouping embedded information.
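The overlap filtering can be sketched as a greedy scan over candidates sorted by start frame; the "keep the earliest, drop anything overlapping it" rule is one simple policy consistent with the clause above, not necessarily the patent's exact selection.

```python
def select_packets(candidates):
    """Keep candidate packets whose frame spans do not overlap.
    candidates: list of ((start, end), bits) sorted by start frame."""
    kept = []
    for span, bits in candidates:
        if kept and span[0] < kept[-1][0][1]:
            continue  # overlaps the previously kept packet: drop it
        kept.append((span, bits))
    return [bits for _, bits in kept]
```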
Optionally, the obtaining the target embedded information according to the target packet embedded information includes: carrying out error correction processing on the target packet embedded information to obtain error-corrected target packet embedded information; and acquiring the target embedded information according to the error-corrected target packet embedded information.
An embodiment of the present application further provides a data processing apparatus, including: the acquisition unit is used for acquiring a carrier video file and acquiring information to be embedded; and the embedding unit is used for embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
An embodiment of the present application further provides an electronic device, including:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring a carrier video file and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
An embodiment of the present application further provides a storage device, in which a program of the data processing method is stored, where the program is run by a processor and executes the following steps:
acquiring a carrier video file and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
An embodiment of the present application further provides another data processing apparatus, including: the device comprises a to-be-detected video file acquisition unit, a to-be-detected video file acquisition unit and a to-be-detected video file acquisition unit, wherein the to-be-detected video file acquisition unit is used for acquiring a to-be-detected video file; and the embedded information acquisition unit is used for acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition or not.
An embodiment of the present application further provides another electronic device, including:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring a video file to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
An embodiment of the present application further provides another storage device, in which a program of a data processing method is stored, where the program is run by a processor and executes the following steps:
acquiring a video file to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
The present application further provides a data processing method, including: acquiring a carrier object and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
The present application also provides a data processing apparatus, comprising: the object acquisition unit is used for acquiring the carrier object and acquiring the information to be embedded; and the information embedding unit is used for embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
The present application further provides an electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring a carrier object and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
The present application also provides a storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
The present application further provides a data processing method, including: acquiring an object to be detected; and acquiring target embedded information from the brightness channel and the chromaticity channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chromaticity channel of the object to be detected meet a preset correlation condition.
The present application also provides a data processing apparatus, comprising: the object acquisition unit is used for acquiring an object to be detected; and the embedded information acquisition unit is used for acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
The present application further provides an electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring an object to be detected; and acquiring target embedded information from the brightness channel and the chromaticity channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chromaticity channel of the object to be detected meet a preset correlation condition.
The present application also provides a storage device storing a program of a data processing method, the program being executed by a processor to perform the steps of:
acquiring an object to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
Compared with the prior art, the method has the following advantages:
an embodiment of the present application provides a data processing method, including: acquiring a carrier video file and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition. Because the information to be embedded is embedded into both the brightness channel and the chrominance channel of the carrier video file, and the two sets of embedded information meet the preset correlation condition, when the embedded information is extracted from the target video file, i.e., the file to be detected, it can be accurately extracted according to the correlation of the embedded information acquired from the brightness channel and the chrominance channel of the target video file, which improves the accuracy of the extracted embedded information.
An embodiment of the present application further provides a data processing method, including: acquiring a video file to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition. When the target embedded information is extracted from the video file to be detected, the method judges whether the embedded information obtained from the brightness channel and the embedded information obtained from the chrominance channel of the video file to be detected meet the preset correlation condition, and takes the embedded information meeting the preset correlation condition as the final target embedded information, so that the accuracy of the extracted target embedded information can be improved.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a data processing method according to a first embodiment of the present application.
Fig. 2 is a flowchart of a data processing method according to a first embodiment of the present application.
Fig. 3 is a flowchart of another data processing method according to a second embodiment of the present application.
Fig. 3-a is a schematic diagram of target packet embedded information provided in a second embodiment of the present application.
Fig. 4 is a schematic diagram of a data processing apparatus according to a third embodiment of the present application.
Fig. 5 is a schematic diagram of an electronic device according to a fourth embodiment of the present application.
Fig. 6 is a schematic diagram of another data processing apparatus according to a sixth embodiment of the present application.
Fig. 7 is a flowchart of a data processing method according to a ninth embodiment of the present application.
Fig. 8 is a schematic diagram of a data processing apparatus according to a tenth embodiment of the present application.
Fig. 9 is a flowchart of a data processing method according to a thirteenth embodiment of the present application.
Fig. 10 is a schematic diagram of a data processing method according to a fourteenth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and similar adaptations can be made by those skilled in the art without departing from the spirit of the present application; the present application is therefore not limited to the specific implementations disclosed below.
In the first embodiment of the present application, in order to show the present application more clearly, an application scenario of the data processing method provided in the first embodiment of the present application is briefly introduced first.
The data processing method in the first embodiment of the present application may be applied to a scenario in which a client interacts with a server, as shown in fig. 1, which is a schematic diagram of an application scenario of the data processing method provided in the embodiment of the present application. Generally, when a user needs to embed information to be embedded into a carrier video file, the client first acquires the information to be embedded and the carrier video file, which are uploaded by the user or obtained through other channels. The client then establishes a connection with the server and sends the carrier video file and the information to be embedded to the server. After receiving them, the server embeds the information to be embedded into the luminance channel and the chrominance channel of the carrier video file to obtain a target video file; when doing so, the embedded information embedded into the luminance channel and the chrominance channel needs to meet the preset correlation condition. After the server obtains the target video file, it provides the target video file to the client, and the client receives the target video file.
It should be noted that the client may be a mobile terminal device, such as a mobile phone or a tablet computer, or a commonly used computer device; the server is typically a server device, which may be a locally deployed physical server or a cloud server.
In addition, in specific implementation, the method provided in the first embodiment of the present application may also be applied to a client, a server, or an interaction between a server and a server. For example, after obtaining a carrier video file and information to be embedded, a client directly embeds the information to be embedded into a luminance channel and a chrominance channel of the carrier video file, and makes the embedded information of the luminance channel and the embedded information of the chrominance channel meet a preset correlation condition to obtain a target video file, and then the client directly displays the target video file to a user.
The above application scenario is only one specific embodiment of the data processing method provided in the first embodiment of the present application, and is provided for facilitating understanding of the method provided in the first embodiment of the present application, and is not intended to limit the method provided in the first embodiment of the present application.
Fig. 2 is a flowchart of a data processing method according to a first embodiment of the present application, and the method is described in detail below.
Step S201, a carrier video file is obtained, and information to be embedded is obtained.
In the first embodiment of the present application, the carrier video file generally refers to a video file composed of a plurality of consecutive video frames, whose number of video frames is sufficient for embedding the information to be embedded into the file; the carrier video file is in YUV format, or can be converted into YUV format.
It should be noted that YUV is a color coding method; generally, Y'UV, YUV, YCbCr, YPbPr, etc. may all be referred to as YUV. Y represents Luminance or brightness (Luma) and is used to represent the gray scale value of an image or a video frame; U and V represent Chrominance (Chroma) and are used to describe the color and saturation of a pixel in the image or video frame. Typically, an image or video file stored in the YUV format is composed of a luminance channel and a chrominance channel, where the chrominance channel may be further split into a color component channel (also referred to as the U component channel) and a saturation component channel (also referred to as the V component channel).
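As a concrete illustration of the channel split described above, the following sketch converts an RGB frame into its Y, U and V channels. The BT.601 analog-YUV coefficients used here are an assumption for illustration; they are one common YUV definition among those listed above.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB float array (values in 0..1) to Y, U, V
    channels using BT.601-style coefficients (one common YUV definition)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luminance channel (Y)
    u = -0.14713 * r - 0.28886 * g + 0.436 * b   # chrominance: color component (U)
    v = 0.615 * r - 0.51499 * g - 0.10001 * b    # chrominance: saturation component (V)
    return y, u, v

frame = np.zeros((4, 4, 3))
frame[..., 0] = 1.0  # a pure-red example frame
y, u, v = rgb_to_yuv(frame)
```

For a pure-red frame, the luminance is 0.299 and the V component is positive, matching the coefficient of the red channel in each formula.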
The information to be embedded is information that needs to be transmitted covertly or information used for digital copyright protection; it may specifically be a piece of text information, or image information, such as a company LOGO or a scanned contract document.
Step S202, embedding the information to be embedded into a brightness channel and a chroma channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chroma channel meet a preset correlation condition.
Before describing step S202 in detail, a brief description is first given of an embedding method and an embedding strategy used in the prior art when embedding information to be embedded into a carrier video file.
In the prior art, when embedding information to be embedded into a carrier video file, the common procedure is: 1. the luminance channel (Y channel) of the carrier video file is divided into non-overlapping 32 × 32 blocks; 2. each 32 × 32 block is divided into 16 8 × 8 blocks, the direct current (DC) coefficient of each 8 × 8 block is calculated, the average DC coefficient of the 32 × 32 block is then obtained from the DC coefficients of the 16 8 × 8 blocks, and finally the Just Noticeable Difference (JND) threshold of each 8 × 8 block is calculated using the average DC coefficient; 3. each 8 × 8 block is subdivided into 4 × 4 blocks, the 4 × 4 block with the maximum brightness value is selected, and its brightness values are modified, where the specific modification value is the product of the obtained JND threshold and the embedding strength factor β: if the embedded bit is 1 and the video frame to be embedded is one of the first two frames of 5 consecutive frames, the modification value is added to the luminance channel of the video frame, and if the video frame is one of the last two frames of the 5 consecutive frames, the modification value is subtracted from the luminance channel; if the embedded bit is 0, the opposite is done; 4. the above embedding strategy is repeated until the whole carrier video file has been processed, and the target video file finally embedded with the information to be embedded is obtained.
A direct current (DC) coefficient is the coefficient corresponding to u = 0 and v = 0 after a Discrete Cosine Transform (DCT) is performed on an image or a video frame, and is also referred to as the direct current component; it is described in detail in the prior art and is not described herein again. The embedding strength factor is a manually set parameter used to increase or decrease the information embedding strength. The JND threshold describes the maximum modification that individual pixels of an image or video frame can tolerate without causing any noticeable artifacts.
Corresponding to the above embedding method, the prior art generally extracts the embedded information from the video file to be detected as follows: 1. the video file to be detected is divided into single shots based on an empirical shot-division method, and shots shorter than a preset time value are discarded; it should be noted that a shot here differs from a shot in the physical sense: it is composed of a plurality of consecutive video frames, and the preset time value is related to the number of consecutive video frames selected during embedding; 2. the shot obtained in step 1 is decoded, the middle area of the Y channel of 5 consecutive frames is taken as the detection area, and the ascending/descending order relation of the luminance information of the 5 consecutive frames is compared: if ascending, information 0 is extracted; otherwise, information 1 is extracted; 3. step 2 is repeated until all shots containing embedded information have been detected.
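The prior-art decision rule in step 2 above can be sketched as follows. This is a minimal illustration that assumes the per-frame luminance means of the detection area have already been computed; the function name is an assumption for this sketch.

```python
def extract_bit_prior_art(luma_means):
    """Prior-art rule sketched above: if the per-frame luminance means of
    5 consecutive frames are in strictly ascending order, emit 0;
    otherwise emit 1."""
    ascending = all(a < b for a, b in zip(luma_means, luma_means[1:]))
    return 0 if ascending else 1
```

This sketch also makes the weakness discussed below visible: any natural, non-monotonic luminance fluctuation is decoded as a 1, whether or not that bit was embedded.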
It can be seen that the prior art has the following disadvantages when embedding information into a carrier video file and extracting target embedded information from a video file to be detected: 1. because the 4 × 4 block with the largest brightness value is selected and its brightness values are modified as a whole, when the finally obtained target video file is enlarged, the region in which the information is embedded is clearly visible as a bright block; the concealment of the embedded information is therefore low, and there is a risk of the embedded information being destroyed; 2. because the luminance information of the consecutive frames carrying embedded information is not always consistent, extracting the embedded information simply according to the ascending/descending order of the luminance information may yield wrong embedded information; 3. when information is extracted from the file to be detected, part of the shots are discarded, so many video frames are wasted, which reduces the accuracy of the extracted embedded information.
In order to solve the above problems in the prior art, in the first embodiment of the present application, when embedding information to be embedded into a carrier video file, the information to be embedded is respectively embedded into a luminance channel and a chrominance channel of the carrier video file, and the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel satisfy a preset correlation condition, so that when extracting information from the video file to be detected, target embedded information can be accurately extracted, which will be described in detail in the following step S202.
In the first embodiment of the present application, the information to be embedded is embedded into the luminance channel and the chrominance channel of the carrier video file in such a way that the correlation between the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel is not less than a preset correlation threshold: the information to be embedded is embedded into the luminance channel of the carrier video file in sequential embedding order; the information to be embedded is embedded into the color component channel of the chrominance channel of the carrier video file in reverse embedding order; and the information from the start bit to the middle bit of the information to be embedded, and the information from the middle bit to the end bit of the information to be embedded, are each embedded into the saturation component channel of the chrominance channel of the carrier video file in reverse embedding order.
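The three channel-specific embedding orders described above can be sketched as follows. This is a minimal illustration; the function name and the bit-list representation are assumptions for this sketch, reading "each half in reverse order" as each half of the sequence being embedded back-to-front.

```python
def channel_orders(bits):
    """Produce the per-channel embedding orders described above:
    Y sequential, U fully reversed, V with each half reversed."""
    mid = len(bits) // 2
    y = list(bits)                                   # luminance: sequential order
    u = list(reversed(bits))                         # color component: reverse order
    v = list(reversed(bits[:mid])) + list(reversed(bits[mid:]))  # saturation: halves reversed
    return y, u, v

y_order, u_order, v_order = channel_orders([1, 0, 1, 1, 0, 0])
```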
It should be noted that, in the first embodiment of the present application, whether the correlation between the embedded information of the luminance channel and that of the chrominance channel satisfies the preset correlation threshold is determined specifically by calculating the Normalized Cross-Correlation (NCC) of the two sets of embedded information.
This design is based on the following observation: when information is embedded into consecutive video frames of the carrier video file, for example 5 consecutive frames, if the natural ascending or descending trend of the luminance information of the 5 frames is strong, simply modifying the luminance information of the 5 frames cannot reverse that trend, so extraction from the luminance channel alone may be unreliable. In practice, however, it is found that after the luminance information of the 5 frames is modified and the information is embedded into the luminance channels of the 5 frames, the chrominance channels also exhibit a significant ascending or descending trend. Therefore, in the first embodiment of the present application, the embedded information is embedded into the luminance channel and the chrominance channel of the carrier video file at the same time, and the two channels are embedded in a combined manner to obtain the target video file, so that the embedded information can be accurately extracted from the target video file.
In addition, in order to improve the concealment and robustness of embedded information, in the first embodiment of the present application, when embedding information to be embedded into a carrier video file, embedding the information to be embedded into the carrier video file by adopting a strategy of packet embedding and a method of adding error correction information to each packet to facilitate error correction of extracted information during information extraction processing, specifically includes: grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded; and embedding the at least one group of information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain the target video file.
The grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded comprises: grouping the information to be embedded by a preset first length to obtain at least one group of information to be processed; and adding error correction information into the at least one group of information of the packets to be processed to obtain at least one group of information of the packets to be embedded, wherein the length of the information of the packets to be embedded is a preset second length, and the preset second length is greater than the preset first length.
For example, if the information to be embedded is a 105-bit binary sequence, the preset first length is 21 bits, and the preset second length is 31 bits, the 105-bit binary sequence is first grouped into 21 bits per group to obtain 5 groups of information to be processed; the 5 groups of packet information to be processed are then BCH-encoded using the BCH(31,21) error correction code, obtaining 5 groups of packet information to be embedded, each carrying error correction information and having a length of 31 bits. It should be noted that, in the first embodiment of the present application, the preset first length is 21 bits, the preset second length is 31 bits, and the error correction information is a BCH error correction code; in specific implementation, other lengths may be set and other error correction coding manners may be used, which are not described herein again.
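The grouping step above can be sketched as follows. Note that the 10 check bits appended here are a hypothetical placeholder (a simple repetition of the first 10 data bits); a real implementation would compute the BCH(31,21) parity bits instead.

```python
def group_bits(bits, first_len=21):
    """Split the bit sequence into groups of the preset first length (21)."""
    assert len(bits) % first_len == 0
    return [bits[i:i + first_len] for i in range(0, len(bits), first_len)]

def extend_group(group, second_len=31):
    """Stand-in for BCH(31,21) encoding: pad each 21-bit group to 31 bits.
    The 10 check bits here simply repeat the first 10 data bits
    (a hypothetical placeholder, not real BCH parity)."""
    return group + group[:second_len - len(group)]

bits = [i % 2 for i in range(105)]  # example 105-bit sequence
packets = [extend_group(g) for g in group_bits(bits)]
```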
The embedding the at least one group of information to be embedded into the luminance channel and the chrominance channel of the carrier video file to obtain the target video file comprises: acquiring first packet information to be embedded and second packet information to be embedded from the at least one group of packet information to be embedded, wherein the second packet information to be embedded is information after the first packet information to be embedded; embedding the first group of information to be embedded into a first video frame of the carrier video file, and embedding the second group of information to be embedded into a second video frame of the carrier video file to obtain the target video file, wherein the number of video frames contained in the first video frame corresponds to the length of the first group of information to be embedded, and the second video frame is a video frame after the first video frame.
In other words, a packet continuous embedding mode is adopted, a plurality of continuous video frames are selected as first video frames in a carrier video file, and first packet information to be embedded is embedded into the first video frames, wherein the number of the video frames contained in the selected continuous video frames corresponds to the length of the first packet information to be embedded; and then, selecting information after the first packet embedding information as second packet embedding information, embedding the second packet embedding information into a second video frame after the first video frame, and sequentially embedding all the obtained packet embedding information into the carrier video file in this way. It should be noted that, in specific implementation, all the obtained packet embedding information may also be sequentially embedded into the carrier video file in a reverse order or in a manner of randomly selecting consecutive video frames, so as to further increase the concealment of the embedding information in the obtained target video file, which is not described herein again.
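Under the assumptions that each packet is 31 bits long and each bit occupies 5 consecutive frames (the values used in this embodiment; the function name is an illustration), the sequential packet-to-frame assignment described above can be sketched as:

```python
def frame_ranges(num_packets, bits_per_packet=31, frames_per_bit=5):
    """Sequential packet embedding: packet p occupies the run of
    bits_per_packet * frames_per_bit consecutive frames immediately
    after the run used by packet p - 1."""
    span = bits_per_packet * frames_per_bit
    return [(p * span, (p + 1) * span) for p in range(num_packets)]
```

Reverse-order or randomized variants, as mentioned above, would only change how these ranges are assigned to packets, not their sizes.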
The embedding the first packet of information to be embedded into the first video frame of the carrier video file includes: embedding the first group of information to be embedded into a brightness channel of the first video frame according to a sequential embedding sequence; embedding the first group of information to be embedded into a color component channel of a chrominance channel of the first video frame according to the embedding sequence of the reverse order; and embedding the information from the start bit to the middle bit of the first packet of information to be embedded and the information from the middle bit to the end bit of the first packet of information to be embedded into the saturation component channel of the chrominance channel of the first video frame in the reverse embedding sequence.
That is, when any one of the obtained packet embedding information is embedded into the carrier video file, in order to make the correlation of the embedding information embedded into different channels satisfy a preset correlation condition, for example, make the correlation greater than a preset correlation threshold, the present embodiment embeds the information to be embedded into different channels of the first video frame respectively in the manner described above.
Wherein the embedding the first packet of information to be embedded into the first video frame of the carrier video file comprises: acquiring a video frame to be processed from the first video frame, wherein the video frame to be processed is a preset odd number of continuous video frames; and embedding at least one bit of information in the first group of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed.
The embedding at least one bit of information in the first group of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed comprises: adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed, and establishing a mapping relation between the video frames to be processed and the at least one bit of information; and embedding the at least one bit of information into a brightness channel and a chroma channel of the video frame to be processed through the mapping relation.
In a first embodiment of the present application, the mapping relationship includes: mapping first element information in an ascending relationship of brightness information between video frames in the video frames to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frames to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frames to be processed; mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed; wherein the first element information and the second element information are basic elements constituting the information to be embedded, and the first element information is different from the second element information. That is, the first element information is mapped in an ascending relationship among the luminance information, the color information, and the saturation information of consecutive odd-numbered video frames, respectively, and the second element information is mapped in a descending relationship among the luminance information, the color information, and the saturation information of consecutive odd-numbered video frames, respectively. 
In the first embodiment of the present application, unless otherwise specified, the first element information is information 1 and the second element information is information 0 by way of example; of course, in specific implementation, the first element information and the second element information may also be set to corresponding values according to the different element contents contained in the information to be embedded, and details are not repeated here.
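The mapping can be sketched as follows, judging the trend by comparing the first and last frame means of a channel. This comparison is an assumed simplification; the embodiment establishes the full ascending/descending relation across all K frames.

```python
def trend_to_bit(channel_means):
    """Map an ascending trend across the K frames to element information 1
    and a descending trend to element information 0, per the mapping
    described above (simplified: compare last frame mean to first)."""
    return 1 if channel_means[-1] > channel_means[0] else 0
```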
Specifically, when information is embedded into the video frames to be processed, it is embedded through the following steps: acquiring the block to be modified in the luminance channel, and in the color component channel and the saturation component channel of the chrominance channel, of each of the video frames to be processed; acquiring the just noticeable difference corresponding to the luminance channel, the color component channel and the saturation component channel of each video frame; and adjusting the luminance information, color information and saturation information between at least two of the video frames to be processed according to the blocks to be modified and the just noticeable differences, thereby establishing the mapping relation.
The adjusting, according to the block to be modified and the just noticeable difference, luminance information, color information, and saturation information between at least two video frames of the video frames to be processed to establish the mapping relationship includes: if the information to be processed in the first packet of information to be embedded is the first element information (for example, 1), the just noticeable difference corresponding to the block to be modified is subtracted from at least one pixel in the block to be modified of each video frame before the middle video frame of the video frames to be processed, and is added to at least one pixel in the block to be modified of each video frame after the middle video frame; if the information to be processed is the second element information (for example, 0), the just noticeable difference corresponding to the block to be modified is added to at least one pixel in the block to be modified of each video frame before the middle video frame, and is subtracted from at least one pixel in the block to be modified of each video frame after the middle video frame.
Namely, one bit of information in the first packet of information to be embedded is embedded into the luminance channel, the color component channel and the saturation component channel of the video frame to be processed according to the following formula:
F'_m(i, j) = F_m(i, j) − β·Δ_(i, j), if b[m] = 1 and m < (K + 1)/2, or b[m] = 0 and m > (K + 1)/2;
F'_m(i, j) = F_m(i, j) + β·Δ_(i, j), if b[m] = 1 and m > (K + 1)/2, or b[m] = 0 and m < (K + 1)/2;
F'_m(i, j) = F_m(i, j), if m = (K + 1)/2;
wherein F_m(i, j) and F'_m(i, j) respectively represent the state of the (i, j)-th block to be modified of the m-th frame before and after the information is embedded; β represents the embedding strength factor; K is the number of the video frames to be processed, and the value of K is a preset odd number; b[m] represents at least one bit of information to be embedded in the first packet of information to be embedded; and Δ_(i, j) is the just noticeable difference corresponding to the block to be modified. In the first embodiment of the present application, the preset odd number is 5, and may also be an odd number such as 7 or 9; the embedding strength factor is specifically 1.5.
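The modification rule above can be sketched as follows for a single channel, with `blocks` holding the value of the (i, j) block to be modified in each of the K frames. The function name and the whole-block (rather than per-pixel) modification are simplifications for illustration.

```python
def embed_bit(blocks, bit, jnd, beta=1.5):
    """Embed one bit into the (i, j) block of K consecutive frames:
    for bit 1, subtract beta * JND from frames before the middle frame
    and add it to frames after; for bit 0, do the opposite. The middle
    frame is left unchanged. `blocks` is a length-K list of block values."""
    k = len(blocks)
    mid = k // 2  # index of the middle frame (k is a preset odd number)
    out = []
    for m, value in enumerate(blocks):
        if m == mid:
            out.append(value)
        elif (m < mid) == (bit == 1):
            out.append(value - beta * jnd)
        else:
            out.append(value + beta * jnd)
    return out
```

With flat input blocks, bit 1 produces an ascending sequence and bit 0 a descending one, consistent with the mapping relation described above.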
The block to be modified is a block of pixel values to be modified in the different channels of each video frame; the following takes obtaining the block to be modified in the luminance channel of one frame of the video frames to be processed as an example.
For example, after one frame of the video frames to be processed is obtained, the luminance channel of the video frame is divided into 32 × 32 blocks, and each 32 × 32 block is then divided into 16 8 × 8 blocks; the DC coefficient of each 8 × 8 block is calculated, and the average DC coefficient of the 32 × 32 block is obtained from the DC coefficients of the 16 8 × 8 blocks; then, a JND threshold is obtained for each 8 × 8 block according to the obtained average DC coefficient. The manner of obtaining the JND threshold is described in detail in the prior art and is not described herein again.
In the first embodiment of the present application, in order to reduce the visibility of luminance blocks when the obtained target video file is enlarged, i.e. to reduce the blockiness effect and increase the concealment of the embedded information, each 8 × 8 block obtained above is further divided into 16 2 × 2 blocks, and only 1 pixel in each 2 × 2 block is selected to have the JND threshold added to or subtracted from it.
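The DC-coefficient averaging and the one-pixel-per-2×2-block selection can be sketched as follows. For an orthonormal 2-D DCT, the DC coefficient of an 8 × 8 block equals its pixel sum divided by 8; picking the top-left pixel of each 2 × 2 sub-block is an assumed choice for this sketch.

```python
import numpy as np

def block_dc(block8):
    """DC coefficient of an 8 x 8 block under the orthonormal 2-D DCT,
    which equals the pixel sum divided by 8."""
    return block8.sum() / 8.0

def average_dc(block32):
    """Average DC coefficient of the 16 non-overlapping 8 x 8 blocks
    inside a 32 x 32 block."""
    dcs = [block_dc(block32[r:r + 8, c:c + 8])
           for r in range(0, 32, 8) for c in range(0, 32, 8)]
    return sum(dcs) / len(dcs)

def pick_pixels():
    """Select one representative pixel per 2 x 2 sub-block of an 8 x 8
    block (top-left here, an assumed choice), so only a quarter of the
    pixels are ever modified."""
    return [(r, c) for r in range(0, 8, 2) for c in range(0, 8, 2)]
```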
It should be noted that, in specific implementation, other mappings may also be adopted, such as mapping information 0 to the above ascending relation and information 1 to the above descending relation, or mapping multi-bit information in the embedded information by establishing other mathematical relations, such as functions, and details are not described here again.
In summary, the data processing method provided in the first embodiment of the present application includes: acquiring a carrier video file and acquiring information to be embedded; and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition. Because the information to be embedded is embedded into both the brightness channel and the chrominance channel of the carrier video file, and the two sets of embedded information meet the preset correlation condition, when the embedded information is extracted from the target video file, i.e., the file to be detected, it can be accurately extracted according to the correlation of the embedded information acquired from the brightness channel and the chrominance channel of the target video file, which improves the accuracy of the extracted embedded information.
Corresponding to the data processing method provided in the first embodiment of the present application, the second embodiment of the present application further provides another data processing method, specifically for extracting target embedded information in a video file to be detected, please refer to fig. 3, which is a flowchart of another data processing method provided in the second embodiment of the present application, wherein a part of steps or details are described in detail in the first embodiment, so that the description herein is relatively simple, and for relevant points, refer to a part of descriptions in the method provided in the first embodiment of the present application, and the processing procedure described below is only schematic.
Fig. 3 is a flowchart of a data processing method according to a second embodiment of the present application, which is described below with reference to fig. 3.
Step S301, acquiring a video file to be detected.
Step S302, by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chroma channel of the video file to be detected meet a preset correlation condition, target embedded information is obtained from the brightness channel and the chroma channel of the video file to be detected.
The obtaining of the target embedding information from the luminance channel and the chrominance channel of the video file to be detected by judging whether the embedding information in the luminance channel of the video file to be detected and the embedding information in the chrominance channel of the video file to be detected satisfy a preset correlation condition includes: acquiring at least one group of to-be-processed packet embedding information from the brightness channel and the chrominance channel of the to-be-detected video file by judging whether the embedding information in the brightness channel of the to-be-detected video file and the embedding information in the chrominance channel of the to-be-detected video file meet a preset correlation condition; and obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information.
The acquiring at least one group of to-be-processed packet embedding information comprises: acquiring at least one group of original packet embedding information from the video file to be detected, wherein the original packet embedding information comprises first component information acquired from a brightness channel of the video file to be detected, second component information acquired from a color component channel of a chromaticity channel of the video file to be detected and third component information acquired from a saturation component channel of the chromaticity channel of the video file to be detected; and determining at least one group of to-be-processed packet embedding information from the at least one group of original packet embedding information by judging whether the correlation among the first component information, the second component information and the third component information is not less than a preset correlation threshold value.
When the target embedded information needs to be extracted from the video file to be detected, because the embedded information was respectively embedded into the luminance channel and the chrominance channel of the carrier video file in a mode that meets the preset correlation condition during the embedding process, the target embedded information can be obtained from the luminance channel and the chrominance channel of the video file to be detected by respectively extracting the embedded information from the two channels and judging whether the embedded information extracted from the luminance channel and the chrominance channel meets the preset correlation condition. In addition, because the information is embedded into the carrier video file in a grouping mode in the embedding process, the to-be-processed packet embedded information is correspondingly extracted from the luminance channel and the chrominance channel of the video file to be detected during extraction, and the pieces of to-be-processed packet embedded information are spliced together, so that the target embedded information can be obtained.
Here, taking the packet embedding information obtained from the luminance channel and from the color component channel of the chrominance channel of the video file to be detected as an example, their correlation can be calculated by the following formula:
corr(W_Y, W_v) = (1/n) · Σ_{i=1}^{n} W_Y(i) × W_v(i)
wherein W_Y and W_v respectively represent the to-be-determined packet embedding information extracted from the luminance channel and from the color component channel of the chrominance channel of the video file to be detected, and n represents the length of the to-be-determined packet embedding information. By comparing the correlation of W_Y with W_v and the correlation of W_Y with W_v', it can be determined whether the video file to be detected contains embedded information, wherein W_v' is W_v in reverse order, that is, the information in W_v expressed in reverse order.
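The formula image for the correlation is not reproduced in the text, so the sketch below assumes the common normalized form (1/n)·Σ W_Y(i)·W_v(i) over bits mapped to ±1; identical sequences then correlate to exactly 1.0, while the reverse-order comparison yields a lower value:

```python
def correlation(wy, wv):
    """Normalized correlation of two equal-length bipolar (+1/-1) sequences."""
    n = len(wy)
    return sum(a * b for a, b in zip(wy, wv)) / n

def to_bipolar(bits):
    # map information 1 -> +1 and information 0 -> -1 so that identical
    # sequences correlate to exactly 1.0
    return [1 if b == 1 else -1 for b in bits]

w_y = to_bipolar([1, 0, 1, 1, 0, 0, 1])   # from the luminance channel
w_v = to_bipolar([1, 0, 1, 1, 0, 0, 1])   # from the color component channel
corr_fwd = correlation(w_y, w_v)          # 1.0: same watermark in both channels
corr_rev = correlation(w_y, w_v[::-1])    # comparison against W_v in reverse order
```

Comparing corr_fwd against a preset correlation threshold is what the judgment of the "preset correlation condition" above amounts to in this sketch.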
The acquiring at least one group of original packet embedding information from the video file to be detected comprises: traversing a brightness channel and a chrominance channel of the video file to be detected by a sliding window to obtain at least one group of original packet embedding information; the step length of the sliding window is a preset step length value, the length of the sliding window is a preset window length, and the preset window length corresponds to the length of a packet used when the embedded information packet is embedded into the carrier video file. It should be noted that the preset step value is 1, and the preset window length corresponds to the length of the packet used when the embedded information packet is embedded in the carrier video file, that is, it is the same as the preset second length value in the embedding process; for example, in the first embodiment of the present application, the preset second length is 31, and the value of the preset window length is also 31. Of course, in specific implementation, the preset step length value and the preset window length may also be set to other values according to specific needs, which are not described herein again.
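The sliding-window traversal above can be sketched as follows, using the step value 1 and the window length 31 stated in the text (here the window slides over candidate start positions; the mapping of positions to frame groups is abstracted away):

```python
def sliding_windows(n_positions, window_len=31, step=1):
    """Enumerate every candidate packet position: step 1, window length equal
    to the packet length used at embedding time (31 in the first embodiment)."""
    return [(s, s + window_len)
            for s in range(0, n_positions - window_len + 1, step)]

windows = sliding_windows(35)
# 35 - 31 + 1 = 5 candidate windows: (0, 31), (1, 32), ..., (4, 35)
```

With step 1, every alignment of the packet within the stream is tried, which is what allows extraction even when the packet boundaries in the video file to be detected are unknown.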
Traversing the brightness channel and the chrominance channel of the video file to be detected by using the sliding window to obtain at least one group of original packet embedding information, wherein the method comprises the following steps: according to the sliding window, acquiring a first candidate video frame from the video file to be detected; acquiring first original packet embedding information from the first candidate video frame according to a preset mapping relation; and acquiring at least one group of original packet embedding information according to the first original packet embedding information.
The obtaining of the first original packet embedding information from the first candidate video frame according to the preset mapping relationship includes: acquiring a first video frame to be processed from the first candidate video frame, wherein the first video frame to be processed is a preset odd number of continuous video frames; acquiring first to-be-processed component embedded information containing at least one bit of information from a brightness channel of the first to-be-processed video frame, a color component channel and a saturation component channel of a chrominance channel according to the preset mapping relation; and acquiring the first original packet embedding information according to the first to-be-processed component embedding information.
The preset mapping relationship comprises: mapping first element information in an ascending relationship of luminance information between video frames in the first video frame to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frame to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frame to be processed; mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed; wherein the first element information and the second element information are basic elements constituting the target embedded information, and the first element information is different from the second element information. In the second embodiment of the present application, if no special description is provided, the first element information is information 1, and the second element information is information 0, for example, that is, the first element information and the second element information in the first embodiment of the present application correspond to each other; of course, in specific implementation, the first element information and the second element information may also be set to corresponding values according to different element contents included in the target embedded information, and details are not repeated here.
The acquiring, according to the preset mapping relationship, first to-be-processed component embedding information including at least one bit of information from a luminance channel and a chrominance channel of the first to-be-processed video frame includes: calculating the pixel sum of a brightness channel, the pixel sum of a color component channel of a chrominance channel and the pixel sum of a saturation component channel of the chrominance channel of each frame of the first video frame to be processed to obtain the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed; and acquiring first to-be-processed component embedded information containing at least one bit of information according to the ascending and descending order relation and the preset mapping relation.
In addition, in order to resist frame rate variation, such as frame rate variation of a video file to be detected caused by recording of an image pickup device, such as a video camera, and further cause a problem that embedded information cannot be accurately obtained, when extracting the embedded information in the video file to be detected, it is also possible to: acquiring a video frame between a start frame and an intermediate frame of the first video frame to be processed as a first video frame to be calculated; acquiring a video frame between an intermediate frame and an end frame of the first video frame to be processed as a second video frame to be calculated; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the first video frame to be calculated as a first brightness pixel sum, a first color pixel sum and a first saturation pixel sum; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the second video frame to be calculated as a second brightness pixel sum, a second color pixel sum and a second saturation pixel sum; and obtaining the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed by comparing the first brightness pixel with the second brightness pixel, comparing the first color pixel with the second color pixel, and comparing the first saturation pixel with the second saturation pixel.
For example, when information is extracted from a first video frame to be processed, which is composed of 5 consecutive frames of a video file to be detected, it is determined whether the luminance information, the color information, and the saturation information in the first video frame to be processed are in an ascending order or a descending order according to the sizes of the luminance information, the color information, and the saturation information of the 2 nd frame and the 4 th frame of the 5 consecutive frames. Of course, in the implementation, other methods may be adopted, for example, only the middle area of the first to-be-processed video frame is selected to determine the ascending/descending order relationship among the luminance information, the color information, and the saturation information.
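The extraction of one bit described above can be sketched as follows, given the per-frame pixel sums of one channel for a 5-frame group (comparing the 2nd and 4th frames as in the example, with ascending mapped to information 1 per the preset mapping relationship; the equal case returning 0 is an assumption of this sketch):

```python
def extract_bit(frame_sums):
    """Recover one bit from a group of 5 consecutive frames of one channel,
    given the per-frame pixel sums: compare the frame between the start frame
    and the middle frame (the 2nd, index 1) with the frame between the middle
    frame and the end frame (the 4th, index 3).
    Ascending -> information 1, descending -> information 0."""
    return 1 if frame_sums[3] > frame_sums[1] else 0

# per-frame pixel sums of a 4x4 block across 5 frames
ascending = [2048, 2000, 2048, 2096, 2048]
descending = [2048, 2096, 2048, 2000, 2048]
```

Running the same comparison independently on the luminance channel, the color component channel and the saturation component channel yields the three component bits whose agreement is then checked against the preset correlation condition.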
The obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information includes: acquiring the overlapping relation and the arrangement relation between the video frames corresponding to the at least one group of packet embedding information to be processed, and taking the packet embedding information to be processed, which has no overlapping relation between the corresponding video frames and has a continuous arrangement relation, as target packet embedding information; and obtaining the target embedded information according to the target grouping embedded information. As shown in fig. 3-a, which is a schematic diagram of target packet embedded information provided in the second embodiment of the present application. As can be seen from fig. 3-a, after a plurality of pieces of packet embedding information to be processed are obtained from a video file to be detected, because the pieces of packet embedding information to be processed may contain overlapping information, it is further necessary to obtain, from the pieces of packet embedding information to be processed, target packet embedding information that contains no overlapping information and is in a continuous relationship; specifically, the target packet embedding information may be obtained according to whether the video frames corresponding to the pieces of packet embedding information to be processed overlap and whether their arrangement is continuous. Furthermore, if no such target packet embedding information can be obtained, it can be inferred that no embedded information is present in the luminance channel and the chrominance channel of the video file to be detected.
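One reading of the "no overlap, continuous arrangement" condition above can be sketched as a greedy pass over the candidate packets sorted by start position (the exact selection rule is not spelled out in the text, so this back-to-back criterion is an assumption):

```python
def select_target_packets(candidates, window_len=31):
    """From candidate (start_position, bits) packets, keep those whose frame
    ranges do not overlap and follow one another back to back."""
    selected, next_start = [], None
    for start, bits in sorted(candidates):
        if next_start is None or start == next_start:
            selected.append((start, bits))
            next_start = start + window_len
    return selected

cands = [(0, "p0"), (3, "overlap"), (31, "p1"), (40, "stray"), (62, "p2")]
targets = select_target_packets(cands)
# keeps the packets starting at 0, 31 and 62: non-overlapping, consecutive
```

Splicing the bits of the selected packets in order then yields the target embedded information.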
The obtaining the target embedded information according to the target packet embedded information includes: carrying out error correction processing on the target packet embedded information to obtain error-corrected target packet embedded information; and acquiring the target embedded information according to the error-corrected target packet embedded information. That is, in order to increase the accuracy of the extracted information, the error correction processing may be performed on the acquired target packet embedded information by using a method corresponding to the error correction method used in the embedding processing, and then the target packet embedded information that is subjected to error correction and does not include the error correction code is spliced together, so that the target embedded information can be obtained.
In summary, the data processing method provided in the second embodiment of the present application includes: acquiring a video file to be detected; and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition. When the target embedded information is extracted from the video file to be detected, the method judges whether the embedded information obtained from the brightness channel of the video file to be detected and the embedded information obtained from the chroma channel of the video file to be detected meet the preset correlation condition, and then takes the embedded information meeting the preset correlation condition as the final target embedded information, so that the accuracy of the extracted target embedded information can be improved.
In a third embodiment of the present application, a data processing apparatus corresponding to the data processing method provided in the first embodiment of the present application is further provided, please refer to fig. 4, which is a schematic diagram of an embodiment of a data processing apparatus provided in the third embodiment of the present application. A data processing apparatus provided in a third embodiment of the present application includes:
The obtaining unit 401 is configured to obtain a carrier video file and obtain information to be embedded.
An embedding unit 402, configured to embed the information to be embedded into a luminance channel and a chrominance channel of the carrier video file, so as to obtain a target video file, where the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel satisfy a preset correlation condition.
Optionally, the embedding unit is specifically configured to: embedding the information to be embedded into a brightness channel of the carrier video file according to the sequential embedding sequence; embedding the information to be embedded into a color component channel of a chrominance channel of the carrier video file according to the embedding sequence of the reverse order; and embedding the information from the start bit to the middle bit of the information to be embedded and the information from the middle bit to the end bit of the information to be embedded into the saturation component channel of the chrominance channel of the carrier video file in the reverse embedding sequence.
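The three per-channel embedding orders above can be sketched as follows. Note that the description of the saturation component channel is ambiguous; this sketch adopts one reading, namely that each half of the packet (start bit to middle bit, middle bit to end bit) is embedded in reverse order:

```python
def channel_sequences(w):
    """Per-channel bit orders for one packet: luminance in sequential order,
    the color component in reverse order, and (under one reading of the
    saturation-channel description) each half of the packet in reverse order."""
    mid = len(w) // 2
    y_seq = list(w)                                       # sequential
    u_seq = list(reversed(w))                             # reverse order
    v_seq = list(reversed(w[:mid])) + list(reversed(w[mid:]))
    return y_seq, u_seq, v_seq

y, u, v = channel_sequences([1, 0, 0, 1, 1])
```

Using three different orderings of the same packet is what gives the extractor the cross-channel correlation structure it checks for: the luminance sequence should match the reversed color-component sequence, which is how W_Y is compared with W_v and W_v' during detection.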
Optionally, the embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier video file to obtain a target video file includes: grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded; and embedding the at least one group of information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain the target video file.
Optionally, the embedding the at least one group of information to be embedded into the luminance channel and the chrominance channel of the carrier video file to obtain the target video file includes: acquiring first packet information to be embedded and second packet information to be embedded from the at least one group of packet information to be embedded, wherein the second packet information to be embedded is information after the first packet information to be embedded; embedding the first group of information to be embedded into a first video frame of the carrier video file, and embedding the second group of information to be embedded into a second video frame of the carrier video file to obtain the target video file, wherein the number of video frames contained in the first video frame corresponds to the length of the first group of information to be embedded, and the second video frame is a video frame after the first video frame.
Optionally, the embedding the information to be embedded in the first packet into the first video frame of the carrier video file includes: embedding the first group of information to be embedded into a brightness channel of the first video frame according to a sequential embedding sequence; embedding the first group of information to be embedded into a color component channel of a chrominance channel of the first video frame according to the embedding sequence of the reverse order; and embedding the information from the start bit to the middle bit of the first packet of information to be embedded and the information from the middle bit to the end bit of the first packet of information to be embedded into the saturation component channel of the chrominance channel of the first video frame in the reverse embedding sequence.
Optionally, the embedding the information to be embedded in the first packet into the first video frame of the carrier video file includes: acquiring a video frame to be processed from the first video frame, wherein the video frame to be processed is a preset odd number of continuous video frames; and embedding at least one bit of information in the first group of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed.
Optionally, the embedding at least one bit of information in the first packet of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed includes: adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed, and establishing a mapping relation between the video frames to be processed and the at least one bit of information; and embedding the at least one bit of information into a brightness channel and a chroma channel of the video frame to be processed through the mapping relation.
Optionally, the mapping relationship includes: mapping first element information in an ascending relationship of brightness information between video frames in the video frames to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frames to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frames to be processed; mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed; wherein the first element information and the second element information are basic elements constituting the information to be embedded, and the first element information is different from the second element information.
Optionally, the mapping relationship is established by the following steps: acquiring a brightness channel of each video frame in the video frames to be processed, and a color component channel of a chrominance channel and a block to be modified in a saturation component channel; acquiring the minimum perceptible difference corresponding to the brightness channel of each video frame, and the color component channel and the saturation component channel of the chrominance channel; and adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed according to the block to be modified and the just noticeable difference, and establishing the mapping relation.
Optionally, the adjusting, according to the block to be modified and the just noticeable difference, luminance information, color information, and saturation information between at least two video frames of the video frames to be processed to establish the mapping relationship includes: if the information to be processed in the first group of information to be embedded is the first element information, subtracting the minimum perceptible difference corresponding to the block to be modified from at least one pixel in the block to be modified corresponding to the video frame before the middle video frame of the video frame to be processed, and adding the minimum perceptible difference corresponding to the block to be modified and at least one pixel in the block to be modified corresponding to the video frame after the middle video frame of the video frame to be processed; and if the information of the information to be processed in the first group of information to be embedded is the second element information, adding at least one pixel in a block to be modified corresponding to a video frame before the middle video frame of the video frame to be processed and the minimum perceptible difference corresponding to the block to be modified, and subtracting the minimum perceptible difference corresponding to the block to be modified from at least one pixel in the block to be modified corresponding to a video frame after the middle video frame of the video frame to be processed.
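The add/subtract rule above can be sketched for a 5-frame group as follows. For simplicity the whole frame is adjusted here, whereas the text only requires at least one pixel of the block to be modified; the JND value is a per-block threshold in the actual scheme:

```python
def embed_bit(frames, jnd, bit):
    """Adjust the frames before and after the middle frame of a 5-frame group:
    first element information (1) -> subtract the JND before the middle frame
    and add it after (an ascending relation); second element information (0)
    -> the opposite, giving a descending relation."""
    out = [list(frame) for frame in frames]  # frames as flat pixel lists
    mid = len(out) // 2
    sign = jnd if bit == 1 else -jnd
    for i in range(len(out)):
        if i < mid:
            out[i] = [p - sign for p in out[i]]
        elif i > mid:
            out[i] = [p + sign for p in out[i]]
    return out

frames = [[128.0] * 16 for _ in range(5)]  # five 4x4 frames, flattened
marked = embed_bit(frames, 2.0, 1)
# per-frame pixel sums now ascend: 2016, 2016, 2048, 2080, 2080
```

Leaving the middle frame untouched is what makes the later frame-rate-robust comparison (frames before the middle versus frames after the middle) possible at extraction time.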
Optionally, the block to be modified is a 2 × 2 data block.
Optionally, the grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded includes: grouping the information to be embedded by a preset first length to obtain at least one group of information to be processed; and adding error correction information into the at least one group of information of the packets to be processed to obtain at least one group of information of the packets to be embedded, wherein the length of the information of the packets to be embedded is a preset second length, and the preset second length is greater than the preset first length.
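The error-correction scheme itself is not specified in the text. Since the preset second length in the first embodiment is 31, one scheme that fits the description is a single-error-correcting Hamming(31, 26) code, which would make the hypothetical preset first length 26 with 5 added check bits; the round trip below is a sketch under that assumption, not the patented method:

```python
PARITY_POSITIONS = (1, 2, 4, 8, 16)

def hamming31_encode(data26):
    """Encode 26 data bits into a 31-bit single-error-correcting codeword."""
    code = [0] * 32  # 1-indexed; index 0 unused
    it = iter(data26)
    for pos in range(1, 32):
        if pos not in PARITY_POSITIONS:
            code[pos] = next(it)
    for p in PARITY_POSITIONS:
        parity = 0
        for pos in range(1, 32):
            if pos & p and pos != p:
                parity ^= code[pos]
        code[p] = parity  # make the XOR over every covered position zero
    return code[1:]

def hamming31_decode(word31):
    """Correct up to one flipped bit and return the 26 data bits."""
    code = [0] + list(word31)
    syndrome = 0
    for p in PARITY_POSITIONS:
        parity = 0
        for pos in range(1, 32):
            if pos & p:
                parity ^= code[pos]
        if parity:
            syndrome |= p  # failed checks spell out the error position
    if syndrome:
        code[syndrome] ^= 1
    return [code[pos] for pos in range(1, 32) if pos not in PARITY_POSITIONS]

payload = [(i * 7) % 2 for i in range(26)]   # one group of the first length
packet = hamming31_encode(payload)           # length 31 = preset second length
packet[10] ^= 1                              # a single extraction error
assert hamming31_decode(packet) == payload   # error corrected, payload recovered
```

At extraction time, the corresponding decode step is what the error correction processing of the target packet embedded information amounts to before the corrected payloads are spliced together.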
In correspondence with the data processing method provided in the first embodiment of the present application, a fourth embodiment of the present application further provides an electronic device, please refer to fig. 5, which is a schematic diagram of an electronic device provided in the fourth embodiment of the present application. A fourth embodiment of the present application provides an electronic device including:
aprocessor 501;
a memory 502 for storing a program of a data processing method; wherein, after the device is powered on and the program of the data processing method is run by the processor, the following steps are performed:
acquiring a carrier video file and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
Corresponding to the data processing method provided by the first embodiment of the present application, the fifth embodiment of the present application further provides a storage device, since the storage device embodiment is substantially similar to the method embodiment, the description is relatively simple, and for relevant points, reference may be made to part of the description of the method embodiment, and the storage device embodiment described below is only illustrative. A storage device according to a fifth embodiment of the present application stores a program of a data processing method, the program being executed by a processor to perform the steps of:
acquiring a carrier video file and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
In correspondence with another data processing method provided by the second embodiment of the present application, a sixth embodiment of the present application further provides another data processing apparatus, please refer to fig. 6, which is a schematic diagram of an embodiment of another data processing apparatus provided by the sixth embodiment of the present application. A data processing apparatus provided in a sixth embodiment of the present application includes:
the to-be-detected videofile acquiring unit 601 is configured to acquire a to-be-detected video file.
An embedded information obtaining unit 602, configured to obtain target embedded information from the luminance channel and the chrominance channel of the video file to be detected by determining whether embedded information in the luminance channel of the video file to be detected and embedded information in the chrominance channel of the video file to be detected satisfy a preset correlation condition.
Optionally, the embedded information obtaining unit is specifically configured to: acquiring at least one group of to-be-processed packet embedding information from the brightness channel and the chrominance channel of the to-be-detected video file by judging whether the embedding information in the brightness channel of the to-be-detected video file and the embedding information in the chrominance channel of the to-be-detected video file meet a preset correlation condition; and obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information.
Optionally, the obtaining at least one group of to-be-processed packet embedding information includes: acquiring at least one group of original packet embedding information from the video file to be detected, wherein the original packet embedding information comprises first component information acquired from a brightness channel of the video file to be detected, second component information acquired from a color component channel of a chromaticity channel of the video file to be detected and third component information acquired from a saturation component channel of the chromaticity channel of the video file to be detected; and determining at least one group of to-be-processed packet embedding information from the at least one group of original packet embedding information by judging whether the correlation among the first component information, the second component information and the third component information is not less than a preset correlation threshold value.
Optionally, the obtaining at least one group of original packet embedding information from the video file to be detected includes: traversing a brightness channel and a chrominance channel of the video file to be detected by a sliding window to obtain at least one group of original packet embedding information; the step length of the sliding window is a preset step length value, the length of the sliding window is a preset window length, and the preset window length corresponds to the length of a packet used when the embedded information packet is embedded into the carrier video file.
Optionally, traversing the luminance channel and the chrominance channel of the video file to be detected with a sliding window to obtain at least one group of original packet embedding information, including: according to the sliding window, acquiring a first candidate video frame from the video file to be detected; acquiring first original packet embedding information from the first candidate video frame according to a preset mapping relation; and acquiring at least one group of original packet embedding information according to the first original packet embedding information.
Optionally, the obtaining, according to a preset mapping relationship, first original packet embedding information from the first candidate video frame includes: acquiring a first video frame to be processed from the first candidate video frame, wherein the first video frame to be processed is a preset odd number of continuous video frames; acquiring first to-be-processed component embedded information containing at least one bit of information from a brightness channel of the first to-be-processed video frame, a color component channel and a saturation component channel of a chrominance channel according to the preset mapping relation; and acquiring the first original packet embedding information according to the first to-be-processed component embedding information.
Optionally, the preset mapping relationship includes: mapping first element information in an ascending relationship of luminance information between video frames in the first video frame to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frame to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frame to be processed; and mapping second element information according to the descending relation of the brightness information among the video frames in the video frames to be processed, mapping the second element information according to the descending relation of the color information among the video frames in the video frames to be processed, and mapping the second element information according to the descending relation of the saturation information among the video frames in the video frames to be processed.
Optionally, the obtaining, according to the preset mapping relationship, first to-be-processed component embedding information including at least one bit of information from a luminance channel and a chrominance channel of the first to-be-processed video frame includes: calculating the pixel sum of a brightness channel, the pixel sum of a color component channel of a chrominance channel and the pixel sum of a saturation component channel of the chrominance channel of each frame of the first video frame to be processed to obtain the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed; and acquiring first to-be-processed component embedded information containing at least one bit of information according to the ascending and descending order relation and the preset mapping relation.
Optionally, the obtaining the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed includes: acquiring a video frame between a start frame and an intermediate frame of the first video frame to be processed as a first video frame to be calculated; acquiring a video frame between an intermediate frame and an end frame of the first video frame to be processed as a second video frame to be calculated; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the first video frame to be calculated as a first brightness pixel sum, a first color pixel sum and a first saturation pixel sum; acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the second video frame to be calculated as a second brightness pixel sum, a second color pixel sum and a second saturation pixel sum; and obtaining the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed by comparing the first brightness pixel sum with the second brightness pixel sum, comparing the first color pixel sum with the second color pixel sum, and comparing the first saturation pixel sum with the second saturation pixel sum.
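The pixel-sum comparison can be sketched as follows, representing each channel of the odd-length run of frames as a list of 2-D arrays. Mapping an ascending relation to the first element (here the bit 1) and a descending relation to the second element (the bit 0) follows the preset mapping relation; the array representation itself is an assumption:

```python
import numpy as np

def channel_trend(frames):
    """Compare the pixel sum of the frames before the middle frame with
    the pixel sum of the frames after it (`frames` holds one channel of
    an odd run of consecutive video frames as 2-D arrays)."""
    mid = len(frames) // 2
    before = sum(float(f.sum()) for f in frames[:mid])
    after = sum(float(f.sum()) for f in frames[mid + 1:])
    return "ascending" if before < after else "descending"

def decode_component_bits(y_frames, u_frames, v_frames):
    """One bit per channel for this run: these feed the first, second and
    third component sequences that are later tested for correlation."""
    return tuple(1 if channel_trend(c) == "ascending" else 0
                 for c in (y_frames, u_frames, v_frames))
```

Note that the middle frame is deliberately excluded from both sums, matching the start-to-intermediate and intermediate-to-end partition above.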
Optionally, the obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information includes: acquiring the overlapping relation and the arrangement relation between the video frames corresponding to the at least one group of packet embedding information to be processed, and taking the packet embedding information to be processed, which has no overlapping relation between the corresponding video frames and has a continuous arrangement relation, as target packet embedding information; and obtaining the target embedded information according to the target grouping embedded information.
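One simple way to realise the "no overlap, continuous arrangement" filter is a greedy scan over the candidate groups sorted by start frame; this greedy choice is an assumption, not something the text prescribes:

```python
def filter_groups(candidates):
    """`candidates`: (start_frame, end_frame, bits) tuples for groups that
    already passed the correlation test.  Keep an in-order subset whose
    frame ranges do not overlap."""
    kept, last_end = [], -1
    for start, end, bits in sorted(candidates):
        if start > last_end:
            kept.append((start, end, bits))
            last_end = end
    return kept
```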
Optionally, the obtaining the target embedded information according to the target packet embedded information includes: carrying out error correction processing on the target packet embedded information to obtain error-corrected target packet embedded information; and acquiring the target embedded information according to the error-corrected target packet embedded information.
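The description does not name the error-correcting code carried by the packets, so the sketch below substitutes a k-fold repetition code with majority decoding purely for illustration:

```python
def correct_repetition(bits, k=3):
    """Majority-decode a packet in which each data bit was repeated k
    times; a stand-in for whatever error correction the packets carry."""
    assert k % 2 == 1 and len(bits) % k == 0
    return [1 if sum(bits[i:i + k]) > k // 2 else 0
            for i in range(0, len(bits), k)]
```

With k = 3 the decoder recovers the payload even if one copy of each bit was corrupted during transcoding or compression.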
In a seventh embodiment of the present application, corresponding to the data processing method provided in the second embodiment of the present application, another electronic device is provided, and since the embodiment of the electronic device is substantially similar to the embodiment of the method, the description is relatively simple, and for relevant points, reference may be made to part of the description of the embodiment of the method, and the embodiment of the electronic device described below is only illustrative. A seventh embodiment of the present application provides an electronic device including:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring a video file to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
Corresponding to the data processing method provided by the second embodiment of the present application, the eighth embodiment of the present application also provides another storage device, since the embodiment of the storage device is substantially similar to the embodiment of the method, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the embodiment of the method, and the embodiment of the storage device described below is only illustrative. A storage device according to an eighth embodiment of the present application stores a program of a data processing method, the program being executed by a processor to perform the steps of:
acquiring a video file to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
In correspondence with the data processing method provided in the first embodiment of the present application, a ninth embodiment of the present application further provides a data processing method, please refer to fig. 7, which is a flowchart of the data processing method provided in the ninth embodiment of the present application, wherein a part of steps or details are described in detail in the first embodiment, so that the description herein is relatively simple, and for the relevant points, reference may be made to a part of the description in the above method embodiment, and the processing procedure described below is only exemplary.
Fig. 7 is a flowchart of a data processing method according to a ninth embodiment of the present application, which is described below with reference to fig. 7.
Step S701, a carrier object is obtained, and information to be embedded is obtained.
The carrier object is an object corresponding to the carrier video file in the first embodiment of the present application, and contains a video frame with a length that meets the requirement of embedding information to be embedded into the object, and the object is in YUV format, or can be converted into YUV format.
For example, the carrier object may be a streaming media file, such as a video stream provided by an online video-on-demand platform or an online live-streaming platform that can be played directly; or it may be a video stream transmitted in a Virtual Reality (VR) or Augmented Reality (AR) scene; or it may be an image containing several continuous frames, such as a dynamic image in GIF (Graphics Interchange Format). Of course, as the technology advances, the carrier object may be other types of objects, which are not specifically limited herein.
Step S702, embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier object to obtain a target object, where the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel satisfy a preset correlation condition.
Corresponding to the data processing method provided in the ninth embodiment of the present application, a tenth embodiment of the present application further provides a data processing apparatus, please refer to fig. 8, which is a schematic diagram of an embodiment of a data processing apparatus provided in the tenth embodiment of the present application. A tenth embodiment of the present application provides a data processing apparatus including:
An object obtaining unit 801, configured to obtain a carrier object and obtain information to be embedded.
An information embedding unit 802, configured to embed the information to be embedded into a luminance channel and a chrominance channel of the carrier object to obtain a target object, where the embedding information embedded into the luminance channel and the embedding information embedded into the chrominance channel satisfy a preset correlation condition.
Corresponding to the data processing method provided by the ninth embodiment of the present application, the eleventh embodiment of the present application further provides an electronic device, since the embodiment of the electronic device is substantially similar to the embodiment of the method, the description is relatively simple, and for relevant points, reference may be made to part of the description of the embodiment of the method, and the embodiment of the electronic device described below is only illustrative. An electronic device provided in an eleventh embodiment of the present application includes:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
Corresponding to the data processing method provided by the ninth embodiment of the present application, the twelfth embodiment of the present application further provides a storage device, since the storage device embodiment is substantially similar to the method embodiment, the description is relatively simple, and for relevant points, reference may be made to part of the description of the method embodiment, and the storage device embodiment described below is only illustrative. A storage device according to a twelfth embodiment of the present application stores a program of a data processing method, the program being executed by a processor and performing the steps of:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
Corresponding to the data processing method provided in the ninth embodiment of the present application, the thirteenth embodiment of the present application further provides a data processing method, specifically configured to extract target embedded information in an object to be detected, please refer to fig. 9, which is a flowchart of the data processing method provided in the thirteenth embodiment of the present application, wherein a part of steps or details are described in detail in the first embodiment and the ninth embodiment, so that the description herein is relatively simple, and for relevant points, reference may be made to part of descriptions in the above method embodiments, and the processing procedure described below is only schematic.
Fig. 9 is a flowchart of a data processing method according to a thirteenth embodiment of the present application, which is described below with reference to fig. 9.
Step S901, an object to be detected is acquired.
The object to be detected is an object corresponding to the target object in which the embedded information is embedded in the luminance channel and the chrominance channel, respectively, as described in the first embodiment and the ninth embodiment of the present application.
Step S902, obtaining target embedding information from the luminance channel and the chrominance channel of the object to be detected by determining whether the embedding information in the luminance channel of the object to be detected and the embedding information in the chrominance channel of the object to be detected satisfy a preset correlation condition.
Corresponding to the data processing method provided in the thirteenth embodiment of the present application, a data processing apparatus is further provided in the fourteenth embodiment of the present application, please refer to fig. 10, which is a schematic diagram of an embodiment of a data processing apparatus provided in the fourteenth embodiment of the present application. A data processing apparatus provided in a fourteenth embodiment of the present application includes:
An object acquisition unit 1001, configured to acquire an object to be detected.
An embedded information obtaining unit 1002, configured to obtain target embedded information from the luminance channel and the chrominance channel of the object to be detected by determining whether embedded information in the luminance channel of the object to be detected and embedded information in the chrominance channel of the object to be detected satisfy a preset correlation condition.
Corresponding to the data processing method provided in the thirteenth embodiment of the present application, the fifteenth embodiment of the present application further provides an electronic device, which is substantially similar to the method embodiment, so that the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment, and the electronic device embodiments described below are only illustrative. A fifteenth embodiment of the present application provides an electronic device comprising:
a processor;
a memory for storing a program of a data processing method, the apparatus performing the following steps after being powered on and running the program of the data processing method by the processor:
acquiring an object to be detected;
and acquiring target embedded information from the brightness channel and the chromaticity channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chromaticity channel of the object to be detected meet a preset correlation condition.
Corresponding to the data processing method provided in the thirteenth embodiment of the present application, the sixteenth embodiment of the present application further provides a storage device, since the storage device embodiment is substantially similar to the method embodiment, so that the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment, and the storage device embodiment described below is only illustrative. A sixteenth embodiment of the present application provides a storage device, in which a program of a data processing method is stored, where the program is executed by a processor, and executes the following steps:
acquiring an object to be detected;
and acquiring target embedded information from the brightness channel and the chromaticity channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chromaticity channel of the object to be detected meet a preset correlation condition.
Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media does not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (37)

1. A data processing method, comprising:
acquiring a carrier video file and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
2. The data processing method according to claim 1, wherein the information to be embedded is embedded in a luminance channel and a chrominance channel of the carrier video file, respectively, by the following steps, such that a correlation between the embedded information embedded in the luminance channel and the embedded information embedded in the chrominance channel is not less than a preset correlation threshold:
embedding the information to be embedded into a brightness channel of the carrier video file according to the sequential embedding sequence;
embedding the information to be embedded into a color component channel of a chrominance channel of the carrier video file according to the embedding sequence of the reverse order;
and embedding the information from the start bit to the middle bit of the information to be embedded and the information from the middle bit to the end bit of the information to be embedded into the saturation component channel of the chrominance channel of the carrier video file in the reverse embedding sequence.
3. The data processing method according to claim 1 or 2, wherein said embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier video file to obtain a target video file comprises:
grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded;
and embedding the at least one group of information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain the target video file.
4. The data processing method according to claim 3, wherein said embedding the at least one group of information to be embedded into the luminance channel and the chrominance channel of the carrier video file to obtain the target video file comprises:
acquiring first packet information to be embedded and second packet information to be embedded from the at least one group of packet information to be embedded, wherein the second packet information to be embedded is information after the first packet information to be embedded;
embedding the first group of information to be embedded into a first video frame of the carrier video file, and embedding the second group of information to be embedded into a second video frame of the carrier video file to obtain the target video file, wherein the number of video frames contained in the first video frame corresponds to the length of the first group of information to be embedded, and the second video frame is a video frame after the first video frame.
5. The data processing method according to claim 4, wherein said embedding the first packet of information to be embedded into the first video frame of the carrier video file comprises:
embedding the first group of information to be embedded into a brightness channel of the first video frame according to a sequential embedding sequence;
embedding the first group of information to be embedded into a color component channel of a chrominance channel of the first video frame according to the embedding sequence of the reverse order;
and embedding the information from the start bit to the middle bit of the first packet of information to be embedded and the information from the middle bit to the end bit of the first packet of information to be embedded into the saturation component channel of the chrominance channel of the first video frame in the reverse embedding sequence.
6. The data processing method according to claim 5, wherein said embedding the first packet of information to be embedded into the first video frame of the carrier video file comprises:
acquiring a video frame to be processed from the first video frame, wherein the video frame to be processed is a preset odd number of continuous video frames;
and embedding at least one bit of information in the first group of information to be embedded into a luminance channel and a chrominance channel of the video frame to be processed.
7. The data processing method according to claim 6, wherein said embedding at least one bit of information in the first packet of information to be embedded into the luminance channel and the chrominance channel of the video frame to be processed comprises:
adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed, and establishing a mapping relation between the video frames to be processed and the at least one bit of information;
and embedding the at least one bit of information into a brightness channel and a chroma channel of the video frame to be processed through the mapping relation.
8. The data processing method of claim 7, wherein the mapping relationship comprises:
mapping first element information in an ascending relationship of brightness information between video frames in the video frames to be processed, mapping the first element information in an ascending relationship of color information between video frames in the video frames to be processed, and mapping the first element information in an ascending relationship of saturation information between video frames in the video frames to be processed;
mapping second element information in a descending relation of brightness information among the video frames in the video frames to be processed, mapping the second element information in a descending relation of color information among the video frames in the video frames to be processed, and mapping the second element information in a descending relation of saturation information among the video frames in the video frames to be processed;
wherein the first element information and the second element information are basic elements constituting the information to be embedded, and the first element information is different from the second element information.
9. The data processing method of claim 8, wherein the mapping relationship is established by:
acquiring a brightness channel of each video frame in the video frames to be processed, and a color component channel of a chrominance channel and a block to be modified in a saturation component channel;
acquiring the minimum perceptible difference corresponding to the brightness channel of each video frame, and the color component channel and the saturation component channel of the chrominance channel;
and adjusting brightness information, color information and saturation information between at least two video frames in the video frames to be processed according to the block to be modified and the just noticeable difference, and establishing the mapping relation.
10. The data processing method according to claim 9, wherein said adjusting luminance information, color information and saturation information between at least two of the video frames to be processed according to the block to be modified and the just noticeable difference, and establishing the mapping relationship comprises:
if the information to be processed in the first group of information to be embedded is the first element information, subtracting the minimum perceptible difference corresponding to the block to be modified from at least one pixel in the block to be modified corresponding to the video frame before the middle video frame of the video frame to be processed, and adding the minimum perceptible difference corresponding to the block to be modified and at least one pixel in the block to be modified corresponding to the video frame after the middle video frame of the video frame to be processed;
and if the information of the information to be processed in the first group of information to be embedded is the second element information, adding at least one pixel in a block to be modified corresponding to a video frame before the middle video frame of the video frame to be processed and the minimum perceptible difference corresponding to the block to be modified, and subtracting the minimum perceptible difference corresponding to the block to be modified from at least one pixel in the block to be modified corresponding to a video frame after the middle video frame of the video frame to be processed.
11. The data processing method of claim 9, wherein the block to be modified is a 2 x 2 block of data.
12. The data processing method according to claim 3, wherein the grouping the information to be embedded according to a preset grouping rule to obtain at least one group of grouped information to be embedded comprises:
grouping the information to be embedded by a preset first length to obtain at least one group of information to be processed;
and adding error correction information into the at least one group of information of the packets to be processed to obtain at least one group of information of the packets to be embedded, wherein the length of the information of the packets to be embedded is a preset second length, and the preset second length is greater than the preset first length.
13. A data processing method, comprising:
acquiring a video file to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
14. The data processing method of claim 13, comprising:
acquiring at least one group of to-be-processed packet embedding information from the brightness channel and the chrominance channel of the to-be-detected video file by judging whether the embedding information in the brightness channel of the to-be-detected video file and the embedding information in the chrominance channel of the to-be-detected video file meet a preset correlation condition;
and obtaining the target embedded information according to the at least one group of to-be-processed packet embedded information.
15. The data processing method of claim 14, wherein the obtaining at least one set of packet embedding information to be processed comprises:
acquiring at least one group of original packet embedding information from the video file to be detected, wherein the original packet embedding information comprises first component information acquired from a brightness channel of the video file to be detected, second component information acquired from a color component channel of a chromaticity channel of the video file to be detected and third component information acquired from a saturation component channel of the chromaticity channel of the video file to be detected;
and determining at least one group of to-be-processed packet embedding information from the at least one group of original packet embedding information by judging whether the correlation among the first component information, the second component information and the third component information is not less than a preset correlation threshold value.
16. The data processing method according to claim 15, wherein said obtaining at least one set of original packet embedding information from the video file to be detected comprises:
traversing a brightness channel and a chrominance channel of the video file to be detected by a sliding window to obtain at least one group of original packet embedding information;
the step length of the sliding window is a preset step length value, the length of the sliding window is a preset window length, and the preset window length corresponds to the length of a packet used when the embedded information packet is embedded into the carrier video file.
17. The data processing method of claim 16, wherein traversing the luminance channel and the chrominance channel of the video file to be detected with a sliding window to obtain at least one set of original packet embedding information comprises:
according to the sliding window, acquiring a first candidate video frame from the video file to be detected;
acquiring first original packet embedding information from the first candidate video frame according to a preset mapping relation;
and acquiring at least one group of original packet embedding information according to the first original packet embedding information.
18. The data processing method of claim 17, wherein the obtaining first original packet embedding information from the first candidate video frame according to the preset mapping relationship comprises:
acquiring a first video frame to be processed from the first candidate video frame, wherein the first video frame to be processed is a preset odd number of continuous video frames;
acquiring first to-be-processed component embedded information containing at least one bit of information from a brightness channel of the first to-be-processed video frame, a color component channel and a saturation component channel of a chrominance channel according to the preset mapping relation;
and acquiring the first original packet embedding information according to the first to-be-processed component embedding information.
19. The data processing method of claim 18, wherein the preset mapping relationship comprises:
mapping first element information to an ascending relation of brightness information between video frames in the first video frame to be processed, to an ascending relation of color information between the video frames, and to an ascending relation of saturation information between the video frames;
mapping second element information to a descending relation of brightness information between video frames in the first video frame to be processed, to a descending relation of color information between the video frames, and to a descending relation of saturation information between the video frames;
wherein the first element information and the second element information are basic elements constituting the target embedded information, and the first element information is different from the second element information.
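The mapping of this claim pairs each ascending or descending channel relation with one of the two basic elements. A sketch, assuming the two elements are the bits 1 and 0 (the claim only requires that they differ) and that the relation is measured between channel pixel sums:

```python
FIRST_ELEMENT, SECOND_ELEMENT = 1, 0  # assumed values; the claim only requires them to differ

def element_from_relation(earlier_sum, later_sum):
    # An ascending relation (later frames larger) maps to the first element,
    # a descending relation maps to the second element, identically for the
    # brightness, color and saturation channels.
    return FIRST_ELEMENT if later_sum > earlier_sum else SECOND_ELEMENT

assert element_from_relation(100, 150) == FIRST_ELEMENT   # ascending relation
assert element_from_relation(150, 100) == SECOND_ELEMENT  # descending relation
```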
20. The data processing method of claim 19, wherein the obtaining, according to the preset mapping relationship, first to-be-processed component embedding information including at least one bit of information from a brightness channel and a chrominance channel of the first to-be-processed video frame comprises:
calculating the pixel sum of a brightness channel, the pixel sum of a color component channel of a chrominance channel and the pixel sum of a saturation component channel of the chrominance channel of each frame of the first video frame to be processed to obtain the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed;
and acquiring first to-be-processed component embedded information containing at least one bit of information according to the ascending and descending order relation and the preset mapping relation.
21. The data processing method of claim 20, comprising:
acquiring a video frame between a start frame and an intermediate frame of the first video frame to be processed as a first video frame to be calculated;
acquiring a video frame between an intermediate frame and an end frame of the first video frame to be processed as a second video frame to be calculated;
acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the first video frame to be calculated as a first brightness pixel sum, a first color pixel sum and a first saturation pixel sum;
acquiring a pixel sum of a brightness channel, a pixel sum of a color component channel of a chrominance channel and a pixel sum of a saturation component channel of the chrominance channel of the second video frame to be calculated as a second brightness pixel sum, a second color pixel sum and a second saturation pixel sum;
and obtaining the ascending and descending order relation of the brightness information, the color information and the saturation information of the first video frame to be processed by comparing the first brightness pixel sum with the second brightness pixel sum, the first color pixel sum with the second color pixel sum, and the first saturation pixel sum with the second saturation pixel sum.
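Claims 20 and 21 reduce each channel of a frame group to one bit by comparing pixel sums on either side of the middle frame. A sketch under stated assumptions: each frame is represented by its precomputed per-channel pixel sum, "between" excludes the boundary frames themselves, and an ascending relation maps to bit 1.

```python
def channel_bit(frame_sums, mid):
    # frame_sums: per-frame pixel sums of one channel (brightness, color or
    # saturation) over an odd-length group of consecutive frames.
    first_half = sum(frame_sums[1:mid])        # frames between start and middle frame
    second_half = sum(frame_sums[mid + 1:-1])  # frames between middle and end frame
    return 1 if second_half > first_half else 0  # ascending -> 1 (assumed)

# Five-frame group with rising luminance sums; the middle frame has index 2:
assert channel_bit([10, 12, 30, 40, 45], mid=2) == 1
assert channel_bit([45, 40, 30, 12, 10], mid=2) == 0
```

Running the same comparison independently on the brightness, color and saturation sums yields the three component bits whose agreement is later tested against the correlation threshold.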
22. The data processing method according to claim 14, wherein said obtaining the target embedding information according to the at least one group of packet embedding information to be processed comprises:
acquiring the overlapping relation and the arrangement relation between the video frames corresponding to the at least one group of to-be-processed packet embedding information, and taking, as target packet embedding information, the to-be-processed packet embedding information whose corresponding video frames have no overlapping relation and whose arrangement relation is continuous;
and obtaining the target embedded information according to the target grouping embedded information.
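The selection described in claim 22 can be sketched as a single pass over the candidates. The `(start, end, index, bits)` tuple layout and the name `select_target_packets` are assumptions about how candidates might be represented, not the patented data structure.

```python
def select_target_packets(candidates):
    # candidates: (start_frame, end_frame, packet_index, bits) tuples sorted by
    # start_frame. Keep only packets whose frame ranges do not overlap the
    # previously kept packet and whose packet indices run consecutively.
    selected, last_end, last_index = [], -1, None
    for start, end, index, bits in candidates:
        no_overlap = start >= last_end
        consecutive = last_index is None or index == last_index + 1
        if no_overlap and consecutive:
            selected.append(bits)
            last_end, last_index = end, index
    return selected

# The middle candidate overlaps frames 3-8 and carries a non-consecutive
# packet index, so only the first and last candidates survive:
cands = [(0, 5, 0, [1, 0]), (3, 8, 9, [0, 0]), (5, 10, 1, [0, 1])]
assert select_target_packets(cands) == [[1, 0], [0, 1]]
```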
23. The data processing method of claim 22, wherein obtaining the target embedded information according to the target packet embedded information comprises:
carrying out error correction processing on the target packet embedded information to obtain error-corrected target packet embedded information;
and acquiring the target embedded information according to the error-corrected target packet embedded information.
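Claim 23 leaves the error-correction scheme open. One simple possibility, shown purely for illustration, is a per-position majority vote over repeated recoveries of the same packet; a real deployment might instead apply a block code such as BCH or Reed-Solomon.

```python
def majority_decode(copies):
    # copies: several noisy bit sequences recovered for the same packet.
    # Vote position by position; ties fall back to 0 here (an arbitrary choice).
    return [1 if sum(bits) * 2 > len(bits) else 0 for bits in zip(*copies)]

# Three noisy copies of the packet [1, 0, 1, 1]; each single-copy error is outvoted:
assert majority_decode([[1, 0, 1, 1], [1, 0, 0, 1], [1, 1, 1, 1]]) == [1, 0, 1, 1]
```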
24. A data processing apparatus, comprising:
the acquisition unit is used for acquiring a carrier video file and acquiring information to be embedded;
and the embedding unit is used for embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
25. An electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, wherein after the device is powered on and the processor runs the program of the data processing method, the device performs the following steps:
acquiring a carrier video file and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
26. A storage device characterized by storing a program of a data processing method, the program, when executed by a processor, performing the following steps:
acquiring a carrier video file and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier video file to obtain a target video file, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
27. A data processing apparatus, comprising:
the device comprises a to-be-detected video file acquisition unit, a to-be-detected video file acquisition unit and a to-be-detected video file acquisition unit, wherein the to-be-detected video file acquisition unit is used for acquiring a to-be-detected video file;
and the embedded information acquisition unit is used for acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition or not.
28. An electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, wherein after the device is powered on and the processor runs the program of the data processing method, the device performs the following steps:
acquiring a video file to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
29. A storage device characterized by storing a program of a data processing method, the program, when executed by a processor, performing the following steps:
acquiring a video file to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the video file to be detected by judging whether the embedded information in the brightness channel of the video file to be detected and the embedded information in the chrominance channel of the video file to be detected meet a preset correlation condition.
30. A data processing method, comprising:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
31. A data processing apparatus, comprising:
the object acquisition unit is used for acquiring the carrier object and acquiring the information to be embedded;
and the information embedding unit is used for embedding the information to be embedded into a luminance channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the luminance channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
32. An electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, wherein after the device is powered on and the processor runs the program of the data processing method, the device performs the following steps:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
33. A storage device characterized by storing a program of a data processing method, the program, when executed by a processor, performing the following steps:
acquiring a carrier object and acquiring information to be embedded;
and embedding the information to be embedded into a brightness channel and a chrominance channel of the carrier object to obtain a target object, wherein the embedded information embedded into the brightness channel and the embedded information embedded into the chrominance channel meet a preset correlation condition.
34. A data processing method, comprising:
acquiring an object to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
35. A data processing apparatus, comprising:
the object acquisition unit is used for acquiring an object to be detected;
and the embedded information acquisition unit is used for acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
36. An electronic device, comprising:
a processor;
a memory for storing a program of a data processing method, wherein after the device is powered on and the processor runs the program of the data processing method, the device performs the following steps:
acquiring an object to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
37. A storage device characterized by storing a program of a data processing method, the program, when executed by a processor, performing the following steps:
acquiring an object to be detected;
and acquiring target embedded information from the brightness channel and the chrominance channel of the object to be detected by judging whether the embedded information in the brightness channel of the object to be detected and the embedded information in the chrominance channel of the object to be detected meet a preset correlation condition.
CN202010166869.9A · 2020-03-11 · 2020-03-11 · Data processing method and device, electronic equipment and storage equipment · Active · CN113395475B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202010166869.9A CN113395475B (en) · 2020-03-11 · 2020-03-11 · Data processing method and device, electronic equipment and storage equipment

Publications (2)

Publication Number · Publication Date
CN113395475A (en) · 2021-09-14
CN113395475B (en) · 2023-02-28

Family

ID=77616630

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202010166869.9A · Active · CN113395475B (en)

Country Status (1)

Country · Link
CN (1) · CN113395475B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US20030076979A1 (en) * · 2001-07-10 · 2003-04-24 · Kowa Co., Ltd. · Method of embedding digital watermark, method of extracting embedded digital watermark and apparatuses for the same
CN101297320A (en) * · 2005-10-26 · 2008-10-29 · Koninklijke Philips Electronics N.V. · A method of embedding data in an information signal
CN103997652A (en) * · 2014-06-12 · 2014-08-20 · Beijing QIYI Century Science and Technology Co., Ltd. · Video watermark embedding method and device
US20170006301A1 (en) * · 2015-07-02 · 2017-01-05 · Cisco Technology, Inc. · Mpeg-2 video watermarking technique


Cited By (4)

Publication number · Priority date · Publication date · Assignee · Title
CN114466147A (en) * · 2021-12-23 · 2022-05-10 · Alibaba (China) Co., Ltd. · Video brightness adjusting method and device, electronic equipment and storage medium
CN114466147B (en) * · 2021-12-23 · 2024-03-15 · Alibaba (China) Co., Ltd. · Video brightness adjusting method and device, electronic equipment and storage medium
CN120075545A (en) * · 2025-04-24 · 2025-05-30 · Beijing Yizhixuan Technology Co., Ltd. · Video watermark embedding processing method, video watermark extracting processing device and video watermark extracting processing equipment
CN120075545B (en) * · 2025-04-24 · 2025-08-15 · Beijing Yizhixuan Technology Co., Ltd. · Video watermark embedding processing method, video watermark extracting processing device and video watermark extracting processing equipment

Also Published As

Publication number · Publication date
CN113395475B (en) · 2023-02-28

Similar Documents

Publication · Title
AU2020201708B2 (en) · Techniques for encoding, decoding and representing high dynamic range images
US9996891B2 (en) · System and method for digital watermarking
KR102128233B1 (en) · Encoding, decoding, and representing high dynamic range images
CN108235037B (en) · Encoding and decoding image data
KR101726572B1 (en) · Method of lossless image enconding and decoding and device performing the same
CN113395475B (en) · Data processing method and device, electronic equipment and storage equipment
CN113497908B (en) · Data processing method and device, electronic equipment and storage equipment
JP4119637B2 (en) · Digital watermark embedding method, digital watermark embedding device, digital watermark embedding program, digital watermark detection method, digital watermark detection device, and digital watermark detection program
HK40009290B (en) · Methods for encoding and decoding high dynamic range images
JP2004023633A (en) · Information embedding device and information extracting device
HK40009915A (en) · Techniques for encoding, decoding and representing high dynamic range images
HK40009290A (en) · Methods for encoding and decoding high dynamic range images
JP2005065202A (en) · Data compression method
HK1228142A1 (en) · Methods for encoding and decoding high dynamic range images
HK1228142B (en) · Methods for encoding and decoding high dynamic range images

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
