Detailed Description
To better understand the above technical solution, it is described in detail below with reference to specific embodiments.
Example 1
This embodiment of the invention provides a seismic data compression method which, as shown in fig. 1, comprises the following steps S10 to S50:
Step S10, initializing resources by the main thread.
This step includes calculating the number of segments of the seismic data, allocating a compressed seismic data buffer, determining the number of computing threads participating in compression, and so on, so that the data segments each computing thread is responsible for can be divided and the efficiency of data conversion ensured.
As shown in fig. 2, this step comprises at least sub-steps S101 to S104:
Step S101, calculating the number of segments of the seismic data trace to be compressed that has entered the original seismic data trace buffer.
Existing seismic data includes a plurality of seismic data traces, each of which includes a trace header and a plurality of seismic samples; each sample is a 4-byte floating-point value. The seismic data trace to be compressed that has entered the original trace buffer is segmented according to a preset number of seismic samples to obtain the number of data segments: if the total number of samples of the trace is divisible by the preset number of samples with no remainder, the number of segments of the trace is the resulting quotient; otherwise it is the quotient plus 1, and the last segment of the trace is padded with reference to the preset number of samples.
In some embodiments, the number of seismic samples contained in each data segment of a trace is denoted SN; SN must be a multiple of 8 and preferably defaults to 32, i.e., each segment contains 32 samples. The calling program supplies the number of samples M in each seismic data trace, from which the number of segments is calculated; if the last segment has fewer than SN samples, the remaining samples are padded up to a multiple of 8 and counted as one segment, as in the sketch below.
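The following minimal sketch illustrates this segment-count calculation under the assumptions above (SN a multiple of 8, defaulting to 32); the struct and function names are illustrative and not part of the original method.

```cpp
#include <cstdint>

struct SegmentLayout {
    int32_t num_segments;       // number of segments the trace is split into
    int32_t last_segment_count; // sample count of the last segment after padding to a multiple of 8
};

// M: number of samples per trace supplied by the calling program; SN: preset samples per segment.
SegmentLayout compute_segments(int32_t M, int32_t SN = 32) {
    SegmentLayout layout{};
    layout.num_segments = M / SN + (M % SN != 0 ? 1 : 0);   // quotient, plus 1 if there is a remainder
    int32_t remainder = M % SN;
    layout.last_segment_count = (remainder == 0)
        ? SN                                                // last segment is already full
        : ((remainder + 7) / 8) * 8;                        // pad a short last segment up to a multiple of 8
    return layout;
}
```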
Step S102, setting the number of compression bits per seismic sample, and allocating a compressed seismic data buffer according to the number of segments of the seismic data trace to be compressed and the number of compression bits per sample.
The number of bits each seismic sample occupies after compression must be set reasonably, since it determines the compression effect. In some embodiments the value preferably ranges from 16 to 23 bits; in a large number of practical cases, with 20 bits the compression rate can reach about 35%.
Step S103, determining the number of computing threads participating in compression.
The number of computing threads participating in compression may be set to the number of threads that the current hardware environment can provide, or may be specified by the user; it is also constrained by the number of segments of the seismic data trace and must not exceed that number.
Step S104, initializing a synchronous concurrent bounded queue Q and a plurality of running bounded queues QS, and starting a calculation thread.
Specifically, as shown in fig. 3, steps S1041 to S1043 are included.
In step S1041, a synchronous concurrent bounded queue Q for coordinating the synchronous and independent running of each computing thread is initialized, and the maximum element number of the synchronous concurrent bounded queue is set by using the obtained number of computing threads.
The synchronous concurrent bounded queue coordinates the synchronized operation of the computing threads and ensures that each thread performs its compression operation on the current data trace independently.
In step S1042, a running bounded queue QS corresponding to each computing thread is initialized, where the maximum number of elements in the running bounded queue is 1.
Each computing thread corresponds to one running bounded queue, and the maximum number of elements in the queue is set to 1 to ensure that the thread leaves the blocked state and starts running only when a seismic data trace is ready to be compressed, and blocks again once compression of the current data trace is completed.
Step S1043, allocating the preset number of data segments to be compressed for each computing thread, and starting each computing thread.
The data segments each computing thread will operate on are allocated and each thread is started; because the running bounded queue used by each computing thread is empty at this point, each thread blocks when it tries to take an element from its own queue and waits there until a seismic data trace is ready for compression. A sketch of such a bounded blocking queue is given below.
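A minimal sketch of a bounded blocking queue with the behavior described in steps S1041 to S1043 (Q sized to the number of computing threads, each QS holding at most one element, take() blocking while the queue is empty). The class name, element type, and initialization snippet are illustrative assumptions, not the original implementation.

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <vector>

class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void put(int value) {                       // blocks while the queue is full
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push(value);
        not_empty_.notify_one();
    }

    int take() {                                // blocks while the queue is empty
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        int value = items_.front();
        items_.pop();
        not_full_.notify_one();
        return value;
    }

private:
    std::size_t capacity_;
    std::queue<int> items_;
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
};

// Initialization along the lines of steps S1041-S1043 (illustrative):
//   BoundedQueue Q(num_threads);                    // synchronous concurrent bounded queue Q
//   std::vector<std::unique_ptr<BoundedQueue>> QS;  // one capacity-1 running queue per compute thread
//   for (int i = 0; i < num_threads; ++i) QS.push_back(std::make_unique<BoundedQueue>(1));
```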
Step S20, calculating the fidelity of the seismic data trace before and after compression according to the number of compression bits per seismic sample, until the fidelity meets a preset fidelity threshold.
Specifically, as shown in fig. 4, at least steps S201 to S204 are included.
Step S201, calculating the fidelity of each seismic data sample before and after compression.
Specifically, if the original value and the compressed value of a seismic data sample have different signs, the fidelity of that sample is set to 0.
Otherwise, if the original value of the sample is 0 and the compressed value is not 0, the fidelity is set to 0.
Otherwise, the absolute value of the difference between the original and compressed values of the sample is calculated; if this absolute value is 0, the fidelity is set to a preset fidelity peak value.
Otherwise, the fidelity is calculated as:
20·log10(ABS/V)
where ABS is the absolute value of the difference between the original and compressed values of the sample, and V is the original value of the sample.
In practical applications the inventors have found that if the fidelity of the data reaches -105 dB or better during seismic data processing, subsequent processing results are not affected, and if the fidelity reaches -140 dB the processing results are very good; the preset fidelity peak value can therefore be set to -140 dB, as in the sketch below.
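A minimal sketch of the per-sample fidelity rule of step S201, using the -140 dB peak value suggested above; the function name, float inputs, and double return type are assumptions for illustration.

```cpp
#include <cmath>

double sample_fidelity(float original, float compressed, double peak_db = -140.0) {
    // Opposite signs: fidelity is 0 (worst case).
    if ((original > 0.0f && compressed < 0.0f) || (original < 0.0f && compressed > 0.0f))
        return 0.0;
    // A zero original mapped to a non-zero compressed value: fidelity is 0.
    if (original == 0.0f && compressed != 0.0f)
        return 0.0;
    double abs_diff = std::fabs(static_cast<double>(original) - static_cast<double>(compressed));
    if (abs_diff == 0.0)
        return peak_db;  // lossless sample: use the preset fidelity peak value
    // 20*log10(|difference| / |original|); more negative dB means higher fidelity.
    return 20.0 * std::log10(abs_diff / std::fabs(static_cast<double>(original)));
}
```

Per steps S202 and S203, these per-sample values are then averaged over each segment and over the segments of the trace.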
Step S202, the sum of the fidelities of the seismic data samples in a data segment is divided by the total number of samples in the segment to obtain the fidelity of that data segment.
Step S203, the sum of the fidelities of all data segments is divided by the number of data segments of the seismic data trace to obtain the fidelity of the trace before and after compression.
Step S204, judging whether the obtained fidelity of the seismic data trace before and after compression meets the preset fidelity threshold. If it does, the data compression rate is calculated and recorded; otherwise the number of compression bits per seismic sample is adjusted, and the step of calculating the fidelity of each sample before and after compression and the subsequent steps are executed again until the fidelity meets the preset threshold. In this embodiment the preset fidelity threshold is set to -105 dB.
In other embodiments it is desirable that the compression algorithm not only adjusts the number of compression bits automatically to meet the fidelity requirement but also maximizes the data compression rate. Preferably, therefore, the data compression rate is recorded each time the fidelity of the trace before and after compression is evaluated, the number of compression bits per sample is then adjusted and steps S201 to S204 are executed again, the stored compression rates are compared, and the number of compression bits corresponding to the maximum compression rate is taken as the finally set number of compression bits per sample; a sketch of this selection loop follows.
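A minimal sketch of the preferred selection loop above: try each candidate bit count in the 16-23 range, keep only those meeting the fidelity threshold, and choose the one with the best compression rate. The fidelity and rate callbacks stand in for the routines of steps S201 to S204 and are assumptions of this sketch.

```cpp
#include <functional>

int select_compression_bits(const std::function<double(int)>& fidelity_db,
                            const std::function<double(int)>& compression_rate,
                            double fidelity_threshold = -105.0) {
    int best_bits = -1;
    double best_rate = 0.0;
    for (int bits = 16; bits <= 23; ++bits) {          // preferred range given in the text
        if (fidelity_db(bits) > fidelity_threshold)    // more negative dB means higher fidelity
            continue;                                  // this bit count does not meet the threshold
        double rate = compression_rate(bits);          // compression rate achieved with this bit count
        if (rate > best_rate) {
            best_rate = rate;
            best_bits = bits;
        }
    }
    return best_bits;                                  // -1 if no candidate met the threshold
}
```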
Step S30, calculating the length of the compressed seismic data trace according to the number of compression bits per seismic sample. Referring to fig. 5, this step includes at least sub-steps S301 to S304, specifically:
Step S301, calculating the number of bytes each data segment of the seismic data trace occupies after compression according to a preset formula, namely:
(BIT*SN/8)+5
where BIT is the set number of compression bits per seismic sample and SN is the number of seismic samples contained in each data segment of the trace. The extra 5 bytes hold the maximum value of the original samples of the segment (a single-precision floating-point value, 4 bytes) and the number of compression bits (1 byte).
Step S302, the first byte count, i.e., the number of bytes occupied by the whole seismic data trace excluding the last data segment, is calculated from the number of bytes each compressed data segment occupies.
In step S303, the second byte count occupied by the last segment of data of the seismic-data trace is calculated.
In practical applications the last data segment may not contain a full SN samples, so it must be computed separately. If the number of samples in the last segment equals SN, the byte count of a whole segment is used; if the number of samples in the last segment is not a multiple of 8, it is padded up to a multiple of 8, and the compressed byte count of the last segment is calculated from the padded value.
Step S304, the first byte count and the second byte count are added to obtain the length of the compressed seismic data trace; a sketch of this length calculation follows.
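A sketch of steps S301 to S304, assuming BIT compression bits per sample, SN samples per full segment, and the 5 extra bytes per segment described above; the function and parameter names are illustrative.

```cpp
#include <cstdint>

int64_t compressed_trace_length(int32_t num_segments, int32_t BIT, int32_t SN,
                                int32_t last_segment_samples) {
    // S301: bytes per full compressed segment, (BIT * SN / 8) + 5.
    int64_t full_segment_bytes = static_cast<int64_t>(BIT) * SN / 8 + 5;
    // S302: first byte count, covering all segments except the last one.
    int64_t first_bytes = full_segment_bytes * (num_segments - 1);
    // S303: second byte count; pad the last segment's sample count to a multiple of 8 first.
    int32_t padded = ((last_segment_samples + 7) / 8) * 8;
    int64_t second_bytes = static_cast<int64_t>(BIT) * padded / 8 + 5;
    // S304: total length of the compressed trace.
    return first_bytes + second_bytes;
}
```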
Step S40, using the synchronous concurrent bounded queue Q and the running bounded queues QS to control and coordinate the computing threads, the numerical type of the seismic samples is converted into a form occupying fewer bits according to a preset first rule, thereby compressing the data segments of the seismic data.
An original data segment consists of a plurality of seismic samples, each a 32-bit single-precision floating-point number; data compression is achieved by converting the floating-point numbers into a form that occupies fewer bits and therefore less storage space. The compression operations on the individual data segments are mutually independent, so compression efficiency can be improved through multithreaded parallelism. For example, each sample is stored in an N-bit unsigned integer. Within one trace the samples are grouped M at a time, where M must be a multiple of 8; a value of 32 is suitable considering factors such as the variation pattern of the sample values and the range influenced by outliers. Each group carries its own floating-point scale factor (SCALE) so that it can be compressed and decompressed independently. Referring to fig. 6, the compression principle of the seismic data is as follows:
Assume Nmax = 2^(N-1) - 1;
Mmax is the maximum absolute value of the M samples in the group, and the scale factor SCALE = Nmax/Mmax; the case where Mmax is 0 is handled specially.
The original floating-point sample value is fValue and the compressed unsigned integer value is nValue; the compressed value is:
nValue = fValue × SCALE + Nmax
The decompression operation is fValue = (nValue - Nmax)/SCALE.
The resulting nValue is then combined with adjacent samples through shift operations, bitwise OR, byte swapping, and the like, as sketched below.
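A minimal sketch of this per-group mapping, assuming N compression bits per sample and a group of M samples held in a vector; the names and the widened uint32_t storage are illustrative, and the actual bit packing is shown later.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct ScaledGroup {
    float scale;                   // SCALE = Nmax / Mmax (0 when Mmax is 0)
    std::vector<uint32_t> values;  // compressed N-bit values, kept widened here for clarity
};

ScaledGroup compress_group(const std::vector<float>& samples, int N) {
    const int64_t Nmax = (int64_t{1} << (N - 1)) - 1;      // Nmax = 2^(N-1) - 1
    float Mmax = 0.0f;
    for (float s : samples) Mmax = std::max(Mmax, std::fabs(s));

    ScaledGroup out;
    out.scale = (Mmax == 0.0f) ? 0.0f : static_cast<float>(Nmax) / Mmax;  // special case: Mmax == 0
    out.values.reserve(samples.size());
    for (float fValue : samples) {
        // nValue = round(fValue * SCALE) + Nmax, which fits in N unsigned bits.
        int64_t nValue = std::llround(fValue * out.scale) + Nmax;
        out.values.push_back(static_cast<uint32_t>(nValue));
    }
    return out;
}

float decompress_sample(uint32_t nValue, float scale, int N) {
    const int64_t Nmax = (int64_t{1} << (N - 1)) - 1;
    if (scale == 0.0f) return 0.0f;                        // Mmax was 0: every sample was 0
    return static_cast<float>(static_cast<int64_t>(nValue) - Nmax) / scale;  // fValue = (nValue - Nmax) / SCALE
}
```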
The specific implementation of this step is described in detail below using this compression principle, in conjunction with fig. 7. This step comprises at least the sub-steps S401 to S405:
Step S401, obtaining the range of data segments the current computing thread is responsible for and the sequence number of the computing thread.
Step S402, obtaining the running bounded queue QS corresponding to the computing thread from the thread sequence number;
Step S403, taking an element from the running bounded queue QS belonging to the current computing thread; if there is no element in the queue, the computing thread blocks; otherwise,
Step S404, an element value is obtained; if the element value indicates that compression of the whole seismic data trace is completed, the computing thread exits; otherwise step S405 is executed. Step S405 comprises at least the sub-steps S4051 to S4054.
Step S4051, calculating the byte position of the data segment to be compressed in the original data trace buffer and its byte position in the compressed data buffer from the obtained data segment number.
In step S4052, the maximum absolute value MAX of the valid samples in the segment is calculated using the number of seismic samples contained in the data segment to be compressed.
In this step it is necessary to determine whether the number of samples in the last data segment of the seismic data trace is a multiple of 8 and, if not, to pad it up to a multiple of 8. In addition, abnormal seismic sample values (for example INF (positive or negative infinity), NAN (not a number), or an absolute value exceeding a certain limit such as 10^10) are treated as outliers and discarded; the remaining samples are taken as valid values, and the maximum absolute value MAX of these valid samples is found.
In step S4053, the maximum positive integer that can be represented with the BIT bits used in the compressed data buffer to store the data of this segment is divided by the maximum absolute value of the valid samples in the segment, giving the scale factor SCALE used for compressing and decompressing the segment.
Assume that this maximum positive integer is V1, where V1 = POW(2, BIT-1) - 1 and the POW() function raises 2 to the power BIT-1. With MAX the maximum absolute value of the valid sample values, the scale factor SCALE for compressing and decompressing the seismic samples of the data segment is:
SCALE=V1/MAX
It should be noted that if the maximum absolute value MAX of the valid samples is 0, the scale factor SCALE is set to 0.
Step S4054, creating and initializing the variables used for compression, sequentially converting each original seismic sample in the data segment to be compressed into compressed data according to a preset operation method, placing the compressed data into the compressed data buffer, putting an element into the synchronous concurrent bounded queue Q, and then returning to the step of taking an element from the running bounded queue QS belonging to the current computing thread and the steps that follow. The preset operation method may include original-value mapping, data shift splicing, byte swapping, and the like. Specifically:
In some embodiments the variables created and initialized for compression include: Value, the unsigned integer obtained from an original seismic sample using the scale factor; ChangeValue, the unsigned integer into which the compressed sample bits are accumulated for storage; LeftMove, the number of bits the current sample must be shifted left, initialized to 0; RightMove, the number of bits the current sample must be shifted right, initialized to 0; and LeftBits, the number of bits still available (remaining bits) in ChangeValue, initialized to 32.
Each original sample in the segment is then converted into compressed data and placed into the compressed data buffer, processing from the first sample value to the last sample value of the segment. The specific steps are as follows:
Step (1), converting an original sample into its integer representation, specifically:
Step (1.1), if SCALE is 0, the integer Value of the current sample is 0; otherwise go to step (1.2).
Step (1.2), if the current segment is the last segment and the current sample belongs to the padded part, the integer Value is 0; otherwise go to step (1.3).
Step (1.3), the floating-point value of the current sample is multiplied by SCALE, rounded to the nearest integer, and added to V1; the result is assigned to Value.
Step (2), if the current sample is the first sample of the segment, go to step (3); otherwise go to step (4).
Step (3), Value is assigned to ChangeValue, which is then shifted left by (32 - BIT) bits; the number of remaining bits LeftBits of ChangeValue becomes (32 - BIT), and processing continues at step (1) with the next sample. BIT is the number of bits used to store each sample of the segment.
Step (4), if the remaining bit count LeftBits is greater than the per-sample bit count BIT, go to step (5); otherwise go to step (6).
Step (5), the left-shift count LeftMove is set to LeftBits - BIT, Value is shifted left by LeftMove bits and combined with ChangeValue by a bitwise OR, the result is stored in ChangeValue, LeftBits is updated to LeftMove, and processing continues at step (1) with the next sample.
Step (6), the right-shift count RightMove is set to BIT - LeftBits, Value is shifted right by RightMove bits and combined with ChangeValue by a bitwise OR, the result being stored in ChangeValue.
Step (7), ChangeValue is a 4-byte unsigned integer; its first and fourth bytes and its second and third bytes are exchanged, and ChangeValue is written into the compressed data buffer.
Step (8), the left-shift count LeftMove is calculated as 32 - RightMove; if LeftMove is 32, ChangeValue is set to 0, otherwise ChangeValue is assigned the result of shifting Value left by LeftMove bits.
Step (9), the number of bits LeftBits remaining in ChangeValue is set to LeftMove, and processing continues at step (1) with the next sample.
Through original-value mapping, data shift splicing, byte swapping, and similar operations, the floating-point values of the seismic data are converted into values that occupy fewer bits and less storage space, while the compressed data traces are guaranteed to have equal lengths. A sketch of this packing loop is given below.
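A sketch of the packing loop of steps (1) to (9), assuming BIT compression bits per sample, a precomputed SCALE and V1 for the segment, and a byte vector as the output buffer. Variable names follow the description above; the padding check and the final-flush note are assumptions of this sketch, not the original implementation.

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>
#include <vector>

// Exchange byte 1 with byte 4 and byte 2 with byte 3 of a 4-byte unsigned integer (step (7)).
static uint32_t swap_bytes(uint32_t v) {
    return ((v & 0x000000FFu) << 24) | ((v & 0x0000FF00u) << 8) |
           ((v & 0x00FF0000u) >> 8)  | ((v & 0xFF000000u) >> 24);
}

void pack_segment(const std::vector<float>& samples, int padded_count, int BIT,
                  float SCALE, uint32_t V1, std::vector<uint8_t>& out) {
    uint32_t ChangeValue = 0;  // 4-byte accumulator written to the compressed buffer
    int LeftBits = 32;         // bits still available in ChangeValue
    for (int i = 0; i < padded_count; ++i) {
        // Step (1): convert the original sample into its unsigned integer representation.
        bool padded = (i >= static_cast<int>(samples.size()));  // padded part of the last segment
        uint32_t Value = 0;
        if (SCALE != 0.0f && !padded)
            Value = static_cast<uint32_t>(std::llround(samples[i] * SCALE) + static_cast<int64_t>(V1));
        // Steps (2)-(3): the first sample of the segment starts a fresh accumulator.
        if (i == 0) {
            ChangeValue = Value << (32 - BIT);
            LeftBits = 32 - BIT;
            continue;
        }
        if (LeftBits > BIT) {
            // Step (5): the whole value fits below the bits already stored.
            int LeftMove = LeftBits - BIT;
            ChangeValue |= Value << LeftMove;
            LeftBits = LeftMove;
        } else {
            // Step (6): only the top LeftBits bits fit; the rest spill into the next word.
            int RightMove = BIT - LeftBits;
            ChangeValue |= Value >> RightMove;
            // Step (7): byte-swap the filled 32-bit word and write it out.
            uint32_t swapped = swap_bytes(ChangeValue);
            uint8_t bytes[4];
            std::memcpy(bytes, &swapped, sizeof(bytes));
            out.insert(out.end(), bytes, bytes + 4);
            // Steps (8)-(9): start the next accumulator with the spilled low bits.
            int LeftMove = 32 - RightMove;
            ChangeValue = (LeftMove == 32) ? 0u : (Value << LeftMove);
            LeftBits = LeftMove;
        }
    }
    // A partially filled final ChangeValue would also need flushing; omitted in this sketch.
}
```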
Step S50, after a computing thread has finished compressing its part of a seismic trace, that thread blocks and the other computing threads are waited on to finish compressing the rest of the data trace; the compressed seismic data are then placed in the compressed seismic data buffer for the main thread to acquire.
After one seismic trace has been compressed, the next trace can be compressed only after the compressed data have been output, so each computing thread must block and stop running in between. Referring to fig. 8, this specifically includes sub-steps S501 to S506:
In step S501, a seismic data trace is prepared for compression. The interface corresponding to this step is called by the application program that uses the compression function.
In step S502, an element with a value of 1 is put into the bounded queue QS corresponding to each computing thread, so as to start each blocked computing thread.
Step S503, taking as many elements from the synchronous concurrent bounded queue Q as there are computing threads; if Q does not yet contain that many elements, the thread executing this step blocks, waiting for each computing thread to finish its compression task on the current data trace; otherwise go to step S504.
In step S504, the compression task of every computing thread on the current data trace has been completed, and the data in the compressed buffer are valid and can be used by the calling application.
Step S505, if there are more seismic data traces to be compressed, return to step S501; otherwise the compression task for the data volume is finished and step S506 is performed.
In step S506, an element with a value of -1 is put into the running bounded queue QS corresponding to each computing thread, and the computing threads are then waited on to finish (see the sketch after this list).
So far, the seismic data compression process ends.
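A sketch of the main-thread loop of steps S501 to S506, reusing the BoundedQueue sketch shown earlier; all names are illustrative, and the single-trace placeholder stands in for the caller's trace supply loop.

```cpp
#include <memory>
#include <thread>
#include <vector>

void compress_all_traces(BoundedQueue& Q,
                         std::vector<std::unique_ptr<BoundedQueue>>& QS,
                         std::vector<std::thread>& workers) {
    bool more_traces = true;
    while (more_traces) {
        // S501: the calling application has placed the next trace in the original trace buffer.
        // S502: wake every blocked compute thread with an element of value 1.
        for (auto& qs : QS) qs->put(1);
        // S503: take one element per compute thread from Q; take() blocks until every thread
        // has finished compressing its segments of the current trace.
        for (std::size_t i = 0; i < QS.size(); ++i) Q.take();
        // S504: the compressed buffer is now valid and can be consumed by the caller.
        // S505: in a real caller this would test whether another trace is pending.
        more_traces = false;  // placeholder: this sketch compresses a single trace
    }
    // S506: signal termination with an element of value -1 and join the compute threads.
    for (auto& qs : QS) qs->put(-1);
    for (auto& t : workers) t.join();
}
```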
For ease of understanding, fig. 9 is also provided to explain some of the core content of the invention in more detail. Fig. 9 illustrates the operating states of the main thread, the computing threads, the synchronous concurrent bounded queue Q, the running bounded queues QS, the original seismic data trace buffer, and the compressed seismic data buffer while a seismic data trace is being compressed. When a trace needs to be compressed, each computing thread changes from the blocked state to the running state; in the running state each thread is responsible for part of the data segments and applies sample-value mapping, shift-related operations, byte swapping, and similar operations to all sample values in each segment. Once all data segments have been compressed, the computing threads block again and wait for the compression/decompression result data to be taken out.
In this embodiment the main thread is responsible for initializing resources and dividing the data segments each computing thread is responsible for, while the computing threads operate in a coordinated manner under the control of the synchronous concurrent bounded queue Q and the running bounded queues QS. During the compression operation the computing threads run independently without interfering with one another. When a seismic data trace needs to be compressed, each computing thread handles part of the data segments and performs original-value mapping, shifting, byte swapping, and similar operations on all sample values in each segment; once all data segments have been compressed, the main thread takes the compressed result data. When the synchronous queue is full, every computing thread has completed its compression operation and waits for the main thread to acquire the compressed data.
Using the method of this embodiment, the inventors conducted a number of experiments to verify the effectiveness of the present application; one of them is presented here.
Specifically, seismic compression software was developed from the seismic data compression method; its basic structure is shown in fig. 10. The software was deployed on a high-performance cluster and experiments were run on the cluster repeatedly. Each node in the cluster has two Intel(R) Xeon(R) E5-2660 v3 2.6 GHz CPUs, each processor contains 10 physical cores, so each node has 20 physical cores in total, together with 128 GB of memory and shared storage. The node operating system is Red Hat Enterprise Linux Server release 6.8 (Santiago). The test data are actual data from a processing project.
Fig. 11 shows seismic data produced by an actual processing project. It can be seen that when the number of concurrent jobs on the cluster is large, the cluster storage IO load is effectively relieved and seismic job efficiency improves, which makes the method very suitable for a conventional big-data/multi-project processing cluster environment. Integrated application of the seismic data compression method not only saves disk space but also improves IO efficiency. In experiments carried out by the applicant, one original data volume was 617 GB with 29 GB of headers; the compressed data volume was 410 GB with 29 GB of headers, a compression rate of up to 33.5% (data volume only).
According to this embodiment of the invention, all sample points in the seismic data are processed; while the compression-fidelity requirement is met, the size of the compressed data is controlled by segmenting the seismic data trace, compressing the number of bits per sample, and similar measures, and data loading and output efficiency is ensured by multithreading techniques. The seismic data compression method provided by the invention improves the data compression rate while guaranteeing numerical fidelity and data transfer rate, reduces the disk space needed during seismic data processing, and saves hardware. In addition, the byte lengths of the seismic data traces before and after compression remain uniform, so functions such as index sorting and trace reading and writing of the seismic data are guaranteed to be unaffected.
Example 2
The invention also provides a seismic data compression apparatus which, as shown in fig. 12, comprises a first initialization module 101, a fidelity calculation module 102, a first seismic data trace length calculation module 103, and a seismic data segment compression module 104, wherein:
The first initialization module 101 is configured to initialize resources, including at least: calculating the number of segments of the seismic data trace to be compressed that has entered the original seismic data trace buffer, allocating the compressed seismic data buffer, setting the number of compression bits per seismic sample, determining the number of computing threads participating in compression, initializing a synchronous concurrent bounded queue Q and a plurality of running bounded queues QS, and starting the computing threads. The seismic data comprise a plurality of seismic data traces, each trace comprising a trace header and a plurality of seismic samples; the running bounded queues correspond one-to-one to the computing threads, and each computing thread compresses a preset number of data segments of one seismic data trace;
The fidelity calculation module 102 is configured to calculate fidelity before and after compression of the seismic data trace according to the compression bit number of each seismic sample point until the fidelity meets a preset fidelity threshold;
a first seismic data trace length calculation module 103, configured to calculate a length of a compressed seismic data trace according to a compression bit number of each seismic sample point;
The seismic data segment compression module 104 is configured to use the synchronous concurrent bounded queue Q and the running bounded queues QS to control and coordinate the computing threads so that the numerical type of the seismic samples is converted into a form with fewer bits according to a preset first rule, thereby compressing the data segments of the seismic data. It is also configured to, after one seismic trace has been compressed, block the corresponding computing thread, wait for the other computing threads to finish compressing the rest of the data trace, and place the compressed seismic data in the compressed seismic data buffer for the main thread to acquire.
The specific working process of the seismic data compression apparatus may be referred to in the first embodiment, and will not be described herein.
Example 3
Based on the same inventive concept, this embodiment also discloses a seismic data decompression method. Those skilled in the art will understand that the steps of the decompression method are approximately the inverse of the compression operation, so for details reference can be made to Embodiment 1. The decompression method of this embodiment is slightly simpler than the compression process: it does not calculate the maximum value of each segment, and it does not need to concern itself with fidelity, the choice of the number of bits used to store sample values, and the like.
Specifically, as shown in fig. 13, the seismic data decompression method may include steps S10' to S40':
Step S10', the main thread initializes resources, including at least: obtaining the number of compression bits per seismic sample and the sample-count information, calculating the number of segments of the compressed seismic data trace, allocating a decompressed seismic data buffer, determining the number of computing threads participating in decompression, initializing a synchronous concurrent bounded queue and a plurality of running bounded queues, and starting the computing threads. The seismic data to be decompressed comprise a plurality of seismic data traces; each trace comprises a trace header and compressed sample data, the sample data comprise a plurality of compressed seismic samples, and each segment contains a plurality of compressed seismic samples. The running bounded queues correspond one-to-one to the computing threads, and each computing thread decompresses a preset number of data segments of one seismic data trace.
Obtaining the number of compression bits per seismic sample and the sample-count information, calculating the number of segments of the compressed seismic data trace, and allocating the decompressed seismic data buffer comprise: reading the number of compression bits and the sample count from the compressed seismic data trace, calculating the number of data segments of the compressed trace, and further calculating the number of seismic samples padded into the last segment.
Determining the number of computing threads participating in decompression comprises: taking the number of threads the current hardware environment can provide as the number of computing threads participating in decompression, or determining it from the number of segments of the seismic data trace; in either case the number of computing threads is not greater than the number of segments of the trace.
Initializing a synchronous concurrent bounded queue and a plurality of running bounded queues and starting the computing threads comprise:
initializing a synchronous concurrent bounded queue for coordinating the synchronized and independent running of the computing threads, with its maximum number of elements set to the obtained number of computing threads; initializing a running bounded queue for each computing thread, with the maximum number of elements in each running bounded queue being 1; and allocating the preset number of data segments to be decompressed to each computing thread and starting each computing thread.
Step S20', calculating the length of the compressed seismic data trace according to the number of compression bits per seismic sample; this length is then used when reading the compressed trace. Specifically, this comprises steps S201' to S204':
Step S201', the number of bytes each data segment of the seismic data trace occupies after compression is calculated according to a preset formula, namely:
(BIT*SN/8)+5
where BIT is the set number of compression bits per seismic sample and SN is the number of seismic samples contained in each data segment of the trace.
Step S202', the first byte count, i.e., the number of bytes occupied by the whole seismic data trace excluding the last data segment, is calculated from the number of bytes each compressed data segment occupies.
Step S203', the second byte count occupied by the last data segment of the seismic data trace is calculated.
Step S204', the first byte count and the second byte count are added to obtain the length of the compressed seismic data trace.
Step S30', using the synchronous concurrent bounded queue and the running bounded queues to control and coordinate the computing threads, the numerical type of the compressed seismic samples is converted back into floating-point values according to a preset second rule. Specifically, this comprises steps S301' to S308':
Step S301', obtaining the range of data segments the current computing thread is responsible for decompressing and the sequence number of the computing thread;
Step S302', obtaining the running bounded queue QS corresponding to the computing thread from the thread sequence number;
Step S303', taking an element from the running bounded queue QS belonging to the current computing thread; if there is no element in the queue, the computing thread blocks; otherwise,
Step S304', an element value is obtained; if the element value indicates that decompression of the whole seismic data trace is completed, the computing thread exits; otherwise,
Step S305', calculating, from the obtained data segment number, the byte position of the data segment to be decompressed in the compressed data trace buffer and its byte position in the decompressed data buffer;
Step S306', extracting the maximum absolute value of the seismic samples of the segment and the number of bits per sample value of the segment from the compressed data segment;
Step S307', dividing the maximum absolute value of the valid samples of the segment by the maximum integer representable with the segment's bit count to obtain the scale factor SCALE used before and after decompression;
Step S308', creating and initializing the variables used for decompression, sequentially converting each compressed seismic sample in the data segment to be decompressed into decompressed data according to a preset operation method, placing the decompressed data into the decompressed data buffer, putting an element into the synchronous concurrent bounded queue, and then returning to the step of taking an element from the running bounded queue belonging to the current computing thread and the steps that follow. In this step the preset operation method includes at least one of data shift splicing, byte swapping, and original-value mapping; a sketch of the per-segment decompression is given below.
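A sketch of steps S306' to S308' for one segment, assuming the 5 extra bytes described in the first embodiment are laid out as a 4-byte floating-point segment maximum followed by a 1-byte bit count, and that the packed N-bit values have already been unpacked into 32-bit integers; the layout, endianness, and names are assumptions of this sketch.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<float> decompress_segment(const uint8_t* segment_header,
                                      const std::vector<uint32_t>& unpacked_values) {
    // S306': extract the segment maximum absolute value MAX and the bit count BIT
    // (assumed layout: 4-byte float MAX, then 1-byte BIT; native endianness).
    float MAX = 0.0f;
    std::memcpy(&MAX, segment_header, sizeof(float));
    int BIT = segment_header[4];

    // S307': the decompression scale factor is MAX divided by V1 = 2^(BIT-1) - 1.
    const int64_t V1 = (int64_t{1} << (BIT - 1)) - 1;
    float SCALE = (MAX == 0.0f) ? 0.0f : MAX / static_cast<float>(V1);

    // S308': invert the mapping, fValue = (nValue - V1) * SCALE.
    std::vector<float> samples;
    samples.reserve(unpacked_values.size());
    for (uint32_t nValue : unpacked_values) {
        float fValue = (SCALE == 0.0f)
                           ? 0.0f
                           : static_cast<float>(static_cast<int64_t>(nValue) - V1) * SCALE;
        samples.push_back(fValue);
    }
    return samples;
}
```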
Step S40', when the decompression of one seismic trace is completed, the corresponding computing thread blocks and the other computing threads are waited on to finish decompressing the rest of the data trace; the decompressed seismic data are placed in the decompressed seismic data buffer for the main thread to acquire.
Example 4
Based on the same inventive concept, the embodiment also discloses a seismic data decompression apparatus, as shown in fig. 14, including a second initialization module 201, a second seismic data channel length calculation module 202, and a seismic data segment decompression module 203, wherein:
The second initialization module 201 is configured to initialize resources, including at least: calculating the number of segments of the seismic data trace to be decompressed that has entered the original seismic data trace buffer, allocating the decompressed seismic data buffer, obtaining the number of compression bits per seismic sample, determining the number of computing threads participating in decompression, initializing a synchronous concurrent bounded queue Q and a plurality of running bounded queues QS, and starting the computing threads. The seismic data comprise a plurality of seismic data traces, each trace comprising a trace header and a plurality of seismic samples; the running bounded queues correspond one-to-one to the computing threads, and each computing thread decompresses a preset number of data segments of one seismic data trace;
A second seismic data trace length calculation module 202 for calculating a length of the compressed seismic data trace based on the number of compression bits per seismic sample point, the length being used to read the compressed seismic data trace;
The seismic data segment decompression module 203 is configured to use the synchronous concurrent bounded queue Q and the running bounded queues QS to control and coordinate the computing threads so that the numerical type of the compressed seismic samples is converted back into floating-point form according to a preset second rule, thereby decompressing the data segments of the seismic data. It is also configured to, after one seismic trace has been decompressed, block the corresponding computing thread, wait for the other computing threads to finish decompressing the rest of the data trace, and place the decompressed seismic data in the decompressed seismic data buffer for the main thread to acquire.
The specific working process of the seismic data decompression apparatus may be found in the third embodiment and is not described again here.
Correspondingly, the embodiment of the invention also discloses a computer readable storage medium, wherein the computer readable storage medium stores instructions which, when run on the terminal, cause the terminal to execute the seismic data compression method as in the first embodiment.
The embodiment of the invention also discloses another computer readable storage medium, wherein the computer readable storage medium stores instructions which, when run on the terminal, cause the terminal to execute the seismic data decompression method as in the third embodiment.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for the purpose of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications, and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising", as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".