CN110929854A - A data processing method, device and hardware accelerator - Google Patents

A data processing method, device and hardware accelerator

Info

Publication number
CN110929854A
CN110929854A · CN201811100198.5A · CN201811100198A · CN 110929854 A
Authority
CN
China
Prior art keywords
matrix
hardware accelerator
zero elements
index information
column
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811100198.5A
Other languages
Chinese (zh)
Other versions
CN110929854B (en)
Inventor
刘保庆
郑淼
张一栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201811100198.5A (granted as CN110929854B)
Publication of CN110929854A
Application granted
Publication of CN110929854B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese



The present application provides a data processing method, device, and hardware accelerator. In the method, a processor obtains a first matrix and a second matrix and, according to the non-zero elements in the second matrix and the specification of the hardware accelerator, obtains at least one third matrix and at least one piece of index information. For each third matrix, the hardware accelerator uses the corresponding index information to obtain a fourth matrix from the corresponding l rows of the first matrix, obtains a fifth matrix from the fourth matrix and the third matrix, and obtains a target result from the at least one fifth matrix. Because the processor removes some or all of the zero elements from the n columns of the second matrix to form the third matrix, the number of zero elements that participate in the operation is smaller than the number of zero elements in those n columns, and the fourth matrix obtained via the index information has fewer elements. Reducing the number of zero elements taking part in the computation reduces the total amount of computation of the hardware accelerator and thereby improves its computational efficiency.


Description

Data processing method, device, and hardware accelerator
Technical Field
The present application relates to the field of data processing, and in particular, to a data processing method and apparatus, and a hardware accelerator.
Background
A neural network abstracts the neuron network of the human brain from the perspective of information processing, establishes a simple model, and forms different networks according to different connection patterns. In recent years, neural networks have developed rapidly and are widely used in many fields, such as image recognition, speech recognition, natural language processing, weather forecasting, gene expression, and content recommendation.
The operations in a neural network are mainly those of the convolutional (Conv) layers and the fully connected (FC) layers. These layers account for more than 90% of the computation and computation time of the whole network, so accelerating them is the key to improving the performance of the neural network. Most of the operations in the convolutional and fully connected layers can be reduced to matrix-vector or matrix-matrix products. Because a large number of parameters are involved but the specification of the hardware accelerator executing the operations is limited, the accelerator cannot compute the data matrix and the parameter matrix directly; both must first be processed. Figures 1a and 1b give an example. Fig. 1a shows an accelerator whose specification is the multiplication of an 8 × 8 matrix by an 8 × 8 matrix. Fig. 1b shows the multiplication of a parameter matrix A (32 × 32) by a data matrix B (32 × 32). To perform the operation of fig. 1b on the accelerator of fig. 1a, both matrices are loaded into the accelerator and each is divided into 16 blocks of 8 × 8. Every row of blocks of A must then be multiply-added with every column of blocks of B, so the amount of computation is large and grows as A and B grow, which makes the computational efficiency of the hardware accelerator relatively low.
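The tiling described in the background can be sketched in plain Python (a hypothetical illustration, not the patent's implementation): a primitive that only multiplies fixed-size blocks is invoked once per block triple, so a 32 × 32 product on an 8 × 8 accelerator takes 4 × 4 × 4 = 64 block products.

```python
def matmul_naive(A, B):
    """Reference dense product of two square matrices (lists of lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def block(M, r, c, s):
    """Extract the s x s block of M whose top-left corner is (r*s, c*s)."""
    return [row[c * s:(c + 1) * s] for row in M[r * s:(r + 1) * s]]

def matmul_tiled(A, B, s=8):
    """Compute A @ B using only s x s block products (the accelerator op).

    Assumes the matrix size is a multiple of s, as in the 32x32 example.
    """
    n = len(A)
    nb = n // s
    C = [[0] * n for _ in range(n)]
    for bi in range(nb):
        for bj in range(nb):
            for bk in range(nb):
                a = block(A, bi, bk, s)   # 8x8 tile of the parameter matrix
                b = block(B, bk, bj, s)   # 8x8 tile of the data matrix
                p = matmul_naive(a, b)    # what the 8x8 accelerator would do
                for i in range(s):
                    for j in range(s):
                        C[bi * s + i][bj * s + j] += p[i][j]
    return C
```

Every row of blocks of A meets every column of blocks of B, which is why the block count (and the computation) grows cubically with the matrix size.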
Disclosure of Invention
The application provides a data processing method, a data processing device and a hardware accelerator, which are used for improving the operation efficiency of the hardware accelerator.
In a first aspect, the present application provides a data processing method applicable to a data processing apparatus comprising a processor and a hardware accelerator. The processor acquires a first matrix and a second matrix and, according to the non-zero elements in the second matrix and the specification of the hardware accelerator, obtains at least one third matrix and at least one corresponding piece of index information. The hardware accelerator acquires the index information produced by the processor; for each third matrix of the at least one third matrix, the hardware accelerator acquires a fourth matrix from the corresponding l rows of the first matrix according to the index information corresponding to that third matrix, obtains a fifth matrix from the fourth matrix and the third matrix, and obtains a target result from the at least one fifth matrix, the target result being an L × N matrix. Both the first and second matrices contain non-zero elements; the first matrix is an L × M matrix, the second matrix is an M × N matrix, and L, M, and N are positive integers. The specification of the hardware accelerator indicates that it can process the product of an l × m matrix and an m × n matrix. Each third matrix is a (t × m) × n matrix that contains every non-zero element of a different group of n columns of the second matrix and omits some or all of the zero elements in those columns; the index information indicates the position information of the non-zero elements in those n columns. Here l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer; the fourth matrix is an l × m matrix.
In this scheme, the processor removes some or all of the zero elements from the n columns of the second matrix to obtain the third matrix, so the number of zero elements from those columns that participate in the accelerator's computation is reduced. Deleting the rows of the n columns that contain only zero elements yields the third matrix, and the fourth matrix obtained via the index information has correspondingly fewer elements; because zero elements have no influence on the operation result, the total amount of computation can be reduced without changing the outcome.
In one possible implementation, the processor converts the second matrix into a third matrix on which the hardware accelerator can operate. Determining the third matrix amounts to determining the value of t, which satisfies (t − 1) × m < p ≤ t × m, where p is the number of non-zero elements in the column of the second matrix that has the most non-zero elements. In this way the third matrix contains all the non-zero elements of the second matrix; since the non-zero elements largely determine the operation result and the zero elements do not affect it, the accuracy of the matrix operation performed by the hardware accelerator is preserved. Moreover, with t determined this way, a third matrix can be obtained from a second matrix of any sparsity rate.
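The condition on t is simply a ceiling division; a minimal sketch (the function name is illustrative, not from the patent):

```python
import math

def third_matrix_t(p, m):
    """Smallest t with (t - 1) * m < p <= t * m, i.e. t = ceil(p / m).

    p is the number of non-zero elements in the densest column of the
    n-column slice; m is the accelerator's inner dimension. Assumes p >= 1.
    """
    t = math.ceil(p / m)
    # sanity check: t satisfies the patent's stated condition
    assert (t - 1) * m < p <= t * m
    return t
```

For example, with m = 8, a densest column of 6 non-zero elements gives t = 1 (an 8 × n third matrix), while 9 non-zero elements force t = 2 (a 16 × n third matrix).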
Further, when the second matrix is sparsified, in order to improve the computing performance of the hardware accelerator, p of the second matrix may satisfy p ≤ M − m.
In one possible implementation, the index information includes row addresses and column addresses, one column address corresponding to m row addresses. The hardware accelerator may select m columns of elements from the corresponding l rows of the first matrix according to the m row addresses corresponding to one column address, obtaining an l × m fourth matrix; the m row addresses correspond one-to-one to the column addresses of the m selected columns. The number of column addresses in the index information equals the number of fourth matrices obtained.
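One way to picture the gather step is the sketch below, under the simplifying assumption that the index information for one column address is just the list of kept row indices (the patent's actual format is richer; all names here are illustrative):

```python
def gather_fourth_matrix(first_rows, row_addresses):
    """Gather an l x m fourth matrix from l rows of the first matrix.

    first_rows:    an l x M slab (l rows of the first matrix, lists).
    row_addresses: the m row indices of the second matrix kept in the
                   third matrix for one column address.

    Column k of the first matrix multiplies row k of the second matrix,
    so keeping rows `row_addresses` of the second matrix means gathering
    the same-numbered columns of the first.
    """
    return [[row[a] for a in row_addresses] for row in first_rows]
```

For instance, if only rows 0 and 2 of the second matrix survive compaction, only columns 0 and 2 of the first matrix are needed, which is why the fourth matrix is smaller than the original l rows.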
When the processor determines the third matrix from the second matrix, there are two cases: in the first, t = 1; in the second, t ≥ 2.
For the first case, the hardware accelerator may directly perform a multiply-add operation on the fourth matrix and the third matrix to obtain a fifth matrix. For the second case, the hardware accelerator may first divide the third matrix into t m × n matrices, and then perform multiply-add operations on the fourth matrix and the t m × n matrices, respectively, to obtain a fifth matrix. In this way, the calculation process of the matrix can be simplified.
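For the second case, the slice-and-accumulate step can be sketched as follows, with plain Python standing in for the accelerator's l × m by m × n primitive (names are illustrative, not from the patent):

```python
def matmul(A, B):
    """Dense product of an a x b matrix and a b x c matrix (lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(X, Y):
    """Elementwise sum of two equal-shape matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def multiply_in_m_slices(fourth, third, m):
    """fourth: l x (t*m); third: (t*m) x n, t >= 1.

    Split the third matrix into t slices of m rows, multiply each slice
    against the matching m columns of the fourth matrix, and accumulate
    the partial products -- mimicking an accelerator whose primitive is
    an l x m by m x n product.
    """
    t = len(third) // m
    acc = [[0] * len(third[0]) for _ in range(len(fourth))]
    for s in range(t):
        f = [row[s * m:(s + 1) * m] for row in fourth]   # l x m slice
        g = third[s * m:(s + 1) * m]                     # m x n slice
        acc = mat_add(acc, matmul(f, g))                 # partial product
    return acc
```

Because matrix multiplication distributes over this row/column blocking, the accumulated result equals the full product of the fourth and third matrices.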
In a second aspect, the present application provides a data processing method applicable to a hardware accelerator. The hardware accelerator acquires a first matrix, a third matrix, and index information, the third matrix and index information being obtained by the processor according to the non-zero elements in a second matrix and the specification of the hardware accelerator. Both the first and second matrices contain non-zero elements; the first matrix is an L × M matrix and the second matrix is an M × N matrix. The specification of the hardware accelerator indicates that it can process the product of an l × m matrix and an m × n matrix. The index information indicates the position information of the non-zero elements in different groups of n columns of the second matrix; each third matrix of the at least one third matrix is a (t × m) × n matrix that contains every non-zero element of a different group of n columns of the second matrix and omits some or all of the zero elements in those columns. L, M, and N are positive integers; l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer. The hardware accelerator acquires a fourth matrix, an l × m matrix, from the corresponding l rows of the first matrix according to the index information; obtains a fifth matrix from the fourth matrix and the third matrix; and obtains a target result, an L × N matrix, from the at least one fifth matrix.
In a third aspect, the present application provides an apparatus comprising a processor and a hardware accelerator, and optionally a memory for storing instructions. The processor is configured to execute the instructions stored in the memory and to control the hardware accelerator to perform the acceleration task. When executing those instructions, the processor performs the processor-side method of the first aspect or any implementation of the first aspect to obtain at least one third matrix and at least one corresponding piece of index information, and the hardware accelerator in the apparatus then performs the accelerator-side method of the first aspect or any implementation of the first aspect.
In a fourth aspect, the present application provides an apparatus for implementing any one of the above first aspect or the first aspect, including corresponding functional modules, respectively for implementing the steps in the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In a possible implementation manner, the apparatus includes a processing module and an operation module, and these modules may perform corresponding functions in the method example of the first aspect, which is specifically referred to the detailed description in the method example, and are not described herein again.
In a fifth aspect, the present application provides a hardware accelerator comprising an access circuit, a selection circuit and an arithmetic circuit, the hardware accelerator being operable to perform any of the methods of the second or third aspects via the access circuit, the selection circuit and the arithmetic circuit under control of an external processor.
In a sixth aspect, the present application provides a hardware accelerator for implementing any one of the above second aspects or second aspects, including corresponding functional circuits for implementing the steps in the above methods, respectively. The functions may be implemented by hardware. The hardware includes one or more circuits corresponding to the functions described above.
In a possible design, the structure of the hardware accelerator includes an access circuit, an operation circuit, and a storage circuit, and may perform corresponding functions in any method example of the second aspect or the second aspect, for specific reference, detailed descriptions in the method example are given, and details are not described here.
In a seventh aspect, the present application provides a computer storage medium having instructions stored therein, which when run on a computer, cause the processor to perform the method performed by the processor in the first aspect, any possible implementation manner of the first aspect, the second aspect, or any possible implementation manner of the second aspect, to obtain at least one third matrix and corresponding at least one index information. And then, the hardware accelerator is enabled to execute the method executed by the hardware accelerator in the first aspect, any possible implementation manner of the first aspect, the second aspect, or any possible implementation manner of the second aspect according to the at least one third matrix and the corresponding at least one index information.
In an eighth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation of the first aspect, or cause the computer to perform the method of the second aspect or any possible implementation of the second aspect.
In a ninth aspect, the present application provides a chip system comprising a processor and a hardware accelerator, and optionally a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory, so that the device in which the chip system is installed executes the processor-side method of the first aspect, any possible implementation of the first aspect, the second aspect, or any possible implementation of the second aspect, obtaining at least one third matrix and at least one corresponding piece of index information. The hardware accelerator then executes the accelerator-side method of those aspects according to the at least one third matrix and the corresponding at least one piece of index information.
Drawings
FIG. 1a is a diagram illustrating the specification of a hardware accelerator according to the prior art;
FIG. 1b is a diagram illustrating a product of a parameter matrix and a data matrix in the prior art;
FIG. 2a is a schematic structural diagram of a data processing apparatus provided in the present application;
fig. 2b is a schematic diagram of a software and hardware collaboration framework of a data processing apparatus provided in the present application;
FIG. 3 is a schematic flow chart of a data processing method provided in the present application;
fig. 4a is a schematic structural diagram of a second matrix provided in the present application;
fig. 4b is a schematic structural diagram of a first matrix provided in the present application;
FIG. 4c is a schematic flow chart of another data processing method provided herein;
fig. 5a is a schematic structural diagram of a second matrix block provided in the present application;
fig. 5b is a schematic structural diagram of a third matrix provided in the present application;
FIG. 5c is a diagram illustrating an index information structure provided in the present application;
fig. 6a is a schematic structural diagram of a first matrix block provided in the present application;
fig. 6b is a schematic structural diagram of a fourth matrix provided in the present application;
fig. 6c is a schematic structural diagram of multiplication of a fourth matrix and a third matrix provided in the present application;
fig. 7a is a schematic structural diagram of another second matrix block provided in the present application;
FIG. 7b is a schematic structural diagram of another third matrix provided herein;
FIG. 7c is a diagram illustrating another structure of index information provided herein;
fig. 8 is a schematic structural diagram of an apparatus provided in the present application.
Detailed Description
Fig. 2a schematically shows a structural diagram of a data processing apparatus provided in the present application. As shown in fig. 2a, the apparatus includes a processing system 10, a hardware accelerator 20, a data bus 30, and a control bus 40. Processing system 10 and hardware accelerator 20 may communicate data via data bus 30 and control information via control bus 40. The processing system 10 is the controlling component of the overall system and includes a processor 111; optionally, the processing system 10 also includes a memory 112. The processor 111 is used to run the software-side code and load acceleration tasks (e.g., matrices or vectors) onto the hardware accelerator 20. Optionally, hardware accelerator 20 may include one or more intellectual property (IP) cores, and hardware accelerator 20 may control the operating state and data reading of the one or more IP cores. The functions of an IP core include, for example, functional blocks that are commonly used in digital circuits but are relatively complex, such as a synchronous dynamic random access memory (SDRAM) controller or a peripheral component interconnect (PCI) interface, designed as circuits with modifiable parameters. The processor 111 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor 111 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 112 is used for storing data; under the control of the processor 111, data can be read from or written to the memory 112. The memory 112 may be a double data rate synchronous dynamic random access memory (DDR SDRAM), a random-access memory (RAM), or a non-volatile memory such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 112 may further include a combination of the above memories.
The hardware accelerator 20, the hardware acceleration component of the whole system, can implement a dedicated acceleration IP core for a specific computation task; the IP core on the hardware accelerator then accelerates the algorithm. For example, a hardware accelerator applied in a neural network may accelerate the neural network algorithm. Hardware accelerator 20 and processing system 10 may independently execute tasks in parallel after data interaction is complete. The hardware accelerator 20 may be a graphics processing unit (GPU), an FPGA, or an application-specific integrated circuit (ASIC).
A data bus 30 is used for data transfer between the processing system 10 and the hardware accelerator 20. The data bus 30 may adopt the advanced extensible interface stream (AXI-Stream) protocol or the PCI Express (PCI-E) bus protocol; AXI is a bus protocol, and AXI-Stream is a high-performance transmission protocol that allows unlimited data burst transmission.
A control bus 40 is used for the transmission of control signals between the processing system 10 and the hardware accelerator 20. The control bus 40 can adopt the AXI-Lite protocol, a lightweight, address-mapped single-transfer protocol suitable for transmitting the control signals of a hardware operation circuit.
Based on the data processing apparatus framework shown in fig. 2a, fig. 2b is a schematic diagram of the software and hardware collaboration framework of a specific data processing apparatus provided by the present application. As shown in fig. 2b, the framework includes a processing system 10a, a hardware accelerator 20a, a data bus 30a, and a control bus 40a; the processing system 10a and the hardware accelerator 20a may transmit data via the data bus 30a and control information via the control bus 40a.
The processing system 10a may be the same as the processing system 10 in fig. 2a. That is, the processor 111a in the processing system 10a may be the same as the processor 111 in fig. 2a, and the memory 112a may be the same as the memory 112 in fig. 2a. The data bus 30a may be the same as the data bus 30 in fig. 2a, and the control bus 40a may be the same as the control bus 40 in fig. 2a; details are not repeated here.
Hardware accelerator 20a may be the same as hardware accelerator 20 in fig. 2a. Hardware accelerator 20a may include the following IP cores: a storage circuit 211, an arithmetic circuit 212, an input buffer circuit 213, an output buffer circuit 214, a read circuit 215 (illustrated in fig. 2b as a direct memory access (DMA) engine), and a control interconnect circuit 216.
The storage circuit 211 includes a first data buffer 2111, a second data buffer 2112, an index information buffer 2113, and a selection circuit 2114 (fig. 2b takes the selection circuit as an example). The first data buffer 2111 is configured to buffer data loaded from the memory 112a into the hardware accelerator; for example, when the hardware accelerator 20a is applied to a neural network, the first data buffer 2111 may buffer the convolution kernel parameter matrix of each layer of the neural network model. The second data buffer 2112 is used for buffering data loaded from the memory 112a into the hardware accelerator; for example, when the hardware accelerator 20a is applied to a neural network, the second data buffer 2112 may buffer the feature map data matrix. The index information buffer 2113 may be used to buffer the position information of the non-zero elements in the parameter matrix or the data matrix, such as the column addresses, the row addresses, and the correspondence between the column addresses and row addresses of the non-zero elements. The selection circuit 2114 is configured to select data at the corresponding positions according to the index information cached in the index information buffer 2113: if the index information was derived from the parameter matrix, it selects data at the corresponding positions from the data matrix; if the index information was derived from the data matrix, it selects data at the corresponding positions from the parameter matrix.
The arithmetic circuit 212 is the core component of the hardware accelerator; its functional units are solidified inside it. The arithmetic circuit 212 includes a multiplication unit 2121, an accumulation unit 2122, a vector calculation unit 2123, and an excitation function unit 2124. The multiplication unit 2121 is configured to multiply an input vector by the elements at corresponding positions of a matrix, or to multiply an input matrix by a matrix, to obtain intermediate values; the multiplication unit may be a matrix-vector or matrix-matrix multiplication unit. The accumulation unit 2122 is configured to accumulate the intermediate values calculated by the multiplication unit 2121 to obtain gate vector values; the excitation function unit 2124 is configured to apply the excitation function to the gate vector values obtained by the accumulation unit; and the vector calculation unit 2123 is configured to calculate each gate vector to obtain the final result. The input buffer 213 and output buffer 214 temporarily store input and output data in the hardware accelerator; the arithmetic circuit 212 may put its result into the output buffer after completing the calculation, and the output buffer 214 may be written back to the memory 112a once it is full.
The direct memory access engine DMA 215 is used for data transfer between hardware accelerator 20a and memory 112a. Optionally, the physical addresses at which data are stored in memory 112a may be sequential, which facilitates data transfer by DMA 215 in hardware accelerator 20a. Each circuit of the hardware accelerator may be equipped with one DMA to enable parallel data reads. The control interconnect circuit 216 is used for controlling the interconnection of the signal lines.
Either of the systems in fig. 2a and fig. 2b can be applied to a neural network acceleration system. An optional implementation is as follows: under the control of a processor in the processing system, the DMA is started via the control bus and the data bus; the DMA transfers the convolution kernel parameter data in the memory to the first data buffer 2111 in the storage circuit 211 of the hardware accelerator 20 and the input feature map data in the memory to the second data buffer 2112 of the storage circuit; multiplication, accumulation, excitation-function and other processing are then performed in the arithmetic circuit 212 to obtain a target result, which is transferred back to the memory through the DMA.
The data processing apparatus shown in fig. 2a or fig. 2b can be applied in devices such as a mobile phone, a camera, or a cloud server. The data processing apparatus in fig. 2a or 2b may also be part of a system on chip (SoC).
It is to be understood that, in the present application, "at least one" means one or more, "a plurality" means two or more.
Based on the architecture shown in fig. 2a or fig. 2b, the present application provides a data processing method. Fig. 3 is a schematic flow chart of a data processing method provided in the present application. In this embodiment, the processor may be theprocessor 111 in fig. 2a, or may be the processor 111a in fig. 2b, and the hardware accelerator may be the hardware accelerator 20 in fig. 2a, or may be the hardware accelerator 20a in fig. 2 b. As shown in fig. 3, the method comprises the steps of:
In step 301, a processor obtains a first matrix and a second matrix.
Wherein the first matrix and the second matrix both comprise non-zero elements, the first matrix is a matrix of L M, the second matrix is a matrix of M N, and L, M and N are both positive integers.
In one implementation, L and M of the first matrix are equal and M and N of the second matrix are equal.
In yet another implementation, L and M of the first matrix are equal and M and N of the second matrix are not equal.
In yet another implementation, L and M of the first matrix are not equal, and M and N of the second matrix are equal.
In yet another implementation, L and M of the first matrix are not equal, and M and N of the second matrix are not equal.
In step 302, the processor obtains at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and the specification of the hardware accelerator.
The specification of the hardware accelerator is used for indicating the hardware accelerator to process the product operation of a matrix of l x m and a matrix of m x n, the third matrix is a matrix of (t x m) x n, each third matrix in the at least one third matrix respectively comprises each non-zero element in different n columns in the second matrix and does not comprise part or all zero elements in the n columns of the second matrix, the index information is used for indicating the position information of the non-zero elements in the n columns of the second matrix, and one third matrix corresponds to one index information; l is a positive integer not greater than L, M is a positive integer not greater than M, N is a positive integer not greater than N, and t is a positive integer.
For example, if 4 third matrices are obtained, each third matrix corresponds to n columns of the second matrix, and the n columns corresponding to any two of the 4 third matrices are different. For instance, if the second matrix has 32 columns and each third matrix covers 8 of them, the 4 third matrices may cover columns 1 to 8, columns 9 to 16, columns 17 to 24, and columns 25 to 32 of the second matrix, respectively. Here first, second, third, and fourth do not indicate an order; they merely distinguish the third matrices.
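The column partitioning in this example can be sketched as follows (illustrative helper, not from the patent):

```python
def column_slabs(matrix, n):
    """Split an M x N matrix (list of rows) into consecutive M x n column
    slices. With N = 32 and n = 8 this yields the four slabs described
    above (columns 1-8, 9-16, 17-24, 25-32); the last slab may be
    narrower if n does not divide N.
    """
    N = len(matrix[0])
    return [[row[j:j + n] for row in matrix] for j in range(0, N, n)]
```

Each slab is then compacted independently into its own third matrix and index information.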
Optionally, the sizes of l, m and n in the specification of the hardware accelerator are all the same, or any two of them are the same, or all three are different.
In one possible implementation, the processor determines that the value t of the third matrix satisfies the condition (t-1) × m < p ≤ t × m, where p is the number of non-zero elements in the column with the most non-zero elements among the different n columns in the second matrix. It can also be understood that the processor determines the number of rows of the third matrix according to the value t, and then forms the third matrix according to the non-zero elements in the n columns of the second matrix block and the determined numbers of rows and columns of the third matrix. Because the number of rows of the third matrix is determined by the number p of non-zero elements in the column with the most non-zero elements among the different n columns, all non-zero elements in those n columns of the second matrix can be included in the third matrix. Since it is the non-zero elements that influence the operation result, a third matrix containing all of them allows the operation result to be obtained accurately.
For example, if the number of rows of the determined third matrix is 8, and the number of non-zero elements in a certain column of different n columns in the second matrix is 6, the column may be filled with two zero elements, where the positions of the two zero elements in the column may be in the first two rows of the column, may be in the last two rows of the column, and may also be in any two positions in between, which is not limited in this application.
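The packing of step 302 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name and the use of -1 as a stand-in for the "x" padding marker are assumptions.

```python
import numpy as np

def pack_columns(b_cols, m):
    """Compact the non-zero elements of each of the n columns of a
    second-matrix block into a (t*m) x n third matrix, padding short
    columns with zero elements, and record the original row address of
    every slot as index information (-1 marks a padded slot)."""
    rows_total, n = b_cols.shape
    p = max(int(np.count_nonzero(b_cols[:, j])) for j in range(n))
    t = max(1, -(-p // m))                 # smallest t with (t-1)*m < p <= t*m
    third = np.zeros((t * m, n), dtype=b_cols.dtype)
    index = np.full((t * m, n), -1, dtype=int)
    for j in range(n):
        r = np.nonzero(b_cols[:, j])[0]    # row addresses of non-zero elements
        third[:len(r), j] = b_cols[r, j]   # non-zero elements moved to the top
        index[:len(r), j] = r              # remaining slots stay zero-padded
    return third, index, t
```

With a 32-row block whose fullest column holds at most 7 non-zero elements and m = 8, this yields t = 1 and an 8 × 8 third matrix, matching the first case described later.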
Alternatively, the first matrix and the second matrix may both be sparse matrices, both may be non-sparse matrices, or one of them may be a sparse matrix. A sparse matrix is a matrix in which the number of elements with a value of 0 is much greater than the number of non-zero elements (for example, the ratio of the number of zero-valued elements to the total number of elements in the matrix is greater than a preset threshold) and the non-zero elements are distributed irregularly; a sparse matrix helps to reduce the requirement on hardware input/output (I/O) bandwidth when data is loaded from the memory to the hardware accelerator. For example, take the second matrix as the sparse matrix. The zero elements in the sparse matrix have no influence on the operation result, yet when they participate in the operation they additionally increase the consumption of the hardware accelerator and reduce its operation performance. Therefore, when the second matrix is subjected to sparse processing, in order to improve the operation performance of the hardware accelerator, the number p of non-zero elements in the column with the most non-zero elements among the columns of the second matrix can satisfy the condition p ≤ M - m. Then, when the processor converts the second matrix into a plurality of third matrices, the rows containing only zero elements in different n columns of the second matrix can be deleted to obtain the third matrix, the number of zero elements participating in the operation can be reduced, and the calculation efficiency of the hardware accelerator is improved.
Further, according to the condition satisfied by t, any irregular sparse second matrix can be converted into a regular third matrix, so that the product operation of a second matrix with any sparsity rate can be realized by the hardware accelerator.
As described with reference to fig. 2b, after the processor 111a obtains at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and the specification of the hardware accelerator, the obtained at least one third matrix and corresponding at least one index information may be stored in the memory 112 a.
Step 303, for each third matrix in the at least one third matrix, the hardware accelerator acquires a fourth matrix from the corresponding l rows in the first matrix according to the index information corresponding to the third matrix, where the fourth matrix is a matrix of l × m.
Optionally, the index information includes row addresses and column addresses, and one column address corresponds to m row addresses. The index information may be in a form of a table, or may be in other forms, for example, a text in an extensible markup language (XML) format, and the format of the index information is not limited in the present application.
Specifically, the hardware accelerator selects m column elements from the corresponding l rows in the first matrix according to the m row addresses corresponding to one column address of the index information to obtain a fourth matrix, where the m row addresses correspond one-to-one to the column addresses of the m selected column elements. Accordingly, at least one fourth matrix may be obtained from one piece of index information.
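This selection can be sketched as a gather over one index column. The function name and the -1 padding marker are illustrative assumptions, not from the patent:

```python
import numpy as np

def gather_fourth(a_rows, row_addresses):
    """Build an l x m fourth matrix by selecting, for one column address
    of the index information, the m column elements of the l-row slice of
    the first matrix named by the m row addresses; padded slots (marked
    -1 here) simply become zero columns."""
    l, m = a_rows.shape[0], len(row_addresses)
    fourth = np.zeros((l, m), dtype=a_rows.dtype)
    for k, r in enumerate(row_addresses):
        if r >= 0:
            fourth[:, k] = a_rows[:, r]
    return fourth
```

Only the columns of the first matrix that meet non-zero elements of the second matrix are gathered, which is what lets the later multiply-add skip the removed zero elements.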
In a possible implementation, the n columns of the second matrix correspond to l rows of the first matrix. For example, if the first matrix comprises 32 rows, the second matrix comprises 32 columns, and each third matrix comprises 8 columns, then: the first third matrix comprises columns 1 to 8 of the second matrix, which may correspond to rows 1 to 8 of the first matrix; the second third matrix comprises columns 9 to 16 of the second matrix, which may correspond to rows 9 to 16 of the first matrix; the third third matrix comprises columns 17 to 24 of the second matrix, which may correspond to rows 17 to 24 of the first matrix; and the fourth third matrix comprises columns 25 to 32 of the second matrix, which may correspond to rows 25 to 32 of the first matrix. Here first, second, third and fourth do not indicate a sequential order but merely distinguish the third matrices.
Taking fig. 2b as an example, in one possible implementation, the hardware accelerator 20a, under the control of the processor 111a, loads the first matrix in the memory 112a into the second data cache 2112, loads the index information into the index information cache 2113, and loads the third matrix into the first data cache 2111 of the hardware accelerator 20a; the processor 111a controls the selector 2114 to obtain the fourth matrix from the first matrix cached in the hardware accelerator 20a according to the index information. In another possible implementation, the processor 111a divides the first matrix into one or more first matrix blocks (a first matrix block includes l rows), and the hardware accelerator 20a loads the first matrix blocks in the memory 112a into the second data cache 2112 over multiple loads, where one first matrix block may be loaded at a time, or multiple first matrix blocks may be loaded at a time; thus, the loading bandwidth can be improved. Specifically, the fourth matrix may then be obtained from the first matrix block loaded into the second data cache 2112.
In one possible implementation, the first matrix may include at least one first matrix block A_i. The first matrix block A_i comprises l rows, which can be adjacent l rows in the first matrix or l rows at intervals in the first matrix. The second matrix comprises at least one second matrix block B_j; the second matrix block B_j comprises n columns of the second matrix, which can be adjacent n columns in the second matrix or n columns at intervals, where i and j are integers. The present application provides three ways in which the processor determines the sizes of the first matrix block A_i and the second matrix block B_j, wherein the specification of the hardware accelerator instructs the hardware accelerator to process a multiplication operation of an l × m matrix and an m × n matrix.
In the first implementation, the sparse rate of the second matrix is determined, and the reciprocal of the sparse rate is taken as the highest gear T_1. According to the specification of the hardware accelerator and the highest gear T_1, the size of a second matrix block B_j of the second matrix can be determined as (m × T_1) rows and n columns. In this way the size of the second matrix block can be determined quickly; in particular, when the non-zero elements in the second matrix are distributed uniformly, the size of the second matrix block can be determined both quickly and accurately.
Accordingly, since the number of columns of a first matrix block A_i equals the number of rows of a second matrix block, the size of the first matrix block A_i can be determined as l rows and (m × T_1) columns.
In the second implementation, the second matrix is divided into a plurality of small matrix blocks according to the distribution of its non-zero elements (for example, into 9 small matrix blocks in a nine-square grid), and the sparsity of each small matrix block is determined. If the sparsity of most of the small matrix blocks is smaller than a threshold and the sparsity of a few of them is larger than the threshold, the reciprocal of the sparsity of the majority can be taken as the highest gear T_2; according to the specification of the hardware accelerator and the highest gear T_2, the size of a second matrix block B_j of the second matrix is determined as (m × T_2) rows and n columns. For a second matrix in which the non-zero elements are distributed non-uniformly, the size of the second matrix block can be determined accurately by this second implementation.
For example, if the sparsity of most of the second matrix is less than 25% and the local sparsity is greater than 25%, the highest gear T_2 may be set to 4.
Accordingly, the size of a first matrix block of the first matrix may be determined as l rows and (m × T_2) columns.
In the third implementation, the highest gear T_3 of the second matrix is determined according to historical empirical values, and according to the specification of the hardware accelerator and the highest gear T_3, the size of a second matrix block B_j of the second matrix can be determined as (m × T_3) rows and n columns.
Accordingly, the size of a first matrix block of the first matrix may be determined as l rows and (m × T_3) columns.
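As a sketch of the first way of determining block sizes (the function name and the rounding of a non-integral reciprocal are assumptions; the patent does not fix them), the gear and block shapes follow directly from the sparse rate and the accelerator specification:

```python
def gear_and_block_shapes(l, m, n, sparse_rate):
    """Highest gear T = reciprocal of the sparse rate; a second matrix
    block B_j is then (m*T) rows x n columns, and the matching first
    matrix block A_i is l rows x (m*T) columns."""
    T = max(1, round(1.0 / sparse_rate))
    return T, (m * T, n), (l, m * T)
```

For the worked example below (l = m = n = 8, sparse rate 25%), this gives T = 4, second matrix blocks of 32 × 8 and first matrix blocks of 8 × 32.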
Illustratively, take the first matrix as a 32 × 32 matrix, the second matrix also as a 32 × 32 matrix, the specification of the hardware accelerator as supporting the product operation of an 8 × 8 matrix and an 8 × 8 matrix, and the highest gear T of the second matrix as 4. Fig. 4a provides a schematic structural diagram of the second matrix for the present application. As shown in fig. 4a, the second matrix comprises four second matrix blocks B_j, namely B_0, B_1, B_2 and B_3; each second matrix block includes n = 8 columns, and the number of rows of each second matrix block is m × T = 8 × 4 = 32. Correspondingly, fig. 4b provides a schematic structural diagram of the first matrix. As shown in fig. 4b, the first matrix also comprises four first matrix blocks A_i, namely A_0, A_1, A_2 and A_3; each first matrix block has 8 rows and m × T = 8 × 4 = 32 columns.
Step 304, for each third matrix in the at least one third matrix, the hardware accelerator obtains a fifth matrix according to the fourth matrix and the third matrix.
With reference to fig. 2b, the processor 111a controls the hardware accelerator 20a to input the fourth matrix and the third matrix to the operation circuit 212; the multiplication unit 2121 of the operation circuit 212 calculates the product of the third matrix and the fourth matrix to obtain an intermediate value, and the intermediate value is input to the accumulation unit 2122 of the operation circuit 212 for an accumulation operation to obtain the fifth matrix.
Optionally, there are a plurality of fourth matrices: each fourth matrix is multiplied by one column of the third matrix, and the multiplication results of the input fourth matrices and the corresponding columns of the third matrix are accumulated to obtain the fifth matrix.
In a possible implementation manner, if t is an integer greater than or equal to 2, the hardware accelerator divides the third matrix into t matrices of m × n, and the hardware accelerator respectively performs multiply-add operations on the fourth matrices and the t m × n matrices to obtain the fifth matrix.
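A hedged sketch of this split-and-accumulate step (all names are illustrative; `fourths[k][j]` stands for the fourth matrix gathered for the k-th m × n sub-matrix and column j):

```python
import numpy as np

def multiply_add(fourths, third, m):
    """Split the (t*m) x n third matrix into t sub-matrices of m x n and
    accumulate, per column j, the products of each l x m fourth matrix
    with column j of the corresponding sub-matrix."""
    t, n = third.shape[0] // m, third.shape[1]
    l = fourths[0][0].shape[0]
    fifth = np.zeros((l, n), dtype=float)
    for k in range(t):                       # one pass per m x n sub-matrix
        sub = third[k * m:(k + 1) * m, :]
        for j in range(n):
            fifth[:, j] += fourths[k][j] @ sub[:, j]
    return fifth
```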
Step 305, the hardware accelerator obtains a target result according to the at least one fifth matrix, where the target result is an L × N matrix.
In one possible implementation manner, in combination with fig. 2b, the accumulation unit 2122 of the hardware accelerator 20a accumulates at least one fifth matrix to obtain a target result, where the target result is a result of a product operation of the first matrix and the second matrix.
Through the above steps 301 to 305: the third matrix is obtained by converting the non-zero elements in different n columns of the second matrix, and the l × m fourth matrix is reduced in size compared with the L × M first matrix while meeting the specification of the hardware accelerator. Since the third matrix is likewise generated according to the specification of the hardware accelerator, the hardware accelerator can obtain the fifth matrix by multiplying the l × m fourth matrix with the third matrix, and obtain the target result, an L × N matrix, according to the at least one fifth matrix; that is, the target result is the operation result of the first matrix and the second matrix. The product of the first matrix and the second matrix is thus realized through products of third and fourth matrices and computation over the fifth matrices. In this scheme, the processor removes part or all of the zero elements in different n columns of the second matrix to obtain the third matrix, so the number of zero elements participating in operations in the hardware accelerator is reduced compared with the number of zero elements in those n columns. Because zero elements have no influence on the operation result, deleting the rows containing only zero elements in different n columns of the second matrix to obtain the third matrix, and obtaining the fourth matrix according to the index information, reduces the total amount of computation and improves the operation efficiency of the hardware accelerator.
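The whole flow of steps 301 to 305 for one block pair can be checked against an ordinary dense product. The sketch below is illustrative (random test data, all names assumed): it packs the non-zero elements into a third matrix plus index information, gathers the fourth matrices via the row addresses, and accumulates the fifth matrix.

```python
import numpy as np

l = m = n = 8                                    # hardware accelerator specification
rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(l, 32)).astype(float)          # first-matrix block
mask = rng.random((32, n)) < 0.2                             # sparse second-matrix block
B = np.where(mask, rng.integers(1, 5, size=(32, n)), 0).astype(float)

# Step 302: pack the non-zero elements into the (t*m) x n third matrix + index.
p = max(int(np.count_nonzero(B[:, j])) for j in range(n))
t = max(1, -(-p // m))                           # (t-1)*m < p <= t*m
third = np.zeros((t * m, n))
index = np.full((t * m, n), -1, dtype=int)       # -1 marks a zero-padded slot
for j in range(n):
    r = np.nonzero(B[:, j])[0]
    third[:len(r), j], index[:len(r), j] = B[r, j], r

# Steps 303-305: gather fourth matrices by row address, multiply-add, accumulate.
fifth = np.zeros((l, n))
for k in range(t):
    for j in range(n):
        fourth = np.zeros((l, m))
        for s in range(m):
            r = index[k * m + s, j]
            if r >= 0:
                fourth[:, s] = A[:, r]
        fifth[:, j] += fourth @ third[k * m:(k + 1) * m, j]
```

Because only the non-zero positions of the second matrix contribute, the accumulated fifth matrix equals the dense product of the two blocks.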
The data processing method flow shown in fig. 3 can be applied to a data processing scenario of a neural network model. In such a scenario, the first matrix can be a data matrix or a parameter matrix: if the first matrix is the data matrix, the second matrix is the parameter matrix; if the first matrix is the parameter matrix, the second matrix is the data matrix. In neural network models, the parameter matrix is typically a sparse matrix.
The data processing method provided by the present application will be described below by taking as an example that the first matrix is the matrix shown in fig. 4b and the second matrix is the matrix shown in fig. 4a. In the following, the specification of the hardware accelerator is taken to support the product operation of an 8 × 8 matrix and an 8 × 8 matrix.
As shown in fig. 4c, the present application provides a schematic flow chart of another data processing method. In the method, the processor may be the processor in fig. 2a or fig. 2b, and the hardware accelerator may be the hardware accelerator in fig. 2a or fig. 2b. The method comprises the following steps:
Step 401, the processor obtains a first matrix and a second matrix.
This step is the same as step 301 in the embodiment shown in fig. 3, and reference may be made to the foregoing description, which is not repeated here.
Step 402, the processor divides the first matrix into four first matrix blocks and the second matrix into four second matrix blocks.
The first matrix is divided into the four first matrix blocks A_0, A_1, A_2 and A_3, and the second matrix is divided into the four second matrix blocks B_0, B_1, B_2 and B_3. For the process by which the processor partitions the first matrix and the second matrix, reference may be made to the implementation of step 303 in the embodiment shown in fig. 3, which is not repeated here.
In this application, the product operation of the first matrix and the second matrix requires multiplying each first matrix block of the first matrix with each second matrix block of the second matrix, that is, computing the 16 block products A_0*B_0, A_0*B_1, A_0*B_2, A_0*B_3, A_1*B_0, A_1*B_1, A_1*B_2, A_1*B_3, A_2*B_0, A_2*B_1, A_2*B_2, A_2*B_3, A_3*B_0, A_3*B_1, A_3*B_2 and A_3*B_3, where the product A_i*B_j forms the block of the target result at block-row i and block-column j. The multiplication process of a first matrix block A_i and a second matrix block B_j is the same in every case, and the number of non-zero elements in B_j determines the operation process of A_i*B_j. This embodiment is illustrated separately in two cases.
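Since each first matrix block A_i spans all columns of the first matrix and each second matrix block B_j spans all rows of the second matrix, the 16 block products assemble directly into the full product. A quick illustrative check with random stand-in data (not from the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(0, 3, size=(32, 32)).astype(float)
B = rng.integers(0, 3, size=(32, 32)).astype(float)
A_blocks = [A[8 * i:8 * (i + 1), :] for i in range(4)]   # A_0..A_3: 8 x 32 each
B_blocks = [B[:, 8 * j:8 * (j + 1)] for j in range(4)]   # B_0..B_3: 32 x 8 each
# Block (i, j) of the target result is the product A_i * B_j.
C = np.block([[A_blocks[i] @ B_blocks[j] for j in range(4)] for i in range(4)])
```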
The first case is illustrated with the second matrix block B_j shown in fig. 5a. As shown in the exemplary diagram of fig. 5a, the second matrix block B_j includes 32 rows and 8 columns. In this first case, after the above step 402, steps 403 to 410 are continued.
Step 403, the processor determines the number p of non-zero elements in the column of the second matrix block B_j with the most non-zero elements.
As in fig. 5a, the numbers of non-zero elements of the columns of the second matrix block B_j are respectively: p_0 = 6, p_1 = 4, p_2 = 7, p_3 = 6, p_4 = 0, p_5 = 4, p_6 = 5, p_7 = 3; that is, the number p of non-zero elements in the column of the second matrix block B_j with the most non-zero elements equals 7.
Step 404, the processor determines a third matrix according to the determined p and the specification of the hardware accelerator.
In one possible implementation, t of the third matrix satisfies the condition (t-1) × m < p ≤ t × m. Combining fig. 5a (the number of non-zero elements in the column of the second matrix block B_j with the most non-zero elements is 7) with the specification of the hardware accelerator, t satisfies (t-1) × 8 < 7 ≤ t × 8, so t = 1 may be determined; that is, the third matrix is an 8 × 8 matrix, as shown in fig. 5b. The third matrix shown in fig. 5b comprises all non-zero elements of the second matrix block B_j shown in fig. 5a, and each column is supplemented with the appropriate number of zero elements, identified by x. The non-zero elements of the third matrix need not be closely arranged; for example, the two zero elements supplementing column 0 may be in the last two rows of column 0, in the first two rows of column 0, or in any two rows of column 0, and this application does not limit the positions of the supplementary zero elements in any column.
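The value of t follows mechanically from p and m; a one-line helper (the name is illustrative) covering both cases of this embodiment:

```python
def gear_t(p, m):
    """Smallest positive integer t with (t-1)*m < p <= t*m."""
    return max(1, -(-p // m))      # ceiling division, at least 1
```

Here p = 7 with m = 8 gives t = 1 (the 8 × 8 third matrix of this first case), while p = 11 gives t = 2 (the (2 × 8) × 8 third matrix of the second case below).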
Step 405, the processor determines the index information of the second matrix block B_j.
This can also be understood as determining the position information of the non-zero elements of the second matrix block B_j, together with the position information of the zero padding required to form a matrix satisfying the specification of the hardware accelerator.
For the second matrix block B_j shown in fig. 5a, the position information of the non-zero elements is, as shown in fig. 5c: the row addresses of the non-zero elements in column 0 are 1/4/8/12/17/28, those in column 1 are 3/6/14/24, those in column 2 are 0/1/21/25/28/29/30, those in column 3 are 0/5/12/13/21/27, column 4 contains only zero elements, the row addresses of the non-zero elements in column 5 are 5/12/18/28, those in column 6 are 3/8/21/23/25, and those in column 7 are 11/16/30. Columns whose number p_i of non-zero elements is smaller than 8 (8 being the number of rows of the right-hand matrix of the hardware accelerator) may be filled with zero elements, which, as shown in fig. 5c, may also be identified by x; for example, the number of non-zero elements in column 0 is 6, so 2 zero elements need to be filled, while the number of non-zero elements in column 2 is 7, so only 1 zero element needs to be filled.
As explained in conjunction with fig. 2b above, the processor 111a may store both the determined third matrix and the index information in the memory 112a. Since the third matrix contains fewer zero elements than the second matrix block B_j, the amount of data stored in the memory can be reduced, saving memory space.
There is no fixed order between the above steps 404 and 405: step 404 may be executed first and then step 405; step 405 may be executed first and then step 404; or steps 404 and 405 may be performed simultaneously.
Step 406, the hardware accelerator loads the third matrix, the index information, and the corresponding first matrix block.
In one possible implementation, the first matrix block A_i, as shown in fig. 6a, comprises 32 rows and 8 columns. The hardware accelerator loads the first matrix block A_i, the third matrix and the index information corresponding to the third matrix, and does not need to load the entire first matrix and second matrix at one time, which improves the bandwidth of loading data from the memory to the hardware accelerator. It can also be understood that, in order to increase the bandwidth of data loading from the memory to the hardware accelerator as well as the computational performance of the hardware accelerator, in an alternative implementation the hardware accelerator may load one first matrix block A_i, one third matrix and the index information corresponding to that third matrix at a time. As explained in conjunction with fig. 2b above, in one possible implementation the processor 111a stores the third matrix and the index information in the memory 112a, and the first matrix is also stored in the memory 112a in units of first matrix blocks A_i. Under the control of the processor 111a, the hardware accelerator 20a loads the third matrix in the memory 112a into the first data cache 2111, loads the first matrix block corresponding to the third matrix into the second data cache 2112, and loads the index information into the index information cache 2113.
Step 407, the hardware accelerator selects m rows of elements from the first matrix block A_i according to the index information to obtain a fourth matrix.
The m row addresses corresponding to one column address of the index information indicate which rows of the first matrix block A_i need to participate in the operation with the second matrix block B_j, so one fourth matrix is obtained for each column address of the index information. Combining the index information in fig. 5c, eight 8 × 8 fourth matrices are obtained, which may be named a_i_1, a_i_2, ..., a_i_8. Taking the 8 row addresses corresponding to column address 0 in fig. 5c as an example, elements can be selected from the first matrix block A_i to obtain an 8 × 8 fourth matrix, as shown in fig. 6b. The positions marked as zero elements in the index information may be filled with elements selected arbitrarily from the first matrix block A_i, or directly filled with zero elements; the fourth matrix shown in fig. 6b takes the elements of the last two positions of each row of the first matrix block A_i. Obtaining the fourth matrix from the first matrix block A_i by means of the index information can also be understood as generating eight 8 × 8 fourth matrices from a matrix block of 32 rows and 8 columns according to the index information, so that the specification of the matrices on which the hardware accelerator needs to operate is reduced, and a matrix block of 32 rows and 8 columns can be operated on in a hardware accelerator of this specification.
Taking fig. 2b as an example, the selector 2114 of the hardware accelerator 20a selects m rows of elements from the first matrix block A_i cached in the second data cache 2112, according to the index information cached in the index information cache 2113, to obtain the fourth matrix.
Step 409, the hardware accelerator performs a multiply-add operation on the fourth matrix and the third matrix to obtain a fifth matrix.
In one possible implementation, as shown in fig. 6c, the fourth matrix a_i_1 is multiplied with the first column of the third matrix, a_i_2 is multiplied with the second column of the third matrix, and so on, until a_i_8 is multiplied with the 8th column of the third matrix; the intermediate values obtained from the multiplications are accumulated to obtain an 8 × 8 fifth matrix. The multiplication of a fourth matrix with one column of the third matrix can also be understood as the multiplication of a matrix and a vector. By this scheme, the hardware accelerator can complete the product operation of the third matrix and the fourth matrices at one time.
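In matrix terms, each fourth matrix contributes one column of the fifth matrix: column k is the matrix-vector product of a_i_k with column k of the third matrix, the accumulation happening inside each dot product. A hedged sketch with random stand-in data (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
third = rng.integers(0, 4, size=(8, 8)).astype(float)
fourths = [rng.integers(0, 4, size=(8, 8)).astype(float) for _ in range(8)]  # a_i_1..a_i_8
# Column k of the fifth matrix = a_i_k (matrix) times column k of the third matrix (vector).
fifth = np.stack([fourths[k] @ third[:, k] for k in range(8)], axis=1)
```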
With reference to fig. 2b, in one possible implementation, the third matrix and the 8 fourth matrices may be input at one time to the multiplication unit 2121 of the operation circuit 212 for the product operation to obtain intermediate values, and the intermediate values are input to the accumulation unit 2122 for the accumulation operation to obtain the fifth matrix.
Step 410, the hardware accelerator accumulates the obtained at least one fifth matrix to obtain a target result.
As described with reference to fig. 4a, fig. 4b and fig. 2b, the first matrix shown in fig. 4b includes 4 first matrix blocks and the second matrix shown in fig. 4a includes 4 second matrix blocks, so the hardware accelerator 20a may obtain 4 × 4 = 16 fifth matrices; the accumulation unit 2122 of the hardware accelerator 20a may accumulate the 16 fifth matrices into the target result, each fifth matrix A_i*B_j constituting the 8 × 8 block of the 32 × 32 target result at block-row i and block-column j.
Through the above steps 403 to 410, a hardware accelerator whose specification supports the multiplication of an 8 × 8 matrix with an 8 × 8 matrix realizes the multiplication operation of an 8 × 32 matrix and a 32 × 8 matrix at one time.
The second case is illustrated with the second matrix block B_j shown in fig. 7a. As shown in the exemplary diagram of fig. 7a, the second matrix block B_j includes 32 rows and 8 columns. The following describes in detail steps 403 to 410 as they continue after step 402 in this second case.
In step 403, the processor determines the numbers of non-zero elements of the columns of the second matrix block B_j in fig. 7a, which are respectively: p_0 = 8, p_1 = 8, p_2 = 11, p_3 = 6, p_4 = 1, p_5 = 6, p_6 = 5, p_7 = 3; then the number p of non-zero elements in the column of the second matrix block B_j with the most non-zero elements is 11.
In step 404, according to the condition that t of the third matrix satisfies (t-1) × m < 11 ≤ t × m, the processor may determine t = 2; that is, the third matrix is a (2 × 8) × 8 matrix, as shown in fig. 7b. The third matrix shown in fig. 7b comprises all non-zero elements of the second matrix block B_j shown in fig. 7a, and each column is supplemented with the appropriate number of zero elements, identified by x; for example, column 0 is supplemented with 8 zero elements, shown in the last 8 rows of column 0. Optionally, the non-zero elements of the third matrix need not be closely arranged; for example, the supplementary zero elements may be located in the last 8 rows of column 0, in the first 8 rows of column 0, or in any 8 rows of column 0, and this application does not limit the positions of the supplementary zero elements in any column.
In step 405 above, the processor determines the index information of the second matrix block B_j shown in fig. 7a. As shown in fig. 7c, the index information is divided into two parts. The first part is the position information of the non-zero elements of the first 8 columns: the row addresses of the non-zero elements in column 0 are 1/4/8/9/12/13/17/28, those in column 1 are 3/4/6/11/14/20/24/26, those in column 2 are 0/1/3/16/18/20/21/25, those in column 3 are 0/5/12/13/21/27, the row address of the non-zero element in column 4 is 15, the row addresses of the non-zero elements in column 5 are 5/11/12/18/24/28, those in column 6 are 3/8/21/23/25, and those in column 7 are 11/16/30. Columns whose number p_i of non-zero elements is smaller than m = 8 may be filled with zero elements, which may also be identified by x. For example, the number of non-zero elements in column 0 is 8, so no zero elements need to be supplemented, while the number of non-zero elements in column 3 is 6, so 2 zero elements need to be supplemented; the positions of these 2 zero elements in column 3 may be arbitrary, and fig. 7c takes the last two rows as an example.
The second part is the position information of the non-zero elements of the last 8 columns: the row address of the non-zero element in column 0 is 29, that is, column 0 has one non-zero element and the remaining 7 slots can be identified by zero elements; column 1 has no non-zero elements and can be identified entirely by zero elements; the row addresses of the non-zero elements in column 2 are 28/29/30, and the remaining 5 slots can be identified by zero elements; columns 3/4/5/6/7 likewise have no non-zero elements and can be identified entirely by zero elements.
In step 406, the hardware accelerator loads the third matrix shown in fig. 7b, the index information shown in fig. 7c, and the corresponding first matrix block. In this second case, the first matrix block may be the same as the first matrix block shown in fig. 6a, and the manner in which the hardware accelerator loads the third matrix, the index information and the corresponding first matrix block may be the same as described in step 406 of the first case, which is not repeated here.
In step 407, the hardware accelerator may, according to the index information shown in fig. 7c, cyclically select data from the first matrix block A_i twice. The first time, according to the row addresses corresponding to each of the first 8 columns (the first column 0 to column 7) of the index information shown in fig. 7c, 8 fourth matrices are obtained from the first matrix block, which may be named a_i_1_1, a_i_1_2, ..., a_i_1_8. The second time, according to the row addresses corresponding to the last 8 columns (the last column 0 to column 7) of the index information shown in fig. 7c, 8 fourth matrices are obtained from the first matrix block, which may be named a_i_2_1, a_i_2_2, ..., a_i_2_8. The manner of obtaining the fourth matrices both times is the same as the process of fig. 6b and is not repeated here.
Taking the above fig. 2b as an example, the selector 2114 in the hardware accelerator 20a may, twice, select m rows of elements from the first matrix block A_i cached in the second data cache 2112 according to the index information cached in the index information cache 2113 to obtain the fourth matrices.
After step 407 and before step 409, the second case further includes step 408, in which the hardware accelerator divides the third matrix into two 8 × 8 matrix blocks.
In the third matrix shown in fig. 7b, one 8 x 8 matrix block is located above the dotted line, and the other 8 x 8 matrix block is located below the dotted line. The index information corresponding to the first 8 columns in fig. 7c is the position information of the non-zero elements in the matrix block above the dotted line in fig. 7b, and the index information corresponding to the last 8 columns in fig. 7c is the position information of the non-zero elements in the matrix block below the dotted line in fig. 7b.
In the above step 409, the hardware accelerator performs multiply-add operations on the 16 fourth matrices obtained as shown in fig. 6b and the third matrix shown in fig. 7b to obtain a fifth matrix.
In conjunction with fig. 2b, the 16 fourth matrices may be input to the arithmetic circuit 212 of the hardware accelerator 20a in two passes. The first time, the third matrix and the fourth matrices a_i_1_1, a_i_1_2, ..., a_i_1_8 are input together into the multiplication unit 2121 of the arithmetic circuit 212 of the hardware accelerator 20a, where each fourth matrix is multiplied by one column of the third matrix, that is, a_i_1_1 is multiplied by the first column of the third matrix, a_i_1_2 by the second column, and so on, up to a_i_1_8 and the 8th column. The second time, the fourth matrices a_i_2_1, a_i_2_2, ..., a_i_2_8 are likewise input into the multiplication unit 2121 of the arithmetic circuit 212 of the hardware accelerator 20a for the product operation, that is, a_i_2_1 is multiplied by the first column of the third matrix, a_i_2_2 by the second column, and so on, up to a_i_2_8 and the 8th column. The two multiplication passes can refer to the process of fig. 6c and are not described again here; the intermediate values obtained from the results of the two passes are input to the accumulation unit 2122 for accumulation to obtain the fifth matrix. This process in case two can also be understood as the process in case one computed in a loop t times (t = 2 in case two).
Further, the hardware accelerator accumulates at least one fifth matrix obtained in case two to obtain a target result.
In case two, the hardware accelerator can complete the multiplication of an 8 x 32 matrix by a 32 x 8 matrix through two multiplications of an 8 x 8 matrix by an 8 x 8 matrix.
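Case two can be reproduced end to end with a small NumPy sketch. Dimensions follow the running example (l = m = n = 8, M = 32, t = 2); the convention of padding short columns with zero values at row address 0 is an assumption made for illustration, and the random data stand in for the patent's figures.

```python
import numpy as np

rng = np.random.default_rng(0)
l, m, n, M = 8, 8, 8, 32
t = 2  # each column of the second matrix has at most t*m = 16 non-zeros

A = rng.standard_normal((l, M))  # first matrix block, l x M
B = np.zeros((M, n))             # sparse second matrix block, M x n
for j in range(n):               # give each column at most t*m non-zeros
    rows = rng.choice(M, size=int(rng.integers(1, t * m + 1)), replace=False)
    B[rows, j] = rng.standard_normal(rows.size)

# Build the third matrix ((t*m) x n) and the index information by dropping
# zeros; columns with fewer than t*m non-zeros are padded with zero values
# whose row address is 0 (a zero value contributes nothing to the product).
C = np.zeros((t * m, n))
idx = np.zeros((t * m, n), dtype=int)
for j in range(n):
    rows = np.flatnonzero(B[:, j])
    C[:rows.size, j] = B[rows, j]
    idx[:rows.size, j] = rows

# The accelerator forms each output column from t gathered l x m "fourth
# matrices": column j of A @ B equals the sum over slices s of
# A[:, idx[s*m:(s+1)*m, j]] @ C[s*m:(s+1)*m, j].
out = np.zeros((l, n))
for j in range(n):
    for s in range(t):
        fourth = A[:, idx[s * m:(s + 1) * m, j]]       # gathered l x m block
        out[:, j] += fourth @ C[s * m:(s + 1) * m, j]  # m multiply-adds

assert np.allclose(out, A @ B)  # matches the dense product
```

The inner loop performs exactly the two 8 x 8 by 8 x 8 passes described above, skipping the zero elements the processor removed when it built the third matrix.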
In this application, the processor obtains at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and the specification of the hardware accelerator, which covers the two cases above. For convenience of explanation, the description takes as an example the second matrix shown in fig. 4a, the first matrix shown in fig. 4b, and a hardware accelerator whose specification supports the multiplication of an 8 x 8 matrix by an 8 x 8 matrix.
In this application, if the second matrix is a left matrix, the third matrix is a matrix of m × t × n, and the data processing method can refer to the contents of fig. 3 to 7c, which is not described herein again.
Based on the foregoing and the same concept, the present application provides an apparatus, as shown in fig. 2a, for performing the above data processing method. The apparatus in this example may execute the scheme correspondingly executed by the data processing apparatus in fig. 3, or the scheme correspondingly executed by the data processing apparatus in fig. 4c.
Optionally, the memory 112 may be further configured to store program instructions, and the processor 111 may call the program instructions stored in the memory 112 to perform one or more steps in the embodiments shown in the above schemes, or in alternative embodiments thereof, so that the apparatus implements the functions of the data processing apparatus in the above methods.
The processor 111 is configured to execute the instructions stored in the memory 112 and to control the hardware accelerator 20 to execute acceleration tasks. When the processor 111 executes the instructions stored in the memory 112, the processor 111 in the apparatus is configured to: obtain a first matrix and a second matrix, where the first matrix and the second matrix both include non-zero elements, the first matrix is a matrix of L x M, the second matrix is a matrix of M x N, and L, M and N are all positive integers; and obtain at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and the specification of the hardware accelerator, where the specification of the hardware accelerator indicates that the hardware accelerator can process the product operation of an l x m matrix and an m x n matrix, the third matrix is a matrix of (t x m) x n, each third matrix in the at least one third matrix respectively includes the non-zero elements in different n columns of the second matrix and does not include some or all of the zero elements in those n columns, and the index information indicates the position information of the non-zero elements in the different n columns of the second matrix; l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer;
the hardware accelerator 20 is configured to, for each third matrix in the at least one third matrix, obtain a fourth matrix from corresponding l rows in the first matrix according to index information corresponding to the third matrix, where the fourth matrix is a matrix of l × m; obtaining a fifth matrix according to the fourth matrix and the third matrix; and obtaining a target result according to the at least one fifth matrix, wherein the target result is a matrix of L x N.
In an alternative embodiment, t satisfies the following condition: (t-1) x m < p ≤ t x m, p being the number of non-zero elements in the column with the most non-zero elements among the different n columns in the second matrix.
In an alternative embodiment, p satisfies the following condition: p ≤ M - m.
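The two conditions above pin down t as the smallest number of m-element groups that can hold a column's non-zeros, i.e. t = ceil(p / m). A quick check (the count p = 11 is made up for illustration):

```python
import math

m = 8   # inner dimension the accelerator's specification supports
p = 11  # most non-zeros in any of the n columns (hypothetical count)

t = math.ceil(p / m)  # smallest t with (t-1)*m < p <= t*m
assert (t - 1) * m < p <= t * m

# p <= M - m ensures the compressed (t*m)-row third matrix is shorter than
# the M-row second matrix (when m divides M), so compression actually saves work.
M = 32
assert p <= M - m
```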
In an optional embodiment, the index information includes row addresses and column addresses, and one column address corresponds to m row addresses; the hardware accelerator 20 is configured to select m columns of elements from the corresponding l rows in the first matrix according to the m row addresses corresponding to one column address of the index information, to obtain a fourth matrix, where the m row addresses are in one-to-one correspondence with the m column addresses of the m columns of elements.
In an alternative embodiment, t is an integer greater than or equal to 2; the hardware accelerator 20 is configured to divide the third matrix into t matrices of m x n, and to perform multiply-add operations on the fourth matrix and the t matrices of m x n respectively to obtain a fifth matrix.
Based on the foregoing and similar considerations, the present application provides an apparatus 800 for performing the above-described data processing method. Fig. 8 schematically illustrates a structural diagram of the apparatus provided in the present application; as shown in fig. 8, the apparatus 800 includes a processing module 801, a selecting module 802, and an operation module 803. The selecting module 802 may be the selector 2114 in fig. 2b, and the operation module 803 may be the arithmetic circuit 212 in fig. 2b. The apparatus 800 in this example may be the data processing apparatus described above, and may execute the scheme correspondingly executed by the data processing apparatus in fig. 3, or the scheme correspondingly executed by the data processing apparatus in fig. 4c.
Aprocessing module 801, configured to obtain a first matrix and a second matrix, where the first matrix and the second matrix both include non-zero elements, the first matrix is an L × M matrix, the second matrix is an M × N matrix, and L, M and N are both positive integers; obtaining at least one third matrix and corresponding at least one index information according to the nonzero elements in the second matrix and the specification of the hardware accelerator, wherein the specification of the hardware accelerator is used for indicating the hardware accelerator to process the product operation of the matrix of l x m and the matrix of m x n, the third matrix is the matrix of (t x m) x n, each third matrix in the at least one third matrix respectively comprises each nonzero element in different n columns in the second matrix and does not comprise part or all zero elements in the n columns of the second matrix, and the index information is used for indicating the position information of the nonzero elements in different n columns in the second matrix; l is a positive integer not greater than L, M is a positive integer not greater than M, N is a positive integer not greater than N, and t is a positive integer;
a selecting module 802, configured to, for each third matrix in the at least one third matrix, obtain, according to the index information corresponding to the third matrix, a fourth matrix from the corresponding l rows in the first matrix, where the fourth matrix is a matrix of l*m;
an operation module 803, configured to obtain a fifth matrix according to the fourth matrix and the third matrix, and to obtain a target result according to at least one fifth matrix, where the target result is a matrix of L*N.
It should be understood that the above division of the units of the apparatus 800 is only a logical division; in an actual implementation they may be wholly or partially integrated into one physical entity, or may be physically separated. In this application, the processing module 801 shown in fig. 8 may be implemented by the processor 111 shown in fig. 2a, and the selecting module 802 and the operation module 803 may be implemented by the hardware accelerator 20 shown in fig. 2a. That is to say, in this embodiment of the application, the processing module 801 may execute the scheme executed by the processor 111 in fig. 2a, and the selecting module 802 and the operation module 803 may execute the scheme executed by the hardware accelerator 20 in fig. 2a; the remaining contents can be referred to above and are not described here again.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, or a combination thereof; when implemented using a software program, it may be wholly or partly implemented in the form of a computer program product. The computer program product includes one or more instructions. The procedures or functions according to the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, the instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, twisted pair) or wirelessly (e.g., infrared, radio, microwave). The computer storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, that integrates one or more available media. The medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape, a magneto-optical disk (MO)), an optical medium (e.g., an optical disc), or a semiconductor medium (e.g., a ROM, an EPROM, an EEPROM, a solid state disk (SSD)).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by instructions. These instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (20)

Translated from Chinese
1. A data processing method, applied to a data processing apparatus, the data processing apparatus comprising a processor and a hardware accelerator, the method comprising:
the processor obtaining a first matrix and a second matrix, wherein the first matrix and the second matrix both comprise non-zero elements, the first matrix is an L*M matrix, the second matrix is an M*N matrix, and L, M and N are all positive integers;
the processor obtaining at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and a specification of the hardware accelerator, wherein the specification of the hardware accelerator is used to indicate that the hardware accelerator processes a product operation of an l*m matrix and an m*n matrix, the third matrix is a (t*m)*n matrix, each third matrix in the at least one third matrix respectively comprises the non-zero elements in different n columns of the second matrix and does not comprise some or all of the zero elements in the n columns of the second matrix, and the index information is used to indicate position information of the non-zero elements in the n columns of the second matrix; l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer; and
for each third matrix in the at least one third matrix, the hardware accelerator obtaining, according to the index information corresponding to the third matrix, a fourth matrix from the corresponding l rows in the first matrix, wherein the fourth matrix is an l*m matrix; obtaining a fifth matrix according to the fourth matrix and the third matrix; and obtaining a target result according to at least one fifth matrix, wherein the target result is an L*N matrix.
2. The method according to claim 1, wherein t satisfies the following condition:
(t-1)*m < p ≤ t*m, wherein p is the number of non-zero elements in the column with the most non-zero elements among the n columns of the second matrix.
3. The method according to claim 2, wherein p satisfies the following condition:
p ≤ M-m.
4. The method according to any one of claims 1 to 3, wherein the index information comprises row addresses and column addresses, and one column address corresponds to m row addresses; and
the hardware accelerator obtaining, according to the index information, the fourth matrix from the corresponding l rows in the first matrix comprises:
the hardware accelerator selecting, according to the m row addresses corresponding to one column address of the index information, m columns of elements from the corresponding l rows in the first matrix to obtain the fourth matrix, wherein the m row addresses are in one-to-one correspondence with the m column addresses of the m columns of elements.
5. The method according to any one of claims 1 to 4, wherein t is an integer greater than or equal to 2; and
the hardware accelerator obtaining the fifth matrix according to the fourth matrix and the third matrix comprises:
the hardware accelerator dividing the third matrix into t m*n matrices; and
the hardware accelerator performing multiply-add operations on the fourth matrix and the t m*n matrices respectively to obtain the fifth matrix.
6. A data processing method, applied to a hardware accelerator, the method comprising:
the hardware accelerator obtaining a first matrix, a third matrix and index information, wherein the third matrix and the index information are obtained by a processor according to non-zero elements in a second matrix and a specification of the hardware accelerator, the first matrix and the second matrix both comprise non-zero elements, the first matrix is an L*M matrix, the second matrix is an M*N matrix, the specification of the hardware accelerator is used to indicate that the hardware accelerator processes a product operation of an l*m matrix and an m*n matrix, the index information is used to indicate position information of the non-zero elements in different n columns of the second matrix, the third matrix is a (t*m)*n matrix, and the third matrix comprises each non-zero element in the n columns of the second matrix and does not comprise some or all of the zero elements in the n columns of the second matrix, wherein M and N are both positive integers, l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer;
the hardware accelerator obtaining, according to the index information, a fourth matrix from the corresponding l rows in the first matrix, wherein the fourth matrix is an l*m matrix;
the hardware accelerator obtaining a fifth matrix according to the fourth matrix and the third matrix; and
the hardware accelerator obtaining a target result according to at least one fifth matrix, wherein the target result is an L*N matrix.
7. The method according to claim 6, wherein t satisfies the following condition:
(t-1)*m < p ≤ t*m, wherein p is the number of non-zero elements in the column with the most non-zero elements among the n columns of the second matrix.
8. The method according to claim 7, wherein p satisfies the following condition:
p ≤ M-m.
9. The method according to any one of claims 6 to 8, wherein the index information comprises row addresses and column addresses, and one column address corresponds to m row addresses; and
the hardware accelerator obtaining, according to the index information, the fourth matrix from the corresponding l rows in the first matrix comprises:
the hardware accelerator selecting, according to the m row addresses corresponding to one column address of the index information, m columns of elements from the corresponding l rows in the first matrix to obtain the fourth matrix, wherein the m row addresses are in one-to-one correspondence with the m column addresses of the m columns of elements.
10. The method according to any one of claims 6 to 9, wherein t is an integer greater than or equal to 2; and
the hardware accelerator obtaining the fifth matrix according to the fourth matrix and the third matrix comprises:
the hardware accelerator dividing the third matrix into t m*n matrices; and
the hardware accelerator performing multiply-add operations on the fourth matrix and the t m*n matrices respectively to obtain the fifth matrix.
11. An apparatus, comprising:
a processor, configured to obtain a first matrix and a second matrix, wherein the first matrix and the second matrix both comprise non-zero elements, the first matrix is an L*M matrix, the second matrix is an M*N matrix, and L, M and N are all positive integers; and to obtain at least one third matrix and corresponding at least one index information according to the non-zero elements in the second matrix and a specification of a hardware accelerator, wherein the specification of the hardware accelerator is used to indicate that the hardware accelerator processes a product operation of an l*m matrix and an m*n matrix, the third matrix is a (t*m)*n matrix, each third matrix in the at least one third matrix respectively comprises the non-zero elements in different n columns of the second matrix and does not comprise some or all of the zero elements in the n columns of the second matrix, and the index information is used to indicate position information of the non-zero elements in the n columns of the second matrix; l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer; and
the hardware accelerator, configured to, for each third matrix in the at least one third matrix, obtain, according to the index information corresponding to the third matrix, a fourth matrix from the corresponding l rows in the first matrix, wherein the fourth matrix is an l*m matrix; obtain a fifth matrix according to the fourth matrix and the third matrix; and obtain a target result according to at least one fifth matrix, wherein the target result is an L*N matrix.
12. The apparatus according to claim 11, wherein t satisfies the following condition:
(t-1)*m < p ≤ t*m, wherein p is the number of non-zero elements in the column with the most non-zero elements among the n columns of the second matrix.
13. The apparatus according to claim 12, wherein p satisfies the following condition:
p ≤ M-m.
14. The apparatus according to any one of claims 11 to 13, wherein the index information comprises row addresses and column addresses, and one column address corresponds to m row addresses; and
the hardware accelerator is specifically configured to:
select, according to the m row addresses corresponding to one column address of the index information, m columns of elements from the corresponding l rows in the first matrix to obtain the fourth matrix, wherein the m row addresses are in one-to-one correspondence with the m column addresses of the m columns of elements.
15. The apparatus according to any one of claims 11 to 14, wherein t is an integer greater than or equal to 2; and
the hardware accelerator is specifically configured to:
divide the third matrix into t m*n matrices, and perform multiply-add operations on the fourth matrix and the t m*n matrices respectively to obtain the fifth matrix.
16. A hardware accelerator, comprising:
an access circuit, configured to obtain a first matrix, a third matrix and index information, wherein the third matrix and the index information are obtained by a processor according to non-zero elements in a second matrix and a specification of the hardware accelerator, the first matrix and the second matrix both comprise non-zero elements, the first matrix is an L*M matrix, the second matrix is an M*N matrix, the specification of the hardware accelerator is used to indicate that the hardware accelerator processes a product operation of an l*m matrix and an m*n matrix, the index information is used to indicate position information of the non-zero elements in different n columns of the second matrix, the third matrix is a (t*m)*n matrix, and each third matrix in the at least one third matrix respectively comprises the non-zero elements in different n columns of the second matrix and does not comprise some or all of the zero elements in the n columns of the second matrix, wherein M and N are both positive integers, l is a positive integer not greater than L, m is a positive integer not greater than M, n is a positive integer not greater than N, and t is a positive integer;
a selection circuit, configured to obtain, according to the index information, a fourth matrix from the corresponding l rows in the first matrix, wherein the fourth matrix is an l*m matrix; and
an arithmetic circuit, configured to obtain a fifth matrix according to the fourth matrix and the third matrix, and to obtain a target result according to at least one fifth matrix, wherein the target result is an L*N matrix.
17. The hardware accelerator according to claim 16, wherein t satisfies the following condition:
(t-1)*m < p ≤ t*m, wherein p is the number of non-zero elements in the column with the most non-zero elements among the n columns of the second matrix.
18. The hardware accelerator according to claim 17, wherein p satisfies the following condition:
p ≤ M-m.
19. The hardware accelerator according to any one of claims 16 to 18, wherein the index information comprises row addresses and column addresses, and one column address corresponds to m row addresses; and
the selection circuit is specifically configured to:
select, according to the m row addresses corresponding to one column address of the index information, m columns of elements from the corresponding l rows in the first matrix to obtain the fourth matrix, wherein the m row addresses are in one-to-one correspondence with the m column addresses of the m columns of elements.
20. The hardware accelerator according to any one of claims 16 to 19, wherein t is an integer greater than or equal to 2; and
the arithmetic circuit is specifically configured to:
divide the third matrix into t m*n matrices, and perform multiply-add operations on the fourth matrix and the t m*n matrices respectively to obtain the fifth matrix.
CN201811100198.5A | Priority date 2018-09-20 | Filing date 2018-09-20 | Data processing method and device and hardware accelerator | Active | Granted as CN110929854B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811100198.5A | 2018-09-20 | 2018-09-20 | Data processing method and device and hardware accelerator

Publications (2)

Publication Number | Publication Date
CN110929854A | 2020-03-27
CN110929854B | 2024-04-16

Family

ID: 69856242

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811100198.5A | Data processing method and device and hardware accelerator (Active, granted as CN110929854B) | 2018-09-20 | 2018-09-20

Country Status (1)

Country | Link
CN (1) | CN110929854B (en)


Citations (3)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN101663829A (en) * | 2006-09-29 | 2010-03-03 | MediaTek Inc. (联发科技股份有限公司) | Architecture for joint detection hardware accelerator
US20180189234A1 (en) * | 2016-12-31 | 2018-07-05 | Intel Corporation | Hardware accelerator architecture for processing very-sparse and hyper-sparse matrix data
CN108268283A (en) * | 2016-12-31 | 2018-07-10 | Intel Corporation (英特尔公司) | Compute engine architecture to support data-parallel loops with reduction operations

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101663829A * | 2006-09-29 | 2010-03-03 | MediaTek Inc. | Architecture for joint detection hardware accelerator
US20180189234A1 * | 2016-12-31 | 2018-07-05 | Intel Corporation | Hardware accelerator architecture for processing very-sparse and hyper-sparse matrix data
CN108268283A * | 2016-12-31 | 2018-07-10 | Intel Corporation | Compute engine architecture to support data-parallel loops with reduction operations

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANURAG MUKARA et al.: "SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks" *
CHENG Kai et al.: "基于GPU的高效稀疏矩阵存储格式研究" (Research on efficient sparse-matrix storage formats on GPU) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2023165290A1 * | 2022-03-04 | 2023-09-07 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, and electronic device and storage medium
WO2024152605A1 * | 2023-01-16 | 2024-07-25 | Huawei Technologies Co., Ltd. | Matrix operation method and apparatus, and related device
WO2024199297A1 * | 2023-03-31 | 2024-10-03 | Huawei Cloud Computing Technologies Co., Ltd. | Data pre-processing method and apparatus
WO2024222762A1 * | 2023-04-25 | 2024-10-31 | Huawei Technologies Co., Ltd. | Matrix storage method and device
CN118333127A * | 2024-06-07 | 2024-07-12 | 鼎道智芯(上海)半导体有限公司 | Data processing method and device and data processing chip
CN118333127B * | 2024-06-07 | 2025-07-22 | 鼎道智芯(上海)半导体有限公司 | Data processing method and device and data processing chip
CN119294315A * | 2024-12-10 | 2025-01-10 | 奕行智能科技(广州)有限公司 | A verification method for parallel queue scheduling circuit
CN119294315B * | 2024-12-10 | 2025-02-18 | 奕行智能科技(广州)有限公司 | Verification method of parallel queue scheduling circuit

Also Published As

Publication number | Publication date
CN110929854B | 2024-04-16

Similar Documents

Publication | Title
CN110929854A | A data processing method, device and hardware accelerator
US11449576B2 | Convolution operation processing method and related product
CN112840356B | Operation accelerator, processing method and related equipment
CN107689948B | Efficient data access management device applied to neural network hardware acceleration system
JP6905573B2 | Arithmetic logic unit and calculation method
CN113435682B | Gradient compression for distributed training
CN109213962B | Operation accelerator
CN108304922B | Computing device and computing method for neural network computing
US20200160162A1 | Computing device and method
US9411726B2 | Low power computation architecture
US12254398B2 | Sparse machine learning acceleration
US11704556B2 | Optimization methods for quantization of neural network models
CN109993293B | A deep learning accelerator for stacked hourglass networks
KR20220154764A | Inference engine circuit architecture
CN110728351A | Data processing method, related equipment and computer storage medium
WO2019215907A1 | Arithmetic processing device
CN110414672B | Convolution operation method, device and system
WO2021128820A1 | Data processing method, apparatus and device, and storage medium and computer program product
CN116781484A | Data processing method, device, computer equipment and storage medium
CN110377874B | Convolution operation method and system
CN114730331B | Data processing device and data processing method
WO2023098256A1 | Neural network operation method and apparatus, chip, electronic device and storage medium
WO2020042770A1 | Image recognition method and apparatus
WO2024222115A1 | Neural network pruning method and apparatus, and data processing method and apparatus
CN111324793B | Method and device for controlling operation of storing data of region of interest

Legal Events

Code | Title / Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
