CN111061507A - Operation method, operation device, computer equipment and storage medium - Google Patents

Operation method, operation device, computer equipment and storage medium

Info

Publication number
CN111061507A
CN111061507A (application CN201910625494.5A)
Authority
CN
China
Prior art keywords
data
instruction
machine learning
matrix
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910625494.5A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Cambricon Information Technology Co Ltd
Original Assignee
Shanghai Cambricon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Cambricon Information Technology Co Ltd
Priority to PCT/CN2019/110167 (WO2020073925A1)
Publication of CN111061507A
Legal status: Pending (current)

Abstract

Translated from Chinese


The present disclosure relates to an operation method, an operation device, a computer device, and a storage medium. The combined processing device includes a machine learning computing device, a general interconnection interface, and other processing devices; the machine learning computing device interacts with the other processing devices to jointly complete a computing operation specified by the user. The combined processing device further includes a storage device, which is connected to both the machine learning computing device and the other processing devices and is used to store their data. The operation method, device, computer equipment, and storage medium provided by the embodiments of the present disclosure have a wide range of applications, high processing efficiency, and high processing speed.


Description

Operation method, operation device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an operation method, an operation device, a computer device, and a storage medium.
Background
With the continuous development of science and technology, machine learning, and especially neural network algorithms, are used more and more widely, with successful applications in fields such as image recognition, speech recognition, and natural language processing. However, as the complexity of neural network algorithms increases, the types and number of data operations involved keep growing. In the related art, matrix transposition operations on data are performed with low efficiency and at low speed.
Disclosure of Invention
In view of the above, the present disclosure provides an operation method, an operation device, a computer device, and a storage medium to improve efficiency and speed of performing a matrix transposition operation on data.
According to a first aspect of the present disclosure, there is provided a matrix transpose instruction processing apparatus, the apparatus including:
a control module configured to parse the obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction, and to obtain, according to the operation code and the operation domain, the data to be operated on, the target address, and the input height and input width of the data to be operated on that are required to execute the matrix transposition instruction;
an operation module configured to perform a matrix transposition operation on the data to be operated on according to the input height and the input width to obtain transposed data, and to store the transposed data at the target address,
wherein the operation code indicates that the operation performed by the matrix transposition instruction on the data is a matrix transposition operation, and the operation domain includes the address of the data to be operated on, the input height, the input width, and the target address.
According to a second aspect of the present disclosure, there is provided a machine learning arithmetic device, the device including:
one or more matrix transpose instruction processing devices according to the first aspect, configured to obtain data to be operated and control information from another processing device, execute a specified machine learning operation, and transmit an execution result to the other processing device through an I/O interface;
when the machine learning arithmetic device comprises a plurality of matrix transposition instruction processing devices, the plurality of matrix transposition instruction processing devices can be connected through a specific structure and transmit data;
the plurality of matrix transposition instruction processing devices may be interconnected through a PCIE (Peripheral Component Interconnect Express) bus and transmit data so as to support larger-scale machine learning operations; the plurality of matrix transposition instruction processing devices may share one control system or have their own control systems; they may share a memory or have their own memories; and their interconnection mode may be any interconnection topology.
According to a third aspect of the present disclosure, there is provided a combined processing apparatus, the apparatus comprising:
the machine learning arithmetic device, the universal interconnect interface, and the other processing device according to the second aspect;
wherein the machine learning arithmetic device interacts with the other processing devices to jointly complete the calculation operation designated by the user.
According to a fourth aspect of the present disclosure, there is provided a machine learning chip including the machine learning arithmetic device of the second aspect or the combined processing device of the third aspect.
According to a fifth aspect of the present disclosure, there is provided a machine learning chip package structure, which includes the machine learning chip of the fourth aspect.
According to a sixth aspect of the present disclosure, a board card is provided, which includes the machine learning chip packaging structure of the fifth aspect.
According to a seventh aspect of the present disclosure, there is provided an electronic device, which includes the machine learning chip of the fourth aspect or the board of the sixth aspect.
According to an eighth aspect of the present disclosure, there is provided a matrix transposition instruction processing method applied to a matrix transposition instruction processing apparatus, the method including:
parsing the obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction, and obtaining, according to the operation code and the operation domain, the data to be operated on, the target address, and the input height and input width of the data to be operated on that are required to execute the matrix transposition instruction;
performing a matrix transposition operation on the data to be operated on according to the input height and the input width to obtain transposed data, and storing the transposed data at the target address,
wherein the operation code indicates that the operation performed by the matrix transposition instruction on the data is a matrix transposition operation, and the operation domain includes the address of the data to be operated on, the input height, the input width, and the target address.
According to a ninth aspect of the present disclosure, there is provided a non-volatile computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described matrix transposition instruction processing method.
The device comprises a control module and an operation module. The control module parses the obtained matrix transposition instruction to obtain its operation code and operation domain, and obtains, according to them, the data to be operated on, the target address, and the input height and input width of the data that are required to execute the instruction. The operation module performs a matrix transposition operation on the data according to the input height and the input width to obtain transposed data, and stores the transposed data at the target address. The matrix transposition instruction processing method, apparatus, computer device, and storage medium provided by the embodiments of the present disclosure have a wide range of applications and process matrix transposition instructions, and the transposition operation itself, with high efficiency and speed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure.
Fig. 2 a-2 f show block diagrams of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating an application scenario of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure.
Fig. 4a, 4b show block diagrams of a combined processing device according to an embodiment of the present disclosure.
Fig. 5 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure.
Fig. 6 shows a flowchart of a matrix transpose instruction processing method according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art from the disclosed embodiments without creative effort shall fall within the scope of protection of the present disclosure.
It should be understood that the terms "zero," "first," "second," and the like in the claims, the description, and the drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
With the wide use of neural network algorithms, the computing capability of computer hardware keeps improving, and the types and number of data operations involved in practical applications keep increasing. Because of the variety of programming languages, and because at present there is no matrix transposition instruction that can be applied broadly across programming languages, technicians have had to customize multiple instructions for each language environment to implement the matrix transposition operation, which results in low efficiency and low speed. The present disclosure provides a matrix transposition instruction processing method and apparatus, a computer device, and a storage medium that can realize the matrix transposition operation with only one instruction, significantly improving the efficiency and speed of matrix transposition.
Fig. 1 shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. As shown in fig. 1, the apparatus includes a control module 11 and an operation module 12.
The control module 11 is configured to parse the obtained matrix transposition instruction to obtain the operation code and operation domain of the matrix transposition instruction, and to obtain, according to the operation code and the operation domain, the data to be operated on, the target address, and the input height and input width of the data to be operated on that are required to execute the instruction. The operation code indicates that the operation performed by the matrix transposition instruction on the data is a matrix transposition operation, and the operation domain includes the address of the data to be operated on, the input height, the input width, and the target address.
The operation module 12 is configured to perform a matrix transposition operation on the data to be operated on according to the input height and the input width to obtain transposed data, and to store the transposed data at the target address.
In this embodiment, the control module may obtain the data to be operated on from its data address. The operation domain may include the input height and input width directly, or it may include a memory address at which the input height and input width of the data to be operated on are stored. When the operation domain directly contains the specific numerical values of the input height and input width, those values are used as the input height and input width. When the operation domain contains their memory addresses, the input height and input width are read from the corresponding memory addresses. The control module may obtain instructions and data through a data input/output unit, which may be one or more data I/O interfaces or I/O pins.
In this embodiment, the operation code may be the part of an instruction or field (usually indicated by a code) specified in the computer program to perform the operation; it is an instruction sequence number that tells the device executing the instruction which specific instruction to execute. The operation domain may be the source of all data required for executing the corresponding instruction, including parameters such as the data to be operated on, its input height and input width, and the corresponding operation method. A matrix transposition instruction must include an operation code and an operation domain, where the operation domain at least includes the address of the data to be operated on, the input height, the input width, and the target address.
It should be understood that the instruction format of the matrix transpose instruction and the contained operation code and operation domain may be set as needed by those skilled in the art, and the disclosure is not limited thereto.
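Since the patent deliberately leaves the instruction format open, the following sketch defines one hypothetical encoding of a matrix transposition instruction and a control-module-style parse step. The mnemonic `MTRANS` and the field order are illustrative assumptions, not the patent's actual format.

```python
from dataclasses import dataclass

@dataclass
class MatrixTransposeInstruction:
    opcode: str    # indicates the operation is a matrix transposition
    src_addr: int  # address of the data to be operated on
    height: int    # input height of the matrix
    width: int     # input width of the matrix
    dst_addr: int  # target address for the transposed data

def parse(fields):
    """Split an instruction into its operation code and operation domain,
    as the control module is described to do (hypothetical layout)."""
    opcode, *domain = fields
    if opcode != "MTRANS":
        raise ValueError("not a matrix transposition instruction")
    src_addr, height, width, dst_addr = domain
    return MatrixTransposeInstruction(opcode, src_addr, height, width, dst_addr)

inst = parse(["MTRANS", 0x1000, 3, 4, 0x2000])
# inst.height == 3, inst.width == 4
```

A real encoding would pack these fields into fixed-width bit fields; the list-of-fields form above only mirrors the opcode/operation-domain split described in the text.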
In this embodiment, the apparatus may include one or more control modules and one or more operation modules, and the number of the control modules and the number of the operation modules may be set according to actual needs, which is not limited in this disclosure. When the apparatus includes a control module, the control module may receive a matrix transpose instruction and control one or more operation modules to perform a matrix transpose operation. When the apparatus includes a plurality of control modules, the plurality of control modules may respectively receive the matrix transposition instruction, and control the corresponding one or more operation modules to perform the matrix transposition operation.
The matrix transposition instruction processing device provided by the embodiments of the present disclosure comprises a control module and an operation module: the control module parses the obtained matrix transposition instruction to obtain its operation code and operation domain, and obtains, according to them, the data to be operated on, the target address, and the input height and input width of the data that are required to execute the instruction; the operation module performs a matrix transposition operation on the data according to the input height and the input width to obtain transposed data, and stores the transposed data at the target address. The device has a wide range of applications and processes matrix transposition instructions, and the transposition operation itself, with high efficiency and speed.
Fig. 2a shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2a, the operation module 12 may include a plurality of matrix transpose operators 120 configured to perform the matrix transposition operation on the data to be operated on according to the input height and the input width. The height of the transposed data is equal to the input width, and the width of the transposed data is equal to the input height.
In this implementation, the operation module may alternatively include a single matrix transpose operator. The number of matrix transpose operators may be set according to the amount of data on which transposition is to be performed, the required processing speed and efficiency, and the like, which the present disclosure does not limit.
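The transposition relation stated above (output height equals input width, and vice versa) can be sketched for a row-major flat buffer. This is an illustrative software model, not the hardware operator:

```python
def transpose(flat, height, width):
    """Row-major matrix transpose: the result has height == input width
    and width == input height."""
    return [flat[r * width + c] for c in range(width) for r in range(height)]

data = [1, 2, 3,
        4, 5, 6]                 # a 2 x 3 matrix, row-major
out = transpose(data, 2, 3)      # a 3 x 2 matrix: [1, 4, 2, 5, 3, 6]
```

The input height and width fully determine the element permutation, which is why the instruction's operation domain carries exactly those two parameters alongside the source and target addresses.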
Fig. 2b illustrates a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2b, the operation module 12 may include a master operation sub-module 121 and a plurality of slave operation sub-modules 122, where the master operation sub-module 121 includes a plurality of matrix transpose operators 120 (not shown in the figure).
The main operation sub-module 121 is configured to use the plurality of matrix transpose operators 120 to perform the matrix transposition operation on the data to be operated on according to the input height and the input width to obtain transposed data, and to store the transposed data at the target address.
In a possible implementation manner, the control module 11 is further configured to parse an obtained calculation instruction to obtain the operation domain and operation code of the calculation instruction, and to obtain the data to be operated on that is required to execute the calculation instruction according to them. The operation module 12 is further configured to perform an operation on the data to be operated on according to the calculation instruction to obtain the calculation result of the calculation instruction. The operation module may include a plurality of operators for performing operations corresponding to the operation types of the calculation instructions.
In this implementation, the calculation instruction may be other instructions for performing arithmetic operations, logical operations, and the like on data such as scalars, vectors, matrices, tensors, and the like, and those skilled in the art may set the calculation instruction according to actual needs, which is not limited by the present disclosure.
In this implementation, the arithmetic unit may include an adder, a divider, a multiplier, a comparator, and the like, which are capable of performing arithmetic operations, logical operations, and the like on data. The type and number of the arithmetic units may be set according to the requirements of the size of the data amount of the arithmetic operation to be performed, the type of the arithmetic operation, the processing speed and efficiency of the arithmetic operation on the data, and the like, which is not limited by the present disclosure.
In a possible implementation manner, the control module 11 is further configured to parse the calculation instruction to obtain a plurality of operation instructions, and to send the data to be operated on and the plurality of operation instructions to the main operation sub-module 121.
The master operation sub-module 121 is configured to perform preamble processing on the data to be operated on, and to exchange data and operation instructions with the plurality of slave operation sub-modules 122.
The slave operation sub-modules 122 are configured to execute intermediate operations in parallel according to the data and operation instructions transmitted from the master operation sub-module 121 to obtain a plurality of intermediate results, and to transmit the intermediate results back to the master operation sub-module 121.
The main operation sub-module 121 is further configured to perform subsequent processing on the plurality of intermediate results to obtain the calculation result of the calculation instruction, and to store the calculation result at the corresponding address.
In this implementation, when the computation instruction is an operation performed on scalar or vector data, the apparatus may control the main operation sub-module to perform an operation corresponding to the computation instruction by using an operator therein. When the calculation instruction is to perform an operation on data having a dimension greater than or equal to 2, such as a matrix, a tensor, or the like, the device may control the slave operation submodule to perform an operation corresponding to the calculation instruction by using an operator therein.
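The dispatch rule in this paragraph — scalar and vector operations stay on the master sub-module, data of dimension two or more goes to the slave sub-modules — can be summarized as a one-line routing function. This helper is a hypothetical illustration, not part of the patent:

```python
def dispatch(ndim):
    """Route a calculation instruction by operand dimensionality:
    scalars (0-D) and vectors (1-D) are handled by the master operation
    sub-module's own operators; matrices and tensors (>= 2-D) are
    handled by the slave operation sub-modules."""
    return "master" if ndim < 2 else "slave"

# dispatch(0) -> "master" (scalar), dispatch(2) -> "slave" (matrix)
```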
It should be noted that, a person skilled in the art may set the connection manner between the master operation submodule and the plurality of slave operation submodules according to actual needs to implement the configuration setting of the operation module, for example, the configuration of the operation module may be an "H" configuration, an array configuration, a tree configuration, and the like, which is not limited in the present disclosure.
Fig. 2c shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2c, the operation module 12 may further include one or more branch operation sub-modules 123, each configured to forward data and/or operation instructions between the master operation sub-module 121 and the slave operation sub-modules 122. The main operation sub-module 121 is connected to the one or more branch operation sub-modules 123. In this way, the master, branch, and slave operation sub-modules are connected in an "H"-shaped structure, and data and/or operation instructions are forwarded by the branch operation sub-modules, which reduces the load on the main operation sub-module and further improves the instruction processing speed.
Fig. 2d shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2d, the plurality of slave operation sub-modules 122 are distributed in an array.
Each slave operation sub-module 122 is connected to the adjacent slave operation sub-modules 122. The master operation sub-module 121 is connected to k of the plurality of slave operation sub-modules 122, where the k slave operation sub-modules 122 are: the n slave operation sub-modules 122 of row 1, the n slave operation sub-modules 122 of row m, and the m slave operation sub-modules 122 of column 1.
As shown in fig. 2d, the k slave operation sub-modules include only the n slave operation sub-modules in the 1st row, the n slave operation sub-modules in the m-th row, and the m slave operation sub-modules in the 1st column; that is, the k slave operation sub-modules are those directly connected to the master operation sub-module. The k slave operation sub-modules forward data and instructions between the master operation sub-module and the remaining slave operation sub-modules. Distributing the slave operation sub-modules in an array in this way can increase the speed at which the master operation sub-module sends data and/or operation instructions to the slave operation sub-modules, further increasing the instruction processing speed.
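The set of slave sub-modules wired directly to the master (row 1, row m, and column 1) can be enumerated to see how k relates to the array shape. The function below is only an illustrative model of the figure, using 1-indexed coordinates as in the description:

```python
def directly_connected(m, n):
    """Return the (row, col) positions of the k slave operation
    sub-modules directly connected to the master in an m x n array:
    the n modules of row 1, the n modules of row m, and the m modules
    of column 1 (corner modules counted once)."""
    k = set()
    for c in range(1, n + 1):
        k.add((1, c))  # row 1
        k.add((m, c))  # row m
    for r in range(1, m + 1):
        k.add((r, 1))  # column 1
    return k

# For a 3 x 4 array: 4 + 4 + 3 positions with two shared corners, so k = 9.
```

Because corners are shared, k = 2n + m - 2 for m > 1; interior modules reach the master only through this border.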
Fig. 2e shows a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2e, the operation module may further include a tree sub-module 124. The tree sub-module 124 includes a root port 401 and a plurality of branch ports 402. The root port 401 is connected to the master operation sub-module 121, and the branch ports 402 are connected to the plurality of slave operation sub-modules 122, respectively. The tree sub-module 124 has a transceiving function and is configured to forward data and/or operation instructions between the master operation sub-module 121 and the slave operation sub-modules 122. The operation module is thus connected in a tree structure, and the forwarding function of the tree sub-module can increase the speed at which the master operation sub-module sends data and/or operation instructions to the slave operation sub-modules, thereby increasing the instruction processing speed.
In one possible implementation, the tree sub-module 124 may be an optional part of the apparatus and may include at least one level of nodes. The nodes are line structures with a forwarding function; the nodes themselves have no operation function. The lowest-level nodes are connected to the slave operation sub-modules to forward data and/or operation instructions between the master operation sub-module 121 and the slave operation sub-modules 122. In particular, if the tree sub-module has zero levels of nodes, the apparatus does not require the tree sub-module.
In one possible implementation, the tree sub-module 124 may include a plurality of nodes of an n-ary tree structure, and the plurality of nodes of the n-ary tree structure may have a plurality of layers.
For example, fig. 2f illustrates a block diagram of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. As shown in fig. 2f, the n-ary tree structure may be a binary tree structure, with the tree sub-module including 2 levels of nodes 01. The lowest-level nodes 01 are connected to the slave operation sub-modules 122 to forward data and/or operation instructions between the master operation sub-module 121 and the slave operation sub-modules 122.
In this implementation, the n-ary tree structure may also be a ternary tree structure or the like, where n is a positive integer greater than or equal to 2. The value of n and the number of node layers in the n-ary tree structure may be set by those skilled in the art as needed, and the disclosure is not limited thereto.
In one possible implementation, as shown in fig. 2a to 2f, the apparatus may further include a storage module 13 for storing the data to be operated on.
In this implementation, the storage module may include one or more of a cache and a register, and the cache may include a temporary cache and may further include at least one NRAM (Neuron Random Access Memory). The cache can be used for storing data to be operated on, and the register can be used for storing scalar data in the data to be operated on.
In one possible implementation, the cache may include a neuron cache. The neuron buffer, i.e., the neuron random access memory, may be configured to store neuron data in data to be operated on, where the neuron data may include neuron vector data.
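A toy model of this storage split — registers for scalar operands, an NRAM-like cache for neuron and matrix data — might look as follows. The class and method names are invented for illustration and do not appear in the patent:

```python
class StorageModule:
    """Hypothetical sketch of the storage module: scalar data goes to
    registers, non-scalar (neuron/matrix) data to an NRAM-like cache."""

    def __init__(self):
        self.registers = {}  # scalar data in the data to be operated on
        self.cache = {}      # neuron / matrix data (NRAM-like)

    def store(self, name, value):
        # Route by data kind, as the description assigns scalars to
        # registers and operand tensors to the cache.
        if isinstance(value, (int, float)):
            self.registers[name] = value
        else:
            self.cache[name] = value

    def load(self, name):
        return self.registers.get(name, self.cache.get(name))

sm = StorageModule()
sm.store("alpha", 2.5)          # scalar -> register
sm.store("x", [1.0, 2.0, 3.0])  # vector -> cache
```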
In a possible implementation manner, the apparatus may further include a direct memory access module for reading data from, or writing data to, the storage module.
In one possible implementation, as shown in fig. 2a to 2f, the control module 11 may include an instruction storage sub-module 111, an instruction processing sub-module 112, and a queue storage sub-module 113.
The instruction storage submodule 111 is configured to store a matrix transpose instruction.
The instruction processing sub-module 112 is configured to parse the matrix transposition instruction to obtain the operation code and operation domain of the matrix transposition instruction.
The queue storage sub-module 113 is configured to store an instruction queue, where the instruction queue includes a plurality of instructions to be executed that are arranged in execution order, and the plurality of instructions to be executed may include the matrix transposition instruction.
In this implementation, the instruction queue may be obtained by arranging the multiple instructions to be executed according to their receiving time, priority level, and the like, so that the multiple instructions to be executed are executed sequentially according to the instruction queue.
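As a hedged illustration of the queue ordering described above (not part of the disclosure; the class and field names are hypothetical), the following Python sketch arranges pending instructions by priority level, breaking ties by receiving order:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class QueuedInstruction:
    # A lower priority value is executed earlier; ties are broken
    # by receiving order (earlier-received instructions run first).
    priority: int
    receive_order: int
    text: str = field(compare=False)

class InstructionQueue:
    """Toy model of the queue storage sub-module: instructions are
    arranged by priority level and receiving time, then popped in
    execution order."""

    def __init__(self):
        self._heap = []
        self._counter = count()  # monotonically increasing receive order

    def push(self, text, priority=0):
        item = QueuedInstruction(priority, next(self._counter), text)
        heapq.heappush(self._heap, item)

    def pop(self):
        return heapq.heappop(self._heap).text

q = InstructionQueue()
q.push("transpose 500 100 64 32", priority=1)
q.push("load 100", priority=0)
assert q.pop() == "load 100"               # higher priority runs first
assert q.pop() == "transpose 500 100 64 32"
```

The heap keeps the queue sorted as instructions arrive, so the control module can always dispatch the next instruction in O(log n) time.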
In one possible implementation, as shown in fig. 2 a-2 f, the control module 11 may further include a dependency relationship processing sub-module 114.
The dependency relationship processing sub-module 114 is configured to, when it is determined that a first to-be-executed instruction among the multiple to-be-executed instructions is associated with a zeroth to-be-executed instruction preceding it, cache the first to-be-executed instruction in the instruction storage sub-module 111, and, after the zeroth to-be-executed instruction has finished executing, extract the first to-be-executed instruction from the instruction storage sub-module 111 and send it to the operation module 12.
The first to-be-executed instruction is associated with the zeroth to-be-executed instruction preceding it when a first storage address interval, which stores the data required by the first to-be-executed instruction, has an overlapping area with a zeroth storage address interval, which stores the data required by the zeroth to-be-executed instruction. Conversely, there is no association between the first to-be-executed instruction and the preceding zeroth to-be-executed instruction when the first storage address interval and the zeroth storage address interval have no overlapping area.
In this way, based on the dependency relationship between the first to-be-executed instruction and the preceding zeroth to-be-executed instruction, the subsequent first to-be-executed instruction is executed only after the preceding zeroth to-be-executed instruction has finished executing, which ensures the accuracy of the operation result.
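The overlapping-interval test described above can be sketched in Python; this model is illustrative only, and the function names and half-open interval convention are assumptions:

```python
def intervals_overlap(start_a, end_a, start_b, end_b):
    """Two half-open storage address intervals [start, end) overlap
    iff each one starts before the other ends."""
    return start_a < end_b and start_b < end_a

def has_dependency(first_interval, zeroth_interval):
    # The first to-be-executed instruction depends on the zeroth
    # to-be-executed instruction exactly when their storage address
    # intervals share an overlapped area.
    return intervals_overlap(*first_interval, *zeroth_interval)

# Zeroth instruction touches [100, 300); first instruction touches [200, 400):
assert has_dependency((200, 400), (100, 300))
# Disjoint intervals carry no association relationship:
assert not has_dependency((400, 500), (100, 300))
```

When `has_dependency` is true, the first instruction is held in the instruction storage sub-module until the zeroth instruction completes; otherwise both may proceed.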
In one possible implementation, the instruction format of the matrix transpose instruction may be:
transpose dst src srcHeight srcWidth
wherein transpose is the operation code of the matrix transpose instruction, and dst, src, srcHeight, and srcWidth constitute the operation domain of the matrix transpose instruction. dst is the target address, src is the address of the data to be operated on, srcHeight is the input height of the data to be operated on, and srcWidth is the input width of the data to be operated on.
It should be understood that the operation code of the matrix transpose instruction, the position of the operation code and the operation field in the instruction format can be set as required by those skilled in the art, and the present disclosure does not limit this.
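Assuming the textual instruction format shown above, the parsing performed by the control module could be modeled as follows (a simplified sketch; a real device decodes a binary instruction word, and the function name is hypothetical):

```python
def parse_transpose_instruction(instruction):
    """Split a matrix transpose instruction of the assumed textual form
    'transpose dst src srcHeight srcWidth' into its operation code and
    operation domain."""
    opcode, dst, src, src_height, src_width = instruction.split()
    if opcode != "transpose":
        raise ValueError(f"not a matrix transpose instruction: {opcode}")
    operation_domain = {
        "dst": int(dst),               # target address
        "src": int(src),               # address of the data to be operated on
        "srcHeight": int(src_height),  # input height
        "srcWidth": int(src_width),    # input width
    }
    return opcode, operation_domain

opcode, domain = parse_transpose_instruction("transpose 500 100 64 32")
assert opcode == "transpose"
assert domain == {"dst": 500, "src": 100, "srcHeight": 64, "srcWidth": 32}
```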
In one possible implementation manner, the apparatus may be disposed in one or more of a Graphics Processing Unit (GPU), a Central Processing Unit (CPU), and an embedded Neural Network Processor (NPU).
It should be noted that, although the matrix transpose instruction processing apparatus has been described above by taking the above embodiments as examples, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each module according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Application example
An application example according to an embodiment of the present disclosure is given below, taking "performing a matrix transposition operation with the matrix transposition instruction processing apparatus" as an exemplary application scenario, to facilitate understanding of the flow of the matrix transposition instruction processing apparatus. Those skilled in the art will understand that the following application example is merely for the purpose of facilitating understanding of the embodiments of the present disclosure and should not be construed as limiting the embodiments of the present disclosure.
Fig. 3 is a schematic diagram illustrating an application scenario of a matrix transpose instruction processing apparatus according to an embodiment of the present disclosure. As shown in fig. 3, the matrix transpose instruction processing apparatus processes a matrix transpose instruction as follows:
the control module 11 parses the obtained matrix transposition instruction 1 (for example, matrix transposition instruction 1 is transpose 500 100 64 32), and obtains the operation code and the operation domain of matrix transposition instruction 1. The operation code of matrix transposition instruction 1 is transpose, the target address is 500, the address of the data to be operated on is 100, the input height is 64, and the input width is 32. The control module 11 obtains the 64 × 32 data to be operated on from the data address 100.
The operation module 12 performs a matrix transposition operation on the data to be operated on to obtain 32 × 64 transposed data, and stores the transposed data at the target address 500.
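The application example above can be reproduced with a minimal software model of the operation module; the flat-memory layout and function name below are assumptions made for illustration:

```python
def execute_transpose(memory, dst, src, height, width):
    """Simulate the operation module on a flat memory list: read a
    height x width matrix stored row-major at address src, and write
    its transpose (width x height), row-major, at address dst."""
    for r in range(height):
        for c in range(width):
            # Element (r, c) of the source becomes element (c, r)
            # of the transposed result.
            memory[dst + c * height + r] = memory[src + r * width + c]

memory = [0] * 3000
# Lay out 64 x 32 data to be operated on at address 100.
for i in range(64 * 32):
    memory[100 + i] = i
# transpose 500 100 64 32: dst=500, src=100, srcHeight=64, srcWidth=32.
execute_transpose(memory, dst=500, src=100, height=64, width=32)
# Source element (r=7, c=5) appears at (r=5, c=7) of the 32 x 64 result.
assert memory[500 + 5 * 64 + 7] == memory[100 + 7 * 32 + 5]
```

The hardware apparatus performs the same index swap with a plurality of matrix transpose operators in parallel rather than a scalar double loop.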
The working process of the above modules can refer to the above related description.
Thus, the matrix transposition instruction processing device can efficiently and quickly process the matrix transposition instruction, and the processing efficiency and the processing speed for performing the matrix transposition operation are high.
The present disclosure provides a machine learning arithmetic device, which may include one or more of the above matrix transposition instruction processing devices, and is configured to acquire data to be operated on and control information from other processing devices and execute a specified machine learning operation. The machine learning arithmetic device can obtain a matrix transposition instruction from other machine learning arithmetic devices or non-machine learning arithmetic devices, and transmit the execution result to peripheral equipment (also called other processing devices) through an I/O interface. Peripheral equipment includes, for example, cameras, displays, mice, keyboards, network cards, WiFi interfaces, and servers. When more than one matrix transposition instruction processing device is included, the matrix transposition instruction processing devices can be linked and transmit data through a specific structure, for example, interconnected through a PCIE bus, so as to support larger-scale neural network operations. In this case, the devices may share the same control system or have separate control systems, and may share memory or have separate memories for each accelerator. In addition, the interconnection mode can be any interconnection topology.
The machine learning arithmetic device has high compatibility and can be connected with various types of servers through PCIE interfaces.
Fig. 4a shows a block diagram of a combined processing device according to an embodiment of the present disclosure. As shown in fig. 4a, the combined processing device includes the machine learning arithmetic device, the universal interconnection interface, and other processing devices. The machine learning arithmetic device interacts with other processing devices to jointly complete the operation designated by the user.
Other processing devices include one or more of general purpose/special purpose processors such as Central Processing Units (CPUs), Graphics Processing Units (GPUs), neural network processors, and the like. The number of processors included in the other processing devices is not limited. The other processing devices are used as interfaces of the machine learning arithmetic device and external data and control, and comprise data transportation to finish basic control of starting, stopping and the like of the machine learning arithmetic device; other processing devices may cooperate with the machine learning computing device to perform computing tasks.
And the universal interconnection interface is used for transmitting data and control instructions between the machine learning arithmetic device and other processing devices. The machine learning arithmetic device acquires required input data from other processing devices and writes the input data into a storage device on the machine learning arithmetic device; control instructions can be obtained from other processing devices and written into a control cache on a machine learning arithmetic device chip; the data in the storage module of the machine learning arithmetic device can also be read and transmitted to other processing devices.
Fig. 4b shows a block diagram of a combined processing device according to an embodiment of the present disclosure. In a possible implementation manner, as shown in fig. 4b, the combined processing device may further include a storage device, and the storage device is connected to the machine learning operation device and the other processing device respectively. The storage device is used for storing data stored in the machine learning arithmetic device and the other processing device, and is particularly suitable for data which is required to be calculated and cannot be stored in the internal storage of the machine learning arithmetic device or the other processing device.
The combined processing device can serve as an SOC (system on chip) for equipment such as mobile phones, robots, unmanned aerial vehicles, and video monitoring equipment, effectively reducing the core area of the control part, increasing the processing speed, and reducing overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, display, mouse, keyboard, network card, or WiFi interface.
The present disclosure provides a machine learning chip, which includes the above machine learning arithmetic device or combined processing device.
The present disclosure provides a machine learning chip package structure, which includes the above machine learning chip.
Fig. 5 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure. As shown in fig. 5, the board card includes the above-mentioned machine learning chip package structure or the above-mentioned machine learning chip. In addition to the machine learning chip 389, the board card may include other components, including but not limited to: a memory device 390, an interface device 391, and a control device 392.
The memory device 390 is coupled to the machine learning chip 389 (or the machine learning chip within the machine learning chip package structure) via a bus for storing data. The memory device 390 may include multiple groups of memory cells 393. Each group of memory cells 393 is coupled to the machine learning chip 389 via a bus. It is understood that each group of memory cells 393 may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
DDR can double the speed of SDRAM without increasing the clock frequency. DDR allows data to be read out on the rising and falling edges of the clock pulse. DDR is twice as fast as standard SDRAM.
In one embodiment, the memory device 390 may include 4 groups of memory cells 393. Each group of memory cells 393 may include a plurality of DDR4 particles (chips). In one embodiment, the machine learning chip 389 may include four 72-bit DDR4 controllers, in which 64 bits are used for data transmission and 8 bits are used for ECC checking. It is understood that when DDR4-3200 particles are used in each group of memory cells 393, the theoretical bandwidth of data transfer may reach 25600 MB/s.
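The 25600 MB/s figure follows from simple arithmetic, sketched below (the variable names are illustrative):

```python
# DDR4-3200 performs 3200 mega-transfers per second; with a 64-bit
# payload path (the 8 ECC bits of a 72-bit controller carry no data),
# the theoretical bandwidth of one group of memory cells is:
transfers_per_second = 3200 * 10**6   # MT/s for DDR4-3200
data_bits_per_transfer = 64           # 72-bit controller minus 8 ECC bits
bandwidth_mb_per_s = transfers_per_second * data_bits_per_transfer // 8 // 10**6
assert bandwidth_mb_per_s == 25600    # matches the figure stated above
```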
In one embodiment, each group 393 of memory cells includes a plurality of double rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling DDR is provided in the machine learning chip 389 for controlling data transfer and data storage of each memory unit 393.
The interface device 391 is electrically coupled to the machine learning chip 389 (or the machine learning chip within the machine learning chip package structure). The interface device 391 is used to implement data transmission between the machine learning chip 389 and an external device (e.g., a server or a computer). For example, in one embodiment, the interface device 391 may be a standard PCIE interface, and the data to be processed is transmitted by the server to the machine learning chip 389 through the standard PCIE interface, so as to implement data transfer. Preferably, when a PCIE 3.0 x16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device 391 may also be another interface; the present disclosure does not limit the specific form of the other interface, as long as the interface device can implement the transfer function. In addition, the calculation result of the machine learning chip is still transmitted back to the external device (e.g., a server) by the interface device.
The control device 392 is electrically connected to the machine learning chip 389 and is used to monitor the state of the machine learning chip 389. Specifically, the machine learning chip 389 and the control device 392 may be electrically connected through an SPI interface. The control device 392 may include a single-chip microcomputer (MCU). The machine learning chip 389 may include multiple processing chips, multiple processing cores, or multiple processing circuits, and may carry multiple loads; it can therefore be in different operating states such as heavy load and light load. The control device can regulate the working states of the plurality of processing chips, the plurality of processing cores, and/or the plurality of processing circuits in the machine learning chip.
The present disclosure provides an electronic device, which includes the above machine learning chip or board card.
The electronic device may include a data processing apparatus, a computer device, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a tachograph, a navigator, a sensor, a webcam, a server, a cloud server, a camera, a camcorder, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle may include an aircraft, a ship, and/or a vehicle. The household appliances may include televisions, air conditioners, microwave ovens, refrigerators, electric rice cookers, humidifiers, washing machines, electric lamps, gas cookers, and range hoods. The medical device may include a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus and/or an electrocardiograph.
Fig. 6 shows a flowchart of a matrix transpose instruction processing method according to an embodiment of the present disclosure. The method can be applied to computer equipment and the like comprising a memory and a processor, wherein the memory is used for storing data used in the process of executing the method; the processor is used for executing relevant processing and operation steps, such as the steps S51 and S52. As shown in fig. 6, the method is applied to the above-described matrix transpose instruction processing apparatus, and includes step S51 and step S52.
In step S51, the control module is used to parse the obtained matrix transposition instruction to obtain the operation code and the operation domain of the matrix transposition instruction, and to obtain, according to the operation code and the operation domain, the data to be operated on, the target address, and the input height and input width of the data to be operated on that are required for executing the matrix transposition instruction. The operation code indicates that the operation to be performed on the data by the matrix transposition instruction is a matrix transposition operation, and the operation domain includes the address of the data to be operated on, the input height, the input width, and the target address.
In step S52, the operation module performs matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data, and stores the transposed data in the target address.
In a possible implementation manner, performing matrix transposition operation on data to be operated according to an input height and an input width to obtain transposed data may include:
and performing matrix transposition operation on the data to be operated according to the input height and the input width by utilizing a plurality of matrix transposition operators. Wherein the height of the transposed data is equal to the input width, and the width of the transposed data is equal to the input height.
In one possible implementation, the operation module includes a master operation submodule and a plurality of slave operation submodules, and the master operation submodules include a plurality of matrix transpose operators. Wherein, the step S52 may include:
and performing matrix transposition operation on the data to be operated according to the input height and the input width by utilizing a plurality of matrix transposition operators in the main operation sub-module to obtain transposed data, and storing the transposed data into a target address.
In one possible implementation, the method may further include: the storage module of the device is used for storing the data to be operated,
wherein the memory module comprises at least one of a register and a cache,
the cache is used for storing data to be operated, and comprises at least one neuron cache NRAM;
the register is used for storing scalar data in the data to be operated;
and the neuron cache is used for storing neuron data in the data to be operated, wherein the neuron data comprises neuron vector data.
In a possible implementation manner, analyzing the obtained matrix transpose instruction to obtain an operation code and an operation domain of the matrix transpose instruction may include:
storing a matrix transposition instruction;
analyzing the matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction;
the method includes storing an instruction queue, where the instruction queue includes a plurality of instructions to be executed that are sequentially arranged according to an execution order, and the plurality of instructions to be executed may include a matrix transpose instruction.
In one possible implementation, the method may further include: when determining that the first to-be-executed instruction in the plurality of to-be-executed instructions has an association relation with a zeroth to-be-executed instruction before the first to-be-executed instruction, caching the first to-be-executed instruction, after the zeroth to-be-executed instruction is executed, executing the first to-be-executed instruction,
the method for determining the zero-th instruction to be executed before the first instruction to be executed has an incidence relation with the first instruction to be executed comprises the following steps:
the first storage address interval for storing the data required by the first to-be-executed instruction and the zeroth storage address interval for storing the data required by the zeroth to-be-executed instruction have an overlapped area.
It should be noted that, although the matrix transpose instruction processing method is described above by taking the above-described embodiment as an example, those skilled in the art can understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each step according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
The matrix transposition instruction processing method provided by the embodiments of the present disclosure has a wide application range, and offers high processing efficiency and high processing speed both for the matrix transposition instruction and for the matrix transposition operation itself.
The present disclosure also provides a non-transitory computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement the above-described matrix transpose instruction processing method.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
It should be further noted that, although the steps in the flowchart of fig. 6 are shown in sequence as indicated by the arrows, the steps are not necessarily executed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in fig. 6 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
It should be understood that the above-described apparatus embodiments are merely exemplary, and that the apparatus of the present disclosure may be implemented in other ways. For example, the division of the units/modules in the above embodiments is only one logical function division, and there may be another division manner in actual implementation. For example, multiple units, modules, or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented.
In addition, unless otherwise specified, each functional unit/module in the embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together. The integrated units/modules may be implemented in the form of hardware or software program modules.
If the integrated unit/module is implemented in hardware, the hardware may be digital circuits, analog circuits, etc. Physical implementations of hardware structures include, but are not limited to, transistors, memristors, and the like. Unless otherwise specified, the storage module may be any suitable magnetic storage medium or magneto-optical storage medium, such as Resistive Random Access Memory (RRAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Enhanced Dynamic Random Access Memory (eDRAM), High-Bandwidth Memory (HBM), Hybrid Memory Cube (HMC), and so on.
The integrated units/modules, if implemented in the form of software program modules and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. The technical features of the embodiments may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The foregoing may be better understood in light of the following clauses:
clause a1, a matrix transpose instruction processing device, the device comprising:
the control module is used for analyzing the obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction, wherein the operation code is used for indicating the operation of the matrix transposition instruction on data to be matrix transposition operation, the operation domain comprises a data address to be operated, an input height, an input width and a target address, and the data to be operated, the target address, the input height and the input width of the data to be operated, which are required by the execution of the matrix transposition instruction, are obtained according to the operation code and the operation domain;
and the operation module is used for performing matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data and storing the transposed data into the target address.
Clause a2, the apparatus of clause a1, the computing module comprising:
a plurality of matrix transpose operators for performing matrix transpose operations on the data to be operated according to the input height and the input width,
wherein a height of the transposed data is equal to the input width, and a width of the transposed data is equal to the input height.
Clause A3, the apparatus of clause a2, the arithmetic module comprising a master arithmetic sub-module and a plurality of slave arithmetic sub-modules, the master arithmetic sub-module comprising the plurality of matrix transpose operators,
the main operation sub-module is configured to perform matrix transposition operation on the data to be operated according to the input height and the input width by using the plurality of matrix transposition operators to obtain transposed data, and store the transposed data in the target address.
Clause a4, the apparatus of clause a1, further comprising:
a storage module for storing the data to be operated,
wherein the storage module comprises at least one of a register and a cache,
the cache is used for storing the data to be operated, and comprises at least one neuron cache NRAM;
the register is used for storing scalar data in the data to be operated;
the neuron cache is used for storing neuron data in the data to be operated, wherein the neuron data comprises neuron vector data.
Clause a5, the apparatus of clause a1, the control module comprising:
the instruction storage submodule is used for storing the matrix transposition instruction;
the instruction processing submodule is used for analyzing the matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction;
the queue storage submodule is used for storing an instruction queue, the instruction queue comprises a plurality of instructions to be executed, which are sequentially arranged according to an execution sequence, and the plurality of instructions to be executed comprise the matrix transposition instruction.
Clause a6, the apparatus of clause a5, the control module further comprising:
the dependency relationship processing submodule is used for caching a first instruction to be executed in the instruction storage submodule when the fact that the first instruction to be executed in the plurality of instructions to be executed is associated with a zeroth instruction to be executed before the first instruction to be executed is determined, extracting the first instruction to be executed from the instruction storage submodule after the zeroth instruction to be executed is executed, and sending the first instruction to be executed to the operation module,
wherein the association relationship between the first to-be-executed instruction and a zeroth to-be-executed instruction before the first to-be-executed instruction comprises:
and a first storage address interval for storing the data required by the first instruction to be executed and a zeroth storage address interval for storing the data required by the zeroth instruction to be executed have an overlapped area.
Clause a7, a machine learning computing device, the device comprising:
one or more matrix transpose instruction processing devices as set forth in any one of clauses A1 to A6, configured to acquire data to be operated on and control information from other processing devices, execute a specified machine learning operation, and transmit the execution result to other processing devices through an I/O interface;
when the machine learning arithmetic device comprises a plurality of matrix transposition instruction processing devices, the plurality of matrix transposition instruction processing devices can be connected through a specific structure and transmit data;
the matrix transposition instruction processing devices are interconnected through a PCIE bus of a fast peripheral equipment interconnection bus and transmit data so as to support operation of larger-scale machine learning; a plurality of the matrix transposition instruction processing devices share the same control system or own respective control systems; the matrix transposition instruction processing devices share a memory or have respective memories; the interconnection mode of the matrix transposition instruction processing devices is any interconnection topology.
Clause A8, a combination processing device, comprising:
the machine learning computing device, universal interconnect interface, and other processing device of clause a 7;
the machine learning arithmetic device interacts with the other processing devices to jointly complete the calculation operation designated by the user,
wherein the combination processing apparatus further comprises: and a storage device connected to the machine learning arithmetic device and the other processing device, respectively, for storing data of the machine learning arithmetic device and the other processing device.
Clause a9, a machine learning chip, the machine learning chip comprising:
the machine learning computing device of clause a7 or the combined processing device of clause A8.
Clause a10, an electronic device, comprising:
the machine learning chip of clause a 9.
Clause a11, a card, comprising: a memory device, an interface device and a control device and a machine learning chip as described in clause a 9;
wherein the machine learning chip is connected with the storage device, the control device and the interface device respectively;
the storage device is used for storing data;
the interface device is used for realizing data transmission between the machine learning chip and external equipment;
and the control device is used for monitoring the state of the machine learning chip.
Clause a12, a matrix transpose instruction processing method applied to a matrix transpose instruction processing apparatus including a control module and an operation module, the method comprising:
parsing the obtained matrix transposition instruction by using the control module to obtain an operation code and an operation domain of the matrix transposition instruction, and obtaining, according to the operation code and the operation domain, the data to be operated, the target address, and the input height and input width of the data to be operated that are required for executing the matrix transposition instruction;
performing matrix transposition operation on the data to be operated by using an operation module according to the input height and the input width to obtain transposed data, storing the transposed data into the target address,
the operation code is used for indicating that the operation of the matrix transposition instruction on data is matrix transposition operation, and the operation domain comprises a data address to be operated, the input height, the input width and the target address.
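The parse-then-execute flow described in clause A12 can be illustrated with a small model. The following Python sketch is an illustration under stated assumptions, not the patent's actual encoding: memory is modeled as a flat list, and a hypothetical text instruction `MTRANS <src> <height> <width> <dst>` stands in for the binary operation code and operation domain:

```python
# Illustrative model of the matrix transposition instruction flow.
# The "MTRANS" opcode and the operand layout below are hypothetical,
# not the patent's actual binary format.

MEMORY = [0.0] * 64  # flat "device memory" addressed by integer offsets

def parse(instruction):
    """Control module: split an instruction into opcode and operation domain."""
    opcode, *domain = instruction.split()
    if opcode != "MTRANS":
        raise ValueError("not a matrix transposition instruction")
    src, height, width, dst = (int(x) for x in domain)
    return src, height, width, dst

def execute(instruction):
    """Operation module: transpose the height x width matrix at src into dst."""
    src, height, width, dst = parse(instruction)
    for r in range(height):
        for c in range(width):
            # element (r, c) of the input becomes element (c, r) of the output
            MEMORY[dst + c * height + r] = MEMORY[src + r * width + c]

# store a 2x3 matrix [[1, 2, 3], [4, 5, 6]] at address 0, transpose into address 16
MEMORY[0:6] = [1, 2, 3, 4, 5, 6]
execute("MTRANS 0 2 3 16")
print(MEMORY[16:22])  # -> [1, 4, 2, 5, 3, 6]
```

The target address simply names where the transposed result is written back, mirroring the clause's "storing the transposed data into the target address".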
Clause A13, the method according to clause A12, wherein performing matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data includes:
performing matrix transposition operation on the data to be operated according to the input height and the input width by utilizing a plurality of matrix transposition operators in the operation module,
wherein a height of the transposed data is equal to the input width, and a width of the transposed data is equal to the input height.
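The dimension rule of clause A13 (the height of the transposed data equals the input width, and vice versa) can be checked with a minimal sketch; the function name and the flat row-major layout are illustrative assumptions:

```python
# Minimal sketch of clause A13's dimension rule: for an input of shape
# (height, width), the transposed result has shape (width, height).

def transpose(flat, height, width):
    """Transpose a row-major flat matrix; returns (flat_out, out_height, out_width)."""
    out = [0] * (height * width)
    for r in range(height):
        for c in range(width):
            out[c * height + r] = flat[r * width + c]
    return out, width, height  # height and width swap roles

data = [1, 2, 3, 4, 5, 6]            # 2 x 3 matrix [[1, 2, 3], [4, 5, 6]]
out, oh, ow = transpose(data, 2, 3)
assert (oh, ow) == (3, 2)            # output height == input width, and vice versa
assert out == [1, 4, 2, 5, 3, 6]
```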
Clause A14, the method of clause A13, wherein the operation module comprises a main operation sub-module and a plurality of slave operation sub-modules, the main operation sub-module comprising the plurality of matrix transposition operators,
performing matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data, and storing the transposed data in the target address, including:
and performing matrix transposition operation on the data to be operated according to the input height and the input width by utilizing a plurality of matrix transposition operators in the main operation sub-module to obtain transposed data, and storing the transposed data into the target address.
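One way to picture clause A14's main operation sub-module driving several matrix transposition operators is to let each operator handle a disjoint slice of the input rows. The sketch below uses Python threads purely as a stand-in for hardware operators; the operator count and the row-partitioning scheme are assumptions:

```python
# Hypothetical sketch of clause A14: a main operation sub-module that
# partitions the input rows across several transpose "operators"
# (modeled here as worker threads writing disjoint output cells).
from concurrent.futures import ThreadPoolExecutor

def transpose_rows(flat_in, out, height, width, rows):
    """One matrix transposition operator: handles a slice of input rows."""
    for r in rows:
        for c in range(width):
            out[c * height + r] = flat_in[r * width + c]

def main_submodule_transpose(flat_in, height, width, n_operators=2):
    """Main sub-module: dispatch row slices, then return the assembled result."""
    out = [0] * (height * width)
    chunks = [range(i, height, n_operators) for i in range(n_operators)]
    with ThreadPoolExecutor(max_workers=n_operators) as pool:
        for rows in chunks:
            pool.submit(transpose_rows, flat_in, out, height, width, rows)
    return out  # the pool context manager waits for all operators to finish

data = [1, 2, 3, 4, 5, 6]  # 2 x 3
assert main_submodule_transpose(data, 2, 3) == [1, 4, 2, 5, 3, 6]
```

Because each operator writes a disjoint set of output cells, no locking is needed, which loosely mirrors why a transpose parallelizes well across multiple operators.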
Clause A15, the method of clause A12, the method further comprising:
storing the data to be operated by utilizing a storage module of the device,
wherein the storage module comprises at least one of a register and a cache,
the cache is used for storing the data to be operated, and comprises at least one neuron cache NRAM;
the register is used for storing scalar data in the data to be operated;
the neuron cache is used for storing neuron data in the data to be operated, wherein the neuron data comprises neuron vector data.
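A toy model of clause A15's storage module, routing scalar data to a register file and neuron vector data to a neuron cache (NRAM), might look as follows; the class and method names are illustrative assumptions, not from the patent:

```python
# Illustrative model of clause A15's storage module: scalars go to a
# register file, neuron (vector) data to a neuron cache (NRAM).
# All names here are assumptions made for the sketch.

class StorageModule:
    def __init__(self):
        self.registers = {}  # scalar data in the data to be operated
        self.nram = {}       # neuron vector data in the data to be operated

    def store(self, name, value):
        if isinstance(value, (int, float)):
            self.registers[name] = value      # scalar -> register
        else:
            self.nram[name] = list(value)     # vector -> neuron cache (NRAM)

    def load(self, name):
        return self.registers.get(name, self.nram.get(name))

mem = StorageModule()
mem.store("height", 2)              # scalar: kept in a register
mem.store("neurons", [1.0, 2.0])    # neuron vector: kept in NRAM
assert mem.load("height") == 2 and mem.load("neurons") == [1.0, 2.0]
```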
Clause A16, the method according to clause A12, wherein parsing the obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction includes:
storing the matrix transposition instruction;
analyzing the matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction;
and storing an instruction queue, wherein the instruction queue comprises a plurality of instructions to be executed which are sequentially arranged according to an execution sequence, and the plurality of instructions to be executed comprise the matrix transposition instruction.
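The store, parse, and enqueue steps of clause A16 can be sketched as a small pipeline; the text instruction format and the function names are assumptions made for illustration:

```python
# Rough sketch of clause A16's pipeline: buffer the raw instruction,
# decode it into (opcode, operation domain), and append decoded
# instructions to an in-order execution queue. The instruction text
# format is hypothetical.
from collections import deque

instruction_store = []       # raw fetched instructions
instruction_queue = deque()  # decoded instructions, arranged in execution order

def decode(raw):
    """Parse a raw instruction into its opcode and operation domain."""
    opcode, *operands = raw.split()
    return opcode, tuple(operands)

def accept(raw):
    instruction_store.append(raw)      # 1. store the matrix transposition instruction
    decoded = decode(raw)              # 2. parse opcode + operation domain
    instruction_queue.append(decoded)  # 3. enqueue among instructions to be executed
    return decoded

accept("MTRANS 0 2 3 16")
accept("MTRANS 16 3 2 32")
assert instruction_queue[0] == ("MTRANS", ("0", "2", "3", "16"))
assert len(instruction_queue) == 2
```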
Clause A17, the method of clause A16, the method further comprising:
when determining that the first to-be-executed instruction in the plurality of to-be-executed instructions is associated with a zeroth to-be-executed instruction before the first to-be-executed instruction, caching the first to-be-executed instruction, and after determining that the zeroth to-be-executed instruction is completely executed, controlling to execute the first to-be-executed instruction,
wherein the association relationship between the first to-be-executed instruction and the zeroth to-be-executed instruction before the first to-be-executed instruction comprises:
a first storage address interval storing the data required by the first to-be-executed instruction and a zeroth storage address interval storing the data required by the zeroth to-be-executed instruction having an overlapping area.
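Clause A17's dependency test, under which a first instruction must wait when its storage address interval overlaps that of an earlier zeroth instruction, reduces to an interval-overlap check. A sketch using hypothetical half-open address intervals:

```python
# Sketch of clause A17's dependency rule: a first instruction depends on
# an earlier zeroth instruction when their storage address intervals
# overlap, in which case the first is cached until the zeroth completes.
# Intervals are modeled as half-open (lo, hi) ranges, an assumption here.

def intervals_overlap(a, b):
    """True if half-open address intervals a = (lo, hi) and b = (lo, hi) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def has_dependency(first_intervals, zeroth_intervals):
    """The first instruction must wait if any of its intervals overlap the zeroth's."""
    return any(intervals_overlap(f, z)
               for f in first_intervals for z in zeroth_intervals)

# zeroth instruction writes [16, 22); first instruction reads [16, 22)
assert has_dependency([(16, 22)], [(16, 22)])
# disjoint intervals: no association relationship, both may proceed
assert not has_dependency([(0, 6)], [(16, 22)])
```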
Clause A18, a non-transitory computer-readable storage medium having computer program instructions stored thereon that, when executed by a processor, implement the method of any one of clauses A12-A17.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A matrix transposition instruction processing device, characterized in that the device comprises:
a control module, configured to parse an obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction, wherein the operation code is used for indicating that the operation performed on data by the matrix transposition instruction is a matrix transposition operation, and the operation domain comprises a data address to be operated, an input height, an input width and a target address, and to obtain, according to the operation code and the operation domain, the data to be operated, the target address, and the input height and input width of the data to be operated that are required for executing the matrix transposition instruction; and
an operation module, configured to perform a matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data, and to store the transposed data into the target address.
2. The device according to claim 1, wherein the operation module comprises:
a plurality of matrix transposition operators, configured to perform the matrix transposition operation on the data to be operated according to the input height and the input width,
wherein the height of the transposed data is equal to the input width, and the width of the transposed data is equal to the input height.
3. The device according to claim 2, wherein the operation module comprises a main operation sub-module and a plurality of slave operation sub-modules, the main operation sub-module comprising the plurality of matrix transposition operators,
the main operation sub-module being configured to perform, by using the plurality of matrix transposition operators, the matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data, and to store the transposed data into the target address.
4. A machine learning operation device, characterized in that the device comprises:
one or more matrix transposition instruction processing devices according to any one of claims 1-3, configured to obtain the data to be operated and control information from other processing devices, perform a specified machine learning operation, and pass the execution result to other processing devices through an I/O interface;
when the machine learning operation device contains a plurality of the matrix transposition instruction processing devices, the plurality of matrix transposition instruction processing devices can be connected through a specific structure and transmit data;
wherein the plurality of matrix transposition instruction processing devices are interconnected and transmit data through a peripheral component interconnect express (PCIe) bus to support larger-scale machine learning operations; the plurality of matrix transposition instruction processing devices share the same control system or have their own control systems; the plurality of matrix transposition instruction processing devices share a memory or have their own memories; and the plurality of matrix transposition instruction processing devices may be interconnected in any interconnection topology.
5. A combined processing device, characterized in that the combined processing device comprises:
the machine learning operation device according to claim 4, a universal interconnection interface, and other processing devices;
the machine learning operation device interacting with the other processing devices to jointly complete a computing operation specified by a user,
wherein the combined processing device further comprises: a storage device connected to the machine learning operation device and the other processing devices, respectively, for saving data of the machine learning operation device and the other processing devices.
6. A machine learning chip, characterized in that the machine learning chip comprises:
the machine learning operation device according to claim 4 or the combined processing device according to claim 5.
7. An electronic device, characterized in that the electronic device comprises:
the machine learning chip according to claim 6.
8. A board card, characterized in that the board card comprises: a storage device, an interface device, a control device, and the machine learning chip according to claim 6;
wherein the machine learning chip is connected to the storage device, the control device and the interface device, respectively;
the storage device is used for storing data;
the interface device is used for implementing data transmission between the machine learning chip and an external device; and
the control device is used for monitoring the state of the machine learning chip.
9. A matrix transposition instruction processing method, characterized in that the method is applied to a matrix transposition instruction processing device comprising a control module and an operation module, and the method comprises:
parsing, by the control module, an obtained matrix transposition instruction to obtain an operation code and an operation domain of the matrix transposition instruction, and obtaining, according to the operation code and the operation domain, the data to be operated, the target address, and the input height and input width of the data to be operated that are required for executing the matrix transposition instruction;
performing, by the operation module, a matrix transposition operation on the data to be operated according to the input height and the input width to obtain transposed data, and storing the transposed data into the target address,
wherein the operation code is used for indicating that the operation performed on data by the matrix transposition instruction is a matrix transposition operation, and the operation domain comprises a data address to be operated, the input height, the input width and the target address.
10. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to claim 9.
CN201910625494.5A (filed 2019-07-11, priority 2018-10-09) | Operation method, operation device, computer equipment and storage medium | Pending | published as CN111061507A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
PCT/CN2019/110167 (WO2020073925A1) | 2018-10-09 | 2019-10-09 | Operation method and apparatus, computer device and storage medium

Applications Claiming Priority (2)

Application Number | Priority Date
CN201811203361 | 2018-10-16
CN201811203361.0 | 2018-10-16

Publications (1)

Publication Number | Publication Date
CN111061507A | 2020-04-24

Family

Family ID: 70297407

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910625494.5A (pending) | Operation method, operation device, computer equipment and storage medium | 2018-10-09 | 2019-07-11

Country Status (1)

Country | Link
CN | CN111061507A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112163748A* | 2020-09-18 | 2021-01-01 | 江苏现代职教图书发行有限公司 | A data processing method, device and teaching material management information system
CN118012505A* | 2020-06-30 | 2024-05-10 | 上海寒武纪信息科技有限公司 | Artificial intelligence processors, integrated circuit chips, boards, electronic devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN100565105C* | 2008-02-03 | 2009-12-02 | 航天东方红卫星有限公司 | Integration time calculation and adjustment method for a satellite-borne TDICCD camera
CN107861757A* | 2017-11-30 | 2018-03-30 | 上海寒武纪信息科技有限公司 | Arithmetic unit and related product
CN107992329A* | 2017-07-20 | 2018-05-04 | 上海寒武纪信息科技有限公司 | A calculation method and related products
WO2018174926A1* | 2017-03-20 | 2018-09-27 | Intel Corporation | Systems, methods, and apparatuses for tile transpose


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王琦: "基于矩阵转置优化的Intel KNL特性分析", 《计算机工程与设计》*


Similar Documents

Publication | Title
CN110096309B | Computing method, apparatus, computer equipment and storage medium
CN110096310B | Operation method, operation device, computer equipment and storage medium
CN110119807B | Operation method, operation device, computer equipment and storage medium
CN110096283A | Operation method, device, computer equipment and storage medium
CN111047005A | Computing method, apparatus, computer equipment and storage medium
CN111353124A | Computing method, apparatus, computer equipment and storage medium
CN111061507A | Operation method, operation device, computer equipment and storage medium
CN111340202B | Computing method, device and related products
CN111290789B | Operation method, operation device, computer equipment and storage medium
CN111275197B | Operation method, device, computer equipment and storage medium
CN111047030A | Operation method, operation device, computer equipment and storage medium
CN111026440B | Operation method, operation device, computer equipment and storage medium
CN111124497B | Operation method, operation device, computer equipment and storage medium
CN111290788B | Computing method, apparatus, computer equipment and storage medium
CN112395008A | Operation method, operation device, computer equipment and storage medium
CN111338694B | Computing method, apparatus, computer equipment and storage medium
CN111339060B | Operation method, device, computer equipment and storage medium
CN111353125B | Operation method, operation device, computer equipment and storage medium
CN111353595A | Computing method, device and related products
CN112396169B | Operation method, device, computer equipment and storage medium
CN112395009A | Operation method, operation device, computer equipment and storage medium
CN111062483A | Computing method, apparatus, computer equipment and storage medium
CN112395006B | Operation method, device, computer equipment and storage medium
CN112395007A | Operation method, operation device, computer equipment and storage medium
CN112395001A | Operation method, operation device, computer equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2020-04-24

