CN111158757B - Parallel access device and method and chip - Google Patents

Parallel access device and method and chip

Info

Publication number
CN111158757B
Authority
CN
China
Prior art keywords
lane
address
access
step length
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911406669.XA
Other languages
Chinese (zh)
Other versions
CN111158757A (en)
Inventor
杨龚轶凡
郑瀚寻
闯小明
周远航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhonghao Xinying (Hangzhou) Technology Co.,Ltd.
Original Assignee
Zhonghao Xinying Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhonghao Xinying (Hangzhou) Technology Co., Ltd.
Priority to CN201911406669.XA
Publication of CN111158757A
Application granted
Publication of CN111158757B
Legal status: Active
Anticipated expiration

Abstract

The embodiments of the invention disclose a parallel access method, a parallel access device, and a chip, which can be used to perform parallel data storage or read operations in the field of integrated circuit technology. An address generator generates a target address for each of a plurality of lanes, and the lanes access the corresponding storage locations in RAM according to those target addresses, performing data access operations in parallel. When the address generator generates a target address for a lane, a lane step length generation unit produces the lane step length, which is K times the step length; under the control of the same SIMD control instruction the generated lane step lengths all differ, so the target addresses produced by the address generation unit cannot form an access conflict. The invention therefore reduces the power consumption of the related hardware while guaranteeing conflict-free parallel lane access to the memory, and shortens the overall time consumed by parallel data access operations.

Description

Parallel access device and method and chip
Technical Field
The present invention relates to the field of integrated circuit technologies, and in particular, to a parallel access apparatus and method, and a chip.
Background
With the development of science and technology and the progress of society, integrated circuit design has become widely applied, and more and more electronic devices have entered people's daily lives, not only bringing convenience but also further promoting technological innovation and research. In the field of integrated circuit design, data access is one of the most important technologies. A processor accesses memory through load and store instructions: a load instruction loads the data at the corresponding memory address into the corresponding register when the processor needs to use data in memory, and a store instruction stores the data in the corresponding register to the corresponding memory address when the processor needs to save data.
In application scenarios such as multimedia, big data, and artificial intelligence, data-parallel algorithms are frequently used; for example, neural networks require parallel operations on multiple matrices, and such operations must be performed on large numbers of data sets simultaneously. Operating on many data sets at once requires accessing those data sets in parallel. These scenarios mostly adopt SIMD (Single Instruction, Multiple Data) technology, which uses one controller to control multiple processing elements and thereby achieves parallelism in space: one control instruction processes multiple data items simultaneously and in parallel. The functional units of the SIMD extension responsible for operations such as load, store, and computation all support multiple parallel subunits working at once, so one SIMD extension instruction can operate on multiple elements at a time; these parallel subunits are called lanes. In parallel processing it often happens that two or more lanes point to exactly the same location in memory, i.e., there is an address access conflict. The prior art generally resolves access conflicts by providing a conflict detection device, which arbitrates the access requests of the conflicting lanes. Accessing data in parallel in this manner is not only inefficient, but also incurs high power consumption in the associated hardware and long data access times.
Disclosure of Invention
In view of the above, the present invention provides a parallel access apparatus and method and a chip, so as to solve the problems of inefficient data access operations, high power consumption of the related hardware, and long data access times that arise in integrated circuits when access conflicts are resolved by means of conflict detection.
In a first aspect, an embodiment of the present invention provides a parallel access apparatus including a memory and M lanes, where the memory comprises a plurality of storage groups, the number of storage groups is not less than the number of lanes M, and M is an integer not less than 2; the apparatus further comprises an immediate heap and an address generator, where the immediate heap is connected to the address generator, the address generator is connected to each lane, and each lane is connected to each storage group;
the immediate heap is used for providing address generation information and step size, and the step size is an odd number;
the address generator is used for receiving the SIMD control instruction, lane information and address generation information and generating a target address for a lane; the lane information includes a step size; the address generator includes a lane step length generation unit and an address generation unit, wherein:
the lane step length generation unit is used to generate lane step lengths according to the SIMD control instruction and the lane information, where each lane step length is K times the step length, K is an integer in the range [N, M+N-1], and N is an integer not less than 0;
the address generation unit is used to sum the address generation information and the lane step length according to the control instruction, and to output the obtained sum to the corresponding lane as its target address according to the control instruction;
the M lanes are used for accessing the corresponding storage groups according to respective target addresses and performing access operation in parallel.
The parallel access device provided by this embodiment of the invention uses the lane step length generation unit to generate each lane's lane step length from a step length set to an odd number, and then uses the lane step length to generate the lane's target address. This guarantees that the addresses generated for the lanes never conflict, so multiple lanes can access the memory in parallel accurately and in order according to their respective target addresses, avoiding access conflicts caused by parallel access. The prior art generally resolves multi-lane parallel access conflicts by providing a conflict detection method or device. Compared with the prior art, no conflict detection needs to be performed on the lanes' target addresses, i.e., no conflict detection device needs to be provided in the related hardware; this improves the execution efficiency of multi-lane parallel access operations, reduces the power consumption of the related hardware, and shortens the overall time consumed by parallel data access.
Preferably, the lane step length generation unit includes an arithmetic operation device for generating the lane step length. Controlled by the control instruction, the arithmetic operation device in the lane step length generation unit processes the lane information to obtain a value that is K times the step length in the lane information; this value is used to compute the lane's target address, ensuring that the lane target addresses produced by the address generator all differ and avoiding access conflicts during lane access.
More preferably, the arithmetic operation device includes a plurality of adders connected in cascade. The lane step length is calculated by progressively accumulating the step length in the lane information through the adders; this structure is simple, improves the operation rate, and reduces hardware power consumption.
More preferably, the arithmetic operation device includes an adder and a shifter, where the lane step length generation unit generates and outputs class-A lane step lengths using the shifter and class-B lane step lengths using the adder. This further simplifies the hardware structure of the lane step length generation unit and provides more flexibility in choosing how lane step lengths are calculated, further improving generation efficiency and reducing hardware power consumption.
Preferably, the address generation information includes a base address and an offset, and the address generation unit includes two adders for summing the base address, the offset, and the lane step length, the resulting sum being the target address. The base address serves as a common base for all lanes, and because each lane's lane step length differs, every lane can access a different storage group in parallel, avoiding access conflicts. Setting an offset provides additional flexibility in address generation while still guaranteeing that no access conflicts occur. Using two adders to sum the base address, the offset, and the lane step length minimizes the hardware cost.
In a second aspect, an embodiment of the present invention provides a parallel access method. Given a memory and M lanes, the method comprises the following steps:
step 110: dividing the memory into a plurality of storage groups, wherein the number of the storage groups is not less than M;
step 120: Acquiring a SIMD control instruction, sequentially generating at least two target addresses according to the SIMD control instruction, and sequentially sending the at least two target addresses to the corresponding lanes among the M lanes, where one target address can be sent to only one lane; the process of generating a single target address includes:
acquiring lane information according to the SIMD control instruction, where the lane information comprises a step length that is an odd number; generating a lane step length according to the lane information, where the lane step length is K times the step length, K is an integer in the interval [N, M+N-1], and N is an integer not less than 0;
acquiring address generation information according to the SIMD control instruction, summing the address generation information and the lane step length, and sending the obtained sum directly to the corresponding lane as the target address according to the SIMD control instruction; the lane step length generated in each pass of generating a single target address under the same SIMD control instruction is different;
step 130: After all the target addresses generated according to the SIMD control instruction have been sent to the corresponding lanes, those lanes start running simultaneously, accessing the corresponding storage groups according to their respective received target addresses and performing access operations in parallel.
The parallel access method provided by this embodiment of the invention generates each lane's lane step length from a step length set to an odd number, and then uses the lane step length to generate the lane's target address. This guarantees that the addresses generated for the lanes never conflict, so multiple lanes accessing the memory in parallel can proceed accurately and in order according to their respective target addresses, avoiding lane access conflicts caused by parallel access. The prior art generally resolves multi-lane parallel access conflicts by providing a conflict detection method or device. Compared with the prior art, no conflict detection needs to be performed on the lanes' target addresses, i.e., no conflict detection device needs to be provided in the related hardware; this improves the execution efficiency of multi-lane parallel access operations, reduces the power consumption of the related hardware, and shortens the overall time consumed by parallel data access.
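The conflict-freedom claim can be checked exhaustively for a small configuration. Assuming, as the bank-numbering scheme described later implies, that M is a power of two and that the bank is selected by the low log2(M) bits of the address, an odd step length guarantees that M consecutive multiples of it land in M distinct banks (the function name is illustrative):

```python
# Why an odd step length avoids conflicts: an odd number is coprime to
# any power of two, so (K * step) mod M takes M distinct values as K
# runs over any window of M consecutive integers [N, M+N-1].

def banks_hit(base, offset, step, M, N=0):
    """Return the set of bank numbers touched by the M lanes."""
    return {(base + offset + (N + k) * step) % M for k in range(M)}

# Exhaustive check for a small hypothetical configuration:
M = 8
for step in range(1, 64, 2):          # every odd step length up to 63
    for N in range(16):               # several window offsets
        assert len(banks_hit(0, 0, step, M, N)) == M   # all banks distinct
```

An even step length would fail this check (e.g. step 2 with M = 8 hits only four banks), which is why the claims require the step length to be odd.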
Preferably, the address generation information includes a base address and an offset, and step 120 includes summing the base address, the offset, and the lane step length, using the resulting sum as the target address. The base address serves as a common base for all lanes, and because each lane's lane step length differs, every lane can access a different storage group in parallel, avoiding access conflicts. Setting an offset provides additional flexibility in address generation while still guaranteeing that no access conflicts occur.
Preferably, the aforementioned method further provides a lane step length generation unit that includes an arithmetic operation device controlled by the SIMD control instruction to generate the lane step length. Under control of the instruction, the arithmetic operation device processes the lane information to obtain a value that is K times the step length in the lane information, where K is an integer not less than 0; this value is used to compute the lane's target address, ensuring that the lane target addresses produced by the address generator all differ and avoiding access conflicts during lane access.
In particular, in the aforementioned parallel access method, each storage group has its own group number; with M lanes, the low log2(M) bits of the target address are the group number. Determining from the number of lanes how many bits of the target address represent the group number, and recording the group number directly in the generated target address, removes the need for a separate step and device for recording group numbers and simplifies the related hardware.
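A sketch of the address split just described, assuming M is a power of two (the helper name is hypothetical):

```python
# Split a target address into (group number, real address): the low
# log2(M) bits select the storage group, the remaining high bits are
# the real address within that group.

def split_address(target, M):
    bits = M.bit_length() - 1          # log2(M), M assumed a power of two
    group = target & (M - 1)           # low log2(M) bits: group number
    real = target >> bits              # remaining bits: real address
    return group, real

# e.g. with M = 4 lanes, address 0b1101 splits into group 0b01, real 0b11
assert split_address(0b1101, 4) == (0b01, 0b11)
```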
In a third aspect, an embodiment of the present invention provides a chip. The chip includes a computer-readable storage medium for storing a computer program, and a processor that comprises the parallel access device disclosed above; the processor is configured to implement the steps of the aforementioned parallel access method when executing the computer program stored in the readable storage medium.
The invention can be further combined to provide more implementation modes on the basis of the implementation modes provided by the aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of a parallel access method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a parallel access device 200 according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an address generator 300 according to an embodiment of the present invention;
FIG. 4(A) is a schematic structural diagram of a lane step length generation unit 410 according to an embodiment of the present invention;
FIG. 4(B) is a schematic structural diagram of a lane step length generation unit 420 according to an embodiment of the present invention;
FIG. 4(C) is a schematic structural diagram of a lane step length generation unit 430 according to an embodiment of the present invention;
FIG. 5 is a block diagram of an address generator 500 according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a lane 600 according to an embodiment of the present invention;
FIG. 7 is a sample of data that needs to be stored according to an embodiment of the present invention;
FIG. 8 is a RAM memory provided by an embodiment of the present invention for storing the data of FIG. 7;
FIG. 9 is a schematic structural diagram of a four-lane parallel access device 900 according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a step length generation unit 1000 for four lanes according to an embodiment of the present invention;
FIG. 11 is a state diagram of the memory 800 after the first element of each matrix of FIG. 7 has been stored;
FIG. 12 is a state diagram of the memory 800 after all elements of each matrix in FIG. 7 have been stored;
FIG. 13 is a schematic structural diagram of a chip 1300 according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "connected to" another element, or "coupled" to one or more other elements, it can be directly connected to the other element or be indirectly connected to the other element.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following specifically describes embodiments of the present invention.
Fig. 1 is a schematic flow chart of a parallel access method according to an embodiment of the present invention. The method can realize the simultaneous data access operation of a plurality of lanes under the control of the SIMD instruction. As shown in fig. 1, the method comprises the steps of:
step 110: the RAM memory is divided into a plurality of memory banks. The memory mentioned in the embodiment of the invention is a RAM memory, namely a random access memory. The RAM memory may be divided into a number of memory groups, including a number of memory banks for storing data. The embodiment of the invention divides the memory into a plurality of memory groups and sets a group number for each memory group, wherein the number of the memory groups is not less than the number of lanes, and if the number of the lanes is M and M is not less than 2, the number of the memory groups is more than or equal to M, and the number of the memory groups is preferably equal to the number of the lanes.
Step 120: A plurality of target addresses are generated sequentially according to the SIMD control instruction and sent to the corresponding lanes. Before a lane can perform a data access operation it must first acquire a target address; the lane then accesses the memory bank in the storage group to which the target address points. At least two target addresses are generated sequentially according to the SIMD control instruction and sent directly to the corresponding lanes among the M lanes, where one target address can be sent to only one lane. The specific number of target addresses generated, determined by the SIMD control instruction, may be any integer within [2, M]. The process of generating a single target address includes:
obtaining lane information according to the SIMD control instruction, where the lane information comprises a step length that is an odd number; a lane step length generation unit generates a lane step length from the lane information, where the lane step length is K times the step length, K is an integer in the range [N, M+N-1], and N is an integer not less than 0;
acquiring address generation information according to the SIMD control instruction, where the address generation information comprises a base address and an offset; summing the address generation information and the lane step length according to the SIMD control instruction, i.e., summing the base address, the offset, and the lane step length, and sending the obtained sum directly to a lane as the target address. The target address consists of a real address and a group number: the low log2(M) bits of the target address are the group number, the remaining bits are the real address, and the real address points to a memory bank. The lane step length generated in each pass of generating a single target address according to the SIMD control instruction is different.
Step 130: The lanes that have acquired target addresses start running simultaneously and access the corresponding storage groups in parallel to perform access operations. After the lanes acquire their respective target addresses, they start operating at the same time, access the storage groups corresponding to the group numbers contained in their target addresses, then access the corresponding memory banks according to the real addresses, and perform data access operations in parallel.
The parallel access method provided by this embodiment of the invention generates each lane's lane step length from a step length set to an odd number, and then uses the lane step length to generate the lane's address. This guarantees that the addresses generated for the lanes never conflict, so multiple lanes accessing the memory in parallel can proceed accurately and in order according to their respective target addresses, and no access conflicts occur during parallel access. The prior art generally resolves multi-lane parallel access conflicts by providing a conflict detection method or device. Compared with the prior art, no conflict detection needs to be performed on the lanes' target addresses, i.e., no conflict detection device is required in the related hardware; this improves the execution efficiency of multi-lane parallel access operations, reduces the power consumption of the related hardware, and shortens the overall time consumed by parallel data access.
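The three steps above can be sketched as a minimal software simulation, assuming a power-of-two lane count and the group-number/real-address split described in step 120; the function name, data layout, and store payload are illustrative:

```python
# Simulation of step 130: each lane takes its target address, selects
# the storage group from the low log2(M) bits and the real address from
# the high bits, then performs its access (here a store). The asserted
# invariant is that no two lanes ever touch the same group.

def parallel_store(memory, targets, values, M):
    used = set()
    for target, value in zip(targets, values):
        group = target & (M - 1)                  # low log2(M) bits
        real = target >> (M.bit_length() - 1)     # remaining bits
        assert group not in used                  # conflict-free by construction
        used.add(group)
        memory[group][real] = value

M = 4
memory = [dict() for _ in range(M)]               # one dict per storage group
targets = [k * 5 for k in range(M)]               # base 0, offset 0, odd step 5
parallel_store(memory, targets, list("abcd"), M)
```

With the odd step length 5 the four targets 0, 5, 10, 15 fall into groups 0, 1, 2, 3, so in hardware the four stores could proceed in the same cycle.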
Fig. 2 is a schematic structural diagram of a parallel access apparatus 200 according to an embodiment of the present invention. As shown in fig. 2, the parallel access apparatus 200 includes an immediate heap 210, an address generator 220, M lanes 230 (M not less than 2), and a RAM memory 240; the RAM memory 240 is divided into Q memory groups 241, and each memory group 241 includes a plurality of memory banks for storing data, where Q is not less than M, and preferably Q equals M. The immediate heap 210 is coupled to the address generator 220; the address generator 220 is connected to each lane 230; each memory bank has a read/write port, and each lane 230 is connected to the read/write port (not shown) of each memory group 241 in the RAM memory 240.
The immediate heap 210 is used to provide the address generation information and the step length, where the step length is an odd number. On receiving an external request signal, the immediate heap 210 sends the corresponding address generation information and step length to the address generator 220; the request signal may be sent by the address generator 220 or by another external control device.
The address generator 220 is used to receive the control instruction and the lane information, as well as the address generation information from the immediate heap 210. The control instruction comprises a SIMD instruction decoded by a decoder; the lane information includes the step length from the immediate heap 210 and also includes a 0 provided by the external device; the address generation information includes a base address and also an offset.
Fig. 3 is a schematic structural diagram of an address generator 300 according to an embodiment of the present invention. As shown in fig. 3, the address generator 300 includes a lane step length generation unit 301 and an address generation unit 302. The lane step length generation unit 301 is configured to receive the lane information and calculate a lane step length according to the control instruction, where the lane step length is K times the step length in the lane information, K is an integer in the range [N, M+N-1], N is an integer not less than 0, and the lane step lengths generated under the control of the same SIMD control instruction are all different. After generating a lane step length, the lane step length generation unit 301 sends it to the address generation unit 302, which receives the lane step length and the address generation information and generates a target address for the lane from them.
In a preferred embodiment, the lane step length generation unit generates the lane step length using a shifter and an adder. Fig. 4(A) is a schematic structural diagram of a lane step length generation unit 410 according to an embodiment of the present invention. As shown in fig. 4(A), the lane step length generation unit 410 includes transmission lines 411 and 412, a shifter 413, and an adder 414. Suppose M lanes are used to perform the data access operation, numbered lane 0, lane 1, lane 2, ..., lane M-1. The lane step length generation unit receives the lane information, which comprises a 0 from the external device and the step length from the immediate heap. The transmission line 411 is used to take 0 as the lane step length of lane 0 according to the SIMD control instruction and transmit it directly to the address generation unit; the transmission line 412 is used to transmit the step length received from the immediate heap directly to the address generation unit as the lane step length of lane 1; both the 0 received from the external device and the step length received from the immediate heap constitute lane information. The shifter 413 is used to shift the step length received from the immediate heap according to the SIMD control instruction and output the resulting shift result directly to the address generation unit as the lane step length of a class-A lane, where a class-A lane is a lane whose serial number is 2^P, P being an integer not less than 1; the lane step length directly output by the shifter 413 is also referred to as a class-A lane step length.
The adder 414 is used to sum the shift result output by the shifter 413 and the step length received from the immediate heap according to the SIMD control instruction, and to output the resulting sum directly to the address generation unit as the lane step length of a class-B lane, where a class-B lane is any lane other than lane 0, lane 1, and the class-A lanes; the lane step length directly output by the adder 414 is also referred to as a class-B lane step length. In addition, the adder 414 may be configured to receive the shift results of two shifters 413, calculate their sum, and output it as a class-B lane step length; to receive the output of another adder 414 and the step length received from the immediate heap, calculate their sum, and output it as a class-B lane step length; or to receive the output of another adder 414 and the output of the shifter 413, calculate their sum, and output it as a class-B lane step length.
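A behavioural model of the shifter-and-adder scheme of Fig. 4(A), under the assumption that a class-B lane step length is formed by adding the shifter outputs for the powers of two in the lane number; the function name is hypothetical:

```python
# Shifter+adder lane step generation: lane 0 -> 0 (line 411), lane 1 ->
# step (line 412), lane 2**P -> step << P (shifter 413), any other lane ->
# sum of shifted steps for its set bits (adder 414 combining outputs).

def lane_step_shift_add(step, lane):
    if lane == 0:
        return 0                                   # transmission line 411
    if lane == 1:
        return step                                # transmission line 412
    if lane & (lane - 1) == 0:                     # lane number is 2**P: class A
        return step << (lane.bit_length() - 1)     # shifter 413: step << P
    # class-B lane: add shifter outputs for each set bit of the lane number
    return sum(step << p for p in range(lane.bit_length()) if (lane >> p) & 1)
```

For eight lanes and step length 3 this yields 0, 3, 6, ..., 21, i.e. exactly K times the step length for K in [0, M-1], matching the claimed interval with N = 0.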
In another preferred embodiment, the lane step length generation unit generates the lane step lengths using a plurality of adders. Fig. 4(B) is a schematic structural diagram of a lane step length generation unit 420 according to an embodiment of the present invention. As shown in fig. 4(B), the lane step length generation unit 420 includes transmission lines 421 and 422 and a plurality of adders 423 connected in cascade. Suppose M lanes are used to perform the data access operation, numbered lane 0, lane 1, lane 2, lane 3, ..., lane M-1. The lane step length generation unit receives the lane information, which comprises a 0 from the external device and the step length from the immediate heap. The transmission line 421 is used to take the 0 in the lane information as the lane step length of lane 0 according to the SIMD control instruction and transmit it directly to the address generation unit; the transmission line 422 is used to transmit the step length received from the immediate heap directly to the address generation unit as the lane step length of lane 1. The adders 423 are connected in cascade: an adder 423 can receive two copies of the step length from the immediate heap, calculate their sum, and output it directly to the address generation unit as the lane step length of lane 2; it can output its own result to another adder 423; or it can calculate the sum of the result received from another adder 423 and the step length from the immediate heap and output the resulting sum directly to the address generation unit as a lane step length.
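The cascaded-adder scheme of Fig. 4(B) can be modelled as a running sum, each cascaded adder contributing one more step length (an illustrative sketch; the function name is hypothetical):

```python
# Cascaded adders: lane 0 -> 0 (line 421), lane 1 -> step (line 422),
# then each adder 423 adds one more step to the previous adder's output,
# so lane k receives k * step.

def lane_steps_cascade(step, M):
    steps, acc = [0, step], step
    for _ in range(2, M):
        acc += step                    # next adder 423 in the cascade
        steps.append(acc)
    return steps[:M]
```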
In another preferred embodiment, the lane step length generating unit generates the lane step lengths using a plurality of multipliers. Fig. 4(C) is a schematic structural diagram of a lane step length generating unit 430 according to an embodiment of the present invention. As shown in fig. 4(C), the lane step length generating unit 430 includes M multipliers 431, namely multiplier 0, multiplier 1, ..., multiplier M-1, where 0, 1, ..., M-1 are the multiplier numbers. Each multiplier has a fixed value written into it that is equal to its own number. Suppose that M lanes are used to perform the data access operation, numbered lane 0, lane 1, lane 2, lane 3, ..., lane M-1. The lane step length generating unit receives lane information including the step length from the immediate file; multiplier 0 receives the step length from the immediate file, multiplies it by the fixed value written into it to obtain the lane step length of lane 0, and outputs the result directly to the address generation unit. The other multipliers work in the same way as multiplier 0: each receives the step length from the immediate file, multiplies it by its fixed value, obtains the lane step length of the lane with the same number as the multiplier, and outputs it directly to the address generation unit. In the three preferred embodiments described above, the lane step length generated for each lane is in turn K times the step length in the lane information, with K ranging over [0, M-1]. In some other preferred embodiments, the range of K can be set according to the specific situation; a selectable interval is [N, M+N-1], where N is an integer not less than 1.
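As an illustrative sketch (not the claimed circuitry; the function name is ours), the three embodiments above all amount to producing, for lane K, a lane step length equal to K times the step length, using only pass-through wires, shifters, and adders. The shift-and-add decomposition used by the shifter/adder variants can be modeled as:

```python
# Hypothetical model of lane step length generation: lane K receives K times
# the step length, built from shifts (multiplication by powers of two) and
# adds, so no general multiplier is required.

def lane_steps_shift_add(step, m):
    """Return the M lane step lengths [0*step, 1*step, ..., (M-1)*step]."""
    steps = []
    for k in range(m):
        acc = 0
        bit = 0
        kk = k
        while kk:                    # shift-and-add decomposition of k*step
            if kk & 1:
                acc += step << bit   # a shifter output feeds an adder
            kk >>= 1
            bit += 1
        steps.append(acc)
    return steps

print(lane_steps_shift_add(3, 4))    # [0, 3, 6, 9]
```

With step length 3 and four lanes this reproduces the lane step lengths 0, 3, 6, 9 used in the worked example later in this description.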
In fig. 3, the address generation unit 302 is configured to receive the lane step length sent by the lane step length generating unit and the address generation information sent by the immediate file 210, and to sum the lane step length with the address generation information; the resulting sum is the target address of one lane. Fig. 5 shows a preferred implementation of the address generator provided by an embodiment of the present invention, namely a schematic structural diagram of an address generator 500. As shown in fig. 5, the address generator 500 includes a first adder 501 and a second adder 502. The first adder 501 is configured to receive the address generation information, calculate the sum of the base address and the offset in the address generation information, and send the sum to the second adder 502 as an intermediate result; the second adder 502 is configured to receive the lane step length and the intermediate result sent by the first adder 501, sum the two, and output the resulting sum as the target address of the lane. In another embodiment, the second adder 502 may instead receive the address generation information, calculate the sum of the base address and the offset, and send the sum to the first adder 501; the first adder 501 then receives the lane step length and the result sent by the second adder 502, sums them, and outputs the resulting sum as the target address of the lane.
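A minimal software model of the two-adder address generation unit described above (the function name is ours, not the patent's):

```python
# Sketch of the address generation unit of Fig. 5: the first adder sums the
# base address and the offset from the address generation information; the
# second adder adds the lane step length to produce the lane's target address.

def target_address(base, offset, lane_step):
    intermediate = base + offset      # first adder 501
    return intermediate + lane_step   # second adder 502

# Lane 2 in the later example: base 0000, offset 0000, lane step 0110 (2 x 3).
print(bin(target_address(0b0000, 0b0000, 0b0110)))   # 0b110, i.e. 0110
```

Because addition is associative, the alternative embodiment that swaps the roles of the two adders produces the same target address.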
In fig. 2, the lanes 230 receive their respective target addresses, start operating at the same time, locate the memory groups according to the group numbers in their respective target addresses, and then access the corresponding memory banks according to the physical addresses to perform the data access operation. Fig. 6 is a schematic structural diagram of a lane 600 according to an embodiment of the present invention. As shown in fig. 6, the lane 600 includes control judgment logic 601, a register file 602, and an arithmetic logic unit (ALU) 603. The control judgment logic 601 is configured to receive the target address and identify the group number and the physical address in it, so as to locate the memory group to be accessed by the lane according to the group number and to locate the memory bank within that memory group according to the physical address. The register file 602 includes a plurality of registers, among them a target register for storing a source operand, where the source operand is data loaded from memory by the lane for calculation by the ALU 603; the register file 602 also includes a result register for storing the results of operations performed by the ALU 603. When executing a load instruction, the lane loads the data at the target address in memory into the target register; when executing a store instruction, the lane stores the data in the result register to the target address in memory.
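An illustrative software model (class and method names are ours) of what a lane does with its target address on a load or store:

```python
# Sketch of the lane of Fig. 6: the control judgment logic splits the target
# address into a group number (the low log2(M) bits) that selects the memory
# group, while the full physical address selects the bank; a load fills the
# target register and a store writes the result register back to memory.

class Lane:
    def __init__(self, m=4):
        self.m = m                  # number of memory groups (= number of lanes)
        self.target_reg = None      # source operand for the ALU
        self.result_reg = None      # ALU result awaiting a store

    def decode(self, target_addr):
        group_number = target_addr % self.m   # low 2 bits when m == 4
        return group_number, target_addr      # group locates the memory group,
                                              # full address locates the bank

    def load(self, ram, target_addr):
        self.target_reg = ram[target_addr]    # load: memory -> target register

    def store(self, ram, target_addr):
        ram[target_addr] = self.result_reg    # store: result register -> memory

ram = {0b0110: 42}
lane = Lane()
print(lane.decode(0b0110))   # (2, 6): group number 10, physical address 0110
lane.load(ram, 0b0110)
print(lane.target_reg)       # 42
```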
The parallel access device provided by the embodiment of the present invention uses the lane step length generating unit to generate the lane step length of each lane from a step length set to an odd number, and then uses the lane step lengths to generate the target addresses of the lanes. This guarantees that the addresses generated for the lanes never conflict, so that multiple lanes can access the memory in parallel, accurately and in order, each according to its own target address, and access conflicts caused by parallel lane access are avoided. In the prior art, the problem of multi-lane parallel access conflicts is generally solved by providing a conflict detection method or device. Compared with the prior art, no conflict detection needs to be performed on the lane target addresses here, i.e., no conflict detection device needs to be provided in the related hardware, which improves the execution efficiency of multi-lane parallel memory access, reduces the power consumption of the related hardware, and shortens the overall time consumed by parallel data access.
For a better understanding of the invention, a simple example of the working process of the aforementioned parallel access method and parallel access device is now given. Fig. 7 shows a sample of data to be stored according to an embodiment of the present invention. As shown in fig. 7, the data sample includes 4 matrices, all of which have already been stored in the corresponding target registers in the register files of the lanes. Fig. 8 shows a RAM memory for storing the data of fig. 7 according to an embodiment of the present invention. As shown in fig. 8, the RAM memory is divided into four memory groups; since the number of lanes M equals 4, the lower 2 bits (i.e., log2(4) = 2 bits) of a physical address in the RAM memory form the group number. Each memory group has one read-write port and includes four memory banks, each bank corresponding to one physical address: the banks with physical addresses 0000, 0100, 1000, and 1100 form the memory group with group number 00; the banks with physical addresses 0001, 0101, 1001, and 1101 form the memory group with group number 01; the banks with physical addresses 0010, 0110, 1010, and 1110 form the memory group with group number 10; and the banks with physical addresses 0011, 0111, 1011, and 1111 form the memory group with group number 11.
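The address layout just described can be checked with a short sketch, assuming M = 4 memory groups of 4 banks each: the low 2 bits of a 4-bit physical address give the group number, so each group collects every fourth address.

```python
# Model of the address layout of Fig. 8: 16 banks with 4-bit physical
# addresses; the group number of an address is its low log2(4) = 2 bits.

M = 4

def group_of(addr):
    return addr & (M - 1)          # group number = low 2 bits

groups = {}
for addr in range(16):             # physical addresses 0000 .. 1111
    groups.setdefault(group_of(addr), []).append(format(addr, "04b"))

for g in sorted(groups):
    print(format(g, "02b"), groups[g])
# 00 ['0000', '0100', '1000', '1100']
# 01 ['0001', '0101', '1001', '1101']
# 10 ['0010', '0110', '1010', '1110']
# 11 ['0011', '0111', '1011', '1111']
```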
To store the four matrices shown in fig. 7 in parallel into the memory shown in fig. 8, four lanes are required to access the memory simultaneously. Fig. 9 is a schematic structural diagram of a four-lane parallel access device 900 according to an embodiment of the present invention. As shown in fig. 9, the parallel access device 900 includes an immediate file 910, an address generator 920, four lanes 930, and a RAM memory 940 containing four memory groups 941. After receiving the SIMD store control instruction decoded by the decoder, the address generator 920 obtains the relevant information according to the instruction, generates four target addresses in sequence, and sends each generated target address to the corresponding lane; the lanes start simultaneously after receiving their target addresses and access the RAM memory 940 in parallel to perform the store operation. The RAM memory 940 is identical to the memory 800 shown in fig. 8. One SIMD store control instruction performs the store operation for one element of each matrix. The specific process is as follows:
The address generator 920 sends a request for the address generation information and the lane information to the immediate file 910 according to the received SIMD control instruction. After receiving the request, the immediate file 910 sends the corresponding address generation information and lane information to the address generator 920; the address generation information includes a base address and an offset, and the lane information includes a step length and the value 0 (i.e., 0000). Here, for ease of understanding, the base address in the address generation information is set to 0000 and the offset is set to 0000; in some other embodiments, the base address and the offset may take other values. The step length is set to 3, i.e., 0011. It should be noted that the step length provided in this application may also be any other odd number; the step length 3 is merely an illustrative example chosen for ease of understanding. Fig. 10 is a schematic structural diagram of a lane step length generating unit 1000 for computing the step lengths of four lanes according to an embodiment of the present invention. The address generator 920 receives the address generation information and the lane information and generates the lane step lengths using the lane step length generating unit 1000 shown in fig. 10. As shown in fig. 10, the lane step length generating unit 1000 includes a transmission line 1001, a transmission line 1002, a shifter 1003, and an adder 1004. The target addresses are generated using an address generation unit 500 as shown in fig. 5.
When generating the target address of lane 0, the lane step length generating unit 1000 receives the lane information, which includes 0000 from the external device and the step length 0011 from the immediate file 910. According to the SIMD control instruction, the lane step length generating unit 1000 uses the transmission line 1001 to output the 0000 in the lane information directly to the address generation unit 500 as the lane step length of lane 0 (i.e., 0 times the step length 0011). After receiving the lane step length 0000 and the address generation information (i.e., the base address 0000 and the offset 0000), the address generation unit 500 adds the base address 0000 and the offset 0000 in the first adder 501 and sends the result 0000 to the second adder 502; the second adder 502 adds the lane step length 0000 to the result 0000 of the first adder 501, the resulting sum 0000 is the target address of lane 0, and the target address 0000 is sent directly to lane 0. After the target address of lane 0 has been generated, the address generator 920 continues to generate the target address for lane 1.
When generating the target address of lane 1, the lane step length generating unit 1000 receives the lane information, which includes 0000 from the external device and the step length 0011 from the immediate file 910. According to the SIMD control instruction, the lane step length generating unit 1000 uses the transmission line 1002 to output the step length 0011 in the lane information directly to the address generation unit 500 as the lane step length of lane 1 (i.e., 1 times the step length 0011). After receiving the lane step length 0011 and the address generation information (i.e., the base address 0000 and the offset 0000), the address generation unit 500 adds the base address 0000 and the offset 0000 in the first adder 501 and sends the result 0000 to the second adder 502; the second adder 502 adds the lane step length 0011 to the result 0000 of the first adder 501, the resulting sum 0011 is the target address of lane 1, and the target address 0011 is sent directly to lane 1. After the target address of lane 1 has been generated, the address generator 920 continues to generate the target address for lane 2.
When generating the target address of lane 2, the lane step length generating unit 1000 receives the lane information, which includes 0000 from the external device and the step length 0011 from the immediate file 910. According to the SIMD control instruction, the lane step length generating unit 1000 sends the step length 0011 to the shifter 1003; the shifter 1003 shifts 0011 left by one bit to obtain the shift result 0110, which is output directly to the address generation unit 500 as the lane step length of lane 2 (i.e., 2 times the step length 0011). After receiving the lane step length 0110 and the address generation information (i.e., the base address 0000 and the offset 0000), the address generation unit 500 adds the base address 0000 and the offset 0000 in the first adder 501 and sends the result 0000 to the second adder 502; the second adder 502 adds the lane step length 0110 to the result 0000 of the first adder 501, the resulting sum 0110 is the target address of lane 2, and the target address 0110 is sent directly to lane 2. After the target address of lane 2 has been generated, the address generator 920 continues to generate the target address for lane 3.
When generating the target address of lane 3, the lane step length generating unit 1000 receives the lane information, which includes 0000 from the external device and the step length 0011 from the immediate file 910. According to the SIMD control instruction, the lane step length generating unit 1000 sends the step length 0011 to the shifter 1003; the shifter 1003 shifts 0011 left by one bit to obtain the shift result 0110 and sends it to the adder 1004. Meanwhile, the lane step length generating unit 1000 also sends the step length 0011 to the adder 1004 according to the SIMD control instruction; the adder 1004 sums the received shift result 0110 and the step length 0011, and outputs the resulting sum 1001 to the address generation unit 500 as the lane step length of lane 3 (i.e., 3 times the step length 0011). After receiving the lane step length 1001 and the address generation information (i.e., the base address 0000 and the offset 0000), the address generation unit 500 adds the base address 0000 and the offset 0000 in the first adder 501 and sends the result 0000 to the second adder 502; the second adder 502 adds the lane step length 1001 to the result 0000 of the first adder 501, the resulting sum 1001 is the target address of lane 3, and the target address 1001 is sent directly to lane 3.
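The four target addresses just derived can be recomputed in a few lines. Because the step length is odd, the low 2 bits (the group numbers) of the four target addresses are pairwise different, so the four lanes access four distinct memory groups with no conflict:

```python
# Recompute the example: base address 0000, offset 0000, step length 3 (0011);
# lane K receives a lane step length of K times the step length.

base, offset, step, m = 0b0000, 0b0000, 0b0011, 4
target_addrs = [base + offset + k * step for k in range(m)]
print([format(a, "04b") for a in target_addrs])   # ['0000', '0011', '0110', '1001']
print(sorted(a % m for a in target_addrs))        # [0, 1, 2, 3] -> four distinct groups
```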
After the target addresses of the four lanes have been generated, the SIMD control instruction controls the four lanes to start running simultaneously; they access, in parallel, the memory groups corresponding to the group numbers in their target addresses in the memory 800, and then access the memory banks corresponding to the physical addresses in the target addresses to store the matrix elements. Lane 0 stores the first element, 1, of matrix a1 into bank 00 of memory group 00, i.e., the bank with physical address 0000 in fig. 8; lane 1 stores the first element, 2, of matrix a2 into bank 00 of memory group 11, i.e., the bank with physical address 0011 in fig. 8; lane 2 stores the first element, 3, of matrix a3 into bank 01 of memory group 10, i.e., the bank with physical address 0110 in fig. 8; lane 3 stores the first element, 4, of matrix a4 into bank 10 of memory group 01, i.e., the bank with physical address 1001 in fig. 8. The state of the memory 800 after this store is completed is shown in fig. 11.
The process of storing the remaining three elements of each matrix is similar, except that the SIMD store control instruction for the second element controls the address generator 920 to obtain 0100 from the immediate file 910 as the base address; the instruction for the third element obtains the base address 1000; and the instruction for the fourth element obtains the base address 1100. The state of the memory 800 after all stores are completed is shown in fig. 12. That is, completing the storage of the four elements of each matrix requires four SIMD control instructions, each of which controls the entire sequence of operations from address generation to the lanes accessing the memory to store the data.
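An end-to-end sketch of the four-instruction store sequence: four SIMD store instructions with base addresses 0000, 0100, 1000, 1100, each writing one element from every lane. We assume here that the 4-bit physical address wraps around modulo 16 (an assumption of this sketch, under which all 16 banks end up filled); with the odd step length 3, the 16 stores hit 16 distinct banks without any conflict.

```python
# Simulate the store example: step length 3, four lanes, four SIMD store
# instructions; the RAM model maps each 4-bit physical address to the
# (lane, element) pair that wrote it. Values are placeholders, not the
# actual matrix contents of Fig. 7.

step, m, nbits = 3, 4, 4
ram = {}
for elem in range(4):                  # one SIMD store instruction per element
    base = elem << 2                   # base addresses 0000, 0100, 1000, 1100
    for lane in range(m):
        addr = (base + lane * step) % (1 << nbits)   # assumed 4-bit wrap-around
        assert addr not in ram         # no two stores ever collide
        ram[addr] = (lane, elem)
print(len(ram))                        # 16: every bank holds exactly one element
```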
The process of data reading (loading) is similar to the store process. The difference is that in the store process, after the address generator generates a target address for each lane, the four lanes start simultaneously and store the matrix data held in the register files of the lanes into the corresponding target addresses in the memory in parallel; in the load process, after the address generator generates a target address for each lane from the same step length, base address, and offset as in the store process, the four lanes start running simultaneously and load the matrix data stored at the target address corresponding to each lane in the memory into the register files of the lanes in parallel. The detailed process is not repeated here. It should be understood that the above description uses four lanes only as an example of the operation of the parallel access method and device provided by the present invention, and does not mean that the present invention can only realize four-lane parallel access. It can be understood that the parallel access method and device provided by the present invention can easily be generalized from the four-lane case to M lanes, where M is an integer not less than 2.
Fig. 13 is a schematic structural diagram of a chip 1300 according to an embodiment of the present invention. The chip 1300 shown in fig. 13 includes one or more processors 1301, a communication interface 1302, and a computer-readable storage medium 1303; the processor 1301, the communication interface 1302, and the computer-readable storage medium 1303 may be connected by a bus, or may communicate by other means such as wireless transmission. The embodiment of the present invention takes connection via a bus 1304 as an example. The computer-readable storage medium 1303 is used for storing instructions, and the processor 1301, which includes the parallel access device disclosed in the above embodiments, is used for executing the instructions stored in the computer-readable storage medium 1303. In another embodiment, the computer-readable storage medium 1303 is used for storing program code, and the processor 1301 may call the program code stored in the computer-readable storage medium 1303 to implement the related functions of the parallel access device; for details, refer to the related descriptions in the foregoing embodiments, which are not repeated here.
It should be understood that in the embodiments of the present invention, the processor 1301 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The communication interface 1302 may be a wired interface (e.g., an Ethernet interface) or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other modules or devices. For example, the communication interface 1302 in the embodiment of the present application may be specifically configured to receive data input by a user, or to receive data from an external device, etc.
The computer-readable storage medium 1303 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); it may also comprise a combination of the above kinds of memory. The computer-readable storage medium may be configured to store a set of program code, so that the processor can call the program code stored in it to implement the aforementioned parallel access method or the related functions of the parallel access device.
It should be noted that fig. 13 is only one possible implementation manner of the embodiment of the present invention, and in practical applications, the chip may further include more or less components, which is not limited herein. For the content that is not shown or described in the embodiment of the present invention, reference may be made to the relevant explanation in the foregoing method embodiment, which is not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium in which instructions are stored; when the instructions are run on a processor, the flow of the foregoing parallel access method is implemented. The storage medium includes a ROM/RAM, a magnetic disk, an optical disk, and the like.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal device and the unit described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The embodiment of the present invention discloses a parallel access method, a parallel access device, a processor, a chip, and a computer-readable storage medium, which can be used to store or read data in the technical field of integrated circuits. An address generator generates target addresses for a plurality of lanes that execute in parallel, and the lanes access the corresponding positions in the RAM according to the target addresses to perform data access operations in parallel. When the address generator generates the target address for a lane, a lane step length generating unit generates the lane step length, which is K times the step length; under the control of the same SIMD instruction, the lane step length generated for each lane is different, so the target addresses generated by the address generation unit from the lane step length, the base address, and the offset never conflict, i.e., no access conflict arises when the address generator generates addresses for the lanes. Therefore, compared with the prior art, the present invention no longer needs an address conflict detection device, and the steps related to address conflict detection can be omitted. As a result, on the premise that the parallel lanes correctly access the memory groups in the memory, the power consumption of the related hardware can be reduced, and the overall time consumed by parallel data access operations is shortened.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

Publications (2)

Publication NumberPublication Date
CN111158757A CN111158757A (en)2020-05-15
CN111158757Btrue CN111158757B (en)2021-11-30



Also Published As

Publication number | Publication date
CN111158757A (en) | 2020-05-15

Similar Documents

Publication | Publication Date | Title
US8984043B2 (en) | Multiplying and adding matrices
KR20200108774A (en) | Memory device including instruction memory based on circular queue and operation method thereof
CN114391135A (en) | Method for performing in-memory processing operations on contiguously allocated data, and related memory device and system
CN110415157A (en) | Computing method and device for matrix multiplication
CN114341802B (en) | Method for performing in-memory processing operations and related memory device and system
US11138106B1 (en) | Target port with distributed transactions
WO2021041638A1 (en) | Copy data in a memory system with artificial intelligence mode
CN111158757B (en) | Parallel access device and method and chip
US10942889B2 (en) | Bit string accumulation in memory array periphery
US11487342B2 (en) | Reducing power consumption in a neural network environment using data management
EP3931707A1 (en) | Storage device operation orchestration
US12399722B2 (en) | Memory device and method including processor-in-memory with circular instruction memory queue
CN113407154A (en) | Vector calculation device and method
US11669489B2 (en) | Sparse systolic array design
US11941371B2 (en) | Bit string accumulation
CN116931876A (en) | Matrix operation system, matrix operation method, satellite navigation method and storage medium
CN116149602A (en) | Data processing method and device, electronic equipment and storage medium
US10942890B2 (en) | Bit string accumulation in memory array periphery
US10997277B1 (en) | Multinomial distribution on an integrated circuit
US11487699B2 (en) | Processing of universal number bit strings accumulated in memory array periphery
US12423580B1 (en) | Crossbar based transpose data transfers
US12271732B1 (en) | Configuration of a deep vector engine using an opcode table, control table, and datapath table
CN118012519A (en) | Digital signal processing method, device and processor for processing digital signals
WO2021041644A1 (en) | Transfer data in a memory system with artificial intelligence mode
CN114063887A (en) | Writing and reading method, processor chip, storage medium and electronic device

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
    Effective date of registration: 2021-02-09
    Address after: 311201 No. 602-11, complex building, 1099 Qingxi 2nd Road, Hezhuang street, Qiantang New District, Hangzhou City, Zhejiang Province
    Applicant after: Zhonghao Xinying (Hangzhou) Technology Co.,Ltd.
    Address before: 518057 5-15, block B, building 10, science and technology ecological park, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province
    Applicant before: Shenzhen Xinying Technology Co.,Ltd.
GR01 | Patent grant
PE01 | Entry into force of the registration of the contract for pledge of patent right
    Denomination of invention: Parallel access device and method, as well as chip
    Granted publication date: 2021-11-30
    Pledgee: Xiaoshan Branch of Agricultural Bank of China Ltd.
    Pledgor: Zhonghao Xinying (Hangzhou) Technology Co.,Ltd.
    Registration number: Y2024330001536

