Disclosure of Invention
The main object of the present invention is to provide a data processing method, apparatus, system, medium and program product for a reconfigurable cryptographic computing array, which can efficiently utilize the reconfigurable cryptographic computing array for data processing.
To achieve the above object, a first aspect of an embodiment of the present invention provides a data processing method for a reconfigurable cryptographic computing array.
The reconfigurable cryptographic computing array comprises at least one cryptographic core group, each cryptographic core group is used for processing data of a preset service type, and the method comprises the following steps:
in response to at least one service request sent by a host side, caching the at least one service request, wherein the service request comprises a service type;
searching for a service request consistent with a preset service type of a cryptographic core group under the condition that the cryptographic core group is in an idle state;
acquiring data to be processed indicated by the service request from the host side;
and sending the data to be processed to the cryptographic core group in the idle state for data processing.
In an embodiment of the present invention, said caching at least one service request includes:
and caching the at least one service request to at least one preset service request cache space, wherein each service request cache space caches one service request.
In an embodiment of the present invention, after the searching for the service request consistent with the preset service type of the cryptographic core group, the method further comprises:
taking the service request out of the service request cache space in which it is cached;
and marking the cache state of that service request cache space as an empty state, wherein the cache state comprises an empty state and a full state, the empty state indicates that the service request cache space does not cache a service request, and the full state indicates that the service request cache space has cached a service request.
In an embodiment of the present invention, the caching at least one service request into at least one preset service request cache space includes:
acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces;
caching the at least one service request to the service request cache spaces whose cache state is the empty state under the condition that the number of service request cache spaces whose cache state is the empty state is not less than the number of the at least one service request;
under the condition that the number of service request cache spaces whose cache state is the empty state is smaller than the number of the at least one service request, acquiring a first number of service requests and caching the first number of service requests to the service request cache spaces whose cache state is the empty state, wherein the first number is the same as the number of service request cache spaces whose cache state is the empty state;
and executing again the operation of acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces.
In an embodiment of the present invention, each of the cryptographic core groups has a group of transmission interfaces for transmitting information related to the corresponding cryptographic core group;
The method further comprises the steps of:
constructing at least one group of access interfaces whose number is the same as the number of groups of transmission interfaces, wherein the at least one group of access interfaces is in one-to-one correspondence with the at least one group of transmission interfaces;
and transmitting, through a preset access interface, information related to the cryptographic core group corresponding to that access interface.
In an embodiment of the present invention, the transmission interface includes a state transmission interface and a data transmission interface;
the state transmission interface is used for transmitting an idle state, an input full state and an output full state of the corresponding cryptographic core group, wherein the idle state indicates that the cryptographic core group is not currently executing a data processing task, the input full state indicates that the length of data currently received by the cryptographic core group has reached the input length specified in a service request, and the output full state indicates that the length of data output by the cryptographic core group for the data processing task currently being executed has reached the output length specified in the service request;
The data transmission interface is used for transmitting data to be processed and processed data corresponding to the data to be processed.
In one embodiment of the invention, the method comprises the following steps:
monitoring working states of all cryptographic core groups in the reconfigurable cryptographic computing array in real time, wherein the working states comprise an idle state and a busy state;
searching for the transmission interface corresponding to a cryptographic core group under the condition that the working state of the cryptographic core group is the idle state;
and receiving, through the access interface, a message sent by the transmission interface indicating that the cryptographic core group is in the idle state.
In an embodiment of the present invention, before the searching for the service request consistent with the preset service type of the cryptographic core group, the method further comprises:
analyzing the service request to obtain service information, wherein the service information comprises a service type, a service source address, a service source length, a service destination address and a service destination length;
the acquiring the data to be processed indicated by the service request from the host side comprises:
Acquiring data to be processed indicated by the service request from the host based on the service source address and the service source length;
After the data to be processed is sent to the cryptographic core group in the idle state for data processing, the method comprises the following steps:
And sending the processed data corresponding to the data to be processed to a storage space corresponding to the service destination address and the service destination length.
A second aspect of an embodiment of the present invention provides a data processing apparatus for a reconfigurable cryptographic computing array, the reconfigurable cryptographic computing array including at least one cryptographic core group, each of the cryptographic core groups being for processing data of a preset traffic type, the apparatus comprising:
the service caching module is used for responding to at least one service request sent by the host end and caching at least one service request, wherein the service request comprises a service type;
the service searching module is used for searching a service request consistent with a preset service type of the cipher computing core group under the condition that the cipher computing core group is in an idle state;
The data acquisition module is used for acquiring the data to be processed indicated by the service request from the host side;
And the data transmitting module is used for transmitting the data to be processed to the cryptographic core group in the idle state for data processing.
In an embodiment of the present invention, the service buffering module is specifically configured to buffer at least one service request to at least one preset service request buffering space, where each service request buffering space buffers one service request.
In one embodiment of the invention, the apparatus comprises:
the logic control module is used for taking out the service request from the service request cache space corresponding to the service request, marking the cache state of the service request cache space as an empty state, wherein the cache state comprises an empty state and a full state, the empty state indicates that the service request cache space does not cache the service request, and the full state indicates that the service request cache space has cached the service request.
In an embodiment of the present invention, the caching at least one service request into at least one preset service request cache space includes:
acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces;
caching the at least one service request to the service request cache spaces whose cache state is the empty state under the condition that the number of service request cache spaces whose cache state is the empty state is not less than the number of the at least one service request;
under the condition that the number of service request cache spaces whose cache state is the empty state is smaller than the number of the at least one service request, acquiring a first number of service requests and caching the first number of service requests to the service request cache spaces whose cache state is the empty state, wherein the first number is the same as the number of service request cache spaces whose cache state is the empty state;
and executing again the operation of acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces.
In an embodiment of the present invention, each of the cryptographic core groups has a group of transmission interfaces for transmitting information related to the corresponding cryptographic core group;
The data processing apparatus comprises at least one group of access interfaces whose number is the same as the number of groups of transmission interfaces, the at least one group of access interfaces is in one-to-one correspondence with the at least one group of transmission interfaces, and each access interface is used for transmitting information related to the cryptographic core group corresponding to it.
In an embodiment of the present invention, the access interface includes a state access interface and a data access interface;
The state access interface is used for transmitting an idle state, an input full state and an output full state of the corresponding cryptographic core group, wherein the idle state indicates that the cryptographic core group is not currently executing a data processing task, the input full state indicates that the length of data currently received by the cryptographic core group has reached the input length specified in a service request, and the output full state indicates that the length of data output by the cryptographic core group for the data processing task currently being executed has reached the output length specified in the service request;
The data access interface is used for transmitting data to be processed and processed data corresponding to the data to be processed.
In one embodiment of the invention, the apparatus comprises:
The state monitoring module is used for monitoring working states of all the cipher computing core groups in the reconfigurable cipher computing array in real time, wherein the working states comprise an idle state and a busy state;
the searching module is used for searching a transmission interface corresponding to the cipher computing core group under the condition that the working state of the cipher computing core group is an idle state;
And the sending module is used for receiving, through the access interface, a message sent by the transmission interface indicating that the cryptographic core group is in the idle state.
In one embodiment of the invention, the apparatus comprises:
The analysis module is used for analyzing the service request to obtain service information, wherein the service information comprises a service type, a service source address, a service source length, a service destination address and a service destination length;
the data acquisition module is used for acquiring data to be processed indicated by the service request from the host side based on the service source address and the service source length;
the apparatus further comprises:
And the return module is used for sending the processed data corresponding to the data to be processed to the storage space corresponding to the service destination address and the service destination length.
A third aspect of the present disclosure provides a data processing system comprising a host side, a system bus connected to the host side, a data processing apparatus for a reconfigurable cryptographic computing array as described in the second aspect, a transmission interface, and the reconfigurable cryptographic computing array;
the data processing device for a reconfigurable cryptographic computing array is disposed between the system bus and the reconfigurable cryptographic computing array.
A fourth aspect of the present disclosure also provides a computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the above-described data processing method for a reconfigurable cryptographic computing array.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the data processing method for a reconfigurable cryptographic computing array described above.
According to the embodiments of the present invention, in the data processing method, apparatus, system, medium and program product for a reconfigurable cryptographic computing array, the reconfigurable cryptographic computing array comprises at least one cryptographic core group, and each cryptographic core group is used for processing data of a preset service type. The method comprises: in response to at least one service request sent by the host side, caching the at least one service request, wherein the service request comprises a service type; searching, under the condition that a cryptographic core group is in an idle state, for a service request consistent with the preset service type of that cryptographic core group; acquiring the data to be processed indicated by the service request from the host side; and sending the data to be processed to the cryptographic core group in the idle state for data processing. In this way, there is no need to invoke DMA (direct memory access) for data transfer, the data transfer efficiency of the reconfigurable cryptographic computing array is improved, and the on-chip CPU (central processing unit) is hardly required to participate, so that on-chip CPU resources are released.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a data processing method for a reconfigurable cryptographic computing array. The reconfigurable cryptographic computing array comprises at least one cryptographic core group, and each cryptographic core group is used for processing data of a preset service type. The method comprises the following steps: in response to at least one service request sent by the host side, caching the at least one service request, wherein the service request comprises a service type; searching, under the condition that a cryptographic core group is in an idle state, for a service request consistent with the preset service type of that cryptographic core group; acquiring the data to be processed indicated by the service request from the host side; and sending the data to be processed to the cryptographic core group in the idle state for data processing. In this way, there is no need to invoke DMA (direct memory access) for data transfer, the data transfer efficiency of the reconfigurable cryptographic computing array is improved, and the on-chip CPU (central processing unit) is hardly required to participate, so that on-chip CPU resources are released.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments and features of the embodiments described below may be combined with each other without conflict between the embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a data processing method for a reconfigurable cryptographic computing array according to an embodiment of the invention, where the reconfigurable cryptographic computing array includes at least one cryptographic core group, and each cryptographic core group is configured to process data of a preset service type, and the method includes the following operations:
In operation S110, at least one service request is buffered in response to at least one service request sent by the host, where the service request includes a service type.
In operation S120, in the case that the cryptographic core group is in an idle state, a service request consistent with a preset service type of the cryptographic core group is searched.
In operation S130, the data to be processed indicated by the service request is obtained from the host.
In operation S140, the data to be processed is sent to the cryptographic core group in the idle state for data processing.
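As a purely illustrative aid, the following C-language sketch shows one way the scheduling logic placed in front of the array might sequence operations S110 to S140. It is a minimal sketch under assumptions, not the claimed implementation: the helper functions, the 32-group scale, and all names are hypothetical hooks standing in for the surrounding hardware and firmware.

```c
/* Minimal sketch of operations S110-S140; all helper functions below are
 * hypothetical hooks, not part of this disclosure.                          */
#include <stdbool.h>
#include <stddef.h>

#define NUM_CORE_GROUPS 32                           /* assumed array scale   */

typedef struct service_request service_request_t;    /* detailed further on   */

extern bool host_request_pending(void);              /* slave interface hook  */
extern void cache_one_request(void);                 /* S110                  */
extern bool core_group_is_idle(int group);           /* from state interface  */
extern service_request_t *find_matching_request(int group);        /* S120    */
extern void fetch_data_from_host(const service_request_t *req);    /* S130    */
extern void send_data_to_group(int group, const service_request_t *req); /* S140 */

void scheduling_round(void)
{
    /* S110: cache every service request currently offered by the host side  */
    while (host_request_pending())
        cache_one_request();

    /* S120-S140: serve each idle core group with a request of its own type  */
    for (int g = 0; g < NUM_CORE_GROUPS; g++) {
        if (!core_group_is_idle(g))
            continue;
        service_request_t *req = find_matching_request(g);   /* S120 */
        if (req == NULL)
            continue;                     /* no cached request of this type   */
        fetch_data_from_host(req);        /* S130: read via bus master port   */
        send_data_to_group(g, req);       /* S140: hand off for processing    */
    }
}
```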
In the present invention, the reconfigurable cryptographic computing array is a coarse-grained reconfigurable cryptographic multi-core architecture whose cores are arranged as an array; each row can serve as a cryptographic core group capable of independently implementing an encryption/decryption algorithm, and several rows together can likewise serve as one cryptographic core group capable of independently implementing an encryption/decryption algorithm.
In the present invention, each cryptographic core group is used for processing data of a preset service type. The service type can be a different encryption/decryption algorithm, for example block ciphers such as AES and SM4 or the hash algorithm SM3, or an encryption/decryption algorithm combining a block cipher with a mode of operation such as ECB or CBC. It can be understood that the specific service type of each cryptographic core group can be preconfigured, and the specific service types of the cryptographic core groups can be the same or different, which is not limited by the invention. For illustration, the invention takes the case in which all cryptographic core groups are used for processing data whose service type is the AES_ECB encryption algorithm.
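The following sketch merely illustrates how the preset service types might be represented; the enumeration values, the 32-group scale, and the configuration routine are assumptions and are not fixed by this disclosure.

```c
/* Hypothetical service-type codes and per-group preset configuration. */
typedef enum {
    SVC_AES_ECB_ENC,    /* block cipher AES in ECB mode, encryption */
    SVC_AES_CBC_ENC,    /* block cipher AES in CBC mode, encryption */
    SVC_SM4_ECB_ENC,    /* block cipher SM4 in ECB mode, encryption */
    SVC_SM4_CBC_ENC,    /* block cipher SM4 in CBC mode, encryption */
    SVC_SM3_HASH        /* hash algorithm SM3                       */
    /* further algorithm/mode combinations as configured            */
} service_type_t;

#define NUM_CORE_GROUPS 32                 /* assumed array scale */

static service_type_t group_preset_type[NUM_CORE_GROUPS];

/* Example configuration used throughout the text: every cryptographic core
 * group handles the AES_ECB encryption algorithm.                           */
static void configure_groups(void)
{
    for (int g = 0; g < NUM_CORE_GROUPS; g++)
        group_preset_type[g] = SVC_AES_ECB_ENC;
}
```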
In the invention, the service request, the data to be processed and the processed data can be transmitted through a system bus (an AXI bus). The system bus interface comprises a slave interface and a master interface: the slave interface carries the service request from the host side, and the master interface is used for accessing the storage space of the host side so as to transfer the data to be processed and the processed data.
In the present invention, in operation S110, the service request sent by the host side includes who is to do the work (the service type, indicating which cryptographic core group is selected for data processing), where to fetch the data from (the service source address and service source length), where to put the result (the service destination address and service destination length), and auxiliary information (such as a service identifier, a service response, and a key address and length).
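For illustration only, a service request of this kind might be cached as a descriptor such as the one below; the field names mirror the items listed above, while the field widths are assumptions rather than values defined by this disclosure.

```c
/* Hypothetical layout of one cached service request descriptor. */
#include <stdint.h>

typedef struct {
    uint32_t service_type;   /* which algorithm / core group is requested    */
    uint64_t src_addr;       /* service source address in host memory        */
    uint32_t src_len;        /* service source length (bytes)                */
    uint64_t dst_addr;       /* service destination address in host memory   */
    uint32_t dst_len;        /* service destination length (bytes)           */
    uint32_t service_id;     /* service identifier                           */
    uint32_t service_resp;   /* service response field                       */
    uint64_t key_addr;       /* key address in host memory                   */
    uint32_t key_len;        /* key length (bytes)                           */
} service_request_t;
```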
In the present invention, in operation S120, the cryptographic core group being in an idle state means that it is not currently executing any data processing task. A service request consistent with the preset service type of the cryptographic core group is then searched for; for example, if the cryptographic core group in the idle state is one whose service type is the AES_ECB encryption algorithm, a service request whose service type is the AES_ECB encryption algorithm is searched for.
In the present invention, in operation S130, the data to be processed indicated by the service request is obtained from the host side, that is, the data to be processed indicated by the service source address and the service source length is obtained from the storage space of the host side through the system bus.
In the present invention, after operation S140, the processed data corresponding to the data to be processed is sent to the storage space corresponding to the service destination address and the service destination length; optionally, a service request return packet may also be returned to the storage space at the host side, so as to mark that the service request has been completed.
According to the embodiment of the invention, at least one service request sent by the host side is cached in response to being received, the service request comprising a service type; under the condition that a cryptographic core group is in an idle state, a service request consistent with the preset service type of that cryptographic core group is searched for; the data to be processed indicated by the service request is acquired from the host side; and the data to be processed is sent to the cryptographic core group in the idle state for data processing. There is thus no need to invoke DMA (direct memory access) for data transfer, the data transfer efficiency of the reconfigurable cryptographic computing array is improved, and the on-chip CPU (central processing unit) is hardly required to participate, so that on-chip CPU resources are released.
In an embodiment of the present invention, the step of caching at least one service request in operation S110 includes caching the at least one service request into at least one preset service request cache space, each service request cache space caching one service request. This resolves data congestion among multiple data streams and improves the data scheduling speed.
In an embodiment of the present invention, after operation S120, the method further includes taking the service request out of the service request cache space in which it is cached and marking the cache state of that service request cache space as an empty state, where the cache state includes an empty state and a full state, the empty state indicating that the service request cache space does not cache a service request and the full state indicating that the service request cache space has cached a service request. By marking the cache states of the service request cache spaces, the states can be obtained in real time, it can be determined whether and which service request cache spaces can be written to, and the cache spaces can be used efficiently in real time.
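A minimal sketch of this cache-space bookkeeping is given below, reusing the hypothetical service_request_t from the earlier sketch; the slot count and all names are assumptions for illustration only.

```c
/* Hypothetical cache-space bookkeeping: each space is either empty or full. */
typedef enum { SLOT_EMPTY = 0, SLOT_FULL = 1 } cache_state_t;

typedef struct {
    cache_state_t     state;     /* empty: nothing cached; full: one request */
    service_request_t request;   /* valid only while state == SLOT_FULL      */
} request_slot_t;

#define NUM_SLOTS 30              /* assumed number of cache spaces */
static request_slot_t slots[NUM_SLOTS];

/* Take the request out of its cache space and mark that space empty again. */
static int take_request(int slot_index, service_request_t *out)
{
    if (slot_index < 0 || slot_index >= NUM_SLOTS ||
        slots[slot_index].state != SLOT_FULL)
        return -1;                           /* nothing cached in this slot   */
    *out = slots[slot_index].request;        /* hand the request over         */
    slots[slot_index].state = SLOT_EMPTY;    /* the space can be reused now   */
    return 0;
}
```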
Referring to fig. 2, fig. 2 is a flowchart of caching at least one service request into at least one preset service request cache space according to an embodiment of the present invention, which includes the following operations:
In operation S210, the service request cache spaces whose cache state is the empty state and the number of such cache spaces are obtained.
In operation S220, in the case that the number of service request cache spaces whose cache state is the empty state is not less than the number of the at least one service request, the at least one service request is cached to the service request cache spaces whose cache state is the empty state.
In operation S230, in the case that the number of service request cache spaces whose cache state is the empty state is smaller than the number of the at least one service request, a first number of service requests is acquired and cached to the service request cache spaces whose cache state is the empty state, where the first number is the same as the number of service request cache spaces whose cache state is the empty state.
Operation S210 is performed again.
In an example of the present invention, suppose there are 30 service request cache spaces and 15 service requests. If the number of cache spaces whose cache state is the empty state is 20, which is not less than the number of service requests (15), the 15 service requests are cached into any 15 of the 20 empty cache spaces. If instead the number of empty cache spaces is 10, which is smaller than the number of service requests (15), then 10 of the service requests are cached into those 10 empty cache spaces; after this caching is completed, the cache spaces whose cache state is the empty state and their number are acquired again so that the remaining service requests can be cached.
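The following sketch illustrates operations S210 to S230 using the slot bookkeeping introduced above. It is a software approximation under assumptions: in hardware the "try again" step would simply wait for spaces to free up, whereas here the loop stops when no empty space exists, to avoid spinning.

```c
/* Sketch of S210-S230: pending[] holds requests that still need a cache
 * space; the function returns how many of them were cached.                 */
static int cache_pending_requests(const service_request_t *pending, int num_pending)
{
    int cached = 0;
    while (num_pending > 0) {
        /* S210: find the empty cache spaces and count them                  */
        int empty_idx[NUM_SLOTS];
        int num_empty = 0;
        for (int i = 0; i < NUM_SLOTS; i++)
            if (slots[i].state == SLOT_EMPTY)
                empty_idx[num_empty++] = i;
        if (num_empty == 0)
            break;                      /* no space at all; retry later       */

        /* S220: enough space, cache them all;
         * S230: not enough, cache only the first num_empty requests          */
        int to_cache = (num_pending <= num_empty) ? num_pending : num_empty;
        for (int k = 0; k < to_cache; k++) {
            slots[empty_idx[k]].request = pending[k];
            slots[empty_idx[k]].state   = SLOT_FULL;
        }
        pending     += to_cache;
        num_pending -= to_cache;
        cached      += to_cache;
        /* then execute S210 again for the requests that are still waiting   */
    }
    return cached;
}
```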
According to the embodiment of the invention, the service processing progress is acquired in real time, so that it can be determined when to write to the service request cache spaces and how many service requests to write, and the service request cache spaces can be used efficiently in real time.
Referring to fig. 3, fig. 3 is a flow chart of a data processing method for a reconfigurable cryptographic computing array according to an embodiment of the invention, each cryptographic core group has a set of transmission interfaces for transmitting information related to the corresponding cryptographic core group, and the method may include the following operations:
In operation S310, at least one set of access interfaces having the same number as the transmission interfaces is constructed, and at least one set of access interfaces corresponds to at least one set of transmission interfaces one by one.
In operation S320, information about the cryptographic kernel group related to the access interface is transmitted through the preset access interface.
In an embodiment of the present invention, the related information may be the computing state of the cryptographic core group, including its working state, input buffer state and output buffer state, the data input to the cryptographic core group, the data output by the cryptographic core group, whether the service data stream is abnormal, and so on. According to the embodiment of the invention, each cryptographic core group corresponds to one group of interfaces (a transmission interface and an access interface) and one service request cache space, so that the problem of data congestion among multiple data streams can be resolved.
In the invention, each cryptographic core group corresponds to one transmission interface and one access interface. For example, 30 cryptographic core groups are provided with 30 transmission interfaces and 30 access interfaces, one cryptographic core group corresponding to one transmission interface and one access interface; each transmission interface and each access interface works independently, so they can operate in parallel. The number of groups of transmission interfaces and access interfaces may be 16 to 32, or any other number; the number of groups of transmission interfaces is not specifically limited in the present disclosure. In an example, the 128-bit data bandwidth of the system bus (AXI 4.0) is the same as the maximum data transmission bandwidth of a cryptographic core group. The number of cryptographic core groups can generally be chosen as 16 or 32 in consideration of factors such as chip power consumption and area, and for the 32-group scale the data transfer bandwidth of a cryptographic core group is generally configured to be far greater than the data bandwidth of the algorithm processed by that cryptographic core group, so the data processing method for the reconfigurable cryptographic computing array provided by the invention can fully meet the bandwidth requirement of the reconfigurable cryptographic computing array.
In an embodiment of the present invention, the access interface includes a state access interface and a data access interface, where the state access interface is used to transmit an idle state, an input full state, and an output full state of a corresponding cryptographic core group, and the data access interface is used to transmit data to be processed and processed data corresponding to the data to be processed. It can be understood that the transmission interfaces of the cryptographic core group include a state transmission interface for transmitting the idle state, the input full state, and the output full state of the corresponding cryptographic core group to the corresponding access interfaces, and a data transmission interface for transmitting the data to be processed and the processed data corresponding to the data to be processed to the corresponding access interfaces.
In the invention, the idle state indicates that the cryptographic core group is not currently executing a data processing task, the input full state indicates that the length of data currently received by the cryptographic core group has reached the input length specified in the service request, and the output full state indicates that the length of data output by the cryptographic core group for the data processing task currently being executed has reached the output length specified in the service request. In this way, the computing state of each cryptographic core group in the reconfigurable cryptographic computing array can be detected in real time, including its working state (idle or busy), input buffer state (input not full or input full), output buffer state (output not full or output full), whether the service data stream is abnormal, and so on, so as to provide a basis for service distribution. For example, when one cryptographic core group is detected to be in the idle state, a message that the cryptographic core group is in the idle state is sent through the state transmission interface of that cryptographic core group, so that a service request consistent with its service type can be matched and returned to it. As another example, when the buffered data output by a cryptographic core group is detected to have reached the data length contained in the service request, a message of the output full state of that cryptographic core group is sent through its state transmission interface, so that the processed data output by the cryptographic core group is sent to the storage space at the host side.
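One possible, purely illustrative encoding of the states reported on a group's state transmission interface is sketched below; the bit positions are assumptions and are not specified by this disclosure.

```c
/* Hypothetical status-word encoding for the state transmission interface. */
#include <stdbool.h>
#include <stdint.h>

#define STATUS_BUSY        (1u << 0)   /* clear = idle, set = executing a task   */
#define STATUS_INPUT_FULL  (1u << 1)   /* received data reached the input length */
#define STATUS_OUTPUT_FULL (1u << 2)   /* output data reached the output length  */

static inline bool group_is_idle(uint32_t status)     { return (status & STATUS_BUSY) == 0; }
static inline bool group_input_full(uint32_t status)  { return (status & STATUS_INPUT_FULL) != 0; }
static inline bool group_output_full(uint32_t status) { return (status & STATUS_OUTPUT_FULL) != 0; }
```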
In an embodiment of the present invention, operation S140, after sending the data to be processed to the cryptographic core group in the idle state for data processing, includes sending the processed data corresponding to the data to be processed to the host side by using a transmission interface corresponding to the cryptographic core group.
Referring to fig. 4, fig. 4 is a flowchart of monitoring the working states of the cryptographic core groups according to an embodiment of the present invention, which includes the following operations:
In operation S410, the working states of all the cryptographic core groups in the reconfigurable cryptographic computing array are monitored in real time, wherein the working states include an idle state and a busy state.
In operation S420, when the working state of the cryptographic core group is in the idle state, the transmission interface corresponding to the cryptographic core group is searched.
In operation S430, the message, sent by the transmission interface, indicating that the cryptographic core group is in the idle state is received through the access interface.
In the embodiment of the invention, the working states of all cryptographic core groups in the reconfigurable cryptographic computing array are monitored in real time, the working states including an idle state and a busy state; when the working state of a cryptographic core group is the idle state, the transmission interface corresponding to that cryptographic core group is looked up, and the message sent by that transmission interface indicating that the cryptographic core group is in the idle state is received through the corresponding access interface. Each group of transmission interfaces works independently, so state monitoring and state transmission can be performed in parallel over multiple paths, which improves data transfer efficiency.
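A minimal sketch of operations S410 to S430 follows, reusing group_is_idle() and NUM_CORE_GROUPS from the earlier sketches; read_group_status() and dispatch_to_idle_group() are hypothetical hooks standing in for the per-group access interfaces and the reaction to the idle message.

```c
/* Sketch of S410-S430: poll each group's state and react when it is idle. */
#include <stdint.h>

extern uint32_t read_group_status(int group);      /* via the state access interface */
extern void     dispatch_to_idle_group(int group); /* reaction to the idle message   */

static void monitor_core_groups(void)
{
    for (int g = 0; g < NUM_CORE_GROUPS; g++) {
        uint32_t status = read_group_status(g);    /* S410: monitor every group      */
        if (group_is_idle(status))                 /* S420: this group is idle       */
            dispatch_to_idle_group(g);             /* S430: idle message received,
                                                      handled per interface pair     */
    }
}
```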
In an embodiment of the present invention, before the service request consistent with the preset service type of the cryptographic core group is searched for in operation S120, the method includes parsing the service request to obtain service information, where the service information includes a service type, a service source address, a service source length, a service destination address, and a service destination length. Acquiring the data to be processed indicated by the service request from the host side in operation S130 then includes acquiring the data to be processed from the host side based on the service source address and the service source length. After the data to be processed has been sent to the cryptographic core group in the idle state for data processing in operation S140, the method includes sending the processed data corresponding to the data to be processed to the storage space corresponding to the service destination address and the service destination length.
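An end-to-end sketch of one service request under these assumptions is given below: the parsed service information is used to fetch the data to be processed, run it on the idle core group, and write the result back. axi_read(), axi_write() and run_core_group() are hypothetical hooks for the bus master interface and the array, and service_request_t is the structure sketched earlier.

```c
/* Sketch of fetch -> process -> writeback for one parsed service request. */
#include <stdint.h>
#include <stdlib.h>

extern void axi_read (uint64_t host_addr, void *buf, uint32_t len);
extern void axi_write(uint64_t host_addr, const void *buf, uint32_t len);
extern void run_core_group(int group, const uint8_t *in, uint32_t in_len,
                           uint8_t *out, uint32_t out_len);

static void process_one_request(int idle_group, const service_request_t *req)
{
    uint8_t *in  = malloc(req->src_len);
    uint8_t *out = malloc(req->dst_len);
    if (in == NULL || out == NULL) {
        free(in);
        free(out);
        return;
    }

    /* S130: fetch the data to be processed using source address and length */
    axi_read(req->src_addr, in, req->src_len);

    /* S140: data processing on the idle cryptographic core group           */
    run_core_group(idle_group, in, req->src_len, out, req->dst_len);

    /* writeback: send the processed data to the storage space given by the
     * service destination address and destination length                   */
    axi_write(req->dst_addr, out, req->dst_len);

    free(in);
    free(out);
}
```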
Referring to fig. 5, fig. 5 shows a data processing apparatus for a reconfigurable cryptographic computing array according to an embodiment of the invention, where the reconfigurable cryptographic computing array includes at least one cryptographic core group, each cryptographic core group is configured to process data of a preset service type, and the apparatus 500 includes:
A service buffering module 510, configured to respond to at least one service request sent by the host side, and buffer at least one service request, where the service request includes a service type;
The service searching module 520 is configured to search a service request consistent with a preset service type of the cryptographic core group when the cryptographic core group is in an idle state;
a data obtaining module 530, configured to obtain, from the host side, data to be processed indicated by the service request;
The data sending module 540 is configured to send the data to be processed to the cryptographic core group in an idle state for data processing.
In an embodiment of the present invention, the service buffering module 510 is specifically configured to buffer at least one service request to at least one preset service request buffering space, where each service request buffering space buffers one service request.
In one embodiment of the invention, the apparatus comprises:
The logic control module is used for taking out the service request from the service request cache space corresponding to the service request, marking the cache state of the service request cache space as an empty state, wherein the cache state comprises an empty state and a full state, the empty state indicates that the service request cache space does not cache the service request, and the full state indicates that the service request cache space has cached the service request.
In an embodiment of the present invention, the caching at least one service request into at least one preset service request cache space includes:
acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces;
caching the at least one service request to the service request cache spaces whose cache state is the empty state under the condition that the number of service request cache spaces whose cache state is the empty state is not less than the number of the at least one service request;
under the condition that the number of service request cache spaces whose cache state is the empty state is smaller than the number of the at least one service request, acquiring a first number of service requests and caching the first number of service requests to the service request cache spaces whose cache state is the empty state, wherein the first number is the same as the number of service request cache spaces whose cache state is the empty state;
and executing again the operation of acquiring the service request cache spaces whose cache state is the empty state and the number of such service request cache spaces.
In an embodiment of the present invention, each of the cryptographic core groups has a group of transmission interfaces for transmitting information related to the corresponding cryptographic core group;
The data processing apparatus comprises at least one group of access interfaces whose number is the same as the number of groups of transmission interfaces, the at least one group of access interfaces is in one-to-one correspondence with the at least one group of transmission interfaces, and each access interface is used for transmitting information related to the cryptographic core group corresponding to it.
In one embodiment of the invention, the access interface comprises a state access interface and a data access interface;
The state access interface is used for transmitting an idle state, an input full state and an output full state of the corresponding cryptographic core group, wherein the idle state indicates that the cryptographic core group is not currently executing a data processing task, the input full state indicates that the length of data currently received by the cryptographic core group has reached the input length specified in a service request, and the output full state indicates that the length of data output by the cryptographic core group for the data processing task currently being executed has reached the output length specified in the service request;
The data access interface is used for transmitting data to be processed and processed data corresponding to the data to be processed.
In one embodiment of the invention, the apparatus comprises:
The state monitoring module is used for monitoring working states of all the cipher computing core groups in the reconfigurable cipher computing array in real time, wherein the working states comprise an idle state and a busy state;
The searching module is used for searching a transmission interface corresponding to the cipher computing core group under the condition that the working state of the cipher computing core group is an idle state;
And the sending module is used for receiving, through the access interface, a message sent by the transmission interface indicating that the cryptographic core group is in the idle state.
In one embodiment of the invention, the apparatus comprises:
The analysis module is used for analyzing the service request to obtain service information, wherein the service information comprises a service type, a service source address, a service source length, a service destination address and a service destination length;
the data obtaining module 530 is configured to obtain, from the host side, data to be processed indicated by the service request based on the service source address and the service source length;
The apparatus further comprises:
and the return module is used for sending the processed data corresponding to the data to be processed to a storage space corresponding to the service destination address and the service destination length.
According to embodiments of the present disclosure, any plurality of the service buffering module 510, the service searching module 520, the data obtaining module 530 and the data sending module 540 may be combined into one module to be implemented, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the service buffering module 510, the service searching module 520, the data obtaining module 530 and the data sending module 540 may be implemented at least in part as hardware circuitry, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package or an application specific integrated circuit (ASIC), may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or may be implemented in any one of, or a suitable combination of, the three implementation manners of software, hardware and firmware. Alternatively, at least one of the service buffering module 510, the service searching module 520, the data obtaining module 530 and the data sending module 540 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
FIG. 6 schematically illustrates a block diagram of a data processing system according to an embodiment of the present disclosure. Fig. 7 schematically illustrates a flow diagram for completing a service request according to an embodiment of the present disclosure.
As shown in fig. 6, the data processing system includes a host side, a system bus connected to the host side, a data processing device for a reconfigurable cryptographic computing array, a transmission interface, and the reconfigurable cryptographic computing array. The data processing apparatus for a reconfigurable cryptographic computing array is disposed between the system bus and the reconfigurable cryptographic computing array. Each cipher core group is provided with a group of transmission interfaces, the transmission interfaces are used for transmitting information related to the corresponding cipher core group, the data processing device comprises at least one group of access interfaces, the number of which is the same as that of at least one group of the transmission interfaces, at least one group of the access interfaces corresponds to at least one group of the transmission interfaces one by one, and the access interfaces are used for transmitting information related to the related cipher core group.
Fig. 8 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a reconfigurable processor 801, and the reconfigurable processor 801 includes a reconfigurable cryptographic computing array, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The processor 801 is connected to the bus through the data processing apparatus 500 of fig. 5 described in the above embodiments. The processor 801 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset, and/or a special-purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), and so on. The processor 801 may also include on-board memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 803, various programs and data required for the operation of the electronic device 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or the RAM 803. Note that the program may be stored in one or more memories other than the ROM 802 and the RAM 803. The processor 801 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, the input/output (I/O) interface 805 also being connected to the bus 804. The electronic device 800 may also include one or more of an input portion 806 including a keyboard, mouse, etc., an output portion 807 including a display such as a Cathode Ray Tube (CRT), liquid Crystal Display (LCD), etc., and speakers, etc., a storage portion 808 including a hard disk, etc., and a communication portion 809 including a network interface card such as a LAN card, modem, etc., connected to the I/O interface 805. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
The present disclosure also provides a computer-readable storage medium that may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 802 and/or RAM 803 and/or one or more memories other than ROM 802 and RAM 803 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to implement the data processing method for a reconfigurable cryptographic computing array provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication portion 809, and/or installed from the removable medium 811. The computer program may comprise program code that is transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, "C", or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in a variety of combinations and/or sub-combinations, even if such combinations or sub-combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined in various combinations and/or sub-combinations without departing from the spirit and teachings of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.