Disclosure of Invention
An embodiment of the present application aims to provide a method and an apparatus for processing a packet, and an electronic device, so as to improve processing efficiency and device performance.
In a first aspect, an embodiment of the present application provides a method for processing a packet, where the method is applied to a chip that accelerates services of a core processor in an electronic device, and the method includes:
receiving a service packet sent by the core processor;
determining, from a plurality of input cache units, an input cache unit capable of storing the service packet, and caching the service packet in the determined input cache unit, where the input cache units correspond one-to-one to different algorithm engines;
processing the service packet through the algorithm engine corresponding to the determined input cache unit; and
sending the processed service packet to the core processor.
In the embodiment of the application, by providing each of the plurality of algorithm engines with its own input cache unit in a one-to-one correspondence, the service packets to be processed by each algorithm engine are cached in the input cache unit corresponding to that engine. Because the storage capacity of each input cache unit has an upper limit, a large number of service packets cannot all be distributed to the same algorithm engine for processing; the load is therefore spread as uniformly as possible across the plurality of algorithm engines, the load on each algorithm engine is reduced, processing efficiency is improved, and device performance is improved.
With reference to the first aspect, in a first possible implementation manner, determining, from a plurality of input cache units, an input cache unit capable of storing the service packet includes:
determining the data size of the service packet, and acquiring the remaining space of each input cache unit; and
determining an input cache unit whose remaining space is greater than or equal to the data size as an input cache unit capable of storing the service packet.
In the embodiment of the present application, by comparing the data size of the service packet with the remaining space of each input cache unit, an input cache unit capable of storing the service packet can be determined simply and directly.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, determining an input cache unit whose remaining space is greater than or equal to the data size includes:
sequentially judging, over the input cache units sorted by storage-space size, whether the remaining space of each input cache unit is greater than or equal to the data size, and determining the first input cache unit found to have remaining space greater than or equal to the data size as the input cache unit capable of storing the service packet; or
randomly selecting one input cache unit from among the input cache units whose remaining space is greater than or equal to the data size.
In the embodiment of the application, by comparing sizes in sorted order, the service packet can be rapidly stored in the first input cache unit that is judged to fit. Alternatively, by random selection, service packets can be distributed across the input cache units as uniformly as possible.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the chip includes the plurality of input cache units, and the storage space of the input cache unit located at the end of the sorted order is larger than the maximum data size of a service packet.
In the embodiment of the application, because the storage space of the last input cache unit in the sorted order is larger than the maximum data size of a service packet, a jumbo-frame service packet can always be cached in that last input cache unit, which makes the processing of jumbo-frame service packets straightforward.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner, after sequentially judging whether the remaining space of each input cache unit is greater than or equal to the data size, the method further includes:
determining that the remaining space of an input cache unit located near the end of the sorted order is still smaller than the data size; and
sending a service suspension request to the core processor, so that the core processor suspends sending new service packets to the chip; or
after determining the data size of the service packet and acquiring the remaining space of each input cache unit, the method further includes:
determining the proportion, among all the input cache units, of input cache units whose remaining space is smaller than the data size; and
if the proportion is larger than a preset ratio threshold, sending a service suspension request to the core processor, so that the core processor suspends sending new service packets to the chip.
In the embodiment of the application, the remaining space of an input cache unit near the end of the sorted order reflects the overall storage water level (occupancy) of the input cache units, so when the remaining space of that input cache unit is still smaller than the data size, a service suspension is requested from the core processor, and the storage water level can be effectively brought down. Alternatively, the determined proportion likewise reflects the overall storage water level of the input cache units, so when the determined proportion is larger than the preset ratio threshold, the core processor is requested to suspend the service, and the storage water level can be effectively brought down.
With reference to the first aspect, in a fifth possible implementation manner, sending the processed service packet to the core processor includes:
judging whether other packets received before the service packet are still being processed; and
if so, waiting until the other packets are processed, and then outputting the processed service packet and the other processed packets to the core processor in the order in which the service packet and the other packets were received.
In the embodiment of the application, the processed service packets are sent in the order in which they were received, so the core processor does not need to resolve out-of-order delivery after receiving the processed packets, which further reduces the overhead of the core processor.
With reference to the first aspect, in a sixth possible implementation manner, sending the processed service packet to the core processor includes:
caching the processed service packet in the output cache unit corresponding to the determined algorithm engine; and
when the processed service packet can be output, extracting the processed service packet from the output cache unit and sending it to the core processor.
In the embodiment of the present application, by caching the processed service packet in the output cache unit and outputting it from the output cache unit only when it can be output, output errors and out-of-order output of service packets can be effectively avoided.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner, after the processed service packet is cached in the output cache unit corresponding to the determined algorithm engine, the method further includes:
determining that the storage amount of the output cache unit has reached an upper limit; and
sending a service suspension request to the determined algorithm engine, so that the determined algorithm engine suspends its ongoing processing.
In the embodiment of the application, by sending the service suspension request to the algorithm engine, the storage water level of the corresponding output cache unit can be effectively controlled and prevented from exceeding the upper limit.
In a second aspect, an embodiment of the present application provides a packet processing apparatus, which is applied to a chip that accelerates services of a core processor in an electronic device, where the apparatus includes:
a data transceiver unit, configured to receive a service packet sent by the core processor; and
a data processing unit, configured to determine, from a plurality of input cache units, an input cache unit capable of storing the service packet, and to cache the service packet in the determined input cache unit, where the input cache units correspond one-to-one to different algorithm engines, and to process the service packet through the algorithm engine corresponding to the determined input cache unit;
the data transceiver unit is further configured to send the processed service packet to the core processor.
With reference to the second aspect, in a first possible implementation manner,
the data processing unit is configured to determine the data size of the service packet and acquire the remaining space of each input cache unit, and to determine an input cache unit whose remaining space is greater than or equal to the data size as an input cache unit capable of storing the service packet.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the data processing unit is configured to sequentially judge, over the input cache units sorted by storage-space size, whether the remaining space of each input cache unit is greater than or equal to the data size, and to determine the first input cache unit found to have remaining space greater than or equal to the data size as the input cache unit capable of storing the service packet; or
the data processing unit is configured to randomly select one input cache unit from among the input cache units whose remaining space is greater than or equal to the data size.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the chip includes the plurality of input cache units, and the storage space of the input cache unit located at the end of the sorted order is larger than the maximum data size of a service packet.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner, after sequentially judging whether the remaining space of each input cache unit is greater than or equal to the data size,
the data processing unit is further configured to determine that the remaining space of an input cache unit located near the end of the sorted order is still smaller than the data size, and to send a service suspension request to the core processor, so that the core processor suspends sending new service packets to the chip; or
after determining the data size of the service packet and acquiring the remaining space of each input cache unit,
the data processing unit is further configured to determine the proportion, among all the input cache units, of input cache units whose remaining space is smaller than the data size, and, if the proportion is larger than a preset ratio threshold, to send a service suspension request to the core processor, so that the core processor suspends sending new service packets to the chip.
With reference to the second aspect, in a fifth possible implementation manner,
the data processing unit is configured to judge whether other packets received before the service packet are still being processed; and
if so, the data processing unit is configured to wait until the other packets are processed, and then control the data transceiver unit to output the processed service packet and the other processed packets to the core processor in the order in which the service packet and the other packets were received.
With reference to the second aspect, in a sixth possible implementation manner,
the data processing unit is configured to cache the processed service packet in the output cache unit corresponding to the determined algorithm engine; and
the data processing unit is configured to, when the processed service packet can be output, extract the processed service packet from the output cache unit and control the data transceiver unit to send it to the core processor.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner, after the data processing unit caches the processed service packet in the output cache unit corresponding to the determined algorithm engine,
the data processing unit is further configured to determine that the storage amount of the output cache unit has reached an upper limit, and to send a service suspension request to the determined algorithm engine, so that the determined algorithm engine suspends its ongoing processing.
In a third aspect, an embodiment of the present application provides an electronic device, including a core processor, a memory, and a chip;
the core processor is configured to send a service packet to the chip;
the memory is configured to provide a plurality of input cache units, where the input cache units correspond one-to-one to different algorithm engines, and the algorithm engines are preset in the chip; and
the chip is configured to execute, on the service packet and by using the plurality of input cache units and the algorithm engines, the packet processing method according to the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the storage medium has program code stored thereon, and when the program code is executed by a computer, the method for processing a packet according to the first aspect or any possible implementation manner of the first aspect is performed.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides an electronic device 10, where the electronic device 10 includes a core processor 11 and a chip 12.
The core processor 11 may be a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like; the chip 12 may be a Field Programmable Gate Array (FPGA).
In this embodiment, the core processor 11 may offload a part of its traffic to the chip 12 for processing, for example, encryption/decryption traffic based on the Cipher Block Chaining (CBC) mode. Correspondingly, a plurality of algorithm engines 121 are disposed in the chip 12, and the chip 12 can process the service packets sent by the core processor 11 through the algorithm engines 121, for example, have the algorithm engines 121 encrypt and decrypt the service packets in CBC mode, and then send the processed service packets back to the core processor 11.
To improve the efficiency with which the chip 12 processes service packets, the plurality of algorithm engines 121 must be fully utilized.
For example, a total cache unit 122 for service packets and a number of input cache units 123 equal to the number of algorithm engines 121 may be partitioned in a memory inside the chip 12, such as Block RAM (Block Random Access Memory). Each input cache unit 123 corresponds to one of the algorithm engines 121, that is, the plurality of input cache units 123 correspond one-to-one to different algorithm engines 121. Thus, after receiving a service packet, the chip 12 first caches it in the total cache unit 122, determines from the plurality of input cache units 123 an input cache unit 123 capable of caching the service packet, and moves the service packet from the total cache unit 122 into the determined input cache unit 123. Since each algorithm engine 121 processes only the service packets in its own input cache unit 123, and the storage space of each input cache unit 123 has an upper limit, the number of service packets queued for any one algorithm engine 121 is bounded. In this way, service packets are distributed as uniformly as possible to the algorithm engines 121 through the plurality of input cache units 123, so that every algorithm engine 121 is fully utilized and the situation in which some algorithm engines 121 are overloaded while others sit idle is avoided.
Of course, partitioning the memory inside the chip 12 is only an exemplary manner of this embodiment and does not limit it. For example, as shown in fig. 2, the electronic device 10 may instead partition the total cache unit 122 and the plurality of input cache units 123 in the memory 13 of the electronic device 10.
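As a rough software model of this partitioning (a sketch under stated assumptions, not the application's actual implementation), the following C fragment represents the total cache unit 122 and the per-engine input cache units 123; the number of engines, the sizes, and all type and field names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_ENGINES     8           /* hypothetical number of algorithm engines 121 */
#define TOTAL_BUF_BYTES (64 * 1024) /* total cache unit 122; sized generously so it
                                       can absorb bursts the engines cannot keep up
                                       with                                         */

/* One input cache unit 123; each corresponds to exactly one algorithm engine. */
typedef struct {
    uint8_t *data;     /* backing storage carved out of Block RAM or memory 13 */
    size_t   capacity; /* fixed storage upper limit of this unit               */
    size_t   used;     /* bytes currently occupied                             */
} input_cache_unit;

typedef struct {
    uint8_t          total_buf[TOTAL_BUF_BYTES]; /* total cache unit 122      */
    input_cache_unit input_units[NUM_ENGINES];   /* units 123, one per engine */
} chip_buffers;

/* Remaining space of one input cache unit. */
static inline size_t remaining_space(const input_cache_unit *u)
{
    return u->capacity - u->used;
}
```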
The following describes, in terms of the method steps executed, how the chip 12 efficiently processes service packets by using the total cache unit 122 and the plurality of input cache units 123.
Referring to fig. 3 in conjunction with fig. 1, an embodiment of the present application provides a method for processing a packet, where the method is executed by the chip 12 that accelerates services of the core processor 11 in the electronic device 10, and the method may include:
Step S100: receiving a service packet sent by the core processor.
Step S200: determining, from a plurality of input cache units, an input cache unit capable of storing the service packet, and caching the service packet in the determined input cache unit, where the input cache units correspond one-to-one to different algorithm engines.
Step S300: processing the service packet through the algorithm engine corresponding to the determined input cache unit.
Step S400: sending the processed service packet to the core processor.
The following describes steps S100 to S400 in detail with reference to examples.
Step S100: receiving a service packet sent by the core processor.
The core processor 11 may send the chip 12 the service packets that need to be processed by the chip 12. Accordingly, the chip 12 may receive a service packet through an interface of the backplane where the chip is located, for example, an rx interface. After receiving the service packet, the chip 12 may first cache it in the total cache unit 122, to leave the chip 12 time to decide which algorithm engine 121 will process it.
It should be noted that the total cache unit 122 should be somewhat larger, so that when the plurality of algorithm engines 121 cannot keep up with a large number of service packets, the total cache unit 122 can accumulate a certain backlog of unprocessed service packets and thereby buy time for the algorithm engines 121.
Step S200: determining, from a plurality of input cache units, an input cache unit capable of storing the service packet, and caching the service packet in the determined input cache unit, where the input cache units correspond one-to-one to different algorithm engines.
To determine the input cache unit 123, the chip 12 may use either sequential selection or random selection.
Referring to fig. 1 and 3, for the sequential selection manner:
After the service packet is cached in the total cache unit 122, the chip 12 determines the data size of the service packet on the one hand, and obtains the current remaining space of each input cache unit 123 on the other. In this embodiment, the chip 12 sorts the plurality of input cache units 123 in advance by the size of their storage space. The chip 12 can then judge, one by one in this sorted order, whether the remaining space of each input cache unit 123 is greater than or equal to the data size of the service packet.
As an exemplary way of making this judgment, as many comparators as there are input cache units 123 may be deployed in the chip 12, each comparator being configured to compare the remaining space of one corresponding input cache unit 123 with the data size of the service packet. The chip 12 may feed the remaining space of each input cache unit 123 into its corresponding comparator, and feed the data size of the service packet into every comparator.
The chip 12 can then check, in the sorted order, whether the comparison result output by each comparator indicates that the remaining space of the corresponding input cache unit 123 is greater than or equal to the data size of the service packet. By checking in this order, the chip 12 determines the first input cache unit 123 whose remaining space is greater than or equal to the data size; this is an input cache unit 123 capable of storing the service packet, and the algorithm engine 121 corresponding to it is an algorithm engine 121 that can process the service packet.
For example, when a comparator determines by comparison that the remaining space of its input cache unit 123 is greater than or equal to the data size of the service packet, it may output a logic signal "1", and otherwise a logic signal "0". The chip 12 then checks in order which comparator outputs "1"; as soon as it first finds a comparator outputting "1", the chip 12 stops checking, ends the current judgment process, and selects the input cache unit 123 corresponding to that comparator.
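In software terms, this comparator chain amounts to a first-fit scan over the units in sorted order. A minimal sketch, reusing the hypothetical `input_cache_unit` type and `remaining_space()` helper from the earlier fragment:

```c
/* Sequential selection over units[] pre-sorted by storage-space size.
 * Each loop iteration plays the role of one comparator: it "outputs 1"
 * when the unit's remaining space is >= the packet's data size. The
 * first hit wins and the scan stops early, just as the chip ends the
 * judgment process at the first comparator outputting "1".            */
static int select_unit_sequential(const input_cache_unit *units,
                                  int num_units, size_t pkt_size)
{
    for (int i = 0; i < num_units; i++) {
        if (remaining_space(&units[i]) >= pkt_size)
            return i; /* this unit's algorithm engine will process the packet */
    }
    return -1;        /* no unit can currently hold the packet */
}
```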
After the input cache unit 123 is determined, the chip 12 may extract the service packet from the total cache unit 122 and store it in the determined input cache unit 123.
It should be noted that, so that the algorithm engines 121 are fully utilized while still being able to process jumbo-frame service packets, when the input cache units 123 are partitioned, the last input cache unit 123 (or the last few) in the sorted order may be given a larger space, for example at least 4 Block RAMs of 18 Kbit each (4 × 18 Kbit = 72 Kbit = 9216 bytes), so that its space exceeds the maximum data size of a service packet. The last input cache unit(s) 123 in the sorted order can thus store a jumbo-frame service packet of the maximum length of 8192 bytes. It can be understood that, with this arrangement, because the input cache units 123 capable of holding jumbo-frame service packets sit at the end of the sorted order, they are the last to be filled and therefore tend to have the most remaining space of all the input cache units 123. After receiving a jumbo-frame service packet, the chip 12 will find during the scan that none of the input cache units 123 earlier in the sorted order can store it, and will therefore reach the last input cache unit 123. Since that unit not only has enough space to hold a jumbo-frame service packet but is also the least occupied, the jumbo-frame service packet can be stored there, and the processing of jumbo-frame service packets is realized.
In addition, the chip 12 may also control the storage water level of the plurality of input cache units 123 on the basis of this ordered, sequential judgment.
For example, the chip 12 may designate an input cache unit 123 near the end of the sorted order as a threshold node. If, in the course of the sequential judgment, no input cache unit 123 capable of storing the service packet has been found by the time the judgment reaches the threshold node, then all the input cache units 123 before the threshold node are full, which indicates that the storage occupancy of the plurality of input cache units 123 is approaching its upper limit, that is, the storage water level of the plurality of input cache units 123 is nearly full. The chip 12 may therefore send a service suspension request to the core processor 11, so that the core processor 11 suspends sending new service packets to the chip 12; the storage water level of the plurality of input cache units 123 then starts to fall, and the water level is thereby controlled.
For example, if the judgment proceeds in order from the 1st input cache unit 123 to the 100th, the 95th input cache unit 123 in the order may be set as the threshold node. If the remaining space of the 95th input cache unit 123 is still smaller than the data size of the service packet, the storage water level of the plurality of input cache units 123 may have reached 95%, so a service suspension request is sent to the core processor 11 to bring the water level down from 95%.
It is worth pointing out that the threshold node should be placed before the input cache unit(s) 123 that can store jumbo-frame service packets, to avoid small packets piling up in the input cache units 123 reserved for jumbo frames.
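The threshold-node check can be layered on the same scan. In this sketch, `threshold_idx` is the position of the threshold node (e.g. 94 for the 95th of 100 units), and `send_pause_request()` is a hypothetical stand-in for whatever signalling the chip actually uses toward the core processor 11:

```c
/* Hypothetical notification to the core processor; the real mechanism
 * (interrupt, register write, message) is not specified in the text.  */
static void send_pause_request(void)
{
    /* ... ask the core processor to stop sending new service packets ... */
}

/* First-fit scan with a threshold node: if the unit at threshold_idx also
 * fails the comparison, every unit before it is full, so the storage
 * water level is treated as nearly full and a pause is requested. The
 * scan still continues, since the jumbo-frame units lie after the node. */
static int select_unit_with_threshold(const input_cache_unit *units,
                                      int num_units, int threshold_idx,
                                      size_t pkt_size)
{
    for (int i = 0; i < num_units; i++) {
        if (remaining_space(&units[i]) >= pkt_size)
            return i;
        if (i == threshold_idx)
            send_pause_request(); /* water level ~ (threshold_idx+1)/num_units */
    }
    return -1;
}
```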
For the random selection manner:
After the service packet is cached in the total cache unit 122, the chip 12 again determines the data size of the service packet on the one hand, and obtains the current remaining space of each input cache unit 123 on the other. The chip 12 then compares the remaining space of each input cache unit 123 with the data size of the service packet, and so determines the input cache units 123 whose remaining space is greater than or equal to the data size. The specific comparison between the remaining space of each input cache unit 123 and the data size of the service packet is as described above and is not repeated here.
After this determination, the chip 12 may randomly select one input cache unit 123 from among those whose remaining space is greater than or equal to the data size, and cache the service packet in the selected input cache unit 123.
It can be understood that, because the input cache unit 123 is selected at random, service packets are stored as uniformly as possible across the input cache units 123, so that the algorithm engines 121 are fully utilized.
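A corresponding sketch of the random mode: collect every unit whose remaining space suffices, then draw one uniformly. `rand()` is used purely for illustration; the hardware would use whatever randomness or rotation source it has.

```c
#include <stdlib.h> /* rand(); illustration only */

/* Random selection: gather all fitting units, then pick one at random,
 * which spreads service packets across the units as uniformly as possible. */
static int select_unit_random(const input_cache_unit *units,
                              int num_units, size_t pkt_size)
{
    int candidates[NUM_ENGINES];
    int n = 0;

    for (int i = 0; i < num_units; i++) {
        if (remaining_space(&units[i]) >= pkt_size)
            candidates[n++] = i;
    }
    if (n == 0)
        return -1;                 /* nothing fits at the moment */
    return candidates[rand() % n]; /* uniform draw over the candidates */
}
```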
It should be noted that, when the input cache unit 123 is selected at random, jumbo-frame service packets can still be supported by tagging: jumbo-frame service packets are identified, and the input cache units 123 large enough to store them are marked. When a service packet is identified as not being a jumbo frame, it is stored, guided by the marking, in one of the unmarked input cache units 123; when it is identified as a jumbo-frame service packet, it is stored in a marked input cache unit 123.
In addition, the chip 12 may also control the storage water level of the plurality of input cache units 123 on the basis of the random selection manner.
For example, the chip 12 may preset a ratio threshold for the proportion, among all the input cache units 123, of input cache units 123 whose remaining space is smaller than the data size of the service packet. If, during the determination, the proportion of input cache units 123 whose remaining space is smaller than the data size is found to be greater than the preset ratio threshold, the storage water level of the plurality of input cache units 123 has reached its upper limit. The chip 12 may therefore send a service suspension request to the core processor 11, so that the core processor 11 suspends sending new service packets to the chip 12; the storage water level of the plurality of input cache units 123 then starts to fall, and the water level is thereby controlled.
For example, suppose there are 100 input cache units 123 in total and the ratio threshold is set to 95%. If the determined proportion is greater than this threshold, the storage water level of the plurality of input cache units 123 may have reached 95%, so a service suspension request is sent to the core processor 11 to bring the water level down from 95%.
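The ratio-based water-level check might look as follows; `ratio_threshold_pct` (e.g. 95) stands in for the preset ratio threshold, and `send_pause_request()` is the hypothetical helper sketched earlier:

```c
/* Returns 1 and requests a pause when the share of units that cannot take
 * the packet exceeds the preset ratio threshold (e.g. more than 95 of 100). */
static int check_input_water_level(const input_cache_unit *units, int num_units,
                                   size_t pkt_size, int ratio_threshold_pct)
{
    int full = 0;
    for (int i = 0; i < num_units; i++) {
        if (remaining_space(&units[i]) < pkt_size)
            full++;
    }
    if (full * 100 > num_units * ratio_threshold_pct) {
        send_pause_request(); /* water level above threshold; let it drain */
        return 1;
    }
    return 0;
}
```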
In this embodiment, after caching the service packet in the determined input cache unit 123, the chip 12 may continue with step S300.
Step S300: processing the service packet through the algorithm engine corresponding to the determined input cache unit.
Each algorithm engine 121 extracts service packets from its own input cache unit 123 for processing, in the order in which they were stored there. When it is an algorithm engine 121's turn to process the service packet, the chip 12 extracts the service packet from the input cache unit 123 corresponding to that algorithm engine 121 and processes it through the algorithm engine 121, for example performing CBC-mode encryption or decryption, to obtain the processed service packet.
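As an illustration of this per-engine FIFO discipline, the loop below drains one engine's input cache unit in arrival order; `pop_packet()` and `cbc_process()` are hypothetical stand-ins for the dequeue and CBC-mode operations, which the application does not spell out.

```c
typedef struct {
    uint8_t *buf;  /* packet payload                                    */
    size_t   len;  /* payload length                                    */
    uint16_t seq;  /* unique serial number added on receipt (see below) */
} packet;

/* Hypothetical: dequeue the oldest packet from a unit; returns 0 when empty. */
int pop_packet(input_cache_unit *u, packet *out);

/* Hypothetical: CBC-mode encryption/decryption done by the engine. */
void cbc_process(packet *p);

/* One algorithm engine's service loop: packets leave the unit strictly in
 * the order they were stored, and each engine only touches its own unit. */
static void engine_run(input_cache_unit *my_unit)
{
    packet p;
    while (pop_packet(my_unit, &p)) {
        cbc_process(&p);
        /* ... cache the processed packet in this engine's output cache
         *     unit 124, as described under step S400 below ...          */
    }
}
```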
After obtaining the processed service packet, the chip 12 may continue with step S400.
Step S400: sending the processed service packet to the core processor.
In some embodiments, after obtaining the processed service packet, the chip 12 may send it directly to the core processor 11.
In other embodiments, after obtaining the processed service packets, the chip 12 may send them to the core processor 11 in the order in which the service packets were received, so as not to increase the burden on the core processor 11 (the core processor 11 is then not required to resolve the out-of-order problem itself).
Specifically, each time it receives a service packet, the chip 12 may add a unique serial number to the header of the service packet, for example a 16-bit unique serial number. The rule for assigning the unique serial number is that each newly received service packet gets the previous unique serial number plus 1; for example, the first-received service packet A gets the unique serial number 0x0000, and the later-received service packet B gets 0x0001. The unique serial number counts up from 0x0000 to 0xffff and, after reaching 0xffff, wraps around to 0x0000.
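The 16-bit wrap-around falls out of unsigned arithmetic for free; a minimal sketch of the tagging rule:

```c
#include <stdint.h>

static uint16_t next_seq = 0x0000; /* the first packet receives 0x0000 */

/* Assign the unique serial number on receipt; uint16_t arithmetic wraps
 * 0xffff back to 0x0000 automatically, matching the cycling rule above. */
static uint16_t assign_seq(void)
{
    return next_seq++;
}
```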
Based on the unique serial numbers, the chip 12 can determine, when outputting each processed service packet, whether other packets (other service packets) received before that service packet are still being processed; if so, it waits until those other packets are processed, and then outputs the processed service packet and the other processed packets to the core processor 11 in the order in which the service packet and the other packets were received.
Referring to fig. 4 and 5, it can be understood that when another packet received before the service packet is still being processed, the processed service packet needs to be cached to realize this delayed output. Output cache units 124, equal in number to the algorithm engines 121, may therefore be partitioned in the memory of the chip 12 or in the memory 13 of the electronic device 10, so that processed service packets are cached in the output cache units 124.
Specifically, each output cache unit 124 corresponds to one algorithm engine 121, so the chip 12 may store the processed service packets output by each algorithm engine 121 in the output cache unit 124 corresponding to that algorithm engine 121. When it is determined that a processed service packet can be output (that is, the packets received before it have all been processed, or have all been output), the processed service packet is extracted from the output cache unit 124 and sent to the core processor 11.
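A sketch of the in-order release: the chip tracks the next serial number owed to the core processor and releases a processed packet only when its number matches, leaving out-of-turn packets parked in their output cache units 124. `find_processed()` and `send_to_core()` are hypothetical; the `packet` type is the one sketched earlier.

```c
/* Hypothetical: look across the output cache units 124 for a finished
 * packet carrying this serial number; returns 0 if it is not ready yet. */
int find_processed(uint16_t seq, packet *out);

/* Hypothetical: transmit path back to the core processor 11. */
void send_to_core(const packet *p);

static uint16_t next_to_output = 0x0000;

/* Drain loop: packets go out strictly in receive order; a packet whose
 * predecessors are still being processed simply stays cached until its
 * turn comes, which is exactly the delayed output described above.     */
static void drain_in_order(void)
{
    packet p;
    while (find_processed(next_to_output, &p)) {
        send_to_core(&p);
        next_to_output++; /* wraps at 0xffff, like the tagging rule */
    }
}
```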
In this embodiment, the chip 12 may also control the storage water level of each output cache unit 124. For example, the chip 12 may preset a storage upper limit for the output cache unit 124, and when it determines that the storage amount of an output cache unit 124 has reached this upper limit, the chip 12 may send a service suspension request to the algorithm engine 121 corresponding to that output cache unit 124, so that the algorithm engine 121 suspends its ongoing processing, thereby bringing the storage water level of the output cache unit 124 down.
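The output-side control mirrors the input side. A sketch, reusing the `input_cache_unit` type for an output cache unit 124 and with `pause_engine()` hypothetical:

```c
/* Hypothetical: ask one algorithm engine to suspend its ongoing processing. */
void pause_engine(int engine_idx);

/* After caching a processed packet, stall the producing engine whenever its
 * output cache unit has reached the preset storage upper limit.            */
static void check_output_water_level(const input_cache_unit *out_unit,
                                     int engine_idx)
{
    if (out_unit->used >= out_unit->capacity)
        pause_engine(engine_idx); /* lets the output water level drop */
}
```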
Referring to fig. 6 in conjunction with fig. 1, based on the same inventive concept, an embodiment of the present application further provides a packet processing apparatus 100, where the packet processing apparatus 100 is applied to the chip 12 that accelerates services of the core processor 11 in the electronic device 10, and the packet processing apparatus 100 includes:
a data transceiver unit 110, configured to receive a service packet sent by the core processor 11; and
a data processing unit 120, configured to determine, from the plurality of input cache units 123, an input cache unit 123 capable of storing the service packet, to cache the service packet in the determined input cache unit 123, where the input cache units 123 correspond one-to-one to different algorithm engines 121, and to process the service packet through the algorithm engine 121 corresponding to the determined input cache unit 123;
the data transceiver unit 110 is further configured to send the processed service packet to the core processor 11.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Some embodiments of the present application further provide a computer-readable storage medium storing computer-executable nonvolatile program code; the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk, and the program code stored on it, when executed by a computer, performs the steps of the packet processing method of any of the above embodiments.
The program code product of the packet processing method provided in the embodiments of the present application includes a computer-readable storage medium storing the program code; the instructions included in the program code may be used to execute the method in the foregoing method embodiments, and for specific implementation reference may be made to the method embodiments, which are not repeated here.
In summary, embodiments of the present application provide a method and an apparatus for processing a packet, and an electronic device. By providing each of a plurality of algorithm engines with its own input cache unit in a one-to-one correspondence, the service packets to be processed by each algorithm engine are cached in the input cache unit corresponding to that engine. Because the storage capacity of each input cache unit has an upper limit, a large number of service packets cannot all be distributed to the same algorithm engine for processing; the load is therefore spread as uniformly as possible across the plurality of algorithm engines, the load on each algorithm engine is reduced, processing efficiency is improved, and device performance is improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.