Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
Embodiments of the disclosure provide a test method and a test device capable of applying the method. The method is applied to a microservice architecture that may include a first service and a second service, and the method may include: monitoring a first transaction request message sent by the first service to the second service; determining whether the first transaction request message can be used for testing the second service; if it is determined that the first transaction request message can be used for testing the second service, generating, based on the first transaction request message, a plurality of first test request messages for requesting service from the second service; sending the plurality of first test request messages to the second service; and monitoring the responses of the second service to the plurality of first test request messages, so as to implement the test on the second service.
Fig. 1 schematically illustrates a system architecture suitable for testing methods and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture includes: service consumers and service providers, as well as service registries, configuration centers and monitoring centers.
It should be understood that a service consumer represents a party that consumes a service and a service provider represents a party that provides a service. For example, in the chain "initiator -> service A -> service B -> service C -> service D -> service E" described above, for the link service A -> service B, service A calls service B, so relatively speaking service A is the service consumer and service B is the service provider. Likewise, for the link service B -> service C, service B invokes service C, so relatively speaking service B is the service consumer and service C is the service provider, and so on.
In addition, the service registry is used to register corresponding information for each service. The configuration center is used for configuring corresponding information for each service. The monitoring center is used for monitoring the whole process of each service so as to test the performance of some service nodes.
The test method and test device provided by the embodiments of the disclosure adopt a targeted, high-concurrency testing approach: a plurality of test request messages are continuously sent only to the services that explicitly need to undergo performance testing. Specifically, a normal transaction request is initiated on the full link; when the transaction request reaches a service that explicitly needs to be performance tested, a high-concurrency performance test is automatically initiated on that service, after which the flow automatically returns to the normal transaction request. This can solve the technical problems in the related art of high cost, low testing efficiency and the inability to complete long-flow performance verification, can reduce the dependence of performance testing on the whole application environment, and can improve the working efficiency of performance testing.
Fig. 2 schematically shows a flow chart of a testing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method may be applied to, for example, a microservice architecture that may include, for example, a first service and a second service, and the method may include, for example, operations S210 to S250.
In operation S210, a first transaction request message sent by a first service to a second service is monitored.
It should be understood that, in the disclosed embodiments, the first service has a calling relationship to the second service. That is, the first service is the service consumer and the second service is the service provider.
In addition, in the embodiment of the present disclosure, the first transaction request message may be, for example, a normal transaction request message.
Next, in operation S220, it is determined whether the first transaction request message can be used for testing the second service.
To implement the targeted testing, in the embodiment of the present disclosure, a corresponding instruction may be set in the normal transaction request message for instructing performance testing on certain services.
Then, in operation S230, if it is determined that the first transaction request message can be used to test the second service, a plurality of first test request messages for requesting service from the second service are generated based on the first transaction request message.
Specifically, a plurality of first test request messages may be generated based on the first transaction request message, and all of the first test request messages may be sent to the second service, so as to implement a high-concurrency stress test on the second service.
Then, a plurality of first test request messages are sent to the second service in operation S240.
Then, in operation S250, responses of the second service to the plurality of first test request messages are monitored, so as to implement a test on the second service.
For example, the responses of the second service to the plurality of first test request messages may be monitored to test the concurrent load that the second service can bear, its average response time, and the like.
Further, as an optional embodiment, determining whether the first transaction request message can be used for testing the second service may include, for example, determining whether the first transaction request message carries a predetermined instruction, where the predetermined instruction is used to indicate that the second service is to be tested.
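By way of illustration only, such a check might be implemented roughly as follows. The json field names used here (stressTest, needStressTest, targetServices) are assumptions chosen for the example and are discussed further with reference to table 4 below; the Jackson library is used only as a convenient json parser and is not required by the disclosure.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StressTagDetector {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Returns true if the transaction request body carries the (assumed) predetermined
    // instruction and the given service is listed as a target of the performance test.
    public static boolean isTestRequestFor(String requestBody, String serviceName) {
        try {
            JsonNode tag = MAPPER.readTree(requestBody).path("stressTest");
            if (!tag.path("needStressTest").asBoolean(false)) {
                return false;
            }
            for (JsonNode target : tag.path("targetServices")) {
                if (serviceName.equals(target.asText())) {
                    return true;
                }
            }
            return false;
        } catch (Exception e) {
            // A message that cannot be parsed is treated as a normal transaction request.
            return false;
        }
    }
}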
Fig. 3 schematically illustrates the processing of a request message by the replication device according to an embodiment of the disclosure. As shown in fig. 3, in the embodiment of the present disclosure, the flow of the replication device processing a request message includes operations S310 to S360.
In operation S310, the replication device is started to listen to all transaction request messages.
It should be understood that, in the embodiments of the present disclosure, the replication device is implemented by a replication program. In some cross-service performance test projects, due to the limitation of objective factors (such as limited test hardware resources, coordination among multiple services, and the like), a complete test environment cannot be built to complete the test work. According to the embodiments of the disclosure, the service under test is built first; a software program is then adopted to identify the service under test and, on the basis of the normal transaction request messages, to continuously copy a plurality of request messages as test request messages and send them to the service under test. This software program is generally referred to as a replication program.
Next, in operation S320, the received transaction request message is analyzed to determine whether it contains the special instruction of the performance test tag, i.e., whether the transaction request message is a test request message. If it is a test request message, the flow continues with operation S330; if not, the flow jumps directly to operation S360.
It should be noted that, in the embodiment of the present disclosure, the test request message may correspond to a data structure as shown in table 1.
TABLE 1
It should be understood that in the disclosed embodiments, the special instructions described above may be implemented in json-formatted pseudo code.
Then, in operation S330, when the message reaches the designated service B or service D, a targeted request message is copied from the transaction information in the normal transaction request message and is automatically turned into a test request message in combination with the pre-embedded data resources for parameterization. For example, the actual values of 100 card numbers predefined in the pre-embedded data processing module are shown in table 2.
TABLE 2
| cardno |
| 6222123456789000001 |
| 6222123456789000002 |
| … |
| 6222123456789000100 |
It should be understood that a targeted request message is a test request message sent to a service that has been explicitly identified as needing a performance test.
It should be noted that, in the embodiments of the present disclosure, each time a targeted request message is copied, the {cardno} placeholder in the message is replaced with the next real card number; that is, 6222123456789000001 is used for the first copy, 6222123456789000002 for the second copy, and so on.
Then, in operation S340, the test request message generated in operation S330 is forwarded as the targeted request message, and the next service is requested.
Then, in operation S350, it is determined, according to the special instruction of the performance test tag, whether the target "concurrency number" and "duration" have been reached. If they have, the copying of targeted request messages ends and the flow continues with operation S360; otherwise, the flow jumps back to operation S330 and the loop is repeated.
Then, in operation S360, the normal transaction request message is forwarded, and the next service is continuously called according to the normal flow.
Through the embodiments of the disclosure, requests can be continuously initiated toward a targeted service (i.e., a service that has been explicitly identified as needing a performance test) until the required concurrency number and duration are reached, so that the load-bearing capacity of the targeted service can be tested.
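A minimal sketch of the copy-and-dispatch loop of operations S330 to S350 is given below. The thread-pool fan-out and the names used here (TargetedStressLoop, forwardToNextService) are illustrative assumptions rather than the actual implementation of the replication device.
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class TargetedStressLoop {

    // Keeps copying the normal transaction request into test request messages and
    // dispatching them toward the targeted service until the configured concurrency
    // and duration have been reached (operations S330 to S350).
    public static void run(String normalRequest,
                           List<String> cardNumbers,      // pre-embedded parameter data (see table 2)
                           int concurrency,               // e.g. 100
                           Duration duration,             // e.g. Duration.ofSeconds(600)
                           Consumer<String> forwardToNextService) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        Instant deadline = Instant.now().plus(duration);
        for (int worker = 0; worker < concurrency; worker++) {
            final int offset = worker;
            pool.submit(() -> {
                int i = offset;
                while (Instant.now().isBefore(deadline)) {
                    // S330: replace the {cardno} placeholder with the next predefined card number.
                    String testMessage = normalRequest.replace("{cardno}",
                            cardNumbers.get(i % cardNumbers.size()));
                    forwardToNextService.accept(testMessage);  // S340: request the next service
                    i += concurrency;                          // workers step through distinct card numbers
                }
            });
        }
        pool.shutdown();  // S350: no further copies once the duration has elapsed
        pool.awaitTermination(duration.toSeconds() + 30, TimeUnit.SECONDS);
    }
}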
In the embodiments of the disclosure, a special instruction in the form of a performance test tag is added to the transaction request message. When the replication device identifies that the service to be called next by the transaction request message is a service requiring a performance test, it automatically copies a plurality of test request messages and, in combination with the pre-embedded data resources for parameterization, initiates high-concurrency requests toward that service to realize the performance test.
As an alternative embodiment, the microservice architecture may further include a third service, for example. Correspondingly, the method may further include, for example, preventing the second service from sending a plurality of second test request messages to the third service in the process of monitoring the responses of the second service to the plurality of first test request messages, where the plurality of second test request messages are messages generated by the second service in response to the plurality of first test request messages and used for requesting service from the third service.
According to the embodiments of the disclosure, for a service requiring a performance test, the baffle device can identify the test request messages and automatically derive the test return messages (also called targeted response messages or test response messages) from the normal transaction response message. In this way, a single normal transaction request is enough to perform an accurately targeted performance test on the service that needs it, without affecting the transaction or the calls to other services on the whole link.
It should be understood that in the disclosed embodiments, the baffle device is a device implemented by a baffle program. In some cross-service performance test projects, due to the limitation of objective factors (such as limited test hardware resources, coordination among multiple services, and the like), a complete test environment cannot be built to complete the test work. At this time, the service to be tested is generally built, and then a software program is adopted to simulate the functions of other related services. This software program is generally referred to as a baffle program.
As an alternative embodiment, the method may further include, for example: obtaining a second transaction request message, where the second transaction request message is a message generated by the second service in response to the first transaction request message and used for requesting service from the third service; forwarding the second transaction request message to the third service; acquiring a transaction response message generated by the third service in response to the second transaction request message; generating a plurality of test response messages based on the transaction response message, where the plurality of test response messages correspond one-to-one to the plurality of second test request messages; and sending the plurality of test response messages to the second service.
Fig. 4 schematically illustrates a flow chart of the processing performed by the baffle device according to an embodiment of the present disclosure. As shown in fig. 4, the flow of the baffle device processing a request message includes operations S410 to S480.
In operation S410, the baffle device is started to listen for all transaction request messages.
Next, in operation S420, the received transaction request message is analyzed to determine whether it is a test request message. If not, the flow continues with operation S430; if it is a test request message, the flow jumps to operation S460.
Then, in operation S430, the normal transaction request message is forwarded, and the next service is continuously called according to the normal flow.
Then, in operation S440, the normal transaction response message is monitored, and the related information of the normal transaction response message returned after the next service is called by the normal transaction request message in operation S430 is acquired.
Then, in operation S450, the normal transaction response message is forwarded and returned to the original request service according to the normal flow.
Then, in operation S460, a targeted response message is produced for each test request message; specifically, a correct test return message is automatically assembled from the normal transaction response message obtained in operation S440, in combination with the pre-embedded data resources for parameterization. For example, the actual values of 100 card numbers predefined in the pre-embedded data processing module are shown in table 3.
TABLE 3
| cardno |
| 6222123456789000001 |
| 6222123456789000002 |
| … |
| 6222123456789000100 |
It should be understood that each time a targeted response message is assembled, the {cardno} placeholder in the message is replaced with the next real card number; that is, 6222123456789000001 is used for the first message, 6222123456789000002 for the second message, and so on.
In operation S470, based on the test return message generated in operation S460, the targeted response message is forwarded and returned to the original request service.
In operation S480, it is determined whether the assembly of targeted response messages has been fully completed. If every test request message has been answered with a test return message, the assembly of targeted response messages ends and the whole processing flow is completed; otherwise, the flow returns to operation S460 and the loop continues.
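A simplified sketch of the baffle behaviour of operations S410 to S480 is given below; the single-string message model and the names used here (BaffleSketch, callDownstream) are assumptions made purely for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class BaffleSketch {

    // Forwards only the normal transaction request to the real downstream service,
    // then assembles one test return message per test request from the single real
    // response and the pre-embedded card numbers, without ever calling the downstream
    // service for the copied test requests.
    public static List<String> handle(String normalRequest,
                                      List<String> testRequests,
                                      List<String> cardNumbers,
                                      UnaryOperator<String> callDownstream) {
        // S430/S440: one real call, one real normal transaction response.
        String normalResponse = callDownstream.apply(normalRequest);

        // S460: one assembled test return message per test request.
        List<String> testResponses = new ArrayList<>();
        for (int i = 0; i < testRequests.size(); i++) {
            String cardno = cardNumbers.get(i % cardNumbers.size());
            testResponses.add(normalResponse.replace("{cardno}", cardno));
        }
        return testResponses;  // S470: returned to the original request service one by one
    }
}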
As an alternative embodiment, after sending the plurality of test response messages to the second service, the method may further include: preventing the second service from forwarding the plurality of test response messages to the first service.
Fig. 5 schematically shows a flow chart of a copying apparatus processing a reply message according to an embodiment of the present disclosure. As shown in fig. 5, the flow of the copy apparatus processing the response message is as shown in operations S510 to S550.
In operation S510, after the copying apparatus is started, all returned transaction response messages are automatically monitored.
In operation S520, the received transaction response message is analyzed to determine whether it is a test return message (i.e., a test response message). If it is a test return message, the flow continues with operation S530; if it is a normal transaction response message rather than a test return message, the flow jumps to operation S550.
In operation S530, the targeted response messages are processed: each test return message is terminated and closed.
In operation S540, it is determined whether the processing of targeted response messages has been fully completed. If all test return messages have been closed, the processing of targeted response messages ends and the flow continues with operation S550; otherwise, the flow returns to operation S530 and the loop continues.
In operation S550, the normal transaction response message is forwarded to the original request service, and the entire processing flow is completed.
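Expressed as a minimal sketch (the predicate isTestReturn and the single-string message model are assumptions for illustration), the filtering performed by the replication device on the response side might look as follows.
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class ResponseFilterSketch {

    // Out of all response messages returned by the tested service, keeps the single
    // normal transaction response and drops every test return message (S520 to S550).
    public static Optional<String> keepNormalResponse(List<String> responses,
                                                      Predicate<String> isTestReturn) {
        return responses.stream()
                        .filter(isTestReturn.negate())  // close/discard all test return messages
                        .findFirst();                   // only one normal response is forwarded upstream
    }
}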
It should be noted that the performance test process of the embodiments of the present disclosure is essentially transparent to the initiating end; only the special instruction of the performance test tag needs to be added to the message. Specifically, an initiation tool such as JMeter can be used to initiate the test request message from a Web front end or another suitable access point.
It should be understood that, for a service in the transaction flow, the targeted performance test of the relevant service can be completed automatically by the framework simply by adding the following annotation to the code of the user's service; an example is given below.
@Targeting
public RestTemplate restTemplate() {
    // The method body is omitted in the original; a standard bean definition is assumed here.
    return new RestTemplate();
}
Specifically, the targeted performance test process may include, for example, the replication device's transaction request processing, the baffle device's processing, and the replication device's transaction response processing, which may be implemented in the framework to which Targeting belongs. The detailed information of all service nodes (such as performance test results) is mainly provided by the monitoring center in the microservice framework, and includes the start time and end time of each node's service request, together with statistics on the relevant CPU, memory and IO.
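The @Targeting annotation itself is not defined in detail in this disclosure; a minimal declaration, assumed purely for illustration, could look like the following.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical declaration of the @Targeting marker used above; the actual framework
// annotation may carry additional attributes (for example, the targeted service name).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Targeting {
}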
It should be understood that, in the performance test process, the ability of the distributed system to scale dynamically can be fully utilized, so as to save system resources.
Specifically, before testing, the relevant services can be packaged as images and stored in an image repository. When the test starts, basic server resources are sufficient in the early stage as long as the transaction flow can be completed. During the targeted performance test, if the performance capacity of service B needs to be evaluated, the special instruction is added to the message, so that the performance indexes of the current service B can be evaluated in real time. If it is found that the current resources of service B are insufficient to meet the actual production requirements, the difference is calculated from the actual performance data, containers are built and started in real time through the configuration center in the microservice framework, and service B is dynamically scaled out until its performance capacity index reaches the preset target. When the performance capacity evaluation of service B is completed, the next service to be evaluated can be started, for example a performance test of service D; the containers where service B runs can then be deleted dynamically, and only service D, the target of the current test, is correspondingly scaled out.
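As a rough sketch only, the scale-out loop described above might be expressed as follows; the ConfigCenterClient interface and its methods are hypothetical placeholders, since the actual configuration-center API is not specified in this disclosure.
import java.util.function.IntToDoubleFunction;

public class DynamicScalingSketch {

    // Hypothetical client for the configuration center in the microservice framework.
    public interface ConfigCenterClient {
        void scaleService(String serviceName, int replicaCount);
        void removeService(String serviceName);
    }

    // Scales the tested service out step by step until the measured throughput reaches
    // the preset target, then removes the containers once its capacity evaluation is done.
    public static void evaluateCapacity(ConfigCenterClient configCenter,
                                        String serviceName,
                                        IntToDoubleFunction runStressTestAndMeasureTps,
                                        double targetTps,
                                        int maxReplicas) {
        for (int replicas = 1; replicas <= maxReplicas; replicas++) {
            configCenter.scaleService(serviceName, replicas);  // build and start containers in real time
            double measuredTps = runStressTestAndMeasureTps.applyAsDouble(replicas);
            if (measuredTps >= targetTps) {
                break;  // performance capacity index has reached the preset target
            }
        }
        configCenter.removeService(serviceName);  // release resources before evaluating the next service
    }
}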
As an optional embodiment, before obtaining the second transaction request message, the method may further include, for example: obtaining the first transaction request message from the first service, and forwarding the first transaction request message to the second service.
As an alternative embodiment, after obtaining the transaction response message generated by the third service in response to the second transaction request message, the method may further include: transmitting the transaction response message to the second service.
The present disclosure is described in detail below with reference to fig. 6 in conjunction with specific embodiments.
Fig. 6 schematically illustrates a schematic diagram of targeted performance testing based on a microservice architecture, according to an embodiment of the present disclosure. As shown in fig. 6, the complete flow of a transaction is detailed below.
The performance test transaction is initiated at the initiator, and service A is called to begin the transaction flow. Since service A is not the designated test object, there is no need to initiate a high-concurrency performance test for it; only a normal transaction request message is initiated, but the special instruction (i.e., the predetermined instruction) of the performance test tag needs to be injected into the transaction message. This special instruction can require an accurate targeted performance test for a particular service (e.g., service B or service D).
It should be understood that, in the disclosed embodiment, the performance test indexes may include, but are not limited to, throughput (QPS/TPS, i.e., the number of requests or transactions the system can process per unit time, for example per second), concurrency (i.e., the number of requests the system can process simultaneously), response time (TR, i.e., the total time from issuing a request to receiving its last response), performance counters (i.e., data indexes of the performance of the server or operating system), and the like. In an ideal model, QPS (TPS) = concurrency / TR; for example, with a concurrency of 100 and an average response time of 0.5 s, the throughput is about 200 requests per second. A specific performance index can therefore be obtained by controlling the concurrency.
In the disclosed embodiment, the data structure of the special instruction may include the contents shown in table 4, for example.
TABLE 4
The performance test switch indicates whether this request needs a performance test (true/false). If and only if it is true, a high-concurrency performance test is initiated for the specified services.
The performance test target service represents the service for which a targeted performance test is required in this request. For example, if only service B is set, then only service B is subjected to the high-concurrency targeted performance test. If both service B and service D are set, then only service B and service D are subjected to the high-concurrency targeted performance test. For the other services in the transaction flow, only a single request needs to be maintained, and no high-concurrency targeted performance test request needs to be initiated.
The concurrency number represents the number of transaction requests which are set to be simultaneously initiated for the specific service by the performance test. For example, if the concurrency number is set to 100, then 100 test requests need to be initiated simultaneously for the service targeted by the current target performance test (e.g., service B).
The duration represents how long this performance test lasts. For example, if the duration is set to 600 s, then the 100 concurrent test requests are continuously issued toward the service targeted by the current performance test (e.g., service B) for 600 s (10 min).
The parameter name indicates the name of the field for which data parameterization is required. For example, cardno may be set; when the subsequent high-concurrency performance test is initiated, the cardno in each test request is set to a different bank card number by the pre-embedded data processing module, instead of the same card number being used for every transaction, so as to avoid hot spots caused by the data.
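Putting the fields above together, the json-formatted special instruction might look roughly like the following; every key name is an assumption made for illustration, since the disclosure does not fix the exact schema. It is shown here as a Java text block for consistency with the other examples.
public class StressTagExample {

    // A sketch of a transaction request message carrying the special instruction of table 4.
    // All key names are assumptions; the actual message format may differ.
    public static final String SAMPLE_REQUEST = """
            {
              "transaction": { "cardno": "{cardno}" },
              "stressTest": {
                "needStressTest": true,
                "targetServices": ["serviceB", "serviceD"],
                "concurrency": 100,
                "durationSeconds": 600,
                "paramName": "cardno"
              }
            }
            """;
}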
When the transaction message carrying the special instruction enters the flow of calling service B, the replication device automatically detects that service B is the exact object of the current high-concurrency performance test. At this point, besides keeping one normal transaction request message to continue calling service B, the replication device combines the transaction information in the normal transaction request message with the pre-embedded data and automatically copies it into a plurality of test request messages, which call service B one by one; that is, a high-concurrency scheduling operation is executed on service B.
When service B completes its response and the transaction request message enters the flow of calling service C, the baffle device automatically detects that service C is not the object of the current high-concurrency test. The baffle device therefore calls service C only for the original normal transaction request message and obtains the returned result, i.e., the normal transaction response message. For the test request messages automatically copied by the replication device, the baffle device does not actually call service C; instead, it automatically assembles correct test return messages from the current normal transaction response message and returns them to service B one by one.
When service B returns the transaction response messages to service A, the replication device forwards only one normal transaction response message to service A and automatically filters out all test return messages produced by the baffle device, so that the whole transaction process is completed.
It should be noted that, in the embodiments of the present disclosure, the target performance testing system may include two devices, i.e., a replication device and a baffle device.
Fig. 7 schematically shows a block diagram of a copying apparatus according to an embodiment of the present disclosure. The copying apparatus is mainly responsible for the following operations.
(1) For the normal transaction request message, combining the pre-embedded data and automatically copying it into a plurality of test request messages, thereby initiating a high-concurrency performance test toward the designated service.
(2) Automatically filtering out the test return messages, reducing the high-concurrency response traffic, and keeping only one normal transaction response message.
To implement the above operations, specifically, in the embodiment of the present disclosure, as shown in fig. 7, the replication apparatus 700 may include, for example, a request message monitoring module 701, a response message monitoring module 702, a normal forwarding processing module 703, a pre-embedded data processing module 704, a targeted request processing module 705, and a targeted response processing module 706.
The request message monitoring module 701 is configured to monitor all transaction request messages and to analyze whether a received transaction request message contains the special instruction of the performance test tag.
The response message monitoring module 702 is configured to monitor all transaction response messages and to determine whether a received transaction response message is a normal transaction response message or a test return message.
The normal forwarding processing module 703 is configured to, for a received normal transaction request message, continue to invoke the next service according to the normal flow.
The pre-embedded data processing module 704 is used to pre-define data resources for parameterization. Taking a service requiring parameterization of card numbers as an example, 100 actual card number values may be predefined in the pre-embedded data processing module 704, as shown in table 5.
The targeted request processing module 705 is configured to process the received transaction request message: according to the transaction information in the normal transaction request message and in combination with the pre-embedded parameterized data resources, it automatically copies the message into multiple test request messages, invokes the next service one by one, and thereby performs high-concurrency scheduling on the next service. An example of each message copy is as follows.
startStresstesting(setStressMsg(data_msg,data_param))
Here, data_msg represents the transaction information in the normal transaction request message, and data_param represents the parameter named cardno predefined in the pre-embedded data processing module 704.
TABLE 5
| cardno |
| 6222123456789000001 |
| 6222123456789000002 |
| … |
| 6222123456789000100 |
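Purely as a sketch of what setStressMsg in the pseudo code above might do (the real module is not limited to string substitution), each copy could replace the parameter placeholder with the next value from the pre-embedded table.
import java.util.Iterator;
import java.util.List;

public class SetStressMsgSketch {

    private final Iterator<String> cardNumbers;

    public SetStressMsgSketch(List<String> preEmbeddedCardNumbers) {
        // Card numbers predefined in the pre-embedded data processing module (table 5).
        this.cardNumbers = preEmbeddedCardNumbers.iterator();
    }

    // Builds one test request message from the normal transaction message by replacing
    // the {cardno} placeholder with the next predefined card number: ...000001 on the
    // first copy, ...000002 on the second copy, and so on.
    public String setStressMsg(String dataMsg) {
        return dataMsg.replace("{cardno}", cardNumbers.next());
    }
}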
The targeted response processing module 706 is configured to process the received transaction response messages, return only one normal transaction response message to the original request service, and automatically filter out all test return messages. An example of ending each test return message is as follows.
if (isStresstMsg(data_msg))
    endStresstesting(data_msg)
Fig. 8 schematically illustrates a block diagram of a baffle device according to an embodiment of the present disclosure. As shown in fig. 8, the baffle device 800 is mainly responsible for performing baffle processing on the high-concurrency test request messages, assembling correct test return messages according to the actual normal transaction response message, and then returning them directly and automatically.
To achieve the above operation, in the embodiment of the present disclosure, as shown in fig. 8, the baffle device 800 may include, for example, a request message monitoring module 801, a response message monitoring module 802, a normal forwarding processing module 803, a pre-embedded data processing module 804, and a targeted response processing module 805.
The request message monitoring module 801 is configured to monitor all transaction request messages and to analyze whether a received transaction request message is a normal transaction request message or a test request message.
The response message monitoring module 802 is configured to monitor all transaction response messages and to obtain all the information of the normal transaction response message.
The normal forwarding processing module 803 is configured to forward the received normal transaction request message according to the normal flow and to return the resulting response information to the original request service.
The pre-embedded data processing module 804 is used for pre-defining data resources for parameterization.
The targeted response processing module 805 is configured to automatically assemble, for all test request messages, correct test return messages from the normal transaction response message acquired by the response message monitoring module 802 in combination with the pre-embedded parameterized data resources, and to return the test return messages to the original request service one by one.
FIG. 9 schematically shows a block diagram of a testing device according to an embodiment of the disclosure.
As shown in fig. 9, the test apparatus 900 may be applied to, for example, a microservice architecture including a first service and a second service. The test apparatus 900 includes a first listening module 901, a determining module 902, a generating module 903, a sending module 904, and a second listening module 905. The apparatus may perform the method described above with reference to the method embodiments, which is not repeated in detail here.
Specifically, the first listening module 901 may be configured, for example, to listen for the first transaction request message sent by the first service to the second service.
The determining module 902 may be used, for example, to determine whether the first transaction request message can be used to test the second service.
The generating module 903 may be configured, for example, to generate, based on the first transaction request message, a plurality of first test request messages for requesting service from the second service if it is determined that the first transaction request message can be used for testing the second service.
The sending module 904 may be configured, for example, to send the plurality of first test request messages to the second service.
The second listening module 905 may be configured, for example, to monitor the responses of the second service to the plurality of first test request messages, so as to implement the test on the second service.
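As a structural sketch only, the cooperation of these modules could be expressed with interfaces such as the following; the method signatures are assumptions chosen to mirror operations S210 to S250 and are not the actual API of the apparatus.
import java.util.List;

public class TestApparatusSketch {

    public interface FirstListeningModule { String listenForTransactionRequest(); }              // S210
    public interface DeterminingModule { boolean canBeUsedForTesting(String request); }           // S220
    public interface GeneratingModule { List<String> generateTestRequests(String request); }      // S230
    public interface SendingModule { void send(List<String> testRequests); }                      // S240
    public interface SecondListeningModule { void monitorResponses(List<String> testRequests); }  // S250

    // One pass through the method of fig. 2 using the five modules of fig. 9.
    public static void runOnce(FirstListeningModule first, DeterminingModule determine,
                               GeneratingModule generate, SendingModule send,
                               SecondListeningModule second) {
        String request = first.listenForTransactionRequest();
        if (determine.canBeUsedForTesting(request)) {
            List<String> testRequests = generate.generateTestRequests(request);
            send.send(testRequests);
            second.monitorResponses(testRequests);
        }
    }
}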
It should be noted that the embodiments of the apparatus portion and the method portion are similar to each other, and the achieved technical effects are also similar to each other, which are not described herein again.
Any of the modules according to embodiments of the present disclosure, or at least part of the functionality of any of them, may be implemented in one module. Any one or more of the modules according to the embodiments of the present disclosure may be split into a plurality of modules for implementation. Any one or more of the modules according to the embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of integrating or packaging a circuit by hardware or firmware, or implemented by any one of the three implementations of software, hardware, and firmware, or by a suitable combination of any of them. Alternatively, one or more of the modules according to embodiments of the disclosure may be implemented at least partly as computer program modules which, when executed, may perform corresponding functions.
For example, any plurality of thefirst listening module 901, the determiningmodule 902, thegenerating module 903, the sendingmodule 904 and thesecond listening module 905 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of thefirst listening module 901, the determiningmodule 902, thegenerating module 903, the sendingmodule 904, and thesecond listening module 905 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or implemented by a suitable combination of any of them. Alternatively, at least one of thefirst listening module 901, the determiningmodule 902, thegenerating module 903, the sendingmodule 904 and thesecond listening module 905 may be at least partly implemented as a computer program module, which when executed may perform a corresponding function.
FIG. 10 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 includes a processor 1010 and a computer-readable storage medium 1020. The electronic device 1000 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 1010 may include, for example, a general purpose microprocessor, an instruction set processor and/or a related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 1010 may also include on-board memory for caching purposes. The processor 1010 may be a single processing unit or multiple processing units for performing different acts of a method flow according to embodiments of the disclosure.
The computer-readable storage medium 1020 may be, for example, a non-volatile computer-readable storage medium; specific examples include, but are not limited to: a magnetic storage device, such as magnetic tape or a hard disk drive (HDD); an optical storage device, such as a compact disc (CD-ROM); a memory, such as a random access memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 1020 may comprise a computer program 1021, which computer program 1021 may comprise code/computer-executable instructions that, when executed by the processor 1010, cause the processor 1010 to perform a method according to an embodiment of the disclosure, or any variant thereof.
The computer program 1021 may be configured with computer program code, for example comprising computer program modules. For example, in an example embodiment, the code in the computer program 1021 may include one or more program modules, including, for example, module 1021A, module 1021B, and so on. It should be noted that the division and number of modules are not fixed, and those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these program modules are executed by the processor 1010, the processor 1010 may perform the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the first listening module 901, the determining module 902, the generating module 903, the sending module 904 and the second listening module 905 may be implemented as a computer program module described with reference to fig. 10, which, when executed by the processor 1010, may implement the respective operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood by those skilled in the art that, while the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims but also by their equivalents.