Disclosure of Invention
In view of this, the embodiments of the present invention provide a method and apparatus for processing requests, which can process each request fairly.
To achieve the above object, according to one aspect of an embodiment of the present invention, there is provided a method of processing a request.
The method for processing the request comprises the following steps:
setting a processing duration for a thread to process a request according to a service identifier, wherein the request carries the service identifier; and
controlling the thread to process the request according to the processing duration.
In one embodiment, setting a processing duration for a thread to process a request according to a service identifier includes:
determining, according to the service identifier, whether to use a TP algorithm to set the processing duration for the thread to process the request;
if yes, setting the processing duration of the thread processing the request by using the TP algorithm;
if not, setting a maximum processing duration corresponding to the service identifier as the processing duration of the thread processing the request;
wherein the processing duration is the sum of the duration for which the thread sends the request to a service, the duration for which the thread waits for the service to respond to the request and return a result, and the duration for which the thread sends the result; and
the service identifier comprises information assigned to the service by a gateway or information provided to the gateway by the service.
In one embodiment, determining, according to the service identifier, whether to use a TP algorithm to set the processing duration for the thread to process the request includes:
screening out, from a plurality of sampling queues according to the names of the sampling queues, a sampling queue whose name is the same as the service identifier, and taking that sampling queue as a target queue; and
determining whether the number of historical durations in the target queue is equal to the dimension of the TP algorithm,
wherein a historical duration is the duration for which the thread processed a historical request carrying the same service identifier.
In one embodiment, setting the processing duration of the thread processing the request by using the TP algorithm includes:
sorting the historical durations in the target queue in ascending order, and multiplying the number of historical durations by the index number of the TP algorithm to obtain a sampling value;
screening out, from the sorted historical durations according to the sampling value, the historical duration ranked at the sampling value position; and
setting the processing duration of the thread processing the request according to the historical duration ranked at the sampling value position.
In one embodiment, setting the processing duration of the thread processing the request according to the historical duration ranked at the sampling value position includes:
determining whether the historical duration ranked at the sampling value position is greater than a minimum processing duration corresponding to the service identifier;
if yes, setting the historical duration ranked at the sampling value position as the processing duration of the thread processing the request; and
if not, setting the minimum processing duration as the processing duration of the thread processing the request.
In one embodiment, after controlling the thread to process the request according to the processing duration, the method further comprises:
inserting the duration for which the thread processed the request into the target queue according to a first-in first-out method.
In one embodiment, controlling the thread to process the request according to the processing duration includes:
starting timing when the thread starts to process the request, and determining whether the timed duration is equal to the processing duration;
if yes, terminating the thread's processing of the request; if not, controlling the thread to continue processing the request.
To achieve the above object, according to another aspect of an embodiment of the present invention, there is provided an apparatus for processing a request.
The apparatus for processing a request according to an embodiment of the present invention comprises the following components:
a setting unit configured to set a processing duration for a thread to process a request according to a service identifier, wherein the request carries the service identifier; and
a control unit configured to control the thread to process the request according to the processing duration.
In one embodiment, the setting unit is specifically configured to:
determine, according to the service identifier, whether to use a TP algorithm to set the processing duration for the thread to process the request;
if yes, set the processing duration of the thread processing the request by using the TP algorithm;
if not, set a maximum processing duration corresponding to the service identifier as the processing duration of the thread processing the request;
wherein the processing duration is the sum of the duration for which the thread sends the request to a service, the duration for which the thread waits for the service to respond to the request and return a result, and the duration for which the thread sends the result; and
the service identifier comprises information assigned to the service by a gateway or information provided to the gateway by the service.
In one embodiment, the setting unit is specifically further configured to:
screen out, from a plurality of sampling queues according to the names of the sampling queues, a sampling queue whose name is the same as the service identifier, and take that sampling queue as a target queue; and
determine whether the number of historical durations in the target queue is equal to the dimension of the TP algorithm,
wherein a historical duration is the duration for which the thread processed a historical request carrying the same service identifier.
In one embodiment, the setting unit is specifically further configured to:
sort the historical durations in the target queue in ascending order, and multiply the number of historical durations by the index number of the TP algorithm to obtain a sampling value;
screen out, from the sorted historical durations according to the sampling value, the historical duration ranked at the sampling value position; and
set the processing duration of the thread processing the request according to the historical duration ranked at the sampling value position.
In one embodiment, the setting unit is specifically further configured to:
determine whether the historical duration ranked at the sampling value position is greater than a minimum processing duration corresponding to the service identifier;
if yes, set the historical duration ranked at the sampling value position as the processing duration of the thread processing the request; and
if not, set the minimum processing duration as the processing duration of the thread processing the request.
In one embodiment, the control unit is specifically configured to:
insert the duration for which the thread processed the request into the target queue according to a first-in first-out method after controlling the thread to process the request according to the processing duration.
In one embodiment, the control unit is specifically further configured to:
start timing when the thread starts to process the request, and determine whether the timed duration is equal to the processing duration;
if yes, terminate the thread's processing of the request; if not, control the thread to continue processing the request.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
The electronic apparatus according to an embodiment of the present invention comprises one or more processors and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of processing a request provided by the embodiments of the present invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, a computer-readable medium is provided.
The computer-readable medium according to an embodiment of the present invention has stored thereon a computer program which, when executed by a processor, implements the method of processing a request provided by the embodiments of the present invention.
An embodiment of the present invention has the following advantages: a processing duration is set for the thread to process the request, and the thread's processing of the request is controlled according to the processing duration, so that the processing of each request is limited to a reasonable time range; under normal conditions the thread's processing of the request is not affected, and under abnormal conditions the thread's processing of other requests is not delayed, so that each request is processed fairly. Moreover, the processing duration is set according to the service identifier, so that various types of requests are processed according to their own processing durations, which improves the applicability of the processing.
Further effects of the above optional implementations are described below in connection with the embodiments.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding and are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is noted that embodiments of the invention and features of the embodiments may be combined with each other without conflict.
To solve the problems in the prior art, an embodiment of the present invention provides a method for processing a request. As shown in fig. 3, the method includes:
Step S301, setting a processing duration for a thread to process a request according to a service identifier, wherein the request carries the service identifier.
In a specific implementation of this step, the processing duration may be set according to the service identifier using the maximum processing duration, using the minimum processing duration, or by screening a historical duration out of the target queue based on the TP algorithm and setting the processing duration from that historical duration. In addition, the service identifier includes information assigned to the service by the gateway or information provided to the gateway by the service. The information assigned to the service by the gateway may be an ID assigned to the service by the gateway, and the information provided to the gateway by the service may be at least one of a service name, a service creation time, the name of the service provider, and the IP address of the device where the service is located.
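For illustration only, the following is a minimal Java sketch of how step S301 might be implemented. The class and method names (TimeoutPolicy, SamplingQueue, processingDurationFor, and so on) are hypothetical assumptions, not part of the embodiment; the sketch simply shows the three cases described above.

import java.util.Map;

public class TimeoutPolicy {

    /** Hypothetical view of a target queue: "full" means its size equals the TP dimension. */
    public interface SamplingQueue {
        boolean isFull();
        long durationAtSamplingValue();   // historical duration ranked at the sampling value position
    }

    private final Map<String, SamplingQueue> samplingQueuesByServiceId;   // sampling queues named by service identifier
    private final Map<String, Long> maxDurationMs;                        // maximum processing duration per service
    private final Map<String, Long> minDurationMs;                        // minimum processing duration per service

    public TimeoutPolicy(Map<String, SamplingQueue> samplingQueuesByServiceId,
                         Map<String, Long> maxDurationMs,
                         Map<String, Long> minDurationMs) {
        this.samplingQueuesByServiceId = samplingQueuesByServiceId;
        this.maxDurationMs = maxDurationMs;
        this.minDurationMs = minDurationMs;
    }

    /** Returns the processing duration (in milliseconds) for a request carrying the given service identifier. */
    public long processingDurationFor(String serviceId) {
        SamplingQueue target = samplingQueuesByServiceId.get(serviceId);   // screen out the target queue by name
        if (target != null && target.isFull()) {                           // enough historical durations: use the TP algorithm
            long tpValue = target.durationAtSamplingValue();
            return Math.max(tpValue, minDurationMs.get(serviceId));        // never below the minimum processing duration
        }
        return maxDurationMs.get(serviceId);                               // otherwise fall back to the maximum processing duration
    }
}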
Different types of requests are responded to by different services, so different processing durations are set and the requests are processed differently, which isolates the processing of the requests from one another. Setting the processing duration and controlling the thread to process the request according to the processing duration is equivalent to limiting the processing of the request to a reasonable time range, ensuring that under normal conditions the thread's processing of the request is not affected, and that under abnormal conditions the thread's processing of other requests is not delayed.
Step S302, controlling the thread to process the request according to the processing duration.
In a specific implementation, timing starts when the thread starts to process the request; when the timed duration equals the processing duration, the thread's processing of the request is terminated, so that the processing of the request is limited to a reasonable time range and the thread is released to process other requests, making full use of thread resources. It should be understood that if the gateway implementing the embodiment of the present invention is a single process, that process receives each request, sets the processing duration, and controls the thread to process the request according to the processing duration. If the gateway is multi-process, any one of the processes receives the request (which process receives it may be controlled by a load balancer), sets the processing duration, and controls the thread to process the request according to the processing duration.
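As a non-limiting sketch of step S302, one possible Java implementation controls the thread through a Future with a timeout. The class name, the fixed pool size, and the use of Callable are assumptions made for illustration only.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RequestController {

    private final ExecutorService threadPool = Executors.newFixedThreadPool(16);   // pool size is illustrative

    /** Controls one thread's processing of a request according to the processing duration (ms). */
    public String process(Callable<String> request, long processingDurationMs) {
        Future<String> future = threadPool.submit(request);   // the thread starts processing; timing starts here
        try {
            return future.get(processingDurationMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // the timed duration equals the processing duration: terminate processing and release the thread
            future.cancel(true);
            return null;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            future.cancel(true);
            return null;
        } catch (ExecutionException e) {
            future.cancel(true);
            return null;
        }
    }
}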
In the embodiment of the present invention, a processing duration is set for the thread to process the request, and the thread's processing of the request is controlled according to the processing duration, so that the processing of each request is limited to a reasonable time range; under normal conditions the thread's processing of the request is not affected, under abnormal conditions the thread's processing of other requests is not delayed, and each request is processed fairly. Moreover, the processing duration is set according to the service identifier, so that various types of requests are processed according to their own processing durations, which improves the applicability of the processing.
To solve the problems of the prior art, another embodiment of the present invention provides a method for processing a request. As shown in fig. 4, the method includes:
Step S4001, screening out, from a plurality of sampling queues according to the names of the sampling queues, a sampling queue whose name is the same as the service identifier, and taking that sampling queue as the target queue.
This step is illustrated with a specific example. As shown in fig. 5, there are two requests in the request queue: request 1 (a lightweight request) and request 2 (a heavyweight request). The name of sampling queue 1 is the same as service identifier 1 carried by request 1, and the name of sampling queue 2 is the same as service identifier 2 carried by request 2. Thus, sampling queue 1 serves as the target queue for service identifier 1, and sampling queue 2 serves as the target queue for service identifier 2. Then, processing duration 1, for one thread in the thread pool to process request 1, is set based on sampling queue 1, and processing duration 2, for one thread in the thread pool to process request 2, is set based on sampling queue 2. Finally, the thread's processing of request 1 is controlled according to processing duration 1, and the thread's processing of request 2 is controlled according to processing duration 2. In this way, the processing of the requests is isolated. In addition, the service identifier includes information assigned to the service by the gateway or information provided to the gateway by the service. The information assigned to the service by the gateway may be an ID assigned to the service by the gateway, and the information provided to the gateway by the service may be at least one of a service name, a service creation time, the name of the service provider, and the IP address of the device where the service is located.
It should be noted that historical durations are stored in the sampling queues, and the maximum number of historical durations is the dimension of the TP algorithm, which is also the length of the sampling queue. The historical durations in a sampling queue are the durations for which threads processed historical requests of the same type; historical requests of the same type are responded to by the same service, so each sampling queue can be named by the service identifier of that service, and the sampling queues are independent of one another. In addition, the lengths of the sampling queues may differ, that is, TP algorithms of different dimensions may be used to set the processing durations for threads to process different types of requests. For example, the processing duration for a thread to process a type A request may be set using the TP99 algorithm (99 is the dimension of the TP99 algorithm and also the length of the target queue), and the processing duration for a type B request may be set using the TP50 algorithm (50 is the dimension of the TP50 algorithm and also the length of the target queue). Furthermore, when a service accesses the gateway, the gateway assigns a unique service identifier to the service, and the sender of a request encapsulates the service identifier in the request. The service identifier may be a service ID.
In addition, if the gateway has multiple processes, a plurality of sampling queues may be set for each process. The historical durations in a process's sampling queues represent the performance of that process, and setting the processing duration of a thread processing a request according to that process's sampling queues makes the setting of the processing duration more accurate.
Moreover, different request types carry different service identifiers and are responded to by different services, so different target queues are screened out, different processing durations are set, and the requests are processed differently. This improves the applicability of the processing, makes the method suitable for processing both a single type and various types of requests, and ensures the isolation of request processing.
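The per-service sampling queues described above could be organized as in the following hypothetical Java sketch, where each queue is named by the service identifier and sized to the dimension of the TP algorithm chosen for that service (for example, 99 for TP99 and 50 for TP50). All class and method names here are illustrative assumptions.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SamplingQueueRegistry {

    /** Minimal stand-in for a sampling queue: a bounded first-in first-out list of historical durations. */
    public static final class SamplingQueue {
        private final Deque<Long> durations = new ArrayDeque<>();
        private final int tpDimension;                 // dimension of the TP algorithm = queue length

        SamplingQueue(int tpDimension) { this.tpDimension = tpDimension; }

        synchronized void insert(long durationMs) {
            if (durations.size() == tpDimension) {
                durations.pollFirst();                 // remove the oldest historical duration
            }
            durations.addLast(durationMs);
        }

        synchronized boolean isFull() { return durations.size() == tpDimension; }
    }

    private final Map<String, SamplingQueue> queuesByServiceId = new ConcurrentHashMap<>();

    /** Called when a service accesses the gateway and is assigned its unique service identifier. */
    public void register(String serviceId, int tpDimension) {
        queuesByServiceId.put(serviceId, new SamplingQueue(tpDimension));
    }

    /** Screens out the sampling queue whose name matches the request's service identifier (the target queue). */
    public SamplingQueue targetQueueFor(String serviceId) {
        return queuesByServiceId.get(serviceId);
    }
}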
Step S4002, determining whether the number of historical durations in the target queue is equal to the dimension of the TP algorithm, where a historical duration is the duration for which the thread processed a historical request carrying the same service identifier.
In this step, it is determined whether the number of historical durations in the target queue is equal to the dimension of the TP algorithm; if so, step S4004 is executed, and if not, step S4003 is executed.
The TP algorithm (TP = Top Percentile, a term in statistics) may be the TP50 algorithm, the TP99 algorithm, the TP999 algorithm, and so on, where 50, 99, and 999 are the dimensions of the respective TP algorithms. It should be noted that the TP algorithm can only be used to set the processing duration when the number of historical durations in the target queue is equal to the dimension of the TP algorithm; this is determined by the definition of the TP algorithm itself. In addition, the historical durations in the target queue are the durations for which threads processed historical requests of the same type, so setting the processing duration based on the target queue makes the setting more accurate, limiting the processing of the request to a more reasonable time range, ensuring that under normal conditions the thread's processing of the request is not affected, and that under abnormal conditions the thread's processing of other requests is not delayed.
Step S4003, setting the maximum processing duration corresponding to the service identifier as the processing duration of the thread processing the request, where the processing duration is the sum of the duration for which the thread sends the request to the service, the duration for which the thread waits for the service to respond to the request and return a result, and the duration for which the thread sends the result.
In this step, different request types carry different service identifiers, are responded to by different services, and have different response times, so the maximum processing duration is set according to the service identifier, thereby limiting the processing of various types of requests to a more reasonable time range and further improving the applicability of the processing.
It should be noted that regardless of whether the maximum processing duration, the minimum processing duration, or the historical duration ranked at the sampling value position is set as the processing duration, the duration for which the thread processed the request is inserted into the target queue. As the number of processed requests grows, the number of historical durations in the target queue keeps increasing, and once it reaches the dimension of the TP algorithm, the TP algorithm can be used to set the processing duration.
In addition, setting the maximum processing duration as the processing duration reduces the influence of the processing of a heavyweight request on the processing of a lightweight request, because the heavyweight request is limited by the maximum processing duration and cannot occupy the thread indefinitely, which improves the fairness of processing and preserves the performance of the gateway. This effect is particularly evident in customer service systems, because the value-added services that access the gateway are more complex and diverse.
Step S4004, sorting the historical durations in the target queue in ascending order.
Step S4005, multiplying the number of historical durations by the index number of the TP algorithm to obtain a sampling value.
In this step, taking the TP99 algorithm as an example, the number of historical durations is 99 and the index number of the TP99 algorithm is 99%; 99 × 99% = 98.01, and 98 is taken as the sampling value.
Step S4006, screening out, from the sorted historical durations according to the sampling value, the historical duration ranked at the sampling value position.
In this step, continuing the example of step S4005, the historical duration ranked at the 98th position is screened out from the sorted historical durations according to the sampling value 98.
Step S4007, determining whether the historical duration ranked at the sampling value position is greater than the minimum processing duration corresponding to the service identifier.
In this step, it is determined whether the historical duration ranked at the sampling value position is greater than the minimum processing duration corresponding to the service identifier; if so, step S4008 is executed, and if not, step S4009 is executed.
Step S4008, setting the historical duration ranked at the sampling value position as the processing duration of the thread processing the request.
In this step, continuing the example of step S4006, suppose the historical duration ranked at the 98th position is 10 seconds; then 10 seconds is set as the processing duration for the thread to process the request.
In addition, the TP algorithm is an important index for measuring performance, so the historical duration obtained by the TP algorithm is more representative of the thread's processing performance. When this historical duration is set as the processing duration, the setting is more accurate and the processing of the request is limited to a more reasonable time range, ensuring that under normal conditions the processing of the request is not affected, that under abnormal conditions the thread's processing of other requests is not delayed, and that each request is processed fairly.
Step S4009, setting the minimum processing duration as the processing duration of the thread processing the request.
In this step, different request types carry different service identifiers, are responded to by different services, and have different response times, so the minimum processing duration is set according to the service identifier.
In addition, the duration for which a thread processes a lightweight request is much shorter than the duration for which it processes a heavyweight request. When the historical duration obtained for a lightweight request is smaller than the minimum processing duration, the minimum processing duration is used as the processing duration, which guarantees the processing of the lightweight request to the greatest extent and preserves the performance of the gateway.
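Steps S4004 to S4009 could be expressed, under the same illustrative assumptions as the earlier sketches, as the following Java fragment; the index-number value and the one-based position used here simply mirror the TP99 example above.

import java.util.Arrays;

public final class TpDurationCalculator {

    private TpDurationCalculator() {}

    /**
     * @param historicalDurationsMs historical durations from a full target queue
     *                              (its size equals the dimension of the TP algorithm)
     * @param indexNumber           index number of the TP algorithm, e.g. 0.99 for TP99
     * @param minDurationMs         minimum processing duration corresponding to the service identifier
     * @return processing duration for the thread to process the request, in milliseconds
     */
    public static long processingDuration(long[] historicalDurationsMs, double indexNumber, long minDurationMs) {
        long[] sorted = historicalDurationsMs.clone();
        Arrays.sort(sorted);                                         // S4004: ascending order
        int samplingValue = (int) (sorted.length * indexNumber);     // S4005: e.g. 99 * 99% = 98.01, take 98
        int position = Math.max(samplingValue, 1);                   // guard for very small queues
        long candidate = sorted[position - 1];                       // S4006: duration ranked at the sampling value position (1-based)
        return Math.max(candidate, minDurationMs);                   // S4007 to S4009: never below the minimum processing duration
    }
}

For example, with 99 sorted durations and an index number of 0.99, position 98 is selected; if that duration were 10 seconds and the minimum processing duration 2 seconds, 10 seconds would be returned.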
Step S4010, starting timing when the thread starts to process the request, and determining whether the timed duration is equal to the processing duration.
In this step, it is determined whether the timed duration is equal to the processing duration; if so, step S4011 is executed, and if not, step S4012 is executed.
It should be noted that when the timed duration is equal to the processing duration, it is confirmed that the processing of the request is abnormal; the abnormality may be a network abnormality between the gateway and the service, or an abnormality in the service's response to the request.
Step S4011, terminating the thread's processing of the request.
In a specific implementation of this step, the request may be a request to acquire a user's avatar, a request for a user's weekly consultation sessions, a request for a user's monthly orders, or the like. When the timed duration is equal to the processing duration, the processing of the request is abnormal; terminating the thread's processing releases the thread resource so that the thread can process other requests, making full use of thread resources, allowing other requests to be processed in time, reducing the influence of this request's processing on the thread's processing of other requests, and enabling each request to be processed fairly.
Step S4012, controlling the thread to continue processing the request.
In this step, it should be noted that the thread's processing of the request includes sending the request to the service, waiting for the service to respond to the request and return a result, and sending the result; therefore, controlling the thread to continue processing the request includes continuing to send the request to the service, continuing to wait for the service to respond and return the result, or continuing to send the result. In addition, the service may be a value-added service of a customer service system, or the like.
Step S4013, inserting the duration for which the thread processed the request into the target queue according to a first-in first-out method.
In this embodiment, the target queue is a first-in first-out circular queue. When the duration for which the thread processed the request is inserted into the target queue, the start position (head), end position (tail), and insertion position (pos) of the target queue change accordingly. As shown in fig. 6, the insertion procedure is described below with a specific example in which the length of the target queue is assumed to be 3, so the dimension of the TP algorithm is also 3.
In the initial state, tail = head = pos = 0.
Inserting A into the target queue:
pos = original tail = 0;
tail = original tail + 1 = 0 + 1 = 1;
new tail = tail MOD 3 (the dimension of the TP algorithm is 3) = 1 MOD 3 = 1;
new tail (1) > original head (0);
new head = original head = 0;
the target queue changes to:
list[pos=0] = A;
tail = 1;
head = 0.
Inserting B into the target queue:
pos = original tail = 1;
tail = original tail + 1 = 1 + 1 = 2;
new tail = tail MOD 3 = 2 MOD 3 = 2;
new tail (2) > original head (0);
new head = original head = 0;
the target queue changes to:
list[pos=0] = A, list[pos=1] = B;
tail = 2;
head = 0.
Inserting C into the target queue:
pos = original tail = 2;
tail = original tail + 1 = 2 + 1 = 3;
new tail = tail MOD 3 = 3 MOD 3 = 0;
new tail (0) = original head (0);
new head = original head + 1 = 0 + 1 = 1;
the target queue changes to:
list[pos=0] = A, list[pos=1] = B, list[pos=2] = C;
tail = 0;
head = 1.
At this point the target queue is full and can be used to set the processing duration. The process of inserting the duration for which the thread processed the request (denoted by D) into the target queue is as follows:
pos = original tail = 0;
tail = original tail + 1 = 0 + 1 = 1;
new tail = tail MOD 3 = 1 MOD 3 = 1;
new tail (1) = original head (1);
new head = original head + 1 = 1 + 1 = 2;
the target queue changes to:
list[pos=0] = B, list[pos=1] = C, list[pos=2] = D;
tail = 1;
head = 2.
When insertion into the target queue follows the first-in first-out method, the first-inserted A is removed, B takes the position of A, C takes the position of B, and D takes the position of C.
In addition, because the duration for which the thread processed the request is inserted into the target queue according to the first-in first-out method, the historical durations in the target queue are always the most recent ones, and the most recent historical durations reflect the latest performance of the threads. When the processing duration is set from historical durations that reflect the latest performance, the setting is more accurate and the processing of the request is limited to a more reasonable time range, ensuring that under normal conditions the thread's processing of the request is not affected, that under abnormal conditions the thread's processing of other requests is not delayed, and that each request is processed fairly.
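A first-in first-out circular target queue of the kind walked through above could look like the following Java sketch. It uses conventional ring-buffer bookkeeping, so the head/tail arithmetic differs in detail from the worked example, but the effect is the same: once the queue is full, each newly inserted duration overwrites the oldest one. The class name and methods are assumptions for illustration.

public class CircularSampleQueue {

    private final long[] list;     // historical durations
    private int head = 0;          // start position: index of the oldest duration
    private int tail = 0;          // end position: next insertion position (pos)
    private int count = 0;         // number of historical durations currently stored

    public CircularSampleQueue(int tpDimension) {
        this.list = new long[tpDimension];   // queue length = dimension of the TP algorithm
    }

    /** Inserts the duration for which the thread processed a request, first-in first-out. */
    public synchronized void insert(long durationMs) {
        list[tail] = durationMs;              // pos = original tail
        tail = (tail + 1) % list.length;      // new tail = (original tail + 1) MOD dimension
        if (count == list.length) {
            head = (head + 1) % list.length;  // queue already full: the oldest duration is removed
        } else {
            count++;
        }
    }

    /** True when the number of historical durations equals the dimension of the TP algorithm. */
    public synchronized boolean isFull() {
        return count == list.length;
    }

    /** Returns the stored historical durations, oldest first. */
    public synchronized long[] snapshot() {
        long[] out = new long[count];
        for (int i = 0; i < count; i++) {
            out[i] = list[(head + i) % list.length];
        }
        return out;
    }
}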
It should be noted that the present invention may be applied to a server or a gateway, and in particular to the gateway of a customer service system. A customer service system contains many user-oriented services, and to provide a good interactive experience many value-added services are set up in it. If every value-added service exposed its own external interface, the number of interfaces would become uncontrollable and common functionality would be implemented redundantly; these problems are even more prominent under a microservice framework (a microservice framework is a software design approach that divides a complete software system into a number of small, independently running systems, each completing a specific function, which together accomplish what would otherwise require one huge system). Therefore, a gateway is built between the sender of a request and the value-added services, and the gateway provides a unified interface. The gateway is mainly responsible for sending requests to the services, waiting for the results returned by the services in response to the requests, and sending the results back. The gateway is also used for security verification, information filtering, and the like.
It should be understood that in the embodiment of the present invention, the type of a request, the service identifier carried by the request, the name of the sampling queue, the service responding to the request, and the dimension of the TP algorithm used to set the processing duration correspond to one another. Using this correspondence, the processing duration of a newly received request is set based on the historical durations, thereby achieving isolation of request processing.
In order to solve the problems of the prior art, a further embodiment of the present invention provides a method for processing a request. In an embodiment of the present invention, as shown in fig. 7, the method includes:
The gateway receives the request.
The gateway stores a universally unique identifier (UUID) carried in the request into a waiting queue preset by the gateway, controls the thread to send the request and the UUID to the service after the UUID is stored, and suspends the thread's processing of the request after the request is sent.
In this step, it is noted that the thread is suspended from processing the request, the thread is released, and the thread can process other requests.
The service maintains a receive queue and stores the received request and the UUID in the receive queue. The service responds to the request to obtain a result, and then sends the result and the UUID to the gateway by calling an interface of the gateway.
In this step, because the receive queue holds the requests, the order of the responses to the requests can be ensured.
The gateway receives the result and the UUID sent by the service.
The gateway deletes the UUID from the waiting queue and, after the deletion, controls the thread to send the result to the sender of the request.
In addition, the thread is controlled to only send the request and the result, without waiting for the service to respond to the request and return the result. This limits the processing of the request, reduces the influence of the service's response time on the fairness with which threads process requests, and allows each request to be processed fairly.
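The waiting-queue flow of fig. 7 might be sketched in Java as follows; the ServiceClient interface, the CompletableFuture-based bookkeeping, and all names are assumptions made for illustration and are not part of the embodiment.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class WaitingQueueGateway {

    /** Assumed outbound transport to the service; not part of the embodiment. */
    public interface ServiceClient {
        void send(String request, UUID id);
    }

    // waiting queue: UUID -> future completed when the service calls back with the result
    private final Map<UUID, CompletableFuture<String>> waitingQueue = new ConcurrentHashMap<>();
    private final ServiceClient serviceClient;

    public WaitingQueueGateway(ServiceClient serviceClient) {
        this.serviceClient = serviceClient;
    }

    /** Gateway receives a request: store the UUID, send request and UUID, then stop processing. */
    public CompletableFuture<String> handle(String request) {
        UUID id = UUID.randomUUID();
        CompletableFuture<String> pending = new CompletableFuture<>();
        waitingQueue.put(id, pending);     // store the UUID in the waiting queue
        serviceClient.send(request, id);   // the thread sends the request and the UUID, then is released
        return pending;                    // completed later, when the service returns the result
    }

    /** Interface the gateway exposes so the service can return the result together with the UUID. */
    public void onServiceResult(UUID id, String result) {
        CompletableFuture<String> pending = waitingQueue.remove(id);   // delete the UUID from the waiting queue
        if (pending != null) {
            pending.complete(result);      // a thread then sends the result to the sender of the request
        }
    }
}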
It should be noted that the method of this embodiment has the following problems:
Firstly, the gateway needs to provide an interface for the service so that the service can send the result to the gateway in time, and the gateway also needs to allocate additional resources to maintain the waiting queue.
Secondly, the flow for a service to access the gateway becomes very complex.
Thirdly, the performance of the gateway can be compromised by the service: the UUID is stored in the waiting queue and the sender of the request keeps its connection to the gateway, so when the service does not return a result for a long time, the gateway may end up refusing requests.
The method of processing the request is described above in connection with fig. 3-7 and the apparatus for processing the request is described below in connection with fig. 8.
In order to solve the problems in the prior art, an embodiment of the present invention provides an apparatus for processing a request, as shown in fig. 8, including:
A setting unit 801, configured to set a processing duration for a thread to process a request according to a service identifier, where the request carries the service identifier.
A control unit 802, configured to control the thread to process the request according to the processing duration.
In the embodiment of the present invention, the setting unit 801 is specifically configured to:
determine, according to the service identifier, whether to use a TP algorithm to set the processing duration for the thread to process the request;
if yes, set the processing duration of the thread processing the request by using the TP algorithm;
if not, set a maximum processing duration corresponding to the service identifier as the processing duration of the thread processing the request;
wherein the processing duration is the sum of the duration for which the thread sends the request to a service, the duration for which the thread waits for the service to respond to the request and return a result, and the duration for which the thread sends the result; and
the service identifier comprises information assigned to the service by a gateway or information provided to the gateway by the service.
In the embodiment of the present invention, the setting unit 801 is specifically further configured to:
screen out, from a plurality of sampling queues according to the names of the sampling queues, a sampling queue whose name is the same as the service identifier, and take that sampling queue as a target queue; and
determine whether the number of historical durations in the target queue is equal to the dimension of the TP algorithm,
wherein a historical duration is the duration for which the thread processed a historical request carrying the same service identifier.
In the embodiment of the present invention, the setting unit 801 is specifically further configured to:
sort the historical durations in the target queue in ascending order, and multiply the number of historical durations by the index number of the TP algorithm to obtain a sampling value;
screen out, from the sorted historical durations according to the sampling value, the historical duration ranked at the sampling value position; and
set the processing duration of the thread processing the request according to the historical duration ranked at the sampling value position.
In the embodiment of the present invention, the setting unit 801 is specifically further configured to:
determine whether the historical duration ranked at the sampling value position is greater than a minimum processing duration corresponding to the service identifier;
if yes, set the historical duration ranked at the sampling value position as the processing duration of the thread processing the request; and
if not, set the minimum processing duration as the processing duration of the thread processing the request.
In the embodiment of the present invention, the control unit 802 is specifically configured to:
insert the duration for which the thread processed the request into the target queue according to a first-in first-out method after controlling the thread to process the request according to the processing duration.
In the embodiment of the present invention, the control unit 802 is specifically further configured to:
start timing when the thread starts to process the request, and determine whether the timed duration is equal to the processing duration;
if yes, terminate the thread's processing of the request; if not, control the thread to continue processing the request.
It should be understood that the functions performed by the components of the apparatus for processing a request according to the embodiments of the present invention have been described in detail in the method for processing a request according to the foregoing embodiments, which is not described herein again.
Fig. 9 illustrates an exemplary system architecture 900 of a method of processing a request or an apparatus of processing a request to which embodiments of the present invention may be applied.
As shown in fig. 9, system architecture 900 may include terminal devices 901, 902, 903, a network 904, and a server 905. The network 904 is the medium used to provide communications links between the terminal devices 901, 902, 903 and the server 905. The network 904 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 905 over the network 904 using the terminal devices 901, 902, 903 to receive or send messages, etc. Various communication client applications may be installed on the terminal devices 901, 902, 903, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, and the like (by way of example only).
Terminal devices 901, 902, 903 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 905 may be a server that provides various services, such as a background management server (by way of example only) that provides support for shopping-type websites browsed by users using terminal devices 901, 902, 903. The background management server may analyze and process the received data such as the product information query request, and feedback the processing result (e.g., the target push information, the product information—only an example) to the terminal device.
It should be noted that, the method for processing a request provided in the embodiment of the present invention is generally executed by the server 905, and accordingly, the device for processing a request is generally disposed in the server 905.
It should be understood that the number of terminal devices, networks and servers in fig. 9 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 10, there is illustrated a schematic diagram of a computer system 1000 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 10 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001, which can execute various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the system 1000 are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, and the like. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read therefrom is installed into the storage section 1008 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1009, and/or installed from the removable medium 1011. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 1001.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented in software or in hardware. The described units may also be arranged in a processor, which may be described, for example, as a processor comprising a receiving unit, a setting unit and a control unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the setting unit may also be described as "a unit that sets a processing time length of a thread processing request according to a service identification".
As a further aspect, the present invention also provides a computer-readable medium, which may be included in the device described in the above embodiments or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: set a processing duration for a thread to process a request according to a service identifier, wherein the request carries the service identifier; and control the thread to process the request according to the processing duration.
According to the technical solution of the embodiment of the present invention, a processing duration is set for the thread to process the request, and the thread's processing of the request is controlled according to the processing duration, so that the processing of each request is limited to a reasonable time range; under normal conditions the thread's processing of the request is not affected, under abnormal conditions the processing of other requests is not delayed, and each request is processed fairly. Moreover, the processing duration is set according to the service identifier, so that various types of requests are processed according to their own processing durations, which improves the applicability of the processing.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.