Disclosure of Invention
In view of the above, embodiments of the present invention provide a service processing method and apparatus, which can solve the problems of high service processing complexity and high code coupling in existing software products.
To achieve the above object, according to an aspect of the embodiments of the present invention, there is provided a service processing method, including: receiving a service data request and acquiring a corresponding flow code; calling a data stream processing engine through an external unified calling interface, and further obtaining a corresponding executor according to the flow code; and processing the service data request according to the configuration flow of the executor; wherein the configuration flow is set to a responsibility chain mode.
Optionally, the processing the service data request according to the configuration flow of the executor includes:
executing the service data request according to the chain nodes on the responsibility chain of the configuration flow;
judging whether the execution is successful each time a chain node finishes executing; if the execution is successful, judging whether a next chain node exists according to the linked list; if the next chain node exists, continuing to execute the next chain node; if the next chain node does not exist, returning to the data stream processing engine and generating an execution success result;
if the execution is not successful, judging whether a previous chain node exists according to the linked list; if so, executing a rollback method of the previous chain node; and if not, returning to the data stream processing engine and generating an execution failure result.
Optionally, the judging whether the execution is successful each time a chain node finishes executing includes:
each chain node, after completing its doHandler event, judging whether the execution is successful.
Optionally, the linked list is a doubly linked list.
Optionally, the calling the data stream processing engine through an external unified calling interface includes:
calling the data stream processing engine through a calling interface provided by the ResourceHandlerExecutor class.
Optionally, before the service data request is processed according to the configuration flow of the executor, the method further includes:
setting a configuration flow for each flow code through an AbstractResourceHandlerConfiguration configuration class.
Optionally, the processing the service data request according to the configuration flow of the executor includes:
initializing the configuration flow according to the flow code by using the build initialization method of the ResourceHandlerExecutor class published to the service caller;
and executing the corresponding configuration flow through the execHandler method of the ResourceHandlerExecutor class published to the service caller.
In addition, the invention also provides a service processing apparatus, which includes an acquisition module and a processing module, wherein the acquisition module is configured to receive a service data request and acquire a corresponding flow code, call a data stream processing engine through an external unified calling interface, and further obtain a corresponding executor according to the flow code; and the processing module is configured to process the service data request according to the configuration flow of the executor, wherein the configuration flow is set to a responsibility chain mode.
One embodiment of the above invention has the following advantages or beneficial effects: the invention configures a flow for each service, and each chain node has an entry condition, a forward execution operation, and a reverse execution (rollback) operation. Services can therefore be conveniently removed or added without modifying logic, achieving a hot-plug-like technical effect. That is, the invention can continuously accommodate new service application scenarios and can be expanded conveniently and flexibly, and business process steps can be added or removed in a hot-plug manner. Meanwhile, each service chain node is provided with an event processor and a transaction-consistency rollback method.
Further effects of the above non-conventional optional implementations are described below in connection with specific embodiments.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of main flow of a service processing method according to a first embodiment of the present invention, and as shown in fig. 1, the service processing method includes:
step S101, receiving a service data request and acquiring a corresponding flow code.
In an embodiment, each service processing flow is uniquely identified by a flow code flowCode, that is, one service processing flow corresponds to one flow code flowCode, and a service data request is mapped to its corresponding service processing flow.
Step S102, calling a data stream processing engine through an external unified calling interface, and further obtaining a corresponding executor according to the flow code.
In some embodiments, as shown in fig. 2, which is a schematic diagram of the core processing classes according to an embodiment of the present invention, the data stream processing engine is called through a calling interface provided by the ResourceHandlerExecutor class.
Preferably, an interface can be set for each chain node: the ResourceHandler abstract service processing node interface (i.e., the chain node interface) is implemented through the AbstractResourceHandler abstract class, so that each service flow comprises a plurality of implementation instances of AbstractResourceHandler. For example, the two implementation classes freezeMoney (fund freeze) and freezeGoods (cargo freeze) inherit from the AbstractResourceHandler abstract class.
Further, the validRequest method in the AbstractResourceHandler abstract class returns true by default, indicating that the current node is executed; if the current node should be executed only when a certain condition is met, this method needs to be overridden. That is, a call condition may be set for each chain node by the validRequest method in the AbstractResourceHandler abstract class. In addition, the failback method in the AbstractResourceHandler abstract class is used for implementing the rollback of the current chain node.
Preferably, each chain node is implemented by the minimum event processing unit ResourceHandler class. The doHandle method of the minimum event processing unit ResourceHandler class is implemented by the concrete implementation classes and processes the sub-task of the current node. The handle method of the ResourceHandler class is the chain (flow) execution scheduling method of the minimum event processing unit. Whether the current node needs to be executed is verified by the validRequest method of the minimum event processing unit ResourceHandler class. Rollback of the current node is performed by the failback method of the minimum event processing unit ResourceHandler class.
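A minimal Java sketch of these chain node classes is given below. The method names (handle, doHandle, validRequest, failback) and the example FreezeMoney node follow the description above, while the generic request type, the boolean return values, the default no-op rollback, and the simplified handle implementation are assumptions made only for illustration.

```java
// Chain node interface: the minimum event processing unit described above.
interface ResourceHandler<R> {
    boolean handle(R request);       // chain (flow) execution scheduling of the current node
    boolean doHandle(R request);     // sub-task of the current node, provided by concrete classes
    boolean validRequest(R request); // whether the current node needs to be executed
    void failback(R request);        // rollback of the current chain node
}

abstract class AbstractResourceHandler<R> implements ResourceHandler<R> {

    @Override
    public boolean validRequest(R request) {
        // Default true: the current node is executed; override to add an entry condition.
        return true;
    }

    @Override
    public void failback(R request) {
        // Default rollback does nothing; override to undo the changes made by doHandle.
    }

    @Override
    public boolean handle(R request) {
        // Simplified scheduling: skip the node if its entry condition is not met,
        // otherwise run its sub-task.
        return !validRequest(request) || doHandle(request);
    }
}

// Example implementation class from the description: fund freeze inherits AbstractResourceHandler.
class FreezeMoney extends AbstractResourceHandler<Object> {
    @Override
    public boolean doHandle(Object request) {
        // Freeze the funds associated with the request; return whether the sub-task succeeded.
        return true;
    }
}
```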
It should be noted that, before the service data request is processed according to the configuration flow of the executor, a configuration flow may be set for each flow code through the AbstractResourceHandlerConfiguration configuration class. Each business process, such as a billing process or a transfer process, is configured separately.
As a specific embodiment, as shown in figs. 3 and 4, a specific executor is configured, including two business processes, namely a "general replenishment execution process" and an "urgent replenishment execution process". Taking "general replenishment" as an example, the general replenishment flow configuration service node defines the flow code flowCode as 20423999 through the flowType method, and the handlerConfigName method sets the flow name to "general replenishment". The general replenishment service chain nodes are configured through the handler method, and include front verification of incoming parameters, positioning verification acquisition, acquisition of configuration information (such as safety stock and maximum stock), replenishment target storage area calculation, replenishment stock acquisition according to the configuration information, replenishment calculation according to the configuration, positioning result encapsulation processing, loading recommendation, and stock preemption.
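The following Java sketch illustrates such a flow configuration. The flow code 20423999, the flow name, and the method names flowType, handlerConfigName, and handler come from the description above; the concrete node class names (PreCheckHandler, LocationCheckHandler, and so on) and the method signatures are hypothetical placeholders.

```java
import java.util.List;

// Illustrative configuration of the "general replenishment" flow.
class GeneralReplenishmentConfiguration extends AbstractResourceHandlerConfiguration {

    @Override
    long flowType() {
        return 20423999L;                    // flow code flowCode of the general replenishment flow
    }

    @Override
    String handlerConfigName() {
        return "general replenishment";      // flow name
    }

    @Override
    List<ResourceHandler<Object>> handler() {
        // Chain nodes of the general replenishment flow, in execution order.
        return List.of(
                new PreCheckHandler(),           // front verification of incoming parameters
                new LocationCheckHandler(),      // positioning verification acquisition
                new ConfigInfoHandler(),         // configuration information (safety stock, maximum stock, ...)
                new TargetAreaHandler(),         // replenishment target storage area calculation
                new StockQueryHandler(),         // replenishment stock acquisition per configuration
                new ReplenishmentCalcHandler(),  // replenishment calculation per configuration
                new ResultPackagingHandler(),    // positioning result encapsulation processing
                new LoadingRecommendHandler(),   // loading recommendation
                new StockPreemptHandler());      // stock preemption
    }
}
```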
It should be noted that adding or deleting a business process only requires adding or deleting the corresponding executor and configuration flow.
Step S103, processing the service data request according to the configuration flow of the executor; wherein the configuration flow is set to a responsibility chain mode.
In an embodiment, the configuration flow is set to a responsibility chain mode, so that the nodes on the responsibility chain keep the sub-services independent, i.e., each node executes a corresponding sub-service. The responsibility chain mode means that a plurality of objects are connected into a chain, each object holding a reference to its successor.
In some embodiments, the processing the service data request according to the configuration flow of the executor specifically includes:
executing the service data request according to the chain nodes on the responsibility chain of the configuration flow;
judging whether the execution is successful each time a chain node finishes executing; if the execution is successful, judging whether a next chain node exists according to the linked list; if the next chain node exists, continuing to execute the next chain node; if the next chain node does not exist, returning to the data stream processing engine and generating an execution success result; if the execution is not successful, judging whether a previous chain node exists according to the linked list; if so, executing a rollback method of the previous chain node; and if not, returning to the data stream processing engine and generating an execution failure result.
Preferably, each chain node, after completing its doHandler event, judges whether the execution was successful.
In addition, it should be noted that the nodes in the flow are organized in the form of a linked list, forming a head-to-tail doubly linked list, as sketched below.
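As an illustrative sketch, the doubly linked chain can be assembled from the configured handler list as follows. The ChainNode wrapper and the ChainBuilder class are assumptions introduced only for this example; they are not names taken from the invention.

```java
import java.util.List;

// Hypothetical wrapper that links one ResourceHandler into the doubly linked chain.
final class ChainNode<R> {
    final ResourceHandler<R> handler;
    ChainNode<R> prev;
    ChainNode<R> next;

    ChainNode(ResourceHandler<R> handler) {
        this.handler = handler;
    }
}

final class ChainBuilder {
    // Links the configured handlers head to tail and returns the head of the chain.
    static <R> ChainNode<R> build(List<ResourceHandler<R>> handlers) {
        ChainNode<R> head = null;
        ChainNode<R> tail = null;
        for (ResourceHandler<R> handler : handlers) {
            ChainNode<R> node = new ChainNode<>(handler);
            if (head == null) {
                head = node;
            } else {
                tail.next = node;   // forward link to the next chain node
                node.prev = tail;   // backward link used for rollback
            }
            tail = node;
        }
        return head;
    }
}
```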
It can be seen that each chain node is an atomic event, including forward commit (doHandler event handling) and rollback handling (the failback method).
As other embodiments, the processing the service data request according to the configuration flow of the executor includes:
initializing the configuration flow according to the flow code by using the build initialization method of the ResourceHandlerExecutor class published to the service caller, and executing the corresponding configuration flow through the execHandler method of the ResourceHandlerExecutor class published to the service caller.
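A minimal caller-side sketch follows. The build and execHandler method names come from the description above, but their signatures, the request type, and the return type are hypothetical placeholders.

```java
// Hypothetical service caller that runs the general replenishment flow.
class ReplenishmentCaller {

    Object process(Object serviceDataRequest) {
        // build initializes the configuration flow for the given flow code.
        ResourceHandlerExecutor executor = ResourceHandlerExecutor.build(20423999L);
        // execHandler executes the configured chain of nodes against the request.
        return executor.execHandler(serviceDataRequest);
    }
}
```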
As a further embodiment of the present invention, after a service data request is received and the corresponding flow code is obtained, as shown in fig. 5, the data stream processing engine is called through the external unified calling interface, and the execHandler method of the corresponding executor entry ResourceHandlerExecutor starts execution according to the configuration flow. The sub-service of the current chain node is processed by the doHandle concrete service processing method, and whether the execution is successful is judged; if the execution is successful, whether a next chain node exists is judged according to the linked list; if the next chain node exists, the next chain node continues to be executed; if not, control returns to the data stream processing engine and an execution success result is generated. If the execution is not successful, whether a previous chain node exists is judged according to the linked list; if so, the rollback method of the previous chain node is executed; and if not, control returns to the data stream processing engine and an execution failure result is generated.
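A minimal sketch of this execution loop is shown below, reusing the hypothetical ChainNode wrapper from the earlier sketch. Rolling back all previously executed nodes in reverse order is one reading of the rollback step described above and is an assumption of this sketch.

```java
final class ChainRunner<R> {

    // Executes the chain from its head; returns true for an execution success result
    // and false for an execution failure result.
    boolean run(ChainNode<R> head, R request) {
        ChainNode<R> current = head;
        while (current != null) {
            // handle() checks validRequest and runs the doHandle sub-task of the current node.
            if (current.handler.handle(request)) {
                if (current.next == null) {
                    return true;                 // no next chain node: execution success result
                }
                current = current.next;          // continue with the next chain node
            } else {
                // Execution failed: walk backwards along the chain and roll back executed nodes.
                for (ChainNode<R> back = current.prev; back != null; back = back.prev) {
                    back.handler.failback(request);
                }
                return false;                    // execution failure result
            }
        }
        return true;
    }
}
```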
Fig. 6 is a schematic diagram of the main modules of a service processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the service processing apparatus 600 includes an acquisition module 601 and a processing module 602. The acquisition module 601 receives a service data request and acquires a corresponding flow code, calls a data stream processing engine through an external unified calling interface, and further obtains a corresponding executor according to the flow code; the processing module 602 processes the service data request according to the configuration flow of the executor; wherein the configuration flow is set to a responsibility chain mode.
In some embodiments, the processing module 602 processes the service data request according to a configuration flow of the executor, including:
executing the service data request according to the chain nodes on the responsibility chain of the configuration flow;
judging whether the execution is successful each time a chain node finishes executing; if the execution is successful, judging whether a next chain node exists according to the linked list; if the next chain node exists, continuing to execute the next chain node; if the next chain node does not exist, returning to the data stream processing engine and generating an execution success result;
if the execution is not successful, judging whether a previous chain node exists according to the linked list; if so, executing a rollback method of the previous chain node; and if not, returning to the data stream processing engine and generating an execution failure result.
In some embodiments, the processing module 602 judging whether the execution is successful each time a chain node finishes executing includes:
each chain node, after completing its doHandler event, judging whether the execution is successful.
In some embodiments, the linked list is a doubly linked list.
In some embodiments, the acquisition module 601 calling the data stream processing engine through the external unified calling interface includes:
calling the data stream processing engine through a calling interface provided by the ResourceHandlerExecutor class.
In some embodiments, before the processing module 602 processes the service data request according to the configuration flow of the executor, the method further includes:
setting a configuration flow for each flow code through the AbstractResourceHandlerConfiguration configuration class.
In some embodiments, the processing module 602 processes the service data request according to a configuration flow of the executor, including:
initializing the configuration flow according to the flow code by using the build initialization method of the ResourceHandlerExecutor class published to the service caller;
and executing the corresponding configuration flow through the execHandler method of the ResourceHandlerExecutor class published to the service caller.
Since the specific implementation details of the service processing method and the service processing apparatus of the present invention correspond to each other, the repeated content is not described again.
Fig. 7 illustrates an exemplary system architecture 700 to which a business processing method or business processing apparatus of embodiments of the present invention may be applied.
As shown in fig. 7, a system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 is the medium used to provide communication links between the terminal devices 701, 702, 703 and the server 705. The network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 705 via the network 704 using the terminal devices 701, 702, 703 to receive or send messages or the like. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 701, 702, 703.
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users using the terminal devices 701, 702, 703. The background management server may analyze and process received data such as a product information query request, and feed back the processing result (e.g., target push information or product information, by way of example only) to the terminal device.
It should be noted that the service processing method provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the service processing apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, there is illustrated a schematic diagram of a computer system 800 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the computer system 800. The CPU 801, ROM 802, and RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed so that a computer program read therefrom is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. The above-described functions defined in the system of the present invention are performed when the computer program is executed by the Central Processing Unit (CPU) 801.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as: a processor includes an acquisition module and a processing module. The names of these modules do not constitute a limitation on the module itself in some cases.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: receive a service data request and acquire a corresponding flow code; call a data stream processing engine through an external unified calling interface, and further obtain a corresponding executor according to the flow code; and process the service data request according to the configuration flow of the executor; wherein the configuration flow is set to a responsibility chain mode.
According to the technical scheme provided by the embodiment of the invention, the problems of high service processing complexity and high code coupling degree of the existing software product can be solved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.