Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
For a better understanding, related concepts and principles that may be involved in embodiments of the disclosure are explained below.
The point cloud 3D detection task is to perform real-time 3D modeling of the current environment through a lidar and simultaneously sense the position, size and pose of a 3D target in the lidar point cloud coordinate system. Typically, the data collected by the lidar is displayed and processed in the form of a point cloud: simply N points in 3D space, each point containing three floating-point values X, Y, Z representing its spatial location and one value R representing the echo intensity. Such data contains the geometry information of object surfaces. In the related art, point cloud 3D detection is generally implemented through a point cloud 3D detection model.
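For illustration only, such a point cloud can be held as an N×4 array with one row per point; the following minimal sketch (with made-up values) shows this layout:

```python
import numpy as np

# A lidar point cloud with N points: each row holds the spatial location
# (X, Y, Z) of one point and its echo intensity R.
point_cloud = np.array([
    [12.3, -4.1, 0.8, 0.55],
    [12.4, -4.0, 0.9, 0.52],
    [ 7.9,  2.2, 1.1, 0.80],
    [30.5,  0.4, 2.6, 0.10],
], dtype=np.float32)

xyz = point_cloud[:, :3]       # spatial locations
intensity = point_cloud[:, 3]  # echo intensities
print(point_cloud.shape)       # (4, 4): N = 4 points, 4 values per point
```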
Referring to fig. 1, fig. 1 is a flowchart of a training method of a point cloud 3D detection model according to an embodiment of the disclosure. As shown in fig. 1, the method includes the following steps:
Step S101, acquiring first training data and second training data, wherein the first training data is point cloud data acquired from a marked 3D detection frame, and the second training data is point cloud data collected by a radar.
It should be noted that, the method provided by the embodiment of the present disclosure may be applied to an electronic device such as a computer, a tablet computer, a mobile phone, etc., and in the subsequent embodiments, a specific implementation process of the method provided by the embodiment of the present disclosure will be explained by using the electronic device as an execution body.
In this embodiment of the present disclosure, the first training data may be point cloud data corresponding to a point cloud object in a manually marked 3D detection frame (which may also be referred to as a 3D bounding box); for example, after point cloud data collected by a radar for a vehicle is acquired, a user manually marks a 3D detection frame on the vehicle based on the point cloud data, and the first training data is then the point cloud data corresponding to the vehicle. It should be noted that point cloud data may be acquired through a large number of manually marked 3D detection frames and assembled into a point cloud database, and the electronic device may acquire the first training data from this point cloud database, for example, by randomly selecting a part of the point cloud data as the first training data.
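A minimal sketch of this idea, assuming a simple list-based point cloud database (the names gt_database and sample_first_training_data are illustrative, not taken from the disclosure):

```python
import random

# Hypothetical point cloud database: each entry is the point cloud cropped
# from one manually marked 3D detection frame, paired with its box label.
gt_database = [(f"points_of_object_{i}", f"box_{i}") for i in range(1000)]

def sample_first_training_data(database, target_number):
    # Randomly select a part of the point cloud database as first training data.
    return random.sample(database, min(target_number, len(database)))

batch = sample_first_training_data(gt_database, 15)
```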
Alternatively, the second training data may be point cloud data collected by a lidar on a vehicle, which may be an autonomous vehicle.
Step S102, obtaining a target number of the first training data that needs to be input into the point cloud 3D detection model under each iteration round, wherein the target number is associated with the iteration round of the point cloud 3D detection model.
It will be appreciated that neural network models typically need to be obtained through multiple rounds of iterative training. The point cloud 3D detection model in the embodiment of the disclosure is likewise obtained through multiple rounds of iterative training; one iteration round represents one round of iterative training of the point cloud 3D detection model, and the number of training data input into the point cloud 3D detection model can differ between iteration rounds.
In an embodiment of the disclosure, the iteration round of the point cloud 3D detection model is associated with the number of first training data that needs to be input. Alternatively, an association relationship between the iteration round and the number of first training data to be input may be preset. For example, the association relationship may be that, as the iteration round increases, the amount of first training data that needs to be input decreases. Or, the number of first training data to be input under each iteration round is set to x in the first N iteration rounds, and to y after the Nth iteration round, where x is different from y; a sketch of such a rule is given below. Of course, the association between the iteration round and the number of first training data to be input may take other possible forms, which are not exhaustively listed here.
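To make the association concrete, the following sketch encodes the two-phase example rule just described (the function name and the default values of N, x and y are assumptions for illustration):

```python
def target_number_for_round(epoch, N=60, x=100, y=20):
    # Two-phase association: x first-training-data samples per round for the
    # first N iteration rounds, y samples for every round after the Nth.
    return x if epoch <= N else y

assert target_number_for_round(1) == 100   # one of the first N rounds
assert target_number_for_round(61) == 20   # a round after the Nth
```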
Step S103, for any iteration round in the training process of the point cloud 3D detection model, inputting the second training data and the target number of first training data corresponding to that iteration round into the point cloud 3D detection model, so as to train the point cloud 3D detection model.
For example, taking a first target iteration round as an example, where the first target iteration round is any iteration round in the training process of the point cloud 3D detection model, and assuming that the target number of first training data that needs to be input into the point cloud 3D detection model under the first target iteration round is x, then when the point cloud 3D detection model is trained in the first target iteration round, the second training data and x pieces of first training data are input into the point cloud 3D detection model, so as to train the point cloud 3D detection model under the first target iteration round. It can be understood that, for any iteration round in the training process of the point cloud 3D detection model, model training can be performed in the above manner, that is, the second training data and the target number of first training data corresponding to that iteration round are input into the point cloud 3D detection model, so as to train the point cloud 3D detection model in that iteration round. Therefore, training of the point cloud 3D detection model in all iteration rounds can be completed in this manner.
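Building on the two helper sketches above, a hedged sketch of the overall training loop might look as follows (the model interface train_one_round is a placeholder assumption, not the disclosure's actual implementation):

```python
def train(model, second_training_data, gt_database, total_rounds):
    for epoch in range(1, total_rounds + 1):
        # Target number of first training data for this iteration round.
        k = target_number_for_round(epoch)
        first_batch = sample_first_training_data(gt_database, k)
        # The second training data (radar-collected point clouds) is fed in
        # every round; only the amount of first training data varies.
        model.train_one_round(second_training_data, first_batch)
```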
In the embodiment of the disclosure, after the target number of first training data that needs to be input into the point cloud 3D detection model under each iteration round is acquired, the second training data and the target number of first training data corresponding to each iteration round are input into the point cloud 3D detection model, so that the point cloud 3D detection model is trained in that iteration round; training of the point cloud 3D detection model in all iteration rounds is completed in this manner, and the trained point cloud 3D detection model is obtained.
It should be noted that, in the training process of each iteration round of the point cloud 3D detection model, the number of second training data input may be the same, while the number of first training data input is related to the current iteration round; that is, the number of first training data input into the point cloud 3D detection model may differ between iteration rounds, i.e., the number of input point cloud data obtained through the manually labeled 3D detection frames may differ. The training data input into the point cloud 3D detection model is therefore variable, so that the input data of the point cloud 3D detection model can be controlled, the training of the point cloud 3D detection model becomes more flexible, and the accuracy of the trained point cloud 3D detection model can be improved by controlling the amount of input data.
In the embodiment of the disclosure, the trained point cloud 3D detection model may be applied to an autonomous vehicle: the input of the trained point cloud 3D detection model is the point cloud data collected by the lidar on the autonomous vehicle, and the output is a 3D detection frame, so as to better assist the automatic driving of the autonomous vehicle and improve its safety.
Optionally, in step S102, obtaining the target number of the first training data to be input into the point cloud 3D detection model in each iteration round may specifically include:
Under the condition that the iteration round is one of the first N iteration rounds, determining that the target number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round is a first preset number;
Under the condition that the iteration round is an iteration round after the Nth iteration round, determining that the target number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round is a second preset number;
The first preset number is greater than the second preset number, the value of N is smaller than the value of the last iteration round of the point cloud 3D detection model, and the value of N may be a preset numerical value.
In the embodiment of the disclosure, the target number of first training data to be input under each iteration round is set to a first preset number in the first N iteration rounds of the point cloud 3D detection model, and is set to a second preset number after the Nth iteration round, where the second preset number is smaller than the first preset number. That is, after the point cloud 3D detection model completes a certain number of rounds of iterative training, the number of first training data input into the point cloud 3D detection model may be reduced for the training of subsequent iteration rounds. The first training data is point cloud data acquired based on manually annotated 3D detection frames and may be unrelated to the current test scene, whereas the second training data is point cloud data collected by the lidar, that is, acquired on-site in the current test scene, and is therefore related to the current test scene.
For the point cloud 3D detection model, if the training data and the test data of the model come from different scenes, a domain gap problem may occur. A domain gap means that there is a significant distribution difference between the two sets of data; if one set is used for training and the other for testing, the resulting performance is worse than that of a model trained and tested on data from the same distribution. For example, in the most intuitive case of a domain gap, suppose a batch of training data is collected on a certain loop road of city A and a batch of test data is collected on a certain loop road of city B. A model trained on the data from city A and run on the test data from city B will certainly not perform well, because there are obvious distribution differences between the two data sets, such as vehicle length and width, the distribution of road traffic participants, and the point cloud background objects, none of which were seen by the model during training; the model therefore degrades during testing and its accuracy is lower.
In the embodiment of the disclosure, by reducing the number of first training data, the influence of the first training data unrelated to the current test scene on the point cloud 3D detection model can be reduced during training, so that the point cloud 3D detection model focuses more on the second training data related to the current test scene to complete training at the later stage of training. The point cloud 3D detection model obtained after training is thus more applicable to the current test scene, the domain gap problem is effectively avoided, and the accuracy of the trained point cloud 3D detection model is improved.
Optionally, in the case that the iteration round is an iteration round after the Nth iteration round, determining that the target number of the first training data required to be input into the point cloud 3D detection model under each iteration round is a second preset number includes:
Under the condition that the iteration round is an iteration round after the Nth iteration round, determining the (N+1)-th to the (N+n)-th iteration rounds, where the value of n is 1, 2, 3, …, and the value of N+n is smaller than or equal to the value of the last iteration round of the point cloud 3D detection model;
and determining a second preset number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round in the (N+1)-th to the (N+n)-th iteration rounds, wherein the second preset number gradually decreases as the value of the iteration round increases.
For example, assuming that the iterative training of the point cloud 3D detection model includes 100 iteration rounds in total and the value of N is 60, the number of first training data to be input into the point cloud 3D detection model in each iteration round is the first preset number in the first 60 iteration rounds; and assuming that the value of n is 20, the number of first training data to be input into the point cloud 3D detection model in each iteration round gradually decreases in the 61st to 80th iteration rounds, for example, decreases linearly. Therefore, the influence of the first training data unrelated to the current test scene on the point cloud 3D detection model can be gradually reduced at the later stage of training of the point cloud 3D detection model.
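As a quick numeric check of this example under a linear decrease (a sketch; the starting count of 100 first training data per round is an assumed first preset number):

```python
# Illustrative values from the example: 100 rounds in total, N = 60, n = 20.
N, n, first_preset = 60, 20, 100
e0, e1 = N + 1, N + n  # rounds 61 .. 80

for e in range(e0, e1 + 1):
    count = round(first_preset * (e1 - e) / (e1 - e0))
    print(e, count)  # e.g. round 61 -> 100, round 70 -> 53, round 80 -> 0
```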
Alternatively, the values of N and n may be preset values. Furthermore, when training the point cloud 3D detection model, the iteration rounds under which the number of first training data to be input into the point cloud 3D detection model is gradually reduced can be specified through the preset values of N and n, so that the random sampling amount of the first training data is gradually reduced during the training of those iteration rounds, that is, the number of first training data input into the point cloud 3D detection model is gradually reduced. Therefore, by gradually reducing the number of input first training data at the later stage of training, the influence of the first training data unrelated to the current test scene on the point cloud 3D detection model can be gradually reduced, so that the point cloud 3D detection model focuses more on completing training through the second training data related to the current test scene at the later stage of training, the domain gap problem is effectively avoided, and the accuracy of the point cloud 3D detection model is improved.
It should be noted that, after the (N+n)-th iteration round, the number of first training data input into the point cloud 3D detection model may be kept unchanged, for example, the number input under the subsequent iteration rounds may be kept the same as the number input under the (N+n)-th iteration round; alternatively, the number of input first training data may continue to be reduced. In this way, the influence of the first training data unrelated to the current test scene on the point cloud 3D detection model is effectively reduced, so that the trained point cloud 3D detection model has higher accuracy.
Optionally, the second preset number corresponding to the (N+n)-th iteration round is 0. That is, the number of first training data input into the point cloud 3D detection model is 0 at the (N+n)-th iteration round. In other words, in the (N+1)-th to the (N+n)-th iteration rounds, the number of first training data input into the point cloud 3D detection model gradually decreases as the iteration round increases, until it decreases to 0. Therefore, at the later stage of training of the point cloud 3D detection model, the training data only include the second training data related to the current test scene and no longer include the first training data unrelated to the current test scene, so that the training of the point cloud 3D detection model better fits the current test scene, the domain gap problem is effectively avoided, the trained point cloud 3D detection model is ensured to have higher accuracy when applied to the current test scene, and the robustness of the output of the trained point cloud 3D detection model is improved.
It should be noted that, after the (N+n)-th iteration round, the number of first training data input into the point cloud 3D detection model may still be kept at 0, so that the point cloud 3D detection model performs model training only through the second training data related to the current test scene at the later stage of training, and the trained point cloud 3D detection model achieves a better and more robust test effect on the test data.
Optionally, the determining, in the (N+1)-th to the (N+n)-th iteration rounds, the second preset number of the first training data to be input into the point cloud 3D detection model under each iteration round includes:
Acquiring the target number of the first training data which needs to be input into the point cloud 3D detection model under the Nth iteration round;
determining the number of first training data which needs to be input into the point cloud 3D detection model for a target iteration round according to the target number, the value of N+1 and the value of N+n;
The target iteration round is any one of the (N+1)-th to the (N+n)-th iteration rounds.
For example, assuming that the target number of first training data to be input into the point cloud 3D detection model under the Nth iteration round is N, that the (N+1)-th iteration round is denoted e0, that is, the value of N+1 is represented by e0, and that the (N+n)-th iteration round is denoted e1, that is, the value of N+n is represented by e1, then the number of first training data to be input into the point cloud 3D detection model under any one of the (N+1)-th to the (N+n)-th iteration rounds may be determined according to N, e0 and e1.
In the embodiment of the disclosure, the number of first training data to be input into the point cloud 3D detection model in any iteration round from the (N+1)-th to the (N+n)-th iteration round can be determined according to the value of that iteration round and the target number of first training data to be input under the Nth iteration round, so that the first training data input into the point cloud 3D detection model can be flexibly adjusted and the training process of the point cloud 3D detection model can be better controlled, giving the trained point cloud 3D detection model better accuracy.
Optionally, in the (N+1)-th to the (N+n)-th iteration rounds, the second preset number decreases linearly or decreases along a curve with respect to the iteration round. For example, a predetermined algebraic relation may be satisfied between N, e0, e1 and the number of first training data.
Illustratively, in an alternative embodiment, the algebraic relationship satisfied between N, e0, e1 and the number of first training data to be input is as follows:

f(e) = N × (e1 − e) / (e1 − e0), for e0 ≤ e ≤ e1,

wherein f(e) represents the number of first training data to be input into the point cloud 3D detection model under the e-th iteration round (i.e., the second target iteration round), e0 represents the (N+1)-th iteration round, e1 represents the (N+n)-th iteration round, and N represents the target number of first training data to be input into the point cloud 3D detection model under the Nth iteration round, so that f(e0) = N and f(e1) = 0. In this case, as shown in fig. 2, in the (N+1)-th to the (N+n)-th iteration rounds, the number of first training data to be input into the point cloud 3D detection model under each iteration round decreases linearly with respect to the iteration round.
Or, in another alternative embodiment, N, e0, e1 and the number of first training data to be input satisfy a curve-shaped algebraic relationship; for example, a cosine decay of the following form may be used:

f(e) = (N / 2) × (1 + cos(π × (e − e0) / (e1 − e0))), for e0 ≤ e ≤ e1,

wherein f(e) represents the number of first training data to be input into the point cloud 3D detection model under the e-th iteration round (i.e., the second target iteration round), e0 represents the (N+1)-th iteration round, e1 represents the (N+n)-th iteration round, and N represents the target number of first training data to be input into the point cloud 3D detection model under the Nth iteration round; here, too, f(e0) = N and f(e1) = 0. In this case, in the (N+1)-th to the (N+n)-th iteration rounds, the number of first training data to be input into the point cloud 3D detection model under each iteration round decreases along a curve with respect to the iteration round.
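A brief sketch implementing both fade schedules above (keeping the count at N before e0 and clamping it to 0 after e1 follows the behavior described earlier; the cosine form is, as noted, only one representative curve):

```python
import math

def fade_linear(e, N, e0, e1):
    # Linear decrease from N at round e0 to 0 at round e1.
    if e <= e0:
        return N
    if e >= e1:
        return 0
    return round(N * (e1 - e) / (e1 - e0))

def fade_cosine(e, N, e0, e1):
    # Curve-shaped (cosine) decrease from N at round e0 to 0 at round e1.
    if e <= e0:
        return N
    if e >= e1:
        return 0
    return round(N / 2 * (1 + math.cos(math.pi * (e - e0) / (e1 - e0))))
```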
According to the embodiment of the disclosure, through different algebraic relations, the number of first training data that needs to be input into the point cloud 3D detection model under each iteration round in the (N+1)-th to the (N+n)-th iteration rounds can be flexibly adjusted, while it is ensured that the number of first training data input under each iteration round gradually decreases. The training data of the point cloud 3D detection model thus gradually shifts toward the second training data related to the current test scene at the later stage of training, so that the training of the point cloud 3D detection model better fits the current test scene, the domain gap problem is effectively avoided, and the trained point cloud 3D detection model is ensured to have higher accuracy when applied to the current test scene.
Referring to fig. 3, fig. 3 is a flowchart of a point cloud detection method according to an embodiment of the disclosure. As shown in fig. 3, the method includes the following steps:
Step S301, acquiring point cloud data collected by a radar.
Alternatively, the radar may be a lidar mounted on an autonomous vehicle. In the embodiment of the disclosure, the point cloud detection method may be applied to an autonomous vehicle.
Step S302, inputting the point cloud data into a point cloud 3D detection model, and acquiring a 3D detection frame output by the point cloud 3D detection model.
The point cloud 3D detection model is a model obtained after training based on the training method of the point cloud 3D detection model in the above embodiment.
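A hedged sketch of this inference step (the detect method and the box format are placeholder assumptions; the actual interface depends on how the trained model is deployed):

```python
import numpy as np

def detect_objects(model, lidar_points: np.ndarray):
    # lidar_points: an (N, 4) array of X, Y, Z and echo intensity R collected
    # by the radar. The model returns 3D detection frames, e.g. one
    # (x, y, z, length, width, height, yaw) tuple per detected object.
    boxes_3d = model.detect(lidar_points)  # placeholder call
    return boxes_3d
```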
In the embodiment of the disclosure, the point cloud 3D detection model is obtained by training based on the above training method and has a better and more robust detection effect, so the 3D detection frame output by the point cloud 3D detection model has higher accuracy, which can better assist the driving of an autonomous vehicle and effectively improve the safety of the autonomous vehicle.
Referring to fig. 4, fig. 4 is a block diagram of a training apparatus for a point cloud 3D detection model according to an embodiment of the present disclosure. As shown in fig. 4, the training apparatus 400 for a point cloud 3D detection model includes:
The first obtaining module 401 is configured to obtain first training data and second training data, where the first training data is point cloud data obtained from a marked 3D detection frame, and the second training data is point cloud data collected by a radar;
A second obtaining module 402, configured to obtain a target number of the first training data to be input into the point cloud 3D detection model under each iteration round, where the target number is associated with the iteration round of the point cloud 3D detection model;
The training module 403 is configured to input, for any iteration round in the training process of the point cloud 3D detection model, the second training data and the first training data of the target number corresponding to the iteration round into the point cloud 3D detection model, so as to train the point cloud 3D detection model;
wherein the input of the trained point cloud 3D detection model is point cloud data collected by a radar, and the output is a 3D detection frame.
Optionally, referring to fig. 5, the second obtaining module 402 includes:
A first determining unit 4021, configured to determine, in a case where the iteration round is one of the first N iteration rounds, that a target number of the first training data to be input to the point cloud 3D detection model under each iteration round is a first preset number;
a second determining unit 4022, configured to determine, in a case where the iteration round is an iteration round after the Nth iteration round, that a target number of the first training data to be input to the point cloud 3D detection model under each iteration round is a second preset number;
The first preset number is larger than the second preset number, and the value of N is smaller than the value of the last iteration round of the point cloud 3D detection model.
Optionally, the second determining unit 4022 is further configured to:
Under the condition that the iteration round is an iteration round after the Nth iteration round, determining the (N+1)-th to the (N+n)-th iteration rounds, where the value of n is 1, 2, 3, …, and the value of N+n is smaller than or equal to the value of the last iteration round of the point cloud 3D detection model;
and determining a second preset number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round in the (N+1)-th to the (N+n)-th iteration rounds, wherein the second preset number gradually decreases as the value of the iteration round increases.
Optionally, the second determining unit 4022 is further configured to:
Acquiring the target number of the first training data which needs to be input into the point cloud 3D detection model under the Nth iteration round;
determining the number of first training data which needs to be input into the point cloud 3D detection model for a target iteration round according to the target number, the value of N+1 and the value of N+n;
The target iteration round is any one of the (N+1)-th to the (N+n)-th iteration rounds.
Optionally, in the (N+1)-th to the (N+n)-th iteration rounds, the second preset number decreases linearly or decreases along a curve with respect to the iteration round.
Optionally, the second preset number corresponding to the (N+n)-th iteration round is 0.
It should be noted that, the device provided in the embodiment of the present disclosure may implement all the technical processes of the training method of the point cloud 3D detection model described in fig. 1, and may achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
Referring to fig. 6, fig. 6 is a block diagram of a point cloud detection apparatus according to an embodiment of the disclosure, and as shown in fig. 6, the point cloud detection apparatus 600 includes:
A third acquiring module 601, configured to acquire point cloud data acquired by a radar;
A fourth obtaining module 602, configured to input the point cloud data into a point cloud 3D detection model, and obtain a 3D detection frame output by the point cloud 3D detection model;
wherein the point cloud 3D detection model is a model obtained after training by the above training apparatus for the point cloud 3D detection model.
It should be noted that, the device provided in the embodiment of the present disclosure can implement all the technical processes of the point cloud detection method described in fig. 3, and achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
The embodiment of the disclosure also provides an autonomous vehicle, which includes the above point cloud detection apparatus. By adopting the point cloud detection apparatus, the autonomous vehicle provided by the embodiment of the disclosure can obtain a 3D detection frame with higher accuracy, thereby better assisting the driving of the autonomous vehicle and effectively improving the safety of the autonomous vehicle.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including an input unit 706, e.g., keyboard, mouse, etc., an output unit 707, e.g., various types of displays, speakers, etc., a storage unit 708, e.g., magnetic disk, optical disk, etc., and a communication unit 709, e.g., network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a training method of a point cloud 3D detection model or a point cloud detection method. For example, in some embodiments, the training method of the point cloud 3D detection model or the point cloud detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the training method of the point cloud 3D detection model or the point cloud detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the training method or the point cloud detection method of the point cloud 3D detection model described above by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.