Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Traditional collision early warning strategies cannot output sufficiently accurate warning results in complex and changeable scenes, so drivers place little trust in the warning system. In addition, conventional collision early warning strategies rely on only a single type of information and are therefore heavily affected by the environment. For example, visual information can be influenced by factors such as weather, light and occlusion, while communication information can be influenced by factors such as obstacles and signal interference; depending on a single information source therefore leads to poor system robustness and inaccurate warning results.
Based on the above, an embodiment of the application provides a collision early warning method that realizes intelligent collision early warning based on multi-information fusion. By adopting multiple input parameters, the method solves the problem that current systems cannot quickly and accurately output warning results in sudden dangerous scenes such as 'ghost probe' scenarios, and enables the system to handle the various complex and changeable scenes encountered in driving more intelligently, so as to adapt to diverse real environments.
The collision early warning method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, vehicle-mounted devices, internet of things devices, and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, projection devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
In one embodiment, as shown in fig. 2, a collision early warning method is provided. In this embodiment the method is described, for illustration, as applied to a vehicle-mounted device, and may include the following steps:
step 202, performing target detection and tracking based on image information acquired by a vehicle camera, and acquiring target detection and tracking information.
Among them, object detection is an important task in the field of computer vision, which aims at detecting an object of interest from an image or video. Tracking refers to the process of continuously monitoring and locating the position of one or more target objects in a sequence of video frames. Object detection focuses on determining and locating objects in a single image or video frame, while object tracking focuses on tracking dynamic changes of objects in a video sequence.
The target detection and tracking information is the information obtained by performing target detection and tracking on the image information acquired by the vehicle camera. It includes the target object located through target detection, a first distance between the target object and the vehicle obtained by ranging based on the target object, a target running track of the target object determined by tracking the target object, and a safety early warning area of the vehicle determined from the image information.
In this embodiment, the vehicle may collect image information in real time through the camera during the driving process, and the vehicle-mounted device may perform target detection and tracking based on the image information collected by the vehicle camera in real time, so as to obtain a first distance between the target object and the vehicle, a target moving track of the target object, and determine a safety pre-warning area of the vehicle.
Step 204, based on the communication device information in the environment collected by the vehicle, a second distance between the communication device and the vehicle is obtained.
The communication device information may include, among other things, the MAC address of the communication device and the corresponding signal strength RSSI value (Received Signal Strength Indication, a received signal strength indicator typically used to measure and indicate the strength of the wireless signal received by a receiver).
In this embodiment, the vehicle may further collect communication device information in the environment during the driving process, and the vehicle-mounted terminal may further process based on the communication device information in the environment collected by the vehicle, so as to obtain a second distance between the communication device and the vehicle.
And step 206, matching the target object with the communication equipment according to the first distance and the second distance to acquire the target distance between the successfully matched target object and the vehicle.
Wherein matching is the process of combining co-located target objects with communication devices. For example, the target object may be a pedestrian, and the communication device matched with the target object is a communication device carried by the pedestrian, such as a mobile phone, a telephone watch, a tablet computer, and the like.
In the present embodiment, the in-vehicle apparatus may match the target object and the communication apparatus based on the first distance between the target object and the vehicle and the second distance between the communication apparatus and the vehicle obtained as described above. For example, if the distance difference between the two is smaller than a certain distance threshold, the two can be considered to be successfully matched. And under the condition that the matching is determined to be successful, carrying out fusion processing on the first distance and the second distance to obtain the target distance between the target object which is successfully matched and the vehicle.
Specifically, the fusion processing may be a weighted summation of the two distances, i.e., the first distance and the second distance, based on certain weights, so that the final target distance fuses the ranging from image information with the ranging from communication device information, thereby improving ranging accuracy.
And step 208, performing collision early warning on the vehicle according to the safety early warning area, the target running track and the target distance of the successfully matched target object.
The collision early warning is a strategy for giving a warning to a driver when judging that potential collision danger exists in the distance between the vehicle and the target object. In this embodiment, the vehicle-mounted device may perform collision warning on the vehicle according to the determined safety warning area, and the target running track and the target distance of the successfully matched target object, so as to avoid a safety accident.
In the collision early warning method, the vehicle-mounted device performs target detection and tracking based on the image information acquired by the vehicle camera and obtains the target detection and tracking information; obtains the second distance between the communication device and the vehicle based on the communication device information collected in the environment of the vehicle; matches the target object with the communication device according to the first distance and the second distance, and, when the matching is determined to be successful, fuses the first distance and the second distance to obtain the target distance between the successfully matched target object and the vehicle; and performs collision early warning on the vehicle according to the safety early warning area and the target running track and target distance of the successfully matched target object. Because the target distance fuses the ranging from image information with the ranging from communication device information, ranging accuracy can be improved; and because the safety early warning area of the vehicle, the target running track of the successfully matched target object and the target distance are all considered during collision early warning, warning accuracy can also be improved.
In an exemplary embodiment, as shown in fig. 3, the collision pre-warning method described above may be further described below, and specifically includes:
Step 1, acquiring video images during the running of the vehicle based on the vehicle camera, and performing target detection and tracking.
Specifically, YOLOv (the latest iteration of the YOLO series of real-time object detectors) may be used for object detection to output data such as the class, position and bounding box of the preceding object (i.e., the target object). The decoding of the network outputs can be expressed as:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w·e^(t_w)
b_h = p_h·e^(t_h)
Wherein (c_x, c_y) are the transverse and longitudinal coordinates of the upper-left corner of the grid cell in the feature map, and (p_w, p_h) are the width and height of the anchor box. t_w and t_h are the predicted width and height terms of the bounding box, and t_x and t_y the predicted offsets of the bounding-box center; σ is the sigmoid function. (b_x, b_y) are the abscissa and ordinate of the center point of the detected target object, and (b_w, b_h) are the width and height of its minimum bounding box.
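For illustration, the bounding-box decoding described by the variables above can be sketched as follows (a minimal sketch assuming the standard YOLO decoding; the function name is illustrative):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw YOLO-style network outputs into a bounding box.

    (cx, cy) is the upper-left corner of the grid cell in the feature
    map and (pw, ph) the anchor-box width and height; the sigmoid keeps
    the predicted centre inside the cell, the exponential scales the anchor.
    """
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx            # centre x of the detected object
    by = sigmoid(ty) + cy            # centre y of the detected object
    bw = pw * math.exp(tw)           # width of the minimum bounding box
    bh = ph * math.exp(th)           # height of the minimum bounding box
    return bx, by, bw, bh
```

With all-zero network outputs the box simply sits at the cell centre with the anchor's size, which gives a quick sanity check of the decoding.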
Step 2, performing monocular ranging on the target object based on a pinhole imaging algorithm to obtain a first distance between the target object and the vehicle.
Specifically, the relevant data output in step 1 can be used to complete monocular ranging of the target object based on the pinhole imaging algorithm. A specific ranging formula may be:
d = h / tan(α + arctan((y − y0)/f)) − d0
Wherein d is the calculated first distance between the target object and the vehicle, d0 is the distance between the camera mounting position and the head of the vehicle (i.e., an offset error), h is the height of the camera mounting position above the ground, α is the pitch angle of the camera, f is the focal length of the camera, and (y − y0) is the imaging offset of the object.
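As a sketch, the pinhole ranging step can be written as follows (the text does not reproduce the exact formula, so the standard ground-plane pinhole form consistent with the listed variables is assumed; names are illustrative):

```python
import math

def monocular_distance(h, alpha, f, y, y0, d0):
    """Monocular pinhole-model ranging (assumed standard ground-plane form).

    h     : height of the camera above the ground
    alpha : pitch angle of the camera, in radians
    f     : focal length, in the same units as the image offset (y - y0)
    d0    : offset between the camera position and the vehicle head,
            subtracted as the 'error' term in the text
    """
    return h / math.tan(alpha + math.atan((y - y0) / f)) - d0
```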
Step 3, carrying out track prediction and tracking on the center coordinates of the target object output in step 1 by using a Kalman filtering algorithm.
Specifically, the kalman filter algorithm is as follows:
Kalman filtering estimates the optimal value at the current time node k from the system state at the previous time node k−1 and the current input, via the state equation x_k = A_k·x_(k−1) + B_k·u_k + w_k.
Where x_k is the state at time node k, x_(k−1) is the state at time node k−1, u_k is the external control input at time node k, A is the state transition matrix, B is the input control matrix, and w_k is the internally generated process noise at time node k.
The observation z_k at time node k is modelled as z_k = H_k·x_k + v_k. Where H is the observation matrix, v_k is the observation noise at time k, and x_k is the state at time node k given above.
The tracking process comprises prediction and updating, and is specifically as follows:
In the first step, the system predicts: based on the estimate x_(k−1) at time node k−1, it predicts the system state x′_k and its covariance p′_k at time node k, where x′_k is the predicted value:
x′_k = A_k·x_(k−1) + B_k·u_k
p′_k = A_k·p_(k−1)·A_k^T + Q
Wherein p is the covariance matrix, Q is the prediction noise covariance matrix, A is the state transition matrix, and B is the input control matrix.
In the second step, the system updates: it first computes the Kalman gain K′, then corrects the predicted result with the observation to output the optimal estimate, and finally updates the error covariance matrix to support the next iteration. The formulas are:
K′ = p′_k·H_k^T·(H_k·p′_k·H_k^T + R)^(−1)
x″_k = x′_k + K′·(z_k − H_k·x′_k)
p_k = (I − K′·H_k)·p′_k
Wherein K′ is the Kalman gain matrix, H is the observation matrix, z_k is the observation at time node k, I is the identity matrix, R is the observation noise covariance, x″_k is the updated system state, and p_k is the updated covariance matrix.
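The predict/update cycle above can be sketched in the one-dimensional case, where all matrices reduce to scalars (a minimal sketch; the default parameter values are illustrative):

```python
def kalman_step(x, p, z, A=1.0, B=0.0, u=0.0, H=1.0, Q=0.0, R=1.0):
    """One predict/update cycle of the Kalman filter, scalar case."""
    # Prediction: x'_k = A x_(k-1) + B u_k ; p'_k = A p_(k-1) A + Q
    x_pred = A * x + B * u
    p_pred = A * p * A + Q
    # Update: gain, corrected state, updated covariance
    K = p_pred * H / (H * p_pred * H + R)     # K' = p'_k H / (H p'_k H + R)
    x_new = x_pred + K * (z - H * x_pred)     # x''_k = x'_k + K'(z_k - H x'_k)
    p_new = (1.0 - K * H) * p_pred            # p_k = (I - K' H) p'_k
    return x_new, p_new
```

For example, starting from state 0 with unit covariance and observing z = 2 with unit observation noise, the gain is 0.5 and the corrected state moves halfway to the observation.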
Step 4, calculating the safety early warning area of the vehicle, i.e., the safety area.
The width of the lane the vehicle is driving in is first output using a lane line detection technique (based on Kalman filtering and the like) to obtain the transverse extent of the safety early warning area. The minimum emergency braking distance of the vehicle is then calculated from parameters such as driver reaction time and friction, and a safety distance model is used to determine the longitudinal distances of a first area and a second area (for example, red and yellow safety early warning areas of different warning levels). The calculation expressions of the red-area longitudinal distance y_min and the yellow-area longitudinal distance y_max are:
y_min = D + tr_max·u
y_max = (tr_max + TTC_c)·u
Where D is the distance the vehicle travels from the moment the driver becomes aware of the danger until the vehicle is completely stationary, tr_max is the maximum hazard response time the driver needs under normal conditions, u is the speed at which the vehicle is traveling, and TTC_c is the threshold of the TTC (Time To Collision, a key indicator used in automated driving and intelligent driving assistance systems to measure the time before the vehicle collides with the obstacle ahead).
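The two longitudinal bounds follow directly from the formulas above; a small sketch (function and argument names are illustrative):

```python
def warning_zone_longitudinal(D, tr_max, u, ttc_c):
    """Longitudinal extent of the red and yellow warning areas.

    y_min = D + tr_max * u        (red area, per the text)
    y_max = (tr_max + TTC_c) * u  (yellow area, per the text)
    """
    y_min = D + tr_max * u
    y_max = (tr_max + ttc_c) * u
    return y_min, y_max
```

For instance, at u = 10 m/s with D = 10 m, tr_max = 1.5 s and a TTC threshold of 2.5 s, the red area extends 25 m and the yellow area 40 m ahead of the vehicle.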
Step 5, collecting communication device information in the environment of the vehicle based on the communication sensing device in the vehicle.
The communication sensing device may be a vehicle-mounted Wifi sniffer. Specifically, communication device information in the environment of the vehicle is collected by the communication sensing device in the vehicle, where the communication device information includes the MAC (Media Access Control, also called the physical or hardware) address of the communication device and the corresponding signal strength RSSI value. The signal strength RSSI values of the communication device are then filtered with a Dixon test filtering algorithm, and the filtered RSSI data is converted into distance data (i.e., RSSI ranging) with a logarithmic fitting algorithm; this distance is determined as the second distance between the communication device and the vehicle.
The dixon test filtering algorithm can be realized through the following four steps:
Step 5.1, arrange the RSSI observation samples in ascending order to obtain rssi_1 < rssi_2 < rssi_3 < ... < rssi_n, and, assuming a test significance level α = 0.05, set the critical value to D(α, n).
Step 5.2, check for high-end and low-end outliers respectively, with the formulas:
γ_11 = (rssi_n − rssi_(n−1)) / (rssi_n − rssi_1)
γ′_11 = (rssi_2 − rssi_1) / (rssi_n − rssi_1)
Where rssi_n is the largest sample value, rssi_(n−1) is the nearest but unequal sample below it, rssi_2 is the sample nearest to but not equal to the smallest, and rssi_1 is the first (smallest) sample value in the sequence.
Step 5.3, compare the high-end and low-end statistics: when γ_11 > γ′_11 and γ_11 > D(α, n), rssi_n is an outlier; when γ′_11 > γ_11 and γ′_11 > D(α, n), rssi_1 is an outlier; in all other cases no outlier is present.
Step 5.4, after removing the outlier from the RSSI observation samples, repeat the above steps until no further outliers appear.
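Steps 5.1 to 5.4 can be sketched as follows; the r11-type Dixon statistics used are an assumption (the text omits the exact ratios), and `critical` stands in for the tabulated value D(α, n):

```python
def dixon_filter(samples, critical):
    """Iteratively remove outliers from RSSI samples with a Dixon test."""
    data = sorted(samples)                            # step 5.1: ascending order
    while len(data) >= 3:
        spread = data[-1] - data[0]
        if spread == 0:
            break
        gamma_high = (data[-1] - data[-2]) / spread   # step 5.2: high-end statistic
        gamma_low = (data[1] - data[0]) / spread      # step 5.2: low-end statistic
        if gamma_high > gamma_low and gamma_high > critical:
            data.pop()                                # step 5.3: drop largest value
        elif gamma_low > gamma_high and gamma_low > critical:
            data.pop(0)                               # step 5.3: drop smallest value
        else:
            break                                     # step 5.4: no outlier remains
    return data
```

For example, an isolated reading of -90 dBm among samples near -50 dBm is dropped on the first pass, after which the remaining values are kept.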
The filtered RSSI data is then converted into distance data using a logarithmic fitting algorithm, i.e., the second distance between the communication device and the vehicle is determined. First, the actual height of the acquisition device, the path loss factor n and the constant A are calibrated so as to reduce the influence of the actual environment.
The RSSI value may be expressed as:
Pr(d) = Pr(d0) − 10·n·lg(d/d0) + X_dBm
Wherein Pr(d) represents the received signal power at distance d, d0 is the reference distance, Pr(d0) is the received signal power at the reference distance d0, n is the path loss factor, and X_dBm is a Gaussian random variable with zero mean and a standard deviation in the range (4, 10).
In general, the influence of environmental factors through X_dBm is negligible, and with the d0 value set to 1 m, the conversion formula between RSSI and distance can be written as:
[RSSI]dBm=A-10nlgd
Wherein A is the signal strength value at 1 m, i.e., when the distance between the receiving node and the transmitting node is 1 m. On this basis, the corresponding RSSI values can be converted into distance data to determine the second distance between the communication device and the vehicle.
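Inverting [RSSI]dBm = A − 10·n·lg d gives the distance directly; as a sketch:

```python
def rssi_to_distance(rssi, A, n):
    """Convert a filtered RSSI value (dBm) into a distance in metres.

    Inverts [RSSI]dBm = A - 10*n*lg(d), where A is the signal strength
    at 1 m and n is the path loss factor.
    """
    return 10 ** ((A - rssi) / (10 * n))
```

For example, with A = -40 dBm and n = 2, an RSSI of -60 dBm corresponds to a distance of about 10 m.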
Step 6, matching the target object with the communication device according to the first distance and the second distance, and, if the matching is successful, fusing the first distance and the second distance to obtain the target distance between the successfully matched target object and the vehicle.
Specifically, the camera at the front of the vehicle captures the target object ahead, while the vehicle-mounted Wifi sniffing device captures communication data such as RSSI and MAC values. The visual information and the communication information are then combined by constructing a visual-wireless positioning fingerprint library, and a unique MAC value is matched to the detected target object (i.e., the target object is bound to the MAC value of the communication device), laying the foundation for subsequently combining the visual ranging result with the communication ranging result.
In this embodiment, the first distance and the second distance obtained above may be used for calculation of a dynamic weight ranging formula to obtain a target distance value. The dynamic weight ranging formula is expressed as:
D_i = β1·A_i + β2·B_i
Wherein D_i is the target distance at time i, A_i is the first distance at time i, B_i is the second distance at time i, and β1 and β2 are weight coefficients whose sum is 1.
If the vehicle is running in a scene with a favorable environment, such as sufficient light and clear air (i.e., favorable for using visual information), the system evaluates the time coefficient for which the target stays in the warning area and increases the weight coefficient β1 in the corresponding proportion; if the vehicle is running in a scene with a poor environment but a good communication signal, such as a main urban area or near a base station (i.e., favorable for using communication information), the system likewise increases the weight coefficient β2. The formulas are:
β1 = β10 + 0.1t
β2 = β20 + 0.1t
Wherein β10 and β20 are the initial default weight values, which may specifically be set to 0.5, and t is time.
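The dynamic-weight fusion can be sketched as follows; renormalising the other weight so that β1 + β2 stays 1 is an assumption consistent with the stated constraint, and the function name is illustrative:

```python
def fused_distance(a_i, b_i, t, favour_visual, beta10=0.5, beta20=0.5):
    """Dynamic-weight fusion D_i = beta1*A_i + beta2*B_i.

    The weight of the favoured information source grows as
    beta = beta0 + 0.1*t; the other weight is set so the sum stays 1
    (an assumed renormalisation).
    """
    if favour_visual:                 # good light/visibility: trust vision more
        beta1 = beta10 + 0.1 * t
        beta2 = 1.0 - beta1
    else:                             # strong signal environment: trust communication
        beta2 = beta20 + 0.1 * t
        beta1 = 1.0 - beta2
    return beta1 * a_i + beta2 * b_i
```

At t = 0 both weights default to 0.5, so the fused distance is the plain average of the visual and communication ranges.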
Step 7, carrying out collision early warning on the vehicle according to the safety early warning area, the target running track and the target distance of the successfully matched target object.
The method comprises the steps of determining a safety pre-warning area and a corresponding weight coefficient according to a target running track of a successfully matched target object, determining pre-warning parameters corresponding to the safety pre-warning area and duration of the safety pre-warning area, determining a collision pre-warning value of a vehicle according to the weight coefficient, the pre-warning parameters, the duration and the target distance, and carrying out pre-warning prompt according to the collision pre-warning value.
In this embodiment, the dynamic weight early warning calculation formula is constructed by setting different early warning parameters for different areas (such as the first area and the second area, or the red early warning area and the yellow early warning area) and combining the target motion track of the target object obtained by target tracking and the calculated target distance.
Specifically, the IMF-FCW algorithm (Improved Multi-information Fusion Forward Collision Warning, a forward collision warning algorithm based on improved multi-information fusion) proposed in this embodiment may be used to calculate the collision early warning value.
For example, the red early warning area and the yellow early warning area are assigned different early warning parameters R1 and R2 respectively, where the parameter of the red area is greater than that of the yellow area. Different weight coefficients can further be given to the early warning parameter according to the change of the target track, and combining the dwell time and the target distance yields the early warning calculation formula F:
F = α·R_i·t + D_i
Wherein F is the collision early warning value, R_i is the early warning parameter corresponding to the area the target is in, t is the time value, i.e., the dwell time in that area, and α is a weight coefficient that varies according to the predicted movement trend of the target, specifically:
Wherein, p0 and p1 are the position information of the successfully matched target object at different moments in the target moving track.
Furthermore, warning results of different levels can be provided to the driver according to the collision early warning value F output by the system, specifically:
Wherein f1 > f0; f1 is the upper warning threshold and f0 is the lower warning threshold. For example, a collision risk can be prompted when the collision early warning value is greater than or equal to the preset upper threshold; attention to a possible collision can be prompted when the value is greater than or equal to the preset lower threshold and less than the upper threshold; and safety can be indicated when the value is less than the preset lower threshold.
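The warning value and the threshold classification can be sketched together (the level labels and function name are illustrative):

```python
def warning_level(alpha, r_i, t, d_i, f0, f1):
    """Compute F = alpha*R_i*t + D_i and map it to a warning level."""
    F = alpha * r_i * t + d_i
    if F >= f1:
        return F, "collision risk"   # F >= upper threshold f1
    if F >= f0:
        return F, "caution"          # lower threshold f0 <= F < f1
    return F, "safe"                 # F < lower threshold f0
```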
In the above embodiment, by fusing information of various types and forms and continuously adjusting the relevant system parameters according to the dynamically changing actual environment, a dynamic fusion early warning strategy with multiple input parameters is constructed and a new IMF-FCW algorithm is proposed. A more appropriate weight coefficient can be selected according to the complexity of the scene and the estimated moving direction of the target object, so that the output warning result is more reasonable and reliable, and problems such as poor robustness, over-sensitive warnings or over-sluggish warnings caused by using too few parameters are reduced to the greatest extent.
Compared with ranging methods that rely on only a single type of information, the IMF-FCW algorithm provided by the application effectively fuses multiple kinds of information and parameters by constructing a visual-wireless database, and can greatly reduce the problems of slow ranging, poor robustness, low ranging precision and poor stability caused by using only visual information or only communication information, thereby improving accuracy.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a collision early-warning device for realizing the collision early-warning method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the collision warning device provided below may refer to the limitation of the collision warning method hereinabove, and will not be repeated herein.
In one exemplary embodiment, as shown in FIG. 4, a collision warning apparatus is provided, comprising an image processing module 402, a communication processing module 404, a distance fusion module 406, and a warning module 408, wherein:
The image processing module 402 is configured to perform target detection and tracking based on image information acquired by a vehicle camera, obtain a first distance between a target object and the vehicle, a target moving track of the target object, and determine a safety pre-warning area of the vehicle;
the communication processing module 404 is configured to obtain a second distance between the communication device and the vehicle based on the communication device information in the environment where the vehicle is located;
a distance fusion module 406, configured to match the target object with the communication device according to the first distance and the second distance, and, if it is determined that the matching is successful, perform a fusion process on the first distance and the second distance, so as to obtain a target distance between the target object and the vehicle, where the matching is successful;
and the early warning module 408 is configured to perform collision early warning on the vehicle according to the safety early warning area, and the target running track of the successfully matched target object and the target distance.
In an exemplary embodiment, the safety early-warning area comprises a first area and a second area, the early-warning module is further used for determining the safety early-warning area and a corresponding weight coefficient according to a target running track of a successfully matched target object, determining an early-warning parameter corresponding to the safety early-warning area and a duration of the safety early-warning area, determining a collision early-warning value of the vehicle according to the weight coefficient, the early-warning parameter, the duration and the target distance, and carrying out early-warning prompt according to the collision early-warning value.
In an exemplary embodiment, the early warning module is further configured to prompt that a collision risk exists if the collision early warning value is determined to be greater than or equal to a preset upper warning threshold; to prompt attention to a possible collision if the value is determined to be greater than or equal to a preset lower warning threshold and less than the preset upper threshold; and to indicate safety if the value is determined to be less than the preset lower warning threshold.
In an exemplary embodiment, the determining the collision warning value of the vehicle according to the weight coefficient, the warning parameter, the duration, and the target distance includes:
F = α·R_i·t + D_i
Wherein F is the collision early warning value, R_i is the early warning parameter corresponding to the safety early warning area the target is in, t is the dwell time in that safety early warning area, D_i is the target distance at time i, and α is a weight coefficient that varies according to the movement trend of the target running track of the successfully matched target object, specifically:
Wherein p0 and p1 are position information of the successfully matched target object at different moments in the target running track.
In an exemplary embodiment, the fusing the first distance and the second distance to obtain the target distance between the successfully matched target object and the vehicle includes:
D_i = β1·A_i + β2·B_i
Wherein D_i is the target distance at time i, A_i is the first distance at time i, B_i is the second distance at time i, and β1 and β2 are weight coefficients whose sum is 1.
In an exemplary embodiment, the communication processing module is further configured to collect communication device information in the environment where the vehicle is located based on the communication sensing device in the vehicle, where the communication device information includes the MAC address of the communication device and the corresponding signal strength RSSI value; to filter the signal strength RSSI values of the communication device using a Dixon test filtering algorithm; to convert the filtered RSSI data into distance data using a logarithmic fitting algorithm; and to determine the distance data as the second distance between the communication device and the vehicle.
The above-mentioned respective modules in the collision early warning apparatus may be realized in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device is provided, which may be an in-vehicle device, and an internal structure thereof may be as shown in fig. 5. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for conducting wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, Near Field Communication (NFC) or other technologies. The computer program is executed by the processor to implement a collision early warning method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen; the input device of the computer device can be a touch layer covering the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer device, and can also be an external keyboard, touch pad or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of a portion of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data authorized by the user or fully authorized by the parties concerned, and the collection, use, and processing of the related data are required to comply with relevant regulations.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile memory and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, an artificial intelligence (AI) processor, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction among the combinations of these technical features, they should be considered to fall within the scope of the present application.
The foregoing examples merely illustrate several embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.