Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, without undue burden, based on the embodiments provided herein are intended to fall within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar terms herein do not denote a limitation of quantity, and may refer to the singular or the plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include additional steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein refers to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," "third," and the like, as used herein, merely distinguish between similar objects and do not represent a particular ordering of the objects.
Fig. 1 is a schematic view of an application environment of a method for providing an augmented reality map service according to an embodiment of the present application. As shown in Fig. 1, a client 10 generates a service request according to an operation instruction input by a user and sends the service request to a server 11, where the server 11 splits the mapping production flow into a plurality of independent services and deploys each independent service under a micro-service architecture; after responding to the service request, the server 11 instructs the corresponding internal micro-service units to process one or more links of the mapping production flow. In this embodiment, since the services may be invoked in combination or separately, the flexibility and efficiency of the mapping process are greatly improved.
Fig. 2 is a flowchart of a method of providing an augmented reality map service according to an embodiment of the present application, as shown in fig. 2, the flowchart including the steps of:
S201: dividing the mapping production flow into a plurality of production links, and respectively deploying each production link as a micro-service unit in a micro-service architecture platform;
Microservices is a software architecture style that splits a large application into multiple small, relatively independent services, each with its own specific function. These services can be independently developed, deployed, and maintained, thereby increasing the flexibility, scalability, and maintainability of the system.
In this embodiment, the drawing production flow is split according to the following rules:
1. according to the decoupling of the computing characteristics, different types of computing tasks are separated and used as independent micro-service units, for example, CPU, GPU and memory are consumed as required to split the flow;
2. according to functional decoupling, splitting tasks with different functions, respectively serving as independent micro-service units, for example, performing sparse reconstruction and dense reconstruction, aligning and splitting a video frame and a map, and the like;
3. according to the function linkage combination, combining a plurality of functions with association/binding requirements into a micro-service unit, such as data collection and data verification, data cleaning and missing value filling;
in one exemplary embodiment, the split micro-service units specifically include:
1. Data extraction unit: used for preparing the key frame data required for map construction, including pictures and other auxiliary data, and can be divided into three sub-services:
1.1 Data gathering unit: used for acquiring the image material and auxiliary data of the target scene required for map reconstruction, for example, GPS data, the motion trajectory of the acquisition device, etc.;
it should be noted that the target scene may be any scene in the offline (physical) space, including but not limited to scenic spots, exhibition halls, shopping malls, etc.;
further, a lidar combined with a panoramic camera may be adopted as the acquisition scheme for large-scale scenes; alternatively, a purely visual camera device may be used for collection. Still further, the image capturing device may be a panoramic camera, a video camera, or a mobile client device with an image capturing function, such as a smart phone or a tablet computer;
it can be understood that the lidar-plus-panoramic-camera scheme can acquire data of higher quality and is suitable for large-scale scenes, but its cost and operation threshold are higher, whereas the acquisition scheme using a conventional camera device is low in cost and easy to operate, but yields lower image quality;
1.2 Data verification unit
Because the quality of the collected data directly influences the quality of map reconstruction, and high-quality data acquisition improves reconstruction efficiency and quality, the acquired data needs to be verified;
specific verification items include, but are not limited to: integrity check, accuracy check, validity confirmation, etc. Through the verification unit, problems in the data can be found, so that corrective action can be taken to optimize the data or to re-gather it.
1.3 Data preprocessing unit
To further enhance the quality of the mapping data, specific preprocessing steps include, but are not limited to, image denoising, brightness/contrast optimization, morphological operations, and the like.
Furthermore, because the acquired raw data may be highly redundant, frame extraction is needed to obtain the key frames in the raw data, so that a small number of key frames that sufficiently reflect the scene characteristics are sent to the map reconstruction link;
specifically, in an alternative embodiment, after the micro-service is actually deployed, the data preprocessing unit performs key frame extraction through the following process:
Step 1: analyze the image data to obtain each image frame of each image set;
Step 2: after a frame extraction cycle starts, judge whether the current image set is the first sequence, or whether frame extraction failed for the sequence processed before the current one; if either condition is met, the temporary map needs to be initialized, and the frame extraction flow for the current sequence is executed after initialization succeeds; if initialization fails, the image set of this sequence is judged to be of poor quality and the sequence is discarded;
otherwise, if a temporary map already exists when the image set of the current sequence is processed, the sequence is relocalized against it, and traversal continues with the subsequent sequences after relocalization succeeds; if relocalization fails, the sequence is added to the list of sequences to be processed in the next batch.
Step 3: for the image set of the current sequence, select suitable key frames in each image set using the map construction and camera pose estimation information of a SLAM-based frame extraction scheme; the selected key frames are typically frames in which the camera pose changes significantly or frames with high information content.
It should be noted that, during frame extraction from an image set, if tracking fails for a certain frame, the preprocessing unit cannot determine the relative motion of the current frame with respect to other frames, so the current frame cannot be aligned with the map and the map update fails; therefore, processing of this sequence's image set is terminated early, the specific position reached in the sequence is recorded, and the sequence is added to the list of sequences to be processed in the next frame extraction batch;
Step 4: after the image set of the current sequence has been processed, delete the sequence from the sequence list and continue traversing the next sequence for frame extraction until the sequence list of this cycle is empty and all its sequences have been processed; at this point, the temporary map is emptied and the next cycle begins;
Step 5: repeat Steps 1-4 iteratively until the list of sequences to be processed is empty, which indicates that all sequences have completed frame extraction, and the frame extraction process ends. Each subsequent cycle follows the same steps as above.
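To make Steps 1-5 concrete, the following is a minimal sketch of the cyclic frame extraction loop. It assumes a hypothetical SLAM front-end object `slam` whose methods (`init_map`, `relocalize`, `track`, `is_keyframe`, `reset`) stand in for the map construction and pose estimation operations described above; it is an illustration only, not the unit's actual implementation.

```python
from collections import deque

def extract_keyframes(sequences, slam):
    """Cyclic key-frame extraction over image sequences, following Steps 1-5 above.

    `slam` is a hypothetical SLAM front-end assumed to expose:
      init_map(frames) -> bool, relocalize(frames) -> bool,
      track(frame) -> bool, is_keyframe(frame) -> bool, reset() -> None.
    """
    pending = deque(sequences)              # sequences still to be processed
    keyframes = []

    while pending:                          # Step 5: one pass of the list per cycle
        next_batch = deque()                # sequences postponed to the next cycle
        map_ready = False                   # no temporary map at the start of a cycle
        while pending:
            seq = list(pending.popleft())   # Step 1: frames of the current sequence
            if not map_ready:               # Step 2: first sequence, or previous one failed
                if slam.init_map(seq):
                    map_ready = True
                else:
                    continue                # poor-quality image set: discard the sequence
            elif not slam.relocalize(seq):
                next_batch.append(seq)      # relocalization failed: defer to next batch
                continue
            for i, frame in enumerate(seq): # Step 3: SLAM-based key frame selection
                if not slam.track(frame):   # tracking lost: terminate this sequence early
                    next_batch.append(seq[i:])  # record the position reached, re-queue the rest
                    map_ready = False           # the next sequence must re-initialize
                    break
                if slam.is_keyframe(frame): # large pose change / high information content
                    keyframes.append(frame)
        slam.reset()                        # Step 4: empty the temporary map
        pending = next_batch                # Step 5: iterate until nothing remains
    return keyframes
```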
The preprocessing unit can intelligently select key frames from the raw data, reducing storage and computation cost while maintaining an effective representation of the scene, thereby improving the construction efficiency and quality of the augmented reality map. In addition, it should be noted that the key-frame selection policy may be adjusted according to the requirements of the specific application, so as to balance storage and computing resource usage.
2. Feature calculation unit
This unit is used for extracting the visual features of the key frames and completing the feature matching between them, and specifically comprises a feature extraction unit and a feature matching unit;
2.1 Feature extraction unit
Extracting image feature points is a key step in performing tasks such as map reconstruction and SLAM (Simultaneous Localization and Mapping) based on visual images. Optionally, the methods for extracting image features include, but are not limited to, SIFT (Scale-Invariant Feature Transform) features, ORB (Oriented FAST and Rotated BRIEF) features, SuperPoint (learning-based) features, and the like;
2.2 Feature matching unit
Another basic prerequisite of map reconstruction based on visual images is to obtain the matching relationship between corresponding feature points across images, where corresponding feature points are the different pixel locations at which the same physical world point is observed in different images.
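As an illustration only (not part of the described platform), feature extraction and matching of the kind described above can be sketched with OpenCV; SIFT and the Lowe ratio test are used here, and the file paths and the 0.75 threshold are placeholders.

```python
import cv2

def match_features(img_path_a: str, img_path_b: str):
    """Minimal SIFT extraction + brute-force matching sketch using OpenCV."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                       # ORB alternative: cv2.ORB_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)    # two nearest neighbours per descriptor
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # Lowe ratio test
    return kp_a, kp_b, good
```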
3. Map reconstruction unit
This unit reconstructs the sparse point cloud, dense point cloud, textured mesh and positioning map according to the key frame data and the feature information; the map reconstruction unit obtains the matching relationship between the feature points of each key frame of the sparse map and those of candidate scene images, generates the poses of the candidate scene images based on this matching relationship, and thereby obtains the positioning map;
further, the map reconstruction unit may be further split into a sparse map reconstruction unit, a dense map reconstruction unit, a grid reconstruction unit, a texture mapping unit, and a positioning map reconstruction unit, specifically:
3.1 Sparse map reconstruction unit
According to the feature points and the matching relationships between them, the sparse 3D map points of the scene as well as the camera poses and intrinsic parameters of the key frames can be reconstructed through SfM (Structure from Motion) technology.
Sparse reconstruction is the initial step of three-dimensional reconstruction and is used for estimating a coarse structure of the scene. Specifically, the reconstruction of the sparse map may be, but is not limited to being, implemented by SfM, and the resulting sparse map is a set of sparse feature points.
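The SfM step described above is often carried out with off-the-shelf open-source tooling. The following sketch assumes the COLMAP command-line tools are installed and simply drives a standard feature-extraction, matching and mapping pipeline from Python; COLMAP itself is not named in the text, so this is an illustrative assumption rather than the unit's prescribed implementation.

```python
import subprocess
from pathlib import Path

def sparse_reconstruct(image_dir: str, workspace: str) -> None:
    """Hedged sketch: run an SfM sparse reconstruction with the COLMAP CLI."""
    ws = Path(workspace)
    ws.mkdir(parents=True, exist_ok=True)
    db = ws / "database.db"
    sparse = ws / "sparse"
    sparse.mkdir(exist_ok=True)

    # extract local features for every image
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db), "--image_path", image_dir], check=True)
    # match features across images
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db)], check=True)
    # incremental SfM: sparse points, camera poses and intrinsics
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db), "--image_path", image_dir,
                    "--output_path", str(sparse)], check=True)
```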
3.2 Dense reconstruction unit
From the sparse map and the key frames, the dense depth of the key frames and the dense 3D map points of the scene, i.e., a dense point cloud, can be recovered by MVS (Multi-View Stereo) techniques.
3.3 Grid reconstruction unit
A dense, watertight mesh of the scene can be generated; typical methods include Poisson reconstruction, Delaunay-based reconstruction, etc.;
3.4 Texture mapping unit
Texture mapping refers to determining the texture and texture coordinates of each face of the mesh map, thereby generating a three-dimensional mesh map with high-definition texture mapping. Such methods are available in many open-source packages, such as PCL (Point Cloud Library), mvs-texturing, AliceVision, etc.
3.5 Positioning map reconstruction unit
A positioning map that can be used for visual positioning is generated from the sparse map or from an additional auxiliary mesh map.
It should be noted that sparse reconstruction is the initial step of three-dimensional reconstruction and is used for estimating the rough structure of the scene; dense reconstruction further generates richer three-dimensional point clouds or models on the basis of the sparse reconstruction; grid reconstruction converts the point cloud data into a continuous three-dimensional mesh model; finally, texture maps may be applied to the generated three-dimensional model to provide visual texture and realism.
In addition, in this embodiment, the positioning map may be produced not only directly from the sparse map but also from the result of the grid reconstruction. The latter approach makes use of the dense reconstruction result and yields a positioning map with richer information and a more lifelike effect.
4. Map alignment unit
This unit is used for performing scale recovery, gravity-axis alignment and the like on the map output by the map reconstruction unit;
specifically, the map alignment unit may be further divided into a scale alignment unit, a gravity alignment unit, and a multi-map alignment unit;
4.1 Scale alignment unit
In one embodiment, the scale alignment unit detects a physical marker in the image frames (such as a calibration board preset in the scene) and, for each key frame sequence i, calculates a scale factor from the map to the real world according to the known size of the physical marker and its size in the map; S1 is then obtained as the weighted average of the scale factors of the key frame sequences;
in another embodiment, the scale alignment unit extracts the GPS information of each key frame sequence i and aligns the trajectory of the key frames in the map with the trajectory obtained from the GPS information, thereby obtaining a scale factor from the positioning map to the real world for each sequence; S2 is then obtained as the weighted average of the scale factors of the key frame sequences;
in another embodiment, the scale alignment unit extracts, from the additional data of each key frame sequence i, prior trajectory data with real-world scale stored during data acquisition (such as ARKit pose information), and aligns the trajectory of the key frames in the map with it to obtain a scale factor to the real world for each sequence; S3 is then obtained as the weighted average of the scale factors of the key frame sequences.
Since the scale information of the artificial marker is the most reliable, in the micro-service platform S1 is preferably taken as the basis for scale recovery. Of course, a scaling coefficient may also be obtained by weighted averaging of the scale information S1, S2, and S3 obtained in the above three ways, and scale alignment may then be carried out according to this scaling coefficient.
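The weighted averages described above can be written compactly as follows; the per-sequence weights w_i and the mixing coefficients alpha_k are not specified in the text, so the exact form shown here is an illustrative assumption.

```latex
S_k = \frac{\sum_i w_i \, s_i^{(k)}}{\sum_i w_i}, \qquad k \in \{1, 2, 3\},
\qquad
S = \alpha_1 S_1 + \alpha_2 S_2 + \alpha_3 S_3, \qquad \sum_k \alpha_k = 1,
```

where s_i^(k) denotes the scale factor estimated for key frame sequence i by method k (physical marker, GPS trajectory, or prior trajectory), and S is the combined scaling coefficient mentioned above.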
4.2 Gravity alignment unit
Further, when a three-dimensional map of a scene is reconstructed from purely visual pictures, the chosen target axis of the coordinate system of the output three-dimensional map (generally the Z axis is selected as the target axis) may not be parallel to the direction of gravity in the map, i.e., the direction perpendicular to the ground plane. Therefore, the three-dimensional map needs to be gravity-aligned by the gravity alignment unit;
specifically, the gravity alignment unit is used for acquiring the vertical line segments of each key frame, obtaining the gravity direction vector of the sparse map according to these vertical line segments and the poses and intrinsic parameters of the corresponding key frames, and obtaining the rotation transformation matrix from the gravity direction to the Z axis;
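Once the gravity direction vector has been estimated, the rotation that maps it onto the Z axis can be computed in closed form. A minimal sketch using NumPy and Rodrigues' formula is given below; it illustrates only this final step, not the unit's full implementation.

```python
import numpy as np

def gravity_to_z_rotation(gravity: np.ndarray) -> np.ndarray:
    """Rotation matrix that maps the estimated gravity direction onto the map Z axis."""
    g = gravity / np.linalg.norm(gravity)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)                      # rotation axis (unnormalised)
    c = float(np.dot(g, z))                 # cosine of the rotation angle
    if np.isclose(c, 1.0):                  # already aligned with Z
        return np.eye(3)
    if np.isclose(c, -1.0):                 # anti-parallel: rotate by pi about the X axis
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])     # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))   # Rodrigues' formula
```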
4.3 Multi-map alignment unit
Considering that a plurality of maps may be reconstructed independently, their respective coordinate systems may not be consistent; therefore, the plurality of maps can be aligned into the same coordinate system by the multi-map alignment unit;
specifically, a map may be aligned to a historical version or to a specified coordinate system, for example by aligning the current map to the coordinate system of a map of a certain historical version, where both maps describe the same physical scene;
5. Map evaluation unit
After the positioning map is generated, in order to verify the positioning accuracy, the positioning performance is evaluated by the map evaluation unit.
The map evaluation link can automatically calculate various quality indicators of the map and generate corresponding quality reports, covering abnormal jumps in the capture trajectory, the map registration rate, the scale verification status, the gravity alignment status, the positioning success rate and the like, and can automatically raise early warnings about the quality status.
6. Product management unit
The map products are automatically stored in a designated server for retrieval by the business layer. As the number of maps grows, the number of map versions increases, and the service modes multiply, the produced maps need to be archived and managed over the long term. In addition, the product management unit is also used for transforming, combining, partially extracting, and otherwise processing the map products.
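Putting the above split together, the production links and their sub-services could be recorded on the micro-service platform in a simple registry, for example as sketched below; the data structure and identifiers are illustrative only and merely paraphrase the units described above.

```python
# Illustrative registry of the production links and sub-services described above;
# the structure and the identifiers are assumptions made for this sketch.
MAPPING_SERVICE_REGISTRY = {
    "data_extraction":     ["data_gathering", "data_verification", "data_preprocessing"],
    "feature_calculation": ["feature_extraction", "feature_matching"],
    "map_reconstruction":  ["sparse_map_reconstruction", "dense_reconstruction",
                            "grid_reconstruction", "texture_mapping",
                            "positioning_map_reconstruction"],
    "map_alignment":       ["scale_alignment", "gravity_alignment", "multi_map_alignment"],
    "map_evaluation":      [],
    "product_management":  [],
}
```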
Further, the mapping task is split into a plurality of flow nodes that can be independently developed and debugged; the flow nodes are aligned to a unified interface service, and after service packaging and deployment through a unified service deployment platform, services can be provided externally. The service deployment platform applied in this embodiment may be, but is not limited to, Jenkins, Travis CI, CircleCI, and the like.
Specifically, fig. 3 is a flowchart of service package deployment according to an embodiment of the present application, and as shown in fig. 3, the flowchart includes the following steps:
S301: the code within the scope of each micro-service is developed independently and pushed to a remote server after its tests pass; the code may be hosted on Git-based platforms such as GitLab;
S302: the workflow scripts required for service deployment are created or updated on the service deployment platform; this step can be carried out by manual writing or by an automated program;
S303: the workflow of the service deployment platform is started; the main process includes code compilation, packaging, image building, deploying the image to a test machine, running the tests, and deploying the image to the production server;
this process is generally time-consuming, and if unexpected results occur during it, repairs are needed, so multiple rounds of execution may ultimately be required.
It should be noted that the service interface template on which a service depends also needs to be updated on each client, so that the client can generate the configuration file required downstream according to the user input and the interface template.
In addition, certain services may require some provisioning information, such as a message code and the input/output path templates of the service, to be updated on the aggregator, so that the aggregator can parse the latest message feedback and determine the final service interface content according to the configuration file and the input/output path templates provided by the client.
Through the above step S201, the mapping production flow is divided into a plurality of production links, and each production link is deployed as a micro-service unit in the micro-service architecture platform; each micro-service unit can serve as an independent service executing one link of the mapping production flow, and a plurality of micro-service units can be invoked jointly to flexibly support various mapping requirements.
In addition, it should be noted that the above way of splitting and deploying the mapping production flow is only a specific example; when facing the complex requirements of actual service scenarios, further mapping links can be split out, and the corresponding services can be flexibly extended and deployed based on the micro-service architecture.
S202, an aggregator responds to an interaction instruction input by a client to generate a service request, and instructs a micro-service unit to process at least one production link in an augmented reality map production flow according to the service request by calling a calling interface of the micro-service unit;
FIG. 4 is a schematic diagram of an aggregator and a micro-service according to an embodiment of the present application, where as shown in FIG. 4, a user may send an interaction instruction to a micro-service platform through a mobile phone, a tablet computer, a PC, etc.;
The aggregator receives the interaction instruction, generates a service request, and calls the various lower-layer micro-service units, sending them the service request to instruct them to complete a specific mapping production flow.
Specifically, the operation performed in response to the user's interaction instruction may be a single link, such as key frame extraction, map alignment, or map evaluation; a combination of links is also possible, for example data acquisition plus key frame extraction, or sparse map reconstruction plus dense map reconstruction; of course, the entire mapping production flow can also be completed serially, i.e., from data acquisition through to the evaluation of the map positioning indicators.
It should be noted that, in this embodiment, the Aggregator may aggregate data or information from multiple mapping micro-services into a single response to meet the requirements of a specific client request or for display. The aggregator acts as an intermediary layer integrating the individual micro-service data, enabling clients to obtain the required information in a simpler manner.
In addition, the aggregator can collect and combine the data of different micro-service units, perform data conversion and processing to meet the customization requirements of the client, and carry out security control, authority control, and the like. Since such applications are a conventional arrangement in the art, they will not be described again in this embodiment.
Through the above steps S201 to S202, an interaction instruction may be sent to the aggregator through a terminal, and the aggregator instructs each micro-service unit, so that any one link or several links of the mapping production flow can be completed flexibly; moreover, since the links have no strict path dependence, a user can directly process intermediate-stage data acquired in other ways without starting from the beginning, which improves the usage efficiency of the system. In addition, in the related art, updating a single mapping production link requires updating the whole system, which is costly; this problem is avoided here because each micro-service can be updated and redeployed independently.
In some embodiments, considering that the aggregator may need to face many types of micro-services, an interface with a uniform format needs to be formulated so as to realize efficient and accurate calling; specifically:
the aggregator configures calling interfaces of all micro-service units based on interface configuration information input by a user;
wherein the interface configuration information is generated based on a service configuration parameter table and includes: a service name configuration item, a service ID, an input path, and an output path. Each service provides a number of configurable parameters to the upper layer; after the user completes the configuration, these parameters are passed by the upper layer to the service layer.
The service name corresponds to a split service and is used to identify it; the service ID is the identifier corresponding to each invocation of the service; the input path and the output path specify the names and paths of the input products and the output products, respectively.
Furthermore, the paths and names of the input and output data of all the mapping services are designated by the upper layer, so they can be flexibly adjusted to the requirements of downstream services and upper-layer applications. It can be understood that for the top-level micro-service units, such as the alignment unit and the mapping unit, the upper layer is the aggregator; for the more finely subdivided micro-service units, such as the gravity alignment unit and the scale alignment unit, the upper layer is the parent micro-service, in this case the alignment unit.
It can be understood that each micro-service is provided with the above call interface; the formats of the different call interfaces are uniform and are derived from the data filled into the service configuration parameter table, although different interfaces carry different parameter values. Because the interface of each micro-service corresponds to a unique global identifier, and the correspondence between interface information and concrete services is managed in the aggregator, accurate and efficient service calling can be realized by designating different identifiers.
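As an illustration of the interface configuration described above, the parameters passed for one micro-service call might look like the following; the field names paraphrase the configuration items listed above (service name, service ID, input path, output path) and all concrete values are invented for the example.

```python
# Hypothetical interface configuration derived from the service configuration
# parameter table; every concrete value below is illustrative only.
interface_config = {
    "service_name": "sparse_map_reconstruction",
    "service_id": "sparse_map_reconstruction-20240101-0001",  # unique per invocation
    "input_path": "/shared/jobs/scene_a/keyframes/",           # product consumed
    "output_path": "/shared/jobs/scene_a/sparse_map/",         # product produced
    "params": {"max_image_size": 2000},                        # service-specific options
}
```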
In an exemplary embodiment, each micro-service unit is encapsulated as a web service. After receiving the input parameters, the upper-layer unit of a micro-service unit initiates a service request over the HTTP protocol, encapsulating the specific call interface content and service content in the request in a format such as JSON, XML, or YAML, and then sends the service request to the micro-service unit; the micro-service unit parses the request and executes the mapping task corresponding to the service content.
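A minimal sketch of this encapsulation is given below, assuming Flask on the service side and the `requests` library on the calling side; the route, port, and field names are illustrative, not a prescribed protocol, and the `interface_config` dictionary from the previous sketch could serve as the request body.

```python
# Service side: one micro-service unit wrapped as a web service (Flask assumed).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/invoke", methods=["POST"])
def invoke():
    cfg = request.get_json()               # parse the JSON-encoded call interface content
    # ... dispatch the mapping task identified by cfg["service_name"] asynchronously ...
    return jsonify({
        "service_name": cfg["service_name"],
        "service_id": cfg["service_id"],
        "response_code": 0,                # e.g. 0 = request accepted
        "estimated_completion_s": 1800,    # estimated time for the task to finish
    })

# Calling side (the upper-layer unit), using the `requests` library:
#   import requests
#   resp = requests.post("http://sparse-map-reconstruction:8080/invoke",
#                        json=interface_config, timeout=10).json()
```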
Further, each micro-service unit returns a response message to the aggregator after receiving a service request, where the response message includes: the service name, the service ID, a response code, and the estimated completion time of the task corresponding to the service request, the response code being used to indicate the handling status of the request and/or the reason for an execution failure;
in addition, after receiving the response messages, the aggregator obtains the execution state information of the tasks from the micro-service units at a preset time interval, by polling or over a long-lived connection, based on the estimated completion times returned by the plurality of micro-service units; optionally, the necessary execution state information can be fed back to the user's front-end interface until the service terminates.
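The polling described here might look as follows on the aggregator side; the status endpoint and the response fields are assumptions made for the sketch.

```python
import time
import requests

def poll_until_done(status_url: str, interval_s: float = 30.0, timeout_s: float = 3600.0) -> dict:
    """Query a micro-service's (assumed) status endpoint at a preset interval
    until the task reports completion or failure, then return the final state."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = requests.get(status_url, timeout=10).json()
        # optionally forward `state` to the user's front-end interface here
        if state.get("status") in ("finished", "failed"):
            return state
        time.sleep(interval_s)
    raise TimeoutError(f"task at {status_url} did not terminate within {timeout_s} seconds")
```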
Through the above steps, the execution state information of the tasks is fed back to the aggregator; once the execution state information of each micro-service unit is gathered at the aggregator, the aggregator can perform global control, which further optimizes the mapping efficiency. In addition, since every service has a unified call interface, calling method, and communication interface, the specific input logic and execution logic of a service are hidden from the upper layers, which provides the necessary support for certain permission or encryption operations.
In some embodiments, considering that in practical applications it may be necessary to serve multiple clients at the same time, multiple mapping tasks may exist simultaneously, and these mapping tasks may be very complex; therefore, this embodiment provides a service scheduling method that can efficiently handle the scheduling and allocation of different mapping tasks. Fig. 5 is a flowchart of a service scheduling method according to an embodiment of the present application; as shown in Fig. 5, the flow includes the following steps:
S501: obtain the estimated completion times and execution state information from the response messages returned by each micro-service unit, and generate the queue information of the mapping tasks based on the operation instructions input by each client;
S502: based on the operation instructions, obtain the association information corresponding to any one mapping task across different micro-service units;
it can be understood that, through the above steps, the aggregator obtains the running state of the current services and the mapping task queues sent by different clients, and obtains the association information of a mapping task across different micro-services (mapping links); for example, the micro-service numbered 10 is a mapping service and the micro-service numbered 12 is a map alignment service, micro-service 10 and micro-service 12 together complete one mapping task, and there is a temporal ordering between the two.
S503: through a preset scheduling algorithm, perform global scheduling and allocation based on the estimated completion times, the execution state information, the queue information, and the task association information, generate or update the service requests corresponding to the plurality of micro-service units, and send the service requests to the call interfaces of the micro-service units.
Through the above steps S501 to S503, specific service requests are generated or updated by the scheduling algorithm based on the multi-dimensional state data to instruct each micro-service to complete the mapping tasks, so that the resource allocation of the micro-services can be better managed and optimized and the efficiency and performance of the system improved;
specifically, the scheduling algorithm can effectively allocate computing resources (CPU, memory, GPU, etc.) according to the requirements of the different micro-services, ensuring that each micro-service obtains the resources it needs. This helps to improve the utilization efficiency of system resources and reduce resource waste, so as to meet the performance requirements of different services;
in addition, the scheduling algorithm can balance the workload of the different micro-services, ensuring that tasks are shared evenly, so as to avoid some micro-services being overloaded while others are idle, thereby improving the overall performance and scalability of the system. The execution order of tasks can also be managed according to the task priorities of the micro-services, so that high-priority tasks are processed in a timely manner.
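The text does not fix a particular scheduling algorithm, so the following is only a toy illustration of one scheduling pass over the state gathered in S501-S502: it releases the tasks whose dependencies have finished, ordered by priority and then by the shortest estimated completion time. The data model and the policy are assumptions made for the sketch.

```python
def dispatchable(tasks: dict, depends_on: dict, running: set) -> list:
    """Return the mapping tasks that may be dispatched now.

    tasks:      task_id -> {"priority": int, "eta_s": float, "done": bool}
    depends_on: task_id -> set of task_ids that must finish first
    running:    task_ids currently executing on some micro-service unit
    """
    finished = {t for t, info in tasks.items() if info.get("done")}
    ready = [t for t, info in tasks.items()
             if t not in running
             and not info.get("done")
             and depends_on.get(t, set()) <= finished]
    # highest priority first, then shortest estimated completion time
    return sorted(ready, key=lambda t: (-tasks[t]["priority"], tasks[t]["eta_s"]))
```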
In some embodiments, considering that constructing a map resource requires multiple micro-services working together, data dependencies exist between different services; for example, the map reconstruction service depends on the key frames output by the data extraction service.
In this embodiment, the production data with dependencies between micro-services is stored in a shared read-write storage system, which may be, but is not limited to, NFS (Network File System), HDFS (Hadoop Distributed File System), etc.
It should be noted that, considering the production data involved in the mapping links, not all data is suitable for being read and written directly in a system such as NFS; therefore, for data suitable for NFS read/write, each service can read and write it directly, whereas for data unsuitable for NFS read/write, a service needs to synchronize the data from NFS to its local server (for example, using rsync) when it is required as input, write its output to the local server first, and then synchronize it from the local server back to NFS, thereby realizing data sharing between services indirectly.
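For data that is unsuitable for direct NFS read/write, the indirect sharing described above can be sketched as two small helpers that wrap rsync; the mount points and directory layout are illustrative assumptions.

```python
import subprocess

NFS_ROOT = "/mnt/nfs/mapping"        # shared storage mount (illustrative path)
LOCAL_ROOT = "/data/local/mapping"   # scratch space on the service's own machine

def pull_inputs(rel_path: str) -> None:
    """Sync input data from the shared store to local disk before the service reads it."""
    subprocess.run(["rsync", "-a",
                    f"{NFS_ROOT}/{rel_path}/", f"{LOCAL_ROOT}/{rel_path}/"], check=True)

def push_outputs(rel_path: str) -> None:
    """Write results locally first, then sync the finished product back to the shared store."""
    subprocess.run(["rsync", "-a",
                    f"{LOCAL_ROOT}/{rel_path}/", f"{NFS_ROOT}/{rel_path}/"], check=True)
```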
In this embodiment, because multiple services access the same data, data consistency among the micro-services can be ensured by the shared data system without keeping separate data copies. In addition, the shared file system can provide high availability, ensuring that the data is always accessible; and when there is a large number of micro-services, storing the data centrally can save storage space, because the same data does not need to be replicated across multiple micro-services.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The present embodiment also provides a system for providing an augmented reality map production service, which is used for implementing the foregoing embodiments and preferred implementations; what has already been described will not be repeated. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated. Fig. 6 is a block diagram of a system for providing an augmented reality map production service according to an embodiment of the present application; as shown in Fig. 6, the system comprises an aggregator 60 and a plurality of micro-service units 61, wherein:
the aggregator 60 is configured to generate a service request in response to an interaction instruction input by a client, and to invoke the call interface of a micro-service unit 61 so as to instruct the micro-service unit to process at least one production link of the augmented reality map production flow according to the service request;
The user can send interaction instructions to the micro-service platform through a mobile phone, a tablet computer, a PC, and the like;
the aggregator 60 receives the interaction instruction, generates a service request, and calls the various lower-layer micro-service units 61, sending them the service request to instruct them to complete a specific mapping production flow.
Specifically, the operation performed in response to the user's interaction instruction may be a single link, such as key frame extraction, map alignment, or map evaluation; a combination of links is also possible, for example data acquisition plus key frame extraction, or sparse map reconstruction plus dense map reconstruction; of course, the entire mapping production flow can also be completed serially, i.e., from data acquisition through to the evaluation of the map positioning indicators.
It should be noted that, in this embodiment, the Aggregator 60 may aggregate data or information from multiple mapping micro-services into a single response to meet the requirements of a specific client request or for display. The aggregator 60 acts as an intermediary layer integrating the individual micro-service data, enabling clients to obtain the required information in a simpler manner.
The micro-service units 61 are configured to process at least one sub-production link of the augmented reality map production flow in response to the service request, where the aggregator and the server nodes are deployed on the micro-service platform, the map production flow is split into a plurality of sub-production links, and any one sub-production link corresponds to at least one micro-service unit on the micro-service platform.
It should be noted that microservices is a software architecture style that splits a large application into multiple small, relatively independent services, each with its own specific function. These services can be independently developed, deployed, and maintained, thereby increasing the flexibility, scalability, and maintainability of the system.
In this embodiment, the mapping production flow is split according to the following rules:
1. decoupling by computing characteristics: computing tasks of different types are separated into independent micro-service units, for example, the flow is split according to whether a task mainly consumes CPU, GPU, or memory resources;
2. decoupling by function: tasks with different functions are split into independent micro-service units, for example, sparse reconstruction, dense reconstruction, and video-frame-to-map alignment are split into separate services;
3. combination by functional linkage: several functions with association/binding requirements are combined into a single micro-service unit, for example, data gathering and data verification, or data cleaning and missing-value filling;
in one exemplary embodiment, the split micro-service units specifically include: a data extraction unit, a feature calculation unit, a map reconstruction unit, a map alignment unit, a map evaluation unit, and a product management unit;
further, the data extraction unit can be further divided into a data gathering unit, a data verification unit, and a data preprocessing unit; the feature calculation unit can be further divided into a feature extraction unit and a feature matching unit; the map reconstruction unit can be further divided into a sparse map reconstruction unit, a dense reconstruction unit, a grid reconstruction unit, a texture mapping unit, and a positioning map reconstruction unit; the map alignment unit can be further divided into a scale alignment unit, a gravity alignment unit, and a multi-map alignment unit;
in addition, the mapping task is split into a plurality of flow nodes that can be independently developed and debugged; the flow nodes are aligned to a unified interface service, and after service packaging and deployment through a unified service deployment platform, services can be provided externally. The service deployment platform applied in this embodiment may be, but is not limited to, Jenkins, Travis CI, CircleCI, and the like.
Through this system, a terminal can send an interaction instruction to the aggregator, and the aggregator instructs each micro-service unit, so that any one link or several links of the mapping production flow can be completed flexibly; moreover, since the links have no strict path dependence, a user can directly process intermediate-stage data acquired in other ways without starting from the beginning, which improves the usage efficiency of the system. In addition, in the related art, updating a single mapping production link requires updating the whole system, which is costly; this problem is avoided here because each micro-service can be updated and redeployed independently.
In one embodiment, an electronic device is provided, which may be a server; fig. 7 is a schematic diagram of the internal structure of such an electronic device according to an embodiment of the present application, as shown in fig. 7. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, a computer program, and a database. The processor is used to provide computing and control capabilities; the network interface is used to communicate with an external terminal via a network connection; the internal memory provides an environment for the running of the operating system and the computer program; the computer program, when executed by the processor, implements the method for providing an augmented reality map production service; and the database is used to store data.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the present application, and although they are described in relative detail, they are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and all of these fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.