CN119006678A - Three-dimensional Gaussian splatting optimization method for pose-free input - Google Patents

Three-dimensional Gaussian splatting optimization method for pose-free input

Info

Publication number
CN119006678A
Authority
CN
China
Prior art keywords
light
dimensional
scene
camera
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410997744.9A
Other languages
Chinese (zh)
Inventor
曲强
王余希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202410997744.9A
Publication of CN119006678A
Legal status: Pending (current)

Abstract

Translated from Chinese


The present invention discloses a three-dimensional Gaussian splatting optimization method for pose-free input. The method comprises: for an input image, predicting the ray-bundle distribution with a ray prediction model to obtain distribution features in ray-bundle form; calculating the camera pose from these ray-bundle features; sampling the ray-bundle distribution according to the volume density of the rays to obtain the initial spatial distribution of a three-dimensional Gaussian point cloud focused on the visual center region; for the input image, obtaining a visual hull through view-frustum projection and object-mask computation; and performing three-dimensional Gaussian splatting scene training based on the initial spatial distribution of the three-dimensional Gaussian point cloud and the visual hull to obtain a three-dimensional scene reconstruction model that satisfies a preset loss-function criterion, where the loss function contains a training regularization term for the camera pose parameters. The present invention provides important initialization scene information for three-dimensional Gaussian splatting training and significantly improves the quality and detail richness of the final three-dimensional structure.

Description

Three-dimensional Gaussian splatting optimization method for pose-free input
Technical Field
The invention relates to the technical field of computer vision, and in particular to a three-dimensional Gaussian splatting optimization method for pose-free input.
Background
New view synthesis refers to rendering the picture corresponding to a target pose, given a source image together with the source pose and the target pose, and generally involves three-dimensional understanding of a scene. New view synthesis has wide application in 3D reconstruction, AR/VR and related fields. In recent years, deep learning models, in particular convolutional neural networks, neural radiance fields and three-dimensional Gaussian splatting, have played a key role in new view synthesis; they can learn complex scene representations and illumination models to generate more realistic images. For example, three-dimensional Gaussian splatting (3D Gaussian Splatting) uses a set of Gaussian functions (typically 3D Gaussian distributions) to represent the objects in a scene. The scene is decomposed into a number of Gaussian ellipsoids, each with its own center position, color, orientation and size. During rendering, these ellipsoids are rasterized into pixels, and the color value of each pixel is determined by an integration or sampling process, thereby producing the final image. Compared with traditional volume rendering or mesh-based rendering, three-dimensional Gaussian splatting can achieve real-time rendering speeds.
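For reference, the per-pixel integration mentioned above is commonly written as front-to-back alpha compositing of the depth-ordered Gaussians overlapping the pixel (this is the standard 3D Gaussian Splatting formulation; the patent does not reproduce it explicitly):

$$C = \sum_{i \in \mathcal{N}} c_i\,\alpha_i \prod_{j=1}^{i-1}\bigl(1 - \alpha_j\bigr),$$

where $\mathcal{N}$ is the set of Gaussians covering the pixel sorted by depth, $c_i$ is the color of the $i$-th Gaussian and $\alpha_i$ is its opacity after projection to the image plane.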
With the rapid development of three-dimensional Gaussian splatting, new view synthesis has made remarkable breakthroughs in rendering quality, efficiency and interactivity, strongly promoting the expansion of applications in the vision industry. These applications cover digital humans, autonomous-driving scene simulation, three-dimensional content generation, rapid UAV mapping, large-scale scene reconstruction, three-dimensional language fields and more. Typically, such commercial applications require a set of multi-view scene pictures to be uploaded, after which scene reconstruction is performed on a high-performance server. Although the training efficiency of new view synthesis applications has been significantly improved, reconstruction quality is still limited, because its construction basis relies on the sparse point cloud estimated by the Structure-from-Motion (SfM) technique during the camera calibration phase. Structure-from-Motion is a family of methods that automatically recover the three-dimensional structure of a scene and the intrinsic and extrinsic camera parameters from multiple images or video sequences, typically estimating scene depth and the camera trajectory from geometric constraints and motion models. The SfM workflow comprises feature detection, feature matching, relative pose estimation, triangulation, global optimization, dense reconstruction and other steps. Major challenges faced by current SfM systems include handling occlusion, illumination changes, low-texture regions and image noise, all of which can lead to feature-matching errors and reconstruction inaccuracies. This reliance on the efficiency and accuracy of SfM matching often means that users of a service-oriented application must wait for a long time.
Furthermore, supporting multi-view pictures captured in an arbitrary manner by arbitrary devices is of great importance for new view synthesis applications, in particular for scenarios where the service is deployed on common mobile devices. Camera pose estimation based on Structure-from-Motion faces serious challenges when given sparse photo input without pose annotations. In a sparse-view environment, there may be insufficient matching texture, which directly affects the accuracy of camera pose estimation.
In order to solve the problems of low efficiency and inaccurate camera pose estimation in the SfM-based initialization stage of three-dimensional Gaussian splatting, two mainstream improvement strategies currently exist:
1) Employ a more accurate and robust camera pose estimation method. The core of this direction is to adopt deep neural networks based on the vision Transformer architecture to improve the accuracy and robustness of camera pose estimation. With its strong sequence-modeling capability and parallel-computing advantages, the Transformer model exhibits excellent performance in handling complex spatial relationships and long-range dependencies. Specifically, by introducing a self-attention mechanism, the Transformer can effectively capture the relationships between elements of the input sequence, thereby achieving accurate estimation of the camera pose without explicit geometric constraints. In addition, the training process of the deep neural network automatically learns rich feature representations, which further enhances adaptability to occlusion, illumination changes and dynamic scenes, and improves the reliability of the estimation results.
2) Treat the camera pose as part of the iteratively trained parameters. This strategy incorporates the camera pose parameters into the iterative training process, treating them as learnable variables rather than fixed prior knowledge. The core idea is to optimize the camera pose parameters with a gradient-descent algorithm by minimizing loss functions such as the reprojection error or the photometric error. During training, the model continuously adjusts the camera pose until it converges to the optimal estimate.
The prior art, as analyzed, has mainly the following drawbacks:
1) The Structure-from-Motion technique can recover the motion trajectory of a camera and the three-dimensional structure of a scene from a series of images. In this process the camera poses are inferred, and the generated scene point cloud is the key basis for scene initialization in three-dimensional Gaussian splatting training, so it influences the final training result. Although deep neural network methods can improve the accuracy of camera pose prediction under sparse views, directly replacing the SfM step of three-dimensional Gaussian splatting with them inevitably harms the accuracy of scene initialization and blurs the reconstruction result.
2) Combining camera pose parameters with iterative training of the three-dimensional Gaussian splatting scene can in theory provide a more accurate model optimization approach, but in practice the deep coupling of the camera pose parameters with the three-dimensional Gaussian splatting scene often makes the optimization process abnormally complex. This excessive coupling not only makes it difficult for the model to reach the global optimum, but can also cause unstable training and similar problems, limiting further improvement of model performance. More importantly, incorporating the camera pose parameters into the training loop means that they must be recalculated and adjusted at every iteration, which can significantly increase the number of training iterations, prolong the overall training time, and severely challenge computing-resource consumption and time efficiency.
In summary, in the field of new view synthesis, camera pose information is an essential part of scene training. The prior art either presumes that the camera pose information is known, or infers the camera positions by applying Structure-from-Motion algorithms to dense views. This process carries a high time cost, especially because the exhaustive matching computation is time-consuming. Moreover, when facing view occlusion or highly repetitive texture structures in sparse scenes, SfM algorithms often struggle to recover the camera pose accurately, resulting in inaccurate positioning. In addition, although the point cloud preliminarily generated by SfM serves as the starting point of neural-field training for constructing the scene model, its quality is limited by the uncertainty of the above pose estimation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a three-dimensional Gaussian splatting optimization method for pose-free input. The method comprises the following steps:
for an input image, predicting the ray-bundle distribution with a ray prediction model to obtain distribution features in ray-bundle form, the distribution features comprising the momentum of the rays, the direction of the rays and the volume density of the rays;
calculating the camera pose based on the distribution features in ray-bundle form;
for the ray-bundle distribution, sampling based on the volume density of the rays to obtain the initial spatial distribution of a three-dimensional Gaussian point cloud focused on the visual center region;
for the input image, obtaining a visual hull through view-frustum projection and object-mask computation, the visual hull reflecting structural information of the scene objects;
and performing three-dimensional Gaussian splatting scene training based on the initial spatial distribution of the three-dimensional Gaussian point cloud and the visual hull to obtain a three-dimensional scene reconstruction model that satisfies a preset loss-function criterion, where the loss function contains a training regularization term for the camera pose parameters.
Compared with the prior art, the method has the advantage of providing a three-dimensional Gaussian splatting scene initialization scheme based on camera ray prediction. The scheme deeply integrates the camera pose estimation method with the scene reconstruction pipeline to build an end-to-end optimization framework; this integrated design achieves more accurate and efficient model training by jointly optimizing the camera pose and the scene structure. While guaranteeing the accuracy of camera pose estimation, the method accelerates the initialization of the training scene and improves the accuracy of new views synthesized by the three-dimensional Gaussian splatting method under sparse, pose-free views.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of the overall process of a three-dimensional Gaussian splatting optimization method for pose-free input according to one embodiment of the invention;
FIG. 2 is a flow chart of a three-dimensional Gaussian splatting optimization method for pose-free input according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a density ray prediction model in accordance with one embodiment of the present invention;
FIG. 4 is a schematic diagram of density-based ray sampling in accordance with one embodiment of the present invention;
FIG. 5 is a schematic diagram of initializing a three-dimensional scene by combining the visual hull with ray-bundle density information in accordance with one embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Three-dimensional Gaussian splatting is an important innovation in the field of new view synthesis and has the potential to reshape the production pipeline of digital content. The invention solves the problem of scene training under sparse, pose-free views and can effectively extend the general capability and hardware scope of new view synthesis, thereby promoting visual applications such as digital humans, autonomous-driving scene simulation, three-dimensional content generation, rapid UAV mapping, large-scale scene reconstruction and three-dimensional language fields, and helping to bring new view synthesis applications to common mobile devices.
In general, referring to FIG. 1, the provided three-dimensional Gaussian splatting optimization method for pose-free input mainly comprises four parts: camera density-ray prediction, three-dimensional Gaussian scene initialization (the initial three-dimensional Gaussian training scene), visual-hull redundancy elimination, and camera pose regularization.
Camera density-ray prediction is an improvement on camera ray prediction. The core idea of camera ray prediction is that neural network learning generally benefits from an over-parameterized, distributed representation; therefore, instead of using a compact camera representation such as the intrinsic and extrinsic matrices, the possible orientation of the camera is represented by a bundle of rays. Each ray originates at an image pixel, converges at the camera center, and is represented in Plücker coordinates. The neural network recovers the positions and directions of the ray bundle from a set of images as faithfully as possible, so that the intrinsic and extrinsic camera parameters can be solved from the convergence of the rays, for example with least-squares optimization. The camera density-ray prediction of the present invention additionally attaches density to the ray information to provide prior information for scene initialization, as described in detail later.
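As an illustration of this over-parameterized representation, the sketch below encodes a pinhole camera as a bundle of Plücker rays, one per pixel. It is a minimal sketch under stated assumptions (a known intrinsic matrix K, camera-to-world rotation R and camera center c); the function and variable names are illustrative and do not come from the patent.

```python
import numpy as np

def pixel_rays_plucker(K, R, c, pixels):
    """Encode a camera as a bundle of Plücker rays, one per pixel.

    K      : (3, 3) camera intrinsics
    R      : (3, 3) camera-to-world rotation
    c      : (3,)   camera center in world coordinates
    pixels : (N, 2) pixel coordinates (u, v)

    Returns an (N, 6) array [d | m] with unit direction d and moment m = c x d.
    """
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous pixel coordinates
    dirs_cam = (np.linalg.inv(K) @ uv1.T).T                 # back-project to camera frame
    d = (R @ dirs_cam.T).T                                  # rotate into world frame
    d /= np.linalg.norm(d, axis=1, keepdims=True)           # unit directions
    m = np.cross(c, d)                                      # Plücker moment: m = p x d, with p = camera center
    return np.hstack([d, m])

# Toy example: a 4x4 grid of pixels for a 640x480 camera at the origin
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, c = np.eye(3), np.zeros(3)
uu, vv = np.meshgrid(np.linspace(0, 639, 4), np.linspace(0, 479, 4))
rays = pixel_rays_plucker(K, R, c, np.stack([uu.ravel(), vv.ravel()], axis=1))
print(rays.shape)  # (16, 6)
```

Recovering the camera from such a bundle is the inverse problem: the network predicts the rays, and the camera parameters are then solved from where the rays converge.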
Three-dimensional Gaussian scene initialization: originally, the three-dimensional Gaussian points are initialized from a sparse point cloud of the scene structure derived from Structure-from-Motion. To obtain a more accurate pose estimation method and form an end-to-end training process, the over-parameterized information of the camera density rays is used as a prior for scene initialization, so that camera pose estimation and initialization of the three-dimensional Gaussian scene are integrated. The three-dimensional Gaussian points are sampled along the rays according to the density, rather than with conventional uniform random sampling.
Visual-hull redundancy elimination: depth estimation information is obtained from the input multi-view pictures, a visual hull of the scene is constructed, and spurious initialization point-cloud positions outside the hull are eliminated. During training of the three-dimensional Gaussian scene, the visual hull provides rough structural information of the scene objects and is used to eliminate unreasonable Gaussian point-cloud distributions, thereby suppressing the generation of visual floaters and maintaining scene consistency.
Camera pose regularization: during three-dimensional Gaussian scene training, a training regularization term for the camera pose parameters is added, and the camera pose parameters are adjusted to achieve a more accurate scene reconstruction result.
In the invention, during initialization of the three-dimensional Gaussian scene, sparse pictures (images) without pose annotations are input, and their depth information and pose information are first estimated for joint optimization. Specifically, referring to FIG. 2, the provided three-dimensional Gaussian splatting optimization method for pose-free input comprises the following steps:
Step S210: for an input image, predict the ray-bundle distribution with a ray prediction model to obtain distribution features in ray-bundle form, including the momentum of the rays, the direction of the rays and the volume density of the rays.
Camera density-ray prediction is a proxy task for camera pose estimation and outputs the distribution parameters of a large number of rays. The plain camera-ray method only recovers the camera pose from several pictures; unlike a Structure-from-Motion algorithm, it cannot also derive an initialization point cloud. In the embodiment of the invention, the constructed ray distribution representation is improved, upgrading the ray features in Plücker coordinates to ray features with density. Based on the characteristics of volume rendering in three-dimensional Gaussian splatting, rays carrying volume-density information are predicted, providing a basis for sampling the initial three-dimensional Gaussian training scene from the ray bundle.
FIG. 3 shows the density-ray prediction model, which is built on a vision Transformer. The pose parameters λ of the camera are encoded as a series of rays $\{l_i\}$; $p_i$ denotes any point on the i-th ray and $d_i$ denotes the direction vector of the i-th ray. In the prediction model, the spatial position of each ray is represented in Plücker coordinates. To enable initialization of a three-dimensional Gaussian scene, in one embodiment a key parameter, the ray density ρ, is introduced. Each ray can therefore be represented in the expanded form $l_i = [d_i; m_i; \rho_i]$, where $d_i$ and $m_i$ are the direction and momentum features of the ray, computed as $m_i = p_i \times d_i$, and $\rho_i$ is the spatial volume-density information carried by the modified ray. Specifically, the input image I is split into blocks to form a sequence S, and the sequence with position encoding φ(·) is fed into the vision Transformer model V to predict the ray direction d, momentum m and density ρ corresponding to each pixel block, expressed as:
$$\hat{\mathcal{R}} = \{\,l_i = [d_i;\, m_i;\, \rho_i]\,\}_{i=1}^{N} = V\bigl(\{\,y_i + \phi(x_i)\,\}_{i=1}^{N}\bigr),$$
where $\hat{\mathcal{R}}$ denotes the predicted ray-bundle distribution, containing the parameter information of a series of rays; $x_i$ denotes a block position, $y_i$ denotes the block input information, $\phi(x_i)$ denotes the position encoding corresponding to the block, and N denotes the number of blocks. Hereinafter, $\mathcal{R}$ is also used in place of $\hat{\mathcal{R}}$ to refer to the ray-bundle distribution unless the context indicates otherwise.
In summary, the ray prediction model is used to obtain the distribution features of the ray bundle, including the momentum of the rays, the direction of the rays, and the volume density of the rays.
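A minimal PyTorch-style sketch of such a per-patch prediction head is given below, assuming standard vision Transformer components. The layer sizes, the additive position embedding and all names are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class DensityRayPredictor(nn.Module):
    """Predict one ray [d; m; rho] per image patch with a small ViT encoder."""

    def __init__(self, image_size=224, patch_size=16, dim=256, depth=6, heads=8):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))   # phi(x_i)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, 7)                                   # d (3) + m (3) + rho (1)

    def forward(self, images):
        tokens = self.patch_embed(images).flatten(2).transpose(1, 2)    # (B, N, dim): y_i
        tokens = tokens + self.pos_embed                                # y_i + phi(x_i)
        feats = self.encoder(tokens)                                    # vision Transformer V
        out = self.head(feats)                                          # (B, N, 7)
        d = torch.nn.functional.normalize(out[..., :3], dim=-1)         # unit ray direction
        m = out[..., 3:6]                                               # Plücker moment
        rho = torch.nn.functional.softplus(out[..., 6:])                # non-negative density
        return d, m, rho

model = DensityRayPredictor()
d, m, rho = model(torch.randn(2, 3, 224, 224))
print(d.shape, m.shape, rho.shape)   # (2, 196, 3), (2, 196, 3), (2, 196, 1)
```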
Step S220: calculate the camera pose based on the distribution features in ray-bundle form.
Camera pose estimation: based on the distribution of a large number of rays, the camera poses associated with multiple views are computed by least-squares optimization. For example, for the ray bundle $\mathcal{R}$ formed for an input picture I, the camera-ray prediction method solves the intrinsic and extrinsic parameters λ of the camera, including the specific pose parameters θ, by the least-squares method.
Because three-dimensional Gaussian splatting usually takes multiple multi-view pictures as input, the joint optimal solution of the ray bundles of pictures sharing the same view is considered in the camera pose computation; thanks to the distributed ray-bundle representation, these rays can be merged into a single ray bundle for the least-squares optimization.
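As a hedged illustration of the least-squares step: every ray that actually passes through the camera center c satisfies $m_i = c \times d_i$, which is linear in c, so a bundle of predicted Plücker rays yields c by ordinary least squares. The patent does not spell out this particular solver; the helper below is a sketch under that assumption, with illustrative names.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def camera_center_from_rays(d, m):
    """Least-squares camera center c from Plücker rays (d_i, m_i).

    c x d_i = m_i  <=>  -[d_i]_x c = m_i, stacked over all rays.
    d, m : (N, 3) ray directions and moments.
    """
    A = np.concatenate([-skew(di) for di in d], axis=0)   # (3N, 3)
    b = m.reshape(-1)                                      # (3N,)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# Synthetic check: rays through a known center are recovered exactly
rng = np.random.default_rng(0)
c_true = np.array([0.5, -1.0, 2.0])
d = rng.normal(size=(20, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
m = np.cross(c_true, d)
print(camera_center_from_rays(d, m))   # ~ [0.5, -1.0, 2.0]
```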
In one embodiment, to further improve the accuracy of pose prediction and cope with the prediction uncertainty caused by missing views in a sparse setting, the predicted ray directions and momenta can be further refined. For example, a diffusion model can be used to add noise to and then denoise the rays, yielding richer ray-bundle distribution features and hence more accurate ray prediction under sparse views.
In summary, step S220 builds on the camera-ray method and uses the proxy task of ray prediction to recover the camera pose information. Considering the characteristics of volume rendering in three-dimensional Gaussian scene modeling, the prediction task is extended with the key dimension of density, so that density information related to the initial scene is output along with the camera pose. The density carried by the rays essentially represents the inverse of the volume rendering process, i.e., the scene structure is derived back from the density information. The Gaussian point cloud of the scene is initialized from this density information, providing a basis for training a high-precision three-dimensional Gaussian scene.
Step S230: for the ray-bundle distribution, sample based on the volume density of the rays to obtain the initial spatial distribution of the three-dimensional Gaussian point cloud focused on the visual center region.
In this step, under the density-aware camera-ray prediction framework, the three-dimensional Gaussian scene is initialized, i.e., the distribution of Gaussians in three-dimensional space is initialized by ray-density sampling.
The process begins with the proxy task: camera ray prediction yields a large number of ray parameters $\mathcal{R} = \{l_i\}$, each ray $l_i$ being defined in the Plücker coordinate system by a direction vector $d_i$ and a momentum vector $m_i$. By parsing these ray parameters, their positions can be reconstructed in three-dimensional space, and the volume density ρ of the rays can be used to reveal the point-cloud distribution characteristics of the scene. This strategy exploits the ray-bundle regression proxy task used by camera-ray-based pose estimation and captures prior information closely related to the camera pose by analyzing the distribution of the ray bundle in scene space.
Specifically, starting from the image pixels, samples are drawn along the ray paths $\{P_j\}_{j=1}^{M}$ passing through the camera center (M is the number of ray paths) and serve as the starting point-cloud positions for three-dimensional Gaussian splatting scene reconstruction. Each ray $l_i$ maps from the three-dimensional world space $\mathbb{R}^3$ to the two-dimensional image space $\mathbb{R}^2$. By sampling the ray bundle, an initial point-cloud layout focused on the visual center region is obtained.
FIG. 4 illustrates density-based ray sampling for constructing the initial spatial distribution of the three-dimensional Gaussian point cloud, which specifically comprises the following steps:
Step S41: map the ray parameters $\mathcal{R}$ expressed in the Plücker coordinate system back to actual positions in three-dimensional space, obtaining the distribution of the rays in three-dimensional space.
Step S42: initialize a set of voxel grids $\{v_k\}$ in three-dimensional space and divide the rays into multiple segments according to a set rule. Each voxel grid $v_k$ may contain several rays, and each ray carries its density distribution along the path $\{\rho_i(z)\}_{z\in[0,L]}$, where k is the voxel-grid index, i is the index of a ray intersecting the voxel grid, z is the path variable, and L denotes the grid width.
Step S43: initialize a three-dimensional Gaussian sphere point at the center of each voxel grid based on this information; the size of the Gaussian point is based on the integral of the densities of the rays passing through the voxel grid, thereby forming the initial spatial distribution of the three-dimensional Gaussian point cloud. Concretely, a three-dimensional Gaussian distribution point $g_k$ is initialized at the center of each voxel grid $v_k$ according to the ray-density information, and its size s is determined by the integral density of the rays within the voxel (i.e. by accumulating $\int_0^L \rho_i(z)\,dz$ over the rays intersecting $v_k$), which yields the initial spatial distribution of the three-dimensional Gaussian point cloud $\{g_k\}_{k=1}^{K}$, where K denotes the number of Gaussian kernels in the scene.
These Gaussian points are further adjusted in the subsequent scene density-control training phase: small Gaussian points $g_k$ evolve into larger Gaussian points through deletion, merging and gradient updates, finally forming a finer three-dimensional scene representation. This process not only strengthens the geometric structure of the scene but also improves the accuracy of scene-density prediction.
In summary, this step initializes the scene point cloud from the Transformer-predicted ray-density distribution and samples the initial three-dimensional Gaussian points through voxel-grid integration, thereby forming the initial spatial distribution of the three-dimensional Gaussian point cloud. The approach reduces the number of iterations of the density-control algorithm in subsequent scene training and improves the accuracy of the rendering results.
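The following NumPy sketch illustrates this voxel-grid accumulation: points are sampled along each predicted ray, the carried densities are integrated per voxel, and one Gaussian is placed at the center of each sufficiently dense voxel. The grid resolution, the threshold and the scale normalization are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def init_gaussians_from_ray_density(origins, dirs, rho, t_max=4.0,
                                    n_samples=64, grid_res=32, extent=2.0):
    """Accumulate ray densities into a voxel grid and place one Gaussian per occupied voxel.

    origins, dirs : (N, 3) ray origins and unit directions
    rho           : (N, n_samples) volume density sampled along each ray
    Returns Gaussian centers (K, 3) and scalar sizes (K,).
    """
    t = np.linspace(0.0, t_max, n_samples)                              # sample depths along each ray
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]     # (N, S, 3) sample points
    pts = pts.reshape(-1, 3)
    w = (rho * (t_max / n_samples)).reshape(-1)                         # density * step ~ integral contribution

    # map points into voxel indices inside the cube [-extent, extent]^3
    idx = np.floor((pts + extent) / (2 * extent) * grid_res).astype(int)
    keep = np.all((idx >= 0) & (idx < grid_res), axis=1)
    idx, w = idx[keep], w[keep]

    density = np.zeros((grid_res,) * 3)
    np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]), w)            # integrate ray densities per voxel

    occupied = np.argwhere(density > 1e-3)
    centers = (occupied + 0.5) / grid_res * 2 * extent - extent         # voxel centers in world coordinates
    sizes = density[occupied[:, 0], occupied[:, 1], occupied[:, 2]]
    sizes = sizes / (sizes.max() + 1e-8)                                # normalized Gaussian scales
    return centers, sizes
```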
Step S240: obtain a visual hull reflecting the structural information of the scene objects through view-frustum projection and object-mask computation.
To further improve the quality and robustness of three-dimensional scene training, in one embodiment a visual hull is introduced; for example, a three-dimensional visual hull is constructed from two-dimensional depth and segmentation information to eliminate redundancy, and the visual hull incorporates prior knowledge of the structure of the objects in the input images. For example, the visual hull is computed by view-frustum projection and object masking; compared with a sparse point cloud generated by sampling alone, the visual hull better preserves the structural consistency required for multi-view reconstruction and can reject invalid point-cloud data that could lead to unreasonable Gaussian distributions. FIG. 5 shows how the visual hull is combined with ray-bundle density information to initialize the three-dimensional scene, forming a scene density grid suitable for initialization.
It is worth mentioning that the computational cost of the visual hull is relatively low: it is generated from a small number of key mask views using only a segmentation model, so it can easily be integrated into the initialization process, providing efficient and accurate starting conditions for three-dimensional scene reconstruction.
In one embodiment, during three-dimensional scene reconstruction, constructing a three-dimensional mesh in combination with the visual hull comprises the following steps (see the sketch after step S54):
Step S51: extract the depth information and segmentation masks of the two-dimensional pictures with a semantic segmentation model, denoted $\{I_i, D_i, M_i\}_{i=1}^{N}$, where $I_i$ denotes the i-th input image, $D_i$ and $M_i$ are its depth map and segmentation mask respectively, and N is the number of images.
Step S52: obtain the camera intrinsic parameter matrix K and extrinsic parameter matrix E of each picture to determine the transformation between the camera coordinate system and the world coordinate system.
Step S53: project the initialized scene voxel grid $\{v_k\}$ onto the two-dimensional plane through the camera intrinsic matrix K and extrinsic matrix E to obtain the corresponding two-dimensional projection grid.
Step S54: filter out projections that do not belong to the reconstruction target: the object mask $M_i$ is applied to the two-dimensional projection grid to eliminate the content of non-target objects, forming the filtered three-dimensional grid, i.e. the visual hull, in three-dimensional space. This process ensures the accuracy and effectiveness of the three-dimensional reconstruction while reducing the complexity of subsequent processing.
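A minimal sketch of the carving described in steps S51 to S54 is given below: voxel centers are projected into every view with K and E and kept only if they fall inside the corresponding object mask $M_i$. The function names and the simple inside-all-masks rule are illustrative assumptions; the patent additionally uses the depth maps $D_i$, which this sketch omits.

```python
import numpy as np

def carve_visual_hull(voxel_centers, masks, Ks, Es):
    """Keep only voxels whose projections fall inside the object mask in every view.

    voxel_centers : (V, 3) world-space voxel centers
    masks         : list of (H, W) boolean object masks M_i
    Ks, Es        : per-view intrinsics (3, 3) and extrinsics (3, 4) [R | t]
    Returns a boolean array marking voxels inside the visual hull.
    """
    inside = np.ones(len(voxel_centers), dtype=bool)
    homo = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])   # (V, 4)
    for M, K, E in zip(masks, Ks, Es):
        cam = (E @ homo.T).T                          # world -> camera coordinates
        in_front = cam[:, 2] > 1e-6
        uvw = (K @ cam.T).T
        u = np.round(uvw[:, 0] / np.maximum(uvw[:, 2], 1e-6)).astype(int)
        v = np.round(uvw[:, 1] / np.maximum(uvw[:, 2], 1e-6)).astype(int)
        h, w = M.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[valid] = M[v[valid], u[valid]]            # inside the object mask of this view
        inside &= hit                                 # a hull voxel must be inside every mask
    return inside
```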
In summary, the redundant point-cloud elimination based on the visual hull effectively accelerates model training and markedly suppresses unreasonable visual elements such as visual floaters. The visual-hull technique significantly improves the efficiency and accuracy of the three-dimensional scene initialization stage, reduces unnecessary computational load, lays a solid foundation for the subsequent Gaussian point-cloud optimization and scene density control, and ensures the clarity and consistency of the reconstruction result.
Step S250: taking minimization of the set loss function as the optimization objective, perform three-dimensional Gaussian splatting scene training based on the initial spatial distribution of the three-dimensional Gaussian point cloud and the visual hull to obtain a three-dimensional scene reconstruction model, where the loss function contains a training regularization term for the camera pose parameters.
In this step, three-dimensional scene training is started from the initial spatial distribution of the three-dimensional Gaussian point cloud and the visual hull and continues until the set loss-function criterion is met. During training, the loss function contains a training regularization term for the camera pose parameters; by adjusting the pose parameters during three-dimensional Gaussian scene training, a more accurate scene reconstruction can be achieved.
Correcting the camera parameters becomes particularly important when dealing with scenes from multiple views, because scale differences may exist between views. In addition, inconsistent illumination between views or slight motion blur produced when shooting with handheld devices can degrade the final correction. Therefore, the invention optimizes the camera extrinsics and the three-dimensional model simultaneously during scene training; in this process the camera parameters are adjusted jointly with the Gaussian attributes to achieve finer control. In addition, the invention applies a regularization condition $\lambda\,\lVert E - E_0\rVert$ to the camera parameters, where $E_0$ is the camera extrinsic matrix of the initial pose predicted by the camera-ray method (also the global camera pose for scene training), E denotes the camera extrinsic matrix of the current view, and λ is the coefficient of the regularization term. This prevents the optimized camera pose from drifting too far and improves the alignment quality throughout the three-dimensional Gaussian splatting scene training.
In one embodiment, the loss function is set to:
$$\mathcal{G}^{*}, \{E_j^{*}\} \;=\; \arg\min_{\mathcal{G},\,\{E_j\}} \;\sum_{j \in V}\left(\sum_{i=1}^{N}\bigl\lVert \mathcal{F}(\mathcal{G}, E_j) - I_i^{\,j}\bigr\rVert \;+\; \lambda\,\bigl\lVert E_j - E_j^{0}\bigr\rVert\right)$$
The loss function expresses the joint optimization of the three-dimensional Gaussian scene and the camera pose parameters. Here $\mathcal{G}^{*}$ denotes the optimized three-dimensional Gaussian scene, $E_j^{*}$ denotes the optimized camera extrinsic matrix at the current view j, $\mathcal{G}$ is the initial three-dimensional Gaussian scene to be optimized during training, and $E_j$ denotes the camera extrinsic matrix to be optimized at the current view j. $\mathcal{F}(\mathcal{G}, E_j)$ denotes rendering the three-dimensional Gaussian scene to be optimized into a 2D image under the pose parameters of view j. $I_i^{\,j}$ denotes the i-th real input image at view j, and N denotes the number of input images at view j. In the regularization term, λ is the regularization coefficient, $E_j^{0}$ denotes the initial camera extrinsic matrix associated with view j, and V denotes the view classification of the input images.
During training, the regularization term on the camera pose is effectively added, and the true camera pose is optimized through the neural network. Because the camera-ray method already provides an initial value for the camera pose parameters, only a small amount of constraint and training cost is needed to make the camera pose estimate more accurate and robust, so a more accurate scene reconstruction result is obtained from sparse views.
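A condensed PyTorch-style sketch of one training step with this pose regularizer is shown below. The renderer is a placeholder (a real implementation would call a differentiable 3D Gaussian splatting rasterizer), and the photometric L1 loss, the parameterization of E_j and the value of λ are illustrative assumptions.

```python
import torch

def training_step(gaussians, extrinsics, extrinsics_init, images, render,
                  optimizer, lam=0.01):
    """One joint update of the Gaussian scene and the per-view camera extrinsics.

    gaussians       : dict of learnable tensors (positions, scales, colors, ...)
    extrinsics      : (J, 3, 4) learnable camera extrinsic matrices E_j
    extrinsics_init : (J, 3, 4) fixed initial poses E_j^0 from the camera-ray method
    images          : (J, 3, H, W) ground-truth views
    render          : callable(gaussians, E) -> rendered image (placeholder renderer)
    """
    optimizer.zero_grad()
    photometric = 0.0
    for j in range(extrinsics.shape[0]):
        rendered = render(gaussians, extrinsics[j])                  # F(G, E_j)
        photometric = photometric + torch.abs(rendered - images[j]).mean()
    pose_reg = torch.norm(extrinsics - extrinsics_init, dim=(1, 2)).sum()  # sum_j ||E_j - E_j^0||
    loss = photometric + lam * pose_reg
    loss.backward()
    optimizer.step()
    return loss.item()
```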
After training is completed, the three-dimensional scene reconstruction model is obtained and can be applied to new view synthesis: for a pose-free input image, the model can synthesize new views from multiple viewpoints.
In summary, compared with the prior art, the invention has the following advantages:
1) For the emerging three-dimensional Gaussian splatting method in the field of new view synthesis, the invention provides a scene initialization strategy oriented to sparse, pose-free input, which efficiently realizes accurate camera pose estimation and joint scene initialization. Combining the advantages of the camera-ray method, and while preserving the camera pose estimation efficiency and accuracy offered by Structure-from-Motion, the invention provides critical initialization scene information for three-dimensional Gaussian splatting training and remarkably improves the quality and detail richness of the final three-dimensional structure.
2) The invention provides an end-to-end implementation scheme for the three-dimensional Gaussian splatting technique oriented to sparse, pose-free input. While guaranteeing scene synthesis quality and consistency, it greatly improves the efficiency of three-dimensional Gaussian splatting scene initialization. Integrating the camera-ray method into the three-dimensional Gaussian splatting training process solves the instability of camera pose prediction caused by insufficient information and repeated textures in sparse views; the estimation accuracy and efficiency of the camera-ray method under sparse views exceed those of current Structure-from-Motion.
3) The invention makes full use of the proxy task of ray prediction, adds the dimension of ray density, and realizes prediction of the density information of the three-dimensional spatial structure. Camera pose prediction and three-dimensional Gaussian scene initialization are combined to initialize the three-dimensional Gaussian point cloud, ensuring the accuracy and rendering quality of the training scene.
4) The invention uses a visual depth/segmentation model to acquire the depth information of the pictures and re-projects the two-dimensional scene into a three-dimensional visual hull, thereby suppressing redundant point clouds during training, reducing unreasonable visual floaters, and lightening the computational burden of training.
5) Combined with the camera pose optimization during training, the invention further corrects the inconsistency of the camera pose across multi-view pictures and improves the accuracy of camera pose estimation under sparse views. Meanwhile, the regularization term restrains excessive drift from the initial camera pose prediction, updates can be terminated early to avoid excessive computational cost, and the alignment quality of the whole three-dimensional Gaussian splatting scene training can thus be optimized with little extra computation.
6) Multiple simulation tests show that, whether the input images carry pose information or not, the invention can synthesize the expected multi-view pictures, improving the quality of the synthesized images while improving computational efficiency.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, Python, etc., and conventional procedural programming languages, such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field-Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information for computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

Translated from Chinese
1. A three-dimensional Gaussian splatting optimization method for pose-free input, comprising the following steps:
for an input image, predicting the ray-bundle distribution with a ray prediction model to obtain distribution features in ray-bundle form, including the momentum of the rays, the direction of the rays and the volume density of the rays;
calculating the camera pose based on the distribution features in ray-bundle form;
for the ray-bundle distribution, sampling based on the volume density of the rays to obtain the initial spatial distribution of a three-dimensional Gaussian point cloud focused on the visual center region;
for the input image, obtaining a visual hull through view-frustum projection and object-mask computation, the visual hull reflecting the structural information of the scene objects;
performing three-dimensional Gaussian splatting scene training based on the initial spatial distribution of the three-dimensional Gaussian point cloud and the visual hull to obtain a three-dimensional scene reconstruction model that satisfies a preset loss-function criterion, wherein the loss function contains a training regularization term for the camera pose parameters.
2. The method according to claim 1, wherein predicting the ray-bundle distribution with the ray prediction model comprises:
splitting the input image I into blocks to form an image-block sequence S;
inputting the sequence with position encoding into the ray prediction model V to predict the direction d, momentum m and volume density ρ of the rays, expressed as
$$\hat{\mathcal{R}} = \{\,l_i = [d_i;\, m_i;\, \rho_i]\,\}_{i=1}^{N} = V\bigl(\{\,y_i + \phi(x_i)\,\}_{i=1}^{N}\bigr),$$
wherein $\hat{\mathcal{R}}$ is the predicted ray-bundle distribution and the camera pose parameters λ are encoded as the ray bundle; each ray is represented by the expanded Plücker-coordinate vector l = [d; m; ρ], where d and m are the direction and momentum features of the ray and ρ is the spatial volume-density information carried by the ray; the spatial position of each ray $l_i$ is defined in the Plücker coordinate system by the direction vector $d_i$ and the momentum vector $m_i$; $x_i$ denotes the image-block position, $y_i$ the image-block input information, $\phi(x_i)$ the position encoding corresponding to the block, and N the number of image blocks.
3. The method according to claim 1, wherein the camera pose is calculated by the least-squares method: for multiple pictures of the same view, based on the distributed representation in ray-bundle form, the rays are merged into one ray bundle, and the intrinsic and extrinsic camera parameters, including the pose parameters, are then solved by the least-squares method.
4. The method according to claim 1, wherein the initial spatial distribution of the three-dimensional Gaussian point cloud focused on the visual center region is obtained by the following steps:
for the ray-bundle distribution, starting from the image pixels, drawing samples along the ray paths $\{P_j\}_{j=1}^{M}$ passing through the camera center to obtain the initial spatial distribution of the three-dimensional Gaussian point cloud focused on the visual center region, which serves as the starting point-cloud positions for three-dimensional Gaussian splatting scene reconstruction, where M is the number of extracted ray paths and $P_j$ denotes the j-th ray path.
5. The method according to claim 2, wherein the initial spatial distribution of the three-dimensional Gaussian point cloud focused on the visual center region is obtained by the following steps:
mapping the ray-bundle distribution in the Plücker coordinate system back to actual positions in three-dimensional space to obtain the distribution of the rays in three-dimensional space;
initializing a set of voxel grids $\{v_k\}$ in three-dimensional space and dividing the rays into multiple segments according to a set rule, wherein each voxel grid contains multiple rays and each ray carries its density distribution along the path $\{\rho_i(z)\}_{z\in[0,L]}$, where k is the voxel-grid index, i is the index of a ray intersecting the voxel grid, z is the path variable, and L denotes the grid width;
initializing a three-dimensional Gaussian sphere point at the center of each voxel grid, the size of the Gaussian point being based on the integral of the densities of the rays within the voxel grid, thereby forming the initial spatial distribution of the three-dimensional Gaussian point cloud; according to the ray volume-density information, a three-dimensional Gaussian distribution point $g_k$ is initialized at the center of each voxel grid $v_k$, its size s being determined by the integral density of the rays within the voxel, giving the initial spatial distribution of the constructed three-dimensional Gaussian point cloud $\{g_k\}_{k=1}^{K}$, where K denotes the number of Gaussian kernels in the scene.
6. The method according to claim 5, wherein, during the three-dimensional Gaussian splatting scene training, a three-dimensional grid is constructed by the following steps:
extracting the depth information and segmentation masks of the two-dimensional images with a semantic segmentation model, denoted $\{I_i, D_i, M_i\}_{i=1}^{N}$, where $I_i$ denotes the i-th input image, $D_i$ is the depth map of $I_i$, and $M_i$ is the segmentation mask of $I_i$;
obtaining the camera intrinsic parameter matrix K and extrinsic parameter matrix E of each image to determine the transformation between the camera coordinate system and the world coordinate system;
projecting the initialized scene voxel grid onto a two-dimensional plane through the camera intrinsic parameter matrix K and extrinsic parameter matrix E to obtain the corresponding two-dimensional projection grid;
filtering the two-dimensional projection grid with the object mask $M_i$ to exclude content that does not belong to the reconstruction target, forming the filtered three-dimensional grid in three-dimensional space.
7. The method according to claim 6, wherein the loss function is set to
$$\mathcal{G}^{*}, \{E_j^{*}\} = \arg\min_{\mathcal{G},\,\{E_j\}} \sum_{j \in V}\left(\sum_{i=1}^{N}\bigl\lVert \mathcal{F}(\mathcal{G}, E_j) - I_i^{\,j}\bigr\rVert + \lambda\,\bigl\lVert E_j - E_j^{0}\bigr\rVert\right),$$
where $\mathcal{G}^{*}$ is the optimized three-dimensional Gaussian scene, $E_j^{*}$ denotes the optimized camera extrinsic matrix of the current view j, $\mathcal{G}$ is the initial three-dimensional Gaussian scene to be optimized during training, $E_j$ denotes the camera extrinsic matrix to be optimized at the current view j, $\mathcal{F}(\mathcal{G}, E_j)$ denotes rendering the three-dimensional Gaussian scene to be optimized into a 2D image under the pose parameters of view j, $I_i^{\,j}$ denotes the i-th real input image at view j, N denotes the number of input images at view j, λ is the coefficient of the regularization term, $E_j^{0}$ denotes the initial camera extrinsic matrix associated with view j, and V denotes the view classification of the input images.
8. The method according to claim 1, further comprising: for actually captured images, using the trained three-dimensional scene reconstruction model to obtain corresponding multi-view target images.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
CN202410997744.9A | 2024-07-24 | 2024-07-24 | Three-dimensional Gaussian splatting optimization method for pose-free input | Pending | CN119006678A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410997744.9A (CN119006678A, en) | 2024-07-24 | 2024-07-24 | Three-dimensional Gaussian splatting optimization method for pose-free input

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410997744.9A (CN119006678A, en) | 2024-07-24 | 2024-07-24 | Three-dimensional Gaussian splatting optimization method for pose-free input

Publications (1)

Publication Number | Publication Date
CN119006678A | 2024-11-22

Family

ID=93492881

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410997744.9A (Pending, CN119006678A, en) | Three-dimensional Gaussian splatting optimization method for pose-free input | 2024-07-24 | 2024-07-24

Country Status (1)

Country | Link
CN | CN119006678A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN119169208A (en)* | 2024-11-25 | 2024-12-20 | Hangzhou Dianzi University | Three-dimensional scene reconstruction method based on multimodal regularization and temperature smoothness constraint
CN119169208B (en)* | 2024-11-25 | 2025-02-25 | Hangzhou Dianzi University | Three-dimensional scene reconstruction method based on multimodal regularization and temperature smoothness constraint
CN119273813A (en)* | 2024-12-12 | 2025-01-07 | Nanjing University of Posts and Telecommunications | A new viewpoint image synthesis method based on camera pose light field coding


Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
