CN112307917A - Indoor positioning method integrating visual odometer and IMU - Google Patents

Indoor positioning method integrating visual odometer and IMU

Info

Publication number
CN112307917A
CN112307917A
Authority
CN
China
Prior art keywords
scene
pose
indoor
scene image
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011131968.XA
Other languages
Chinese (zh)
Inventor
邵宇鹰
李新利
王孝伟
刘文杰
苏填
王一帆
彭鹏
陈怡君
陆启宇
张琪祁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
State Grid Shanghai Electric Power Co Ltd
Original Assignee
North China Electric Power University
State Grid Shanghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University and State Grid Shanghai Electric Power Co Ltd
Priority to CN202011131968.XA
Publication of CN112307917A
Legal status: Pending

Abstract

The invention discloses an indoor positioning method that fuses visual odometry and an IMU, comprising the following steps. Step 1: perform target analysis on an indoor scene to obtain scene images. Step 2: extract key frames from the scene images. Step 3: perform feature point matching on two consecutive key frames to obtain pose constraint information. Step 4: based on a factor graph optimization algorithm and the pose constraint information, perform global pose optimization on the scene images to obtain optimized poses. Step 5: using the pose constraint information and the optimized poses, optimize the camera pose in real time to obtain the scene trajectory and a global map, completing indoor positioning. The invention addresses the poor real-time performance and robustness of traditional robot positioning: a stereo matching method that combines a binocular camera with the structural features of indoor scenes improves stereo matching accuracy, and its combination with factor-graph-based back-end global optimization improves the real-time performance and robustness of robot positioning.

Description

Indoor positioning method integrating visual odometer and IMU
Technical Field
The invention relates to the technical field of robot positioning, in particular to an indoor positioning method integrating a visual odometer and an IMU.
Background
With the rapid development of technologies such as sensors and artificial intelligence, research on robots has attracted increasing attention. A robot acquires external environment information and its own state information through sensors, and uses this information to move autonomously and complete certain operation tasks.
Autonomous positioning is the basis of research on intelligent robot navigation and environment exploration, and since a single sensor can hardly acquire all the information the system requires, the fusion of information from multiple sensors is the key to autonomous robot positioning.
At present, the positioning accuracy and stability achievable with one or two sensors are difficult to meet requirements; vision- and odometry-based methods are relatively mature, but indoor motion and illumination conditions strongly affect the stability and accuracy of these sensors.
An Inertial Measurement Unit (IMU) can therefore be used to obtain the instantaneous displacement increments of the robot, from which its trajectory can be calculated to assist positioning.
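As a rough, non-authoritative illustration of how an IMU yields such displacement increments between camera frames, the following C++ sketch integrates gyroscope and accelerometer samples with a simple Euler scheme (the Eigen-based data layout, gravity constant, and integration scheme are assumptions for illustration only and are not taken from the invention):
#include <vector>
#include <Eigen/Dense>

struct ImuSample { double dt; Eigen::Vector3d gyro; Eigen::Vector3d accel; };

// Dead-reckoning between two image frames: the gyroscope rate is integrated into
// orientation, and the gravity-compensated acceleration into velocity and position.
void integrateImu(const std::vector<ImuSample>& samples,
                  Eigen::Quaterniond& q, Eigen::Vector3d& v, Eigen::Vector3d& p)
{
    const Eigen::Vector3d g(0.0, 0.0, -9.81);          // assumed gravity in the world frame
    for (const ImuSample& s : samples) {
        // Orientation update from the angular rate (small-angle quaternion increment)
        Eigen::Vector3d half = 0.5 * s.gyro * s.dt;
        Eigen::Quaterniond dq(1.0, half.x(), half.y(), half.z());
        q = (q * dq).normalized();
        // Rotate the measured acceleration into the world frame and remove gravity
        Eigen::Vector3d aWorld = q * s.accel + g;
        // Position and velocity increments (first-order Euler integration)
        p += v * s.dt + 0.5 * aWorld * s.dt * s.dt;
        v += aWorld * s.dt;
    }
}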
Disclosure of Invention
The invention aims to provide an indoor positioning method integrating a visual odometer and an IMU, in order to solve the poor real-time performance and robustness of traditional robot positioning. Stereo matching accuracy is improved by using a binocular camera together with scene structural features and a stereo matching method based on indoor scene structural features, and this is combined with factor-graph-based back-end global optimization to improve the real-time performance and robustness of robot positioning.
In order to achieve the above object, the present invention provides an indoor positioning method integrating a visual odometer and an Inertial Measurement Unit (IMU), comprising the following steps:
step 1: performing target analysis on an indoor scene with a camera to obtain scene images;
step 2: extracting key frames from the scene images to obtain the key frames of the scene images;
step 3: based on a random sample consensus algorithm, performing feature point matching on the key frames of two consecutive scene images taken by the camera at different poses, to obtain pose constraint information of the scene images;
step 4: based on a factor graph optimization algorithm, setting the initial values of the edges between pose nodes in the factor graph according to the pose constraint information obtained by feature matching of two consecutive scene images, and performing global pose optimization on the scene images to obtain optimized poses;
step 5: optimizing the pose of the camera in real time according to the pose constraint information and the optimized poses, to obtain the scene trajectory and a global map of the indoor scene and complete indoor positioning.
Most preferably, the key frame extraction comprises the following steps:
step 2.1: based on the combination of line segment features and binary line descriptors, extracting the line structure relationships of the scene image to obtain the scene space structure of the scene image;
step 2.2: based on an ORB feature point extraction algorithm, extracting feature points from the scene image to obtain the feature point matrix of the scene image;
step 2.3: combining the scene space structure of the scene image with the feature point matrix to obtain the key frames of the scene image.
Most preferably, the feature point extraction comprises the following steps:
step 2.2.1: constructing a multi-layer Gaussian pyramid from the scene image;
step 2.2.2: based on a FAST algorithm, calculating the feature point positions on each level of the Gaussian pyramid;
step 2.2.3: dividing each level of the Gaussian pyramid into several regions according to the feature point positions on that level;
step 2.2.4: extracting, in each region of each pyramid level, the interest point with the largest response value, and computing descriptors to obtain the feature point matrix of the scene image.
Most preferably, the camera is a binocular camera.
Most preferably, the indoor scene image is either an indoor ceiling texture image or a floor texture image.
By applying this method, the poor real-time performance and robustness of traditional robot positioning are overcome: stereo matching accuracy is improved by a stereo matching method that combines a binocular camera with indoor scene structural features, and the real-time performance and robustness of robot positioning are improved by combining this with factor-graph-based back-end global optimization.
Compared with the prior art, the invention has the following beneficial effects:
1. In the indoor positioning method fusing the visual odometer and the IMU provided by the invention, stereo matching accuracy and mapping quality are improved by using a binocular camera together with scene structural features and a stereo matching method based on indoor scene structural features, and a visual SLAM system is built in combination with factor-graph-based back-end global optimization to improve the real-time performance and robustness of robot positioning.
2. In the indoor positioning method fusing the visual odometer and the IMU provided by the invention, the target scene is analyzed, accurate information constraints for pose estimation are derived from the inherent characteristics of the indoor scene, and the poses are optimized with a factor graph algorithm.
3. In the indoor positioning method fusing the visual odometer and the IMU provided by the invention, the visual odometer at the front end estimates the camera motion between adjacent images and a local map, while the back end receives, through a factor graph, the camera poses measured by the visual odometer at different times and optimizes them to obtain a globally consistent trajectory and map.
Drawings
Fig. 1 is a flowchart of an indoor positioning method according to the present invention.
Detailed Description
The invention will be further described by the following specific examples in conjunction with the drawings, which are provided for illustration only and are not intended to limit the scope of the invention.
The invention provides an indoor positioning method integrating a visual odometer and an IMU (Inertial Measurement Unit), which, as shown in Fig. 1, comprises the following steps.
Step 1: a binocular camera is used to perform target analysis on the indoor scene of a transformer substation, and scene images of the substation indoor scene are obtained.
In the embodiment, the binocular camera model is MYNT S1030-IR-120, and the indoor scene of the transformer substation includes indoor ceiling texture images, floor texture images, and the like.
Step 2: key frames are extracted from the scene images of the substation indoor scene to obtain the key frames of those scene images.
the key frame extraction method comprises the following steps:
step 2.1: and based on the combination of the line segment characteristics and the binary line descriptors, extracting the line structure relationship of the scene image of the indoor scene of the transformer substation, and acquiring the scene space structure of the scene image of the indoor scene of the transformer substation.
Step 2.2: based on an ORB (organized FAST and rotaed BRIEF) feature point extraction algorithm, feature point extraction is carried out on the scene image, and a feature point matrix of the scene image of the indoor scene of the transformer substation is obtained.
The feature point extraction method comprises the following steps:
Step 2.2.1: a multi-layer Gaussian pyramid of the scene image is constructed to achieve scale invariance, and the feature orientation is calibrated with the grey-scale centroid to achieve rotation invariance.
In this embodiment, the C language program corresponding to the construction of the multi-layer Gaussian pyramid of the scene image is, in outline, as follows:
Input: InputArray image, vector of feature points, OutputArray descriptors
Apply Gaussian blur to the input image.
Scale factor between pyramid levels = 1.2; number of pyramid levels nLevels = 8.
for (int level = 0; level < nLevels; ++level)
{
    if (level != 0)
        downsample the image of the previous level by the scale factor;
    add borders (padding) to the image of the current level;
}
Step 2.2.2: based on a FAST algorithm, calculating the feature point position of the Gaussian pyramid of each layer of the scene image according to the Gaussian pyramids of the layers of the scene image.
In this embodiment, the C language program for calculating the feature point positions is, in outline, as follows:
Default FAST feature point threshold iniThFAST = 20.
for (int level = 0; level < nLevels; ++level)
    detect FAST corners on the image of the current level using threshold iniThFAST;
Step 2.2.3: according to the feature point positions on each level of the Gaussian pyramid, the image of each level is divided into several regions;
Step 2.2.4: in each region of each level, the interest point with the largest response value is extracted and its descriptor is computed, so as to obtain the feature point matrix of the scene image.
In this embodiment, the C language program corresponding to the descriptor calculation is, in outline, as follows:
for (int id = 0; id < n; ++id)
    compute the rotated BRIEF descriptor of the id-th retained feature point;
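Steps 2.2.1 to 2.2.4 together form a standard ORB extraction pipeline. As a hedged, runnable illustration only (not the patent's own program), the parameters quoted in this embodiment (scale factor 1.2, 8 pyramid levels, FAST threshold 20) can be passed to OpenCV's cv::ORB; note that the per-region selection of the strongest response in steps 2.2.3 and 2.2.4 is only approximated here by cv::ORB's internal keypoint retention, and the image file name and feature count are assumptions:
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical input image of the substation indoor scene
    cv::Mat img = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // ORB extractor configured with the parameters given in this embodiment
    cv::Ptr<cv::ORB> orb = cv::ORB::create(
        1000,                  // nfeatures (assumed value, not stated in the patent)
        1.2f,                  // scaleFactor between pyramid levels
        8,                     // nlevels of the image pyramid
        31, 0, 2,              // edgeThreshold, firstLevel, WTA_K (OpenCV defaults)
        cv::ORB::HARRIS_SCORE, // response used to rank keypoints
        31,                    // patchSize
        20);                   // FAST threshold iniThFAST

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;       // plays the role of the "feature point matrix"
    orb->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
    return 0;
}
For the line structure extraction of step 2.1, which step 2.3 below combines with this feature point matrix, one possible implementation (an assumption, not specified by the patent) uses the LSD detector and binary line descriptors from the OpenCV line_descriptor contrib module:
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/line_descriptor.hpp>   // opencv_contrib module, assumed available

// Detect line segments and compute binary line descriptors for one scene image.
void extractLineStructure(const cv::Mat& gray,
                          std::vector<cv::line_descriptor::KeyLine>& keylines,
                          cv::Mat& lineDescriptors)
{
    using namespace cv::line_descriptor;
    // LSD-based line segment detection over a small scale-space pyramid
    cv::Ptr<LSDDetector> lsd = LSDDetector::createLSDDetector();
    lsd->detect(gray, keylines, /*scale=*/2, /*numOctaves=*/2);
    // Binary line descriptors (LBD) for matching line segments between frames
    cv::Ptr<BinaryDescriptor> lbd = BinaryDescriptor::createBinaryDescriptor();
    lbd->compute(gray, keylines, lineDescriptors);
}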
Step 2.3: the scene spatial structure of the scene image of the substation indoor scene is combined with the feature point matrix of the scene image to obtain the key frames of the scene image.
Step 3: based on Random Sample Consensus (RANSAC), feature point matching is performed on the key frames of two consecutive scene images taken by the camera at different poses, so that the two scene images captured at consecutive times are associated with each other and the pose constraint information of the scene images is obtained.
The matching quality of the feature points directly affects the accuracy and real-time performance of feature point tracking, and therefore strongly affects the accuracy and efficiency of the motion estimation result.
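The patent does not spell out the implementation of this step. One common way to realize RANSAC-based matching and to turn the inlier correspondences into a relative-pose constraint between two key frames is sketched below with OpenCV; the brute-force Hamming matcher, the essential-matrix model, and the intrinsic matrix K are assumptions for illustration, not the patent's prescribed procedure:
#include <vector>
#include <opencv2/opencv.hpp>

// Match ORB descriptors of two consecutive key frames and estimate the relative
// pose (R, t) with RANSAC; (R, t) then serves as the pose constraint (edge)
// between the two corresponding pose nodes of the factor graph.
void relativePoseFromKeyframes(const std::vector<cv::KeyPoint>& kp1, const cv::Mat& desc1,
                               const std::vector<cv::KeyPoint>& kp2, const cv::Mat& desc2,
                               const cv::Mat& K, cv::Mat& R, cv::Mat& t)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // RANSAC rejects outlier correspondences while fitting the essential matrix.
    cv::Mat inlierMask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
}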
Step 4: based on the factor graph optimization algorithm, a factor graph containing only the trajectory (pose nodes) is constructed; the initial values of the edges between pose nodes are set according to the pose constraint information obtained by feature matching between the key frames of two consecutive scene images, and global pose optimization is performed on the scene images to obtain the optimized poses of the scene images.
Here, global pose optimization means the following: motion edges (Motion Arcs) and measurement edges (Measurement Arcs) are obtained from the camera poses and map features, where a measurement edge connects a pose to the feature points observed from that pose; each edge corresponds to a nonlinear pose constraint, the pose constraint information represents the negative log-likelihood of the measurement and motion models, and the objective function is the set of these pose constraints. At the back end of the factor graph optimization, this series of constraints is linearized to obtain an information matrix and an information vector, and the map posterior is obtained by adjusting the value of each variable so as to maximize the product of the factors.
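The patent names no particular solver for this back-end step. Purely as an illustrative sketch, a trajectory-only factor graph of the kind described here could be assembled and optimized with the GTSAM library as follows; the noise sigmas, key numbering, and the choice of Levenberg-Marquardt are assumptions:
#include <vector>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

// Build a pose-only factor graph: one node per key-frame pose, one between-factor
// per relative-pose constraint from the visual odometry front end.
// Assumes relativeConstraints.size() == odomPoses.size() - 1.
gtsam::Values optimizeTrajectory(const std::vector<gtsam::Pose3>& odomPoses,
                                 const std::vector<gtsam::Pose3>& relativeConstraints)
{
    gtsam::NonlinearFactorGraph graph;
    auto noise = gtsam::noiseModel::Diagonal::Sigmas(
        (gtsam::Vector(6) << 0.05, 0.05, 0.05, 0.1, 0.1, 0.1).finished());

    // Anchor the first pose so the trajectory is not free-floating.
    graph.add(gtsam::PriorFactor<gtsam::Pose3>(0, odomPoses[0], noise));

    // Edges between consecutive pose nodes, initialized from the front-end constraints.
    for (size_t i = 0; i + 1 < odomPoses.size(); ++i)
        graph.add(gtsam::BetweenFactor<gtsam::Pose3>(i, i + 1, relativeConstraints[i], noise));

    gtsam::Values initial;
    for (size_t i = 0; i < odomPoses.size(); ++i)
        initial.insert(i, odomPoses[i]);       // initial guesses from the visual odometer

    return gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
}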
Step 5: according to the camera motion between the key frames of consecutive scene images and the pose constraint information estimated by the front-end visual odometer, together with the optimized poses obtained by passing the poses measured by the visual odometer at different times through the back-end factor graph, the pose of the camera is optimized in real time, a globally consistent scene trajectory and global map are obtained, and indoor positioning is completed.
The working principle of the invention is as follows:
A camera performs target analysis on the indoor scene to obtain scene images; key frames are extracted from the scene images; based on a random sample consensus algorithm, feature point matching is performed on the key frames of two consecutive scene images taken by the camera at different poses to obtain the pose constraint information of the scene images; based on a factor graph optimization algorithm, the pose constraint information obtained by matching the features of two consecutive scene images is used to set the initial values of the edges between pose nodes in the factor graph, and global pose optimization is performed on the scene images to obtain the optimized poses; finally, the camera pose is optimized in real time according to the pose constraint information and the optimized poses, yielding the scene trajectory and a global map of the indoor scene and completing indoor positioning.
In conclusion, the indoor positioning method fusing a visual odometer and an IMU solves the poor real-time performance and robustness of traditional robot positioning; it improves stereo matching accuracy by combining a binocular camera with scene structural features in a stereo matching method based on indoor scene structural features, and improves the real-time performance and robustness of robot positioning by combining this with factor-graph-based back-end global optimization.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (5)

1. An indoor positioning method fusing a visual odometer and an IMU, characterized by comprising the following steps:
Step 1: using a camera to perform target analysis on an indoor scene and obtain scene images;
Step 2: performing key frame extraction on the scene images to obtain the key frames of the scene images;
Step 3: based on a random sample consensus algorithm, performing feature point matching on the key frames of two consecutive scene images taken by the camera at different poses, to obtain pose constraint information of the scene images;
Step 4: based on a factor graph optimization algorithm, setting, according to the pose constraint information, the initial values of the edges between pose nodes in the factor graph, and performing global pose optimization on the scene images to obtain optimized poses;
Step 5: optimizing the pose of the camera in real time according to the pose constraint information and the optimized poses, obtaining the scene trajectory and a global map of the indoor scene, and completing indoor positioning.
2. The indoor positioning method fusing a visual odometer and an IMU according to claim 1, characterized in that the key frame extraction comprises the following steps:
Step 2.1: based on the combination of line segment features and binary line descriptors, extracting the line structure relationships of the scene image to obtain the scene spatial structure of the scene image;
Step 2.2: based on the ORB feature point extraction algorithm, extracting feature points from the scene image to obtain the feature point matrix of the scene image;
Step 2.3: combining the scene spatial structure of the scene image with the feature point matrix to obtain the key frames of the scene image.
3. The indoor positioning method fusing a visual odometer and an IMU according to claim 2, characterized in that the feature point extraction comprises the following steps:
Step 2.2.1: constructing a multi-layer Gaussian pyramid from the scene image;
Step 2.2.2: based on the FAST algorithm, calculating the feature point positions on each level of the Gaussian pyramid;
Step 2.2.3: dividing each level of the Gaussian pyramid into several regions according to the feature point positions on that level;
Step 2.2.4: extracting, in each region of each pyramid level, the interest point with the largest response value, and computing descriptors to obtain the feature point matrix of the scene image.
4. The indoor positioning method fusing a visual odometer and an IMU according to claim 1, characterized in that the camera is a binocular camera.
5. The indoor positioning method fusing a visual odometer and an IMU according to claim 1, characterized in that the indoor scene is either an indoor ceiling texture image or a floor texture image.
CN202011131968.XA · 2020-10-21 (priority) · 2020-10-21 (filed) · Indoor positioning method integrating visual odometer and IMU · Pending · CN112307917A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202011131968.XA (CN112307917A, en) · 2020-10-21 · 2020-10-21 · Indoor positioning method integrating visual odometer and IMU

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202011131968.XA (CN112307917A, en) · 2020-10-21 · 2020-10-21 · Indoor positioning method integrating visual odometer and IMU

Publications (1)

Publication Number · Publication Date
CN112307917A (en) · 2021-02-02

Family

ID=74328605

Family Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202011131968.XA (Pending, CN112307917A, en) · 2020-10-21 · 2020-10-21 · Indoor positioning method integrating visual odometer and IMU

Country Status (1)

Country · Link
CN (1) · CN112307917A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN107590827A (en)* · 2017-09-15 · 2018-01-16 · 重庆邮电大学 · A kind of indoor mobile robot vision SLAM methods based on Kinect
CN109558879A (en)* · 2017-09-22 · 2019-04-02 · 华为技术有限公司 · A kind of vision SLAM method and apparatus based on dotted line feature
CN111489393A (en)* · 2019-01-28 · 2020-08-04 · 速感科技(北京)有限公司 · VSLAM method, controller and mobile device
CN110044354A (en)* · 2019-03-28 · 2019-07-23 · 东南大学 · A kind of binocular vision indoor positioning and build drawing method and device
CN110853100A (en)* · 2019-10-24 · 2020-02-28 · 东南大学 · A Structured Scene Vision SLAM Method Based on Improved Point and Line Features
CN111024066A (en)* · 2019-12-10 · 2020-04-17 · 中国航空无线电电子研究所 · Unmanned aerial vehicle vision-inertia fusion indoor positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Hongwei, YU Huiliang, et al.: "ORB特征四叉树均匀分布算法" (ORB feature quadtree uniform distribution algorithm), 自动化仪表, vol. 39, no. 5, pp. 218-219*

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN112991515A (en)* · 2021-02-26 · 2021-06-18 · 山东英信计算机技术有限公司 · Three-dimensional reconstruction method, device and related equipment
US12387428B2 · 2021-02-26 · 2025-08-12 · Shandong Yingxin Computer Technologies Co., Ltd. · Three-dimensional reconstruction method, system, and storage medium
CN114088104A (en)* · 2021-07-23 · 2022-02-25 · 武汉理工大学 · Map generation method under automatic driving scene
CN114088104B (en)* · 2021-07-23 · 2023-09-29 · 武汉理工大学 · Map generation method under automatic driving scene
CN114998385A (en)* · 2022-05-27 · 2022-09-02 · 中国计量大学 · Indoor space auxiliary positioning method based on visual features
CN114998385B (en)* · 2022-05-27 · 2025-08-12 · 中国计量大学 · Indoor space auxiliary positioning method based on visual features

Similar Documents

Publication · Title
CN111311666B (en) · Monocular vision odometer method integrating edge features and deep learning
CN112734765B (en) · Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN108242079B (en) · A VSLAM method based on multi-feature visual odometry and graph optimization model
Shi et al. · CalibRCNN: Calibrating camera and LiDAR by recurrent convolutional neural network and geometric constraints
Walch et al. · Image-based localization using LSTMs for structured feature correlation
CN106679648B (en) · Visual inertia combination SLAM method based on genetic algorithm
Aslan et al. · Visual-Inertial Image-Odometry Network (VIIONet): A Gaussian process regression-based deep architecture proposal for UAV pose estimation
CN111899280B (en) · Monocular Visual Odometry Method Using Deep Learning and Hybrid Pose Estimation
CN109341703B (en) · Visual SLAM algorithm adopting CNNs characteristic detection in full period
CN109186606B (en) · Robot composition and navigation method based on SLAM and image information
CN110298914B (en) · A Method of Establishing Characteristic Map of Fruit Tree Canopy in Orchard
CN110070615A (en) · A kind of panoramic vision SLAM method based on polyphaser collaboration
Yusefi et al. · LSTM and filter based comparison analysis for indoor global localization in UAVs
CN112307917A (en) · Indoor positioning method integrating visual odometer and IMU
CN106780484A (en) · Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
CN111860651B (en) · Monocular vision-based semi-dense map construction method for mobile robot
CN111126385A (en) · Deep learning intelligent identification method for deformable living body small target
CN104408760A (en) · Binocular-vision-based high-precision virtual assembling system algorithm
CN114036969B (en) · 3D human body action recognition algorithm under multi-view condition
Han et al. · Robust shape estimation for 3D deformable object manipulation
Ding et al. · Stereo vision SLAM-based 3D reconstruction on UAV development platforms
Fu et al. · CBAM-SLAM: A semantic SLAM based on attention module in dynamic environment
CN114581616B (en) · Visual inertia SLAM system based on multi-task feature extraction network
CN115482282A (en) · Dynamic SLAM method with multi-target tracking capability in autonomous driving scenarios
CN117576218B (en) · An adaptive visual inertial odometry output method

Legal Events

Code · Title / Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
RJ01 · Rejection of invention patent application after publication (application publication date: 2021-02-02)
