CN111428563B - Image identification method for automobile full-liquid crystal instrument - Google Patents

Image identification method for automobile full-liquid crystal instrument

Info

Publication number
CN111428563B
CN111428563B
Authority
CN
China
Prior art keywords
image
liquid crystal
crystal instrument
full liquid
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010114458.5A
Other languages
Chinese (zh)
Other versions
CN111428563A (en)
Inventor
刘卫平
关哲
胡博春
孟金
刘佳
王兆枫
王郁霖
郭玉峰
张希明
刘祥港
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Yugong Intelligent Technology Co ltd
Jilin University
Original Assignee
Jilin Yugong Intelligent Technology Co ltd
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Yugong Intelligent Technology Co ltd, Jilin University
Priority to CN202010114458.5A
Publication of CN111428563A
Application granted
Publication of CN111428563B
Legal status: Active (Current)
Anticipated expiration

Abstract

Translated from Chinese



The invention relates to the technical field of image recognition, and in particular to an image recognition method for an automobile full liquid crystal instrument. Images of the full liquid crystal instrument in its different functional states are collected and a library of standard feature models of full liquid crystal instrument images is created; a camera device acquires images of the instrument to be detected in its different display function states inside a dark box with no light source; the acquired images are enhanced by image smoothing and image sharpening; an improved SIFT algorithm performs feature point matching between the image to be detected and the standard images in the feature model library; and the recognition result is obtained according to a preset matching rate threshold. By improving the scale space, the generation of keypoint descriptors and the feature point matching of the original SIFT algorithm, the efficiency and real-time performance of recognition and matching are improved, and the method can be applied to automated testing of the display functions of automobile full liquid crystal instruments.


Description

Image identification method for automobile full-liquid crystal instrument
Technical Field
The invention relates to the technical field of image recognition, in particular to an image recognition method for an automobile full-liquid-crystal instrument.
Background
With the development of electronic information technology and communication technologies such as 5G, full liquid crystal instruments are being widely adopted in automobiles, and the image information they display is very rich. Detection of the display functions of a full liquid crystal instrument, however, still relies mainly on manual visual inspection, whose speed and efficiency are strongly affected by subjective human factors.
Disclosure of Invention
The invention aims to solve the technical problem of providing an image identification method for an automobile full-liquid crystal instrument. By improving the scale space of the original SIFT algorithm, the generation of the key point descriptors, the matching of the feature points and the like, the efficiency and the real-time performance of recognition and matching are improved, and the method can be applied to the automatic test of the display function of the automobile full-liquid crystal instrument.
The invention is realized as follows: an image identification method for an automobile full liquid crystal instrument comprises the following steps:
S1: collecting images of the full liquid crystal instrument in different functional states and creating a standard feature model library of full liquid crystal instrument images;
S2: using a camera device to acquire images of the full liquid crystal instrument to be detected in different display function states in a dark box with no light source;
S3: performing enhancement processing on the acquired full liquid crystal instrument images, including image smoothing and image sharpening;
S4: performing feature point matching between the full liquid crystal instrument image to be detected and the standard images in the feature model library using an improved SIFT algorithm;
S5: obtaining the recognition result according to the set matching rate threshold.
Wherein: the specific steps of feature point matching between the full liquid crystal instrument image to be detected and the standard images in the feature model library using the improved SIFT algorithm are as follows:
S41: simplifying the scale space by reducing the number of octaves and the number of layers per octave in the original algorithm from 5 to 4;
S42: detecting scale-space extreme points by comparing each sample point with all of its neighbours in space and scale;
S43: locating the extreme points and eliminating unstable, low-contrast extreme points;
S44: determining the principal direction of each keypoint: the located extreme points are taken as the feature points, the gradient magnitude and orientation of each feature point are computed, the pixel gradient orientations are accumulated in a histogram, and the histogram peak represents the principal direction of the keypoint;
S45: simplifying the keypoint descriptor from the 128-dimensional vector of the original SIFT algorithm to a 24-dimensional vector;
S46: matching the feature points using the RANSAC algorithm.
Further, in step S2 the camera device photographs the full liquid crystal instrument to be detected inside a dark box with no light source and the captured pictures are uploaded to an upper computer for processing; the camera is a CCD camera with a resolution of 2560 × 1920 pixels.
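For illustration only, a minimal acquisition routine is sketched below. It assumes the CCD camera is reachable from the upper computer through an OpenCV-compatible driver and that the requested 2560 × 1920 resolution is supported; the patent does not specify the camera interface.

```python
# Illustrative acquisition sketch (not part of the patent). Assumption: the CCD
# camera is exposed through an OpenCV-compatible driver on the upper computer.
import cv2

def capture_instrument_image(device_index=0, width=2560, height=1920):
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)      # request the 2560 x 1920 resolution
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    ok, frame = cap.read()                        # one shot of the instrument in the dark box
    cap.release()
    if not ok:
        raise RuntimeError("failed to grab a frame from the camera")
    return frame
```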
Further, step S3, the full liquid crystal instrument image enhancement processing, comprises the following specific steps:
S31: image smoothing: the image is smoothed with a median filter; the conversion formula is:
g(x,y)=median{f(x-k,y-l),(k,l)∈W}
wherein f(x, y) is the original image, g(x, y) is the filtered output image, W is the median filtering template, and k and l are the offsets within the template in the x and y directions, respectively;
S32: image sharpening: the image is sharpened with the Laplacian operator; the sharpening formula is:
g(x,y)=f(x,y)+c∇²f(x,y)
wherein f(x, y) is the original image, g(x, y) is the sharpened output image, and c is the Laplacian sharpening template.
Further, S43, locating the extreme points and eliminating unstable, low-contrast extreme points, comprises: fitting the scale space with the Gaussian difference function
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X
where X = (x, y, σ)ᵀ; differentiating and setting the derivative to zero gives the corresponding extreme point
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)
If |D(X̂)| is smaller than the contrast threshold (0.03 in the original SIFT algorithm), the extreme point is eliminated; X̂ represents the offset of the extreme point, and D(X) is the Gaussian difference function value.
Further, S45, the method of simplifying the keypoint descriptor, comprises the following steps:
changing the rectangular region of the original SIFT algorithm to a concentric circular region of diameter 8 centred on the keypoint, divided into concentric circles of radius 1, 2, 3 and 4;
computing the gradients within the four concentric circles in the 6 directions 0°, 60°, 120°, 180°, 240° and 300°, so that each concentric circle forms a 6-dimensional vector;
obtaining a simplified 4 × 6 = 24-dimensional keypoint descriptor.
Further, step S46 comprises:
(1) let the acquired set of keypoint descriptor samples be N and let n be the minimum number of samples required to determine the model parameters, ensuring that the number of samples in N is greater than n; randomly select from N a subset S containing r samples and compute an initial model W;
(2) for the remaining set N/S, take the samples whose error with respect to W is smaller than a threshold t; together with S they form the inlier set S*;
(3) if the number of samples in S* is not less than n, the correct model parameters are considered to have been generated; recalculate a new model W from the inlier set S*, select a new subset S, and iterate this process k times;
(4) after k iterations, when the maximum consensus set Y has been found, the inliers and outliers in the data set are judged according to Y.
Compared with the prior art, the invention has the beneficial effects that:
the method can reduce the matching time and improve the matching efficiency and the matching real-time performance of the SIFT algorithm. The invention is used for image recognition of the full liquid crystal instrument and can realize automatic testing of the full liquid crystal instrument in different functional states.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a flow chart of the improved SIFT algorithm of the present invention;
FIG. 3 is a schematic diagram of a simplified key point descriptor of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the invention provides an image recognition method for an automobile full liquid crystal instrument based on an improved SIFT algorithm, which specifically comprises the following steps:
Step 1: collect images of the full liquid crystal instrument in different functional states and create a standard feature model library of full liquid crystal instrument images.
Step 2: use a camera device to acquire images of the full liquid crystal instrument to be detected in different display function states in a dark box with no light source. The camera device photographs the instrument to be detected inside the dark box with no light source, the sampled pictures are uploaded to an upper computer for processing, a CCD camera is selected, and its resolution is 2560 × 1920 pixels.
Step 3: image enhancement processing of the full liquid crystal instrument image in the upper computer, which specifically comprises the following steps:
Step 31: image smoothing: the image is smoothed with a median filter, using the conversion formula:
g(x,y)=median{f(x-k,y-l),(k,l)∈W}
wherein f(x, y) is the original image, g(x, y) is the filtered output image, W is the median filtering template, and k and l are the offsets within the template in the x and y directions, respectively.
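For illustration, the median filtering formula above can be written out directly as the short sketch below; a 3 × 3 window W is assumed (the patent does not fix the template size), and cv2.medianBlur(f, 3) gives the same result much faster.

```python
# Direct illustration of g(x,y) = median{ f(x-k, y-l), (k,l) ∈ W } with an
# assumed 3x3 window; equivalent to cv2.medianBlur(f, 3) in practice.
import numpy as np

def median_filter(f: np.ndarray, radius: int = 1) -> np.ndarray:
    padded = np.pad(f.astype(np.float32), radius, mode="edge")
    g = np.empty_like(f, dtype=np.float32)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]  # template W at (x, y)
            g[y, x] = np.median(window)                                  # median of the window
    return g.astype(f.dtype)
```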
Step 32: image sharpening. The image is sharpened with the Laplacian operator, using the sharpening formula:
g(x,y)=f(x,y)+c∇²f(x,y)
wherein f(x, y) is the original image, g(x, y) is the sharpened output image, and c is the Laplacian sharpening template.
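A corresponding sketch of the sharpening step is given below. It assumes the standard 4-neighbour Laplacian with c = -1; the patent states only that c is a Laplacian sharpening template, so the exact mask is an assumption.

```python
# Illustrative Laplacian sharpening: g(x,y) = f(x,y) + c * Laplacian(f), with the
# 4-neighbour Laplacian and c = -1 assumed.
import cv2
import numpy as np

def laplacian_sharpen(f: np.ndarray, c: float = -1.0) -> np.ndarray:
    lap = cv2.Laplacian(f.astype(np.float64), cv2.CV_64F, ksize=1)  # ∇²f(x, y)
    g = f.astype(np.float64) + c * lap                              # g(x,y) = f(x,y) + c·∇²f(x,y)
    return np.clip(g, 0, 255).astype(np.uint8)                      # back to 8-bit range
```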
Step 4: feature point matching between the full liquid crystal instrument image to be detected and the standard images in the feature model library using the improved SIFT algorithm, which specifically comprises the following steps:
and 41, simplifying the scale space, and simplifying the 5-layer number and the 5-layer number of the scale space in the original algorithm into 4 layers.
And 42, detecting the extreme point of the scale space, and comparing the sampling point with all the points adjacent to the space to determine the extreme point.
Step 43: locate the extreme points and remove unstable, low-contrast extreme points. The scale space is fitted with the Gaussian difference function
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X
where X = (x, y, σ)ᵀ; differentiating and setting the derivative to zero gives the corresponding extreme point
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)
If |D(X̂)| is smaller than the contrast threshold (0.03 in the original SIFT algorithm), the extreme point is eliminated; X̂ represents the offset of the extreme point, and D(X) is the Gaussian difference function value.
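Steps 41 to 43 can be approximated by the simplified sketch below: a 4-octave, 4-layer DoG pyramid, a 26-neighbour extremum test and a direct low-contrast rejection. The base sigma of 1.6 and the 0.03 threshold are common SIFT defaults assumed here, and the Taylor-fit refinement described above is omitted, so this is only an approximation of the procedure.

```python
# Simplified sketch of steps 41-43 (illustrative, not the patent's implementation).
import cv2
import numpy as np

def build_dog_pyramid(gray, octaves=4, layers=4, sigma=1.6):
    """Difference-of-Gaussian pyramid with the simplified 4x4 scale space."""
    dog, base = [], gray.astype(np.float32) / 255.0
    for _ in range(octaves):
        gaussians = [cv2.GaussianBlur(base, (0, 0), sigma * 2 ** (i / layers))
                     for i in range(layers + 1)]
        dog.append([gaussians[i + 1] - gaussians[i] for i in range(layers)])
        base = cv2.resize(base, (base.shape[1] // 2, base.shape[0] // 2))
    return dog

def detect_extrema(dog, contrast_thresh=0.03):
    """Keep points that are extrema among their 26 space-scale neighbours."""
    keypoints = []
    for o, octave in enumerate(dog):
        for s in range(1, len(octave) - 1):
            below, cur, above = octave[s - 1], octave[s], octave[s + 1]
            for y in range(1, cur.shape[0] - 1):
                for x in range(1, cur.shape[1] - 1):
                    v = cur[y, x]
                    if abs(v) < contrast_thresh:              # reject low-contrast points
                        continue
                    cube = np.stack([below[y - 1:y + 2, x - 1:x + 2],
                                     cur[y - 1:y + 2, x - 1:x + 2],
                                     above[y - 1:y + 2, x - 1:x + 2]])
                    if v >= cube.max() or v <= cube.min():    # 26-neighbour extremum test
                        keypoints.append((o, s, x, y))
    return keypoints
```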
Step 44: determine the principal direction of each keypoint; the located extreme points are the feature points. After the gradient magnitude and orientation of each feature point are computed, the pixel gradient orientations are accumulated in a histogram, and the histogram peak represents the principal direction of the keypoint.
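The orientation assignment of step 44 can be sketched as follows; the radius-8 neighbourhood and the 36-bin histogram are assumptions not fixed by the patent, and the histogram peak is returned as the principal direction.

```python
# Illustrative principal-direction assignment for step 44.
import numpy as np

def principal_orientation(gaussian_img, x, y, radius=8, bins=36):
    h, w = gaussian_img.shape
    hist = np.zeros(bins)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = x + dx, y + dy
            if 1 <= px < w - 1 and 1 <= py < h - 1:
                gx = float(gaussian_img[py, px + 1]) - float(gaussian_img[py, px - 1])
                gy = float(gaussian_img[py + 1, px]) - float(gaussian_img[py - 1, px])
                mag = np.hypot(gx, gy)                          # gradient magnitude
                ang = np.degrees(np.arctan2(gy, gx)) % 360.0    # gradient direction
                hist[int(ang // (360.0 / bins)) % bins] += mag  # accumulate by direction
    return np.argmax(hist) * (360.0 / bins)                     # histogram peak = main direction
```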
Step 45: simplify the keypoint descriptor. The keypoint descriptors of the original algorithm are reduced from 128-dimensional vectors to 24-dimensional vectors. Referring to fig. 3, the simplified method is as follows (a code sketch follows these steps):
(1) The rectangular region of the original SIFT algorithm is changed to a concentric circular region of diameter 8 centred on the keypoint, divided into concentric circles of radius 1, 2, 3 and 4.
(2) The gradients within the four concentric circles are computed in the 6 directions 0°, 60°, 120°, 180°, 240° and 300°, so that each concentric circle forms a 6-dimensional vector.
(3) A simplified 4 × 6 = 24-dimensional keypoint descriptor is obtained.
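The sketch announced above illustrates the simplified 24-dimensional descriptor: gradient magnitudes inside the four concentric circles are accumulated into 6 orientation bins measured relative to the principal direction. Assigning each pixel to the smallest enclosing circle and the final L2 normalisation are implementation assumptions.

```python
# Illustrative 24-D descriptor for step 45 (4 concentric circles x 6 directions).
import numpy as np

def simplified_descriptor(gaussian_img, x, y, main_dir_deg):
    h, w = gaussian_img.shape
    desc = np.zeros((4, 6))                                  # 4 circles x 6 directions
    for dy in range(-4, 5):
        for dx in range(-4, 5):
            r = np.hypot(dx, dy)
            if r == 0 or r > 4.0:                            # keep pixels inside radius 4
                continue
            ring = int(np.ceil(r)) - 1                       # concentric circle index 0..3
            px, py = x + dx, y + dy
            if 1 <= px < w - 1 and 1 <= py < h - 1:
                gx = float(gaussian_img[py, px + 1]) - float(gaussian_img[py, px - 1])
                gy = float(gaussian_img[py + 1, px]) - float(gaussian_img[py - 1, px])
                mag = np.hypot(gx, gy)
                ang = (np.degrees(np.arctan2(gy, gx)) - main_dir_deg) % 360.0
                desc[ring, int(ang // 60.0) % 6] += mag      # 0°, 60°, ..., 300° bins
    v = desc.ravel()                                         # 4 x 6 = 24-dimensional vector
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```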
Step 46: feature point matching. The RANSAC algorithm is adopted for feature point matching, replacing the Euclidean distance method of the original algorithm; its steps are as follows (an illustrative sketch follows step (4)):
(1) Let the acquired set of keypoint descriptor samples be N and let n be the minimum number of samples required to determine the model parameters, ensuring that the number of samples in N is greater than n; randomly select from N a subset S containing r samples and compute an initial model W.
(2) For the remaining set N/S, take the samples whose error with respect to W is smaller than a threshold t; together with S they form the inlier set S*.
(3) If the number of samples in S* is not less than n, the correct model parameters are considered to have been generated; recalculate a new model W from the inlier set S*, select a new subset S, and iterate this process k times.
(4) After k iterations, when the maximum consensus set Y has been found, the inliers and outliers in the data set are judged according to Y.
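The sketch below mirrors steps (1) to (4) with a hand-written RANSAC loop that fits an affine model between putative point correspondences. The affine model, r = 3 samples per draw, t = 3 pixels and k = 1000 iterations are assumptions made for illustration; the patent does not fix the model or these constants.

```python
# Hand-written RANSAC sketch following steps (1)-(4); model and constants assumed.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine model W mapping src points to dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])              # rows [x, y, 1]
    W, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return W.T                                                # shape (2, 3)

def ransac_affine(src, dst, r=3, t=3.0, k=1000):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    num = len(src)                                            # (1) sample set N
    best_inliers, best_W = np.array([], dtype=int), None
    rng = np.random.default_rng(0)
    ones = np.ones((num, 1))
    for _ in range(k):                                        # k iterations
        S = rng.choice(num, size=r, replace=False)            # random subset S
        W = fit_affine(src[S], dst[S])                        # initial model W
        err = np.linalg.norm(np.hstack([src, ones]) @ W.T - dst, axis=1)
        S_star = np.flatnonzero(err < t)                      # (2) inlier set S*
        if len(S_star) > len(best_inliers):                   # (3) keep the larger consensus
            best_inliers = S_star
            best_W = fit_affine(src[S_star], dst[S_star])     # refit the model on S*
    return best_W, best_inliers                               # (4) maximum consensus set Y
```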
Step 5: the recognition result is obtained according to the matching rate threshold, which is generally set to 90%.
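To tie steps 1 to 5 together, the end-to-end sketch below runs the whole flow on one image pair. It is illustrative only: OpenCV's stock SIFT (cv2.SIFT_create, opencv-python 4.4 or later) stands in for the improved algorithm sketched piecewise above, sharpening is omitted for brevity, and the file paths, 3 × 3 median window and 90% threshold are assumptions. The matching rate is computed here as the ratio of RANSAC inliers to all putative matches, which the patent does not define explicitly.

```python
# Illustrative end-to-end pipeline for steps 1-5; stock SIFT stands in for the
# improved algorithm, and paths, window size and threshold are assumptions.
import cv2
import numpy as np

def recognize_state(test_path, standard_path, match_rate_threshold=0.90):
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)        # step 2: captured image
    std = cv2.imread(standard_path, cv2.IMREAD_GRAYSCALE)     # step 1: standard image
    if test is None or std is None:
        raise FileNotFoundError("could not read the input images")
    test, std = cv2.medianBlur(test, 3), cv2.medianBlur(std, 3)   # step 3 (sharpening omitted)
    sift = cv2.SIFT_create()                                   # step 4: stand-in detector/descriptor
    kp1, des1 = sift.detectAndCompute(test, None)
    kp2, des2 = sift.detectAndCompute(std, None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    if len(matches) < 4:
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)    # RANSAC rejects outliers
    if mask is None:
        return False
    match_rate = float(mask.sum()) / len(matches)              # step 5: matching rate
    return match_rate >= match_rate_threshold
```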
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (4)

1. An image identification method for an automobile full liquid crystal instrument, characterized by comprising the following steps:
S1: collecting images of the full liquid crystal instrument in different functional states and creating a standard feature model library of full liquid crystal instrument images;
S2: using a camera device to acquire images of the full liquid crystal instrument to be detected in different display function states in a dark box with no light source;
S3: performing enhancement processing on the acquired full liquid crystal instrument images, including image smoothing and image sharpening;
S4: performing feature point matching between the full liquid crystal instrument image to be detected and the standard images in the feature model library using an improved SIFT algorithm;
S5: obtaining the recognition result according to the set matching rate threshold;
wherein: the specific steps of feature point matching between the full liquid crystal instrument image to be detected and the standard images in the feature model library using the improved SIFT algorithm are as follows:
S41: simplifying the scale space by reducing the number of octaves and the number of layers per octave in the original algorithm from 5 to 4;
S42: detecting scale-space extreme points by comparing each sample point with all of its neighbours in space and scale;
S43: locating the extreme points and eliminating unstable, low-contrast extreme points;
S44: determining the principal direction of each keypoint: the located extreme points are taken as the feature points, the gradient magnitude and orientation of each feature point are computed, the pixel gradient orientations are accumulated in a histogram, and the histogram peak represents the principal direction of the keypoint;
S45: simplifying the keypoint descriptor from the 128-dimensional vector of the original SIFT algorithm to a 24-dimensional vector;
S46: matching the feature points using the RANSAC algorithm;
wherein the method of simplifying the keypoint descriptor in S45 comprises the following steps:
changing the rectangular region of the original SIFT algorithm to a concentric circular region of diameter 8 centred on the keypoint, divided into concentric circles of radius 1, 2, 3 and 4;
computing the gradients within the four concentric circles in the 6 directions 0°, 60°, 120°, 180°, 240° and 300°, so that each concentric circle forms a 6-dimensional vector;
obtaining a simplified 4 × 6 = 24-dimensional keypoint descriptor;
step S46 includes:
(1) let the acquired set of keypoint descriptor samples be N and let n be the minimum number of samples required to determine the model parameters, ensuring that the number of samples in N is greater than n; randomly select from N a subset S containing r samples and compute an initial model W;
(2) for the remaining set N/S, take the samples whose error with respect to W is smaller than a threshold t; together with S they form the inlier set S*;
(3) if the number of samples in S* is not less than n, the correct model parameters are considered to have been generated; recalculate a new model W from the inlier set S*, select a new subset S, and iterate this process k times;
(4) after k iterations, when the maximum consensus set Y has been found, the inliers and outliers in the data set are judged according to Y.
2. The method of claim 1, wherein in step S2 the camera device photographs the full liquid crystal instrument to be detected inside a dark box with no light source and the sampled pictures are uploaded to an upper computer for processing, the camera being a CCD camera with a resolution of 2560 × 1920 pixels.
3. The method as claimed in claim 1, wherein the full liquid crystal instrument image enhancement processing of step S3 comprises the following specific steps:
S31: image smoothing: the image is smoothed with a median filter; the conversion formula is:
g(x,y)=median{f(x-k,y-l),(k,l)∈W}
wherein f(x, y) is the original image, g(x, y) is the filtered output image, W is the median filtering template, and k and l are the offsets within the template in the x and y directions, respectively;
S32: image sharpening: the image is sharpened with the Laplacian operator; the sharpening formula is:
g(x,y)=f(x,y)+c∇²f(x,y)
wherein f(x, y) is the original image, g(x, y) is the sharpened output image, and c is the Laplacian sharpening template.
4. The method of claim 1, wherein S43, locating the extreme points and eliminating unstable, low-contrast extreme points, comprises: fitting the scale space with the Gaussian difference function
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X
where X = (x, y, σ)ᵀ; differentiating and setting the derivative to zero gives the corresponding extreme point
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)
If |D(X̂)| is smaller than the contrast threshold (0.03 in the original SIFT algorithm), the extreme point is eliminated; X̂ represents the offset of the extreme point, and D(X) is the Gaussian difference function value.
CN202010114458.5A | 2020-02-25 | 2020-02-25 | Image identification method for automobile full-liquid crystal instrument | Active | CN111428563B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010114458.5A | CN111428563B (en) | 2020-02-25 | 2020-02-25 | Image identification method for automobile full-liquid crystal instrument

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010114458.5A | CN111428563B (en) | 2020-02-25 | 2020-02-25 | Image identification method for automobile full-liquid crystal instrument

Publications (2)

Publication Number | Publication Date
CN111428563A (en) | 2020-07-17
CN111428563B (en) | 2022-04-01

Family

ID=71547226

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010114458.5A | Active | CN111428563B (en) | 2020-02-25 | 2020-02-25 | Image identification method for automobile full-liquid crystal instrument

Country Status (1)

Country | Link
CN (1) | CN111428563B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113110909A (en) * | 2021-04-20 | 2021-07-13 | 肇庆小鹏汽车有限公司 | Vehicle instrument testing method and device
CN114323303B (en) * | 2021-12-31 | 2023-08-29 | 深圳技术大学 | Body temperature measuring method, device, infrared thermometer and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2012222674A (en) * | 2011-04-12 | Sony Corp | Image processing apparatus, image processing method, and program
CN106529424B (en) * | 2016-10-20 | 2019-01-04 | 中山大学 | A kind of logo detection recognition method and system based on selective search algorithm

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102722731A (en) * | 2012-05-28 | 2012-10-10 | 南京航空航天大学 | Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
CN103927507A (en) * | 2013-01-12 | 2014-07-16 | 山东鲁能智能技术有限公司 | Improved multi-instrument reading identification method of transformer station inspection robot
CN105426929A (en) * | 2014-09-19 | 2016-03-23 | 佳能株式会社 | Object shape alignment device, object processing device and methods thereof
CN109063717A (en) * | 2018-07-30 | 2018-12-21 | 安徽慧视金瞳科技有限公司 | A kind of acquisition instrument center point method
CN109542041A (en) * | 2019-01-07 | 2019-03-29 | 吉林大学 | A kind of automation equipment based on the detection of automobile double screen
CN110044405A (en) * | 2019-05-16 | 2019-07-23 | 吉林大学 | A kind of automobile instrument automatic detection device and method based on machine vision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Vision based feature diagnosis for automobile instrument cluster using machine learning";M.Deepan Raj 等;《2017 Fourth International Conference on Signal Processing,Communication and Networking (ICSCN)》;20171030;第1-3页*
"图像识别技术在汽车仪表板检测中的应用与研究";张娇;《中国优秀硕士学位论文全文数据库(信息科技辑)》;20151215(第2015年第12期);第I138-699页正文第14-36页*
"基于SIFT的图像匹配方法的研究与改进";李宁;《万方数据(学位)》;20141013;第7-26页*
"基于快速SIFT特征提取的模板匹配算法";李忠海 等;《计算机工程》;20111220;第37卷(第24期);第223-224页*
"基于机器视觉的圆形指针式仪表自动读数识别关键技术研究";韩绍超;《中国优秀硕士学位论文全文数据库(信息科技辑)》;20190215(第2019年第02期);第I138-1295页正文第10-35页*

Also Published As

Publication number | Publication date
CN111428563A (en) | 2020-07-17


Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right

Effective date of registration:20201023

Address after:130000 Changchun Qianjin Street, Jilin, No. 2699

Applicant after:Jilin University

Applicant after:Jilin Yugong Intelligent Technology Co.,Ltd.

Address before: 130012 Chaoyang District Qianjin Street, Jilin, China, No. 2699

Applicant before:Jilin University

GR01 | Patent grant
CB03 | Change of inventor or designer information

Inventor after:Zhang Ximing

Inventor after:Liu Xianggang

Inventor after:Liu Weiping

Inventor after:Hu Bochun

Inventor after:Meng Jin

Inventor after:Liu Jia

Inventor after:Wang Zhaofeng

Inventor after:Wang Yulin

Inventor after:Guo Yufeng

Inventor after:Guan Zhe

Inventor before:Liu Weiping

Inventor before:Liu Xianggang

Inventor before:Guan Zhe

Inventor before:Hu Bochun

Inventor before:Meng Jin

Inventor before:Liu Jia

Inventor before:Wang Zhaofeng

Inventor before:Wang Yulin

Inventor before:Guo Yufeng

Inventor before:Zhang Ximing

