CN119295957B - A tea bud screening method based on machine vision - Google Patents

A tea bud screening method based on machine vision

Info

Publication number
CN119295957B
Authority
CN
China
Prior art keywords
bud
pixel
region
value
fluff
Prior art date
Legal status
Active
Application number
CN202411833058.4A
Other languages
Chinese (zh)
Other versions
CN119295957A (en)
Inventor
周彦君
王浩
马倩
杨巧玉
邓佳
吴建
林川尧
刘小谭
叶江红
李清
Current Assignee
Sichuan Agricultural Machinery Science Research Institute
Original Assignee
Sichuan Agricultural Machinery Science Research Institute
Priority date: 2024-12-13
Filing date: 2024-12-13
Publication date: 2025-04-04
Application filed by Sichuan Agricultural Machinery Science Research Institute
Priority to CN202411833058.4A
Publication of CN119295957A
Application granted
Publication of CN119295957B
Status: Active

Abstract

The invention discloses a tea bud screening method based on machine vision, belonging to the technical field of image processing. The method first screens suspected bud region pixel points from a tea image to construct a bud region significant filtering image; it then calculates a fluff coefficient value for each pixel point and accurately segments the fluff regions. A feature fusion unit deeply fuses the pixel values of the fluff regions with the fluff coefficient values, combines them with the bud morphology value feature, and finally achieves high-precision tea bud region identification through a classification model. Compared with traditional methods, this method optimizes the image preprocessing, feature extraction, and classification-recognition stages, effectively solving the technical problem of low tea bud screening precision and providing a new technical path for the intelligent screening of tea.

Description

Tea bud screening method based on machine vision
Technical Field
The invention relates to the technical field of image processing, in particular to a tea bud screening method based on machine vision.
Background
In the mechanization of tea production, tea picking machinery faces serious technical challenges. Existing tea picking equipment has obvious shortcomings in selective picking capability and has difficulty accurately identifying and distinguishing fresh tender tea buds, so the impurity mixing rate during picking is high and picking quality is highly unstable. The traditional reciprocating-cutting picking mode not only causes significant mechanical damage to tea trees, but also severely compromises picking efficiency and tea quality.
The existing tea bud screening methods applied in tea picking machines rely on image processing technology and generally comprise steps such as tea bud identification, segmentation, and positioning. The prior art has difficulty accurately identifying tea buds in complex natural environments; in particular, screening precision is low under large illumination changes and background interference.
Disclosure of Invention
Aiming at the above defects in the prior art, the machine-vision-based tea bud screening method provided by the invention solves the problem of low tea bud screening precision in the prior art.
In order to achieve the aim of the invention, the adopted technical scheme is a tea bud screening method based on machine vision, comprising the following steps:
S1, screening suspected bud region pixel points from a tea image to obtain a bud region significant filtering image;
S2, calculating a fluff coefficient value for each pixel point of the bud region significant filtering image;
S3, screening fluff pixel points according to the fluff coefficient values, partitioning the fluff pixel points, and then intercepting fluff regions from the bud region significant filtering image;
S4, performing feature fusion on the pixel values and fluff coefficient values of each fluff region with a feature fusion unit to obtain fusion features;
S5, calculating a bud morphology value for each fluff region, classifying the fusion features with a classification model, and obtaining the tea bud regions based on feature enhancement by the bud morphology value.
Further, S1 comprises the following sub-steps:
S11, calculating the average of the pixel values of all pixel points in the tea image;
S12, marking pixel points whose pixel values are greater than or equal to this average as suspected bud region pixel points and discarding the other pixel points, obtaining a bud region significant image;
S13, discarding isolated pixel points from the bud region significant image to obtain the bud region significant filtering image.
Further, S2 comprises the following sub-steps:
S21, converting the bud region significant filtering image into the HSI color space and extracting the I component;
S22, calculating the brightness contrast of each pixel point of the bud region significant filtering image from the I component;
S23, gray-scaling the bud region significant filtering image to obtain the gray value of each pixel point;
S24, calculating the texture complexity of each pixel point of the bud region significant filtering image from the gray values;
S25, adding the brightness contrast and texture complexity of each pixel point to obtain its fluff coefficient value.
Further, the brightness contrast in S22 is calculated as:
μi = λ·(Ii − Ic) for Ii > Ic; μi = 0 for Ii ≤ Ic
wherein μi is the brightness contrast of the i-th pixel point, Ii is the I component of the i-th pixel point of the bud region significant filtering image, Ic is the mean of the I component within the neighborhood range of the i-th pixel point, λ is a proportionality coefficient, and i indexes the pixel points of the bud region significant filtering image.
Further, the texture complexity in S24 is calculated as:
εi = (1/N)·Σj (Gi,j − Gc)² for Gi ≥ Gc; εi = 0 for Gi < Gc
wherein εi is the texture complexity of the i-th pixel point, Gi is its gray value, Gi,j is the gray value of the j-th pixel point in its neighborhood, Gc is the mean gray value within the neighborhood, i indexes the pixel points of the bud region significant filtering image, j indexes the pixel points of the neighborhood, and N is the size of the neighborhood.
Further, S3 comprises the following sub-steps:
S31, taking the pixel points of the bud region significant filtering image whose fluff coefficient value is greater than a fluff coefficient threshold as fluff pixel points;
S32, taking an unpartitioned fluff pixel point as an initial growth point;
S33, judging whether unpartitioned fluff pixel points exist within the neighborhood range of the initial growth point; if so, classifying those pixel points and the initial growth point into one candidate region and jumping to step S34; if not, classifying the initial growth point into a candidate region by itself and jumping to step S32;
S34, taking a fluff pixel point on the edge of the candidate region that has not yet served as a growth point as a new growth point;
S35, judging whether unpartitioned fluff pixel points exist within the neighborhood range of the new growth point; if so, classifying them into the candidate region and jumping to step S34; if not, jumping directly to step S34, until no edge fluff pixel point of the candidate region has an unpartitioned fluff pixel point within its neighborhood range;
S36, jumping to step S32 until every fluff pixel point of the bud region significant filtering image belongs to a candidate region;
S37, discarding candidate regions whose number of pixel points is less than a number threshold;
S38, fitting a minimum circumscribed rectangle to each remaining candidate region;
S39, cutting the fluff regions out of the bud region significant filtering image at the positions of the minimum circumscribed rectangles.
Further, the expression of the feature fusion unit in S4 is:
X = Conv1(A) ⊙ Conv2(B)
wherein X is the fusion feature, Conv1 and Conv2 are two convolution layers, A is the pixel values of the fluff region, B is the fluff coefficient values of the fluff region, and ⊙ denotes element-wise multiplication.
Further, the formula for calculating the bud morphology value in S5 is:
γ = Length / Width
where γ is the bud morphology value, Length is the length of the fluff region, and Width is the width of the fluff region.
Further, the classification model in S5 comprises a CNN network, a feature enhancement unit, and a fully connected layer;
the input of the CNN network receives the fusion features, and its output is connected to the first input of the feature enhancement unit;
the second input of the feature enhancement unit receives the bud morphology value, and its output is connected to the input of the fully connected layer;
the output of the fully connected layer serves as the output of the classification model.
Further, the expression of the feature enhancement unit is Z = γ·Y, wherein Z is the output of the feature enhancement unit, Y is the output of the CNN network, and γ is the bud morphology value.
In summary, the invention has the following beneficial effects:
1. The invention screens suspected bud region pixel points from the tea image and filters them, effectively enhancing the tea bud regions and reducing interference from other regions.
2. The invention calculates a fluff coefficient value for each pixel point of the bud region significant filtering image, reflecting the fluff characteristics of each pixel point and further characterizing the bud regions.
3. The invention screens fluff pixel points by the fluff coefficient value and partitions them, so that the region belonging to one tea bud is stripped from the bud region significant filtering image, further reducing interference from other regions and highlighting the bud region features.
4. The invention fuses the pixel values of the fluff region with the fluff coefficient values, strengthening the characteristics of tea buds, improving recognition of the tea bud shape, raising screening accuracy, and improving the robustness of the method under different illumination conditions and complex backgrounds.
5. The invention calculates a bud morphology value for each fluff region, classifies the fusion features with a classification model, and performs feature enhancement based on the bud morphology value, reducing interference from non-tea-bud images and improving screening precision.
Drawings
FIG. 1 is a flow chart of a tea bud screening method based on machine vision;
Fig. 2 is a schematic structural diagram of a feature fusion unit and a classification model.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, all inventions making use of the inventive concept fall within the protection scope of the invention as defined by the appended claims.
As shown in fig. 1, a tea bud screening method based on machine vision comprises the following steps:
S1, screening suspected bud region pixel points from a tea image to obtain a bud region significant filtering image;
S2, calculating a fluff coefficient value for each pixel point of the bud region significant filtering image;
S3, screening fluff pixel points according to the fluff coefficient values, partitioning the fluff pixel points, and then intercepting fluff regions from the bud region significant filtering image;
S4, performing feature fusion on the pixel values and fluff coefficient values of each fluff region with a feature fusion unit to obtain fusion features;
S5, calculating a bud morphology value for each fluff region, classifying the fusion features with a classification model, and obtaining the tea bud regions based on feature enhancement by the bud morphology value.
In the invention, the tea image is an RGB image.
In this embodiment, S1 comprises the following sub-steps:
S11, calculating the average of the pixel values of all pixel points in the tea image;
S12, marking pixel points whose pixel values are greater than or equal to this average as suspected bud region pixel points and discarding the other pixel points, obtaining a bud region significant image;
S13, discarding isolated pixel points from the bud region significant image to obtain the bud region significant filtering image.
By calculating the average of the pixel values, the regions of the tea image with obvious brightness and color characteristics can be distinguished preliminarily; this quickly filters out the background and irrelevant regions and focuses on the image regions that may contain tea buds. Compared with a traditional fixed-threshold method, mean-based screening adapts better to image characteristics under different illumination conditions. Marking the points whose pixel values are greater than or equal to the average as suspected bud region pixel points effectively captures the texture and color characteristics of tea buds and preliminarily filters out a large amount of background noise. By discarding isolated pixel points from the bud region significant image, the invention further improves the structural integrity of the image, helps eliminate random noise points and interference, and enhances the accuracy of subsequent image processing.
An isolated pixel point is one that has no other suspected bud region pixel points within its neighborhood range.
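As a concrete illustration of S11 to S13, the following is a minimal Python/NumPy sketch. The per-pixel value is taken as the mean of the three RGB channels and the isolated-pixel test uses the 8-neighborhood; both choices are assumptions, since the original does not fix the pixel-value definition or the neighborhood size.

```python
import numpy as np

def bud_salient_filter(image: np.ndarray) -> np.ndarray:
    """Steps S11-S13: keep pixels at or above the image-wide mean value,
    then drop isolated survivors. `image` is an HxWx3 RGB array."""
    value = image.astype(np.float64).mean(axis=2)   # S11: per-pixel value (assumed RGB mean)
    mask = value >= value.mean()                    # S12: suspected bud region pixels

    # S13: discard isolated pixels, i.e. pixels with no kept neighbor in the
    # 8-neighborhood (an assumed neighborhood size).
    padded = np.pad(mask, 1)
    neighbors = sum(
        np.roll(np.roll(padded, dy, axis=0), dx, axis=1)[1:-1, 1:-1]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    mask &= neighbors > 0

    filtered = image.copy()
    filtered[~mask] = 0                             # discarded pixel points set to 0
    return filtered
```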
In this embodiment, S2 comprises the following sub-steps:
S21, converting the bud region significant filtering image into the HSI color space and extracting the I component;
S22, calculating the brightness contrast of each pixel point of the bud region significant filtering image from the I component;
S23, gray-scaling the bud region significant filtering image to obtain the gray value of each pixel point;
S24, calculating the texture complexity of each pixel point of the bud region significant filtering image from the gray values;
S25, adding the brightness contrast and texture complexity of each pixel point to obtain its fluff coefficient value.
The invention converts the bud region significant filtering image into the HSI color space and extracts the I component. The I (Intensity) component represents the brightness information of the color, accurately reflects brightness changes in the image, and is more sensitive to the fine brightness differences of the fluff on the tea bud surface.
Because the fluff is interlaced across the tea buds, the brightness contrast of each pixel point of the bud region significant filtering image can reveal the fluff characteristics. The invention reflects the distribution of the fluff by calculating texture complexity from the gray values, and combines brightness contrast with texture complexity into the fluff coefficient value, improving fluff recognition precision.
In the present embodiment, the brightness contrast in S22 is calculated as:
μi = λ·(Ii − Ic) for Ii > Ic; μi = 0 for Ii ≤ Ic
wherein μi is the brightness contrast of the i-th pixel point, Ii is the I component of the i-th pixel point of the bud region significant filtering image, Ic is the mean of the I component within the neighborhood range of the i-th pixel point, λ is a proportionality coefficient, and i indexes the pixel points of the bud region significant filtering image.
The invention computes the difference between the I component of each pixel point and the mean I component of its neighborhood, reflecting the brightness difference, and applies the proportionality coefficient to it: the larger the brightness difference, the more pronounced the brightness contrast.
Because the brightness of pixel points on the fluff is higher than that of pixel points elsewhere in the bud region, the brightness contrast of a pixel point is set to 0 when its I component is less than or equal to the mean I component of its neighborhood.
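Under these definitions, S21 and S22 can be sketched as follows; the intensity component is computed as I = (R + G + B)/3, and the neighborhood size k and proportionality coefficient lam are assumed parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def brightness_contrast(rgb: np.ndarray, k: int = 5, lam: float = 1.0) -> np.ndarray:
    """Steps S21-S22: per-pixel brightness contrast mu_i = lam * (I_i - I_c)
    for I_i > I_c, and 0 otherwise. k and lam are assumed values."""
    i_comp = rgb.astype(np.float64).mean(axis=2)  # S21: HSI intensity, I = (R+G+B)/3
    i_c = uniform_filter(i_comp, size=k)          # neighborhood mean I_c (k x k window)
    return np.where(i_comp > i_c, lam * (i_comp - i_c), 0.0)  # S22
```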
In this embodiment, the texture complexity in S24 is calculated as:
εi = (1/N)·Σj (Gi,j − Gc)² for Gi ≥ Gc; εi = 0 for Gi < Gc
wherein εi is the texture complexity of the i-th pixel point, Gi is its gray value, Gi,j is the gray value of the j-th pixel point in its neighborhood, Gc is the mean gray value within the neighborhood, i indexes the pixel points of the bud region significant filtering image, j indexes the pixel points of the neighborhood, and N is the size of the neighborhood.
In this embodiment, steps S22 and S24 use a neighborhood range of the same fixed size.
Because the fluff is interlaced across the bud region, the invention measures the dispersion of gray values within the neighborhood range of each pixel point: the larger the texture complexity, the more complex the texture. Meanwhile, since the gray value of pixel points on the fluff is higher than that of pixel points elsewhere in the bud region, the texture complexity is set to 0 when the gray value is less than the mean gray value of the neighborhood.
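The texture measure can be sketched the same way; the local-variance form below matches the reconstruction above, with the neighborhood size k again an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_complexity(gray: np.ndarray, k: int = 5) -> np.ndarray:
    """Steps S23-S24: gray-value dispersion in the k x k neighborhood,
    zeroed where the center pixel is darker than its neighborhood mean."""
    g = gray.astype(np.float64)
    g_c = uniform_filter(g, size=k)              # neighborhood mean G_c
    # (1/N) * sum_j (G_ij - G_c)^2, computed as E[G^2] - E[G]^2 over the window
    eps = uniform_filter(g * g, size=k) - g_c * g_c
    eps[g < g_c] = 0.0                           # fluff is brighter than its surround
    return np.maximum(eps, 0.0)                  # clamp tiny negative rounding errors
```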
In this embodiment, S3 comprises the following sub-steps:
S31, taking the pixel points of the bud region significant filtering image whose fluff coefficient value is greater than a fluff coefficient threshold as fluff pixel points;
S32, taking an unpartitioned fluff pixel point as an initial growth point;
S33, judging whether unpartitioned fluff pixel points exist within the neighborhood range of the initial growth point; if so, classifying those pixel points and the initial growth point into one candidate region and jumping to step S34; if not, classifying the initial growth point into a candidate region by itself and jumping to step S32;
S34, taking a fluff pixel point on the edge of the candidate region that has not yet served as a growth point as a new growth point;
S35, judging whether unpartitioned fluff pixel points exist within the neighborhood range of the new growth point; if so, classifying them into the candidate region and jumping to step S34; if not, jumping directly to step S34, until no edge fluff pixel point of the candidate region has an unpartitioned fluff pixel point within its neighborhood range;
in step S35, when no edge fluff pixel point of the candidate region has an unpartitioned fluff pixel point within its neighborhood range, no fluff pixel points remain at the periphery of the candidate region, so the region cannot expand further and its partition is complete;
S36, jumping to step S32 until every fluff pixel point of the bud region significant filtering image belongs to a candidate region;
in step S36, the method jumps to step S32 to find the next candidate region;
S37, discarding candidate regions whose number of pixel points is less than a number threshold;
S38, fitting a minimum circumscribed rectangle to each remaining candidate region;
S39, cutting the fluff regions out of the bud region significant filtering image at the positions of the minimum circumscribed rectangles.
In the invention, since the fluff is discretely distributed, the neighborhood range is set so that fluff pixel points can be conveniently found.
The shape and position of each fluff region in step S39 are those of its minimum circumscribed rectangle.
The invention takes an unpartitioned fluff pixel point as the initial growth point and gradually incorporates the fluff pixel points within the neighborhood range into the partition by neighborhood search, clustering fluff pixel points that are adjacent in position.
The fluff coefficient threshold is a threshold set for the fluff coefficient value; the number threshold and the fluff coefficient threshold are set according to experiment or experience.
In this embodiment, the minimum circumscribed rectangle is the smallest rectangle that completely covers the candidate region.
Through step S3, the invention segments the fluff region belonging to one bud, so that S4 and S5 can classify each single fluff region and identify whether it belongs to a tea bud.
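The growth procedure of S32 to S39 amounts to connected-component grouping over the fluff mask. The sketch below implements it as a breadth-first search; the search radius and the count threshold min_pixels are assumed values (the original leaves both to experiment or experience), and the minimum circumscribed rectangle is approximated by an axis-aligned bounding box.

```python
from collections import deque
import numpy as np

def partition_fluff(fluff_mask: np.ndarray, min_pixels: int = 20, radius: int = 2):
    """Steps S32-S39: grow candidate regions from unpartitioned fluff pixels,
    discard small regions (S37), and return a bounding box per survivor (S38),
    from which the fluff regions are cropped (S39)."""
    h, w = fluff_mask.shape
    label = -np.ones((h, w), dtype=int)           # -1 means not yet partitioned
    boxes, region_id = [], 0
    for sy, sx in zip(*np.nonzero(fluff_mask)):   # S32: pick an unpartitioned point
        if label[sy, sx] != -1:
            continue
        queue, pixels = deque([(sy, sx)]), []
        label[sy, sx] = region_id
        while queue:                              # S33-S36: neighborhood growth
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    if fluff_mask[ny, nx] and label[ny, nx] == -1:
                        label[ny, nx] = region_id
                        queue.append((ny, nx))
        region_id += 1
        if len(pixels) >= min_pixels:             # S37: drop undersized regions
            ys, xs = zip(*pixels)
            boxes.append((min(ys), min(xs), max(ys), max(xs)))  # S38
    return boxes                                  # S39: crop these from the image
```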
As shown in fig. 2, the expression of the feature fusion unit in S4 is:
X = Conv1(A) ⊙ Conv2(B)
wherein X is the fusion feature, Conv1 and Conv2 are two convolution layers, A is the pixel values of the fluff region, B is the fluff coefficient values of the fluff region, and ⊙ denotes element-wise multiplication.
The fluff region itself is a two-dimensional region. Since some pixel points were discarded in step S12, the pixel values of the fluff region may be partially missing; the missing positions are filled with 0. The invention uses two convolution layers to process the pixel values and the fluff coefficient values of the fluff region respectively, without changing the image size, and multiplies the pixel-value features and the fluff-coefficient features at the same pixel positions, realizing feature fusion and enhancing the recognition of bud regions.
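A minimal PyTorch sketch of this unit is given below; the channel count and kernel size are assumptions, and the padding keeps the spatial size unchanged, as the description requires.

```python
import torch
import torch.nn as nn

class FeatureFusionUnit(nn.Module):
    """Step S4: two convolution branches process the fluff-region pixel
    values A and fluff coefficient values B, and the branch outputs are
    multiplied element-wise to give the fusion feature X."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.conv_a = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.conv_b = nn.Conv2d(1, channels, kernel_size=3, padding=1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a: pixel values, b: fluff coefficient values, both (N, 1, H, W);
        # positions discarded in S12 are expected to be pre-filled with 0.
        return self.conv_a(a) * self.conv_b(b)  # element-wise product -> X
```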
In this embodiment, the formula for calculating the bud morphology value in S5 is:
γ = Length / Width
where γ is the bud morphology value, Length is the length of the fluff region, and Width is the width of the fluff region.
Because tea buds are strip-shaped, the invention introduces the bud morphology value to represent the aspect ratio.
As shown in fig. 2, the classification model in S5 comprises a CNN network, a feature enhancement unit, and a fully connected layer;
the input of the CNN network receives the fusion features, and its output is connected to the first input of the feature enhancement unit;
the second input of the feature enhancement unit receives the bud morphology value, and its output is connected to the input of the fully connected layer;
the output of the fully connected layer serves as the output of the classification model.
In the present embodiment, the expression of the feature enhancement unit is:
Z = γ·Y
wherein Z is the output of the feature enhancement unit, Y is the output of the CNN network, and γ is the bud morphology value.
After extracting features with the CNN network, the invention enhances them with the bud morphology value, so that buds with a larger aspect ratio obtain higher feature values, which improves screening precision.
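A minimal PyTorch sketch of the classification model follows; the CNN depth, channel counts, and the two-class output head are assumptions, while the enhancement step applies the reconstructed expression Z = γ·Y.

```python
import torch
import torch.nn as nn

class BudClassifier(nn.Module):
    """Step S5: CNN feature extraction, feature enhancement by the bud
    morphology value gamma, then a fully connected output layer."""
    def __init__(self, in_channels: int = 16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # Y: (N, 32) feature vector
        )
        self.fc = nn.Linear(32, 2)                   # tea bud / not tea bud

    def forward(self, x: torch.Tensor, gamma: torch.Tensor) -> torch.Tensor:
        y = self.cnn(x)                              # CNN output Y
        z = gamma.view(-1, 1) * y                    # enhancement: Z = gamma * Y
        return self.fc(z)                            # classification logits
```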
The above is only a preferred embodiment of the invention and is not intended to limit the invention; those skilled in the art can make various modifications and variations to it. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall be included in its protection scope.

Claims (6)


Priority Applications (1)

CN202411833058.4A (priority and filing date 2024-12-13): A tea bud screening method based on machine vision

Publications (2)

CN119295957A, published 2025-01-10
CN119295957B, published 2025-04-04

Family

ID=94164362

Families Citing this family (1)

CN120030332B (priority 2025-04-18, published 2025-07-25), 安化县仙山茶叶开发有限公司: A kind of intelligent tea screening method and system

Citations (1)

CN114842240A (priority 2022-04-06, published 2022-08-02), 盐城工学院: Method for classifying crop leaf images with MobileNetV2 fusing a ghost module and an attention mechanism

Family Cites Families (6)

US8913831B2 (priority 2008-07-31, published 2014-12-16), Hewlett-Packard Development Company, L.P.: Perceptual segmentation of images
CN102789579B (priority 2012-07-26, published 2015-06-03), 同济大学: Identification method for the water and fertilizer stress state of greenhouse crops based on computer vision technology
CN112633212B (priority 2020-12-29, published 2022-10-04), 长沙湘丰智能装备股份有限公司: A computer vision-based method for identifying and classifying tea sprouts
CN114842187A (priority 2022-03-08, published 2022-08-02), 中国农业科学院茶叶研究所: Tea tender shoot picking point positioning method based on fusion of thermal and RGB images
CN115100205B (priority 2022-08-26, published 2022-11-15), 南通东德纺织科技有限公司: Method for detecting the quality of fluff on fabric surfaces based on machine vision
CN115797367A (priority 2022-12-17, published 2023-03-14), 杭州电子科技大学: Tea tree tea bud segmentation method based on deep learning



Similar Documents

CN119295957B: A tea bud screening method based on machine vision
CN109255757B: Method for segmenting the fruit stem region of naturally placed grape bunches by machine vision
CN112861654B: A method for obtaining location information of famous and high-quality tea picking points based on machine vision
CN102096808B: Automatic forecasting method of rice planthopper insect situation
CN113674226A: A deep learning-based detection method for tea bud tips for tea picking machines
CN114581801A: A method for fruit tree identification and quantity monitoring based on UAV data collection
CN102214306A: Leaf disease spot identification method and device
CN112507911B: Real-time recognition method of pecan fruits in images based on machine vision
CN109871900A: A method for identifying and locating apples in complex backgrounds based on image processing
CN113313692B: Automatic banana young plant identification and counting method based on aerial visible-light images
CN106651882A: Method and device for identifying and detecting cubilose impurities based on machine vision
CN111784764A: A tea sprout identification and localization algorithm
Wang et al.: A maize leaf segmentation algorithm based on image repairing technology
CN107315012A: Intelligent detection method for end-face corner collapse of composite polycrystalline diamond
CN117974706A: Rock slice particle pit segmentation method based on dynamic threshold and local search
CN114677674A: A fast identification and positioning method for apples based on binocular point clouds
CN114549668B: Method for detecting on-tree fruit maturity based on visual saliency maps
CN111882529A: Method and device for detecting Mura defects of display screens based on Gaussian-surface edge-preserving filtering
CN111401121A: A method for citrus segmentation based on superpixel feature extraction
CN111369497B: Walking-type continuous tree fruit counting method and device
CN107239761A: Fruit tree branch-pulling effect evaluation method based on skeleton corner detection
CN117237384A: A visual inspection method and system for crops grown in smart agriculture
CN115294164B: An automatic-threshold color factor algorithm for green vegetation image segmentation
Germain et al.: Non destructive counting of wheatear with picture analysis
CN113192100B: Edge path acquisition method for key feature areas of time-sharing overlapped plant images

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
