CN108550169B - Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces - Google Patents

Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces

Info

Publication number
CN108550169B
CN108550169B (application CN201810374622.9A)
Authority
CN
China
Prior art keywords
coordinate system, point, chess, image, camera
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201810374622.9A
Other languages
Chinese (zh)
Other versions
CN108550169A
Inventor
韩燮
孙福盛
赵融
郭晓霞
贾彩琴
Current Assignee (the listed assignee may be inaccurate)
North University of China
Original Assignee
North University of China
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by North University of China
Priority to CN201810374622.9A
Publication of CN108550169A
Application granted
Publication of CN108550169B
Legal status: Active


Abstract

(Translated from Chinese)

The invention relates to a method for determining the position of a chess piece in three-dimensional space and calculating its height, which solves the problems, in a chess robot, of inaccurate piece localization and of the high complexity of monocular-camera methods for calculating piece height. The method for determining piece position computes the spatial position of the piece and its distance from the camera: the image is first preprocessed and segmented by the colour of the characters on the pieces; operations on the segmented images determine the pixel coordinates of each piece's centre; the spatial position of the piece is then computed from the colour camera's calibration parameters; finally, spatial points in the colour camera coordinate system are matched to pixels in the depth image coordinate system, the depth of the specified pixel in the colour image is extracted, and the actual distance from the piece to the camera is calculated. The method for calculating piece height uses two calibrations of the colour camera and the relation between the same vector expressed in the camera and world coordinate systems, computing the piece's height from the change in camera height.


Description

Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces
Technical Field
The invention relates to a chess robot and belongs to the fields of machine vision and image processing, in particular to a method for determining the position of chess pieces in three-dimensional space and calculating the height of the pieces. The method can also be applied to calculating the height of a single regular object in space.
Background
With the rapid development of robot technology and its wide application in various industries, machine vision plays an increasingly important role in robot application. In chess robots, the determination of the position of the pieces in space and the calculation of the height of the pieces are also an essential part of the chess robot.
For determining the position of a chess piece in space, existing image-processing methods are strongly affected by external environmental factors (illumination, interference from other objects, and the like); after the pieces are extracted, noise points can be removed only by repeated rounds of image preprocessing, so the processing steps are numerous and repetitive, and the added uncertainty lengthens the image-processing pipeline and its running time.
For calculating the height of the chessmen, most existing approaches measure the object with binocular vision or by extracting a three-dimensional point cloud; these methods suffer from high algorithmic complexity and the high cost of the required cameras.
Disclosure of Invention
In order to solve the technical problems in the prior art, the technical scheme adopted by the invention is divided into the following two parts:
firstly, a method for determining the positions of chess pieces in a three-dimensional space, comprising the following steps:
step 1, collecting images of chess pieces to be determined in spatial positions, preprocessing the collected images, determining H, S, V ranges of red and green characters on the chess pieces according to different colors of characters on the chess pieces and meanings of components in an HSV color model, and segmenting the red and green chess pieces respectively to obtain two binary images;
step 2, carrying out linear fusion on the two binary images obtained by respectively segmenting the red chess pieces and the green chess pieces, and after the two binary images are fused, mutually covering noise points generated in the extraction process respectively to obtain a pair of binary images which have no noise points and contain the red chess pieces and the green chess pieces;
because red is the brighter colour, environmental interference has little effect on the extraction of the red pieces, and the method adapts to most changing environments; green is darker, and the main noise in its extraction comes from pixels belonging to the red pieces: as the light dims, the noise points at the red-piece positions keep increasing. Preprocessing the green-piece extraction image then becomes complicated and fails to reach the expected effect. The two extracted images are therefore fused directly, so that the noise points in the green extraction image are covered by the red-piece regions of the red extraction image; this shortens the later preprocessing stage while still achieving the expected extraction quality.
Step 3, performing expansion operation on the fused binary image, and connecting adjacent elements in the image to enable the region where each extracted chessman is located to be a single and unrelated connected domain;
step 4, extracting outlines of the chesses in the expanded image, drawing circumscribed circles of the outlines of the chesses, and determining the positions of circle centers through the circumscribed circles, namely the positions of the 32 chesses in the image coordinate system;
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera with the calibration function calibrateCamera in OpenCV, and then further calculating the position of the chess piece under the world coordinate system, namely its position in space, through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
step 6, placing the calibration board on the plane where the upper surfaces of the 32 chessmen are located, and calculating the internal and external parameters (H_d, R_d, T_d) of the Kinect's depth camera with the calibration function calibrateCamera in OpenCV;
Because the Kinect's infrared camera and its depth camera are in the same position, the infrared camera can be used to acquire the calibration images, so calibrating the infrared camera is equivalent to calibrating the depth camera. In the images collected by the infrared camera, corner points cannot be extracted from every image, and corner detection sometimes fails because of the material of the object; more images therefore need to be collected to make the infrared calibration accurate, and 20-25 images are recommended;
step 7, according to the conversion relation between a point in the world coordinate system (W_p) and a point in the color camera coordinate system (K_p), K_p = R_rgb * W_p + T_rgb, and the conversion relation between a point in the world coordinate system (W_p) and a point in the depth camera coordinate system (K_dp), K_dp = R_d * W_p + T_d, calculating the relation between the Kinect color camera coordinate system and the depth camera coordinate system, and converting a spatial point under the depth camera coordinate system into the depth pixel coordinate system according to the intrinsic parameters calibrated for the depth camera, thus matching spatial points under the color camera coordinate system with depth pixels under the depth image coordinate system; according to this matching method, the positions of the chessmen under the color camera coordinate system are converted into the depth image coordinate system, and the coordinates of the chessmen under the depth coordinate system are calculated;
step 8, extracting depth-of-field information of the coordinates of the chess pieces under the obtained depth pixel coordinate system according to a storage mode of depth-of-field data in the depth image acquired by the Kinect, and obtaining a vertical distance d from the circle center of the upper surface of the chess piece in the space to the plane where the sensor is located;
step 9, according to the coordinates of the chess piece in the depth camera coordinate system obtained in step 7, projecting the piece onto the XOY plane of the depth camera coordinate system, calculating the distance d1 between the projection point and the origin of the coordinate system, and, combining the distance d obtained in step 8, applying the Pythagorean theorem D^2 = d1^2 + d^2 to calculate the actual distance D from the circle centre of the piece's upper surface to the sensor, thereby determining the position of the chess piece in three-dimensional space.
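The distance computation in steps 8-9 can be sketched in a few lines; this is an illustration of the stated formula, not code from the patent, and the function name is my own:

```python
import math

def piece_distance(d1, d):
    """Actual distance D from the centre of the piece's upper surface to the
    sensor, from the in-plane offset d1 (step 9) and the perpendicular
    depth d (step 8), via D^2 = d1^2 + d^2."""
    return math.sqrt(d1 ** 2 + d ** 2)
```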
In step 1, collect the image of the chess pieces whose positions are to be determined, first perform image preprocessing on the collected image, then, according to the different colors of the characters on the pieces and the meaning of each component in the HSV color model, determine the H, S, V ranges of the red and green characters on the pieces; the steps include:
2.1, carrying out spatial correction on the collected chess piece images by adopting a perspective transformation method, and correcting the images into an orthographic projection form;
2.2, setting ROI for the image after perspective transformation;
2.3, determining H, S, V component values of the red and green characters of the chess pieces as follows according to different colors of characters on the chess pieces and the meaning of each component in the HSV color model: the value ranges of red H, S, V are respectively 1-15, 60-255 and 0-93, and the value ranges of green H, S, V are respectively 0-95, 0-255 and 0-93.
Firstly, the acquired image is spatially corrected: a perspective transformation keeps the projected geometry on the bearing surface unchanged while correcting the image into an orthographic projection. Secondly, to eliminate unnecessary interference from the surrounding environment, reduce later image-processing time, and increase the positioning precision of the piece coordinates, an ROI is set on the perspective-transformed image. Through the perspective transformation and the ROI, the collected source image is reduced to an image containing only the chessmen and the chessboard, preparing for the subsequent segmentation and spatial localization of the pieces.
Because the HSV color model facilitates segmentation of a given color, the chess piece is segmented using the HSV model. According to the difference of the colors of characters on the chess pieces and the significance of each component in the HSV color model, H, S, V ranges of the red characters and the green characters on the chess pieces are determined, and the red characters and the green characters are respectively segmented to obtain two binary images.
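The HSV thresholding and linear fusion of steps 1-2 can be sketched as below. This is a minimal illustration (not the patent's code): the H, S, V ranges are the ones stated in the text, the masks are plain numpy comparisons rather than OpenCV's inRange, and the "linear fusion" is realized as a pixel-wise maximum of the two binary images:

```python
import numpy as np

# H, S, V value ranges for the red and green piece characters, as given
# in the text (red: 1-15, 60-255, 0-93; green: 0-95, 0-255, 0-93).
RED_RANGE = ((1, 15), (60, 255), (0, 93))
GREEN_RANGE = ((0, 95), (0, 255), (0, 93))

def in_range_mask(hsv, rng):
    """Binary mask of pixels whose H, S and V all fall inside rng."""
    (h0, h1), (s0, s1), (v0, v1) = rng
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= h0) & (h <= h1) &
            (s >= s0) & (s <= s1) &
            (v >= v0) & (v <= v1)).astype(np.uint8)

def fuse(red_mask, green_mask):
    """Linear fusion of the two binary images: pixel-wise maximum (OR),
    so noise in one extraction is covered by the other."""
    return np.maximum(red_mask, green_mask)
```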
In step 7, according to the conversion relation between a point in the world coordinate system (W_p) and a point in the color camera coordinate system (K_p), K_p = R_rgb * W_p + T_rgb, and the conversion relation between a point in the world coordinate system (W_p) and a point in the depth camera coordinate system (K_dp), K_dp = R_d * W_p + T_d, the relation between the Kinect color camera coordinate system and the depth camera coordinate system is calculated, and a spatial point under the depth camera coordinate system is converted into the depth pixel coordinate system according to the intrinsic parameters calibrated for the depth camera, so that spatial points under the color camera coordinate system are matched with depth pixels under the depth image coordinate system. The specific matching method is as follows:

the conversion relation between a point in the world coordinate system (W_p) and a point in the color camera coordinate system (K_p) is:

K_p = R_rgb * W_p + T_rgb ①

and the conversion relation between a point in the world coordinate system (W_p) and a point in the depth camera coordinate system (K_dp) is:

K_dp = R_d * W_p + T_d ②

From formula ①:

W_p = R_rgb^(-1) * (K_p - T_rgb) ③

Substituting formula ③ into formula ② gives:

K_dp = R_d * R_rgb^(-1) * K_p + T_d - R_d * R_rgb^(-1) * T_rgb ④

A point K_p in the color camera coordinate system and the corresponding point K_dp in the depth camera coordinate system satisfy:

K_dp = R * K_p + T ⑤

where R and T are the rotation matrix and translation vector between the two coordinate systems. Comparing with formula ④:

R = R_d * R_rgb^(-1) ⑥

T = T_d - R_d * R_rgb^(-1) * T_rgb ⑦

Substituting formulas ⑥ and ⑦ into formula ⑤ yields the conversion relation between a point under the color camera coordinate system and a point under the depth camera coordinate system:

K_dp = R_d * R_rgb^(-1) * (K_p - T_rgb) + T_d ⑧

K_dp is then multiplied by the intrinsic matrix of the depth camera to obtain the pixel under the depth image coordinate system corresponding to the spatial point under the color camera coordinate system, which completes the matching from spatial points under the color camera coordinate system to depth pixels under the depth image coordinate system.
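The colour-to-depth matching described above is a short numpy computation. The sketch below is illustrative (function names are my own): it composes the two extrinsic calibrations into R and T between the cameras, maps a colour-frame point into the depth frame, and projects it through an assumed 3x3 depth intrinsic matrix H_d:

```python
import numpy as np

def color_to_depth(K_p, R_rgb, T_rgb, R_d, T_d):
    """Map a point K_p in the colour-camera frame to the depth-camera frame
    using the two extrinsic calibrations:
        K_dp = R_d * R_rgb^(-1) * (K_p - T_rgb) + T_d."""
    R = R_d @ np.linalg.inv(R_rgb)   # rotation between the two cameras
    T = T_d - R @ T_rgb              # translation between the two cameras
    return R @ K_p + T

def to_depth_pixel(K_dp, H_d):
    """Project the depth-frame point through the depth intrinsics H_d
    to pixel coordinates (u, v) in the depth image."""
    p = H_d @ K_dp
    return p[:2] / p[2]
```

A quick consistency check: a world point pushed through the colour extrinsics and then through `color_to_depth` must land where the depth extrinsics put it directly.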
Secondly, the method for calculating the height of the chess pieces in the three-dimensional space comprises the following steps:
step 1, collecting images of chess pieces to be determined in spatial positions, preprocessing the collected images, determining H, S, V ranges of red and green characters on the chess pieces according to different colors of characters on the chess pieces and meanings of components in an HSV color model, and segmenting the red and green chess pieces respectively to obtain two binary images;
and 2, performing linear fusion on the two binary images obtained by respectively segmenting the red chess pieces and the green chess pieces, and after the two binary images are fused, mutually covering noise points generated in the extraction process to obtain a pair of binary images which have no noise points and contain the red chess pieces and the green chess pieces. Because the noise point is not existed, the speed of the post-image processing is greatly improved, and the steps of the image processing are simplified.
Because red is the brighter colour, environmental interference has little effect on the extraction of the red pieces, and the method adapts to most changing environments; green is darker, and the main noise in its extraction comes from pixels belonging to the red pieces: as the light dims, the noise points at the red-piece positions keep increasing. Preprocessing the green-piece extraction image then becomes complicated and fails to reach the expected effect. The two extracted images are therefore fused directly, so that the noise points in the green extraction image are covered by the red-piece regions of the red extraction image; this shortens the later preprocessing stage while still achieving the expected extraction quality.
Step 3, performing expansion operation on the fused binary image, and connecting adjacent elements in the image to enable the region where each extracted chessman is located to be a single and unrelated connected domain;
step 4, extracting outlines of the chesses in the expanded image, drawing circumscribed circles of the outlines of the chesses, and determining the positions of circle centers through the circumscribed circles, namely the positions of the 32 chesses in the image coordinate system;
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera with the calibration function calibrateCamera in OpenCV, and then further calculating the position of the chess piece under the world coordinate system, namely its position in space, through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
step 6, obtaining the three-dimensional coordinates of the 32 chessmen under the color camera coordinate system according to the conversion relation from the world coordinate system to the color camera coordinate system, selecting the coordinate point A of one chessman under this coordinate system, and forming the vector AO_c from point A to the origin O of the color camera coordinate system;

step 7, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system, converting the vector AO_c into the world coordinate system as AO_w, namely AO_w = R_rgb^(-1) * AO_c; the world coordinates of the selected point A being known, the vector AO_w uniquely determines position 1 of the Kinect color camera in the world coordinate system, (X1, Y1, Z1), where Z1 is the vertical distance from the camera to the plane of the chessmen's upper surfaces;
step 8, placing the calibration board on the plane of the chessboard surface, collecting a scene image, and performing a second calibration of the color camera (H_rgb, R_rgb, T_rgb); then selecting any point B on the calibration board, converting point B from the world coordinate system into the camera coordinate system according to the conversion relation between the world coordinate system and the camera coordinate system in the camera pinhole model, and forming the vector BO_c from point B to the origin O of the color camera coordinate system;

step 9, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system after the second calibration, converting the vector BO_c into the world coordinate system as BO_w, namely BO_w = R_rgb^(-1) * BO_c; the world coordinates of the selected point B being known, the vector BO_w uniquely determines position 2 of the Kinect color camera in the world coordinate system, (X2, Y2, Z2), where Z2 is the vertical distance from the camera to the plane of the board surface;
Step 10, according to the formula h = Z2 - Z1, where h is the actual height of the chess pieces.
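The height calculation of steps 6-10 reduces to recovering the camera origin in world coordinates from one known point. A minimal sketch (my own function names, not the patent's code): the vector from a point P to the camera origin O is simply -P_cam in the camera frame, rotating it by R_rgb^(-1) gives the same vector in the world frame, and adding it to P's world coordinates yields O; the height is then the difference of the two Z values:

```python
import numpy as np

def camera_position_world(P_world, P_cam, R_rgb):
    """Camera origin in world coordinates, recovered from one point known
    in both frames. The vector P -> O is -P_cam in the camera frame;
    R_rgb^(-1) rotates it into the world frame; adding it to P's world
    coordinates gives the camera position (X, Y, Z)."""
    v_cam = -P_cam                        # vector from P to the origin O
    v_world = np.linalg.inv(R_rgb) @ v_cam
    return P_world + v_world

def piece_height(Z1, Z2):
    """h = Z2 - Z1: camera-to-board distance minus camera-to-piece-top
    distance, from the two calibrations."""
    return Z2 - Z1
```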
In step 1, collect the image of the chess pieces whose positions are to be determined, first perform image preprocessing on the collected image, then, according to the different colors of the characters on the pieces and the meaning of each component in the HSV color model, determine the H, S, V ranges of the red and green characters on the pieces; the steps include:
2.1, carrying out spatial correction on the collected chess piece images by adopting a perspective transformation method, and correcting the images into an orthographic projection form;
2.2, setting ROI for the image after perspective transformation;
2.3, determining H, S, V component values of the red and green characters of the chess pieces as follows according to different colors of characters on the chess pieces and the meaning of each component in the HSV color model: the value ranges of red H, S, V are respectively 1-15, 60-255 and 0-93, and the value ranges of green H, S, V are respectively 0-95, 0-255 and 0-93.
Firstly, the acquired image is spatially corrected: a perspective transformation keeps the projected geometry on the bearing surface unchanged while correcting the image into an orthographic projection. Secondly, to eliminate unnecessary interference from the surrounding environment, reduce later image-processing time, and increase the positioning precision of the piece coordinates, an ROI is set on the perspective-transformed image. Through the perspective transformation and the ROI, the collected source image is reduced to an image containing only the chessmen and the chessboard, preparing for the subsequent segmentation and spatial localization of the pieces.
Because the HSV color model facilitates segmentation of a given color, the chess piece is segmented using the HSV model. According to the difference of the colors of characters on the chess pieces and the significance of each component in the HSV color model, H, S, V ranges of the red characters and the green characters on the chess pieces are determined, and the red characters and the green characters are respectively segmented to obtain two binary images.
Through steps 1-10, the actual height of the chess pieces is accurately obtained.
The method for determining the positions of the chess pieces in the three-dimensional space can effectively improve the speed of image processing, simplify the flow of image preprocessing by a method for processing noise points, and improve the accuracy of determining the positions of the chess pieces; the method for determining the height of the chess pieces in the three-dimensional space is applicable to height measurement of a single regular object, reduces algorithm complexity and expense of camera purchase, and can be widely applied; the combination of the two methods improves the accuracy of the mechanical hand in grabbing the chessmen.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a perspective transformation view;
FIG. 3 is a ROI map;
FIG. 4 is an extraction diagram of a red chess piece;
FIG. 5 is a green pawn extraction diagram;
FIG. 6 is a linear fusion map;
FIG. 7 is a dilation map;
FIG. 8 is a pixel coordinate diagram of a chess piece.
Detailed Description
The two methods of the present invention are described in further detail below with reference to the figures and examples.
The method for determining the positions of the chess pieces in the three-dimensional space comprises the following steps:
step 1, collecting images of chess pieces to be determined in spatial position, preprocessing the collected images, firstly, carrying out spatial correction on the collected images of the chess pieces by adopting a perspective transformation method, and correcting the images into an orthographic projection form, as shown in fig. 2; secondly, setting ROI for the image after perspective transformation, as shown in FIG. 3; finally, according to the difference of the character colors on the chess pieces and the significance of each component in the HSV color model, the H, S, V component values of the red and green characters of the chess pieces are determined to be as follows through comparison of different illumination conditions and different environments: the value ranges of red H, S, V are respectively 1-15, 60-255 and 0-93, and the value ranges of green H, S, V are respectively 0-95, 0-255 and 0-93. Dividing the red chess pieces and the green chess pieces respectively to obtain two binary images, as shown in figures 4 and 5;
step 2, the two binary images obtained by respectively segmenting the red chess pieces and the green chess pieces are linearly fused, and after the two binary images are fused, noise points generated in the extraction process of the two binary images are mutually covered to obtain a pair of binary images which have no noise points and contain the red chess pieces and the green chess pieces, as shown in fig. 6;
and 3, performing a dilation operation on the fused binary image, connecting adjacent elements in the image so that the region where each extracted chessman is located becomes a single, mutually independent connected domain. A 10 × 10 kernel with its reference point at the centre is used for the dilation, as shown in fig. 7;
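The dilation step above can be sketched without OpenCV (in practice one would call cv2.dilate); this is an illustrative shift-and-OR implementation with a small odd kernel rather than the 10 × 10 kernel the text uses:

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k all-ones kernel whose reference point
    is at the centre: a pixel becomes 1 if any pixel in its k x k
    neighbourhood is 1, joining adjacent fragments of a piece into one
    connected domain."""
    r = k // 2
    h, w = binary.shape
    padded = np.pad(binary, r)          # zero-pad r pixels on every side
    out = np.zeros_like(binary)
    for dy in range(k):                 # OR together all k*k shifted views
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```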
step 4, extracting outlines of the chess pieces in the expanded image, drawing circumscribed circles of the outlines of the chess pieces, and determining the positions of circle centers through the circumscribed circles, namely the positions of the 32 chess pieces in the image coordinate system, as shown in fig. 8;
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera with the calibration function calibrateCamera in OpenCV, and then further calculating the position of the chess piece under the world coordinate system, namely its position in space, through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
step 6, placing the calibration board on the plane where the upper surfaces of the 32 chessmen are located, and calculating the internal and external parameters (H_d, R_d, T_d) of the Kinect's depth camera with the calibration function calibrateCamera in OpenCV;
In the images collected by the infrared camera, corner points cannot be extracted from every image, and corner detection sometimes fails because of the material of the object; more images therefore need to be collected to make the infrared calibration accurate, and 25 images are collected here;
step 7, according to the conversion relation between a point in the world coordinate system (W_p) and a point in the color camera coordinate system (K_p), K_p = R_rgb * W_p + T_rgb, and the conversion relation between a point in the world coordinate system (W_p) and a point in the depth camera coordinate system (K_dp), K_dp = R_d * W_p + T_d, calculating the relation between the Kinect color camera coordinate system and the depth camera coordinate system, and converting a spatial point under the depth camera coordinate system into the depth pixel coordinate system according to the intrinsic parameters calibrated for the depth camera, thus matching spatial points under the color camera coordinate system with depth pixels under the depth image coordinate system; according to this matching method, the positions of the chessmen under the color camera coordinate system are converted into the depth image coordinate system, and the coordinates of the chessmen under the depth coordinate system are calculated;
the matching method between a spatial point under the color camera coordinate system and a depth pixel under the depth image coordinate system is as follows:

the conversion relation between a point in the world coordinate system (W_p) and a point in the color camera coordinate system (K_p) is:

K_p = R_rgb * W_p + T_rgb ①

and the conversion relation between a point in the world coordinate system (W_p) and a point in the depth camera coordinate system (K_dp) is:

K_dp = R_d * W_p + T_d ②

From formula ①:

W_p = R_rgb^(-1) * (K_p - T_rgb) ③

Substituting formula ③ into formula ② gives:

K_dp = R_d * R_rgb^(-1) * K_p + T_d - R_d * R_rgb^(-1) * T_rgb ④

A point K_p in the color camera coordinate system and the corresponding point K_dp in the depth camera coordinate system satisfy:

K_dp = R * K_p + T ⑤

where R and T are the rotation matrix and translation vector between the two coordinate systems. Comparing with formula ④:

R = R_d * R_rgb^(-1) ⑥

T = T_d - R_d * R_rgb^(-1) * T_rgb ⑦

Substituting formulas ⑥ and ⑦ into formula ⑤ yields the conversion relation between a point under the color camera coordinate system and a point under the depth camera coordinate system:

K_dp = R_d * R_rgb^(-1) * (K_p - T_rgb) + T_d ⑧

K_dp is then multiplied by the intrinsic matrix of the depth camera to obtain the pixel under the depth image coordinate system corresponding to the spatial point under the color camera coordinate system, which completes the matching from spatial points under the color camera coordinate system to depth pixels under the depth image coordinate system.
Step 8, extracting depth-of-field information of the coordinates of the chess pieces under the obtained depth pixel coordinate system according to a storage mode of depth-of-field data in the depth image acquired by the Kinect, and obtaining a vertical distance d from the circle center of the upper surface of the chess piece in the space to the plane where the sensor is located;
the Kinect depth information is stored in a 16-bit image: 13 bits carry the depth and 3 bits carry a player-index value. When no person appears in the image, the 16-bit value can be read directly to obtain the distance information;
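Unpacking that packed value is a two-line bit operation. A sketch under the assumption (matching the Kinect SDK's packed depth format) that the depth occupies the high 13 bits and the player index the low 3 bits; the function name is my own:

```python
def unpack_kinect_depth(raw16):
    """Split a packed 16-bit Kinect depth value into its 13-bit depth and
    3-bit player-index fields (assumed layout: depth in the high 13 bits,
    index in the low 3 bits)."""
    depth = raw16 >> 3        # drop the 3 index bits, leaving the depth
    player = raw16 & 0b111    # the low 3 bits are the player index
    return depth, player
```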
step 9, according to the coordinates of the chess piece in the depth camera coordinate system obtained in step 7, projecting the piece onto the XOY plane of the depth camera coordinate system, calculating the distance d1 between the projection point and the origin of the coordinate system, and, combining the distance d obtained in step 8, applying the Pythagorean theorem D^2 = d1^2 + d^2 to calculate the actual distance D from the circle centre of the piece's upper surface to the sensor, thereby determining the position of the chess piece in three-dimensional space; through steps 1-9, the position of the chess piece in space and its accurate distance D to the Kinect infrared camera are obtained, as shown in the following table.
(Table: spatial positions of the chess pieces and their measured distances D to the sensor.)
Secondly, a method for calculating the height of the chessmen comprises the following steps:
step 1, collecting images of the chess pieces whose spatial positions are to be determined and preprocessing the collected images: first, spatially correcting the collected chess piece images by a perspective transformation, rectifying them into an orthographic projection, as shown in fig. 2; second, setting an ROI on the transformed image, as shown in fig. 3; finally, according to the different colors of the characters on the chess pieces and the significance of each component in the HSV color model, determining the H, S, V component values of the red and green characters through experiments under different illumination conditions and environments: the value ranges of H, S, V for red are 1-15, 60-255 and 0-93 respectively, and for green 0-95, 0-255 and 0-93 respectively. The red and green chess pieces are segmented separately to obtain two binary images, as shown in figures 4 and 5;
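The segmentation in step 1 is a per-channel range test over the HSV values above. A minimal numpy sketch (a small HSV array stands in for a converted image; in practice cv2.cvtColor and cv2.inRange perform these steps, and the sample pixel values are illustrative):

```python
import numpy as np

# H, S, V ranges from the text for red and green piece characters.
RED_LO, RED_HI = (1, 60, 0), (15, 255, 93)
GREEN_LO, GREEN_HI = (0, 0, 0), (95, 255, 93)

def in_range(hsv, lo, hi):
    """Binary mask: 255 where every channel lies within [lo, hi]."""
    ok = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
    return np.where(ok, 255, 0).astype(np.uint8)

hsv = np.array([[[8, 120, 50],      # red-character pixel
                 [60, 100, 50],     # green-character pixel
                 [170, 30, 200]]],  # background pixel
               dtype=np.uint8)
red_mask = in_range(hsv, RED_LO, RED_HI)
green_mask = in_range(hsv, GREEN_LO, GREEN_HI)
```

Note that the stated green H range (0-95) overlaps the red range, so in practice the two masks are applied to spatially distinct character regions.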
step 2, linearly fusing the two binary images obtained by segmenting the red and green chess pieces separately; after fusion, the noise points generated in the two extraction processes cover each other, yielding a single noise-free binary image containing both the red and green chess pieces, as shown in fig. 6;
step 3, performing a dilation operation on the fused binary image, connecting adjacent elements in the image so that the region of each extracted chess piece becomes a single, mutually independent connected domain. The dilation uses a 10 × 10 kernel with the reference point at its center, as shown in fig. 7;
step 4, extracting the contours of the chess pieces in the dilated image, drawing the circumscribed circle of each contour, and determining the circle-center positions through the circumscribed circles, i.e., the positions of the 32 chess pieces in the image coordinate system, as shown in fig. 8;
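Steps 2-4 reduce, for each fused piece blob, to finding the centre of its enclosing circle. A minimal numpy sketch with one synthetic blob (in practice cv2.findContours and cv2.minEnclosingCircle are the natural tools; here the circle centre is approximated from the blob's pixel extents):

```python
import numpy as np

def blob_center(mask):
    """Approximate the circumscribed-circle centre of a single blob
    as the midpoint of its pixel extents along x and y."""
    ys, xs = np.nonzero(mask)
    return ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)

mask = np.zeros((9, 9), dtype=np.uint8)
mask[2:7, 3:8] = 255            # one fused piece region (illustrative)
cx, cy = blob_center(mask)
```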
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, and calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera through the calibration function calibrateCamera in OpenCV; the position of each chess piece in the world coordinate system, i.e., its position in space, is then calculated through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
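Step 5's back-projection from an image pixel to the world plane of the piece tops (Z_w = 0) reduces to inverting a homography built from the intrinsics and extrinsics. A sketch with illustrative values (in practice OpenCV's calibrateCamera supplies H, R, T; the matrices below are not the patent's calibration results):

```python
import numpy as np

def pixel_to_world(u, v, H, R, T):
    """Recover (X_w, Y_w) on the plane Z_w = 0 from pixel (u, v).

    On Z_w = 0 the pinhole projection collapses to a 3x3 homography formed
    from the first two columns of R and the translation T.
    """
    Hmg = H @ np.column_stack((R[:, 0], R[:, 1], T))
    w = np.linalg.inv(Hmg) @ np.array([u, v, 1.0])
    return w[:2] / w[2]

# Illustrative calibration: identity rotation, camera 1000 mm above the plane.
H = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 1000.0])
XY = pixel_to_world(370.0, 265.0, H, R, T)
```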
step 6, obtaining the three-dimensional coordinates of the 32 chess pieces in the color camera coordinate system according to the conversion relation from the world coordinate system to the color camera coordinate system, selecting the coordinate point A (124, 68, 0) of one chess piece in this coordinate system, and forming the vector AO from point A to the origin O of the color camera coordinate system;
step 7, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system, converting the vector AO into its world-coordinate-system counterpart AO′, i.e. AO′ = R_rgb⁻¹ · AO; knowing the world coordinates of the selected point A, the position 1 (-56, 287, 874) of the Kinect color camera in the world coordinate system can then be uniquely obtained from the vector AO′, i.e., the vertical distance from the camera to the plane of the upper surfaces of the chess pieces is 874 mm;
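Steps 6-7 can be sketched directly: the camera origin's world position follows from one known point A and the rotated vector AO. The numeric inputs below are illustrative (identity rotation), chosen so the result matches position 1 from the text:

```python
import numpy as np

def camera_position(A_world, A_cam, R_rgb):
    """World position of the camera origin from one known point A.

    In camera coordinates the vector AO = O_cam - A_cam = -A_cam, since
    O_cam is the origin. Rotating it into world coordinates and adding
    A's world coordinates yields the camera origin's world position.
    """
    AO_world = np.linalg.inv(R_rgb) @ (-A_cam)
    return A_world + AO_world

R_rgb = np.eye(3)                                 # illustrative rotation
A_world = np.array([124.0, 68.0, 0.0])            # point A in world coords
A_cam = np.array([180.0, -219.0, -874.0])         # point A in camera coords
O_world = camera_position(A_world, A_cam, R_rgb)  # camera origin in world
```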
step 8, placing the calibration board on the plane of the chessboard surface, collecting the scene image, and performing a second calibration (H_rgb, R_rgb, T_rgb) of the color camera; then selecting any point B (0, 0, 0) on the calibration board, converting point B from the world coordinate system into the camera coordinate system according to the conversion relation between the world coordinate system and the camera coordinate system in the camera pinhole model, and forming the vector BO from point B to the origin O of the color camera coordinate system;
step 9, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system after the second calibration, converting the vector BO into its world-coordinate-system counterpart BO′, i.e. BO′ = R_rgb⁻¹ · BO; knowing the world coordinates of the selected point B, the position 2 (-87, 209, 886) of the Kinect color camera in the world coordinate system can then be uniquely obtained from the vector BO′, i.e., the vertical distance from the camera to the plane of the chessboard surface is 886 mm;
step 10, according to the formula h = Z2 - Z1, where h is the actual height of the chess pieces; the calculated actual height is 886 mm - 874 mm = 12 mm.

Claims (5)

1. A method for determining the positions of chess pieces in three-dimensional space, characterized in that the method comprises the following steps:
step 1, collecting images of the chess pieces whose spatial positions are to be determined, preprocessing the collected images, determining the H, S, V ranges of the red and green characters on the chess pieces according to the different colors of the characters and the meaning of each component in the HSV color model, and segmenting the red and green chess pieces separately to obtain two binary images;
step 2, linearly fusing the two binary images obtained by segmenting the red and green chess pieces separately; after fusion, the noise points generated in the two extraction processes cover each other, yielding a single noise-free binary image containing both the red and green chess pieces;
step 3, performing a dilation operation on the fused binary image, connecting adjacent elements in the image so that the region of each extracted chess piece becomes a single, mutually independent connected domain;
step 4, extracting the contours of the chess pieces in the dilated image, drawing the circumscribed circle of each contour, and determining the circle-center positions through the circumscribed circles, i.e., the positions of the 32 chess pieces in the image coordinate system;
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, and calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera through the calibration function calibrateCamera in OpenCV; the position of each chess piece in the world coordinate system, i.e., its position in space, is then calculated through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
step 6, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, and calculating the internal and external parameters (H_d, R_d, T_d) of the Kinect's depth camera through the calibration function calibrateCamera in OpenCV;
step 7, according to the conversion relation between the point W_p in the world coordinate system and the point K_p in the color camera coordinate system: K_p = R_rgb·W_p + T_rgb, and the conversion relation between the point W_p in the world coordinate system and the point K_dp in the depth camera coordinate system: K_dp = R_d·W_p + T_d, calculating the relation between the Kinect color camera coordinate system and the depth camera coordinate system, and converting a space point in the depth camera coordinate system into the depth pixel coordinate system according to the internal parameters calibrated for the depth camera, thereby matching space points in the color camera coordinate system to depth pixel points in the depth pixel coordinate system; according to this matching method, converting the positions of the chess pieces in the color camera coordinate system into the depth pixel coordinate system and calculating the coordinates of the chess pieces therein;
step 8, extracting depth-of-field information of the coordinates of the chess pieces under the obtained depth pixel coordinate system according to a storage mode of depth-of-field data in the depth image acquired by the Kinect, and obtaining a vertical distance d from the circle center of the upper surface of the chess piece in the space to the plane where the sensor is located;
step 9, according to the coordinates of the chess piece in the depth pixel coordinate system obtained in step 7, projecting the coordinates onto the XOY plane of the depth camera coordinate system, calculating the distance d1 from the projection point to the origin of the depth camera coordinate system, and, combining the distance d obtained in step 8, applying the Pythagorean theorem D² = d1² + d² to calculate the actual distance D from the circle center of the upper surface of the chess piece to the sensor, thereby determining the position of the chess piece in three-dimensional space.
2. The method for determining the positions of chess pieces in three-dimensional space according to claim 1, characterized in that in step 1, images of the chess pieces whose positions are to be determined are collected, the collected images are first preprocessed, and then, according to the different colors of the characters on the chess pieces and the meaning of each component in the HSV color model, the H, S, V ranges of the red and green characters on the pieces are determined; the steps comprise:
2.1, carrying out spatial correction on the collected chess piece images by adopting a perspective transformation method, and correcting the images into an orthographic projection form;
2.2, setting ROI for the image after perspective transformation;
2.3, determining H, S, V component values of the red and green characters of the chess pieces as follows according to different colors of characters on the chess pieces and the meaning of each component in the HSV color model: the value ranges of red H, S, V are respectively 1-15, 60-255 and 0-93, and the value ranges of green H, S, V are respectively 0-95, 0-255 and 0-93.
3. The method for determining the positions of chess pieces in three-dimensional space according to claim 2, characterized in that in step 7, according to the conversion relation between the point W_p in the world coordinate system and the point K_p in the color camera coordinate system: K_p = R_rgb·W_p + T_rgb, and the conversion relation between the point W_p in the world coordinate system and the point K_dp in the depth camera coordinate system: K_dp = R_d·W_p + T_d, the relation between the Kinect color camera coordinate system and the depth camera coordinate system is calculated, a space point in the depth camera coordinate system is converted into the depth pixel coordinate system according to the internal parameters calibrated for the depth camera, and space points in the color camera coordinate system are matched to depth pixel points in the depth pixel coordinate system; the specific matching method comprises the following steps:
the point W_p in the world coordinate system and the point K_p in the color camera coordinate system satisfy the conversion relation: K_p = R_rgb·W_p + T_rgb (1); the point W_p in the world coordinate system and the point K_dp in the depth camera coordinate system satisfy the conversion relation: K_dp = R_d·W_p + T_d (2); from formula (1):
W_p = R_rgb⁻¹·(K_p − T_rgb) (3)
substituting formula (3) into formula (2) gives:
K_dp = R_d·R_rgb⁻¹·(K_p − T_rgb) + T_d (4)
the point K_p in the color camera coordinate system and the point K_dp in the depth camera coordinate system satisfy: K_dp = R·K_p + T, where R and T are the rotation matrix and translation vector between the two coordinate systems;
combining with formula (4) gives:
R = R_d·R_rgb⁻¹ (5)
T = T_d − R_d·R_rgb⁻¹·T_rgb (6)
substituting formulas (5) and (6) gives the conversion relation between a point in the color camera coordinate system and a point in the depth camera coordinate system:
K_dp = R_d·R_rgb⁻¹·K_p + T_d − R_d·R_rgb⁻¹·T_rgb (7)
K_dp is then multiplied by the internal parameter matrix of the depth camera to obtain the pixel point in the depth pixel coordinate system corresponding to the space point in the color camera coordinate system, thereby matching space points in the color camera coordinate system to depth pixel points in the depth pixel coordinate system.
4. A method for calculating the height of chess pieces in three-dimensional space, characterized in that the method comprises the following steps:
step 1, collecting images of the chess pieces to be located, preprocessing the collected images, determining the H, S, V ranges of the red and green characters on the chess pieces according to the different colors of the characters and the meaning of each component in the HSV color model, and segmenting the red and green chess pieces separately to obtain two binary images;
step 2, linearly fusing the two binary images obtained by segmenting the red and green chess pieces separately; after fusion, the noise points generated in the two extraction processes cover each other, yielding a single noise-free binary image containing both the red and green chess pieces;
step 3, performing a dilation operation on the fused binary image, connecting adjacent elements in the image so that the region of each extracted chess piece becomes a single, mutually independent connected domain;
step 4, extracting the contours of the chess pieces in the dilated image, drawing the circumscribed circle of each contour, and determining the circle-center positions through the circumscribed circles, i.e., the positions of the 32 chess pieces in the image coordinate system;
step 5, placing the calibration board on the plane where the upper surfaces of the 32 chess pieces are located, and calculating the internal and external parameters (H_rgb, R_rgb, T_rgb) of the Kinect's color camera through the calibration function calibrateCamera in OpenCV; the position of each chess piece in the world coordinate system, i.e., its position in space, is then calculated through the conversion relation between the image coordinate system and the world coordinate system in the pinhole camera model;
step 6, obtaining the three-dimensional coordinates of the 32 chess pieces in the color camera coordinate system according to the conversion relation from the world coordinate system to the color camera coordinate system, selecting the coordinate point A of one chess piece in the color camera coordinate system, and forming the vector AO from point A to the origin O of the color camera coordinate system;
step 7, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system, converting the vector AO into its world-coordinate-system counterpart AO′, i.e. AO′ = R_rgb⁻¹ · AO; knowing the world coordinates of the selected point A, the position 1 (X1, Y1, Z1) of the Kinect color camera in the world coordinate system is then uniquely determined from the vector AO′, where Z1 is the vertical distance from the camera to the plane of the upper surfaces of the chess pieces;
step 8, placing the calibration board on the plane of the chessboard surface, collecting a scene image of the calibration board on the chessboard surface, and performing a second calibration (H_rgb, R_rgb, T_rgb) of the color camera; then selecting any point B on the calibration board, converting point B from the world coordinate system into the camera coordinate system according to the conversion relation between the world coordinate system and the camera coordinate system in the camera pinhole model, and forming the vector BO from point B to the origin O of the color camera coordinate system;
step 9, according to the rotation matrix R_rgb between the color camera coordinate system and the world coordinate system after the second calibration, converting the vector BO into its world-coordinate-system counterpart BO′, i.e. BO′ = R_rgb⁻¹ · BO; knowing the world coordinates of the selected point B, the position 2 (X2, Y2, Z2) of the Kinect color camera in the world coordinate system is then uniquely determined from the vector BO′, where Z2 is the vertical distance from the camera to the plane of the chessboard surface;
step 10, according to the formula h = Z2 - Z1, where h is the actual height of the chess piece.
5. The method for calculating the height of chess pieces in three-dimensional space according to claim 4, characterized in that in step 1, images of the chess pieces whose positions are to be determined are collected, the collected images are first preprocessed, and then, according to the different colors of the characters on the chess pieces and the meaning of each component in the HSV color model, the H, S, V ranges of the red and green characters on the pieces are determined; the steps comprise:
2.1, carrying out spatial correction on the collected chess piece images by adopting a perspective transformation method, and correcting the images into an orthographic projection form;
2.2, setting ROI for the image after perspective transformation;
2.3, determining H, S, V component values of the red and green characters of the chess pieces as follows according to different colors of characters on the chess pieces and the meaning of each component in the HSV color model: the value ranges of red H, S, V are respectively 1-15, 60-255 and 0-93, and the value ranges of green H, S, V are respectively 0-95, 0-255 and 0-93.
CN201810374622.9A | 2018-04-24 | Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces | Active | CN108550169B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810374622.9A | 2018-04-24 | 2018-04-24 | Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces

Publications (2)

Publication Number | Publication Date
CN108550169A (en) | 2018-09-18
CN108550169B (en) | 2021-08-10

Family

ID=63512354

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN201810374622.9A | Active | CN108550169B (en) | 2018-04-24 | 2018-04-24

Country Status (1)

Country | Link
CN (1) | CN108550169B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110342252B (en)* | 2019-07-01 | 2024-06-04 | 河南启迪睿视智能科技有限公司 | Automatic article grabbing method and automatic grabbing device
CN110335224B (en)* | 2019-07-05 | 2022-12-13 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium
CN111798511B (en)* | 2020-05-21 | 2023-03-24 | 扬州哈工科创机器人研究院有限公司 | Chessboard and chessman positioning method and device
CN112784717B (en)* | 2021-01-13 | 2022-05-13 | 中北大学 | An automatic sorting method for pipe fittings based on deep learning
CN114734456A (en)* | 2022-03-23 | 2022-07-12 | 深圳市商汤科技有限公司 | Chess playing method, device, electronic equipment, chess playing robot and storage medium
CN119067894B (en)* | 2024-08-28 | 2025-07-29 | 爱棋道(北京)文化传播有限公司 | Chessboard image correction method, chessboard image correction device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107766855A (en)* | 2017-10-25 | 2018-03-06 | 南京阿凡达机器人科技有限公司 | Chess piece localization method, system, storage medium and robot based on machine vision

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20030062675A1 (en)* | 2001-09-28 | 2003-04-03 | Canon Kabushiki Kaisha | Image experiencing system and information processing method
JP3809838B2 (en)* | 2001-11-07 | 2006-08-16 | Davar Pishva | Image highlight correction method using image source specific HSV color coordinates, image highlight correction program, and image acquisition system
US20100015579A1 (en)* | 2008-07-16 | 2010-01-21 | Jerry Schlabach | Cognitive amplification for contextual game-theoretic analysis of courses of action addressing physical engagements
KR102085228B1 (en)* | 2014-03-27 | 2020-03-05 | 한국전자통신연구원 | Imaging processing method and apparatus for calibrating depth of depth sensor


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Gui, Tao Jun, "Chinese Chess Recognition Algorithm Based on Computer Vision", 2014 26th Chinese Control and Decision Conference, 2014-06-02, full text.*
Wang Dianjun, "Vision-based Chinese chess piece recognition and positioning technology", Journal of Tsinghua University (Science and Technology), August 2013, full text.*


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
