CN119991780A - Target tracking method and device, electronic device and readable storage medium - Google Patents

Target tracking method and device, electronic device and readable storage medium

Info

Publication number
CN119991780A
Authority
CN
China
Prior art keywords
target object
image
target
reference pattern
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311472737.9A
Other languages
Chinese (zh)
Inventor
陈林俐
陶展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yinhe Fangyuan Technology Co ltd
Original Assignee
Beijing Yinhe Fangyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yinhe Fangyuan Technology Co ltd
Priority to CN202311472737.9A
Publication of CN119991780A
Status: Pending

Abstract

The invention discloses a target object tracking method, a target object tracking device, an electronic device, and a readable storage medium, and belongs to the technical field of computers. The tracking method comprises: providing at least two target objects, wherein each target object comprises a marking pattern and a reference pattern located in the same plane, the reference pattern is near the periphery of the target object, and the reference pattern of each target object is different; acquiring an RGB image of the target objects through a camera; identifying the reference patterns of the target objects based on the RGB image; and determining the target object corresponding to each identified reference pattern based on that reference pattern. Embodiments of the present invention are capable of determining multiple target objects in one image.

Description

Target tracking method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a target tracking method, a target tracking device, an electronic apparatus, and a readable storage medium.
Background
With the development of computer image processing technology, the calibration plate plays an increasingly important role in applications such as machine vision, image measurement, photogrammetry, three-dimensional reconstruction and the like, for example, correcting lens distortion, determining conversion relation between physical dimensions and pixels, determining interrelation between a three-dimensional geometric position of a point on the surface of a space object and a corresponding point in an image, establishing a geometric model of camera imaging and the like.
However, calibration-plate-based applications currently mostly aim at recognizing a single calibration pattern, such as squares, circles, or a specified pattern, and cannot handle scenes in which several calibration plates must be identified simultaneously in the same image. For example, when the camera must track several similar objects at once, or distinguish different surfaces of the same object (for example, the front, rear, left, right, top, and bottom surfaces), using identical calibration plates makes the tracked objects or surfaces indistinguishable, while using calibration plates of different types limits the maximum number of objects that can be tracked.
Disclosure of Invention
In order to solve at least one of the above problems and disadvantages of the related art, embodiments of the present invention provide an object tracking method, an object tracking device, an electronic apparatus, and a readable storage medium to solve the problem of how to determine or identify a plurality of calibration boards (e.g., at least three) in one image.
It is an object of the present invention to provide a method for tracking a target.
It is another object of the present invention to provide a target tracking apparatus.
It is a further object of the invention to provide an electronic device.
It is a further object of the invention to provide a readable storage medium.
According to one aspect of the present invention, there is provided a target tracking method including:
Providing at least two targets, wherein each target comprises a marking pattern and a reference pattern which are positioned in the same plane, the reference patterns are close to the periphery of the target, and the reference patterns of each target are different;
Collecting RGB images of a target object through a camera;
identifying a reference pattern of the target object based on the RGB image of the target object;
A target object corresponding to the reference pattern of the identified target object is determined based on the reference pattern of the identified target object.
In some embodiments, the marking pattern of the target object is a regular pattern comprising at least one of circles, ovals, rectangles, squares, polygons, triangles, and ChArUco arranged in a regular manner, or
The marking pattern of the target object is an irregular pattern comprising a logo arranged in a regular manner,
And the reference pattern of the target object comprises at least one of a two-dimensional code, a triangle, a diamond, a five-pointed star, a six-pointed star, and a cartoon figure.
In some embodiments, identifying the reference pattern of the object based on the RGB image of the object includes:
determining a gray image of the target object based on the RGB image of the target object;
Determining a binary image of the target object based on the gray level image of the target object;
a reference pattern of the target object is identified based on the binary image of the target object.
In some embodiments, identifying the reference pattern of the target based on the binary image of the target includes:
When the reference pattern is a two-dimensional code, the two-dimensional code in the binary image is decoded to distinguish different two-dimensional codes,
When the reference pattern is at least one of triangle, diamond, five-pointed star, six-pointed star and cartoon pattern,
Providing a binary image of the reference pattern as an identification template;
providing a moving template, and moving the moving template on the binary image of the target object to obtain an overlapped sub-image;
a reference pattern of the object is identified based on the overlapping sub-images and the identification template.
In some embodiments, identifying the reference pattern of the target object based on the overlapping sub-images and the identification template includes:
determining the matching degree of the overlapped sub-images and the recognition template;
when the matching degree indicates that the two images are matched, the images in the overlapped sub-images are identified as the reference pattern of the target object.
In some embodiments, identifying the reference pattern of the object based on the RGB image of the object includes identifying an image of the reference pattern,
Determining an object corresponding to the reference pattern of the identified object based on the reference pattern of the identified object comprises:
determining position information of a pixel point of a first center point of an image of the identified reference pattern;
Determining, for the pixel point of each first center point, whether a target object exists in a search area whose neighborhood radius is R;
When it is determined that a target object exists in the search area, extracting the image information within the search area centered on the pixel point of the first center point with neighborhood radius R, and determining that image information as the target object corresponding to the identified reference pattern,
Wherein the initial value of R is determined based on the target object and the camera.
In some embodiments, the initial value of R is determined by:
Placing at least one target object at a preset distance from a camera, and shooting RGB images of the at least one target object through the camera;
determining position information of a target object in an RGB image of the photographed target object;
Determining a length and a width of the at least one object based on the position information of the at least one object;
An initial value of R is determined based on the length and width of the at least one object.
In some embodiments, determining the initial value of R based on the length and width of the at least one target comprises:
Determining a first pixel value corresponding to the length and a second pixel value corresponding to the width based on the length and the width of the at least one object;
determining a maximum of the first pixel values and the second pixel values of all of the at least one object;
the maximum value is determined as an initial value of R.
In some embodiments, determining a target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object further comprises:
When it is determined that no target object exists in the search area, increasing R, forming a new search area with the increased neighborhood radius R, and re-determining, for the pixel point of each first center point, whether a target object exists in the new search area, until the target object corresponding to the identified reference pattern is identified,
Wherein increasing R comprises growing it gradually in steps of 1 pixel.
In some embodiments, determining a target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object further comprises:
Determining position information of a second center point of the mark pattern of the identified target object under an image coordinate system;
Determining whether the identified object is duplicated or not based on the position information of the second center point of the mark pattern of the identified object in the image coordinate system;
when there is a duplication in the identified targets, all identified targets that are duplicated are deleted.
In some embodiments, the target tracking method further comprises:
determining a first number of identified objects in an initial image of the object;
determining a second number of the target objects in the next frame of image of the target objects;
and determining the relation between the first number and the second number, and tracking the target object according to the relation.
In some embodiments, tracking the target according to the relationship comprises:
when the second number is equal to or less than the first number,
Determining position data of each object in the initial image under an image coordinate system and position data of each object in the next frame of image under the image coordinate system;
Determining the distances between the position data of each target object in the initial image under the image coordinate system and the position data of each target object in the next frame of image under the image coordinate system, and taking the pair with the smallest distance as paired data;
And updating the position data of the object in the initial image matched with the position data of the object in the next frame image in the image coordinate system by using the position data of the object in the next frame image in the image coordinate system, and deleting the position data of the unpaired object.
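The nearest-distance pairing described in the steps above can be sketched as follows. This is an illustrative sketch only, assuming greedy one-to-one matching on 2-D image coordinates; the function and variable names are not part of the invention:

```python
import math

def update_tracks(prev, curr):
    """Pair positions when the next frame has no more targets than the
    initial frame (second number <= first number).

    Each current-frame position is matched to the closest still-unpaired
    previous position; matched entries are updated with the new-frame
    data, and unpaired previous entries are deleted, as described above.
    """
    unpaired = dict(enumerate(prev))   # target id -> (x, y) in image coords
    tracks = {}
    for pos in curr:
        # nearest previous position by Euclidean distance in the image plane
        pid = min(unpaired, key=lambda i: math.dist(unpaired[i], pos))
        tracks[pid] = pos              # update paired data with new position
        del unpaired[pid]              # each id pairs at most once
    return tracks                      # unpaired previous ids are dropped

# Two targets in the initial frame, one found in the next frame:
# the surviving target keeps the id of its nearest previous position.
updated = update_tracks([(0, 0), (10, 10)], [(1, 0)])
```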
In some embodiments, tracking the target according to the relationship comprises:
When the second number is greater than the first number,
Identifying a reference pattern of the target object based on a next frame of RGB image of the target object;
Determining a target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object;
And determining the position data of all the targets in the RGB image of the next frame under the image coordinate system and the number of the targets.
According to another aspect of the present invention, there is provided an object tracking device adapted to track at least two objects, each object including a marker pattern and a reference pattern lying in the same plane, the reference patterns being adjacent to the periphery of the object, the reference patterns of each object being different, the object tracking device comprising:
A camera configured to capture RGB images of a target;
An identification module in communicative connection with the camera, the identification module configured to identify a reference pattern of the target object based on an RGB image of the target object from the camera;
And a tracking module communicatively coupled to the camera and the recognition module, respectively, the tracking module configured to determine a target object corresponding to the reference pattern of the recognized target object based on the reference pattern from the target object recognized by the recognition module.
In some embodiments, the tracking module is further configured to determine a first number of objects in the initial image of objects and a second number of objects in the next frame of image, determine a relationship of the first number and the second number, and track objects according to the relationship.
According to still another aspect of the present invention, there is provided an electronic device including a memory and a processor, the memory having a program stored thereon, wherein the processor implements the object tracking method according to any one of the foregoing embodiments when executing the program on the memory.
According to still another aspect of the present invention, there is provided a readable storage medium having stored therein a computer readable program or instructions which when executed by a processor implement the object tracking method according to any one of the preceding embodiments.
The object tracking method, the object tracking device, the electronic apparatus, and the readable storage medium according to the present invention have at least one of the following advantages:
(1) The object tracking method, the object tracking device, the electronic equipment and the readable storage medium can simultaneously determine or identify the objects to be distinguished by determining or identifying the objects (such as the calibration plate or the tracked object with the calibration plate) provided with different reference patterns, thereby meeting the requirement of simultaneously identifying the scenes of a plurality of objects (such as the calibration plate) in the same image;
(2) According to the target object tracking method, the target object tracking device, the electronic equipment and the readable storage medium, the requirements for identifying a plurality of calibration plates can be met by setting the number of different reference patterns, and the number requirement for the maximum trackable objects is eliminated;
(3) The target object tracking method, the target object tracking device, the electronic equipment and the readable storage medium can realize dynamic tracking of multiple target objects;
(4) The target object tracking method, the target object tracking device, the electronic equipment and the readable storage medium have real-time performance and robustness for the determination or identification process of a plurality of calibration plates.
Drawings
These and/or other aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a flow chart of a target tracking method according to an embodiment of the invention;
FIG. 2 shows a flow chart of a target tracking method according to another embodiment of the invention;
FIGS. 3-6 illustrate targets suitable for use in the target tracking method illustrated in FIG. 1;
Fig. 7 illustrates a flowchart of identifying a reference pattern of a target object based on an RGB image of the target object according to an embodiment of the present invention;
FIG. 8 shows a schematic diagram of dynamic tracking according to an embodiment of the invention;
Fig. 9 shows a target tracking apparatus according to an embodiment of the invention.
Detailed Description
The technical scheme of the invention is further specifically described below through examples and with reference to the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of embodiments of the present invention with reference to the accompanying drawings is intended to illustrate the general inventive concept and should not be taken as limiting the invention.
In an embodiment of the present invention, a target tracking method is provided, which can identify or distinguish multiple targets at the same time, so as to track the targets.
The object of the embodiment of the invention comprises an object which is expected to be tracked, such as a calibration plate, an object provided with the calibration plate, or different surfaces of the same object provided with the calibration plate, etc.
As shown in fig. 1 and 2, the target tracking method includes:
Step S1, providing at least two targets, wherein each target comprises a marking pattern and a reference pattern which are positioned in the same plane, the reference patterns are close to the periphery of the target, and the reference patterns of each target are different;
S2, collecting RGB images of a target object through a camera;
step S3, identifying a reference pattern of the target object based on the RGB image of the target object;
Step S4, determining the target object corresponding to the identified reference pattern based on that reference pattern.
According to the embodiment of the invention, the objects (such as the calibration plate or the tracked object with the calibration plate) provided with different reference patterns are determined or identified simultaneously to determine or identify the objects to be distinguished, so that the scene of identifying a plurality of objects (such as the calibration plate) simultaneously in the same image is satisfied. For example, when tracking multiple objects through a calibration plate, or when it is desired to distinguish between different areas on an object through a calibration plate, these scenarios require the simultaneous identification of multiple calibration plates.
The embodiment of the invention meets the requirement for identifying a plurality of calibration plates by setting the number of different reference patterns, and eliminates the requirement for the number of maximum trackable objects.
The embodiment of the invention has real-time performance and robustness for the determination or identification process of a plurality of calibration plates.
Here, fig. 2 shows an object tracking method using a calibration plate as an example. However, embodiments of the present invention are not limited thereto, and the object may be other objects that need to be tracked.
Embodiments of the present invention need to provide at least two objects to distinguish between the different objects by providing different reference patterns. Of course, embodiments of the present invention are not limited to a particular number of targets, and those skilled in the art may set as desired.
Specifically, each target object (e.g., reference numeral 1 in fig. 3) includes a marking pattern (e.g., reference numeral 11 in fig. 3) and a reference pattern (e.g., reference numeral 12 in fig. 3) that lie in the same plane.
The marking pattern of the object is a regular pattern or an irregular pattern.
The regular pattern includes at least one of circles, ovals, rectangles, squares, polygons, and triangles and ChArUco arranged in a regular pattern, as shown in fig. 3-6.
The embodiment of fig. 3 is a checkerboard pattern. The pattern of the embodiment of fig. 3 has high precision characteristics during post processing, but it is necessary to ensure that the camera obtains a complete pattern image. Moreover, as the distance from the camera becomes larger, the stability during processing may be reduced.
The embodiments of fig. 4 and 5 are circular patterns, which differ in the arrangement rules of the circles. The use of the patterns in the embodiments of fig. 4 and 5 can improve stability during processing as compared to the embodiment of fig. 3. When using the patterns in fig. 4 and 5, it is also necessary to ensure that the camera obtains a complete pattern image.
The embodiment of fig. 6 is ChArUco. The pattern of the embodiment of fig. 6 allows processing of partial images based on the pattern, as compared to the pattern in the embodiment of fig. 3-5. That is, the embodiment of FIG. 6 does not require that the pattern image obtained by the camera be complete.
The irregular pattern includes asymmetric polygons or asymmetric curved polygons arranged in a regular manner; for example, a logo arranged in a regular manner. Using an irregular pattern can increase the freedom and recognizability of the product design.
The reference pattern of the target object comprises at least one of a five-pointed star (shown in fig. 3), a triangle (shown in fig. 4), a two-dimensional code (shown in fig. 5), a diamond (shown in fig. 6), a six-pointed star, and a cartoon figure. The reference pattern is near the periphery of the target object. For example, the reference pattern is located at one of the four corners of the target object (as shown in figs. 3, 4, and 6), or at the upper, lower (as shown in fig. 5), left, or right periphery. Embodiments of the present invention are not limited to a specific location of the reference pattern, which may be set as desired by those skilled in the art.
The embodiment of the invention requires the reference pattern of each target object to be different, so that different target objects can be distinguished at the same time, and requires that the reference patterns not repeat patterns already present in the environment, to avoid identification failures. However, embodiments of the present invention do not require the marking pattern of each target object to be different. After the marking pattern and the reference pattern are determined or selected for each target object, the target object is designed, i.e., the position of the reference pattern on the target object is determined, and the shape of the marking pattern and the shape of the reference pattern of each target object are recorded for subsequent identification. Embodiments of the present invention are not limited to specific shapes of the marking pattern and the reference pattern, which may be set as desired by those skilled in the art.
In one example, after the target is designed, it may be printed as a decal and affixed to the plane of the object, or printed by a laser on a metal or plastic plane.
And after the design of the target object is completed, acquiring an image stream or a video stream of the target object. And placing the designed target object in a space, and collecting RGB images of the target object through a camera. The image at this time may be an image acquired in real time or may be a pre-stored image. The image may be in a picture format or in a video format. This acquisition process may be manually triggered or may be automated.
After the RGB image of the target object is acquired, the identification of the reference pattern is performed. Specifically, as shown in fig. 7, step S3 includes:
S31, determining a gray image of the target object based on the RGB image of the target object;
s32, determining a binary image of the target object based on the gray level image of the target object;
s33, identifying a reference pattern of the target object based on the binary image of the target object.
A grayscale image is an image with only one sample value per pixel, typically displayed as shades of gray from black to white, with multiple levels of color depth between the two. Converting an RGB image into a grayscale image reduces the amount of data that needs to be processed. The RGB image may be converted into a grayscale image by the average method, the max-min average method, the weighted average method, etc. The average method averages the values of the 3 RGB channels of the same pixel. The max-min average method averages the maximum and minimum brightness values among the RGB channels at the same pixel position. The weighted average method computes a weighted sum of the 3 RGB channel values of the same pixel with the weighting coefficients 0.3, 0.59, and 0.11, which are tuned to the human brightness perception system. Of course, other known methods may be used by those skilled in the art to implement the conversion.
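The three conversion methods just described can be sketched per pixel as follows. This is an illustrative sketch; the function names are not part of the invention:

```python
def gray_average(r, g, b):
    """Average method: mean of the 3 RGB channel values."""
    return (r + g + b) / 3

def gray_max_min(r, g, b):
    """Max-min average method: mean of the brightest and darkest channel."""
    return (max(r, g, b) + min(r, g, b)) / 2

def gray_weighted(r, g, b):
    """Weighted average with the perceptual coefficients 0.3, 0.59, 0.11."""
    return 0.3 * r + 0.59 * g + 0.11 * b

# Convert a tiny 2x2 RGB image (rows of (R, G, B) tuples) pixel by pixel.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (128, 128, 128)]]
gray = [[gray_weighted(*px) for px in row] for row in image]
```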
After the grayscale image is obtained, binarization processing is applied to it. Binarization sets the gray value of each pixel on the image to either 0 or 255, so that the entire image exhibits a visual effect of only black and white, which facilitates identification of the reference pattern. Embodiments of the present invention may use the average method, the bimodal method, Otsu's method (OTSU), etc. to obtain the binary image.
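As a minimal sketch of the binarization step, using the average method (the image-wide mean gray value) as the threshold; this is an assumption for illustration, and the bimodal method or Otsu's method could be substituted:

```python
def binarize(gray, threshold=None):
    """Set each pixel to 0 or 255.

    If no threshold is given, the average method is used: the mean gray
    value of the whole image serves as the threshold."""
    pixels = [p for row in gray for p in row]
    if threshold is None:
        threshold = sum(pixels) / len(pixels)
    return [[255 if p >= threshold else 0 for p in row] for row in gray]

gray = [[10, 200], [30, 240]]   # mean gray value = 120
binary = binarize(gray)         # black/white image for pattern identification
```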
The process of identifying the reference pattern varies from one pattern to another.
In an example, when the reference pattern is a two-dimensional code, step S33 includes decoding the two-dimensional code in the binary image to distinguish different two-dimensional codes. The reference pattern is different for each object and the pattern of the reference pattern is known for each object. When the reference pattern is a two-dimensional code, information that the two-dimensional code can be recognized is known. The information of the identified two-dimensional code can be determined by decoding the two-dimensional code in the binary image, thereby identifying the two-dimensional code based on the decoded information. Embodiments of the present invention may be decoded using existing techniques and are not described in detail herein.
In an example, when the reference pattern is at least one of triangle, diamond, five-pointed star, six-pointed star, and cartoon figure, a template matching algorithm may be used to identify the reference pattern. Specifically, step S33 includes:
s331, providing a binary image of the reference pattern as an identification template;
S332, providing a moving template, and moving the moving template on the binary image of the target object to obtain an overlapped sub-image;
s333 identifies a reference pattern of the target object based on the overlapping sub-images and the identification template.
In step S331, a binary image of the reference pattern is set based on the image of the reference pattern. For example, using a five-pointed star as a reference pattern, the five-pointed star pattern is set to black on the inside and white on the outside, and this picture forms a recognition template.
In step S332, the moving template includes a rectangular frame. In an example, the rectangular frame is measured in pixel units, such as 3×3, 3×6, 3×9, 6×3, 9×3, and the like. In another example, the rectangular frame is obtained by scaling such a frame, for example 6×6, 6×12, 6×18, 12×6, 18×6, etc.
The moving template is moved on the binary image obtained in step S32, and the image framed by the rectangular frame on the binary image constitutes the superimposed sub-image. In one example, the moving template may be moved in units of pixels from left to right, top to bottom. Of course, a specific movement pattern can be set by those skilled in the art as required.
Step S333 includes determining a degree of matching between the overlapping sub-images and the recognition template, and recognizing the image in the overlapping sub-images as a reference pattern of the object when the degree of matching indicates that the two match. That is, the reference pattern identifying the target object includes an image (overlapping sub-image) identifying the reference pattern.
The greater the matching degree, the greater the likelihood that the overlapping sub-image and the recognition template are identical. The matching degree may be computed with the squared difference, where a result of 0 indicates the highest matching degree and larger results indicate lower matching degrees. It may also be computed by multiplying the recognition template with the overlapping sub-image, where larger results indicate better matches and a result of 0 indicates the lowest matching degree. It may also be computed by correlating the values of the recognition template and of the overlapping sub-image relative to their respective means, where a result of 1 indicates the highest matching degree, a result of -1 the lowest, and a result of 0 no correlation (a random sequence).
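The first of these measures, squared-difference matching on binary images, can be sketched as follows. This is an illustrative sketch; the tolerance parameter and function names are assumptions, not part of the invention:

```python
def match_sqdiff(sub, template):
    """Squared-difference matching score over two same-size binary images.

    A result of 0 means a perfect match; larger values mean a worse
    match, as described above."""
    return sum((s - t) ** 2
               for srow, trow in zip(sub, template)
               for s, t in zip(srow, trow))

def is_match(sub, template, tol=0):
    """True when the overlapping sub-image matches the recognition template."""
    return match_sqdiff(sub, template) <= tol

# A five-pointed-star recognition template would be a binary image like this
# (toy 2x2 stand-in); a sub-image identical to it scores 0.
template = [[0, 255], [255, 0]]
```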
In an example, step S3 further includes:
S34 determines positional information of a pixel point of the first center point of the image of the reference pattern identified in step S33.
After the reference pattern is identified in the overlapping sub-images, a first center point of the image of the reference pattern can be determined, a pixel point of the first center point can be determined according to the first center point, and position coordinates under an image coordinate system are determined according to the pixel point of the first center point. The position coordinates may be used for subsequent extraction of the target.
After the reference pattern is identified, a target object corresponding to the reference pattern is extracted. Specifically, step S4 includes:
And S41, determining, for the pixel point of each first center point (determined in step S34), whether a target object exists in a search area of neighborhood radius R; when a target object exists in the search area, extracting the image information within the search area centered on the pixel point of the first center point with neighborhood radius R, and determining that image information as the target object corresponding to the identified reference pattern.
The pixel point of the first center point may be taken as the center of the search area, and it may be determined whether or not a target object exists in the image of the reference pattern framed by the search area. For example, in the case where the target is a calibration plate, whether there is a pattern of the calibration plate in the selected area may be checked by a calibration plate recognition method.
When it is determined that a target object exists in the search area, the image information in the identified image within the search area centered on the pixel point of the first center point with neighborhood radius R is extracted and determined as the target object corresponding to the reference pattern of the identified target object. After the target object is identified, the position information of the second center point of the marker pattern of the identified target object in the image coordinate system may be determined. For example, the position information includes the coordinates of the pixel point of the second center point in the image coordinate system.
When it is determined that no target object exists in the search area, R is increased, a new search area is formed with the increased R as the neighborhood radius, and whether a target object exists in the new search area is re-determined based on the pixel point of each first center point, until the target object corresponding to the reference pattern of the identified target object is identified. In one example, R is increased gradually in steps of 1 pixel.
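The growing-neighborhood search described above can be sketched as follows. Here `contains_target` stands in for whatever detector is used (e.g. a calibration-board recognizer run on the cropped sub-image), and the upper bound `r_max` is an assumption added to keep the loop finite, since the text itself loops until a target is found:

```python
def find_target_near(center, contains_target, r0, r_max):
    """Grow a search neighborhood of radius R around `center` (pixel
    coordinates) until the detector reports a target, as in step S41.

    contains_target(center, r) is a caller-supplied predicate that checks
    the sub-image of radius r around center for a target object."""
    r = r0
    while r <= r_max:
        if contains_target(center, r):
            return r          # target found within this neighborhood radius
        r += 1                # enlarge the neighborhood by 1 pixel, as in the text
    return None               # nothing found within the allowed range
```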
In an example, the initial value of R is set to 1 pixel, 2 pixels, or more.
In an example, an initial value of R is determined based on the target and the camera. Specifically, the initial value of R is determined by:
Placing at least one target object at a preset distance from a camera, and shooting RGB images of the at least one target object through the camera;
determining position information of a target object in an RGB image of the photographed target object;
Determining a length and a width of the at least one object based on the position information of the at least one object;
An initial value of R is determined based on the length and width of the at least one object.
The at least one target object described herein is at least one of the at least two provided target objects. In an example, when the marking patterns of the at least two target objects are all different, the at least one target object may be all of the at least two target objects or a portion of them. Of course, when the at least one target object comprises all of the target objects, the initial value of R is determined more accurately.
When some of the at least two target objects share the same marking pattern, the initial value of R is determined over target objects with different marking patterns. That is, when there are M target objects bearing N distinct marking patterns (M > N > 0, M and N both positive integers), the number of the at least one target object may be N, or any positive integer smaller than N.
The preset distance is the maximum distance between the camera and the target object in the application scene, and the camera still can clearly shoot the target object at the maximum distance. The embodiment of the present invention is not limited to a specific value of the preset distance, and those skilled in the art can set the value as required.
According to the existing method, the position information of the target object can be identified in the photographed RGB image of the target object. The position information of the target includes coordinates of the pixel points of the marker pattern of the target in the image coordinate system.
The length and width of the target can be determined based on the position information of the target. For example, the length and width of the target object may be determined according to coordinates of the pixel points of the marker pattern in the image coordinate system.
An initial value of R is determined based on the length and width of the at least one object. This process includes:
Determining a first pixel value corresponding to the length and a second pixel value corresponding to the width based on the length and the width of the at least one object;
determining a maximum of the first pixel values and the second pixel values of all of the at least one object;
the maximum value is determined as an initial value of R.
The process of determining the length and width of the object includes determining a first pixel value corresponding to the length and a second pixel value corresponding to the width. And comparing the first pixel value and the second pixel value of all the targets in at least one target respectively, and determining the maximum value in the first pixel value and the second pixel value. The maximum value is determined as the initial value of R.
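The computation of the initial value of R from the measured lengths and widths can be sketched as follows; the bounding-box representation `(x_min, y_min, x_max, y_max)` in pixel coordinates is an assumption for illustration:

```python
def initial_r(boxes):
    """Initial search radius R: the maximum over every target object's
    length and width in pixels, per the steps above.

    `boxes` is a list of (x_min, y_min, x_max, y_max) pixel coordinates
    of the marker pattern of each target object."""
    best = 0
    for x_min, y_min, x_max, y_max in boxes:
        length_px = x_max - x_min   # first pixel value (length)
        width_px = y_max - y_min    # second pixel value (width)
        best = max(best, length_px, width_px)
    return best
```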
In some cases, the reference pattern P of a first target object lies close to a second target object in the identified image. During identification, the second target object may then be identified both from the reference pattern P and from its own reference pattern Q, yielding duplicate, inaccurate results that hamper the later tracking process. In view of this, the embodiment of the present invention can determine whether the recognition result is correct. Specifically, step S4 includes:
S42, determining whether the identified target objects contain duplicates based on the positions of the second center points of the marker patterns of the identified target objects in the image coordinate system, and deleting all duplicated identified target objects when duplicates exist.
As described above, after a target object is identified, the coordinates of the pixel point of the second center point of its marker pattern in the image coordinate system may be recorded. Whether the coordinates of the second center points of the marker patterns of the identified target objects are the same is then determined. When the coordinates of the second center points of the marker patterns of at least two target objects are the same, the identified target objects are deemed duplicated, and all duplicated identified target objects are deleted. The comparison of the coordinates may be performed by any existing method, which the embodiment of the present invention does not limit.
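The duplicate-removal rule of step S42 (all duplicated identified target objects are deleted, not merely the extras) can be sketched as:

```python
from collections import Counter

def drop_duplicates(detections):
    """Remove every detection whose second-center-point coordinates occur
    more than once, per step S42.

    `detections` maps a target identifier to the (x, y) coordinates of the
    second center point of its marker pattern in the image coordinate system."""
    counts = Counter(detections.values())
    return {tid: c for tid, c in detections.items() if counts[c] == 1}
```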
In an example, step S4 further includes:
S43, in the result of the processing in step S42, extracting the image information of the determined target object with the coordinates of the pixel point of the first center point recorded in step S34 as the center and the R determined in step S41 as the neighborhood radius, and recording the coordinate information of the pixel point of the second center point of the marker pattern of the identified target object in the image coordinate system for the subsequent tracking algorithm.
In another example of the present invention, as shown in fig. 2, the tracking method of the target object further includes:
determining a first number of identified objects in an initial image of the object;
determining a second number of the target objects in the next frame of image of the target objects;
and determining the relation between the first number and the second number, and tracking the target object according to the relation.
Thus, embodiments of the present invention implement a dynamic tracking process by tracking a current target object according to a relationship of a first number and a second number.
After the determination or identification of the objects in the initial image according to the previous embodiments, the first number of determined or identified objects may be known.
The determination or identification process for the target objects in the next frame image does not identify them based on differences in the reference patterns (e.g., the solution of the foregoing embodiments), but instead identifies them based on the marking patterns of the target objects. For example, feature points may be set on a target object, and the target object may be identified based on the spatial data of the feature points in a pattern coordinate system, the internal reference data of the camera, a world coordinate system, a camera coordinate system, and the like. Of course, embodiments of the present disclosure are not limited to a specific identification process, and those skilled in the art may also utilize other disclosed techniques to identify the number of target objects in the next frame image.
The target object is tracked by comparing the relationship (e.g., the magnitude relationship) between the first number and the second number.
Specifically, when the second number is less than or equal to the first number, the object determined in the next frame of image is determined or identified in the initial image, and the method includes the following steps:
determining position data of each object in the initial image under an image coordinate system and position data of each object in the next frame of image under the image coordinate system;
Determining the distance between the position data of the object in the initial image under the image coordinate system and the position data of the object in the next frame of image under the image coordinate system, and taking the position data closest to the distance as pairing data;
And updating the position data of the object in the initial image matched with the position data of the object in the next frame image in the image coordinate system by using the position data of the object in the next frame image in the image coordinate system, and deleting the position data of the unpaired object.
The position data of a target object includes the coordinate data of the second center point of its marker pattern in the image coordinate system. After the target object is determined, the coordinate values of the region where the marker pattern is located are determined in the image coordinate system, such as an O-XY rectangular coordinate system, and the coordinates of the second center point of that region are computed as x_mid = (x_max + x_min)/2 and y_mid = (y_max + y_min)/2, where x_max and x_min are the maximum and minimum coordinate values of the region along the x-axis of the image coordinate system, and y_max and y_min are the maximum and minimum coordinate values of the region along the y-axis. The coordinates of the second center point of the region are taken as the position data of the target object in the image coordinate system. In this way, the position data of each target object in the initial image and in the next frame image under the image coordinate system can be determined.
When the position data of the target objects in the initial image and in the next frame image have been determined, the distance (e.g., the Euclidean distance) between position data across the two images is determined. The Euclidean distance between the center points of the target object regions in the two images can be calculated pairwise. The pair of position data with the smallest Euclidean distance is determined as the paired data.
And for paired data, updating the position data of the target object in the initial image in the pairing by using the position data of the target object in the next frame image in the pairing, and deleting the position data of the unpaired target object, thereby realizing dynamic tracking of the target object.
Taking fig. 8 as an example, three objects A, B and C are identified in the initial image, three objects a ', B ' and C ' are identified in the next frame image, the first number and the second number are equal, the euclidean distance between the second center point of a and the second center point of each object (a ', B ' and C ') in the next frame image is determined, which may be denoted as AA ', AB ' and AC ', respectively, the euclidean distance between the second center point of B and the second center point of each object in the next frame image is determined, which may be denoted as BA ', BB ' and BC ', respectively, and the euclidean distance between the second center point of C and the second center point of each object in the next frame image is determined, which may be denoted as CA ', CB ' and CC ', respectively. In the example of fig. 8, AA 'is shortest among AA', AB 'and AC', so that the objects a and a 'are paired data, the position data of a is updated using the position data of a', BB 'is shortest among BA', BB 'and BC', so that the objects B and B 'are paired data, the position data of B is updated using the position data of B', and CC 'is shortest among CA', CB 'and CC', so that the objects C and C 'are paired data, and the position data of C is updated using the position data of C', thereby completing tracking of the objects.
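The pairing-and-update procedure above (for the case where the second number is less than or equal to the first number) can be sketched as follows. The greedy one-to-one claiming of nearest centers is an assumption for illustration, since the text only specifies taking the position data with the smallest distance as paired data:

```python
import math

def pair_and_update(prev, curr):
    """Pair each target object in the previous frame with its nearest
    (Euclidean) center in the current frame and overwrite its position;
    unpaired previous target objects are dropped.

    `prev` and `curr` map target labels to the (x, y) second-center-point
    coordinates in the image coordinate system."""
    updated = {}
    taken = set()
    for pid, p in prev.items():
        # nearest not-yet-claimed detection in the current frame
        candidates = [(math.dist(p, c), cid)
                      for cid, c in curr.items() if cid not in taken]
        if not candidates:
            continue                      # unpaired: delete this target's data
        _, cid = min(candidates)
        taken.add(cid)
        updated[pid] = curr[cid]          # update with the new-frame position
    return updated
```

On the fig. 8 example, A pairs with A', B with B', and C with C', and each position is replaced by its new-frame counterpart.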
When the second number is larger than the first number, a target object that was not determined or identified in the initial image appears in the next frame image, so it is necessary to identify target objects based on the differences in the reference patterns according to the method of the foregoing embodiments. The method specifically includes the following steps:
identifying a reference pattern of the target object based on the next frame of RGB image of the target object, which is referred to above and will not be described in detail herein;
Determining a target object corresponding to the identified reference pattern of the target object based on the identified reference pattern of the target object, which is referred to in the foregoing, and will not be described in detail herein;
And determining the position data of all the targets in the RGB image of the next frame under the image coordinate system and the number of the targets.
After identifying all objects in the next frame of image based on the reference pattern, the position data of all objects in the next frame of image and the number of objects need to be determined for use in later dynamic tracking.
Thus, each updated image can refresh the position data of the target objects using the above method, completing the dynamic tracking of the target objects.
In an embodiment of the present invention, a target tracking device is provided, which is adapted to track at least two targets. Each object comprises a marking pattern and a reference pattern which are positioned in the same plane, the reference pattern is close to the periphery of the object, and the reference patterns of each object are different. The object tracking device can execute the object tracking method according to any of the above embodiments.
As shown in fig. 9, the object tracking device 100 includes a camera 10, an identification module 20, and a tracking module 30.
The camera 10 is a camera capable of capturing RGB images of a target object, and may include a monocular camera, a binocular camera, and the like.
The identification module 20 is communicatively connected to the camera 10. The recognition module 20 is configured to recognize a reference pattern of the target object based on the RGB image of the target object from the camera 10. The specific process of identifying the reference pattern may be referred to the above embodiments, and will not be described herein.
The tracking module 30 is communicatively connected to the camera 10 and the identification module 20, respectively. The tracking module 30 is configured to determine a target object corresponding to the reference pattern of the identified target object based on the reference pattern of the target object identified from the identification module 20. The specific process of determining the target object may be referred to the above embodiments, and will not be described herein.
In one example, tracking module 30 is further configured to determine a first number of objects in the initial image of objects and a second number of objects in the next frame of images, determine a relationship of the first number and the second number, and track the objects according to the relationship. The specific process of tracking the target object can be referred to the above embodiments, and will not be described herein.
In an embodiment of the present invention, a readable storage medium is provided. The readable storage medium stores a program or instructions that when executed by a processor implement the object tracking method according to any one of the above embodiments.
A "readable storage medium" of embodiments of the present invention refers to any medium that participates in providing programs or instructions to a processor for execution. Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as a storage device. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during Radio Frequency (RF) and Infrared (IR) data communications. Common forms of readable storage media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
In the embodiment of the invention, a cloud server is also provided. The cloud server includes a memory and a processor. The processor may be a central processing unit (central processing unit, CPU). The memory stores a program or instructions that when executed by the processor perform the object tracking method described above. In one example, the cloud server may be a virtual server formed by mapping physical servers through virtualization techniques. Wherein, the physical servers can be one or more, and when the physical servers are a plurality of, the cloud server can be a virtual server formed by mapping a server cluster through a virtualization technology. The virtual server may also be one or more. In one example, a cloud server may provide users with use through a cloud platform. In one example, the memory may be a readable storage medium.
In an embodiment of the invention, an electronic device is also provided. The electronic device (not shown) comprises a processor (not shown) and a memory (not shown). The memory has stored thereon a program which, when executed by the processor, is capable of implementing any of the object tracking methods described above.
In one example, the processor may be a microprocessor, such as a general-purpose processor like a Graphics Processing Unit (GPU), a Central Processing Unit (CPU), or a Digital Signal Processor (DSP). In one example, the processor may also be a microprocessor core implemented in hardware circuitry, such as a microprocessor core implemented via reconfigurable logic in hardware logic components including Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SoCs), or the like.
In one example, the processor may also be a virtual processor, which may be a virtual processor with intel x86 processor characteristics, or may be a virtual processor with PowerPC processor characteristics. Preferably, the processor is a graphics processor. In one example, the processor may be a single-core processor or a multi-core processor.
In one example, the memory includes volatile memory (i.e., random access memory) and nonvolatile memory. Volatile memory includes main memory, cache (Cache), etc., and nonvolatile memory includes auxiliary memory, etc. In one example, the memory may be provided as a remote memory that may be connected to the processor through a network (wired or wireless). Including but not limited to wide area networks, local area networks, metropolitan area networks, personal area networks, the internet, satellite communications networks, and any combination thereof.
In one example, a processor executes a task thread that creates a correspondence based on a program obtained in memory and executes the thread. In one example, a processor retrieves a program from memory based on read instructions in memory to create a corresponding task thread and execute the thread. The program is used for realizing a control method for tracking the target object.
While the subject matter described herein is presented in the general context of operating systems and application programs that execute on a computer system, those skilled in the art will recognize that other types of program modules may also be implemented in combination. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those of skill in the art will appreciate that the steps of a method described in connection with any of the examples herein can be implemented in electronic hardware, or in combinations of computer software and electronic hardware. Whether a function is implemented in hardware or software depends mainly on the particular application and the design constraints of the solution. Those skilled in the art may use different methods for each particular application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the present application.
When the method steps are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Thus, the aspects of the present invention, in essence or contributing to the prior art or portions of the same, may be embodied in the form of a software product stored in a storage medium, comprising instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the examples of the present invention.
Although a few embodiments of the present general inventive concept have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the claims and their equivalents.

Claims (17)

Translated fromChinese
1.一种目标物追踪方法,包括:1. A target tracking method, comprising:提供至少两个目标物,每个目标物包括位于同一平面中的标记图案和参考图案,参考图案靠近目标物的周边,每个目标物的参考图案都不相同;Providing at least two targets, each target comprising a marking pattern and a reference pattern located in the same plane, the reference pattern being close to the periphery of the target, and the reference pattern of each target being different;通过相机采集目标物的RGB图像;Collect the RGB image of the target object through the camera;基于目标物的RGB图像识别目标物的参考图案;Recognize a reference pattern of the target object based on an RGB image of the target object;基于识别出的目标物的参考图案确定与所述识别出的目标物的参考图案对应的目标物。An object corresponding to the reference pattern of the identified object is determined based on the reference pattern of the identified object.2.根据权利要求1所述的目标物追踪方法,其中,2. The target tracking method according to claim 1, wherein:目标物的标记图案是规则图形,The marking pattern of the target object is a regular shape.所述规则图形包括按规则排列的圆形、椭圆形、矩形、正方形、多边形和三角形以及ChArUco中的至少一种;或者The regular graphics include at least one of a circle, an ellipse, a rectangle, a square, a polygon, a triangle, and ChArUco arranged in a regular pattern; or目标物的标记图案是不规则图形,所述不规则图形包括按规则排列的logo,The marking pattern of the target object is an irregular pattern, and the irregular pattern includes logos arranged in a regular pattern.目标物的参考图案包括二维码、三角形、菱形、五角星、六芒星和卡通图形中的至少一种。The reference pattern of the target object includes at least one of a QR code, a triangle, a rhombus, a five-pointed star, a six-pointed star and a cartoon graphic.3.根据权利要求2所述的目标物追踪方法,其中,3. 
The target tracking method according to claim 2, wherein:基于目标物的RGB图像识别目标物的参考图案包括:Reference patterns for identifying a target based on its RGB image include:基于目标物的RGB图像确定目标物的灰度图像;Determine a grayscale image of the target object based on the RGB image of the target object;基于目标物的灰度图像确定目标物的二值图像;Determine a binary image of the target object based on the grayscale image of the target object;基于目标物的二值图像识别目标物的参考图案。A reference pattern of the target object is recognized based on the binary image of the target object.4.根据权利要求3所述的目标物追踪方法,其中,4. The target tracking method according to claim 3, wherein:基于目标物的二值图像识别目标物的参考图案包括:Reference patterns for identifying a target based on its binary image include:当所述参考图案为二维码时,通过对二值图像中的二维码进行解码以区分不同的二维码,When the reference pattern is a two-dimensional code, different two-dimensional codes are distinguished by decoding the two-dimensional codes in the binary image.当所述参考图案为三角形、菱形、五角星、六芒星和卡通图形中的至少一种时,When the reference pattern is at least one of a triangle, a rhombus, a five-pointed star, a six-pointed star and a cartoon figure,提供参考图案的二值图像作为识别模板;providing a binary image of a reference pattern as a recognition template;提供移动模板,并将移动模板在目标物的二值图像上移动并获得重叠子图像;Providing a moving template, and moving the moving template on the binary image of the target object to obtain overlapping sub-images;基于重叠子图像和识别模板识别目标物的参考图案。The reference pattern of the target object is recognized based on the overlapping sub-images and the recognition template.5.根据权利要求4所述的目标物追踪方法,其中,5. 
The target tracking method according to claim 4, wherein:基于重叠子图像和识别模板识别目标物的参考图案包括:Reference patterns for identifying targets based on overlapping sub-images and recognition templates include:确定重叠子图像和识别模板的匹配程度;Determining the matching degree between the overlapping sub-images and the recognition template;当所述匹配程度表示为二者匹配时,则将重叠子图像中的图像识别为目标物的参考图案。When the matching degree indicates that the two are matched, the image in the overlapping sub-image is identified as a reference pattern of the target object.6.根据权利要求1-5中任一项所述的目标物追踪方法,其中,6. The target tracking method according to any one of claims 1 to 5, wherein:基于目标物的RGB图像识别目标物的参考图案包括识别参考图案的图像,Recognizing a reference pattern of a target object based on an RGB image of the target object includes recognizing an image of the reference pattern,基于识别出的目标物的参考图案确定与所述识别出的目标物的参考图案对应的目标物包括:Determining a target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object includes:确定识别出的参考图案的图像的第一中心点的像素点的位置信息;Determine position information of a pixel point of a first center point of an image of the identified reference pattern;基于每个所述第一中心点的像素点确定在以R为领域的搜索区域内是否存在目标物;Determine whether there is a target object in the search area with R as the domain based on each pixel point of the first center point;当确定在所述搜索区域内存在目标物时,以所述第一中心点的像素点为中心并以R为领域的搜索区域提取所述识别的图像中的图像信息,将所述图像信息确定为与所述识别出的目标物的参考图案对应的目标物,When it is determined that there is a target object in the search area, extract image information from the recognized image in a search area centered on the pixel point of the first center point and with R as the area, and determine the image information as a target object corresponding to the reference pattern of the recognized target object,其中,R的初始值基于目标物和相机确定。The initial value of R is determined based on the target object and the camera.7.根据权利要求6所述的目标物追踪方法,其中,7. 
The target tracking method according to claim 6, wherein:R的初始值通过如下步骤确定:The initial value of R is determined by the following steps:将至少一个目标物放置成距离相机预设距离,并通过相机拍摄所述至少一个目标物的RGB图像;Placing at least one target object at a preset distance from the camera, and capturing an RGB image of the at least one target object through the camera;确定所拍摄的目标物的RGB图像中的目标物的位置信息;Determine the position information of the target object in the captured RGB image of the target object;基于所述至少一个目标物的位置信息确定所述至少一个目标物的长度和宽度;determining a length and a width of the at least one target object based on the position information of the at least one target object;基于所述至少一个目标物的长度和宽度确定R的初始值。An initial value of R is determined based on the length and width of the at least one object.8.根据权利要求7所述的目标物追踪方法,其中,8. The target tracking method according to claim 7, wherein:基于所述至少一个目标物的长度和宽度确定R的初始值包括:Determining an initial value of R based on the length and width of the at least one target object includes:基于所述至少一个目标物的长度和宽度确定长度对应的第一像素值和宽度对应的第二像素值;Determine a first pixel value corresponding to the length and a second pixel value corresponding to the width based on the length and the width of the at least one target object;确定所述至少一个目标物中的所有目标物的第一像素值和第二像素值中的最大值;Determine a maximum value among the first pixel value and the second pixel value of all the target objects in the at least one target object;将所述最大值确定为R的初始值。The maximum value is determined as the initial value of R.9.根据权利要求8所述的目标物追踪方法,其中,9. 
The target tracking method according to claim 8, wherein:基于识别出的目标物的参考图案确定与所述识别出的目标物的参考图案对应的目标物还包括:Determining the target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object further includes:当确定在所述搜索区域内不存在目标物时,增大R并以增大后的R为领域形成新的搜索区域,并基于每个所述第一中心点的像素点重新确定在新的搜索区域内是否存在目标物,直至识别出与所述识别出的目标物的参考图案对应的目标物为止,When it is determined that there is no target object in the search area, R is increased and a new search area is formed with the increased R as the area, and whether there is a target object in the new search area is re-determined based on the pixel points of each of the first center points until a target object corresponding to the reference pattern of the identified target object is identified,其中增大领域R包括以1个像素大小逐渐递增。The enlarged area R includes a gradual increase in size of 1 pixel.10.根据权利要求9所述的目标物追踪方法,其中,10. The target tracking method according to claim 9, wherein:基于识别出的目标物的参考图案确定与所述识别出的目标物的参考图案对应的目标物还包括:Determining the target object corresponding to the reference pattern of the identified target object based on the reference pattern of the identified target object further includes:确定识别出的目标物的标记图案的第二中心点在图像坐标系下的位置信息;Determine the position information of the second center point of the marking pattern of the identified target object in the image coordinate system;基于识别出的目标物的标记图案的第二中心点在图像坐标系下的位置信息确定识别出的目标物是否有重复;Determining whether the identified target object is repeated based on the position information of the second center point of the marking pattern of the identified target object in the image coordinate system;当识别出的目标物中存在重复,则删除重复的所有识别出的目标物。When there are duplicates among the identified objects, all the duplicate identified objects are deleted.11.根据权利要求10所述的目标物追踪方法,其中,11. 
The target tracking method according to claim 10, wherein:所述目标物追踪方法还包括:The target object tracking method further includes:确定目标物的初始图像中所识别出的目标物的第一数目;determining a first number of objects identified in an initial image of the objects;确定目标物的下一帧图像中目标物的第二数目;Determining a second number of the target object in a next frame image of the target object;确定第一数目与第二数目的关系,并根据所述关系追踪目标物。A relationship between the first number and the second number is determined, and the target object is tracked based on the relationship.12.根据权利要求11所述的目标物追踪方法,其中,12. The target tracking method according to claim 11, wherein:根据所述关系追踪目标物包括:Tracking targets according to the relationship includes:当第二数目小于等于第一数目时,When the second number is less than or equal to the first number,确定初始图像中每个目标物在图像坐标系下的位置数据和下一帧图像中每个目标物在图像坐标系下的位置数据;Determine the position data of each target object in the initial image in the image coordinate system and the position data of each target object in the next frame image in the image coordinate system;确定初始图像中的目标物在图像坐标系下的位置数据与下一帧图像中的目标物在图像坐标系下的位置数据之间的距离,并将距离最近的位置数据作为配对数据;Determine the distance between the position data of the target object in the initial image in the image coordinate system and the position data of the target object in the next frame image in the image coordinate system, and use the position data with the closest distance as the pairing data;利用下一帧图像中的目标物在图像坐标系下的位置数据更新与下一帧图像中的目标物在图像坐标系下的位置数据配对的初始图像中的目标物在图像坐标系下的位置数据,删除未配对的目标物的位置数据。The position data of the target object in the next frame image in the image coordinate system is used to update the position data of the target object in the initial image paired with the position data of the target object in the next frame image in the image coordinate system, and the position data of the unpaired target object is deleted.13.根据权利要求12所述的目标物追踪方法,其中,13. 
The target tracking method according to claim 12, wherein tracking the target objects according to the relationship includes:
when the second number is greater than the first number,
recognizing the reference patterns of the target objects based on a next frame RGB image of the target objects;
determining, based on the recognized reference pattern of each identified target object, the target object corresponding to that reference pattern;
determining the position data of all target objects in the next frame RGB image in the image coordinate system and the number of target objects.

14. A target tracking device, wherein the target tracking device is adapted to track at least two target objects, each target object includes a marking pattern and a reference pattern located in the same plane, the reference pattern is close to the periphery of the target object, and the reference pattern of each target object is different, the target tracking device comprising:
a camera configured to capture an RGB image of the target objects;
a recognition module communicatively connected to the camera and configured to recognize the reference patterns of the target objects based on the RGB image of the target objects from the camera;
a tracking module communicatively connected to the camera and to the recognition module, respectively, and configured to determine, based on the reference pattern of a target object recognized by the recognition module, the target object corresponding to that recognized reference pattern.

15.
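The two branches of claims 12 and 13 amount to a dispatch on the relationship between the two counts. The sketch below assumes the pairing and re-identification steps are supplied as callables (`pair_fn`, `redetect_fn`), which are placeholders introduced here, not names from the patent.

```python
def track_step(tracked, detections, pair_fn, redetect_fn):
    """Dispatch of claims 11-13: compare the first number (targets
    currently tracked) with the second number (detections in the next
    frame) and choose the tracking branch accordingly."""
    if len(detections) <= len(tracked):
        # second number <= first number: pair by closest distance (claim 12)
        return pair_fn(tracked, detections)
    # second number > first number: re-identify every target from the
    # reference patterns in the next RGB frame (claim 13)
    return redetect_fn()
```

This keeps the cheap distance-based pairing on the common path and falls back to full reference-pattern recognition only when new targets appear.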
The target tracking device according to claim 14, wherein the tracking module is further configured to determine a first number of target objects in an initial image of the target objects and a second number of target objects in a next frame image, determine a relationship between the first number and the second number, and track the target objects according to the relationship.

16. An electronic device, comprising a memory and a processor, the memory storing a program, wherein the processor implements the target tracking method according to any one of claims 1-13 when executing the program on the memory.

17. A readable storage medium storing a computer-readable program or instructions which, when executed by a processor, implement the target tracking method according to any one of claims 1-13.
CN202311472737.9A | 2023-11-07 (priority) | 2023-11-07 (filing) | Target tracking method and device, electronic device and readable storage medium | Pending | CN119991780A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311472737.9A (CN119991780A (en)) | 2023-11-07 | 2023-11-07 | Target tracking method and device, electronic device and readable storage medium


Publications (1)

Publication Number | Publication Date
CN119991780A true | CN119991780A

Family

ID=95623194

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202311472737.9A (Pending, CN119991780A (en)) | Target tracking method and device, electronic device and readable storage medium | 2023-11-07 | 2023-11-07

Country Status (1)

Country | Link
CN (1) | CN119991780A (en)

Similar Documents

Publication | Publication Date | Title

CN110478892B (en) A three-dimensional interactive method and system
JP6507730B2 (en) Coordinate transformation parameter determination device, coordinate transformation parameter determination method, and computer program for coordinate transformation parameter determination
US9330307B2 (en) Learning based estimation of hand and finger pose
JP5699788B2 (en) Screen area detection method and system
JP6587421B2 (en) Information processing apparatus, information processing method, and program
CN111291584B (en) Method and system for identifying two-dimensional code position
JP5873442B2 (en) Object detection apparatus and object detection method
WO2018176938A1 (en) Method and device for extracting center of infrared light spot, and electronic device
JP2013089252A (en) Video processing method and device
CN107248174A (en) A kind of method for tracking target based on TLD algorithms
CN108986152A (en) A kind of foreign matter detecting method and device based on difference image
CN112132907A (en) A camera calibration method, device, electronic device and storage medium
CN111160291B (en) Human eye detection method based on depth information and CNN
CN107038758B (en) An Augmented Reality 3D Registration Method Based on ORB Operator
CN117173225B (en) High-precision registration method for complex PCB
CN115511716B (en) A multi-view global map stitching method based on calibration plate
CN112819892A (en) Image processing method and device
JP5794427B2 (en) Marker generation device, marker generation detection system, marker generation detection device, marker, marker generation method and program thereof
JP2018055367A (en) Image processing device, image processing method, and program
CN112634377B (en) Camera calibration method, terminal and computer readable storage medium of sweeping robot
JP2019036030A (en) Object detection apparatus, object detection method, and object detection program
WO2023193763A1 (en) Data processing method and apparatus, and tracking mark, electronic device and storage medium
CN107767366B (en) A transmission line fitting method and device
CN106067025A (en) A kind of recognition methods of Chinese chess beginning in kind
JP4550768B2 (en) Image detection method and image detection apparatus

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
