Disclosure of Invention
To address the above problems, the invention provides an interactive advertising apparatus that offers a somatosensory interaction function and is intuitive, convenient and practical, together with a working method thereof.
In order to achieve the purpose, the invention adopts the following technical scheme:
there is provided an interactive advertising apparatus, comprising:
a display device;
the somatosensory interaction device is used for capturing user action information;
the main control device is electrically connected with the display device and the somatosensory interaction device and is used for controlling the display device to play video information, identifying the selection of the user on the advertisement product according to the user action information captured by the somatosensory interaction device, evaluating the matching degree of the user action and the video action related to the selected advertisement product, and judging whether to output a shipment instruction according to the matching degree;
and the delivery executing device is electrically connected with the main control device and is used for controlling the advertising equipment to output the advertising products according to the delivery instruction sent by the main control device.
The somatosensory interaction device comprises an infrared camera or an RGB-D camera to record user action information comprising depth data and user skeleton node data.
Optionally, the somatosensory interaction device further comprises an optical camera.
According to an embodiment of the present invention, the interactive advertisement device may further include a communication device electrically connected to the main control device, and configured to connect to the internet according to a control instruction transmitted from the main control device, so as to implement remote operation of the advertisement device.
According to an embodiment of the present invention, the interactive advertisement device may further include a sound generating device electrically connected to the main control device, and configured to play the voice message according to a control instruction transmitted from the main control device.
In addition, the invention also provides a working method of the interactive advertising equipment, which comprises the following steps:
S10, prompting the user to select an advertisement product;
S20, capturing user action information and identifying the user's selection;
S30, detecting whether the advertisement product selected by the user is out of stock:
if not, playing a video related to the advertisement product selected by the user, and prompting the user to imitate the video action;
S40, capturing user action information, and evaluating the matching degree between the user action and the video action;
and S50, judging whether to output the advertisement product selected by the user according to the matching degree.
According to an embodiment of the present invention, in the above steps S20 and S40, the captured user motion information includes depth data and user skeletal node data.
Specifically, in the above step S40, the degree of matching between the user motion and the video motion is evaluated based on the following steps:
S100, selecting user skeleton nodes and user contour nodes based on the depth data;
S200, constructing user limb vectors based on the user skeleton nodes, calculating the spatial angle between each user limb vector and the corresponding video template limb vector, weighting and normalizing the angles, and calculating the accumulated error of the spatial angles between the user limb vectors and the corresponding video template limb vectors, which serves as the human body action difference degree based on skeleton node analysis;
S300, constructing user contour vectors based on the user contour nodes, calculating the spatial angle between every two adjacent user contour vectors, constructing an energy function from the differences between these spatial angles and those of the corresponding video template contour vectors, and solving the minimum value of the energy function, which serves as the human body action difference degree based on contour node analysis;
S400, weighting and summing the difference degree based on skeleton node analysis and the difference degree based on contour node analysis, the weighted sum serving as an evaluation parameter measuring the matching degree between the user action and the video action.
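The S400 fusion step can be sketched as follows. The weight values and the mapping from accumulated difference to a matching degree are illustrative assumptions for this sketch, not values specified by the invention:

```python
# Minimal sketch of step S400: fusing the two per-frame difference degrees
# into one evaluation parameter. Weights and the difference-to-matching
# conversion are illustrative placeholders.

def fuse_difference_scores(skeleton_diff: float, contour_diff: float,
                           w_skel: float = 0.6, w_contour: float = 0.4) -> float:
    """Weighted sum of the two difference degrees (S400)."""
    return w_skel * skeleton_diff + w_contour * contour_diff

def matching_degree(total_diff: float, max_diff: float = 1.0) -> float:
    """Map an accumulated difference onto a 0..1 matching degree."""
    return max(0.0, 1.0 - total_diff / max_diff)
```

A smaller fused difference then corresponds to a higher matching degree, which is what step S50 compares against its threshold.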
Further, in the above step S30, if the product is out of stock, out-of-stock information is presented to the user and is also transmitted to the operator via the internet.
Compared with the prior art, the invention has the beneficial effects that:
1) The invention provides advertising equipment with a somatosensory interaction device and a main control device preset with an image processing algorithm. The equipment recognizes the user's gestures, postures and other actions to switch product selection, dispensing with the traditional physical commodity display window and saving hardware cost.
2) After the user selects an advertisement product, the image processing algorithm evaluates the matching degree between the user's action and the video action related to that product, and the product is provided to the user as a reward according to the matching degree. This interaction mode improves user participation and interest, and helps expand the product's reach.
3) According to the invention, the infrared camera or the RGB-D camera is preferably selected to capture the user action information comprising depth data and user skeleton node data, and the human body action automatic evaluation algorithm based on skeleton node analysis and contour node analysis is combined to accurately position the body joint points and gestures of the user and the human body posture direction, so that the evaluation result accuracy is higher.
4) The invention configures the communication device for the advertising equipment to access the Internet, and can further realize the remote operation and management of the advertising equipment.
The invention changes the traditional way of delivering advertisements: it realizes somatosensory interaction between the user and the advertisement video, letting users experience the product themselves while enhancing its promotional effect.
Detailed Description
Fig. 1 is a schematic diagram illustrating an interactive advertisement device according to an embodiment of the present invention. The embodiment is an interactive advertising beverage machine, which mainly comprises a main control device 10, a somatosensory interaction device 20 electrically connected with the main control device 10, a display device 30, a sound generating device 40, a delivery execution device 50, a refrigerating device 60 and a communication device 70. On this basis, the equipment can be expanded and adjusted according to user needs or operating conditions; for example, in cold regions a heating device can replace the refrigerating device. It will be understood by those skilled in the art that various changes in form and detail may be made to the embodiments without departing from the spirit of the invention as disclosed.
Different from the traditional advertising equipment, the interactive advertising equipment provided by the invention mainly utilizes the motion sensing interaction device 20 and the display device 30 to realize interaction with a user, switches product selection through gesture recognition in the motion sensing interaction, analyzes and evaluates the matching degree of the user action and the action of the video related to the advertising product in real time by utilizing an image processing algorithm preset in the main control device 10 after the user selects a certain advertising product, and delivers the product as a reward according to the action matching degree. The following describes the functions and actions of each device in detail by taking an interactive advertisement beverage machine as an example, with reference to the accompanying drawings, so as to fully understand and implement the implementation process of how to apply technical means to solve the technical problems and achieve the technical effects.
The main control device 10 is a core control component of the interactive advertising beverage machine. In this embodiment, a computer is preferably used as the main control device 10 to perform arithmetic and logical operations, so as to control and schedule other devices in the interactive advertisement beverage machine to work in coordination.
The somatosensory interaction device 20 is a key component for realizing human-computer interaction of the interactive advertisement beverage machine. In order to accurately capture the user action, the present embodiment preferably selects an infrared camera and optical camera combination kit to collect the human body action information during the advertisement interaction process. Specifically, according to a control command sent by the main control device 10, the infrared camera emits infrared rays at certain time intervals, detects infrared rays returned by irradiating the human body, and transmits a detection result to the main control device 10, and the main control device 10 obtains information such as depth data and human skeleton node data by calculating time and phase difference of reflected infrared rays, and further tracks the three-dimensional motion of the human body. Optical cameras are generally used for graphic image processing applications such as static motion recognition, face recognition, scene recognition and the like.
The display device 30 displays a prompt message to the user according to a control command from the main control device 10, and plays a predetermined advertisement video or the like.
The sound generating device 40 plays the voice information to the user according to the control command from the main control device 10.
The shipment executing device 50 controls the interactive advertising beverage machine to ship, such as outputting the beverage product selected by the user to the user for reward, according to the shipment instruction sent by the main control device 10.
The refrigerating device 60 refrigerates and cools the beverage products stored in the beverage machine according to the control command sent by the main control device 10.
The communication device 70 is connected to the internet according to the control command sent by the main control device 10, and performs data interaction with a communication device at the operator control end in the internet, so that an operator can inquire the outflow volume, the inventory, the running state and the fault condition of a machine at any time, add new interactive applications, update a system, analyze user behaviors and the like.
Of course, there are also power supply devices that supply operating voltages to the respective devices.
Fig. 2 is a flow chart of a working method of the interactive advertising beverage machine. It should be noted that the method is only a preferred embodiment of the present invention, and all equivalent process changes made by using the contents of the description and the drawings of the present invention are included in the technical solution of the present invention.
S10, the main control device sends a control instruction to make the display device show a selection panel and related prompt information, prompting the user to select the beverage product of their choice through somatosensory interaction.
For example, the user may be instructed to switch between beverages by swinging a hand left or right, and to confirm the selected beverage by raising a hand.
S20, the main control device receives user action information such as depth data and user skeleton node data recorded by the somatosensory interaction device, and the user selection is identified through a built-in image processing algorithm;
S30, the main control device detects whether the beverage selected by the user is out of stock:
if it is out of stock, out-of-stock information is prompted and the process returns to step S10;
if it is in stock, the main control device sends a control instruction to make the display device play the corresponding advertisement video and prompt the user to imitate the video action;
S40, while the user imitates the video action, the main control device receives the user action information, such as depth data and user skeleton node data, recorded by the somatosensory interaction device, and evaluates the matching degree between the user action and the video action through the built-in image processing algorithm;
S50, the main control device determines whether the matching degree meets the requirement, for example, whether it is higher than a preset threshold of 80%:
if not, the process returns to step S10;
if it does, the main control device sends a shipment instruction, the shipment execution device starts to work, and the beverage machine outputs the beverage product to the user as a reward.
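The S10–S50 control flow of the beverage machine can be sketched as follows. The callbacks `in_stock` and `evaluate_match` are hypothetical placeholders standing in for the main control device's interfaces, not names from the invention:

```python
# Hedged sketch of the fig. 2 working method: one interaction round,
# returning True only when a product is actually dispensed.

MATCH_THRESHOLD = 0.80  # the 80% threshold of step S50

def run_round(selected, in_stock, evaluate_match):
    """selected: product chosen in S10/S20 (somatosensory selection);
    in_stock / evaluate_match: placeholder device callbacks."""
    if not in_stock(selected):          # S30: out of stock -> back to S10
        return False
    match = evaluate_match(selected)    # S40: imitate the video, score it
    return match > MATCH_THRESHOLD      # S50: dispense only above threshold
```

In the failing cases the real machine loops back to S10; the sketch simply reports no shipment.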
In step S30, if the selected beverage is out of stock, the user may be prompted with out-of-stock information, which may also be sent to the operator via the internet.
In step S40, the main control device may evaluate the matching degree between the user motion and the video motion through the following human motion automatic evaluation algorithm based on the bone node analysis and the contour node analysis. As shown in fig. 3, the automatic human body motion estimation algorithm includes the following steps:
S100, selecting user skeleton nodes and user contour nodes based on the depth data;
S200, constructing user limb vectors based on the user skeleton nodes, calculating the spatial angle between each user limb vector and the corresponding video template limb vector, weighting and normalizing the angles, and calculating the accumulated error of the spatial angles between the user limb vectors and the corresponding video template limb vectors, which serves as the human body action difference degree based on skeleton node analysis;
S300, constructing user contour vectors based on the user contour nodes, calculating the spatial angle between every two adjacent user contour vectors, constructing an energy function from the differences between these spatial angles and those of the corresponding video template contour vectors, and solving the minimum value of the energy function, which serves as the human body action difference degree based on contour node analysis;
S400, weighting and summing the difference degree based on skeleton node analysis and the difference degree based on contour node analysis, the weighted sum serving as an evaluation parameter measuring the matching degree between the user action and the video action.
When the above human body motion automatic evaluation algorithm is adopted, the somatosensory interaction device 20 preferably captures user motion information by using an RGB-D camera.
Of course, the invention may also identify or evaluate user actions with other image processing algorithms, for example by inferring the human body posture from two-dimensional image sequences, but such approaches are relatively less accurate.
The above-mentioned automatic human body motion estimation algorithm is further described in detail by an embodiment.
In the step S100, the RGB-D device may be used to acquire depth data, so as to ensure that the field of view of the device includes all of the human body, then convert the acquired depth data into a depth image with a certain resolution, and create a human body segmentation image based on the depth image.
Human skeleton nodes for analyzing human body movement are then determined in the human body segmentation image by fitting. In this embodiment, the following twenty bone nodes are preferred: head, neck, left shoulder, left elbow, left wrist, left hand, right shoulder, right elbow, right wrist, right hand, spine, waist, left hip, left knee, left ankle, left foot, right hip, right knee, right ankle, and right foot. According to the human body's motion patterns, the bone nodes can be roughly divided into the following categories by their degree of influence on motion:
Trunk nodes: the spine, waist, left shoulder, right shoulder, left hip, right hip and neck, seven nodes in all. Observation shows that the trunk nodes generally move together and rarely move independently, so the human trunk can be regarded as a rigid body with large motion inertia, and the motion of the trunk nodes is not considered in the similarity measurement of overall image registration.
A first-level node: the head, left elbow, right elbow, left knee, and right knee, which are directly connected to the trunk. A small amount of motion deviation of the primary nodes can cause visually large differences.
Secondary nodes: a left wrist, a right wrist, a left ankle and a right ankle which are connected with the primary node. Compared with the primary node, the secondary node is farther away from the trunk of the human body, the movement trend is only influenced by the primary node, and free rotation is easily carried out in the space, so that the movement amplitude is larger, but the tolerance of the angle deviation is higher in vision.
End node: left hand, right hand, left foot, right foot. The distance between the end node and the secondary node is very short, the flexibility is high, and inaccurate positioning is easily caused by noise interference during tracking imaging, so that the influence of the end node on human body actions is ignored in the embodiment.
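The four node categories above can be written out as a small sketch; the identifier spellings are our own renderings of the twenty nodes listed in the embodiment:

```python
# Sketch of the four bone-node categories described in the embodiment.
TRUNK = {"spine", "waist", "left_shoulder", "right_shoulder",
         "left_hip", "right_hip", "neck"}           # treated as a rigid body
PRIMARY = {"head", "left_elbow", "right_elbow",
           "left_knee", "right_knee"}               # directly joined to the trunk
SECONDARY = {"left_wrist", "right_wrist",
             "left_ankle", "right_ankle"}           # joined to primary nodes
END = {"left_hand", "right_hand",
       "left_foot", "right_foot"}                   # ignored in the metric

ALL_NODES = TRUNK | PRIMARY | SECONDARY | END
assert len(ALL_NODES) == 20  # the twenty fitted bone nodes
```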
Human body contour nodes are then selected from the human body segmentation image. First, the human body contour line is extracted from the segmentation image and converted into a representation as sequence points, from which the contour nodes used for analyzing human movement are selected. As shown in fig. 4, according to the characteristics of human limb movement, this embodiment preferably selects thirteen contour nodes: the left underarm, left elbow, left wrist, left hip, left knee, left ankle, crotch, right ankle, right knee, right hip, right wrist, right elbow and right underarm, selected as follows:
The left underarm contour node is selected by drawing a straight line parallel to the X axis through the left shoulder skeleton node and searching, on the portion of the contour line below that straight line, for the sequence point closest to the left shoulder skeleton node. The right underarm contour node is selected in the same way.
The right elbow contour node is selected by drawing a straight line parallel to the Y axis through the right elbow skeleton node and searching, on the portion of the contour line to the right of that straight line, for the sequence point closest to the right elbow skeleton node. The right wrist, right hip, right knee and right ankle contour nodes are selected in the same way.
The left elbow contour node is selected by drawing a straight line parallel to the Y axis through the left elbow skeleton node and searching, on the portion of the contour line to the left of that straight line, for the sequence point closest to the left elbow skeleton node. The left wrist, left hip, left knee and left ankle contour nodes are selected in the same way.
The crotch contour node is selected as shown in fig. 5: connect the left hip bone node and the left knee bone node and take the point one quarter of the way along this segment as point A; likewise connect the right hip bone node and the right knee bone node and take the point one quarter of the way along that segment as point B. Draw straight lines parallel to the vertical axis through A and B respectively, and search the portion of the contour line between the two straight lines for the sequence point P closest to the waist bone node O; P is taken as the crotch contour node.
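The selection rules above all share one pattern: keep the contour sequence points on one side of an axis-parallel line, then take the point nearest a reference skeleton node. A minimal sketch (function and parameter names are our own, for illustration only):

```python
import math

def nearest_contour_point(ref, contour, side):
    """ref: reference skeleton node (x, y); contour: list of (x, y)
    sequence points; side: predicate keeping points on the required side
    of the line, e.g. lambda p: p[1] < ref[1] for 'below a line parallel
    to the X axis through the reference node'."""
    candidates = [p for p in contour if side(p)]
    return min(candidates, key=lambda p: math.dist(ref, p))

# Left-underarm rule: among contour points below the horizontal line
# through the left-shoulder node, the nearest one wins.
shoulder = (2.0, 5.0)
contour = [(1.0, 6.0), (1.5, 4.5), (3.0, 2.0)]
print(nearest_contour_point(shoulder, contour,
                            lambda p: p[1] < shoulder[1]))  # prints (1.5, 4.5)
```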
As shown in fig. 6, it is a flowchart of the method for calculating human body motion difference degree based on human body skeleton node analysis in step S200 shown in fig. 3, and it includes the following steps:
S201, constructing human body limb vectors based on the human skeleton nodes, the limb vectors serving as descriptors of the human action data:
Because the coordinates of the human skeleton nodes have neither relativity nor directionality, the invention uses limb vectors instead of the skeleton nodes themselves as descriptors of the skeleton data. On the one hand, a limb vector has direction, and its spatial position can be represented by the three-dimensional coordinates of its skeleton nodes; on the other hand, a limb vector corresponds to a human limb, so the motion of the limb can be described by the motion of the vector, which greatly reduces the amount of data and the computational complexity. In addition, the human body's motion patterns show that the motion of the head and trunk has little influence on the overall action while the motion of the limbs has much more, so certain simplifications are adopted when describing actions with limb vectors. As shown in fig. 7, this embodiment selects the twelve skeleton nodes at the left and right wrist, elbow, shoulder, hip, knee and ankle joints as the endpoints of the limb vectors, with each vector directed from the higher-level node to the lower-level node, that is, from a trunk node to a primary node, and from a primary node to a secondary node.
S202, calculating the spatial angle between each human body limb vector and the corresponding template limb vector according to the following formula, so as to measure the matching degree between the human skeleton data acquired in real time and the corresponding points in the template skeleton data preset by the system:
θ = arccos[(x1x2 + y1y2 + z1z2) / (√(x1² + y1² + z1²) × √(x2² + y2² + z2²))]
In the above formula, θ is the spatial angle between a human body limb vector and the corresponding template limb vector (also called the limb vector spatial angle); a smaller value indicates a closer match, so θ measures the matching degree between the human action and the template action in the skeleton-node-based analysis. V1 and V2 denote the human body limb vector and the template limb vector, and (x1, y1, z1) and (x2, y2, z2) are their respective three-dimensional coordinates. The three-dimensional coordinates of a limb vector are determined by the three-dimensional coordinates of its human skeleton nodes, which in turn are determined from the depth data obtained in step S100. In this embodiment, a spatial rectangular coordinate system is preferably established with the human waist bone node as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis; the three-dimensional coordinates of the human skeleton nodes and limb vectors are rectangular coordinates in this coordinate system and are of the same order of magnitude.
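The spatial angle of step S202 is the standard angle between two 3-D vectors; a minimal sketch (function name is our own):

```python
import math

def spatial_angle(v1, v2):
    """Angle theta in radians between two 3-D vectors,
    theta = arccos(v1 . v2 / (|v1| |v2|))."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    # clamp guards against floating-point drift just outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / norm)))

print(spatial_angle((1, 0, 0), (0, 1, 0)))  # orthogonal vectors -> pi/2
```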
As noted above, when the human body moves, action differences at different types of skeleton nodes produce different subjective impressions on observers. Therefore, based on extensive data comparison and practical experience, different weights are assigned in the difference expression to the limb vectors related to the primary nodes, those related to the secondary nodes, those of the upper body and those of the lower body. The specific settings may be as follows:
In this embodiment, the secondary nodes are farther from the trunk than the primary nodes, their motion amplitude is influenced only by the primary nodes, and they are easier to control during motion, so the limb vectors related to the secondary nodes (limb vectors 4, 5, 6 and 7 in fig. 7) carry a smaller share of the difficulty of action matching. The primary nodes are closer to the trunk and are affected both by the trunk's motion inertia and by the motion amplitude of the secondary nodes, so the limb vectors related to the primary nodes (limb vectors 0, 1, 2 and 3 in fig. 7) carry a larger share. The possibility that an individual spatial angle becomes excessively large must also be considered; to keep the spatial angles between the human body limb vectors and the corresponding template limb vectors of the same action as even as possible, this embodiment also uses the standard deviation of those spatial angles as a factor in measuring the action matching degree. Finally, to balance the visual impression, a smaller weight is preferably given to the difference data of the limb vectors related to the upper limbs (limb vectors 0, 1, 4 and 5 in fig. 7) and a larger weight to those related to the lower limbs (limb vectors 2, 3, 6 and 7 in fig. 7).
S203, weighting and normalizing the spatial included angles, and calculating the accumulated error of the spatial included angles between the human body limb vectors and the corresponding template limb vectors according to the following formula to serve as the human body action difference based on human body skeleton node analysis:
Metric=SD+AngDiff1×f1+AngDiff2×f2+AngDiffU×fU+AngDiffL×fL
In the above formula, Metric is the accumulated error of the spatial angles between the human body limb vectors and the corresponding template limb vectors, and SD is the standard deviation of the limb vector spatial angles. AngDiffU, AngDiffL, AngDiff1 and AngDiff2 respectively denote the cumulative sums of the limb vector spatial angles related to the upper limbs, the lower limbs, the primary nodes and the secondary nodes in the same action. In this embodiment only the eight limb vectors 0 to 7 shown in fig. 7 are considered, so:
AngDiff1 = θ0 + θ1 + θ2 + θ3, AngDiff2 = θ4 + θ5 + θ6 + θ7,
AngDiffU = θ0 + θ1 + θ4 + θ5, AngDiffL = θ2 + θ3 + θ6 + θ7,
where θi, i ∈ {0, 1, …, 7}, is the spatial angle between the i-th human body limb vector and the corresponding template limb vector.
fU, fL, f1 and f2 respectively denote the weights of the limb vectors related to the upper limbs, the lower limbs, the primary nodes and the secondary nodes in the difference expression, embodying the degree of influence of the corresponding limb vectors on the human body action.
The weights are derived from a multi-group experimental sample set, in which AngDiff'U, AngDiff'L, AngDiff'1 and AngDiff'2 respectively denote the cumulative sums of the limb vector spatial angles related to all upper limbs, lower limbs, primary nodes and secondary nodes. Here a group of experimental samples consists of multiple experimental samples. An experimental sample means: if a preset template action is A (for example, stretching both arms out vertically) and a human action similar to template action A collected at some moment is a, then template action A and human action a together constitute one experimental sample of template action A. A group of experimental samples means: for the same template action A, similar actions of the same person at different times and similar actions of different persons, together with the template action, form one group. Each template action has one group of experimental samples, and multiple different template actions (for example, template actions A, B and C) construct the multi-group experimental sample set.
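The accumulated-error metric of step S203, Metric = SD + AngDiff1×f1 + AngDiff2×f2 + AngDiffU×fU + AngDiffL×fL, can be sketched directly from the fig. 7 groupings (vectors 0–3 primary, 4–7 secondary, 0/1/4/5 upper limbs, 2/3/6/7 lower limbs). The default weight values below are placeholders; the invention derives the actual weights from experimental samples:

```python
import statistics

PRIMARY_IDX, SECONDARY_IDX = (0, 1, 2, 3), (4, 5, 6, 7)
UPPER_IDX, LOWER_IDX = (0, 1, 4, 5), (2, 3, 6, 7)

def metric(theta, f1=0.3, f2=0.2, fU=0.2, fL=0.3):
    """theta: spatial angles of the eight limb vectors vs. the template.
    Returns SD + weighted cumulative angle sums per vector group."""
    sd = statistics.pstdev(theta)                    # standard deviation term
    ang = lambda idx: sum(theta[i] for i in idx)     # per-group cumulative sum
    return (sd + ang(PRIMARY_IDX) * f1 + ang(SECONDARY_IDX) * f2
            + ang(UPPER_IDX) * fU + ang(LOWER_IDX) * fL)
```

A perfect match (all angles zero) yields Metric = 0; larger angles and more uneven angle distributions both increase the metric.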
As shown in fig. 8, it is a flowchart of the method for calculating human body motion difference degree based on human body contour node analysis in step S300 shown in fig. 3, and it includes the following steps:
S301, constructing human body contour vectors based on the human body contour nodes:
The human body contour nodes are connected end to end in sequence, each pair of adjacent nodes forming a contour vector. In this embodiment, thirteen human body contour vectors are formed by sequentially connecting the left underarm, left elbow, left wrist, left hip, left knee, left ankle, crotch, right ankle, right knee, right hip, right wrist, right elbow and right underarm end to end.
S302, calculating the spatial angle between every two adjacent contour vectors of the human body according to the following formula, which serves as a descriptor of the human motion data:
θ = arccos[(x1x2 + y1y2 + z1z2) / (√(x1² + y1² + z1²) × √(x2² + y2² + z2²))]
In the above formula, θ is the spatial angle between two adjacent contour vectors of the human body (also called the contour vector spatial angle); V1 and V2 denote two adjacent human body contour vectors, and (x1, y1, z1) and (x2, y2, z2) here are their three-dimensional coordinates (unlike the definition in step S202 of the skeleton-node-based analysis method, where they refer to limb vectors). The three-dimensional coordinates of a contour vector are determined by the three-dimensional coordinates of its contour nodes, which are determined from the depth data obtained in step S100. As in the skeleton-node analysis, a spatial rectangular coordinate system is preferably established with the human waist bone node as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis; the three-dimensional coordinates of the contour nodes and contour vectors are rectangular coordinates in this coordinate system and of the same order of magnitude.
S303, calculating the difference value between the space included angle of each contour vector of the human body and the space included angles of all contour vectors of the template:
In this embodiment, the thirteen human body contour vectors constructed in step S301 yield thirteen contour vector spatial angles in step S302. The first human contour vector spatial angle is differenced with each of the thirteen template contour vector spatial angles, then the second with each of the thirteen, and so on, yielding 13 × 13 difference values in total.
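The all-pairs differencing of S303, combined with the squaring mentioned in S304, can be sketched as a small helper (shown on 2 angles for brevity; the embodiment uses 13):

```python
# Sketch of the S303/S304 preprocessing: the matrix of squared differences
# between every human contour-vector angle and every template angle.

def squared_difference_matrix(human_angles, template_angles):
    """Row i compares human angle i against every template angle."""
    return [[(h - t) ** 2 for t in template_angles] for h in human_angles]

m = squared_difference_matrix([1.0, 2.0], [1.0, 3.0])
# m[0] compares human angle 1.0 with template angles 1.0 and 3.0
```

With thirteen angles on each side this produces the 13 × 13 matrix described above.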
S304, constructing an energy function based on the difference values, and calculating the minimum value of the energy function as the human body action difference degree based on human body contour node analysis:
The 13 × 13 difference values obtained in step S303 are used as matrix elements to form a 13 × 13 difference matrix. Since the values of the matrix elements may be positive or negative, the matrix elements are squared in order to construct an energy function, and the energy function E(d) is constructed according to the following formula:

E(d) = Σ(s = 1 to j) [k1(s) − k2(s − d(s))]² + α × Σ(s = 1 to j − 1) |d(s) − d(s + 1)|

In the above formula, s is the sequence number of the contour vector space included angle, k1(s) is the contour vector space included angle corresponding to sequence number s in the template data, and d(s) represents the offset applied to sequence number s in the human body data to be matched. In practical applications, the order of the contour vector space included angles in the template data may not be consistent with their order in the human body data to be matched; for example, the first space included angle in the template data may be the left underarm space included angle while the third space included angle in the human body data to be matched is the left underarm space included angle. Therefore d(s) is defined so that, for the s-th space included angle in the template data, after s is offset by d(s) in the human body data to be matched, the contour vector space included angles of the template data and the human body data correspond to each other; k2(s − d(s)) is thus the contour vector space included angle in the human body data to be matched after the offset transformation. α is a smoothing coefficient that penalizes abrupt changes between adjacent offsets, and j is the number of contour vector space included angles, which in this embodiment is 13.
Then, the minimum value of the energy function is calculated using a graph cut algorithm and taken as the human body action difference degree based on human body contour node analysis.
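The text specifies a graph cut algorithm for the minimisation; as a simplified stand-in, a one-dimensional offset labelling can also be minimised exactly by dynamic programming. The sketch below assumes a bounded offset range, a squared data term with α-weighted smoothness between adjacent offsets, and wrap-around indexing for out-of-range offsets (all assumptions of this sketch, not details given in the text):

```python
def min_energy(k1, k2, alpha, max_offset=3):
    """Minimise a data-plus-smoothness energy over per-index offsets
    d(s): data term (k1[s] - k2[s - d(s)])**2, plus smoothness term
    alpha * |d(s) - d(s-1)| between neighbouring sequence numbers."""
    j = len(k1)
    offsets = range(-max_offset, max_offset + 1)
    # best[d]: minimal energy of a labelling whose latest offset is d.
    best = {d: (k1[0] - k2[(0 - d) % j]) ** 2 for d in offsets}
    for s in range(1, j):
        best = {d: (k1[s] - k2[(s - d) % j]) ** 2
                   + min(best[p] + alpha * abs(d - p) for p in offsets)
                for d in offsets}
    return min(best.values())
```

When the human body angles are a cyclic shift of the template angles, the minimum energy drops to zero.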
S400, the human body action difference degrees based on human skeleton node analysis and on human contour node analysis are weighted and summed according to the following formula, and the result is used as an evaluation parameter measuring the matching degree between the human body action and the template action: the larger the value, the lower the similarity between the human body action and the template action; the smaller the value, the higher the similarity. In this way, comprehensive and accurate automatic evaluation of the human body action is realized and the technical effect of the invention is achieved.
D = a × Dskeleton + (1 − a) × Dshape
In the above formula, D is the evaluation parameter measuring the matching degree between the human body action and the template action; Dskeleton is the human body action difference degree based on human skeleton node analysis, with weight coefficient a; and Dshape is the normalized human body action difference degree based on human contour node analysis, with weight coefficient (1 − a).
In the above weight setting process, a large number of data tests are required, and the value of the weight coefficient a is determined in combination with subjective human evaluation. The weight coefficient can be further adjusted according to specific requirements. For example, whether the human body action contains self-occlusion can be determined from the depth data and the human skeleton node data obtained in step S100. When the human body action is self-occluded, part of the limb contour is lost, and the method combined with human contour node analysis cannot be applied; in this case, the weight coefficient of the human body action difference degree based on human skeleton node analysis must be forcibly set to 1, so that only the human skeleton node analysis method is used to evaluate the action matching degree.
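The weighted fusion with the forced fallback can be sketched as follows; `a = 0.6` is a placeholder default, since the text leaves the experimentally determined value of the weight coefficient open:

```python
def action_difference(d_skeleton, d_shape, a=0.6, self_occluded=False):
    """Evaluation parameter D = a * D_skeleton + (1 - a) * D_shape.
    If self-occlusion was detected, the contour-based difference
    degree is unreliable, so the skeleton weight is forced to 1."""
    if self_occluded:
        a = 1.0
    return a * d_skeleton + (1 - a) * d_shape
```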
The method for judging whether the human body action contains self-occlusion comprises the following steps:
S401, searching the edges of the human body segmentation image and finding depth mutation pixels:
The edges of the human body segmentation image are searched to find pixels whose depth differs from that of neighboring pixels by more than a given threshold; the depth at such a pixel is considered to have a sudden change, and the pixel is a depth mutation pixel.
S402, judging whether the depth mutation pixel is a human body image pixel:
The coordinates of each depth mutation pixel are checked. If a depth mutation pixel lies within the range of the human body image, i.e. it is a human body image pixel, this indicates that a depth mutation occurs on the human body itself, from which it can be inferred that the human body action contains self-occlusion.
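Steps S401 and S402 can be sketched as below. This illustrative version scans horizontal neighbour pairs only and treats the body segmentation as a boolean mask, both simplifications of the edge search described in the text:

```python
def has_self_occlusion(depth, body_mask, threshold):
    """Flag a depth mutation wherever the depth gap between two
    horizontally adjacent pixels exceeds `threshold` (S401); if the
    mutation lies on human body pixels, infer self-occlusion (S402)."""
    rows, cols = len(depth), len(depth[0])
    for r in range(rows):
        for c in range(cols - 1):
            if abs(depth[r][c] - depth[r][c + 1]) > threshold:
                if body_mask[r][c] and body_mask[r][c + 1]:
                    return True
    return False
```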
The evaluation result integrating human skeleton node analysis and human contour node analysis automatically evaluates the matching degree between the human body action and the template action. To a certain extent, it overcomes the defect in the prior art of comparing actions based only on human skeleton node analysis or only on human contour node analysis, and can better realize accurate evaluation of the human body action.
In the interactive advertising equipment provided by the invention, the somatosensory interaction device cooperates with a main control device preset with image processing algorithms to track and position the body joint points, gestures and body posture directions of a user, identify the user's selection of an advertising product, and evaluate the matching degree between the user's action and the video action related to the selected advertising product; if the matching degree meets the expected requirement, the product is issued to the user as a reward. This interactive mode enhances the user's participation and immersion and helps expand the promotion of the advertised product.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.