Disclosure of Invention
Embodiments of the present application provide a method, a device, and a system for detecting articles, which can solve the problems in the related art. The technical solutions are as follows:
In one aspect, an embodiment of the present application provides a method for detecting an article, including:
acquiring pick-and-place trigger data for an article pick-and-place cabinet;
when the article pick-and-place cabinet is determined to be in a pick-and-place trigger state based on the trigger data, determining a pick-and-place area based on the trigger data;
acquiring an image and weight data related to the pick-and-place time, and obtaining target image data based on the image related to the pick-and-place time;
and determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, infrared correlation units are arranged on two sides of the access opening of the article pick-and-place cabinet;
the acquiring pick-and-place trigger data for the article pick-and-place cabinet includes:
acquiring an infrared signal emitted by the infrared correlation unit;
after the acquiring pick-and-place trigger data for the article pick-and-place cabinet, the method further includes:
detecting an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit;
and when a change in the number of infrared cut-off signals is detected, determining that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end, the infrared emitting end is arranged at the lower side of the access opening of the article pick-and-place cabinet, and the infrared receiving end is arranged at the upper side of the access opening of the article pick-and-place cabinet.
Optionally, a camera is arranged at the access opening of the article pick-and-place cabinet, the field of view of the camera covers the access opening, and the optical axis of the camera is parallel to the access opening;
the acquiring pick-and-place trigger data for the article pick-and-place cabinet includes:
acquiring a current image of the access opening captured by the camera;
after the acquiring pick-and-place trigger data for the article pick-and-place cabinet, the method further includes:
obtaining an optical flow vector based on the current image of the access opening captured by the camera;
and when an optical flow vector in a pick-and-place operation direction appears in the access-opening area, determining that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, a camera is arranged at the access opening of the article pick-and-place cabinet, the field of view of the camera covers the access opening, and a marker is arranged at the edge of the access opening of the article pick-and-place cabinet;
the acquiring pick-and-place trigger data for the article pick-and-place cabinet includes:
acquiring a current image of the access opening of the article pick-and-place cabinet captured by the camera;
after the acquiring pick-and-place trigger data for the article pick-and-place cabinet, the method further includes:
detecting marker information in the current image;
and determining, based on the detection result, whether the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the acquiring an image related to the pick-and-place time includes:
acquiring an image at the pick-and-place time, or a reference number of images before and after the pick-and-place time.
Optionally, the determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data includes:
sending the information of the pick-and-place area, the weight data, and the target image data to a cloud end, and determining, at the cloud end, the article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data includes:
filtering the target image data based on the information of the pick-and-place area to obtain filtered image data;
and identifying, based on the weight data and the target image data, article pick-and-place information in the filtered image data, where the article pick-and-place information includes category and quantity.
Optionally, the obtaining target image data based on the image related to the pick-and-place time includes:
acquiring all image data of the image related to the pick-and-place time, and taking the entire image data as the target image data.
Optionally, the obtaining target image data based on the image related to the pick-and-place time includes:
performing detection on the image related to the pick-and-place time to obtain local image data of the area where an article is located and the coordinates of the local image data, and taking the local image data of the area where the article is located and the coordinates of the local image data as the target image data.
Optionally, the determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data includes:
identifying article information in the target image data, the article information including position, category, and quantity;
determining the categories and quantities of the articles located in the pick-and-place area based on the positions in the article information and the information of the pick-and-place area;
and correcting the quantity of the articles located in the pick-and-place area based on the weight data, and obtaining the article pick-and-place information according to the corrected quantity and the categories of the articles located in the pick-and-place area.
Optionally, after the determining the categories and quantities of the articles located in the pick-and-place area based on the positions in the article information and the information of the pick-and-place area, the method further includes:
rechecking the quantity of the articles located in the pick-and-place area based on the weight data;
and when the recheck passes, taking the categories and quantities of the articles located in the pick-and-place area as the article pick-and-place information.
Optionally, the determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data includes:
filtering the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and comparing the filtered local image data with an article sample library;
when it is determined from the comparison result that the filtered local image data includes articles, determining the categories and quantities of the articles included in the filtered local image data;
and correcting the quantities based on the weight data, and determining the article pick-and-place information according to the correction result.
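By way of illustration only, the following minimal Python sketch shows one possible form of the weight-based quantity correction and recheck described above; it is not part of the claimed subject matter, and the function name, the per-category unit weight, and the tolerance are assumptions of the sketch.

# Hypothetical sketch: correcting a visually detected count with weight data.
# Assumes each category has a known unit weight (not specified in this
# application); names and thresholds are illustrative only.

def correct_count_by_weight(visual_count: int, unit_weight: float,
                            weight_delta: float, tolerance: float = 0.25) -> int:
    """Return the article count implied by the measured weight change,
    keeping the visual count when the two roughly agree (recheck passes)."""
    if unit_weight <= 0:
        return visual_count
    weight_count = round(abs(weight_delta) / unit_weight)
    # If the measured weight change matches the visual count within tolerance,
    # the visual result passes the recheck; otherwise trust the scale.
    if abs(abs(weight_delta) - visual_count * unit_weight) <= tolerance * unit_weight:
        return visual_count
    return weight_count

# Example: vision saw 2 bottles taken, but the shelf lost ~1.5 kg at 0.5 kg each.
print(correct_count_by_weight(2, 0.5, -1.5))  # -> 3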
There is also provided an article detection apparatus, the apparatus including:
a triggering component, configured to acquire pick-and-place trigger data for an article pick-and-place cabinet and send the trigger data to a first processor;
the first processor, configured to determine a pick-and-place area based on the trigger data when the article pick-and-place cabinet is determined to be in a pick-and-place trigger state based on the trigger data; acquire an image and weight data related to the pick-and-place time, and obtain target image data based on the image related to the pick-and-place time; and determine article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the triggering component includes an infrared correlation unit, and the infrared correlation unit is arranged on two sides of the access opening of the article pick-and-place cabinet;
the first processor is configured to detect an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit, and, when a change in the number of infrared cut-off signals is detected, determine that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end, the infrared emitting end is arranged at the lower side of the access opening of the article pick-and-place cabinet, and the infrared receiving end is arranged at the upper side of the access opening of the article pick-and-place cabinet.
Optionally, the triggering component includes a camera, the camera is arranged at the access opening of the article pick-and-place cabinet, the field of view of the camera covers the access opening, and the optical axis of the camera is parallel to the access opening;
the first processor is configured to obtain an optical flow vector based on the current image of the access opening captured by the camera, and, when an optical flow vector in a pick-and-place operation direction appears in the access-opening area, determine that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the triggering component includes a camera, the camera is arranged at the access opening of the article pick-and-place cabinet, the field of view of the camera covers the access opening, and a marker is arranged at the edge of the access opening of the article pick-and-place cabinet;
the first processor is configured to detect marker information in the current image of the access opening captured by the camera, and determine, based on the detection result, whether the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the first processor is configured to acquire an image at the pick-and-place time, or a reference number of images before and after the pick-and-place time.
Optionally, the first processor is configured to send the information of the pick-and-place area, the weight data, and the target image data to a cloud end, where the article pick-and-place information is determined based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the first processor is configured to filter the target image data based on the information of the pick-and-place area to obtain filtered image data;
and to identify, based on the weight data and the target image data, article information in the filtered image data, the article information including position, category, and quantity.
Optionally, the first processor is configured to acquire all image data of the image related to the pick-and-place time and take the entire image data as the target image data.
Optionally, the first processor is configured to perform object detection on the image related to the pick-and-place time to obtain local image data of the area where an article is located and the coordinates of the local image data, and take the local image data of the area where the article is located and the coordinates of the local image data as the target image data.
Optionally, the first processor is configured to determine the categories and quantities of the articles located in the pick-and-place area based on the positions in the article information and the information of the pick-and-place area; and correct the quantity of the articles located in the pick-and-place area based on the weight data, obtaining the article pick-and-place information according to the corrected quantity and the categories of the articles located in the pick-and-place area.
Optionally, the first processor is configured to filter the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and compare the filtered local image data with an article sample library; when it is determined from the comparison result that the filtered local image data includes articles, determine the categories and quantities of the articles included in the filtered local image data; and correct the quantities based on the weight data, determining the article pick-and-place information according to the correction result.
Optionally, the first processor is further configured to recheck the quantity of the articles located in the pick-and-place area based on the weight data, and, when the recheck passes, take the categories and quantities of the articles located in the pick-and-place area as the article pick-and-place information.
There is also provided an article detection system, the system including a triggering unit, an image acquisition unit, a weight acquisition unit, and an article detection unit; the triggering unit, the weight acquisition unit, and the image acquisition unit are connected with the article detection unit, and the image acquisition unit and the weight acquisition unit are also connected with the triggering unit;
the triggering unit is configured to acquire pick-and-place trigger data for the article pick-and-place cabinet, determine a pick-and-place area based on the trigger data when the article pick-and-place cabinet is determined to be in a pick-and-place trigger state based on the trigger data, send information of the pick-and-place area to the article detection unit, and send trigger signals to the image acquisition unit and the weight acquisition unit, triggering the image acquisition unit to collect image data and the weight acquisition unit to collect weight data;
the image acquisition unit is configured to collect an image related to the pick-and-place time based on the trigger signal, and send target image data obtained based on the image to the article detection unit;
the weight acquisition unit is configured to collect weight data related to the pick-and-place time based on the trigger signal, and send the weight data to the article detection unit;
the article detection unit is configured to determine article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the triggering unit includes an infrared correlation unit and a processor, the infrared correlation unit being arranged at the access opening of the article pick-and-place cabinet;
the processor is configured to detect an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit, and, when a change in the number of infrared cut-off signals is detected, determine that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end, the infrared emitting end is arranged at the lower side of the access opening of the article pick-and-place cabinet, and the infrared receiving end is arranged at the upper side of the access opening of the article pick-and-place cabinet.
Optionally, the image acquisition unit is further configured to collect a current image of the access opening of the article pick-and-place cabinet and send the current image to the triggering unit as trigger data;
the triggering unit is configured to obtain an optical flow vector based on the current image, and, when an optical flow vector in a pick-and-place operation direction appears in the access-opening area, determine that the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, a marker is arranged at the edge of the access opening of the article pick-and-place cabinet;
the image acquisition unit is further configured to collect a current image of the access opening of the article pick-and-place cabinet and send it to the triggering unit as trigger data;
the triggering unit is configured to detect marker information in the current image, and determine, based on the detection result, whether the article pick-and-place cabinet is in the pick-and-place trigger state.
Optionally, the image acquisition unit includes one camera, the camera is arranged at the access opening of the article pick-and-place cabinet, the monitoring area of the camera covers the entire access opening, and the optical axis of the camera is parallel to the access opening;
or the image acquisition unit includes a plurality of cameras, the monitoring area of each camera covers a part of the access opening of the article pick-and-place cabinet, the monitoring areas of the plurality of cameras together cover the entire access opening of the article pick-and-place cabinet, and the optical axis of each camera is parallel to the access opening.
Optionally, the system further includes a target detection unit connected with the image acquisition unit;
the image acquisition unit is configured to send the collected image related to the pick-and-place time to the target detection unit;
the target detection unit is configured to perform object detection on the image related to the pick-and-place time to obtain local image data of the area where an article is located and the coordinates of the local image data, and send the local image data of the area where the article is located and the coordinates of the local image data to the article detection unit as target image data;
the article detection unit is configured to determine the article pick-and-place information based on the information of the pick-and-place area, the weight data, the local image data of the area where the article is located, and the coordinates of the local image data.
Optionally, the system further includes a communication unit;
the triggering unit, the image acquisition unit, the weight acquisition unit, and the communication unit are arranged in the article pick-and-place cabinet, and the article detection unit is arranged at the cloud end.
There is also provided a computer device including a processor and a memory, the memory storing at least one instruction which, when executed by the processor, implements the article detection method according to any one of the above.
There is also provided a computer-readable storage medium storing at least one instruction which, when executed, implements the article detection method according to any one of the above.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
When pick-and-place trigger data are acquired and the article pick-and-place cabinet is determined to be in the pick-and-place trigger state based on the trigger data, the image and weight data related to the pick-and-place time are acquired, and target image data are obtained based on that image, so that the article pick-and-place information is determined based on the information of the pick-and-place area, the target image data, and the weight data. Compared with performing action-gesture analysis and article recognition on video images, this reduces the amount of computation, improves detection efficiency, and can further improve detection accuracy.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
An embodiment of the present application provides an article detection system, as shown in fig. 1, including a triggering unit 11, an image acquisition unit 12, a weight acquisition unit 13, and an article detection unit 14.
The triggering unit 11, the weight acquisition unit 13, and the image acquisition unit 12 are connected with the article detection unit 14, and the image acquisition unit 12 and the weight acquisition unit 13 are also connected with the triggering unit 11.
The triggering unit 11 is configured to acquire pick-and-place trigger data for the article pick-and-place cabinet; determine, when the article pick-and-place cabinet is in a pick-and-place trigger state based on the trigger data, a pick-and-place area based on the trigger data; send information of the pick-and-place area to the article detection unit 14; and send trigger signals to the image acquisition unit 12 and the weight acquisition unit 13, triggering the image acquisition unit 12 to collect image data and the weight acquisition unit 13 to collect weight data. The image acquisition unit 12 is configured to collect an image related to the pick-and-place time based on the trigger signal and send target image data obtained based on the image to the article detection unit 14. The weight acquisition unit 13 is configured to collect weight data related to the pick-and-place time based on the trigger signal and send the weight data to the article detection unit 14. The article detection unit 14 is configured to determine article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the image acquisition unit 12 collects a current image of the access opening of the article pick-and-place cabinet for trigger analysis and article recognition. The triggering unit 11 performs the trigger analysis using the data provided by the image acquisition unit 12, that is, it determines whether the article pick-and-place cabinet is in the pick-and-place trigger state based on the trigger data. For example, when an object (such as a hand or an article) enters or leaves the cabinet, the cabinet is determined to be in the pick-and-place trigger state, a pick-and-place trigger signal is generated, and the corresponding pick-and-place area is determined based on the trigger data. The image acquisition unit 12 may also collect an image related to the pick-and-place time when triggered by the triggering unit 11, and the target image data obtained based on that image may be sent to the article detection unit 14.
Alternatively, the target image data may be all the image data of the image related to the pick-and-place time collected by the image acquisition unit 12, in which case the image acquisition unit 12 may directly send all the image data of the collected image to the article detection unit 14 as the target image data.
It should be noted that, if the article pick-and-place cabinet has multiple layers, a weight acquisition unit 13 may be arranged on each layer. When the cabinet is determined to be in the pick-and-place trigger state based on the trigger data, the triggering unit 11 sends a trigger signal to the weight acquisition units 13, and the weight data collected by the weight acquisition unit 13 of each layer is obtained for subsequent rechecking and quantity correction. Of course, it is also possible to determine, after the pick-and-place area has been determined, on which layer of the cabinet the pick-and-place operation takes place, so that only the weight data collected by that layer's weight acquisition unit 13 is obtained. The present application does not limit which of these modes is selected. Equally, only one weight acquisition unit 13 may be provided for the whole cabinet; the embodiment of the present application does not limit the number of weight acquisition units 13. Each weight acquisition unit 13 may collect weight data related to the pick-and-place time when triggered by the triggering unit 11 and send the weight data to the article detection unit 14.
The article detection unit 14 may include a classifier trained by deep learning, such as a Fast R-CNN or YOLO detection and recognition network. The article detection unit 14 detects and recognizes article information in the target image data, obtaining the position, category, and quantity of the articles contained in the target image data.
In addition, the article detection unit 14 may further include a unit implementing an image-region overlap determination algorithm, which is configured to determine whether the pick-and-place area determined by the triggering unit 11 overlaps with the positions in the article information detected from the target image data. Articles that do not participate in the current pick-and-place trigger event (such as hand-held products or background articles) are thereby filtered out, and the categories and quantities of the articles actually taken out or put back in the current event, that is, the articles located in the pick-and-place area, are obtained. The article detection unit 14 then corrects the quantity of the articles located in the pick-and-place area using the weight data, and obtains the article pick-and-place information from the corrected quantity and the categories of the articles located in the pick-and-place area.
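By way of illustration only, the following Python sketch shows one possible form of the image-region overlap determination described above; the box format, the overlap threshold, and all names are assumptions of the sketch rather than part of the described embodiment.

# Hypothetical sketch: keep only detections overlapping the pick-and-place area.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2), an assumed format

def overlap_ratio(a: Box, b: Box) -> float:
    """Intersection area divided by the area of box a."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = max(1, (a[2] - a[0]) * (a[3] - a[1]))
    return (ix * iy) / area_a

def filter_by_region(detections: List[dict], region: Box,
                     min_overlap: float = 0.5) -> List[dict]:
    """Filter out detections (background, hand-held items) outside the area."""
    return [d for d in detections if overlap_ratio(d["box"], region) >= min_overlap]

detections = [
    {"box": (100, 40, 180, 120), "category": "cola", "count": 1},    # in region
    {"box": (400, 300, 470, 380), "category": "chips", "count": 1},  # background
]
print(filter_by_region(detections, region=(80, 0, 220, 160)))  # keeps only "cola"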
Optionally, the article detection unit 14 may be deployed at the cloud end. Because the cloud end has strong computing capability, the target image data, the weight data, and the information of the pick-and-place area can be acquired locally and then sent to the cloud end, where the article pick-and-place information is determined; this combination of local and cloud processing further reduces the local amount of computation and improves detection efficiency.
Of course, the method provided by the embodiment of the present application can also be implemented entirely locally. Even so, because the image related to the pick-and-place time is acquired only when the cabinet is determined to be in the pick-and-place trigger state, and the article pick-and-place information is determined based on the target image data obtained from that image, the weight data, and the information of the pick-and-place area, the amount of computation is still reduced and detection efficiency improved compared with performing action-gesture analysis and article recognition on video images.
Optionally, a marker is arranged at the edge of the access opening of the article pick-and-place cabinet, and the image acquisition unit 12 is configured to collect images of the access opening; the triggering unit 11 is connected with the image acquisition unit 12 and is configured to detect the pick-and-place trigger state based on the marker information in the images collected by the image acquisition unit 12.
The article pick-and-place cabinet is used for storing articles; neither its product form nor the categories, sizes, and quantities of the articles stored in it are limited. Because part of the access opening is occluded whenever articles are taken from or placed into the cabinet, the embodiment of the present application arranges a marker at the edge of the access opening and detects, from the occlusion of the marker, whether a pick-and-place operation exists, that is, whether the cabinet is in the pick-and-place trigger state.
Optionally, the markers include, but are not limited to, one or more of line-feature encoded markers, bar-code encoded markers, and checkerboard encoded markers.
The line-feature encoded markers are of a vertical-gradient coding type, coded with gradients perpendicular to the pick-and-place boundary (that is, the edge of the access opening). As shown in fig. 2 (a), the line-feature encoded markers have gradients in the direction perpendicular to the boundary, and the marker spacing in this coding mode is infinitesimally small.
The bar-code and checkerboard codings can be of a two-dimensional-code type, coded in both the vertical and horizontal directions of the pick-and-place boundary; examples are the bar-code coding in the form of a two-dimensional code shown in fig. 2 (b) and the checkerboard coding shown in fig. 2 (c).
In the system provided by the embodiment of the present application, a plurality of markers form a feature array regardless of the coding type. In addition, the spacing between every two markers is smaller than the width of the smallest article picked from or placed into the cabinet. For example, the markers can be arranged continuously in a full circle along the edge of the access opening, with the spacing between every two markers smaller than the width of the smallest article, so that missed detections are avoided and the accuracy of pick-and-place detection is further improved.
On the basis of arranging the markers, the gradient at the marker edge should be ensured to be greater than 10, that is, the difference between the pixel values of the regions on the two sides of the edge should be greater than 10, so as to guarantee the accuracy of marker feature extraction. To ensure that the marker has a significant edge gradient, optionally one side of the marker edge is made of a light-absorbing material and the other side of a diffusely reflecting material. That is, one side of the marker edge often uses a material with strong light absorption, such as light-absorbing photographic cloth, printing ink, or rubber, while the other side uses a material with strong diffuse reflection, such as printing paper or a PET (Polyethylene Terephthalate) diffuse-reflection material. The embodiment of the present application does not limit the material of the marker, as long as its features can be extracted.
For example, for black-and-white markers, paper markers printed in black and white can be pasted to the edge of the access opening of the cabinet, for instance onto a ring-shaped pasting area arranged around the inner cavity of the cabinet. The graphite of the black portion has good light absorption and the printing paper of the white portion has good diffuse reflection, which ensures that the black-white gray-level difference in the gray-scale image of the marker is above 100.
Optionally, the image acquisition unit 12 is configured to collect images of the access opening of the article pick-and-place cabinet, and may include one camera whose monitoring area covers the entire access opening. The whole access opening can thus be captured by a single camera, avoiding inaccurate pick-and-place detection caused by missing a marker. For example, with a circle of markers arranged continuously around the inner cavity at the edge of the access opening, the camera can collect the marker features while monitoring the access opening. Because the camera's field of view covers the entire access opening, a pick-and-place operation at any position appears in the collected image, and missed detections are avoided.
Alternatively, instead of one camera, the image acquisition unit 12 may include a plurality of cameras, each covering a part of the access opening of the article pick-and-place cabinet, with the monitoring areas of the plurality of cameras together covering the entire access opening. For example, the number of cameras is determined by the size of the access opening and the viewing-angle range of the cameras, so that the combined monitoring area of the cameras used for detection covers the entire access opening.
It should be noted that, if the image acquisition unit 12 includes a plurality of cameras, each camera sends its captured current image to the triggering unit 11. In addition, the cameras need to capture images synchronously, so that the current images obtained by the triggering unit 11 correspond to the same moment and reflect the state of the access opening at that moment, which improves the accuracy of the detection result.
In addition, the embodiment of the present application is described only by taking the case where the image acquisition unit 12 is attached to the article pick-and-place cabinet as an example; the image acquisition unit 12 may be arranged anywhere within a certain range of the access opening from which images of the access opening can be collected. Alternatively, the image acquisition unit 12 may be arranged separately from the cabinet. For example, it may be arranged on the side opposite the cabinet, facing the access opening, to collect images of the access opening. The embodiment of the present application does not limit the specific number and positions of the image acquisition units 12.
For ease of understanding, the present application is illustrated by the schematic diagram shown in fig. 3. As shown in fig. 3 (a), the image acquisition unit 12 includes one camera, and a marker may be arranged at the edge of the access opening of the article pick-and-place cabinet. The camera may be arranged at the upper right corner of the access opening, with its monitoring area covering the entire opening, so as to monitor the opening and collect images of it. As shown in fig. 3 (b), cameras may instead be arranged at the upper right and upper left corners of the access opening; the monitoring area of each camera covers a part of the opening, and the monitoring areas of all the cameras together cover the entire opening.
Optionally, considering that light changes in the environment of the article pick-and-place cabinet may affect the sharpness of the images collected by the image acquisition unit 12 and hence the recognition of the marker, the system further includes a light source for supplementing light to the marker, as shown in fig. 4. With supplementary light from the light source, the gray level of the marker's feature image does not change with the illumination of the external environment, which further guarantees the accuracy of pick-and-place detection.
The embodiment of the present application does not limit the specific position of the light source, as long as it can supplement light to the marker. For example, the light source may be arranged directly opposite the cabinet, facing the edge of the access opening. The number of light sources may be one or more; the embodiment of the present application limits neither the number nor the type of light source. Optionally, the system may further include a control means for switching the light source on and off, for example based on the light intensity of the environment in which the cabinet is located.
With the above article detection system, when a pick-and-place operation is performed, the object entering the cabinet occludes the marker, and the operation can be accurately detected from the occlusion of the marker, giving the pick-and-place trigger state. Further, the pick-and-place area may also be determined from the occluded region.
The two-dimensional-code encoded marker shown in fig. 5 (a) is taken as an example. The marker is black and white: paper two-dimensional codes printed in black and white are pasted to the edge of the access opening of the cabinet. The marker can be lit by the light source to reduce illumination changes and thus their influence on the feature extraction of the two-dimensional code. The graphite of the black portion has good light absorption and the printing paper of the white portion has good diffuse reflection, ensuring that the black-white gray-level difference in the gray-scale image of the two-dimensional code is above 100.
Before pick-and-place detection, the method provided by the embodiment of the present application first uses the image acquisition unit 12 to collect a reference image at a moment when no pick-and-place operation is taking place, and then recognizes all two-dimensional codes in the image. As shown in fig. 5 (a), from the continuous two-dimensional-code sequence at the edge of the access opening, the positions and internal code vectors of all the two-dimensional codes are obtained as the marker features for pick-and-place detection, giving the reference marker information used in subsequent detection.
Then, the image acquisition unit 12 monitors the current image of the access opening in real time. When a pick-and-place operation exists, two-dimensional codes at the edge of the access opening are occluded by the operation, as shown by the hatched area in fig. 5 (b). Two-dimensional codes are detected on the current image at the code positions given by the reference marker information, and their internal code vectors are extracted. If, at some position in the current image, no two-dimensional code is found or the extracted internal code vector does not match the reference code vector for that position, the code at that position is occluded, and a pick-and-place operation is determined to exist, that is, the cabinet is in the pick-and-place trigger state.
Further, after each two-dimensional code has been examined in this way, the positions and number of the occluded regions are obtained; in fig. 5 (c), the dashed portions are occluded regions with pick-and-place operations, two in total. Using the occluded-region information, whether the cabinet is in the pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded regions between the preceding and following frames; in other words, the trigger signal is output on a change in the number of occluded regions obtained by comparing the marker information in current images collected at different times.
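By way of illustration only, the following Python sketch shows one possible form of this check, assuming OpenCV's QRCodeDetector as the code reader; matching codes by decoded payload is an illustrative simplification of the internal-code-vector comparison described above, and the file paths are placeholders.

# Hypothetical sketch: codes decodable in the reference image but not in the
# current frame are treated as occluded by a pick-and-place operation.
import cv2

detector = cv2.QRCodeDetector()

def build_reference(ref_img):
    """Decode all codes in the no-operation reference image:
    payload -> code corner points."""
    ok, payloads, points, _ = detector.detectAndDecodeMulti(ref_img)
    return dict(zip(payloads, points)) if ok else {}

def occluded_codes(cur_img, reference):
    """Return reference codes that can no longer be decoded in cur_img."""
    ok, payloads, _, _ = detector.detectAndDecodeMulti(cur_img)
    visible = set(payloads) if ok else set()
    return {p: pts for p, pts in reference.items() if p not in visible}

ref = build_reference(cv2.imread("reference.png"))   # illustrative path
blocked = occluded_codes(cv2.imread("current.png"), ref)
print(f"{len(blocked)} occluded region(s)")          # trigger on count changes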
Taking the line-feature encoded marker shown in fig. 6 (a) as an example, the marker-based pick-and-place detection differs from the process of fig. 5 only in the coding type. The marker shown in fig. 6 (a) is a continuous strip printed with horizontal black-and-white stripes (that is, vertical gradients). When deploying the marker, the printed paper strip is pasted to the edge of the access opening of the cabinet. The viewing angle of the camera is then adjusted so that the marker strips in the current image of the access opening are as parallel as possible to the horizontal axis of the image. Since the marker is continuous, each column of the marker in the camera image is treated as one feature-description unit. In the line-coded marker of fig. 6 (a), each marker column has two vertical gradients: in one the gray level increases from top to bottom, and in the other it decreases from top to bottom.
Before pick-and-place detection, that is, before determining the pick-and-place trigger state, the estimated position of each gradient edge can be annotated manually, by drawing lines on a reference image collected when no pick-and-place operation is taking place. The method provided by the embodiment of the present application first uses the image acquisition unit to collect this reference image; the pick-and-place detection unit then searches a neighborhood of each estimated position in the vertical direction, taking the pixel position with the maximum gradient in the neighborhood as the accurate gradient position, and thereby obtains all gradient positions and the corresponding gradient directions in each marker column of the reference image as the reference marker information.
Then, the image acquisition unit 12 monitors the current image of the access opening in real time. Gradients are extracted on the current image at the gradient positions given by the reference marker information. If at some position no gradient can be extracted, or the direction of the extracted gradient does not agree with the reference marker information, a pick-and-place operation exists in that area, that is, the cabinet is in the pick-and-place trigger state and the markers there are occluded, as shown by the hatched area in fig. 6 (b).
After each marker column has been examined in this way, the positions and number of the occluded regions are obtained. In fig. 6 (c), the dashed portions are occluded regions with pick-and-place operations, two in total. Using the occluded-region information, whether the cabinet is in the pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded regions between the preceding and following frames; that is, the trigger signal is output on a change in the number of occluded regions obtained by comparing the marker information in current images collected at different times.
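By way of illustration only, the following numpy sketch shows one possible per-column gradient check; the minimum gradient of 10 follows the description above, while the neighborhood radius and the data layout are assumptions of the sketch.

# Hypothetical sketch: at each reference gradient row of every marker column,
# verify that a gradient with the expected sign still exists nearby.
import numpy as np

def occluded_columns(gray, ref_rows, ref_signs, radius=3, min_grad=10):
    """gray: HxW grayscale frame; ref_rows[c] / ref_signs[c]: reference
    gradient rows and signs (+1/-1) for column c. Returns occluded columns."""
    grad = np.diff(gray.astype(np.int16), axis=0)  # vertical gradient, (H-1)xW
    occluded = []
    for c, (rows, signs) in enumerate(zip(ref_rows, ref_signs)):
        for r, s in zip(rows, signs):
            lo, hi = max(0, r - radius), min(grad.shape[0], r + radius + 1)
            window = grad[lo:hi, c] * s  # flip sign so the expected edge is positive
            if window.max() < min_grad:  # expected gradient missing -> occluded
                occluded.append(c)
                break
    return occluded  # group adjacent columns into occlusion regions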
The checkerboard encoded marker shown in fig. 7 (a) is taken as an example. Before pick-and-place detection, that is, before determining the pick-and-place trigger state, the image acquisition unit 12 collects a reference image at a moment when no pick-and-place operation is taking place, and the pick-and-place detection unit recognizes all checkerboard corner points in the image. As shown in fig. 7 (a), from the continuous checkerboard coding sequence at the edge of the access opening, the positions of all the checkerboard corner points are obtained as the marker features for pick-and-place detection, giving the reference marker information used in subsequent detection.
Then, the image acquisition unit 12 monitors the current image of the access opening in real time. When a pick-and-place operation exists, checkerboard corner points at the edge of the access opening are occluded by the operation, as shown by the hatched area in fig. 7 (b). Checkerboard corner points are extracted on the current image at the corner positions given by the reference marker information; if a corner point is not detected in the current image, the corner at that position is occluded and a pick-and-place operation is determined to exist.
After each checkerboard has been examined in this way, the positions and number of the occluded regions are obtained; in fig. 7 (c), the dashed portions are occluded regions with pick-and-place operations, two in total. Using the occluded-region information, whether the cabinet is in the pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded regions between the preceding and following frames; that is, the trigger signal is output on a change in the number of occluded regions obtained by comparing the marker information in current images collected at different times.
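By way of illustration only, the following sketch uses a Harris corner response as a stand-in for the checkerboard corner extraction described above; the window radius and the relative threshold are assumptions of the sketch.

# Hypothetical sketch: a reference corner with no corner response left in its
# neighborhood is treated as occluded.
import cv2
import numpy as np

def occluded_corners(gray, ref_corners, radius=4, rel_thresh=0.01):
    """ref_corners: iterable of (x, y) integer reference corner positions."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    thresh = rel_thresh * response.max()
    missing = []
    for (x, y) in ref_corners:
        x, y = int(round(x)), int(round(y))
        window = response[max(0, y - radius):y + radius + 1,
                          max(0, x - radius):x + radius + 1]
        if window.max() < thresh:  # corner no longer detected -> occluded
            missing.append((x, y))
    return missing  # group neighbouring corners into occlusion regions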
Optionally, based on the information of the regions occluded by the markers, whether the cabinet is in the pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded regions between the preceding and following frame images, and a pick-and-place trigger signal is output when the cabinet is in the trigger state. The pick-and-place trigger signal indicates the trigger state of the pick-and-place operation; each occluded region can be treated as one operating point, and different trigger states can be determined from the number of operating points. For example, the trigger states may be defined as: 0, an entering operation (the number of operating points in the trigger plane of the cabinet changes from 0 to non-0); 1, an increasing operation (the number of operating points increases and was non-0 before the increase); 2, a decreasing operation (the number of operating points decreases and is non-0 after the decrease); 3, a leaving operation (the number of operating points changes from non-0 to 0); and 4, a simultaneous enter-and-leave operation (one operating point enters the trigger plane while another leaves). In addition, when no object enters or leaves the cabinet, that is, when the number of operating points is unchanged, the cabinet is considered to be in an invalid-operation state, that is, not in the pick-and-place trigger state.
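By way of illustration only, the following sketch classifies trigger states from operating-point counts; the enum values mirror the enumeration above, while the flag distinguishing the simultaneous enter-and-leave case is an assumption of the sketch, since the count alone cannot express it.

# Hypothetical sketch of trigger-state classification from operating points.
from enum import Enum

class TriggerState(Enum):
    ENTER = 0            # count goes from 0 to non-0
    INCREASE = 1         # count increases, non-0 before the increase
    DECREASE = 2         # count decreases, non-0 after the decrease
    LEAVE = 3            # count goes from non-0 to 0
    ENTER_AND_LEAVE = 4  # one point enters while another leaves
    IDLE = -1            # count unchanged: no pick-and-place trigger

def classify(prev_count: int, cur_count: int, swapped: bool = False) -> TriggerState:
    """swapped marks the simultaneous enter/leave case (assumed external input)."""
    if prev_count == cur_count:
        return TriggerState.ENTER_AND_LEAVE if swapped else TriggerState.IDLE
    if prev_count == 0:
        return TriggerState.ENTER
    if cur_count == 0:
        return TriggerState.LEAVE
    return TriggerState.INCREASE if cur_count > prev_count else TriggerState.DECREASE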
It should be noted that the above description takes an image acquisition unit 12 consisting of a camera as an example; the image acquisition unit 12 may also be a depth camera, a video camera, and so on, and the embodiment of the present application does not limit its product form. If a depth camera is used as the image acquisition unit 12, the number of operating points and its changes are judged from the change in the number of connected regions of depth values in the trigger plane of the depth image, giving the various trigger states described above; the trigger signal then triggers the image acquisition unit 12 to collect images and the weight acquisition unit 13 to collect weight data. Furthermore, based on the depth information, the pick-and-place area can be mapped to the corresponding area in each image, thereby obtaining the information of the pick-and-place area.
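By way of illustration only, the following sketch counts operating points from a depth image, assuming OpenCV's connected-component labelling on a thresholded depth map; the trigger-plane depth value is purely illustrative.

# Hypothetical sketch: points nearer than the trigger plane are foreground;
# each connected region of such points is one operating point.
import cv2
import numpy as np

def operating_points(depth_mm: np.ndarray, plane_mm: float = 400.0) -> int:
    """depth_mm: HxW depth image in millimetres; returns the region count."""
    mask = np.uint8((depth_mm > 0) & (depth_mm < plane_mm)) * 255
    num_labels, _ = cv2.connectedComponents(mask)
    return num_labels - 1  # label 0 is the background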
Optionally, besides detecting the pick-and-place trigger state by means of markers, the method provided by the embodiment of the present application also supports an infrared correlation light-curtain detection mode and an optical-flow detection mode. Taking the infrared correlation light curtain as an example, the triggering unit 11 includes an infrared correlation unit arranged at the access opening of the article pick-and-place cabinet, and a processor configured to detect infrared cut-off signals based on the infrared signals emitted by the infrared correlation unit; when a change in the number of infrared cut-off signals is detected, the cabinet is determined to be in the pick-and-place trigger state. Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end, which may be located on the upper and lower sides of the access opening: the emitting end emits infrared beams at fixed intervals, and the receiving end receives them at the same intervals, forming an infrared correlation light curtain covering the access opening. Considering that the cabinet may be placed outdoors, where the infrared component of sunlight could interfere with the signal at the receiving end, the infrared emitting end may, as shown in fig. 5 (a), be arranged at the lower side (lower edge) of the access opening and the infrared receiving end at the upper side (upper edge). In addition, for the same coverage, a light curtain with the correlation units on the upper and lower sides is shorter, and therefore cheaper, than one with the units on the left and right sides. Moreover, when a user takes articles with both hands, correlation units on the left and right sides may detect only one hand, so the detection accuracy of units arranged on the upper and lower sides is higher.
When an object enters the cabinet to pick or place an article, the infrared light at the entry position is blocked, so the corresponding position of the receiving end receives no infrared light and an infrared cut-off signal is generated; when a change in the number of infrared cut-off signals is detected, the cabinet can be determined to be in the pick-and-place trigger state. For example, when one hand is already inside the cabinet, one infrared cut-off signal exists, the count having changed from 0 to 1. When a second hand enters, a cut-off signal is generated at another position and the count changes from 1 to 2; the number of cut-off signals changes in this case as well. The trigger state can therefore be determined from changes in the number of infrared cut-off signals. From the cut-off time and cut-off area of the infrared light, the triggering unit 11 can obtain the time and position at which the object entered the cabinet, thus obtaining the pick-and-place trigger data.
For example, when the first hand starts to enter the cabinet to perform a pick-and-place operation, the infrared light curtain develops a continuous cut-off area at the entry position, that is, the number of cut-off areas changes from 0 to 1, and a start-entering signal is obtained. Afterwards, whenever a further hand enters the cabinet, the light curtain develops a continuous cut-off area at the new entry position, the number of cut-off areas changing from n to n+1 (n = 1, 2, 3, ...), and an entering signal is obtained. When a hand leaves the cabinet, one continuous cut-off area disappears at the leaving position, the number changing from n+1 to n (n = 1, 2, 3, ...), and a leaving signal is obtained. When the last hand leaves and the light curtain becomes fully unblocked again, the number of cut-off areas changes from 1 to 0, and an end-leaving signal is obtained.
Meanwhile, the cut-off area corresponding to each appearing or disappearing trigger is known. For example, when an object enters the cabinet, the number of illuminated points at the receiving end changes. If, at some moment, a receiving end with 24 light points receives the data 000000000000011110000000 (where 0 indicates a point that is not blocked and 1 a point that is blocked), four mutually connected points are blocked; this state indicates that one operating point is inside the cabinet, with position coordinates 14 to 17, and this range is called the pick-and-place area. Because the articles involved in a pick-and-place trigger lie in the pick-and-place area corresponding to that operation, articles that are not in the pick-and-place area and do not participate in the current operation can subsequently be filtered out for each trigger.
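By way of illustration only, the following sketch parses receiving-end data of the kind in the 24-point example above into cut-off regions; positions are 1-indexed as in the text, and the string encoding is an assumption of the sketch.

# Hypothetical sketch: find runs of blocked points (1s); each run is one
# operating point, and its position range is the pick-and-place area.
from itertools import groupby

def cutoff_regions(bits: str):
    """'000000000000011110000000' -> [(14, 17)]"""
    regions, pos = [], 1
    for bit, run in groupby(bits):
        length = len(list(run))
        if bit == "1":
            regions.append((pos, pos + length - 1))
        pos += length
    return regions

print(cutoff_regions("000000000000011110000000"))  # [(14, 17)]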
Taking the optical flow detection method shown in fig. 5 (c) as an example, the trigger mechanism based on image optical flow detection is a purely algorithmic trigger. In this mode, a detection camera captures the entrance area of the article picking and placing cabinet; to ensure the robustness of the optical flow calculation, the optical axis direction of the camera is parallel to the entrance (or perpendicular to the movement direction of the picking and placing action), and the field of view of the camera covers the whole entrance of the article picking and placing cabinet. The triggering unit 11 obtains the video captured by the detection camera at the entrance, which includes the current image of the entrance of the article picking and placing cabinet. The triggering unit 11 calculates the optical flow in real time by an optical flow calculation method such as LK (Lucas-Kanade) optical flow, and projects the obtained optical flow vectors onto the vertical direction of the image (since the optical axis direction of the camera is perpendicular to the movement direction of the picking and placing action, the vertical direction of the camera image is the movement direction of the picking and placing action), thereby obtaining optical flow vectors representing objects entering and leaving the article picking and placing cabinet. When an object enters or leaves the article picking and placing cabinet, the camera detects an optical flow vector area whose direction corresponds to entering or leaving; when an optical flow vector area in the picking and placing operation direction appears in the entrance area, it indicates that an object is entering or leaving, and the article picking and placing cabinet is determined to be in the article picking and placing triggering state. The direction of the optical flow vector determines whether an entering signal or a leaving signal is obtained. Meanwhile, the optical flow vector area corresponds to the picking and placing area of the current trigger.
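The following is a minimal sketch of such an optical-flow trigger using OpenCV's pyramidal Lucas-Kanade implementation; the camera index, feature parameters and motion threshold are illustrative assumptions, and the sign convention of the direction depends on the camera orientation.

```python
# Minimal sketch: LK optical flow at the cabinet entrance; flow vectors are
# projected onto the vertical image axis, which under the camera arrangement
# described above is the direction of the picking and placing motion.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # camera watching the entrance (assumed index)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is not None:
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        flow = (p1 - p0)[status.flatten() == 1]
        if len(flow):
            vertical = flow[:, 0, 1]              # projection on the vertical axis
            if np.mean(np.abs(vertical)) > 2.0:   # assumed threshold, pixels/frame
                # Sign-to-direction mapping depends on how the camera is mounted.
                direction = "entering" if np.mean(vertical) > 0 else "leaving"
                print("picking and placing trigger:", direction)
    prev_gray = gray
```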
Optionally, referring to fig. 9, the system further includes: a target detection unit 15 connected to the image acquisition unit 12; the target detection unit 15 is also connected to the article detection unit 14.
The image acquisition unit 12 is configured to send the acquired image related to the article picking and placing time to the target detection unit 15. The target detection unit 15 is configured to detect the image related to the article picking and placing time, obtain the local image data of the area where the article is located and the coordinates of the local image data, and send the local image data and the coordinates as the target image data to the article detection unit 14. The article detection unit 14 is configured to determine the article picking and placing information based on the information of the picking and placing area, the weight data, the local image data of the area where the article is located, and the coordinates of the local image data.
Because the target detection unit 15 can send the local image data of the area where the article is located and its coordinates to the article detection unit 14 as the target image data after detecting the article, the amount of transmitted data is further reduced compared with directly sending all image data of the image acquired by the image acquisition unit 12 to the article detection unit 14, so that the computation is reduced and the detection efficiency is improved. Optionally, the processors in the target detection unit 15 and the triggering unit 11 may be different processors or the same processor, which is not limited in the embodiments of the present application.
Optionally, referring to fig. 10, in a manner of combining the local side with the cloud, the system further includes: a communication unit 16; the triggering unit 11, the image acquisition unit 12, the weight acquisition unit 13 and the communication unit 16 are all arranged locally at the article picking and placing cabinet, and the article detection unit 14 is deployed at the cloud.
In the case that the system includes the communication unit 16 and the article detection unit 14 is deployed at the cloud, the triggering unit 11 is configured to determine, based on the article picking and placing triggering data of the article picking and placing cabinet, that the cabinet is in the article picking and placing triggering state, determine the picking and placing area based on the triggering data, send the information of the picking and placing area to the communication unit 16, and send triggering signals to the image acquisition unit 12 and the weight acquisition unit 13, triggering the image acquisition unit 12 to acquire image data and the weight acquisition unit 13 to acquire weight data. The image acquisition unit 12 is configured to acquire the image related to the article picking and placing time under the trigger of the triggering unit 11 and send the target image data obtained based on the image to the communication unit 16. The weight acquisition unit 13 is configured to acquire the weight data related to the article picking and placing time under the trigger of the triggering unit 11 and send the weight data to the communication unit 16. The communication unit 16 transmits the information of the picking and placing area, the weight data and the target image data to the article detection unit 14. The article detection unit 14 determines the article picking and placing information based on the information of the picking and placing area, the weight data and the target image data.
In the case that the system includes the communication unit 16 and the target detection unit 15, and the article detection unit 14 is deployed at the cloud, the triggering unit 11 is configured to determine, when the article picking and placing cabinet is determined to be in the article picking and placing triggering state based on the triggering data, the picking and placing area based on the triggering data, send the information of the picking and placing area to the communication unit 16, and send triggering signals to the image acquisition unit 12 and the weight acquisition unit 13 respectively, triggering the image acquisition unit 12 to acquire image data and the weight acquisition unit 13 to acquire weight data. The image acquisition unit 12 is configured to send the acquired image related to the article picking and placing time to the target detection unit 15. The target detection unit 15 is configured to detect the image related to the article picking and placing time, obtain the local image data of the area where the article is located and the coordinates of the local image data, and send them as the target image data to the communication unit 16. The weight acquisition unit 13 is configured to acquire the weight data related to the article picking and placing time under the trigger of the triggering unit 11 and send the weight data to the communication unit 16. The communication unit 16 transmits the information of the picking and placing area, the weight data and the target image data to the article detection unit 14. The article detection unit 14 determines the article picking and placing information based on the information of the picking and placing area, the weight data and the target image data.
Optionally, referring to fig. 11, an embodiment of the present application further provides an article picking and placing cabinet system. The article picking and placing cabinet system includes the article detection system described above, an access control unit, a payment unit and a display unit. For the functions of the article detection system, reference may be made to the description above, which is not repeated here. The access control unit is used for controlling access to the article picking and placing cabinet. For example, when a user needs to open the article picking and placing cabinet to take or place articles, access card information can be input by swiping an access card; the access control unit verifies the access card information and allows the article picking and placing cabinet to be opened after the verification passes. Besides card swiping, access control information can be input directly, so that the access control unit opens the article picking and placing cabinet after verifying that information. Optionally, the access control information may be a user name, a password, or the like. Of course, access verification can also be performed by face recognition, and the article picking and placing cabinet can be opened after the face recognition passes.
Optionally, the article picking and placing cabinet system further includes an identity recognition unit for verifying the identity of the user. The identity recognition unit can be triggered to perform identity recognition after the user passes the verification of the access control unit, and the article picking and placing cabinet is opened after the user passes the identity recognition. During identity recognition, the user can be guided to input user information, such as identity document information or a password. The manner of identity verification is not limited in the present application. It should be understood that the identity recognition unit and the access control unit may alternatively be disposed in the article picking and placing cabinet system, that is, only one of them may be provided, which is not limited in the embodiment of the present application.
Optionally, the payment unit is configured to perform a payment operation based on the picked and placed articles after the article picking and placing operation is completed. The display unit is used for displaying the information of the articles to be paid for and the payment information. For example, after the article information is detected by the article detection system, the amount to be paid can be calculated based on the article information to obtain the payment information. The display unit displays the payment information and the information of the articles to be paid for, and the user can pay through the payment unit based on the displayed information. The embodiment of the present application is not limited with respect to the specific information displayed by the display unit, nor with respect to the payment operation process of the payment unit.
In this regard, referring to fig. 12, an embodiment of the present application provides an article detection method, which is applied to the article detection system described above. As shown in fig. 12, the method includes the following steps.
Step 1201, acquiring article picking and placing trigger data of an article picking and placing cabinet.
Based on the three detection manners described above, step 1201 includes, but is not limited to, the following three cases:
First case: the two sides of the inlet and outlet of the article taking and placing cabinet are provided with infrared correlation units;
acquiring article picking and placing triggering data of an article picking and placing cabinet comprises the following steps:
acquiring an infrared signal emitted by an infrared correlation unit;
after acquiring the article taking and placing triggering data aiming at the article taking and placing cabinet, the method further comprises the following steps:
Detecting an infrared cut-off signal based on an infrared signal emitted by the infrared correlation unit;
When the change of the quantity of the infrared cut-off signals is detected, the object taking and placing cabinet is determined to be in an object taking and placing triggering state.
Second case: the entrance and exit of the article taking and placing cabinet is provided with a camera, the view angle of the camera covers the entrance and exit, and the optical axis direction of the camera is parallel to the entrance and exit;
acquiring article picking and placing triggering data of an article picking and placing cabinet comprises the following steps:
acquiring a current image of an access opening acquired by a camera;
after acquiring the article taking and placing triggering data aiming at the article taking and placing cabinet, the method further comprises the following steps:
acquiring an optical flow vector based on a current image of an entrance acquired by a camera;
when the optical flow vector of the picking and placing operation direction appears in the entrance area, the object picking and placing cabinet is determined to be in an object picking and placing triggering state.
Third case: the edge of the inlet and outlet of the article taking and placing cabinet is provided with a marker; the entrance and exit of the article taking and placing cabinet is provided with a camera, and the view angle of the camera covers the entrance and exit.
Acquiring article picking and placing triggering data of an article picking and placing cabinet comprises the following steps:
Acquiring a current image of an access opening of an article taking and placing cabinet acquired by a camera;
after acquiring the article taking and placing triggering data aiming at the article taking and placing cabinet, the method further comprises the following steps:
detecting marker information in a current image;
And determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
It should be noted that, for the specific processes of the above three cases, reference may be made to the related content in the above article detection system, which is not described here in detail.
Step 1202, determining a picking and placing area based on the article picking and placing trigger data when the article picking and placing cabinet is determined to be in an article picking and placing trigger state based on the article picking and placing trigger data; and acquiring an image and weight data related to the article picking and placing time, and acquiring target image data based on the image related to the article picking and placing time.
When the article picking and placing cabinet is determined to be in the article picking and placing triggering state based on the triggering data, the manner of determining the picking and placing area based on the triggering data can be seen from the description in the article detection system above. Different manners of acquiring the triggering data correspond to different manners of determining the picking and placing area. If a manner of combining the local side with the cloud is adopted, then no matter which manner is used to acquire the triggering data, after the article picking and placing cabinet is determined to be in the triggering state, the information of the determined picking and placing area can be uploaded to the cloud, and the cloud detects the articles accordingly.
When the article picking and placing cabinet is determined to be in the article picking and placing triggering state based on the triggering data, the image related to the article picking and placing time can be acquired by the image acquisition unit. Optionally, acquiring the image related to the article picking and placing time includes: acquiring the image at the article picking and placing time, or acquiring a reference number of images before and after the article picking and placing time. The reference number may be set based on the application scenario or experience, which is not limited in the embodiment of the present application.
In addition, when the weight data related to the article picking and placing time is acquired by the weight acquisition unit according to the triggering data, the weight acquisition unit may continuously acquire weight data in real time, and the weight data before and after the article picking and placing time are stored for subsequent article detection.
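To make the "before and after" retrieval concrete, the following is a minimal sketch of a time-indexed buffer for continuously collected samples (weight readings or frames); the buffer length, window and helper names are illustrative assumptions, not part of the embodiment.

```python
# Minimal sketch: keep recent timestamped samples so that data before and
# after a trigger moment can be retrieved for subsequent article detection.
import collections
import time

class TimedBuffer:
    def __init__(self, maxlen: int = 500):
        self.samples = collections.deque(maxlen=maxlen)  # (timestamp, value)

    def push(self, value) -> None:
        self.samples.append((time.time(), value))

    def around(self, t: float, window: float) -> list:
        # All samples within `window` seconds before or after time t.
        return [(ts, v) for ts, v in self.samples if abs(ts - t) <= window]

# The weight acquisition unit would call push(reading) in real time; on a
# trigger at time t, around(t, 1.0) yields the readings before and after
# the picking and placing moment.
weight_buffer = TimedBuffer()
```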
Acquiring the target image data based on the image related to the article picking and placing time includes, but is not limited to, the following two manners:
Mode one: all image data in the image is taken as target image data.
In this manner, when the image acquisition unit acquires the image related to the article picking and placing time, all image data in the image are directly used as the target image data. If the manner of combining the local side with the cloud is adopted, all the image data are uploaded to the cloud as the target image data, and the cloud detects the articles accordingly.
Mode two: detecting the image related to the article picking and placing time to obtain the local image data of the area where the article is located and the coordinates of the local image data, and taking the local image data of the area where the article is located and the coordinates of the local image data as the target image data.
In this manner, when the image acquisition unit acquires the image related to the article picking and placing time, the image is transmitted to the target detection unit, and the target detection unit detects the articles in the image by means such as deep learning. For example, if a classifier is trained in advance by means such as deep learning and the target detection unit includes this classifier, the image data related to the article picking and placing time are input into the classifier. The classifier identifies whether the image contains an article and which article it is, so that the local image data possibly containing an article are extracted, yielding the local image data of the area where the article is located. Then, the position of the local image data in the original image (that is, the image related to the article picking and placing time) is further determined to obtain the coordinates of the local image data.
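A minimal sketch of producing the local image data and its coordinates from a detection result is given below; `detector` is a hypothetical placeholder for the trained model described above and is assumed to yield bounding boxes.

```python
# Minimal sketch: crop each detected region out of the original image and
# keep its coordinates, yielding the (local image data, coordinates) pairs
# that form the target image data in mode two.
import numpy as np

def extract_targets(image: np.ndarray, detector):
    targets = []
    for (x, y, w, h) in detector(image):        # assumed to yield (x, y, w, h) boxes
        crop = image[y:y + h, x:x + w].copy()   # local image data of the region
        targets.append((crop, (x, y, w, h)))    # crop plus its position in the original
    return targets
```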
The local image data of the area where the article is located and the coordinates of the local image data are then uploaded as the target image data to the article detection unit, and the article detection unit detects the articles accordingly. Compared with uploading all the image data of the whole image, the amount of uploaded data is smaller, and the article detection efficiency can be further improved.
It should be noted that if the article picking and placing cabinet has multiple layers, a weight acquisition unit may be disposed on each layer. When the article picking and placing cabinet is determined to be in the article picking and placing triggering state based on the triggering data, the weight data acquired by the weight acquisition unit of each layer are obtained for subsequent article correction. As an alternative, after the picking and placing area is determined, the layer of the cabinet on which the picking and placing operation occurs can also be determined, so that only the weight data acquired by the weight acquisition unit of that layer are obtained. The manner of selection is not limited in the present application.
Step 1203, determining article picking and placing information based on the information of the picking and placing area, the weight data and the target image data.
Optionally, the method provided by the embodiment of the present application can be performed entirely locally, that is, all the processes are implemented locally, or it can be implemented by combining the local side with the cloud. For example, the information of the picking and placing area, the weight data and the target image data are sent to the cloud, and the article picking and placing information is determined at the cloud based on them. Optionally, whether locally or in conjunction with the cloud, determining the article picking and placing information based on the information of the picking and placing area, the weight data and the target image data includes, but is not limited to, the following three cases:
First case: the target image data include all image data in the image related to the article picking and placing time. Determining the article picking and placing information based on the information of the picking and placing area, the weight data and the target image data includes: detecting the target image data to obtain article information including the positions, types and quantities of the articles; determining the types and quantities of the articles located in the picking and placing area based on the positions in the article information and the information of the picking and placing area; and correcting the quantities of the articles located in the picking and placing area based on the weight data, and obtaining the article picking and placing information according to the corrected quantities and the types of the articles located in the picking and placing area.
Optionally, after determining the types and quantities of the articles located in the picking and placing area based on the positions in the article information and the information of the picking and placing area, the method further includes: rechecking the quantities of the articles located in the picking and placing area based on the weight data; and when the recheck passes, taking the types and quantities of the articles located in the picking and placing area as the article picking and placing information.
For example, as shown in fig. 13, locally at the article picking and placing cabinet, the triggering unit performs trigger analysis using the image data provided by the image acquisition unit. When a picking and placing operation occurs, the corresponding triggering data are output, the picking and placing area is determined, and the information of the picking and placing area is sent to the article detection unit. Then the triggering unit triggers the image acquisition unit to record the image related to the article picking and placing time and triggers the weight acquisition unit to acquire the weight data related to the article picking and placing time. The image acquisition unit acquires the image related to the article picking and placing time under the trigger of the triggering unit and sends all image data in the image as the target image data to the article detection unit; the weight acquisition unit acquires the weight data related to the article picking and placing time under the trigger of the triggering unit and sends the weight data to the article detection unit.
The article detection unit detects the target image data to obtain the positions, types and quantities of all articles contained therein, and determines the types and quantities of the articles located in the picking and placing area based on the positions in the article information and the information of the picking and placing area; that is, articles that do not belong to the current picking and placing operation are filtered out using the picking and placing area indicated by the area information and the positions in the detected article information. Then the article detection unit rechecks the quantities of the articles located in the picking and placing area based on the weight data. If the types and quantities of the articles located in the picking and placing area match the weight data acquired by the weight acquisition unit, the recheck passes, and the types and quantities of the articles located in the picking and placing area are taken as the article picking and placing information. If they cannot be matched with the weight data, the quantities of the articles located in the picking and placing area are corrected, and the article picking and placing information is obtained according to the corrected quantities and the types of the articles located in the picking and placing area.
When determining whether the types and quantities of the articles located in the picking and placing area match the weight data acquired by the weight acquisition unit, a weight value can be computed based on the types and quantities of the articles located in the picking and placing area, and a weight difference can be obtained from the change in the weight data acquired before and after the picking and placing operation. If the weight value computed from the types and quantities is consistent with the weight difference, or the error is within a reference range, the recheck is determined to pass. If the weight value is inconsistent with the weight difference and the error exceeds the reference range, the recheck is determined to fail.
Optionally, when the recheck fails and correction is needed, different quantities can be enumerated for the types of articles located in the picking and placing area and simultaneous equations can be solved, so as to obtain the quantities that match both the article types in the picking and placing area and the weight difference; these are used as the corrected quantities. The corrected quantities and the recognized types of the articles located in the picking and placing area are then taken as the article picking and placing information. Of course, other correction methods besides the above may also be used, which is not limited in the embodiment of the present application. A sketch of this recheck-and-correct procedure is given below.
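The sketch assumes known unit weights per article type; the tolerance and the enumeration bound are illustrative assumptions.

```python
# Minimal sketch: pass the recheck if the recognized kinds/quantities explain
# the measured weight difference; otherwise enumerate quantities and keep the
# combination that best matches the weight difference.
from itertools import product

def recheck(kinds, counts, unit_weight, weight_diff, tol=5.0):
    expected = sum(unit_weight[k] * n for k, n in zip(kinds, counts))
    return abs(expected - weight_diff) <= tol

def correct_counts(kinds, unit_weight, weight_diff, max_n=5, tol=5.0):
    best, best_err = None, float("inf")
    for counts in product(range(max_n + 1), repeat=len(kinds)):
        err = abs(sum(unit_weight[k] * n
                      for k, n in zip(kinds, counts)) - weight_diff)
        if err < best_err:
            best, best_err = counts, err
    return best if best_err <= tol else None   # None: no quantity fits

# Example: weight dropped by about 660 g and only "cola" (330 g) was seen.
print(correct_counts(["cola"], {"cola": 330.0}, 660.0))  # (2,)
```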
Second case: the target image data include the local image data of the area where the article is located, obtained by performing article detection on the image related to the article picking and placing time, and the coordinates of the local image data. Determining the article picking and placing information based on the information of the picking and placing area, the weight data and the target image data includes: filtering the local image data based on the information of the picking and placing area and the coordinates of the local image data, and comparing the filtered local image data with an article sample library; when it is determined from the comparison result that the filtered local image data include articles, determining the types and quantities of the articles included in the filtered local image data; and rechecking the quantities based on the weight data and determining the article picking and placing information according to the recheck result. Optionally, with reference to the first case, determining the article picking and placing information according to the recheck result includes: rechecking the quantities of the articles located in the picking and placing area based on the weight data; when the recheck passes, taking the types and quantities of the articles included in the filtered local image data as the article picking and placing information; and when the recheck fails, correcting the quantities based on the weight data and taking the types of the articles included in the filtered local image data together with the corrected quantities as the article picking and placing information. For the correction method, reference may be made to the related description of the first case, which is not repeated here.
In this case, the article detection system further includes a target detection unit. The target detection unit is arranged locally and only performs article target detection without judging the article type; it may be a classifier (judging whether a region is an article target or not), whose algorithmic complexity is low. Because such an algorithm model is insensitive to the specific article types, it does not need frequent updating and maintenance and is suitable for local deployment. The data transmitted to the cloud server need not include the whole image related to the article picking and placing time; only the local image data (small images) of the areas where the articles are located are transmitted, further reducing the amount of transmitted data. This overall optimization trades off the computation cost, the data transmission cost, and the operation and maintenance cost, further reducing the cost of the whole system solution.
In addition, an article sample library can be established in advance, storing the article information in the article picking and placing cabinets, including but not limited to the types, quantities and positions of the articles. Optionally, the article sample library may be stored at the cloud and include the article information of all article picking and placing cabinets managed by the cloud. Therefore, for the article picking and placing cabinet currently to be detected, the method provided by the embodiment of the present application can also upload its identification information to the cloud, so that the cloud determines the information related to this cabinet in the article sample library for use. Optionally, the identification information includes, but is not limited to, location information, codes and the like, as long as the corresponding article picking and placing cabinet can be identified, which is not limited in the embodiment of the present application.
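The article sample library can be pictured as a mapping keyed by the cabinet identification information; the following sketch is an illustrative assumption about its structure, not the embodiment's actual storage format.

```python
# Minimal sketch: a cloud-side article sample library keyed by cabinet
# identification; field names and values are illustrative assumptions.
SAMPLE_LIBRARY = {
    "cabinet-0001": {
        "cola":    {"unit_weight": 330.0, "position": "layer-2"},
        "biscuit": {"unit_weight": 120.0, "position": "layer-3"},
    },
}

def lookup_samples(cabinet_id: str) -> dict:
    # Information related to the cabinet currently to be detected.
    return SAMPLE_LIBRARY.get(cabinet_id, {})
```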
For example, taking the case where the on-off state of the light curtain is used to determine whether the article picking and placing cabinet is in the article picking and placing triggering state, as shown in fig. 14, an infrared correlation light curtain is still used as the trigger mechanism, and the overall article detection flow is basically the same as that of the first case shown in fig. 13. Unlike fig. 13, the article detection unit compares all the small images provided by the target detection unit with the article sample library to determine whether the object in each small image is an article and, if so, which article it is, and counts the types and quantities of all the articles.
Locally at the article picking and placing cabinet, the target detection unit detects the image related to the article picking and placing time and outputs the possible partial images (small images) of the article target areas together with their coordinates in the original image, that is, the local image data of the areas where the articles are located and the coordinates of the local image data. The trigger state, the information of the picking and placing area, the local image data and their coordinates are then transmitted to the article detection unit. The article detection unit may be located on a cloud server or locally, which is not limited in the embodiment of the present application.
The article detection unit filters the local image data based on the information of the picking and placing area and the coordinates of the local image data; that is, articles that do not belong to the current picking and placing operation are filtered out using the coordinates of the picking and placing area indicated by the area information and the coordinates of the local image data in the original image, and the filtered local image data containing the picked and placed articles are output. The filtered local image data are compared with the article sample library; when it is determined from the comparison result that the filtered local image data include articles, the types and quantities of the articles included therein are determined; and the quantities are corrected based on the weight data, and the article picking and placing information is determined according to the correction result.
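A minimal sketch of this coordinate-based filtering follows; the one-dimensional overlap test mirrors the light-curtain position coordinates used earlier and is an illustrative simplification of the filtering described above.

```python
# Minimal sketch: keep only the detected regions whose coordinates overlap
# the picking and placing area of the current operation.
def in_pick_place_area(box, area):
    x, _, w, _ = box                      # box: (x, y, w, h) in the original image
    start, end = area                     # area: (start, end) span of the operation
    return x < end and x + w > start      # the horizontal intervals intersect

def filter_targets(targets, area):
    # targets: (crop, box) pairs as produced by a target detection unit.
    return [(crop, box) for crop, box in targets if in_pick_place_area(box, area)]
```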
Third case: filtering the target image data based on the information of the picking and placing area to obtain filtered image data; and identifying the article information in the filtered image data based on the weight data and the target image data to obtain the article picking and placing information, which includes types and quantities.
Optionally, identifying the article information in the filtered image data based on the weight data and the target image data to obtain the article picking and placing information includes: rechecking the quantities in the identified article information against the weight data; when the recheck passes, taking the identified article information as the article picking and placing information; and when the recheck fails, correcting the identified article information and obtaining the article picking and placing information according to the correction result.
For the method of correcting the identified article information, reference may be made to the description of the first case, which is not repeated here. In this case, because the image acquired by the image acquisition unit may include, besides the picked and placed articles, the hand performing the operation and other articles outside the article picking and placing cabinet, the target image data is first filtered based on the information of the picking and placing area, so that the hand and the other articles outside the cabinet are filtered out. The filtered image data therefore contain less noise, which reduces the computation of subsequent data processing and improves the detection efficiency and accuracy.
Based on the above article detection process, the embodiment of the present application may take the start entering signal as the beginning of an event and the end leaving signal as its end, and regard these two states together with the intermediate valid operation states as one complete article picking and placing event. As shown in fig. 15, the method can be processed entirely locally, or the articles can be detected in a manner combining the local side with the cloud.
If the processing is entirely local, the weight change value of the current operation is obtained by analyzing the change of the weight data before and after the event. The target image data related to the event time is detected to obtain the types of the articles present in the target image data and their corresponding quantities. It is then checked whether the article types and quantities obtained from the image data can be matched with the weight change values before and after the picking and placing operation. If they correspond, the visual recognition result is considered correct and can be used as the final result of the picking and placing operation. If the article types and quantities obtained from the image data differ greatly from the weight change value, different article quantities are enumerated for the article types identified from the image data and simultaneous equations are solved, searching for the quantity values that satisfy both the article types and the total weight change. These quantity values, together with the article types, are taken as the result of the current picking and placing operation.
Taking a self-service shopping scenario as an example, the final article picking and placing information is determined by the above method. After the types and quantities of the articles are obtained, the results of all picking and placing events from opening the article picking and placing cabinet to closing it are added up, yielding the information of the articles purchased in this shopping session. The article information can then be output to the display unit for display, and the payment unit performs the payment operation, completing the self-service shopping.
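A minimal sketch of adding up the per-event results into the final purchase list is shown below; representing put-back articles as negative counts is an assumption made for illustration.

```python
# Minimal sketch: merge the {article: count} result of every picking and
# placing event between opening and closing the cabinet.
from collections import Counter

def settle(events):
    total = Counter()
    for event in events:                  # one dict per picking and placing event
        total.update(event)               # negative counts stand for put-backs
    return {k: n for k, n in total.items() if n > 0}

print(settle([{"cola": 2}, {"cola": -1}, {"biscuit": 1}]))
# {'cola': 1, 'biscuit': 1}
```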
If the processing combines the local side with the cloud, the steps corresponding to 1-3 in fig. 15 are performed locally, the corresponding data are transmitted to the cloud, and the steps corresponding to 4 and 5 are performed at the cloud. The cloud then feeds the detected article picking and placing information back to the local side, where the display unit displays it and the payment unit performs functions such as payment and settlement.
It can be seen that the technical solution provided by the embodiment of the present application fuses all the weight changes and image recognition results within the period from opening to closing the article picking and placing cabinet. The triggering hardware is used to divide the whole shopping process into individual events. For each event, the weight change before and after the event is first rechecked against the image recognition result; if the recheck fails, a joint solution of image recognition and weight is performed. Since a single event involves few articles and few types, the joint solution has high accuracy. Dividing the shopping process into independent events also keeps the events from affecting each other, so a calculation error in one event does not corrupt the whole shopping result, which improves accuracy.
According to the method provided by the embodiment of the application, when the object taking and placing cabinet is determined to be in the object taking and placing triggering state based on the triggering data, the image and the weight data related to the object taking and placing time are acquired, and the target image data is acquired based on the image related to the object taking and placing time, so that the object taking and placing information is determined based on the information of the taking and placing area, the target image data and the weight data.
Based on the same technical concept, the embodiment of the application provides an article detection device, which is applied to the article detection system. Referring to fig. 16, the apparatus includes:
A trigger component 161, configured to acquire article picking and placing trigger data for the article picking and placing cabinet, and send the trigger data to the first processor 162;
A first processor 162 configured to determine a pick-and-place area based on the item pick-and-place trigger data when the item pick-and-place cabinet is determined to be in the item pick-and-place trigger state based on the trigger data; acquiring images and weight data related to the article picking and placing time, and acquiring target image data based on the images related to the article picking and placing time; article pick-and-place information is determined based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the triggering component 161 includes an infrared correlation unit, and the infrared correlation unit is disposed at two sides of the entrance of the article picking and placing cabinet;
A first processor 162 for detecting an infrared cut-off signal based on an infrared signal emitted from the infrared correlation unit; when the change of the quantity of the infrared cut-off signals is detected, the object taking and placing cabinet is determined to be in an object taking and placing triggering state.
Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end; the infrared emitting end is disposed at the lower side of the entrance of the article picking and placing cabinet, and the infrared receiving end is disposed at the upper side of the entrance of the article picking and placing cabinet.
Optionally, the triggering component 161 includes a camera, the camera is disposed at an entrance of the article picking and placing cabinet, a field angle of the camera covers the entrance, and an optical axis direction of the camera is parallel to the entrance;
A first processor 162 for acquiring an optical flow vector based on a current image of the doorway acquired by the camera; when the optical flow vector of the picking and placing operation direction appears in the entrance area, the object picking and placing cabinet is determined to be in an object picking and placing triggering state.
Optionally, the triggering component 161 comprises a camera, the camera is arranged at an entrance of the article taking and placing cabinet, the view angle of the camera covers the entrance, and the edge of the entrance of the article taking and placing cabinet is provided with a marker;
a first processor 162 for detecting marker information in a current image of the doorway acquired by the camera; and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
Optionally, the first processor 162 is configured to acquire an image of the time of picking and placing the article, or images of a reference number before and after the time of picking and placing the article.
Optionally, the first processor 162 is configured to send the information, the weight data, and the target image data of the picking and placing area to the cloud end, and determine the article picking and placing information based on the information, the weight data, and the target image data of the picking and placing area at the cloud end.
Optionally, the first processor 162 is configured to filter the target image data based on the information of the pick-and-place area, to obtain filtered image data;
Based on the weight data and the target image data, identifying article picking and placing information in the filtered image data, wherein the article picking and placing information comprises types and numbers.
Optionally, the first processor 162 is configured to acquire all image data in the image related to the article picking and placing time, and take all image data as target image data.
Optionally, the first processor 162 is configured to perform item detection on an image related to the time of picking and placing the item, obtain local image data of an area where the item is located and coordinates of the local image data, and take the local image data of the area where the item is located and coordinates of the local image data as target image data.
Optionally, the first processor 162 is configured to determine the type and number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area; and correcting the quantity of the articles in the picking and placing area based on the weight data, and obtaining article picking and placing information according to the corrected quantity of the articles and the types of the articles in the picking and placing area.
Optionally, the first processor 162 is configured to filter the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and compare the filtered local image data with the article sample library; when the filtered local image data is determined to comprise the objects according to the comparison result, determining the types and the quantity of the objects comprised in the filtered local image data; and correcting the quantity based on the weight data, and determining article picking and placing information according to the correction result.
Optionally, the first processor 162 is further configured to review the number of items located in the pick-and-place area based on the weight data; when rechecking passes, the type and the quantity of the articles in the picking and placing area are taken as article picking and placing information.
It should be noted that, when the apparatus provided in the foregoing embodiment performs the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
In an example embodiment, there is also provided a computer device including a processor and a memory having at least one instruction stored therein. The at least one instruction is configured to be executed by one or more processors to implement any of the article detection methods described above.
Fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present application. The device may be a terminal, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1702 may include one or more computer-readable storage media, which may be non-transitory. Memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1702 is used to store at least one instruction for execution by processor 1701 to implement the method of item detection provided by an embodiment of the method of the present application.
In some embodiments, the terminal may further optionally include: a peripheral interface 1703, and at least one peripheral. The processor 1701, memory 1702, and peripheral interface 1703 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1703 by buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1704, a touch display screen 1705, a camera 1706, audio circuitry 1707, a positioning assembly 1708, and a power source 1709.
The peripheral interface 1703 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1704 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1704 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may also include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, it also has the ability to collect touch signals on or above its surface. A touch signal may be input to the processor 1701 as a control signal for processing. At this point, the display screen 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, disposed on the front panel of the terminal; in other embodiments, there may be at least two display screens 1705, respectively disposed on different surfaces of the terminal or in a folded design; in still other embodiments, the display screen 1705 may be a flexible display screen disposed on a curved or folded surface of the terminal. The display screen 1705 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1706 is used to capture images or video. Optionally, the camera assembly 1706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1701 for processing, or inputting the electric signals to the radio frequency circuit 1704 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones can be respectively arranged at different parts of the terminal. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic location of the terminal for navigation or LBS (Location Based Service). The positioning component 1708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
A power supply 1709 is used to power the various components in the terminal. The power source 1709 may be alternating current, direct current, disposable battery, or rechargeable battery. When the power source 1709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal further includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: an acceleration sensor 1711, a gyro sensor 1712, a pressure sensor 1713, a fingerprint sensor 1714, an optical sensor 1715, and a proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with a terminal. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration in three coordinate axes. The processor 1701 may control the touch display 1705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1712 may detect a body direction and a rotation angle of the terminal, and the gyro sensor 1712 may collect 3D actions of the user on the terminal in cooperation with the acceleration sensor 1711. The processor 1701 may implement the following functions based on the data collected by the gyro sensor 1712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1713 may be disposed at a side frame of the terminal and/or at a lower layer of the touch display 1705. When the pressure sensor 1713 is disposed on a side frame of the terminal, a grip signal of the terminal by a user may be detected, and the processor 1701 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed at the lower layer of the touch display screen 1705, the processor 1701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is used to collect a fingerprint of a user, and the processor 1701 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1714, or the fingerprint sensor 1714 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1714 may be provided on the front, back, or side of the terminal. When a physical key or a vendor Logo is provided on the terminal, the fingerprint sensor 1714 may be integrated with the physical key or vendor Logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1705 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1705 is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 based on the ambient light intensity collected by the optical sensor 1715.
A proximity sensor 1716, also referred to as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 1716 is used to collect the distance between the user and the front of the terminal. In one embodiment, when the proximity sensor 1716 detects a gradual decrease in the distance between the user and the front face of the terminal, the processor 1701 controls the touch display 1705 to switch from the bright screen state to the off screen state; when the proximity sensor 1716 detects that the distance between the user and the front surface of the terminal gradually increases, the processor 1701 controls the touch display 1705 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 17 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one instruction is stored; when executed by a processor of a computer device, the instruction implements any of the article detection methods described above.
In a possible embodiment of the present application, the above-mentioned computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.