Disclosure of Invention
In view of this, an object of the present application is to provide a method and a device for detecting safety protection of an operation site, so as to solve the technical problems that manual review of the monitoring video of an operation site is inefficient and dangerous situations are easily missed. The specific technical scheme disclosed in the application is as follows:
in a first aspect, the present application provides a method for detecting safety protection of an operation site, including:
acquiring a video frame to be detected from a monitoring video of an operation site;
detecting information of a preset target contained in the video frame to be detected by using a target detection model obtained by pre-training to obtain a target detection result, wherein the preset target comprises personnel and objects to be detected in the operation site;
carrying out posture recognition on the personnel contained in the video frame to be detected to obtain a posture recognition result;
and detecting an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result, wherein the abnormal condition comprises at least one of an abnormal object and an abnormal behavior.
In a possible implementation manner of the first aspect, the acquiring a video frame to be detected from a monitoring video of an operation site includes:
reading a frame of video image from the monitoring video;
if the average gray value of the read video image is lower than a preset gray threshold value, carrying out gray homogenization treatment on the video image to obtain a gray uniform image;
carrying out image sharpening filtering processing on the gray uniform image to obtain the video frame to be detected;
and if the average gray value of the read video image is not lower than the preset gray threshold, directly carrying out image sharpening filtering processing on the video image to obtain the video frame to be detected.
In another possible implementation manner of the first aspect, the acquiring a video frame to be detected from a monitoring video of an operation site further includes:
and storing the video frame to be detected in a video frame queue, wherein the capacity of the video frame queue is greater than or equal to 2.
In another possible implementation manner of the first aspect, the detecting, by using a pre-trained target detection model, information of a preset target included in the video frame to be detected includes:
a video detection thread reads the video frame to be detected from the video frame queue;
acquiring a plurality of bounding boxes from the video frame to be detected, carrying out object type detection on the object contained in each bounding box, and determining the offset of each bounding box;
and determining the preset targets contained in the video frame to be detected and the current state of each preset target according to the object types and the bounding box offsets.
In yet another possible implementation manner of the first aspect, the target detection result includes a preset target and a current state of the preset target;
the detecting an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result includes:
for a non-human target in the video frame to be detected, determining that the non-human target has an abnormal condition when the current state of the non-human target is inconsistent with the normal state of the non-human target;
and for a person target contained in the video frame to be detected, judging whether an abnormal behavior exists in the posture recognition result of the person according to an abnormal behavior evaluation criterion.
In another possible implementation manner of the first aspect, the determining, according to an abnormal behavior evaluation criterion, whether an abnormal behavior exists in a posture recognition result of a person included in the video frame to be detected includes:
after the operation of an operation site is finished, if it is detected that personnel remain in a target area of the operation site, determining that a disappearance anomaly exists;
for the same personnel target in a dangerous area of the operation site, judging whether the person is running according to the displacement of the personnel target per unit time;
when a person is detected in a target area of the operation site and is determined to be a non-worker, recording the stay time of the non-worker in the target area, and determining that a loitering anomaly exists when the stay time exceeds a preset time threshold;
when a person is detected within a first preset distance range of target equipment on the operation site, determining that an equipment operation violation exists;
and for a worker on the operation site, determining that the worker has committed a violation when a prohibited item is detected within a second preset distance range of the worker.
In yet another possible implementation manner of the first aspect, the method further includes:
and when the abnormal condition in the operation site is detected, sending out an alarm signal.
In a second aspect, the present application further provides an operation site safety protection detection device, including:
the video frame acquisition module is used for acquiring a video frame to be detected from a monitoring video of an operation site;
the target detection module is used for detecting information of a preset target contained in the video frame to be detected by using a target detection model obtained through pre-training to obtain a target detection result, wherein the preset target comprises personnel and an object needing to be detected in an operation site;
the posture recognition module is used for performing posture recognition on the personnel contained in the video frame to be detected to obtain a posture recognition result;
and the abnormal condition detection module is used for detecting an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result, wherein the abnormal condition comprises at least one of an abnormal object and an abnormal behavior.
In a possible implementation manner of the second aspect, the video frame acquisition module includes:
the video frame reading submodule is used for reading a frame of video image from the monitoring video;
the gray level homogenization processing submodule is used for carrying out gray level homogenization processing on the video image to obtain a gray level uniform image if the average gray level value of the read video image is lower than a preset gray level threshold value;
and the sharpening filtering processing submodule is used for carrying out image sharpening filtering processing on the uniform gray level image to obtain the video frame to be detected, or directly carrying out image sharpening filtering processing on the video image to obtain the video frame to be detected when the average gray level value of the read video image is not lower than the preset gray level threshold value.
In another possible implementation manner of the second aspect, the video frame acquiring module further includes:
and the video frame storage submodule is used for storing the video frame to be detected in a video frame queue, wherein the capacity of the video frame queue is greater than or equal to 2.
According to the operation site safety protection detection method, a video frame to be detected is obtained from the monitoring video of the operation site, and target detection and posture recognition are carried out on the video frame. An abnormal condition existing in the operation site is then recognized according to at least one of the target detection result and the posture recognition result. The scheme combines target detection and posture recognition: the basic states of personnel and objects in the operation site are recognized through target detection to preliminarily screen out obvious abnormal conditions, and the postures of personnel in the operation site are then recognized to judge whether abnormal behaviors exist. Active security detection is thereby realized, and the safety protection capability of the operation site is improved.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an operation field security protection detection method provided by an embodiment of the present application is shown, where the method may be executed in a computer device, for example, a computer in a remote monitoring system. As shown in fig. 1, the method may include the steps of:
and S110, acquiring a video frame to be detected from the monitoring video of the operation site.
And reading a monitoring video uploaded by a camera on an operation site, and reading a frame of original video image from the monitoring video. Since the operation site may be limited by adverse factors such as an actual shooting angle and lighting conditions, the video image in the original monitoring video may have a problem of low brightness or low definition, and therefore, the original video image needs to be subjected to preliminary processing and then to be subjected to subsequent detection.
In an embodiment of the present application, as shown in fig. 2, the process of performing preliminary processing on the original video image to obtain the video image to be detected may include:
and S111, reading a frame of original video image from the original monitoring video.
S112, judging whether the average gray value of the original video image is lower than a preset gray threshold value or not; if so, executing S113; if not, S114 is performed.
The preset gray threshold can be set according to the actual application requirements, and when the average gray value of the image is lower than the preset gray threshold, the image is considered to be dark as a whole, and the contrast needs to be enhanced.
And S113, carrying out gray level homogenization treatment on the original video image to obtain a gray level uniform image.
Gray level homogenization is applied to the original video image to address dark pictures caused by insufficient on-site illumination intensity or other causes.
In one possible implementation, histogram equalization processing may be utilized for darker images.
Histogram equalization is a method for enhancing the contrast of an image. Its main idea is to transform the histogram distribution of the image into an approximately uniform distribution, thereby enhancing the contrast; the overall brightness of the processed image is also higher.
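The preprocessing branch of S112-S113 can be sketched in pure Python as follows (an illustrative minimum only: the gray threshold is an example value, the image is represented as a list of pixel rows, and a production system would typically use a library routine such as OpenCV's `equalizeHist`):

```python
def mean_gray(gray):
    """Average gray level of a 2-D image given as a list of rows."""
    pixels = [p for row in gray for p in row]
    return sum(pixels) / len(pixels)

def equalize_histogram(gray, levels=256):
    """Histogram equalization: remap gray levels so their cumulative
    distribution becomes approximately uniform, stretching contrast."""
    pixels = [p for row in gray for p in row]
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the gray levels.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # flat image: nothing to equalize
        return [row[:] for row in gray]
    # Classic equalization mapping, scaled back to [0, levels - 1].
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in gray]

GRAY_THRESHOLD = 100  # example value; the patent leaves the threshold configurable

def preprocess(gray):
    """Branch from S112-S113: equalize only when the frame is too dark."""
    if mean_gray(gray) < GRAY_THRESHOLD:
        return equalize_histogram(gray)
    return gray
```

A dark frame is stretched toward the full gray range, raising its mean brightness, while a frame at or above the threshold passes through untouched.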
And S114, carrying out image sharpening filtering processing on the image obtained in the last step to obtain a video frame to be detected.
In an application scenario, the image sharpening filtering process is continued for the gray-level uniform image obtained in step S113.
In another application scenario, the brightness of the original video image read from the original monitoring video is normal but blurry, and the read original video image can be directly subjected to image sharpening filtering processing.
The image sharpening filtering process is to highlight the edge portion of the image (i.e., the high frequency portion of the image) and enhance the contrast of the boundary pixels so that the difference of the boundary pixels is larger. For example, an isotropic filter may be constructed using the laplacian operator to perform image sharpening to increase the sharpness of the image.
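The Laplacian-based sharpening mentioned above can be sketched on a small grayscale array (illustrative only; real systems would use an optimized library filter, and the kernel shown is the common 4-neighbour Laplacian):

```python
def laplacian_sharpen(gray):
    """Sharpen a 2-D grayscale image with an isotropic Laplacian kernel.

    out = gray - laplacian(gray), which boosts edges (the high-frequency
    part of the image) while leaving flat regions unchanged. Border
    pixels are copied unmodified for simplicity.
    """
    kernel = [[0, 1, 0],
              [1, -4, 1],
              [0, 1, 0]]  # 4-neighbour Laplacian
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = sum(kernel[j][i] * gray[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            # Subtracting the Laplacian emphasizes intensity transitions.
            out[y][x] = min(255, max(0, gray[y][x] - lap))
    return out
```

On a flat region the Laplacian is zero and the pixel is unchanged; at an intensity spike the response is strongly negative, so subtracting it pushes the edge pixel brighter (clamped to 255).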
And S120, detecting information of a preset target contained in the video frame to be detected by using a target detection model obtained through pre-training to obtain a target detection result.
After the video frame to be detected is obtained, target detection and posture recognition are carried out in sequence. This step performs target detection on the image.
The target detection model is obtained by training on sample images. Data augmentation is applied at the initial training stage to compensate for problems such as monotonous scenes and limited lighting conditions, and models of different scales are tested in order to select the deep learning network with the highest accuracy.
The target detection algorithm can adopt a one-stage method represented by YOLO and SSD. The main idea is to perform uniform dense sampling at different positions of the picture, using different scales and aspect ratios, and then, after extracting features with a CNN, directly perform classification and regression. The whole process takes only one step, so the detection speed is high.
Taking the SSD algorithm as an example, the process of detecting the target as shown in fig. 3 includes the following steps:
s121, selecting a plurality of bounding boxes from the video frame to be detected.
And selecting a bounding box (bounding box) with preset scale and aspect ratio in the video frame to be detected.
And S122, detecting the object type of the object contained in each bounding box, and determining the offset of the bounding box.
For each bounding box, the classification label and the offset of the object contained in the bounding box are predicted so as to better frame the object.
And S123, determining that the video frame to be detected contains the preset targets and the current state of each preset target according to the object type and the boundary box offset.
For a video frame to be detected, the prediction results of a plurality of feature maps with different sizes are combined so as to process objects with different sizes.
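The offset step of S122-S123 can be illustrated with the center-size offset decoding commonly used in SSD implementations (a sketch under that assumption; the variance constants are conventional defaults in public SSD code, not values specified by this application):

```python
import math

def decode_ssd_offset(default_box, offsets, variances=(0.1, 0.2)):
    """Decode SSD-style bounding-box offsets against a default box.

    default_box: (cx, cy, w, h) in normalized image coordinates.
    offsets: predicted (t_cx, t_cy, t_w, t_h) regression targets.
    Returns the decoded (cx, cy, w, h) box, following the common
    center-size encoding used by SSD implementations.
    """
    d_cx, d_cy, d_w, d_h = default_box
    t_cx, t_cy, t_w, t_h = offsets
    # Center offsets are scaled by the default box size; width/height
    # offsets are applied in log-space so the box never collapses.
    cx = d_cx + t_cx * variances[0] * d_w
    cy = d_cy + t_cy * variances[0] * d_h
    w = d_w * math.exp(t_w * variances[1])
    h = d_h * math.exp(t_h * variances[1])
    return (cx, cy, w, h)
```

Zero offsets reproduce the default box exactly; a positive width target enlarges the box, which is how the network "better frames" the object as described above.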
The preset targets include personnel and objects to be detected in the operation site, such as grounding rods in an overhaul area.
In one embodiment of the present application, for a preset target detected from a video frame to be detected, the category label and confidence of the target are marked on the video frame image. For example, if a rooftop air conditioner is identified in the current video frame with a confidence of 99%, the label "air_condition: 99%" may be marked on the air conditioner's region of the image.
For another example, after a person target is detected in the picture, whether the person is a worker and whether a safety helmet is worn can be further judged from the person's attire. For example, the labels "worker: 99%, helmet: 96%" indicate that the target in the bounding box is a worker wearing a safety helmet, while "non_worker: 81%, non_helmet: 86%" indicate that the target is a non-worker not wearing a safety helmet.
S130, performing posture recognition on the personnel contained in the video frame to be detected to obtain a posture recognition result.
The goal of posture recognition is to describe the shape of the human body in an image or video; it involves target detection, pose estimation, segmentation, and so on.
For example, the typical posture recognition algorithm OpenPose can estimate key points of the body, face and hands of multiple people in real time.
Of course, other posture recognition algorithms, such as Realtime Multi-Person Pose Estimation, may also be used and are not described in detail here.
S140, detecting an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result.
For a non-human target in the video image, the current state of the target is compared with its normal state; if they are inconsistent, the target is determined to have an abnormal condition.
For example, during the process of overhauling a locomotive, the grounding rod of an overhauling area needs to be grounded, and if the grounding rod is detected to be ungrounded, the grounding rod is determined to be abnormal.
It should be noted that when an item that did not appear during training is detected in the operation scene picture, that is, when an object in the image is not a preset target, the item may be something lost or left behind by a worker; in this case, the item's region in the video image is marked with a discarded-item label.
On this basis, the scheme also performs regional intrusion detection on top of target detection: a target region is defined in advance, and whether an outside person has intruded into the region is detected. The intrusion detection result is associated with the color of the region's marking line on the video image. For example, when an intrusion occurs, the region's marking line is shown in red on the video image; when there is no intrusion, the marking line is shown in green.
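The regional intrusion check above amounts to a point-in-polygon test on detected person positions; a minimal sketch (function names and the red/green convention follow the description; person positions are assumed to be detection-box feet or centers):

```python
def point_in_region(point, polygon):
    """Ray-casting point-in-polygon test for region intrusion detection.

    point: (x, y) pixel position; polygon: list of (x, y) vertices
    of the pre-defined target area.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle 'inside' each time a horizontal ray from the point
        # crosses a polygon edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def region_line_color(person_points, polygon):
    """Red marking line when anyone intrudes, green otherwise."""
    if any(point_in_region(p, polygon) for p in person_points):
        return "red"
    return "green"
```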
Whether an abnormal behavior exists in the posture recognition result of a person target in the video image is judged according to the abnormal behavior evaluation criteria.
In an embodiment of the present application, the process of determining whether the person target has the abnormal behavior according to the abnormal behavior evaluation criteria may include:
identification of disappearance appearance
In the scenario where operation on the site has ended, whether any person remains on the site is detected; if so, a disappearance anomaly is determined.
After the operation on the operation site is finished, all staff must leave the site before the vehicle can be started; at this moment, if anyone remains on the site, an anomaly is determined. In such a case, vehicle start must be prohibited and the on-site workers notified to evacuate as soon as possible.
② Abnormal running identification
For the same personnel target in a dangerous area of the operation site, whether the person is running is judged according to the displacement of the personnel target per unit time.
For example, when the displacement of the same person target per unit time in a dangerous overhaul area of the operation site is detected to exceed a preset displacement threshold, the person target is determined to be running. Running indicates an abnormal condition on the operation site; the on-site alarm device can then be triggered to send out an alarm signal, reminding on-site personnel to stop operating immediately and prompting safety supervisors to inspect tools and instruments on site.
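The displacement-per-unit-time criterion can be sketched as follows (illustrative; the units and threshold are assumptions, here pixels per second computed over a short track of bounding-box centers):

```python
import math

def is_running(track, fps, speed_threshold):
    """Flag running from a person's tracked box centers across frames.

    track: list of (x, y) bounding-box centers in consecutive frames.
    fps: frames per second of the surveillance video.
    speed_threshold: displacement per second (pixels/second in this
    sketch) above which the person is considered to be running.
    """
    if len(track) < 2:
        return False
    # Total path length over the window, converted to per-second speed.
    dist = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    seconds = (len(track) - 1) / fps
    return dist / seconds > speed_threshold
```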
③ Loitering anomaly identification
When a person is detected in a target area of the operation site and is determined to be a non-worker, the person's stay time in the target area is recorded; when the stay time exceeds a preset time threshold, a loitering anomaly is determined.
When a loitering anomaly is detected, the alarm device on the operation site can be triggered to send an alarm signal to notify on-site workers, and alarm information is displayed in the monitoring center to notify monitoring personnel.
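The stay-time bookkeeping for loitering detection can be sketched as a small tracker keyed by person identity (illustrative; how person identities are associated across frames is outside this sketch):

```python
class LoiterDetector:
    """Track how long each non-worker stays inside the target area.

    Observations are fed per detected non-worker; an alarm condition
    is reported once the continuous stay exceeds max_stay_seconds.
    """
    def __init__(self, max_stay_seconds):
        self.max_stay = max_stay_seconds
        self.first_seen = {}  # person id -> timestamp first seen in area

    def update(self, person_id, in_area, timestamp):
        """Record one observation; return True when loitering is detected."""
        if not in_area:
            self.first_seen.pop(person_id, None)  # left the area: reset
            return False
        start = self.first_seen.setdefault(person_id, timestamp)
        return timestamp - start > self.max_stay
```

Leaving the area resets the clock, so only a continuous stay beyond the threshold triggers the anomaly.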
④ Illegal operation identification
In one embodiment of the present application, illegal operation includes personal behavior of a worker that violates safety regulations, and violations committed when operating on-site machine equipment. The identification processes for the two types of violation are described below.
Personal behavior violations of workers are identified as follows: when a prohibited item is detected within a second preset distance range of a worker on the operation site, the worker is determined to have committed a violation.
For example, if a prohibited item such as a cigarette is detected in the video picture of the operation site, and the center distance between the item's recognition box and a worker's recognition box is smaller than a preset distance threshold, the item is attributed to that worker; for example, the worker is confirmed to be smoking or about to smoke. In this case, the on-site alarm device can be triggered to send out an alarm signal reminding workers of the violation, and alarm information can be displayed in the monitoring center so that monitoring personnel discover the violation in time.
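The center-distance attribution of a prohibited item to a worker can be sketched as follows (illustrative; the (x1, y1, x2, y2) box format and the threshold are assumptions):

```python
import math

def box_center(box):
    """Center of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def is_violation(item_box, worker_box, max_center_distance):
    """Attribute a prohibited item (e.g. a cigarette) to a worker when
    the centers of their detection boxes are closer than the threshold."""
    return math.dist(box_center(item_box), box_center(worker_box)) <= max_center_distance
```

The same center-distance test covers the equipment case of ④: substitute the target equipment's recognition box and the first preset distance.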
Equipment operation violations are identified as follows: when a person target is detected within a first preset distance range of target equipment on the operation site, an equipment operation violation is determined.
For example, if a person target (worker or non-worker) is detected in the picture of the operation site within the first preset distance of the center of the on-site crane's recognition box, a crane operation violation is determined. In this case, the alarm device on the operation site is triggered to notify on-site personnel, and alarm information is displayed in the monitoring center so that monitoring personnel discover the violation in time.
According to the operation site safety protection detection method provided by this embodiment, a video frame to be detected is obtained from the monitoring video of the operation site, and target detection and posture recognition are carried out on the video frame. An abnormal condition existing in the operation site is then recognized according to at least one of the target detection result and the posture recognition result. The scheme combines target detection and posture recognition: the basic states of personnel and objects in the operation site are recognized through target detection to preliminarily screen out obvious abnormal conditions, and the postures of personnel in the operation site are then recognized to judge whether abnormal behaviors exist. Active security detection is thereby realized, and the safety protection capability of the operation site is improved.
Referring to fig. 4, a flowchart of another operation site safety protection detection method provided in an embodiment of the present application is shown, in which video reading and detection/recognition are executed concurrently in multiple threads.
In order to prevent waste of system resources caused by the differing execution speeds of the threads, a queue for storing preliminarily processed frames is created using the Queue class.
S210, the video reading thread reads a frame of original video image from the original monitoring video.
And S220, the video reading thread performs primary processing on the original video image to obtain a processed video frame.
Here, the preliminary processing of the original video image is the gray level homogenization and image sharpening filtering of S112 to S114, and is not repeated here.
S230, the video reading thread judges whether the number of the video frames in the video frame queue is equal to the preset number, if so, S240 is executed; if not, S250 is executed.
To ensure that the video detection thread can process the latest video frame, the capacity of the video frame queue can be set to 2, i.e. only 2 video frames can be stored.
Of course, in other embodiments the capacity of the video frame queue may be set to other values according to actual service requirements.
S240, the video reading thread deletes the video frame with the earliest time stored in the video frame queue.
And S250, the video reading thread stores the currently processed video frame into the video frame queue.
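Steps S230-S250 amount to a bounded drop-oldest queue; a minimal single-threaded sketch (illustrative: the embodiment's implementation uses the Queue class with thread synchronization, and `collections.deque(maxlen=...)` would achieve the same drop-oldest behavior automatically):

```python
from collections import deque

class FrameQueue:
    """Bounded queue holding only the newest frames (capacity >= 2).

    When full, the oldest frame is dropped so the detection thread
    always works on recent video, mirroring steps S230-S250.
    """
    def __init__(self, capacity=2):
        self.frames = deque()
        self.capacity = capacity

    def put(self, frame):
        if len(self.frames) == self.capacity:
            self.frames.popleft()   # S240: delete the earliest frame
        self.frames.append(frame)   # S250: store the current frame

    def get(self):
        """S260: the detection side takes the oldest remaining frame."""
        return self.frames.popleft() if self.frames else None
```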
S260, the video detection thread reads one video frame from the video frame queue to perform target detection to obtain a target detection result.
S270, the video detection thread performs posture recognition on the person targets in the video frame to obtain a posture recognition result.
S280, the video detection thread detects an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result.
And S290, when the video detection thread detects that an abnormal condition exists in the operation site, sending an alarm signal.
In one possible implementation, an alarm device at the operation site may be triggered to send an alarm signal to notify the staff at the operation site.
In another possible implementation, an alarm message may also be displayed at the monitoring center to notify monitoring personnel.
In the operation site safety protection detection method provided by this embodiment, multithreaded concurrent operation allows video reading and image processing to execute in parallel, and to ensure that the processed video frame is the latest one, the capacity of the video frame queue can be set to a small value. In addition, when the video detection thread detects an abnormal condition on the operation site, an alarm signal is sent so that relevant personnel can handle it in time, ensuring the safety protection capability of the operation site.
Corresponding to the embodiment of the operation field safety protection detection method, the application also provides an embodiment of an operation field safety protection detection device.
Referring to fig. 5, a block diagram of an operation field safety protection detection apparatus provided in an embodiment of the present application is shown, where the apparatus is applied to a computer device, such as a computer device in a remote monitoring system.
As shown in fig. 5, the apparatus includes: a video frame acquisition module 110, a target detection module 120, a posture recognition module 130, and an abnormal condition detection module 140.
The video frame acquisition module 110 is configured to acquire a video frame to be detected from a monitoring video of an operation site.
In one embodiment of the present application, as shown in fig. 6, the video frame acquisition module 110 may include:
a video frame reading submodule 111, configured to read a frame of video image from a monitoring video;
a gray level homogenization processing submodule 112, configured to, if the average gray level value of the read video image is lower than a preset gray level threshold, perform gray level homogenization processing on the video image to obtain a gray level uniform image;
and a sharpening filtering processing submodule 113, configured to perform image sharpening filtering processing on the uniform-gray-level image to obtain a video frame to be detected, or, when the average gray level value of the read video image is not lower than the preset gray level threshold, directly perform image sharpening filtering processing on the video image to obtain the video frame to be detected.
In another embodiment of the present application, as shown in fig. 6, the video frame acquisition module 110 further includes:
a video frame storage submodule 114, configured to store the video frames to be detected in a video frame queue, wherein the capacity of the video frame queue is greater than or equal to 2.
The target detection module 120 is configured to detect information of a preset target included in the video frame to be detected by using a target detection model obtained through pre-training, so as to obtain a target detection result.
The preset targets comprise personnel and objects needing to be detected in an operation site.
In an embodiment of the present application, the target detection module 120 is specifically configured to:
the video detection thread reads the video frame to be detected from the video frame queue; acquires a plurality of bounding boxes from the video frame to be detected, carries out object type detection on the object contained in each bounding box, and determines the offset of each bounding box; and determines the preset targets contained in the video frame to be detected and the current state of each preset target according to the object types and the bounding box offsets.
The posture recognition module 130 is configured to perform posture recognition on the personnel contained in the video frame to be detected to obtain a posture recognition result.
The abnormal condition detection module 140 is configured to detect an abnormal condition of the operation site according to at least one of the target detection result and the posture recognition result.
Wherein the abnormal condition includes at least one of an abnormal object and an abnormal behavior.
In one embodiment of the present application, the target detection result includes a preset target and a current state of the preset target; in this case, the abnormal condition detection module 140 is specifically configured to:
for a non-human target in a video frame to be detected, determining that the non-human target has an abnormal condition when the current state of the non-human target is inconsistent with the normal state of the non-human target;
and for a person target contained in the video frame to be detected, judge whether an abnormal behavior exists in the posture recognition result of the person according to the abnormal behavior evaluation criteria.
In another embodiment of the present application, when judging, for a person target contained in the video frame to be detected, whether an abnormal behavior exists in the person's posture recognition result according to the abnormal behavior evaluation criteria, the abnormal condition detection module 140 is specifically configured to:
after the operation of the operation site is finished, if it is detected that personnel remain in a target area of the operation site, determine that a disappearance anomaly exists;
for the same personnel target in a dangerous area of the operation site, judge whether the person is running according to the displacement of the personnel target per unit time;
when a person is detected in a target area of the operation site and is determined to be a non-worker, record the stay time of the non-worker in the target area, and determine that a loitering anomaly exists when the stay time exceeds a preset time threshold;
when a person is detected within a first preset distance range of target equipment on the operation site, determine that an equipment operation violation exists;
and for a worker on the operation site, when a prohibited item is detected within a second preset distance range of the worker, determine that the worker has committed a violation.
In another embodiment of the present application, as shown in fig. 5, the apparatus further comprises: an alarm module 150, configured to send an alarm signal when an abnormal condition is detected in the operation site.
The operation site safety protection detection device provided by the application obtains a video frame to be detected from the monitoring video of the operation site, and carries out target detection and posture recognition on the video frame. An abnormal condition existing in the operation site is then recognized according to at least one of the target detection result and the posture recognition result. The scheme combines target detection and posture recognition: the basic states of personnel and objects in the operation site are recognized through target detection to preliminarily screen out obvious abnormal conditions, and the postures of personnel in the operation site are then recognized to judge whether abnormal behaviors exist. Active security detection is thereby realized, and the safety protection capability of the operation site is improved.
The present application also provides a computer device, comprising a memory and a processor, wherein the memory stores a program, and the processor calls the program stored in the memory to implement the above method for detecting safety protection of an operation site.
The present application also provides a storage medium, wherein the storage medium stores a program executable by a computing device, and when the program is executed by the computing device, the above method for detecting safety protection of an operation site is implemented.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the illustrated ordering of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
It should be noted that technical features described in the embodiments in the present specification may be replaced or combined with each other, each embodiment is mainly described as a difference from the other embodiments, and the same and similar parts between the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of the embodiments of the present application may be sequentially adjusted, combined, and deleted according to actual needs.
The device and the modules and sub-modules in the terminal in the embodiments of the present application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts displayed as modules or sub-modules may or may not be physical modules or sub-modules; they may be located in one place, or may be distributed over a plurality of network nodes. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.