Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, systems, steps, and so forth. In other instances, well-known methods, systems, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or distributed across different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
As described above, common indoor positioning techniques include Wi-Fi, Bluetooth, infrared, RFID, and ZigBee. The inventor of the present disclosure has found that these indoor positioning technologies have the following disadvantages in practical application:
Wi-Fi positioning: a Wi-Fi access point usually covers only an area with a radius of about 90 meters and is easily interfered with by other signals, which degrades positioning precision; in addition, the energy consumption of the locator is high.
Bluetooth positioning: in a complex spatial environment, the stability of a Bluetooth positioning system is relatively poor, and it is strongly affected by noise interference.
Infrared positioning: optical sensors installed indoors receive modulated infrared signals transmitted by each mobile device (an infrared IR tag) to determine its position. However, because light cannot pass through obstacles, infrared propagates only along the line of sight, is easily disturbed by other light sources, and has a short transmission distance, so its indoor positioning performance is poor. When a mobile device is placed in a pocket or blocked by a wall, it cannot work normally, and a receiving antenna must be installed in every room or corridor, making the overall cost high.
RFID positioning: radio-frequency signals are used for contactless bidirectional communication to exchange data, thereby identifying and locating mobile devices. Its disadvantages include a short operating range.
To overcome these defects of existing indoor positioning technology, the present disclosure provides a novel trajectory tracking method and system, which can locate and track people and determine their motion trajectories with the aid of three types of cameras. A detailed description is given below in conjunction with specific examples.
FIG. 1 is a block diagram illustrating an application scenario of a trajectory tracking system in accordance with an exemplary embodiment.
As shown in fig. 1, the system architecture 10 may include image capturing apparatuses 101, 102, 103, a network 104, and a processing device 105. The network 104 is a medium that provides communication links between the image capturing apparatuses 101, 102, 103 and the processing device 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables, to name a few.
The image capturing apparatuses 101, 102, 103 can interact with the processing device 105 through the network 104 to receive or transmit data and the like. Various communication client applications may be installed on the image capturing apparatuses 101, 102, 103.
The image capturing apparatuses 101, 102, 103 may be various electronic devices having a camera function, including but not limited to professional cameras, CCD cameras, web cameras, broadcast-grade cameras, business-grade cameras, home-grade cameras, studio/field cameras, camcorders, monochrome cameras, color cameras, infrared cameras, X-ray cameras, surveillance cameras, scout cameras, button cameras, and the like.
The image capturing apparatus 101 may be an initial image capturing device for acquiring an initial image of a target object.
The image capturing apparatus 102 may include a plurality of first camera devices for acquiring a first video of the target object.
The image capturing apparatus 103 may include a plurality of second camera devices for acquiring a second video of the target object based on the initial image.
The processing device 105 may be a processing device that provides various services, such as background processing of pictures or videos taken by the image capturing apparatuses 101, 102, 103. The processing device 105 may analyze and otherwise process the received data, such as pictures or video, and feed the processing result (e.g., the trajectory of the target object) back to the administrator.
The processing device 105 may, for example, acquire an initial image of the target object; acquire a first video of the target object and generate trajectory information by tracking the moving route of the target object based on the first video; obtain a second video of the target object based on the initial image and generate supplementary information based on the second video; and adjust the trajectory information based on the supplementary information to perform trajectory tracking of the target object.
The processing device 105 may be a single physical processing device or may be composed of a plurality of processing devices. It should be noted that the trajectory tracking method provided by the embodiments of the present disclosure may be executed jointly by the image capturing apparatuses 101, 102, 103 and the processing device 105; accordingly, a trajectory tracking system may be disposed across the image capturing apparatuses 101, 102, 103 and the processing device 105. The processing device 105 may control the image capturing apparatuses 101, 102, and 103 to capture data and to transmit and receive data, or the image capturing apparatuses 101, 102, and 103 may actively capture data and transmit the captured data to the processing device 105 for processing.
The processing device 105 may be a server or a terminal device, and the terminal device may include at least one camera device, which is not limited by the present disclosure. In one embodiment, the method in the present disclosure may be carried out by designating any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) as the processing device. In this case, the image capturing apparatus 103 may, in addition to capturing its own video images, receive video data sent by the image capturing apparatuses 101, 102 and process the video data according to the method described in the embodiments of the present disclosure to track a user, which is not limited by the present disclosure.
FIG. 2 is a schematic diagram illustrating an application scenario of a trajectory tracking system according to an exemplary embodiment. As shown in fig. 2, the initial image capturing device may include a camera L and an information collecting device; the first camera devices may be image acquisition devices 1-19; and the second camera devices may be image acquisition devices A-K.
The camera L is used to identify people at the entrance. For example, information can be collected at the entrance gate through the camera L, a two-dimensional code, an RFID card, and the like: various types of characteristic information of users entering the indoor area are collected, and correspondences between these types of characteristic information are established. For example, in an unattended-supermarket scenario, a user enters the supermarket by scanning a two-dimensional code; the code-scanning device acquires the user's two-dimensional code information, the camera acquires an image of the user, and a correspondence between the user information and the two-dimensional code is established.
The image acquisition devices 1-19 are installed on the indoor ceiling. For example, a plurality of fisheye cameras may shoot downward from the ceiling, capturing the top-of-head features of a moving object. When camera 1 is tracking the top of a user's head, it can follow the user's motion track and determine the moving direction of the target. When camera 1 determines that the user is about to enter the area collectable by camera 2, camera 2 can be notified in advance, tracking of the user's track switches to camera 2, and the moving-target information is synchronized with the track information of the target user formed by camera 2. Continuing in this way, cameras 1-19 together can track the user's motion trajectory.
The image acquisition devices A-K are a plurality of cameras arranged at fixed positions and used to correct moving-target information. When the ceiling-mounted cameras 1-19 track a target object, problems such as a lost tracking chain or tracking errors caused by occlusion may occur. The A-K cameras acquire and analyze image information of the users, and the specific user corresponding to a given head-top is corrected or re-determined by algorithms such as face recognition, video structuring, and gait recognition. For example, camera 1 may be tracking the heads of two users at the same time and confuse them; the A-K cameras can acquire images of the users and, by combining the various types of user information (such as face, clothing, and gait) previously collected by the camera L, determine which head belongs to which user and continue tracking.
According to the trajectory tracking method, an initial image of a target object is acquired; a first video of the target object is acquired, and the moving route of the target object is tracked based on the first video to generate trajectory information; a second video of the target object is acquired based on the initial image, and supplementary information is generated based on the second video; and the trajectory information is adjusted based on the supplementary information to track the target object, so that the target object can be accurately positioned and its motion trajectory determined even in a complex indoor environment.
FIG. 3 is a flow chart illustrating a trajectory tracking method according to an exemplary embodiment. The trajectory tracking method 30 includes steps S302 to S308. The trajectory tracking method is applicable to a server or a terminal device, wherein the terminal device may include at least one camera. In one embodiment, the method in the present disclosure may also be carried out by designating any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) as a terminal; in this case, the image capturing apparatus 103 may, in addition to capturing its own video images, receive video data sent by the image capturing apparatuses 101, 102 and process the video data according to the method described in the embodiments of the present disclosure to track a user, which is not limited by the present disclosure.
As shown in fig. 3, in S302, an initial image of the target object is acquired. This step includes acquiring the initial image of the target object by an initial imaging device, and may further include: acquiring feature information of the target object by an information acquisition device, wherein the feature information may include a two-dimensional code and/or a radio-frequency identification code; and associating and storing a plurality of pieces of historical track information of the target object according to the feature information.
Further, the user information may include a face, clothing, gait, and the like. For example, in an unattended-supermarket scenario, a user enters the supermarket through a two-dimensional code, which can serve as the user's characteristic information; the code-scanning device acquires the user's two-dimensional code information, a camera captures an image of the user to serve as the initial image, and the correspondence between the user image and the two-dimensional code is established in the background processing device.
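As a purely illustrative sketch (not part of the claimed subject matter), the correspondence described above between a user's two-dimensional code, initial image, and historical tracks could be kept in a simple registry; all class and field names here are hypothetical:

```python
# Hypothetical sketch of the entry-registration step: the gate scanner yields
# a QR-code payload, the entry camera yields an initial image, and the backend
# stores the correspondence so later sightings can be matched to the same user.

class EntryRegistry:
    """Maps each user's QR-code ID to their initial image and track history."""

    def __init__(self):
        self._records = {}

    def register(self, qr_code_id, initial_image, extra_features=None):
        """Create (or overwrite) the record linking a QR code to an entry image."""
        self._records[qr_code_id] = {
            "initial_image": initial_image,
            "features": extra_features or {},   # e.g. face, clothing, gait
            "tracks": [],                       # historical trajectories
        }

    def append_track(self, qr_code_id, track):
        """Associate a finished trajectory with the user's record."""
        self._records[qr_code_id]["tracks"].append(track)

    def lookup(self, qr_code_id):
        return self._records.get(qr_code_id)


registry = EntryRegistry()
registry.register("user-001", initial_image=b"<jpeg bytes>",
                  extra_features={"clothing": "red coat"})
registry.append_track("user-001", [(0, 0), (1, 2)])
```

In practice the record would be persisted by the background processing device; the in-memory dictionary above only illustrates the association being established.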
In S304, a first video of the target object is acquired, and a moving route of the target object is tracked based on the first video to generate track information. Wherein the first video comprises at least one frame of image.
In one possible implementation, step S304 may include: when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and generating the trajectory information based on the plurality of first videos.
The details of "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate the track information" will be described in the embodiment corresponding to fig. 4. Each of the first cameras may track the target object and generate its own first video, or some of the first cameras may together generate one first video while the remaining first cameras generate several first videos, which is not limited in this disclosure.
In S306, a second video of the target object is acquired based on the initial image, and supplemental information is generated based on the second video. Wherein the second video comprises at least one frame of image.
In one possible implementation, step S306 may include: acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
The details of "acquiring a second video of the target object based on the initial image, and generating supplemental information based on the second video" will be described in the embodiment corresponding to fig. 5.
In S308, the trajectory information is adjusted based on the supplemental information to perform trajectory tracking of the target object.
In one embodiment, further comprising: storing a plurality of historical track information of the target object; and analyzing and processing the behavior of the target object based on the plurality of historical track information.
In one embodiment, further comprising: and generating early warning information when the track information of the target object meets a preset condition.
In one embodiment, in an unattended supermarket, a face image of a user collected by a camera can be stored in the processing device. When the user reappears, a data record for the user can be built up through face-image comparison, and from this record the user's purchasing habits can be continuously analyzed, such as the user's preferences, living habits, and work-and-rest schedule. Further, a warning may be issued when preset conditions are met; for example, if a user appears in the supermarket too frequently, the user may be considered to have a possible malicious intent, and a warning is raised so that the manager pays attention to that user.
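As one hedged illustration of such a frequency-based warning condition, the following sketch flags a user whose visit count within a sliding time window exceeds a threshold; the window length and threshold are arbitrary example values, not values prescribed by this disclosure:

```python
# Illustrative early-warning rule: warn if more than `max_visits` visits fall
# inside any sliding window of length `window`. Thresholds are assumptions.

from datetime import datetime, timedelta

def should_warn(visit_times, window=timedelta(days=1), max_visits=5):
    """Return True if any window of `window` length holds more than
    `max_visits` visits."""
    visits = sorted(visit_times)
    for i, start in enumerate(visits):
        # count visits falling inside [start, start + window]
        in_window = [t for t in visits[i:] if t - start <= window]
        if len(in_window) > max_visits:
            return True
    return False


base = datetime(2024, 1, 1, 9, 0)
frequent = [base + timedelta(hours=h) for h in range(8)]  # 8 visits in 7 hours
assert should_warn(frequent) is True
```

Any other preset condition (dwell time, restricted areas, and so on) could be substituted for the visit-count rule without changing the overall flow.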
According to the trajectory tracking method of the present disclosure, people can be located and tracked and their motion trajectories determined through the three types of cameras. Data analysis can be performed based on the acquired motion trajectories, and the analysis results can be used for early warning and the like.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
FIG. 4 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 40 shown in fig. 4 is a detailed description of S304, "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate track information," in the flow shown in fig. 3.
As shown in fig. 4, in S402, one of the plurality of first image capturing apparatuses is controlled to acquire a first video of the target object. The plurality of first camera devices can be arranged on the indoor ceiling; they may be a plurality of fisheye cameras shooting downward from the ceiling, or cameras of different types working in cooperation with one another, which is not limited by this disclosure.
In S404, a moving direction of the target object is determined according to the first video. While the current first camera device is capturing the user, the direction of the user's track can be determined from the range of positions at which the user appears within that camera's field of view.
In S406, another first camera device is controlled to acquire another first video of the target object according to the moving direction. The image area the user is about to enter is determined by analyzing the moving direction, the first camera device covering that area is identified, and when the user enters the area, tracking switches to that camera device to continue capturing the user's track.
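The handoff decision in S404-S406 could, for instance, be sketched as follows, assuming each first camera device covers a known rectangular floor region; the region layout and all coordinates are illustrative only:

```python
# Minimal sketch of camera handoff: project the target one step along its
# moving direction and find which camera's region contains the projected point.

def predict_next_camera(position, direction, regions):
    """Return the id of the camera whose rectangular region (x0, y0, x1, y1)
    contains the target's projected position, or None if no region does."""
    x, y = position[0] + direction[0], position[1] + direction[1]
    for cam_id, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return cam_id
    return None


# Two adjacent ceiling cameras, each covering a 5 m x 5 m area.
regions = {"cam1": (0, 0, 5, 5), "cam2": (5, 0, 10, 5)}
# Target near cam1's right edge, moving right: cam2 should take over.
assert predict_next_camera((4.5, 2.0), (1.0, 0.0), regions) == "cam2"
```

The returned camera id would then be notified in advance so that it can take over tracking, as described for cameras 1-19 above.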
In S408, a plurality of movement trajectories of the target object in the plurality of first videos are determined.
In S410, the trajectory information is generated based on the positional relationship between the plurality of movement routes and the plurality of first camera devices. During video capture, the user's trajectory information can be generated from the positions of the different first camera devices and the user's movement tracks in the different areas.
In one embodiment, for example, the area covered by each first camera device that the user passes through is marked to sketch an approximate range of the user's track; the user's track within each area is then drawn, the tracks of adjacent areas are connected, and the user's overall track information is determined.
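For illustration only, the stitching of per-area tracks into one overall trajectory described above might look like the following sketch, assuming each camera reports points in its own local coordinates and its mounting origin on a common floor plan is known; the offsets and coordinates are hypothetical:

```python
# Illustrative realization of S408-S410: shift each camera's local track
# segment into a common floor coordinate system and concatenate in time order.

def stitch_tracks(segments, camera_origins):
    """segments: list of (camera_id, [(x, y), ...]) in camera-local coords,
    already in time order. Returns one trajectory in global coordinates."""
    trajectory = []
    for cam_id, points in segments:
        ox, oy = camera_origins[cam_id]
        trajectory.extend((x + ox, y + oy) for x, y in points)
    return trajectory


origins = {"cam1": (0.0, 0.0), "cam2": (5.0, 0.0)}  # mounting positions
segments = [("cam1", [(4.0, 2.0), (4.5, 2.0)]),
            ("cam2", [(0.5, 2.0), (1.0, 2.5)])]
track = stitch_tracks(segments, origins)
assert track == [(4.0, 2.0), (4.5, 2.0), (5.5, 2.0), (6.0, 2.5)]
```

A production system would also need lens-distortion correction for fisheye images and time synchronization between cameras, which this sketch omits.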
FIG. 5 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 50 shown in fig. 5 is a detailed description of S306, "acquiring a second video of the target object based on the initial image and generating supplementary information based on the second video," in the flow shown in fig. 3.
As shown in fig. 5, in S502, a plurality of real-time videos captured by a plurality of second camera devices are acquired. The second camera devices can be arranged at fixed positions and are used to correct moving-target information, for example when occlusion causes the tracking chain for a user to be lost or tracking errors to occur. Each real-time video comprises at least one frame of image.
In a possible implementation manner, the second video corresponding to the target object may be determined in at least one of steps S504, S506 and S508.
In S504, face recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Face recognition is a biometric technology that identifies a person based on facial feature information. Commonly also called portrait recognition or facial recognition, it covers a series of related technologies that use a camera or video camera to collect images or video streams containing faces, automatically detect and track the faces in the images, and then recognize the detected faces.
In S506, video-structuring computation is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Video structuring refers to building a structured video big-data platform from the attributes presented in the video frame, such as people, vehicles, objects, colors, and numbers. After structuring, the video is stored in a corresponding structured data warehouse, greatly reducing storage requirements. A person in the video image can be structurally processed to obtain various structured feature attributes of the user, including clothing-and-accessory features (coat, trousers, skirt or dress, shoes, hat, glasses or sunglasses, scarf, belt or waistband), carried-object features (single-shoulder satchel, backpack, handbag, trolley case, umbrella), and human-body features (hair, face).
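Purely as an illustration of the kind of structured record such processing might yield for one person, the following sketch groups the attribute categories listed above; the field names and the crude matching rule are assumptions, not a real platform's schema:

```python
# Hypothetical structured-attribute record for one person in a frame,
# grouped by the attribute categories named in the description.

from dataclasses import dataclass

@dataclass
class PersonAttributes:
    # clothing-and-accessory features
    coat: str = ""
    trousers_or_skirt: str = ""
    shoes: str = ""
    hat: bool = False
    glasses: bool = False
    # carried-object features
    bag: str = ""            # e.g. "backpack", "handbag", "trolley case"
    umbrella: bool = False
    # body features
    hair: str = ""
    face_id: str = ""        # identifier from the face-recognition step

def matches(a: PersonAttributes, b: PersonAttributes) -> bool:
    """Crude similarity rule for the sketch: same coat and same face id."""
    return a.coat == b.coat and a.face_id == b.face_id

entry = PersonAttributes(coat="red coat", bag="backpack", face_id="f-42")
seen = PersonAttributes(coat="red coat", face_id="f-42")
assert matches(entry, seen)
```

A real system would score many attributes with learned similarity models rather than exact string comparison; the record shape is the point of the sketch.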
In S508, gait recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Gait recognition is a biometric technology that identifies a person by their walking posture; compared with other biometric technologies, it works contactlessly at long range and is difficult to disguise against. In the field of intelligent video surveillance, it offers more advantages than image recognition.
In S510, a real-time location of the target object is determined based on the second video.
In S512, the supplementary information is generated based on the real-time position and the positional relationships of the plurality of second camera devices.
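One hedged way to realize S510-S512 is sketched below: a corrected global position derived from a second camera device replaces the nearest (presumably drifted) point of the overhead trajectory. The nearest-point replacement rule and all coordinates are illustrative assumptions, not the only way to apply the supplementary information:

```python
# Illustrative trajectory correction: the second camera's known mounting
# position plus the target's position in its frame give a corrected global
# point, which replaces the closest point of the overhead trajectory.

def correct_trajectory(trajectory, corrected_point):
    """Replace the trajectory point nearest to `corrected_point`."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    i = min(range(len(trajectory)),
            key=lambda k: dist2(trajectory[k], corrected_point))
    fixed = list(trajectory)
    fixed[i] = corrected_point
    return fixed


track = [(0.0, 0.0), (1.0, 0.0), (2.5, 0.5), (3.0, 0.0)]  # (2.5, 0.5) drifted
assert correct_trajectory(track, (2.0, 0.0)) == [
    (0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
```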
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments can be implemented as computer programs executed by a CPU. When executed by the CPU, such a program performs the functions defined by the above methods provided in the present disclosure. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
FIG. 6 is a block diagram illustrating a trajectory tracking system in accordance with an exemplary embodiment. As shown in fig. 6, the trajectory tracking system 60 includes: an initial image capturing device 602, a first image capturing device 604, a second image capturing device 606, and a processing device 608.
The initial camera 602 is used for acquiring an initial image of a target object.
The plurality of first cameras 604 are used for acquiring a first video of the target object. The plurality of first cameras 604 are arranged at predetermined positions according to a first rule, and the first cameras 604 comprise fisheye cameras.
The plurality of second cameras 606 are used for acquiring a second video of the target object based on the initial image. The plurality of second cameras 606 are arranged at predetermined positions according to a second rule.
The processing device 608 is configured to track the moving route of the target object based on the first video to generate track information, generate supplementary information based on the second video, and adjust the track information based on the supplementary information to track the target object.
The first rule may be determined according to the area to be tracked and the acquisition range of the first camera device. For example, the area to be tracked is an indoor supermarket, and a plurality of first camera devices can be uniformly or non-uniformly distributed on the ceiling of the indoor supermarket according to the area of the indoor supermarket and the collection range of the first camera devices, so that the first camera devices can collect any area of the indoor supermarket. The second rule may be determined according to the area to be tracked and the acquisition range of the second camera. For example, the area to be tracked is an indoor supermarket, the second camera devices may be respectively arranged in a plurality of fixed areas of the indoor supermarket, and the area collected by the second camera devices may be an area with a high occurrence frequency of users in the indoor supermarket, so that the second video is collected by the second camera devices in the area, and the supplementary information is generated based on the second video, thereby correcting the track information of the target object.
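As a back-of-the-envelope illustration of the first rule, the following sketch estimates how many ceiling cameras a uniform grid needs to cover a rectangular floor, assuming (for simplicity) a square footprint inscribed in each camera's circular collection radius; the room dimensions and radius are example values:

```python
# Illustrative camera-count estimate for the "first rule": tile the floor
# with grid cells of side r*sqrt(2), the largest square inscribed in a
# camera's circular footprint of radius r.

import math

def cameras_needed(room_w, room_h, camera_radius):
    """Number of grid cells needed to tile a room_w x room_h floor."""
    cell = camera_radius * math.sqrt(2)
    return math.ceil(room_w / cell) * math.ceil(room_h / cell)


# A 20 m x 10 m supermarket floor with cameras covering a 3 m radius:
print(cameras_needed(20, 10, 3))  # -> 15
```

Non-uniform layouts, occlusions, and overlapping fields of view would change this estimate; it only illustrates how the area to be tracked and the camera collection range jointly determine the placement.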
According to the trajectory tracking system of the present disclosure, an initial image of a target object is acquired; a first video of the target object is acquired, and the moving route of the target object is tracked based on the first video to generate trajectory information; a second video of the target object is acquired based on the initial image, and supplementary information is generated based on the second video; and the trajectory information is adjusted based on the supplementary information to track the target object, so that the target object can be accurately positioned and its motion trajectory determined even in a complex indoor environment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 700 according to this embodiment of the disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 is embodied in the form of a general-purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 that connects the various system components (including the memory unit 720 and the processing unit 710), a display unit 740, and the like.
The memory unit stores program code executable by the processing unit 710 to cause the processing unit 710 to perform the steps according to various exemplary embodiments of the present disclosure described in the method sections above of this specification. For example, the processing unit 710 may perform the steps shown in figs. 3, 4, and 5.
The memory unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 7201 and/or a cache memory unit 7202, and may further include a read-only memory unit (ROM) 7203.
The memory unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these, or some combination thereof, may include an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 700' (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., a router, a modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 760. The network adapter 760 may communicate with other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, as shown in fig. 8, the technical solution according to the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a processing apparatus, or a network device, etc.) to execute the above method according to the embodiment of the present disclosure.
The software product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or processing apparatus. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following functions: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate trajectory information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to perform trajectory tracking of the target object.
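The steps above can be sketched as follows. This is a minimal illustrative sketch only: the class name `TrackingSession`, the method names, and the representation of trajectory information as a list of coordinate points are all assumptions for illustration; the disclosure does not prescribe a concrete implementation.

```python
# Hypothetical sketch of the trajectory-tracking flow described above.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # simplified stand-in for trajectory information


@dataclass
class TrackingSession:
    initial_image: str                      # identifier of the target object's initial image
    trajectory: List[Point] = field(default_factory=list)

    def track_first_video(self, frames: List[Point]) -> None:
        """Track the target's moving route in the first video to generate trajectory information."""
        self.trajectory = list(frames)

    def supplementary_from_second_video(self, frames: List[Point]) -> List[Point]:
        """Generate supplementary information from a second video located via the initial image.

        Here the supplement is simply the points absent from the first trajectory.
        """
        return [p for p in frames if p not in self.trajectory]

    def adjust(self, supplement: List[Point]) -> List[Point]:
        """Adjust the trajectory information based on the supplementary information."""
        self.trajectory.extend(supplement)
        return self.trajectory


session = TrackingSession(initial_image="target_0001")
session.track_first_video([(0.0, 0.0), (1.0, 1.0)])
extra = session.supplementary_from_second_video([(1.0, 1.0), (2.0, 2.0)])
final = session.adjust(extra)
print(final)  # [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
```

In practice the two videos would be processed by an actual object tracker, and the adjustment step could merge, interpolate, or correct trajectory segments rather than simply appending points.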
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be modified accordingly to reside in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a processing device, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.