Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification of this application and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The robot in this application integrates a plurality of sensors of different types so as to collect different types of information and control the robot by fusing the information of these different types. This solves the problem that a robot controlled according to only one type of information has a low degree of intelligence.
As shown in fig. 1, an application scenario diagram of a control method of a robot according to an embodiment of the present application is provided, and the control method may be used to control the robot. The robot is provided with a plurality of sensors, and the plurality of sensors include: a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor. The microphone array is used for collecting voice information of a user. The light sensing sensor is used for collecting light sensing information. The radio frequency sensor is used for collecting radio frequency information. The camera is used for collecting video information. The touch sensor is used for collecting touch information of a user. The limit sensor is used for collecting action position information of the robot. The current sensor is used for collecting a current signal of a power supply in the robot. The acceleration sensor is used for collecting acceleration information of the robot. The temperature sensor is used for collecting temperature information of the environment where the robot is located. The infrared sensor is used for collecting first obstacle information of the environment where the robot is located. The ultrasonic sensor is used for collecting second obstacle information of the environment where the robot is located. The robot performs feature extraction and feature fusion on the information collected by these sensors to obtain a control strategy of the robot, and performs corresponding human-computer interaction, obstacle avoidance and movement according to the control strategy.
Specifically, feature extraction and feature fusion are performed on the first voice information collected by the microphone array, the light sensing information collected by the light sensing sensor, the radio frequency information collected by the radio frequency sensor, the first video information collected by the camera, and the first touch information collected by the touch sensor.
Feature extraction and feature fusion are also performed on the action position information collected by the limit sensor, the current signal collected by the current sensor, the acceleration information collected by the acceleration sensor, the temperature information collected by the temperature sensor, the first obstacle information collected by the infrared sensor, and the second obstacle information collected by the ultrasonic sensor.
Fig. 2 shows a schematic flow chart of a control method of the robot provided by the present application, and a processor in the robot can be used to implement the method described below. Referring to fig. 2, the method is detailed as follows:
S101, first data collected by a sensor module in the robot is obtained.
In this embodiment, the first data includes human-computer interaction information, performance information of the robot, and environment information of an environment in which the robot is located.
The man-machine interaction information comprises instruction information sent by a user. For example, the human-computer interaction information may include a voice instruction issued by the user, and an action instruction issued by the user. The performance information of the robot may include the amount of power of a battery in the robot, the current of the battery, the angle of rotation of the robot head, the angle of motion of the robot hand, and the like. The environmental information may include obstacles around the robot, the temperature, humidity, etc. of the environment.
In the embodiment, the sensor module in the robot may include a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, and the like for collecting human-computer interaction information.
The sensor module in the robot may include a limit sensor, a current sensor, an acceleration sensor, etc. for collecting performance information of the robot.
The sensor module in the robot may include a temperature sensor, an infrared sensor, an ultrasonic sensor, and the like for collecting environmental information. The infrared sensor and the ultrasonic sensor are used for acquiring information of the obstacles.
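For illustration only, the first data and its three groups of information might be organized as in the following sketch; every field name, type, and unit is an assumption made for this example and is not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class InteractionInfo:
    voice: Optional[bytes] = None        # first voice information (microphone array)
    light: Optional[float] = None        # light sensing level signal
    rf: Optional[bytes] = None           # radio frequency chip payload
    video_frame: Optional[bytes] = None  # first video information (camera)
    touched: Optional[bool] = None       # first touch information


@dataclass
class PerformanceInfo:
    joint_positions: Sequence[float] = ()  # action position info (limit sensors)
    supply_current: float = 0.0            # power supply current signal, in A
    acceleration: Sequence[float] = ()     # acceleration information, in m/s^2


@dataclass
class EnvironmentInfo:
    temperature: float = 0.0               # ambient temperature, in degrees C
    ir_obstacles: Sequence[float] = ()     # first obstacle info (infrared sensor)
    us_obstacles: Sequence[float] = ()     # second obstacle info (ultrasonic sensor)


@dataclass
class FirstData:
    interaction: InteractionInfo
    performance: PerformanceInfo
    environment: EnvironmentInfo
```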
And S102, obtaining the robot control strategy based on the first data.
In this embodiment, the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy, and a motion strategy.
Specifically, the first data is input into a neural network model to obtain a control strategy of the robot. Before the control strategy of the robot is obtained by using the neural network model, a process of training the neural network model can be further included.
Specifically, the process of training the neural network model includes: training parameters are obtained, and the training parameters can comprise human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located. And inputting the training parameters into the neural network model to be trained to obtain training result data. And comparing the training result data with preset result data to obtain difference data, and adjusting parameters in the neural network model according to the difference data. And training the neural network model after the parameters are adjusted by using the training parameters until training result data output by the neural network model meet preset requirements, and obtaining the trained neural network model.
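The training process described above could be sketched, for illustration only, with a small fully connected network in PyTorch; the architecture, the mean-squared-error loss standing in for the "difference data", and the stopping threshold are all assumptions made for this example.

```python
import torch
from torch import nn


class StrategyNet(nn.Module):
    """Hypothetical strategy network mapping fused sensor features to a control strategy vector."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)


def train(model: nn.Module, samples: torch.Tensor, targets: torch.Tensor,
          epochs: int = 100, tol: float = 1e-3) -> nn.Module:
    """Adjust parameters until the difference from the preset result data meets
    the preset requirement (here: loss below `tol`)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(samples), targets)  # compare with preset result data
        loss.backward()                          # difference data drives the update
        optimizer.step()
        if loss.item() < tol:                    # preset requirement met
            break
    return model
```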
For example, if the human-computer interaction information is an instruction from the user to rotate the head 10 degrees to the left, and the performance information of the robot is that the head of the robot has already rotated 30 degrees to the left, the robot may determine the interaction strategy as "robot announces: the head has been turned 10 degrees to the left". The motion strategy is "turn the head 10 degrees to the left". The obstacle avoidance strategy of the robot is "no obstacle avoidance".
If the human-computer interaction information is a straight-ahead instruction issued by the user, the robot detects that the environment information indicates an obstacle 2 meters ahead, and the performance information of the robot is that the acceleration of the robot is B, then the motion strategy of the robot is "do not go straight", the obstacle avoidance strategy of the robot is "avoid the obstacle 2 meters ahead", and the interaction strategy of the robot is "robot announces: there is an obstacle ahead, the robot cannot go straight, and the route has been re-planned".
And S103, controlling the robot to act based on the control strategy.
In this embodiment, according to the control strategy, the robot may be controlled to output corresponding response information, to move, and to avoid obstacles.
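A minimal sketch of this dispatch, assuming the control strategy is represented as a dictionary and that the robot exposes hypothetical speak, plan_around, and move methods (none of which are specified in this application):

```python
def execute_control_strategy(robot, strategy: dict) -> None:
    # All keys and robot methods below are illustrative assumptions.
    if "interaction" in strategy:
        robot.speak(strategy["interaction"])               # output response information
    if "obstacle_avoidance" in strategy:
        robot.plan_around(strategy["obstacle_avoidance"])  # avoid the obstacle
    if "motion" in strategy:
        robot.move(strategy["motion"])                     # perform the movement
```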
In the embodiment of the application, first data collected by a sensor module in the robot is acquired, wherein the first data includes human-computer interaction information, performance information of the robot and environment information of the environment where the robot is located; a control strategy of the robot is obtained based on the first data, wherein the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy; and the robot action is controlled based on the control strategy. In this way, at least one of an interaction strategy, an obstacle avoidance strategy and a motion strategy is obtained according to the human-computer interaction information, the performance information of the robot and the environment information of the environment where the robot is located, so that the robot is controlled by fusing multiple different types of information and the degree of intelligence of the robot is improved.
In a possible implementation manner, the human-computer interaction information includes first voice information collected by the microphone array, light-sensing information collected by the light-sensing sensor, radio-frequency information collected by the radio-frequency sensor, first video information collected by the camera, and first touch information collected by the touch sensor.
Specifically, the implementation process of step S101 may include:
S1011, acquiring first voice information collected by the microphone array.
In this embodiment, the microphone array may collect voice information of a user, and perform denoising processing on the voice information to obtain first voice information.
By way of example, the first voice information may include instructions issued by the user such as "sing", "raise your head", or "dance".
And S1012, acquiring light induction information collected by the light induction sensor, wherein the light induction information is used for determining an action instruction of a user.
In the present embodiment, the light sensing sensor generates light sensing information through light variation, and the light sensing information may be a level signal. The motion performed by the user, such as a hand waving motion of the user, can be identified through the light sensing information.
And S1013, acquiring the radio frequency information acquired by the radio frequency sensor.
In this embodiment, the radio frequency sensor is mainly used for acquiring information on a radio frequency chip on an object having the radio frequency chip, and the information acquired by the radio frequency sensor is recorded as radio frequency information in the present application. For example, the radio frequency sensor may obtain the contents of the book by scanning a radio frequency chip in the book, and after the robot acquires the contents of the book, the contents of the book may be displayed or played through an audio/video module.
And S1014, acquiring first video information acquired by the camera.
In this embodiment, the camera is mainly used for acquiring video information around the robot, and whether people, books, obstacles or the like exist around the robot can be determined through the video information.
And S1015, acquiring first touch information acquired by the touch sensor.
In this embodiment, the touch sensor is mainly used for collecting the user's touch on the robot, and the robot can make a corresponding action according to the touch information; for example, when the user touches the head of the robot, the robot may display a shy expression, and the like.
In a possible implementation manner, the performance information includes motion position information of a component on the robot, which is associated with the limit sensor, acquired by the limit sensor, a current signal of a power supply in the robot, acquired by the current sensor, and acceleration information of the robot, acquired by the acceleration sensor.
Specifically, the implementation process of step S101 may include:
and acquiring action position information of a part on the robot, which is related to the limit sensor and acquired by the limit sensor.
And acquiring a current signal of the power supply in the robot, which is acquired by the current sensor.
And acquiring the acceleration information of the robot acquired by the acceleration sensor.
In this embodiment, the limit sensors may include a first limit sensor disposed at the robot head for acquiring the motion amplitude of the robot head, a second limit sensor disposed at the robot arm for acquiring the motion amplitude of the robot arm, a third limit sensor disposed at the robot leg for acquiring the motion amplitude of the robot leg, and the like.
As an example, the first limit sensor disposed at the robot head may collect the angle by which the robot head moves leftward, rightward, upward, or downward.
In this embodiment, a current signal of a power supply in the robot may reflect whether a motor in the robot can normally operate, and if the current of the power supply in the robot is greater than a preset current, it indicates that the current of the motor is too large, and the motor is damaged to some extent, and the motor is in an abnormal operation state. From the current signal, a corresponding motion strategy and/or interaction strategy may be determined.
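A minimal sketch of this check, assuming the measured supply current is simply compared with the preset current (the helper name and example values are illustrative):

```python
def motor_state(supply_current: float, preset_current: float) -> str:
    """Hypothetical check: a supply current above the preset current is taken to
    indicate an abnormal (possibly damaged) motor, as described above."""
    return "abnormal" if supply_current > preset_current else "normal"


# Example: a 2.5 A reading against a 2.0 A preset current is flagged as abnormal.
print(motor_state(2.5, 2.0))  # -> "abnormal"
```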
In this embodiment, the acceleration information may reflect the movement of the robot at the current time, and the motion strategy of the robot may be further re-determined according to the acceleration, the surrounding environment information, and the like.
In a possible implementation manner, the environment information includes temperature information of an environment where the robot is located, which is acquired by the temperature sensor, first obstacle information of the environment where the robot is located, which is acquired by the infrared sensor, and second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
Specifically, the implementation process of step S101 may include:
and acquiring the temperature information of the environment where the robot is located, which is acquired by the temperature sensor.
And acquiring first barrier information of the environment where the robot is located, which is acquired by the infrared sensor.
And acquiring second obstacle information of the environment where the robot is located, which is acquired by the ultrasonic sensor.
In this embodiment, the temperature information collected by the temperature sensor may be used to generate interaction information when the user wants to know the current temperature, so as to meet the user's requirement.
In this embodiment, the infrared sensor and the ultrasonic sensor may both be used to detect whether an obstacle exists around the robot, so that the robot can avoid the obstacle, interact with the user, and/or generate a motion strategy accordingly.
As shown in fig. 3, in a possible implementation manner, the implementation process of step S102 may include:
and S1021, performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data.
Specifically, the human-computer interaction information is input into a feature extraction model, and feature extraction is performed on the human-computer interaction information to obtain feature data. After the feature data is obtained, the feature data may be input into the first feature fusion model to perform data combination and data fusion, so as to obtain first fusion data.
And S1022, performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data.
Specifically, the performance information is input into a feature extraction model, and feature extraction is performed on the performance information to obtain first feature data. And inputting the environment information into the feature extraction model, and performing feature extraction on the environment information to obtain second feature data. After the first feature data and the second feature data are obtained, the first feature data and the second feature data may be input into the second feature fusion model for data combination and data fusion, so as to obtain second fusion data.
Alternatively, the performance information and the environmental information may be input together into the feature extraction model for feature extraction to obtain third feature data, and the third feature data may be input into a feature fusion model to obtain the second fusion data.
And S1023, obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In this embodiment, the first fusion data and the second fusion data are input into the trained neural network model to obtain the control strategy of the robot.
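The two-branch flow of steps S1021 to S1023 could be sketched as follows; the feature extraction (simple normalization) and feature fusion (concatenation) used here are placeholders standing in for the models described above, not the models themselves.

```python
import numpy as np


def extract_features(x: np.ndarray) -> np.ndarray:
    # Placeholder feature extraction: zero-mean, unit-variance normalization.
    return (x - x.mean()) / (x.std() + 1e-8)


def fuse(*features: np.ndarray) -> np.ndarray:
    # Placeholder feature fusion: simple concatenation.
    return np.concatenate(features)


def control_strategy(interaction: np.ndarray, performance: np.ndarray,
                     environment: np.ndarray, trained_model):
    first_fusion = fuse(extract_features(interaction))       # S1021
    second_fusion = fuse(extract_features(performance),
                         extract_features(environment))      # S1022
    return trained_model(fuse(first_fusion, second_fusion))  # S1023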
In the embodiment of the application, the information collected by the plurality of sensors is fused to obtain a control strategy, so that the intelligent degree of the robot can be improved.
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information to obtain third fusion data;
performing feature extraction and feature fusion on the environmental information to obtain fourth fusion data;
and obtaining a control strategy of the robot based on the first fusion data, the third fusion data and the fourth fusion data.
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information and the performance information to obtain fifth fusion data;
performing feature extraction and feature fusion on the environmental information to obtain sixth fusion data;
and obtaining a control strategy of the robot based on the fifth fusion data and the sixth fusion data.
Optionally, the implementation process of step S102 may include:
performing feature extraction and feature fusion on the human-computer interaction information and the environment information to obtain seventh fusion data;
performing feature extraction and feature fusion on the performance information to obtain eighth fusion data;
and obtaining a control strategy of the robot based on the seventh fusion data and the eighth fusion data.
As shown in fig. 4, in a possible implementation manner, step S101 may further include:
S201, second voice information collected by the microphone array is obtained.
In this embodiment, after the robot is turned on, the microphone array collects voice information sent by the user in real time.
S202, determining whether the second voice message is matched with the awakening instruction or not based on a preset awakening instruction.
In this embodiment, after the second voice information is acquired, denoising processing may be performed on the voice information, so as to extract key information of the voice information. And matching the key information of the second voice information with a preset awakening instruction, and determining whether the robot needs to be switched from the dormant state to the working state. The preset wake-up instruction can be set as required.
S203, if the second voice information is matched with the awakening instruction, acquiring first data acquired by the sensor module.
In this embodiment, if the second voice message matches the wake-up command, the robot is switched from the sleep state to the working state. The robot can acquire first data acquired by the sensor module in a working state, and determines a control strategy according to the first data.
For example, if the key information of the second voice information is "small Q and small Q" and the preset wake-up instruction is "small Q and small Q", it is determined that the second voice information matches the preset wake-up instruction.
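As a minimal sketch of the matching in steps S202 and S203, assuming the key information has already been extracted from the denoised second voice information and that an exact, case-insensitive comparison with the preset wake-up instruction is sufficient:

```python
def matches_wake_instruction(key_information: str,
                             wake_instruction: str = "small Q, small Q") -> bool:
    """S202 sketch: compare the extracted key information with the preset
    wake-up instruction; a real system would likely use a wake-word model."""
    return key_information.strip().lower() == wake_instruction.strip().lower()


# S203: only when the match succeeds is the first data acquired and the robot
# switched from the sleep state to the working state.
assert matches_wake_instruction(" Small Q, small Q ")
```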
As shown in fig. 5, in a possible implementation manner, step S101 may further include:
S301, second touch information collected by the touch sensor is obtained, and a first duration of the second touch information is determined.
In this embodiment, after the robot is turned on, the touch sensor collects touch information of the user in real time.
S302, if the first duration is longer than a first preset time, first data collected by the sensor module are obtained.
In this embodiment, after the second touch information is acquired, the touch duration of the user may be calculated, and in this application, the touch duration is recorded as the first duration. When the touch duration is longer than the first preset time, the robot can be switched from the dormant state to the working state. The first preset time may be set as needed, for example, 4 seconds, 5 seconds, or 6 seconds, etc.
In this embodiment, if the first duration is less than or equal to the first preset time, the first data collected by the sensor module is not acquired, and the touch sensor in the robot continues to detect touch information. This avoids the robot being woken up because the user touches it by mistake.
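The duration check of steps S301 and S302 could be sketched as follows, assuming a 5-second first preset time and a monotonic clock; the class and method names are illustrative.

```python
import time
from typing import Optional


class TouchWakeMonitor:
    """Sketch of steps S301 and S302: report a wake-up only when a touch lasts
    longer than the first preset time (5 seconds here, by assumption)."""

    def __init__(self, first_preset_time: float = 5.0):
        self.first_preset_time = first_preset_time
        self._touch_started: Optional[float] = None

    def update(self, touched: bool, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if not touched:
            self._touch_started = None   # touch released: duration resets
            return False
        if self._touch_started is None:
            self._touch_started = now    # touch begins: start timing
        first_duration = now - self._touch_started
        return first_duration > self.first_preset_time  # exceeds preset -> acquire first data
```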
As shown in fig. 6, in a possible implementation manner, step S101 may further include:
S401, second video information collected by the camera is obtained, and whether face information exists in the second video information is determined.
In this embodiment, after the robot is turned on, the camera collects video information around the robot in real time, which is denoted in this application as second video information. After the processor acquires the second video information, the processor analyzes the second video information to determine whether face information exists in the second video information, that is, whether a user is present near the robot.
Specifically, the processor may input the second video information into the detection model to determine whether the second video information has face information.
S402, if the face information exists in the second video information, determining a second duration time of the face information existing in the second video information.
In this embodiment, if face information exists in the second video information, the duration of the face information may be calculated so as to exclude the situation where a user merely passes by and does not want to interact with the robot.
And S403, if the second duration is longer than a second preset time, acquiring first data acquired by the sensor module.
In this embodiment, if the duration of the face information is greater than the second preset time, it may be determined that the user wants to interact with the robot, and the robot may be switched from a sleep state to an operating state. The second preset time may be set as needed.
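A sketch of steps S401 to S403, using OpenCV's bundled Haar cascade as a stand-in detector (the actual detection model is not specified in this application) and assuming a 3-second second preset time:

```python
import cv2

# Stand-in face detector; the cascade file ships with the opencv-python package.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def has_face(frame) -> bool:
    """S401: determine whether face information exists in a video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(_detector.detectMultiScale(gray, 1.1, 5)) > 0


def should_acquire_first_data(face_frame_count: int, fps: float,
                              second_preset_time: float = 3.0) -> bool:
    """S402/S403: the face must persist longer than the second preset time."""
    return face_frame_count / fps > second_preset_time
```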
In the embodiment of the application, the triggering condition for acquiring the first data by the robot is set, and the first data is acquired when the triggering condition is met, so that the misoperation of the robot can be prevented, in addition, the data processing quantity in the robot can be reduced, and the service life of the robot is prolonged.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the control method of the robot described in the above embodiments, the present application provides a robot including a sensor module and an information processing module.
Referring to fig. 7, the information processing module 500 may include: a data acquisition module 510, a strategy determination module 520, and a control module 530.
The data acquisition module 510 is configured to acquire first data collected by a sensor module in the robot, where the first data includes human-computer interaction information, performance information of the robot, and environment information of the environment where the robot is located;
the strategy determination module 520 is configured to obtain a control strategy of the robot based on the first data, where the control strategy includes at least one of an interaction strategy, an obstacle avoidance strategy, and a motion strategy;
the control module 530 is configured to control the robot action based on the control strategy.
In one possible implementation, the sensor module includes a microphone array, a light sensing sensor, a radio frequency sensor, a camera, and a touch sensor;
the man-machine interaction information comprises first voice information collected by the microphone array, light induction information collected by the light induction sensor, radio frequency information collected by the radio frequency sensor, first video information collected by the camera and first touch information collected by the touch sensor;
the light sensing information is used for determining action instructions of a user.
In one possible implementation, the sensor module includes: a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor, and an ultrasonic sensor;
the performance information comprises action position information of a part on the robot, which is related to the limit sensor, acquired by the limit sensor, current signals of a power supply in the robot, acquired by the current sensor, and acceleration information of the robot, acquired by the acceleration sensor;
the environment information comprises temperature information of the environment where the robot is located, acquired by the temperature sensor, first obstacle information of the environment where the robot is located, acquired by the infrared sensor, and second obstacle information of the environment where the robot is located, acquired by the ultrasonic sensor.
In a possible implementation manner, the strategy determination module 520 may specifically be configured to:
performing feature extraction and feature fusion on the human-computer interaction information to obtain first fusion data;
performing feature extraction and feature fusion on the performance information and the environment information to obtain second fusion data;
and obtaining a control strategy of the robot based on the first fusion data and the second fusion data.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second voice information acquired by the microphone array;
determining whether the second voice information is matched with a preset awakening instruction or not based on the preset awakening instruction;
and if the second voice information is matched with the awakening instruction, acquiring first data acquired by the sensor module.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second touch information acquired by the touch sensor, and determining first duration of the second touch information;
and if the first duration is longer than a first preset time, acquiring first data acquired by the sensor module.
In a possible implementation manner, the data acquisition module 510 may specifically be configured to:
acquiring second video information acquired by the camera, and determining whether face information exists in the second video information;
if the face information exists in the second video information, determining a second duration of the face information existing in the second video information;
and if the second duration is longer than a second preset time, acquiring first data acquired by the sensor module.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a robot, and referring to fig. 8, the robot 700 may include: a sensor module, at least one processor 710, a memory 720, and a computer program stored in the memory 720 and executable on the at least one processor 710, wherein the processor 710, when executing the computer program, implements the steps of any of the method embodiments described above, such as steps S101 to S103 in the embodiment shown in fig. 2. Alternatively, the processor 710, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 510 to 530 shown in fig. 7.
The sensor module includes: a microphone array, a light sensing sensor, a radio frequency sensor, a camera, a touch sensor, a limit sensor, a current sensor, an acceleration sensor, a temperature sensor, an infrared sensor and an ultrasonic sensor. The microphone array is used for collecting first voice information, the light sensing sensor is used for collecting light sensing information, the radio frequency sensor is used for collecting radio frequency information, the camera is used for collecting first video information, and the touch sensor is used for collecting first touch information; the limit sensor is used for collecting action position information of a part on the robot associated with the limit sensor, the current sensor is used for collecting a current signal of a power supply in the robot, and the acceleration sensor is used for collecting acceleration information of the robot; the temperature sensor is used for collecting temperature information of the environment where the robot is located, the infrared sensor is used for collecting first obstacle information of the environment where the robot is located, and the ultrasonic sensor is used for collecting second obstacle information of the environment where the robot is located.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 720 and executed by the processor 710 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing certain functions, which are used to describe the execution of the computer program in the robot 700.
Those skilled in the art will appreciate that fig. 8 is merely an example of a robot and does not constitute a limitation on the robot, which may include more or fewer components than shown, or combine some components, or include different components, such as input/output devices, network access devices, buses, and the like.
The Processor 710 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 720 may be an internal memory unit of the robot or an external memory device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 720 is used for storing the computer program and other programs and data required by the robot. The memory 720 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The control method of the robot provided by the embodiment of the application can be applied to terminal devices such as a computer, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA) and the like, and the embodiment of the application does not limit the specific types of the terminal devices.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other ways. For example, the above-described terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the method embodiments described above when the computer program is executed by one or more processors.
Embodiments of the present application also provide a computer program product, which, when run on a terminal device, enables the terminal device to implement the steps in the above-mentioned method embodiments.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.