Detailed Description
To make the purpose and embodiments of the present disclosure clearer, exemplary embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It is obvious that the described exemplary embodiments are only a part of the embodiments of the present disclosure, not all of them.
It should be noted that the brief descriptions of terms in the present disclosure are only for convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present disclosure. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "target," "second," "third," and the like in the description and claims of this disclosure and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise noted. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, in this disclosure are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to all elements expressly listed but may include other elements not expressly listed or inherent to such product or device.
The term "and/or" in this disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
The display method provided by the embodiments of the disclosure is applied to an intelligent follow-up exercise scene. In the related art, there is little interaction between a user and a display device during follow-up exercise, the follow-up exercise process is not engaging, and user participation is low. In view of this, the embodiments of the disclosure provide a display method: a media resource is acquired together with a dotting file and an action set of the media resource; when a fitness action in the media resource is played, a follow-up exercise image corresponding to the user's imitation of that fitness action is acquired; and motion data of the target user is then determined and displayed according to the dotting file, the action set, and the follow-up exercise image. In this way, the interaction between the user and the display device is increased, the follow-up exercise experience of the user is improved, and the whole follow-up exercise process becomes more engaging.
The following describes a display method provided by an embodiment of the present disclosure. As can be appreciated by those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solutions provided by the embodiments of the present disclosure are also applicable to similar technical problems.
In the embodiment of the present disclosure, the electronic device executing the display method of the embodiment of the present disclosure may be a server, and may also be a display device. A schematic diagram of an operation scenario between a display device and a control device according to an embodiment is illustrated in fig. 1. As shown in fig. 1, the user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods, and the remote controller controls the display device 200 in a wireless or wired manner. The user may input user instructions through keys on the remote controller, voice input, control panel input, etc., to control the display apparatus 200. For example, the user may input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, on/off key, etc. on the remote controller, thereby controlling the display apparatus 200. In the disclosed embodiment, the user may also select corresponding text input to the display device 200 by pressing keys.
In some embodiments, the smart device 300 (e.g., a mobile terminal, tablet, computer, notebook, or other smart terminal) may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device. The application can be associated with the smart terminal through configuration and can present various intuitive controls to the user in a user interface (UI).
In some embodiments, the display device 200 may receive the user's control through touch, gesture, voice, or the like, instead of receiving instructions through the smart device or control apparatus described above.
In some embodiments, the display device 200 may also be controlled in manners other than through the control apparatus 100 and the smart device 300. For example, a voice command of the user may be received directly by a module configured inside the display device 200 for obtaining voice commands, or may be received by a voice control device provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with the server 400 through a variety of communication means. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display apparatus 200. Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information, as well as through electronic program guide (EPG) interactions. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers. Other web service contents, such as video-on-demand and advertisement services, are provided to the display apparatus 200 through the server 400.
In some embodiments, the display device 200 may be a liquid crystal display, an OLED display, or a projection smart device. The specific type, size, resolution, and the like of the display device 200 are not limited in the embodiments of the present disclosure, and those skilled in the art will understand that the performance and configuration of the display device 200 may be changed according to actual needs. In addition to the broadcast receiving television function, the display apparatus 200 may additionally provide a smart network television function with computer support, including, but not limited to, network TV, smart TV, Internet Protocol TV (IPTV), and the like.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, thereby mediating interaction between the user and the display device 200.
Exemplarily, taking the display device 200 as a television as an example, fig. 3 illustrates a schematic structural diagram of a display device 200 provided by an embodiment of the present disclosure.
As shown in fig. 3, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, and a target interface to an nth interface for input/output.
The display 260 includes a display screen component for presenting images and a driving component for driving image display, and is configured to receive image signals output from the controller and display video content, image content, menu manipulation interfaces, and a user manipulation user interface (UI).
The display 260 may be a liquid crystal display, an OLED display, or a projection display, and may also be a projection device with a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, other network communication protocol chips or near-field communication protocol chips, and an infrared receiver. The display apparatus 200 may transmit and receive control signals and data signals to and from the external control apparatus 100 or the server 400 through the communicator 220.
The user interface 280 may be used to receive control signals from the control apparatus 100 (e.g., an infrared remote control, etc.).
The detector 230 is used to collect signals from the external environment or signals of interaction with the outside. For example, the detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, used to receive external sounds.
The external device interface 240 may include, but is not limited to, the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface (Component), a composite video broadcast signal input interface (CVBS), a USB input interface (USB), an RGB port, and the like. The external device interface may also be a composite input/output interface formed by combining the plurality of interfaces above.
The tuner demodulator 210 receives broadcast television signals in a wired or wireless reception manner and demodulates audio/video signals, as well as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in a device external to the main device in which the controller 250 is located, such as an external set-top box.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored in the memory. The controller 250 controls the overall operation of the display apparatus 200. For example, in response to receiving a user command for selecting a UI object displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), a target interface to an nth interface for input/output, a communication bus (Bus), and the like.
A user may input a user command through a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI. Alternatively, the user may input the user command by making a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through a sensor.
A "user interface" is a medium interface for interaction and information exchange between an application or operating system and a user, and it converts the internal form of information into a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations and displayed in a graphical manner. It may consist of interface elements such as icons, windows, and controls displayed on the display screen of the display device, where the controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
It will be appreciated that, in general, the implementation of display device functions requires the cooperation of software in addition to the support of the hardware described above.
Fig. 4 is a schematic diagram of a software system of a display device provided by the present disclosure. Referring to fig. 4, in some embodiments, the system is divided into four layers, which are, from top to bottom, an application layer (referred to as the "application layer"), an application framework layer (referred to as the "framework layer"), an Android runtime and system library layer (referred to as the "system runtime library layer"), and a kernel layer.
In some embodiments, at least one application program runs in the application layer. These applications may be window programs, system setting programs, clock programs, or the like carried by the operating system, or applications developed by third-party developers. In specific implementations, the applications in the application layer are not limited to the above examples. Illustratively, the application layer includes a fitness application that enables intelligent interaction by playing fitness videos, thereby helping users develop fitness habits. The display device may launch the fitness application in response to a user operation that starts the fitness application. The operation of starting the fitness application may be a touch operation, a voice operation, or a control instruction from a remote controller.
The framework layer provides an application programming interface (API) and a programming framework for applications. The application framework layer includes a number of predefined functions and acts as a processing center that decides what actions the applications in the application layer take. Through the API interface, an application can access system resources and obtain system services during execution. For example, a system service (i.e., Service) included in the application framework layer may be configured to receive a fitness video playing instruction sent by the fitness application. After receiving the fitness video playing instruction, the system service may invoke the display driver to display the video frames of the fitness video on the display, invoke the audio driver to play the sound data of the fitness video through the speaker, and invoke the camera driver to make the camera capture follow-up exercise images of the user. That is, the Service may be responsible for controlling the entire process used by the fitness application.
After acquiring the fitness video, the system service sends the video frames of the fitness video to the display driver corresponding to the display; after the display driver acquires the video frames, it drives the display to play them. The system service then sends the sound data of the fitness video to the audio driver corresponding to the speaker; after the audio driver acquires the sound data, it drives the speaker to play it. Finally, the camera driver is invoked to control the camera to capture follow-up exercise images of the user.
In some embodiments, the system runtime library layer provides support for the upper layer, i.e., the framework layer. When the framework layer is used, the Android operating system runs the C/C++ libraries included in the system runtime library layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. The kernel layer includes at least one of the following drivers: a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), a microphone (MIC) driver, a power driver, the display driver corresponding to the display, the audio driver corresponding to the speaker, the camera driver corresponding to the camera, and the like. The display driver corresponding to the display can call the interface of the display to present the video frames of the fitness video, or call the interface of the display to configure the display. The audio driver corresponding to the speaker can call the interface of the speaker to play the sound data of the fitness video, or call the interface of the speaker to configure the speaker. The camera driver corresponding to the camera can call the interface of the camera to acquire follow-up exercise images of the user, or call the interface of the camera to configure the camera.
The display can be used to display the frames of fitness videos. The speaker can be used to play the audio of fitness videos. The camera can be used to capture follow-up exercise images of the user.
The methods in the following embodiments may be implemented in a display device having the above-described hardware structure or software structure. In the following embodiments, the method of the embodiments of the present disclosure is described by taking the above-described display device 200 as an example.
The embodiment of the present disclosure provides a display method, which may include S11 to S14, as shown in fig. 5.
S11, obtaining a media resource, and a dotting file and an action set of the media resource.
The dotting file includes time information of a plurality of fitness actions corresponding to the media resource, and the action set includes standard information corresponding to the fitness actions.
The embodiment of the present disclosure may be applied to the scenario shown in fig. 6. Referring to (a) in fig. 6, an icon 602 of a fitness application is included in an interface 601 of "My Applications" on a television. The fitness application of the television may receive a control signal from a user on the icon 602 of the fitness application and, in response to the control signal, display an interface 603 shown in (b) in fig. 6. The interface 603 may include a display window 604 for a fitness video 1. The interface 603 may also include display windows of other fitness videos, such as a display window of a fitness video 2, a display window of a fitness video 3, a display window of a fitness video 4, and the like. After the fitness application receives the user's first message for the display window 604 of the fitness video 1, the television may display an interface 605 in response to the first message. As shown in (c) of fig. 6, the interface 605 is used to display the enlarged fitness video 1.
The process in which the fitness application of the television responds to the first message of the user and acquires the media resource and the dotting file and action set of the media resource includes the following steps:
(1) The fitness application of the television receives a first message from a user for a media resource and sends a resource request message to the cloud server.
The first message is for requesting playback of a media asset. The resource request message is used for requesting to acquire the media resource. The resource request message includes attribute information of the media resource. The cloud server is a server that stores media resources in the fitness application.
(2) After receiving the resource request message, the cloud server searches for the data packet of the media resource in the storage unit of the cloud server according to the attribute information of the media resource in the received resource request message, and sends the data packet of the media resource to the fitness application of the television.
(3) The fitness application of the television receives the data packet of the media resource and parses it to obtain the media resource and the address of the dotting file of the media resource.
(4) The fitness application of the television downloads the dotting file of the media resource according to the address of the dotting file and acquires the action set from a preset address.
For example, the fitness application of the television receives a first message from the user on the display window 604 of the fitness video 1 and sends a resource request message to the cloud server. The resource request message includes attribute information of the fitness video 1. After receiving the resource request message, the cloud server searches for the data packet of the fitness video 1 by traversing the storage unit of the cloud server according to the attribute information of the fitness video 1 in the received resource request message, and sends the data packet of the fitness video 1 to the fitness application of the television. The fitness application of the television receives the data packet of the fitness video 1 and parses it to obtain the fitness video 1 and the address of the dotting file of the fitness video 1. The fitness application of the television then downloads the dotting file of the fitness video 1 according to that address and acquires the action set from a preset address.
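The four-step exchange above can be sketched as follows. This is a hypothetical illustration only: every function and field name here (`handle_resource_request`, `acquire_workout_assets`, `dotting_file_url`, etc.) is an assumption, not part of the disclosure.

```python
# Hypothetical sketch of the media-resource acquisition flow (steps (1)-(4)).

def handle_resource_request(storage, request):
    """Cloud-server side: look up the media-resource packet by its attributes."""
    return storage[request["attributes"]["resource_id"]]

def acquire_workout_assets(storage, resource_id, action_set_url, download):
    """Television side: request the packet, parse it, then fetch the files."""
    # Steps (1)-(2): send the resource request and receive the data packet.
    packet = handle_resource_request(storage, {"attributes": {"resource_id": resource_id}})
    # Step (3): parse the packet into the media resource and the dotting-file address.
    media_resource = packet["media_resource"]
    dotting_url = packet["dotting_file_url"]
    # Step (4): download the dotting file, and fetch the action set from a preset address.
    return media_resource, download(dotting_url), download(action_set_url)
```

In this sketch, `download` stands in for whatever network fetch the fitness application actually uses.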
The dotting file of the media resource includes basic information of the media resource and information of a plurality of fitness actions. The basic information of the media resource includes the name of the media resource, the code of the media resource, the frame rate (frames per second, FPS) of the media resource, and the total frame number of the media resource.
The information of the plurality of fitness actions includes basic information of the fitness actions and time information of the fitness actions. The basic information of a fitness action includes the ranking of the fitness action in the media resource, the type of the fitness action, the name of the fitness action, the identification number (ID) of the fitness action, the number of repetitions of the fitness action, and the like. The time information of a fitness action includes the formal start time of the fitness action, the formal end time of the fitness action, the preparation start time of the fitness action, and the preparation end time of the fitness action.
Illustratively, the media resource is the fitness video 1, and the data recorded in the dotting file of the media resource includes the basic information and time information of a plurality of fitness actions. The basic information of a fitness action includes the order of the fitness action in the fitness video 1, the name of the fitness action, the ID of the fitness action, and the number of repetitions of the fitness action. The time information of a fitness action includes the formal start time, the formal end time, the preparation start time, and the preparation end time of the fitness action. Part of the code of the dotting file corresponding to the fitness video 1 is as follows:
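The original snippet is not reproduced here. The following is a hypothetical sketch of such a dotting file, written as a Python literal; the field names `v_start`, `v_end`, `time_start`, and `time_end` follow the field descriptions in this section, while every value (IDs, frame counts, times) is an illustrative assumption.

```python
# Hypothetical dotting-file content for fitness video 1 (all values invented).
dotting_file = {
    "name": "fitness video 1",
    "code": "fv-001",        # illustrative media-resource code
    "fps": 25,               # illustrative frame rate
    "total_frames": 15000,   # illustrative total frame count
    "actions": [
        {
            "order": 0,                # ranking of the action in the video
            "name": "alternate halo",
            "id": 101,                 # illustrative action ID
            "count": 8,                # illustrative repetition count
            "v_start": 15,             # preparation start time (seconds)
            "v_end": 20,               # preparation end time
            "time_start": 20,          # formal start time
            "time_end": 45,            # formal end time
        },
    ],
}
```

Note that the preparation interval ends where the formal interval begins, which is consistent with the playing-progress check described in S12.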
Here, v_start is used to record the preparation start time of a fitness action, v_end the preparation end time, time_start the formal start time, and time_end the formal end time.
The action set includes the total number of fitness actions, the version of the action set, the update time of the action set, a description of the action set, and information about each fitness action. The description of the action set includes a log, a type identifier of the fitness actions, a scoring identifier of the fitness actions, and a weight identifier corresponding to the fitness actions. The information of a fitness action includes basic information of the fitness action and standard information of the fitness action. The basic information of a fitness action includes the name of the fitness action, the ID of the fitness action, the type of the fitness action, the standard scoring data corresponding to the fitness action, and the weight corresponding to the fitness action.
The standard information of a fitness action includes the standard limb coordinate information corresponding to the fitness action and the standard scoring data corresponding to the fitness action. The standard limb coordinate information includes standard contour region coordinates and standard key point coordinates. The standard contour region coordinates are the human body contour region coordinates of the reference object (i.e., the fitness trainer in the media resource) when demonstrating the fitness action. The standard key point coordinates are the human body key point coordinates of the reference object when demonstrating the fitness action. Fig. 7 shows 19 key point positions, the 19 key points being the nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle, left palm center, and right palm center. In fig. 7, 0 represents the position of the nose, 1 the left eye, 2 the right eye, 3 the left ear, 4 the right ear, 5 the left shoulder, 6 the right shoulder, 7 the left elbow, 8 the right elbow, 9 the left wrist, 10 the right wrist, 11 the left hip, 12 the right hip, 13 the left knee, 14 the right knee, 15 the left ankle, 16 the right ankle, 17 the left palm center, and 18 the right palm center. The key point positions are the positions corresponding to these 19 points. The limb coordinate information of the user can then be determined according to the positions of the 19 key points.
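The fig. 7 numbering can be summarized as an index-to-name lookup table; this is merely a restatement of the list above (with points 17 and 18 rendered as the palm centers of the hands):

```python
# Index -> name table for the 19 key points of fig. 7.
KEYPOINTS = [
    "nose",                                   # 0
    "left eye", "right eye",                  # 1, 2
    "left ear", "right ear",                  # 3, 4
    "left shoulder", "right shoulder",        # 5, 6
    "left elbow", "right elbow",              # 7, 8
    "left wrist", "right wrist",              # 9, 10
    "left hip", "right hip",                  # 11, 12
    "left knee", "right knee",                # 13, 14
    "left ankle", "right ankle",              # 15, 16
    "left palm center", "right palm center",  # 17, 18
]
```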
Illustratively, part of the code for the action set is as follows:
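The original action-set snippet is likewise not reproduced. The following is a hypothetical sketch of an action set containing a single action, written as a Python literal; the field names follow the description above, and every value is an illustrative assumption.

```python
# Hypothetical action-set content (all values invented for illustration).
action_set = {
    "total": 1,                  # total number of fitness actions
    "version": "1.0",
    "update_time": "2024-01-01",
    "description": {
        "log": "initial release",
        "type_id": "type",       # type identifier of the fitness actions
        "score_id": "score",     # scoring identifier of the fitness actions
        "weight_id": "weight",   # weight identifier of the fitness actions
    },
    "actions": [
        {
            "name": "alternate halo",
            "id": 101,
            "type": "upper body",
            "standard_score": 100,  # standard scoring data
            "weight": 1.0,
            # Standard limb coordinate information of the trainer:
            "standard_contour": [(0.3, 0.1), (0.7, 0.1), (0.7, 0.9), (0.3, 0.9)],
            "standard_keypoints": [(0.5, 0.1, 2.0)] * 19,  # one entry per key point
        },
    ],
}
```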
and S12, acquiring the follow-up exercise image based on the time information of the plurality of body-building actions under the condition that the playing progress of the media resource reaches the starting time of the first body-building action.
The first fitness action is an action among the plurality of fitness actions, and the follow-up exercise image is an image captured while the user follows the first fitness action.
After acquiring the dotting file from its address, the fitness application of the television parses the dotting file. As described in S11, the dotting file includes information of a plurality of fitness actions, and that information includes the time information of the fitness actions. When the media resource is played, the playing progress of the media resource can be obtained by polling the progress bar control of the television. Based on the time information of the fitness actions, when it is determined that the playing progress of the media resource has reached the start time of the first fitness action, the camera in the television is controlled to capture follow-up exercise images of the user.
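The polling check above can be sketched as follows. The function names and the one-shot trigger per action are assumptions for illustration; the disclosure only specifies that capture starts when the polled progress reaches an action's start time.

```python
# Minimal sketch of the progress check: when the polled playing progress
# reaches an action's start time, follow-up capture is triggered once.

def check_progress(progress_seconds, actions, start_capture):
    """Trigger follow-up capture for every action whose start time is reached."""
    for action in actions:
        if not action.get("capturing") and progress_seconds >= action["time_start"]:
            action["capturing"] = True
            start_capture(action)

actions = [{"name": "alternate halo", "time_start": 20}]
started = []
check_progress(19, actions, started.append)  # before the start time: nothing happens
check_progress(20, actions, started.append)  # start time reached: capture begins
```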
The camera in the present disclosure may be a 3D camera, and the coordinate system adapted to the camera is the camera coordinate system. The camera coordinate system takes the optical center of the lens as the coordinate origin, with the z axis along the optical axis of the lens, the y axis parallel to the image plane and pointing upward, and the x axis parallel to the image plane and pointing rightward.
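A depth-map pixel can be mapped into this camera coordinate system with the standard pinhole model, sketched below. The intrinsics fx, fy, cx, cy are assumed and not given in the disclosure, and the sketch uses the usual computer-vision convention of image y pointing downward; the "y axis upward" convention described above would flip the sign of yw.

```python
# Pinhole-model sketch: map a depth-map pixel (xd, yd) with depth z to
# camera coordinates (xw, yw, zw). fx, fy, cx, cy are assumed intrinsics.

def pixel_to_camera(xd, yd, z, fx, fy, cx, cy):
    xw = (xd - cx) * z / fx
    yw = (yd - cy) * z / fy
    return (xw, yw, z)
```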
The time sequence of the camera capturing the user's follow-up exercise image is as follows: the fitness application initializes the environment; the camera is turned on; the camera captures the follow-up exercise image and sends it to the fitness application; the fitness application sets preview parameters for the follow-up exercise image and displays it based on those preview parameters.
When the media resource is played from the beginning: if it is preset in advance that the motion data of the warm-up actions of the media resource is not recorded, the first fitness action is the first fitness action played other than the warm-up actions; if no such presetting is made, the first fitness action is the first fitness action played in the media resource.
When the media resource is played from partway through: if it is preset in advance that the motion data of the warm-up actions of the media resource is not recorded, the first fitness action is the first complete fitness action played other than the warm-up actions; if no such presetting is made, the first fitness action is the first complete fitness action played.
For example, taking the dotting file of the fitness video 1 shown in S11 as an example, when it is preset that the motion data of the warm-up actions of the media resource is not recorded, the first fitness action may be the fitness action ranked 0 in the fitness video 1, whose name is "alternate halo". When the playing progress of the media resource reaches the start time of "alternate halo" at the 20th second, the television notifies the camera to start capturing follow-up exercise images of the user (i.e., the user following the "alternate halo" fitness action).
S13, obtaining motion data of the user according to the dotting file, the action set, and the follow-up exercise image, and displaying the motion data of the user.
After the dotting file, the action set, and the follow-up exercise image are obtained, the motion data of the user can be calculated from them and displayed.
In some embodiments, as shown in fig. 8, step S13 of obtaining the motion data of the user according to the dotting file, the action set, and the follow-up exercise image includes the following sub-steps:
S131, determining first limb coordinate information of the user based on the follow-up exercise image.
Specifically, after the follow-up exercise image is acquired, it is input into a limb detection algorithm; the limb detection algorithm outputs the first limb coordinate information corresponding to the follow-up exercise image, and the detected first limb coordinate information is then input into a scoring algorithm. The first limb coordinate information includes first contour region coordinates and first key point coordinates. The first key point coordinates include 19 key point coordinates, namely the coordinates of the user's nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle, left palm center, and right palm center. In connection with S12, since the camera is a 3D camera, the 19 key point coordinates in the first limb coordinate information can be represented not only by spatial coordinates (xw, yw, zw) but also by pixel coordinates (xd, yd) in the depth map. Each pixel point in the depth map corresponds to the depth information of a point in space.
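The S131 data flow can be sketched as follows: the follow-up exercise image is fed to the limb detection algorithm, and the detected first limb coordinate information is passed on to the scoring algorithm. Both algorithms are stand-ins here; their names, signatures, and outputs are assumptions for illustration.

```python
# Sketch of the S131 data flow with stand-in detection and scoring algorithms.

def process_follow_up_image(image, detect_limbs, score):
    info = detect_limbs(image)           # first contour region + first key points
    assert len(info["keypoints"]) == 19  # one coordinate per key point of fig. 7
    return score(info)

# Stand-in algorithms for illustration:
fake_detect = lambda image: {"contour": [], "keypoints": [(0.0, 0.0, 1.0)] * 19}
fake_score = lambda info: 90
```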
Before the limb detection algorithm is applied, a training process of the limb detection algorithm is also included. The training process of the limb detection algorithm comprises the steps of constructing a sample data set and training the initial limb detection algorithm by adopting the sample data set to obtain the limb detection algorithm. The limb detection algorithm has the function of analyzing the acquired follow-up image to obtain limb coordinate information.
In some embodiments, the display device constructs a sample data set comprising a plurality of sets of sample data pairs, each set of sample data pairs comprising a follow-up image and limb coordinate information corresponding to the follow-up image. The limb coordinate information corresponding to the training image can be obtained by manual marking. The multiple sets of sample data pairs in the sample data set may be sample data pairs in multiple different scenarios.
The display device performs model training on the initial limb detection algorithm by adopting the constructed sample data set, so that the trained limb detection algorithm has the function of analyzing the follow-up image acquired by the display device to obtain the limb coordinate information.
Illustratively, the display device trains the limb detection algorithm using the constructed sample data set, and may train the initial limb detection algorithm end to end in a deep learning manner. A loss function can be used to measure the difference between the initial limb coordinate information output by the initial limb detection algorithm and the manually labeled limb coordinate information, i.e. the difference between the predicted value and the true value of the model.
After the initial limb detection algorithm is trained to convergence, the value of the loss function is reduced to its minimum, and the trained limb detection algorithm is obtained. The trained limb detection algorithm has the function of analyzing an acquired follow-up image to obtain limb coordinate information, and can be used as a preset limb detection algorithm in the fitness application to label follow-up images newly acquired by the television.
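The loss described above can be sketched as a plain mean squared error over the predicted and manually labeled keypoint coordinates (this is an assumption for illustration; the disclosure does not fix a particular loss):

```python
# Hypothetical loss: mean squared error between predicted and labeled
# (x, y) keypoint coordinates. Training repeats forward pass -> loss ->
# parameter update until the loss converges to its minimum.
def mse_loss(predicted, labelled):
    """predicted/labelled: lists of (x, y) coordinate pairs of equal length."""
    assert len(predicted) == len(labelled)
    total = 0.0
    for (px, py), (lx, ly) in zip(predicted, labelled):
        total += (px - lx) ** 2 + (py - ly) ** 2
    return total / len(predicted)
```

A deep-learning framework would minimize this quantity end to end; the sketch only shows what is being measured.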
The limb detection algorithm is obtained through training of a sample data set, and the sample data set comprises various application scenes, so that the obtained limb detection algorithm can be used in various application scenes.
Optionally, embodiments of the present disclosure may also periodically obtain the latest follow-up images of the user from the fitness application, label them with the limb detection algorithm, and iteratively optimize the limb detection algorithm according to the labeling results. In this way, the limb detection algorithm can be updated in step with the fitness application, providing more accurate multi-round training data for the optimization of the fitness application.
S132, determining standard limb coordinate information and standard scoring data corresponding to the first fitness action based on the action set and the dotting file.
In combination with S11, since the dotting file includes the IDs of a plurality of fitness actions, the ID of the first fitness action can be determined from the dotting file. Since the action set also includes the IDs of a plurality of fitness actions, the standard limb coordinate information and standard scoring data corresponding to the first fitness action can be determined based on the dotting file and the action set.
In some embodiments, as shown in fig. 8, S132 includes the following sub-steps:
S1321, searching for a second fitness action in the action set based on the ID of the first fitness action in the dotting file, and taking the second standard limb coordinate information and second standard scoring data corresponding to the second fitness action as the standard limb coordinate information and standard scoring data corresponding to the first fitness action.
Wherein the ID of the first workout session is consistent with the ID of the second workout session.
Because both the dotting file and the action set include the IDs of the plurality of fitness actions, once the ID of the first fitness action is determined, the fitness action whose ID matches it can be looked up in the action set, yielding the second fitness action. The ID of the second fitness action is thus identical to the ID of the first fitness action.
The action set also records the standard limb coordinate information and standard scoring data corresponding to each fitness action, so the second standard limb coordinate information and second standard scoring data corresponding to the second fitness action can be found in the action set and used as the standard limb coordinate information and standard scoring data of the first fitness action; finally, the standard limb coordinate information and standard scoring data of the first fitness action are input into the scoring algorithm. The standard limb coordinate information of the first fitness action may include the human body contour region coordinates and the keypoint coordinates corresponding to the first fitness action.
Optionally, based on S11, since the dotting file and the action set both include the name of the first fitness action, the second fitness action may also be found in the action set according to the name of the first fitness action, and the second standard limb coordinate information and second standard scoring data corresponding to the second fitness action are taken as the standard limb coordinate information and standard scoring data corresponding to the first fitness action.
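The lookup in S1321 can be sketched as a simple search over the action set keyed by action ID (the record layout and field names here are assumptions for illustration):

```python
# Hypothetical action-set lookup: return the standard limb coordinates and
# standard score of the entry whose ID matches the first fitness action.
def find_standard_action(action_set, first_action_id):
    for action in action_set:
        if action["id"] == first_action_id:
            return action["standard_limb_coords"], action["standard_score"]
    return None  # no fitness action with a matching ID in the set

# Example action set with two entries (values are illustrative).
action_set = [
    {"id": "warm_up", "standard_limb_coords": [(10, 20)], "standard_score": 7},
    {"id": "squat", "standard_limb_coords": [(12, 30)], "standard_score": 10},
]
coords, score = find_standard_action(action_set, "warm_up")
```

The same structure works for the optional name-based lookup by matching on a `name` field instead of `id`.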
S133, obtaining the motion data of the user according to the first limb coordinate information, the standard scoring data and the dotting file.
After the first limb coordinate information, the standard scoring data and the dotting file are obtained, when the playing progress of the media resource reaches the end time of the first fitness action, the scoring algorithm can obtain the motion data of the user from the first limb coordinate information, the standard scoring data and the dotting file, and send the motion data to the display, which refreshes and displays it. Specifically, the end time of the first fitness action is the formal end time of the first fitness action recorded in the dotting file.
In some embodiments, as shown in fig. 9, S133 includes the following sub-steps:
S1331, determining the matching degree between the first limb coordinate information and the standard limb coordinate information.
Because each fitness action lasts for a certain duration during playback and the camera collects follow-up images at a fixed frequency, the camera collects a plurality of follow-up images between the start and end of each fitness action, including during playback of the first fitness action. Each follow-up image corresponds to one group of limb coordinate information, so the plurality of follow-up images correspond to a plurality of groups, and the first limb coordinate information therefore comprises a plurality of groups of limb coordinate information.
After the scoring algorithm receives the first limb coordinate information and the standard limb coordinate information, it compares the multiple groups of limb coordinate data in the first limb coordinate information with the multiple groups in the standard limb coordinate information, thereby calculating the matching degree between the two. When the matching degree exceeds a preset threshold, the first limb coordinate information and the standard limb coordinate information are considered successfully matched, and the first limb coordinate information of the user is cleared. When the matching degree does not exceed the preset threshold, the matching is considered to have failed, and limb coordinate information continues to be obtained until the matching succeeds.
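One way the comparison could work, sketched under stated assumptions (the disclosure does not fix the distance metric, tolerance, or threshold; all numeric values below are placeholders), is to count how many keypoints fall within a distance tolerance of their standard positions:

```python
# Hypothetical matching-degree computation: the fraction of user keypoints
# lying within `tolerance` of the corresponding standard keypoints, as a
# percentage in the range 0-100.
def matching_degree(user_coords, standard_coords, tolerance=5.0):
    hits = 0
    for (ux, uy), (sx, sy) in zip(user_coords, standard_coords):
        if ((ux - sx) ** 2 + (uy - sy) ** 2) ** 0.5 <= tolerance:
            hits += 1
    return 100.0 * hits / len(standard_coords)

# Matching succeeds only when the degree exceeds a preset threshold.
def match_succeeded(degree, threshold=60.0):
    return degree > threshold
```

In practice the comparison runs over every group of limb coordinates collected during the action, but the per-group logic is the same.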
S1332, obtaining the motion data of the user according to the matching degree, the standard scoring data and the dotting file.
In some embodiments, S1332 includes the following sub-steps:
S13321, obtaining the scoring data of the user according to the matching degree and the standard scoring data.
The motion data of the user includes scoring data. After the matching degree and the standard scoring data are obtained, the scoring data of the follow-up exercise of the first fitness action, also called the scoring data of the user, can be calculated. Specifically, the scoring algorithm obtains the scoring data of the user from the matching degree and the standard scoring data. For example, if the first fitness action is a warm-up action, the matching degree between the first limb coordinate information and the standard limb coordinate information is 90%, and the standard scoring data of the warm-up action is 7 according to the action set in S11, then the scoring data of the user is 7 × 90% = 6.3 points.
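The worked example above reduces to a one-line calculation (the function name is a placeholder for illustration):

```python
# Scoring data of the user = standard score x matching degree,
# where match_ratio is the matching degree in the range 0-100.
def user_score(standard_score, match_ratio):
    return standard_score * match_ratio / 100.0
```

With the warm-up action from the example, `user_score(7, 90)` yields 6.3 points.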
In some other embodiments, S1332 includes the following sub-steps:
S13322, obtaining the exercise consumption data of the target follow-up exercise according to the matching degree, the total exercise consumption data of the reference object, the duration corresponding to the first fitness action and the total duration of the media resource.
The exercise data of the user further comprises exercise consumption data (also called calorie consumption data), the dotting file further comprises total exercise consumption data of the reference object, and the time information of the plurality of body-building actions comprises the time length corresponding to the first body-building action and the total time length of the media resources.
According to the matching degree between the first limb coordinate information and the standard limb coordinate information, the total exercise consumption data of the reference object, the duration corresponding to the first fitness action and the total duration of the media resource, the exercise consumption data of the follow-up exercise of the first fitness action (also called the exercise consumption data of the user) can be calculated. Specifically, the scoring algorithm performs this calculation from those four quantities.
Wherein the exercise consumption data of the follow-up action of the first fitness action satisfies the following expression:
calorie=total_calorie*(matchRatio/100)*(time/duration)
where calorie is the exercise consumption data of the user, total_calorie is the total exercise consumption data of the reference object, matchRatio is the matching degree with a value of 0-100, time is the duration corresponding to the first fitness action, and duration is the total duration of the media resource.
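The expression above transcribes directly into code (the function name is a placeholder; the arguments follow the definitions just given):

```python
# calorie = total_calorie * (matchRatio / 100) * (time / duration)
def user_calorie(total_calorie, match_ratio, time, duration):
    """match_ratio: matching degree 0-100; time/duration in the same unit."""
    return total_calorie * (match_ratio / 100.0) * (time / duration)
```

For example, with a reference total of 200 kcal, a 90% match, and a 30-second action in a 600-second media resource, the user consumes 200 × 0.9 × 0.05 = 9 kcal.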
In some other embodiments, as shown in fig. 10, after S12, the method further includes:
S15, determining the height and waistline of the user according to the follow-up image.
In connection with S12, the camera is a 3D camera, and the first limb coordinate information may be represented by spatial coordinates or pixel coordinates. The waistline of the user can be determined according to the spatial and pixel coordinates of the waist center point and the spatial and pixel coordinates of the spine center point in the first limb coordinate information.
Specifically, first, the actual spatial distance m, i.e. the spatial distance represented by one pixel, is calculated from the spatial coordinates (x1, y1) and pixel coordinates (px1, py1) of the waist center point and the spatial coordinates (x2, y2) and pixel coordinates (px2, py2) of the spine center point. The actual spatial distance m satisfies the following expression:
m = |y2 − y1| ÷ |py2 − py1|
Then, the pixel coordinates of a plurality of keypoints at the same horizontal height are acquired based on the position of the waist center point. If the difference between the y value of a keypoint's pixel coordinates and that of the waist center point exceeds 20 cm, or the difference between the corresponding x values exceeds 40 cm, it can be judged that the keypoint is not on the human body. The remaining keypoints are traversed in turn so as to screen out the keypoints belonging to the waist position. The waist contour captured by the camera can be approximated as an irregular arc, so the waist length can be approximated by the length of that arc.
FIG. 11 shows a top-view scene diagram including the depth data (d1, d2, d3, …, dn−1, dn) corresponding to a plurality of plane points (p1, p2, p3, …, pn−1, pn), and the waist arc points (a1, a2, a3, …, an−1, an) corresponding to the plane points (p1, p2, p3, …, pn−1, pn). The plane points are obtained by marking the plurality of keypoints on a plane. From the actual spatial distance m and the depth data (d1, d2, d3, …, dn−1, dn), the lengths of the sub-line segments of the irregular arc (namely la1a2, la2a3, …, lan−1an) can be calculated. The lengths of all the sub-line segments are summed to obtain the total length of the irregular arc (namely the waist arc length of one side), which satisfies the following expression:
l = la1a2 + la2a3 + … + lan−1an
In this way, the waist arc length of one side of the user (i.e. the front waist arc length or the back waist arc length) can be calculated. The waist arc length of the other side of the user is then calculated in the same way, and finally the waist arc lengths of the two sides are summed to obtain the waistline of the user.
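The arc-length computation above can be sketched as follows. This is an illustrative reading of the figure, under the assumption that each arc point combines a plane position (scaled by m) with its depth value, and that each sub-segment length is the Euclidean distance between consecutive arc points; the function names are placeholders:

```python
# m converts a pixel distance into a spatial distance (see the expression
# m = |y2 - y1| / |py2 - py1| above).
def scale_factor(y1, y2, py1, py2):
    return abs(y2 - y1) / abs(py2 - py1)

# Sum of sub-segment lengths l_a1a2 + l_a2a3 + ... + l_an-1an for one side.
def waist_arc_length(pixel_xs, depths, m):
    """pixel_xs: pixel x of plane points p1..pn; depths: d1..dn; m: scale."""
    total = 0.0
    for i in range(len(depths) - 1):
        dx = (pixel_xs[i + 1] - pixel_xs[i]) * m  # horizontal spatial step
        dz = depths[i + 1] - depths[i]            # depth step
        total += (dx * dx + dz * dz) ** 0.5       # sub-segment length
    return total
```

Running it once per side and summing the two results gives the waistline, as described above.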
In some embodiments, after the follow-up image is acquired, data acquisition and calibration are performed on it, and an RGB stream, a depth stream, a limb stream and the like are separated. The waist center point and spine center point are then determined from the RGB stream, depth stream and limb stream, yielding the spatial and pixel coordinates of each. The actual spatial distance m is calculated from the spatial and pixel coordinates of the waist center point and of the spine center point. A plurality of pixel points of the whole picture at the horizontal position of the waist are obtained from the depth stream; for each pixel point it is judged whether its spatial depth is greater than 0, and if so, its pixel coordinates are calculated according to the actual spatial distance m. If the difference between the y value of the pixel point's coordinates and that of the waist center point is less than 30 cm, it is further judged whether the difference between the corresponding x values is also less than 30 cm; if so, the pixel point belongs to the waist position. Finally, the spatial distance between every two adjacent waist pixel points is calculated, and these distances are accumulated to obtain the waistline of the user.
Similarly, after the actual spatial distance m is calculated, the head keypoint and foot keypoint of the user can be obtained, and the height of the user can be calculated from the difference of their y-axis coordinates. The height of the user satisfies the expression: user height = actual spatial distance m × |y1 − y2|.
Wherein, y1 is the y-axis coordinate corresponding to the key point of the head of the user, and y2 is the y-axis coordinate corresponding to the key point of the foot of the user.
S16, determining the standard weight, the standard waistline and a deviation ratio according to the height and waistline of the user, wherein the deviation ratio is the ratio of the waistline of the user to the standard waistline.
After the height of the user is obtained, the standard waistline corresponding to the user and the standard weight corresponding to the user can be determined according to the height. The standard waistline corresponding to the user meets the following expression:
standard waistline corresponding to the user = height of the user × 0.365
The standard weight corresponding to the user meets the following expression:
standard weight corresponding to the user = height of the user − 105
In the above expressions the height of the user is in centimeters; combining the expressions gives the standard waistline and the standard weight corresponding to the user. The deviation ratio can then be calculated from the waistline of the user and the corresponding standard waistline. The deviation ratio satisfies the following expression:
deviation ratio = waistline of the user ÷ standard waistline corresponding to the user
S17, determining the weight of the user based on the deviation ratio and the standard weight corresponding to the user.
In connection with S16, after obtaining the deviation ratio and the standard weight, the weight of the user may be calculated. The weight of the user satisfies the following expression:
weight of the user = standard weight corresponding to the user × deviation ratio
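Steps S15 to S17 chain together as a short calculation (the function names are placeholders; the formulas are exactly those given above, with height in centimeters):

```python
# Height from the head/foot keypoints and the per-pixel scale factor m.
def user_height_cm(m, y_head, y_foot):
    return m * abs(y_head - y_foot)

# Standard waistline (height x 0.365), standard weight (height - 105),
# deviation ratio, and finally the estimated weight of the user.
def user_weight(height_cm, user_waist_cm):
    standard_waist = height_cm * 0.365
    standard_weight = height_cm - 105
    ratio = user_waist_cm / standard_waist
    return standard_weight * ratio
```

For instance, a 170 cm user whose measured waistline equals the standard 62.05 cm gets exactly the standard weight of 65 kg; a larger waistline scales the estimate up proportionally.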
S18, obtaining the exercise consumption data of the target follow-up exercise based on the weight of the user, the weight of the reference object and the total exercise consumption data of the reference object.
In connection with S13322, the dotting file may further include total exercise consumption data of the reference object, and the time information of the plurality of exercise motions includes a time length corresponding to the first exercise motion and a total time length of the media resource. The exercise consumption data of the follow-up exercise of the first body-building action (also called as exercise consumption data of the user) can be calculated according to the weight of the user, the weight of the reference object, the total exercise consumption data of the reference object, the time length corresponding to the first body-building action and the total time length of the media resources.
Specifically, the exercise consumption data of the follow-up action of the first fitness action also satisfies the following expression:
calorie = total_calorie*(weight of the user/weight of the reference object)*(time/duration)
where calorie is the exercise consumption data of the user, total_calorie is the total exercise consumption data of the reference object, time is the duration corresponding to the fitness action, and duration is the total duration of the media resource.
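Like the earlier match-based formula, this weight-based expression transcribes directly (the function name is a placeholder):

```python
# calorie = total_calorie * (user weight / reference weight) * (time / duration)
def user_calorie_by_weight(total_calorie, user_weight, reference_weight,
                           time, duration):
    return total_calorie * (user_weight / reference_weight) * (time / duration)
```

A user of the same weight as the reference object consumes the reference's proportional share for the action; a heavier user consumes proportionally more.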
After the exercise consumption data of the user is obtained, the scoring data of the user can be calculated using steps S131, S132, S1331 and S13321, and finally the scoring data and exercise consumption data of the user are updated and displayed on the display of the television.
In some embodiments, as shown in fig. 12, after S13, the method further comprises:
S21, determining the motion data of the next action of the first fitness action when the first fitness action is not the last action of the media resource.
If the first fitness action is the last fitness action of the media resource, S21 and S22 are not performed. If it is not, the motion data of the next action of the first fitness action, that is, its scoring data and exercise consumption data, are determined according to the scheme described above.
S22, updating and displaying the motion data of the user according to the motion data of the next action of the first fitness action.
If the first fitness action is the first fitness action of the media resource, the display device directly displays the motion data of the first fitness action; after the motion data of the next action of the first fitness action is obtained, it is accumulated onto the motion data of the first fitness action to obtain the updated motion data of the user.
If the first fitness action is not the first fitness action of the media resource, the motion data displayed by the television includes not only the motion data of the first fitness action but also the motion data of all fitness actions before it. When the motion data of the next action of the first fitness action is obtained, the television adds it to the motion data of all the previous fitness actions and displays the total.
Illustratively, as shown in fig. 13, the television displays the updated motion data of the user, which includes an identification 1301, an identification 1302, an identification 1303 and an identification 1304. Identification 1301 displays the exercise consumption data (121 kcal), identification 1302 displays the scoring data (74 points), identification 1303 displays the scoring data of the next action of the first fitness action (+6), and identification 1304 displays interactive characters. The interactive characters can be text (such as "very good" in fig. 13), special patterns (such as the love and star patterns in fig. 13), and the like. These identifications increase the interactivity between the user and the television, help the user persist in exercising, and make the workout more interesting.
In one embodiment, as shown in FIG. 14, the television is started and the user selects a media resource on the television. The camera is then started, and the display of the television shows the preview image acquired by the camera. At the same time, the television acquires the dotting file corresponding to the media resource, parses it, acquires the media resource according to the dotting address in the dotting file, and plays the media resource.
Taking as an example the case where no motion data is recorded for the warm-up action, the following loop process is executed when the end of the warm-up action is detected.
The loop process is as follows: the progress bar control queries the playing progress of the media resource in the media player at fixed intervals and displays the query result on the progress bar interface. When the playing progress of the media resource reaches the formal start time of a fitness action, the follow-up image of the user is acquired, and limb detection is performed on it to obtain the limb coordinate information of the user.
Meanwhile, the standard fitness action is looked up in the action set according to the ID of the fitness action, and the standard limb coordinate information and standard scoring data corresponding to the standard fitness action are taken as the standard limb coordinate information and standard scoring data of the fitness action. The limb coordinate information, the standard limb coordinate information and the standard scoring data are input into the scoring algorithm.
When the playing progress of the media resource reaches the formal end time of the fitness action, the scoring algorithm calculates the matching degree between the user's follow-up action for the fitness action and the standard fitness action, calculates the scoring data and exercise consumption data of the follow-up exercise from the matching degree, and updates the scoring data and exercise consumption data displayed by the television. It is then judged whether there is a next fitness action; if so, it is further judged whether the playing progress of the media resource has reached the formal start time of the next action. The above steps are repeated until the last fitness action has finished playing.
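The loop of FIG. 14 can be sketched at a high level as follows. All names here are placeholders for illustration, not APIs from the disclosure: `progress_stream` yields (playing progress, follow-up image) pairs, `detect_limbs` stands in for the limb detection algorithm, and `score_action` for the scoring algorithm:

```python
# Hedged sketch of the follow-up loop: for each fitness action, collect limb
# information between its formal start and end times, then score the action
# and accumulate the user's motion data.
def run_follow_up(actions, progress_stream, detect_limbs, score_action):
    total = 0.0
    frames_iter = iter(progress_stream)
    for action in actions:  # fitness actions in playback order
        limb_info = []
        for progress, image in frames_iter:
            # collect limb coordinates once the formal start time is reached
            if progress >= action["start"]:
                limb_info.append(detect_limbs(image))
            # at the formal end time, score this action and move on
            if progress >= action["end"]:
                break
        total += score_action(action, limb_info)  # accumulate motion data
    return total
```

A real implementation would be event-driven (progress queries at fixed intervals, display refreshes after each action) rather than a single loop over a stream.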
The foregoing describes the solution provided by the embodiments of the present disclosure, primarily from a method perspective. To implement the above functions, a hardware structure and/or a software module is included for performing each function. Those skilled in the art will readily appreciate that the present disclosure can be implemented in hardware, or in a combination of hardware and computer software, for the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiment of the present disclosure may perform division of functional modules on the electronic device according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiments of the present disclosure is illustrative, and is only one division of logic functions, and there may be another division in actual implementation.
Corresponding to the method in the foregoing embodiments, an embodiment of the present disclosure further provides a display device. The display device is used to implement the method in the foregoing embodiments. Its functions can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
For example, fig. 15 shows a schematic structural diagram of a display device. As shown in fig. 15, the display device may include: an acquisition module 1501, a processing module 1502, and the like.
The acquisition module is used for acquiring a media resource, and a dotting file and an action set of the media resource, wherein the dotting file comprises time information of a plurality of fitness actions corresponding to the media resource. The processing module is used for acquiring a follow-up image when the playing progress of the media resource reaches the start time of the first fitness action, based on the time information of the plurality of fitness actions; the first fitness action is one of the plurality of fitness actions, and the follow-up image is an image collected while the user follows the first fitness action. The processing module is also used for obtaining the motion data of the user according to the dotting file, the action set and the follow-up image, and displaying the motion data of the user. The processing module is further used for continuing to acquire follow-up images when the playing progress of the media resource reaches the start time of the next action of the first fitness action.
In some implementable examples, the processing module includes a first unit, a second unit and a third unit. The first unit is used for determining the first limb coordinate information of the user based on the follow-up image. The second unit is used for determining the standard limb coordinate information and standard scoring data corresponding to the first fitness action based on the action set and the dotting file. The third unit is used for obtaining the motion data of the user according to the first limb coordinate information, the standard scoring data and the dotting file.
In some practical examples, the third unit is further configured to determine a matching degree of the first limb coordinate information and the standard limb coordinate information; and obtaining the motion data of the user according to the matching degree, the standard scoring data and the dotting file.
In some practical examples, the third unit is further configured to obtain scoring data of the user according to the matching degree and the standard scoring data.
In some practical examples, the third unit is further configured to obtain the exercise consumption data of the user according to the matching degree, the total exercise consumption data of the reference object, the duration corresponding to the first exercise action, and the total duration of the media resource.
In some practical examples, the third unit is further configured to determine the height and waistline of the user according to the follow-up image; determine the standard weight, the standard waistline and a deviation ratio according to the height and waistline of the user, wherein the deviation ratio is the ratio of the waistline of the user to the standard waistline; determine the weight of the user based on the deviation ratio and the standard weight; and derive the exercise consumption data of the user based on the weight of the user, the weight of the reference object and the total exercise consumption data of the reference object.
In some implementable examples, the third unit is further configured to search, based on the ID of the first fitness action in the dotting file, for a second fitness action in the action set, and use the second standard limb coordinate information and second standard scoring data corresponding to the second fitness action as the standard limb coordinate information and standard scoring data corresponding to the first fitness action; the ID of the first fitness action and the ID of the second fitness action are consistent.
In some implementable examples, the third unit is further configured to determine, if the first fitness action is not the last action of the media resource, the motion data of the next action of the first fitness action, and to update the motion data of the user based on the motion data of that next action.
It should be understood that the division of units or modules (hereinafter referred to as units) in the above apparatus is only a division of logical functions; in actual implementation, the units may be wholly or partially integrated into one physical entity, or physically separated. The units may all be implemented in the form of software invoked by a processing element, or entirely in hardware; alternatively, some units may be implemented as software invoked by a processing element while the others are implemented in hardware.
For example, each unit may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in a memory in the form of a program whose function is called and executed by a processing element of the apparatus. In addition, all or part of the units may be integrated together or implemented independently. The processing element, which may also be referred to herein as a processor, may be an integrated circuit with signal processing capability. In implementation, the steps of the above method, or the above units, may be implemented by integrated logic circuits of hardware in a processor element, or in the form of software invoked through the processor element.
In one example, the units in the above apparatus may be one or more integrated circuits configured to implement the above method, such as one or more ASICs, one or more DSPs, one or more FPGAs, or a combination of at least two of these integrated circuit forms.
As another example, when a unit in the apparatus is implemented in the form of a program scheduled by a processing element, the processing element may be a general-purpose processor, such as a CPU or another processor capable of invoking a program. As yet another example, these units may be integrated together and implemented in the form of a system on chip (SoC).
In one implementation, the units of the above apparatus that implement the corresponding steps of the above method may be implemented in the form of a program scheduled by a processing element. For example, the apparatus may comprise a processing element and a storage element, the processing element calling a program stored in the storage element to perform the method of the above method embodiments. The storage element may be on the same chip as the processing element, i.e., an on-chip storage element.
In another implementation, the program for performing the above method may be in a storage element on a different chip from the processing element, i.e., an off-chip storage element. In this case, the processing element calls or loads the program from the off-chip storage element onto the on-chip storage element, so as to call and execute the method of the above method embodiments.
For example, the embodiments of the present disclosure may also provide an apparatus, such as a data processing apparatus, which may include a processor and a memory for storing instructions executable by the processor. The processor is configured to execute the instructions so that the data processing apparatus implements the display method of the foregoing embodiments. The memory may be located within or outside the data processing apparatus, and the apparatus may include one or more processors.
In another implementation, the units of the apparatus that implement the steps of the above method may be configured as one or more processing elements disposed on the corresponding display device, where the processing elements may be integrated circuits, for example one or more ASICs, one or more DSPs, one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits may be integrated together to form a chip.
Another embodiment of the present disclosure provides a chip system, as shown in fig. 16, which includes at least one processor 1601 and at least one interface circuit 1602. The processor 1601 and the interface circuit 1602 may be interconnected by a line. For example, the interface circuit 1602 may be used to receive signals from other devices. As another example, the interface circuit 1602 may be used to send signals to other devices, such as the processor 1601.
For example, the interface circuit 1602 may read instructions stored in a memory in the device and send the instructions to the processor 1601. The instructions, when executed by the processor 1601, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this disclosure.
Embodiments of the present disclosure also provide a computer-readable storage medium having computer program instructions stored thereon. The computer program instructions, when executed by the display device, cause the display device to implement the display method as described above.
The embodiments of the present disclosure further provide a computer program product, which includes computer instructions; when the computer instructions are executed in the data processing device, the data processing device is enabled to implement the display method as described above. From the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division into functional modules is merely an example; in practical applications, the above functions may be distributed among different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above apparatus embodiments are merely illustrative; the division into modules or units is only a logical-function division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may be one physical unit or multiple physical units, located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a program product, such as a computer-readable storage medium, and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
For example, embodiments of the present disclosure may also provide a computer-readable storage medium having stored thereon computer program instructions. The computer program instructions, when executed by the display device, cause the display device to implement the display method as in the foregoing method embodiments.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any change or substitution within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.