
Screen projection control method, screen projection playing method and related device

Info

Publication number
CN111741352A
CN111741352A
Authority
CN
China
Prior art keywords
video
playing
address
information
play
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010657216.0A
Other languages
Chinese (zh)
Inventor
陈慧明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010657216.0A
Publication of CN111741352A
Legal status: Pending

Abstract

The application discloses a screen projection control method, a screen projection playing method, and a related device. The method includes: when a source device plays a first video, sending a screen projection instruction to a destination device; when the destination device starts its screen projection function in response to the screen projection instruction, sending first playing information to the destination device; and, if a video continuous playing request sent by the destination device is received, sending second playing information to the destination device. While the source device projects an episode list to the destination device for playing, the destination device actively initiates a video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the destination device based on that request. Continuous playback is thus achieved without the user noticing, the playing operation flow is simplified, signaling overhead between the source device and the destination device is reduced, and power consumption of the source device is saved.

Description

Screen projection control method, screen projection playing method and related device
Technical Field
The present application relates to the technical field of multimedia control, and in particular to a screen projection control method, a screen projection playing method, and a related device.
Background
With the development of smart products, connecting a smartphone to a television for projection display has become increasingly common. The devices can be interconnected using wireless communication technology, and on that basis a destination device (such as a television) can be controlled remotely through dedicated application software, so that the source device (such as a smartphone) acts as a multifunctional smart remote controller.
In the existing screen projection scheme, a user first selects an episode list to be played on the source device and sends the playing address of one video in that list to the destination device; the destination device downloads the corresponding video through that address and plays it, and when the video finishes, the destination device sends the source device a prompt indicating that playback has ended.
However, while the source device projects the episode list to the destination device for playing, the user must manually switch to the next video in the list after each one finishes. This makes screen projection playback cumbersome to operate and, at the same time, increases signaling between the devices, which in turn increases the power consumption of the source device.
Disclosure of Invention
The embodiments of the present application provide a screen projection control method, a screen projection playing method, and a related device. While the source device projects an episode list to the destination device for playing, the destination device actively initiates a video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the destination device based on that request. Continuous playback is thus achieved without the user noticing, the playing operation flow is simplified, signaling overhead between the source device and the destination device is reduced, and power consumption of the source device is saved.
In view of the above, an aspect of the present application provides a method for controlling screen projection, including:
when the source device plays a first video, sending a screen projection instruction to the destination device, where the screen projection instruction is used to indicate that the playing content of the source device is to be projected to the destination device for display;
when the destination device starts the screen projection function in response to the screen projection instruction, sending first playing information to the destination device, where the first playing information includes a playing address of the first video and is used to instruct the destination device to play the first video;
and if a video continuous playing request sent by the destination device is received, sending second playing information to the destination device, where the second playing information includes a playing address of a second video and is used to instruct the destination device to play the second video.
Another aspect of the present application provides a method for screen projection playing, including:
when a source end device plays a first video, receiving a screen projection instruction sent by the source end device, wherein the screen projection instruction is used for indicating that the playing content of the source end device is projected to a target device to be displayed;
when a screen projection function is started in response to a screen projection instruction, receiving first playing information sent by source end equipment, wherein the first playing information comprises a playing address of a first video;
playing a first video according to the first playing information;
if a video continuous playing request is sent to the source end device, second playing information sent by the source end device is received, wherein the second playing information comprises a playing address of a second video;
and playing the second video according to the second playing information.
Another aspect of the present application provides a screen projection control apparatus, including:
a sending module, configured to send a screen projection instruction to the destination device when the source device plays a first video, where the screen projection instruction is used to indicate that the playing content of the source device is to be projected to the destination device for display;
the sending module is further used for sending first playing information to the destination device when the destination device responds to the screen projection instruction and starts a screen projection function, wherein the first playing information comprises a playing address of a first video, and the first playing information is used for indicating the destination device to play the first video;
the sending module is further configured to send second playing information to the destination device if a video continuous playing request sent by the destination device is received, where the second playing information includes a playing address of the second video, and the second playing information is used to instruct the destination device to play the second video.
In one possible design, in an implementation manner of another aspect of the embodiment of the present application, the screen projection control device further includes an obtaining module and a receiving module;
an obtaining module, configured to obtain, through a video playing interface, a video playing instruction for the first video before the screen projection instruction is sent to the destination device while the source device plays the first video, where the video playing interface at least includes a video list, the first video is any one of the videos in the video list, and the video playing instruction carries a video identifier of the first video;
the sending module is further used for sending a video playing instruction to the server, wherein the video playing instruction is used for indicating the server to determine a playing address of the first video;
and the receiving module is used for receiving the playing address of the first video sent by the server.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the receiving module is specifically used for receiving an address to be played and an address failure identifier corresponding to the address to be played, which are sent by the server, wherein the address to be played corresponds to the first video;
if the address failure identification meets the video transmission condition, determining that the address to be played is the playing address of the first video;
and if the address failure identifier does not meet the video transmission condition, sending an address retransmission request to the server so that the server sends the playing address of the first video to the source end device, wherein the address retransmission request carries the video identifier of the first video.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the acquisition module is further used for acquiring a video playing instruction for a first video through a video playing interface before sending a screen projection instruction to a destination device when the source device plays the first video, wherein the video playing interface at least comprises a video list, the first video belongs to any one of videos in the video list, and the video playing instruction carries a video identifier of the first video and a definition identifier of the first video;
the sending module is further used for sending a video playing instruction to the server, wherein the video playing instruction is used for indicating the server to determine a playing address of the first video;
the receiving module is further used for receiving the playing address of the first video sent by the server.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the receiving module is configured to, after the first playing information is sent to the destination device in response to the screen projection instruction, receive first play state information sent by the destination device and display the first play state information if the destination device determines that the first video is played successfully;
the receiving module is further configured to, in response to the screen-projecting instruction, after sending the first playing information to the destination device, if the destination device determines that the first video playing fails, receive second playing state information sent by the destination device, and display the second playing state information, where the second playing state information includes a playing error code.
In one possible design, in one implementation of another aspect of the embodiment of the present application, the screen projection control device further includes a display module;
the sending module is further configured to, after the first play state information sent by the destination device has been received and displayed, send a play pause instruction to the destination device, so that the destination device pauses playing of the first video in response to the play pause instruction;
the receiving module is further configured to receive third play state information sent by the destination device, where the third play state information includes at least one of play pause prompt information, total length information of the video, and current progress information of the video;
and the display module is used for displaying the third playing state information.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the sending module is further configured to send a play start instruction to the destination device after the third play state information is displayed, so that the destination device continues to play the first video in response to the play start instruction;
the receiving module is further configured to receive fourth play state information sent by the destination device, where the fourth play state information includes at least one of play start prompt information, total video length information, and current video progress information;
and the display module is also used for displaying the fourth play state information.
Another aspect of the present application provides a screen projection playing device, including:
the receiving module is used for receiving a screen projecting instruction sent by the source end equipment when the source end equipment plays the first video, wherein the screen projecting instruction is used for indicating that the playing content of the source end equipment is projected to the target equipment to be displayed;
the receiving module is further configured to receive first play information sent by the source end device when a screen projection function is started in response to the screen projection instruction, where the first play information includes a play address of a first video;
the playing module is used for playing the first video according to the first playing information;
the receiving module is further configured to receive second play information sent by the source end device if a video continuous play request is sent to the source end device, where the second play information includes a play address of a second video;
and the playing module is also used for playing the second video according to the second playing information.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the playing module is further configured to play a second video according to a playing address of the second video after playing the first video according to the first playing information, if the first playing information further includes the playing address of the second video, where the second video is a video adjacent to the first video;
the playing module is further configured to, after the first video is played according to the first playing information, execute a step of sending a video continuous playing request to the source device if the first playing information does not include the playing address of the second video.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the playing module is specifically used for acquiring a first video according to a playing address of the first video included in the first playing information if the first playing information further includes an address invalidation identification and the address invalidation identification meets the video transmission condition;
playing the first video;
the playing module is specifically configured to send an address retransmission request to the source device if the first playing information further includes an address invalidation identifier and the address invalidation identifier does not satisfy the video transmission condition, so that the source device requests the server for a playing address of the first video, where the address retransmission request carries the video identifier of the first video;
receiving a play address of a first video sent by source-end equipment;
acquiring a first video according to the playing address of the first video;
the first video is played.
In one possible design, in one implementation of another aspect of an embodiment of the present application,
the playing module is specifically used for playing the first video according to the first playing information and displaying the title information if the first playing information also comprises the title information;
and if the first playing information also comprises the definition identification, playing the first video corresponding to the definition identification according to the first playing information.
Another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the above-described aspects.
According to the technical scheme, the embodiment of the application has the following advantages:
in an embodiment of the present application, a method for controlling screen projection is introduced, where a source device may send a screen projection instruction to a destination device when the source device plays a first video, and when the destination device responds to the screen projection instruction and starts a screen projection function, the source device sends first play information to the destination device, where the first play information includes a play address of the first video, the first play information is used to instruct the destination device to play the first video, and if the source device receives a video continuous play request sent by the destination device, the source device sends second play information to the destination device, where the second play information includes a play address of a second video, and the second play information is used to instruct the destination device to play the second video. By the method, in the process that the source device projects the episode to the target device for playing, the target device actively initiates the video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the target device based on the video continuous playing request, so that the video continuous playing can be realized under the condition that a user does not sense, the playing operation flow is simplified, the signaling overhead between the source device and the target device is reduced, and the power consumption of the source device is saved.
Drawings
FIG. 1 is a schematic diagram of an architecture of a projection control system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a screen projection control method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of a screen projection control method in an embodiment of the present application;
fig. 4 is a schematic interface diagram of a source device playing a first video in an embodiment of the present application;
FIG. 5 is a schematic view of a video playback interface in an embodiment of the present application;
FIG. 6 is another schematic diagram of a video playback interface in an embodiment of the present application;
fig. 7 is a schematic interface diagram showing first play status information in the embodiment of the present application;
fig. 8 is a schematic interface diagram showing second play status information in the embodiment of the present application;
fig. 9 is a schematic interface diagram showing third play status information in the embodiment of the present application;
fig. 10 is a schematic interface diagram showing fourth play status information in the embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of a method for screen projection playing in an embodiment of the present application;
FIG. 12 is a schematic diagram of an embodiment of a destination device playing a first video in the embodiment of the present application;
fig. 13 is a schematic diagram of another embodiment of the present application in which the destination device plays the first video;
FIG. 14 is a flowchart illustrating a method for screen-casting playing in an embodiment of the present application;
FIG. 15 is a schematic view of an embodiment of a screen projection control device in an embodiment of the present application;
fig. 16 is a schematic diagram of an embodiment of a screen projection playing device in an embodiment of the present application;
fig. 17 is a schematic structural diagram of a source device in an embodiment of the present application;
fig. 18 is a schematic structural diagram of a destination device in an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a screen projection control method, a screen projection playing method, and a related device. While the source device projects an episode list to the destination device for playing, the destination device actively initiates a video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the destination device based on that request. Continuous playback is thus achieved without the user noticing, the playing operation flow is simplified, signaling overhead between the source device and the destination device is reduced, and power consumption of the source device is saved.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of smart products, watching entertainment content on a larger screen has become an everyday need, and combining rich entertainment information with the large screen of a television has become a new direction. Screen projection from a mobile phone is a growing trend: whether mirror projection, software projection, or a dedicated screen projection device is used, the goal is to enjoy film and television works on a large screen for a better viewing experience, and connecting a smartphone (or tablet, etc.) to a smart television (or screen projection device, etc.) for projection display is increasingly common. The devices can be interconnected using wireless communication technology, and on that basis the source device (such as a smartphone, tablet, or desktop computer) can remotely control the destination device (such as a smart television or screen projection device), so that the source device acts as a multifunctional smart remote controller. The embodiments of the present application therefore apply to projection display and, specifically, may involve the following three scenarios:
Scenario one: some episodes are licensed only for the smartphone, tablet, or computer side, and the smart television side does not have the right to play them, so the user can use screen projection to cast the episodes playing on the smartphone, tablet, or computer onto the smart television for viewing.
Scenario two: if the user watches a video on the go through a smartphone or tablet, the screen projection function can be started after arriving home, so that playback continues seamlessly and the half-watched video is projected directly onto the smart television for continued viewing.
Scenario three: if the user wants to find a particular video, the search can be done directly on the smartphone or tablet and the found video projected onto the smart television for viewing, which is more convenient than searching for videos on the smart television with a remote controller.
Taking the source device as a smartphone and the destination device as a smart television as an example: with the screen projection control method provided by this application, an episode list can be projected from the smartphone to the smart television for playing. When continuous playing or video switching is needed, the smart television actively initiates a video continuous playing request to the smartphone, and the smartphone sends the playing address of the subsequent video to the smart television based on that request, so the user does not need to switch videos on the smartphone. Continuous playback is therefore achieved without the user noticing, the playing operation flow is simplified, signaling overhead between the smartphone and the smart television is reduced, and power consumption of the smartphone is saved.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a screen projection control system in an embodiment of the present application. As shown in the drawing, the screen projection control system includes a server K1, a source device K2, and a destination device K3. The server involved in this application may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms. The source device may be a smartphone, tablet computer, notebook computer, palmtop computer, or personal computer, and the destination device may be a smart television or a screen projection device, but neither is limited thereto.
Specifically, the source device and the destination device may communicate with each other through a Local Area Network (LAN) or a Wide Area Network (WAN). In one example, when a user logs in to the source device and the destination device with the same account, the two devices may communicate over a wide area network. In another example, the source device and the destination device access the same router, forming a local area network in which the devices communicate through the router. The devices may also form a peer-to-peer (P2P) network through wireless communication such as Bluetooth, Wireless Fidelity (WiFi), or ZigBee. In one example, the user joins both the source device and the destination device to a WiFi network named "11111", and the devices within that WiFi network form a P2P network. The source device and the destination device may also be interconnected through a cellular network, or through a transfer device (for example, a Universal Serial Bus (USB) data cable), so as to communicate with each other.
The connection between the source device and the server, and between the destination device and the server, may be over a wireless network, a wired network, or a removable storage medium. The wireless network is typically the Internet, but may be any network including, but not limited to, Bluetooth, a LAN, a Metropolitan Area Network (MAN), a WAN, a mobile network, a private network, or any combination of virtual private networks. It should be understood that the example in fig. 1 is only for understanding the solution; in practical applications, the number and type of source devices, destination devices, and servers should be determined flexibly according to the actual situation.
Based on this, a screen projection control method provided by the present application is described below, taking a source device as a smart phone and a destination device as a smart television as an example, please refer to fig. 2, where fig. 2 is a schematic flow chart of the screen projection control method in the embodiment of the present application, and as shown in the figure, specifically:
In step S1, when the user wants to watch the first video, the first video may be selected directly on the smartphone. The first video may be a certain episode of a TV series or a certain installment of a variety show; for example, if the user selects the tenth episode of TV series A, then "the tenth episode of TV series A" is the first video.
In step S2, after the user selects the first video, the smart phone starts playing the first video, where the first video may be played through a video client on the smart phone or through a player on the smart phone.
In step S3, when the user needs to project the first video onto the smart tv for playing, a screen projection instruction may be sent to the smart tv through the smart phone.
In step S4, after the smart television receives the screen projection instruction sent by the smart phone, the screen projection function may be started.
In step S5, the smart television continues to receive the first playing information sent by the smart phone, where the first playing information includes a playing address of the first video, and it should be understood that in practical applications, the first playing information may further include an address invalidation identifier, title information, and definition identifier, which are not limited herein.
In step S6, the smart television receives the first playing information sent by the smart phone, analyzes the first playing information to obtain a playing address of the first video, downloads the first video from the server based on the playing address, and plays the first video.
In step S7, if the smart TV plays the first video successfully, it actively sends play status information to the smartphone so that the smartphone can display it. It can be understood that the play status information may include a prompt that playback succeeded, and may also include the current play progress, the remaining play progress, and the like.
In step S8, when the first video on the smart TV has finished playing or is about to finish, the smart TV actively sends a video continuous playing request to the smartphone.
In step S9, after receiving the video continuous playing request sent by the smart TV, the smartphone may send second playing information to the smart TV, where the second playing information includes the playing address of the second video, and the second video may be "the eleventh episode of drama A". It should be understood that, in practice, the second playing information may further include an address invalidation identifier, title information, and a definition identifier, which are not limited here.
In step S10, the smart television receives the second playing information sent by the smart phone, and parses the second playing information to obtain a playing address of the second video, downloads the second video from the server based on the playing address, and plays the second video.
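To make the S1 to S10 flow above concrete, the following minimal Python sketch models the exchange between the smartphone (source device) and the smart TV (destination device) as plain message dictionaries. The class names, message fields, and the resolve_play_address helper are illustrative assumptions made for readability, not the patent's actual protocol or API.

```python
# Illustrative sketch of the S1-S10 exchange; message names and fields are
# assumptions made for readability, not the patent's actual protocol.

def resolve_play_address(server, video_id):
    # Hypothetical server lookup: video identifier -> playing address (URL).
    return server[video_id]

class SourceDevice:  # e.g. the smartphone
    def __init__(self, server, episode_ids):
        self.server = server
        self.episode_ids = episode_ids   # ordered video identifiers of the episode list
        self.current = 0

    def first_play_info(self):
        # S5: report the playing address of the currently playing (first) video.
        vid = self.episode_ids[self.current]
        return {"type": "play_info", "url": resolve_play_address(self.server, vid)}

    def on_resume_request(self):
        # S9: on a video continuous playing request, report the next video's address.
        self.current += 1
        vid = self.episode_ids[self.current]
        return {"type": "play_info", "url": resolve_play_address(self.server, vid)}

class DestinationDevice:  # e.g. the smart TV
    def play(self, play_info):
        # S6/S10: parse the playing information and start playback (simulated by printing).
        print("playing", play_info["url"])

# Tiny usage example with a fake address server.
server = {"ep10": "http://cdn.example/dramaA/ep10.m3u8",
          "ep11": "http://cdn.example/dramaA/ep11.m3u8"}
source = SourceDevice(server, ["ep10", "ep11"])
tv = DestinationDevice()
tv.play(source.first_play_info())      # S5-S7
tv.play(source.on_resume_request())    # S8-S10, no user action needed
```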
With reference to the above description, a method for controlling screen projection in the present application will be described below from the perspective of a source device, please refer to fig. 3, where fig. 3 is a schematic diagram of an embodiment of a screen projection control method in an embodiment of the present application, and as shown in the diagram, an embodiment of a method for controlling screen projection in an embodiment of the present application includes:
101. when the source end equipment plays the first video, the source end equipment sends a screen projection instruction to the target equipment, wherein the screen projection instruction is used for indicating that the playing content of the source end equipment is projected to the target equipment to be displayed;
in this embodiment, when the source device is playing a first video and a user has a screen projection requirement, a screen projection selection operation may be performed on the source device side, and the source device generates a corresponding screen projection instruction according to the screen projection selection operation and sends the screen projection instruction to the destination device, where the screen projection instruction is used to project the playing content of the source device to the destination device for display.
It should be noted that the first video may be a certain episode of a TV series, a certain installment of a variety show, or a certain film in a series of movies; for example, if the user selects to watch the tenth episode of TV series A, then "the tenth episode of TV series A" is the first video. The first video may be played through a video client on the source device or through a player on the source device. The user may perform the screen projection selection operation by clicking a screen projection button on the screen of the source device, by triggering it through voice control, or in other ways. The specific playing mode and screen projection selection operation of the first video are not limited here.
For convenience of understanding, please refer to fig. 4, where fig. 4 is a schematic interface diagram of a source device playing a first video in the embodiment of the present application, as shown in the drawing, a1 indicates a video playing interface, a2 indicates a screen projection button on the video playing interface, and A3 indicates a first video played on the video playing interface, when a user wishes to project the first video played in the video playing interface to a destination device, the user may click the screen projection button, and the source device generates a screen projection instruction based on a click operation of the screen projection button by the user, and sends the screen projection instruction to the destination device.
102. When the target device responds to the screen projection instruction and starts the screen projection function, the source device sends first playing information to the target device, wherein the first playing information comprises a playing address of a first video, and the first playing information is used for indicating the target device to play the first video;
in this embodiment, the destination device may start a corresponding screen projecting function based on the screen projecting instruction, and when the destination device starts the screen projecting function, it indicates that the destination device has already made a screen projecting preparation, so that the source device may continue to send the first play information to the destination device, where the first play information at least includes a play address of the first video. The destination device analyzes the first playing information to obtain a playing address of the first video, and then uses the playing address to download the corresponding first video from the server and play the first video. It is understood that, in practical applications, the first playing information may further include an address invalidation flag, title information, and a definition flag, which are not limited herein.
Specifically, the playing address may be a Uniform Resource Locator (URL) address. Each video corresponds to a unique URL address, and the information contained in the URL indicates the location of the video, so the URL acts as a network-wide extension of the file name. A URL address typically contains a protocol, a server name, a path, and a file name: the protocol indicates how the video should be fetched, the most common being the HyperText Transfer Protocol (HTTP) used to access the network; the server name is the name of the server where the first video is located; and the path and file name are the path to the first video and its name.
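As an illustration of the protocol / server name / path decomposition described above, the short sketch below splits a hypothetical playing address using Python's standard urllib; the address itself is invented for the example.

```python
from urllib.parse import urlparse

# A hypothetical playing address; the real URL format is service-specific.
play_address = "http://video.example.com/dramaA/episode10.mp4"
parts = urlparse(play_address)
print(parts.scheme)   # protocol, e.g. "http"
print(parts.netloc)   # server name
print(parts.path)     # path and file name of the first video
```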
Illustratively, the data structure of the first play information may be represented as follows:
- episode 1
  - definition 1 (URL 1)

Here, episode 1 represents the first video, URL 1 represents the playing address of the first video, and definition 1 represents the definition of the first video.
It can be understood that the definition of the first video may be standard definition, high definition, super definition, or blue light, and the specific definition needs to be determined flexibly according to the actual situation.
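One way to picture the first playing information, including the optional fields mentioned above (address invalidation identifier, title information, definition identifier), is the following Python sketch; the field names and values are assumptions made purely for illustration.

```python
# Hypothetical layout of the first playing information; field names are
# illustrative, not taken from the patent.
first_play_info = {
    "episodes": [
        {
            "title": "Episode 1",                       # optional title information
            "definition": "hd",                         # optional definition identifier
            "url": "http://video.example.com/ep1.mp4",  # playing address (URL 1)
            "expires_in": 300,                          # optional address invalidation identifier, seconds
        }
    ]
}
print(first_play_info["episodes"][0]["url"])
```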
103. And if a video continuous playing request sent by the destination equipment is received, the source equipment sends second playing information to the destination equipment, wherein the second playing information comprises a playing address of a second video, and the second playing information is used for indicating the destination equipment to play the second video.
In this embodiment, if the first playing information carries only the playing address of the first video, the destination device may actively send a video continuous playing request to the source device when the first video has finished playing or is about to finish. If the first playing information carries the playing addresses of several videos, the destination device may actively send the request when all of those videos have finished or are about to finish.
After receiving the video continuous playing request, the source device determines the next video to be played based on the request and sends second playing information to the destination device, where the second playing information includes the playing address of the second video. The destination device parses the second playing information to obtain that playing address and uses it to download the corresponding second video from the server and play it. It is understood that, in practice, the second playing information may further include an address invalidation identifier, title information, and a definition identifier, which are not limited here.
The second video may be the next video adjacent to the first video, or different video content. If the first video is the tenth episode of drama A, the second video may be the eleventh episode when continuing in forward order, or the ninth episode when continuing in reverse order. If the first video is the second installment of movie series A, the second video may be the third installment in forward order or the first installment in reverse order. If the first video is the live content of live room A, the second video may be the replay content of a neighboring live room.
Illustratively, the data structure of the second play information may be expressed as follows:
- episode 2
  - definition 1 (URL 2)

Here, episode 2 represents the second video, URL 2 represents the playing address of the second video, and definition 1 represents the definition of the second video.
It should be noted that this embodiment is described with the first video and the second video having the same definition; in practical applications their definitions may differ, and the specific definition should be determined flexibly according to the actual situation.
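A minimal sketch of how the source device might answer a video continuous playing request, assuming an ordered episode list, already-resolved playing addresses, and a forward or reverse continuation direction; all names and the data layout are illustrative assumptions.

```python
# Illustrative only: choose the "second video" relative to the first one and
# build the second playing information. The ordering rules are assumptions.
def build_second_play_info(episode_list, current_index, addresses, reverse=False):
    next_index = current_index - 1 if reverse else current_index + 1
    if not 0 <= next_index < len(episode_list):
        return None  # nothing left to continue with
    next_id = episode_list[next_index]
    return {"episode": next_id, "url": addresses[next_id]}

episodes = ["ep09", "ep10", "ep11"]
addresses = {e: f"http://video.example.com/dramaA/{e}.mp4" for e in episodes}
print(build_second_play_info(episodes, 1, addresses))                # forward: ep11
print(build_second_play_info(episodes, 1, addresses, reverse=True))  # reverse: ep09
```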
In the embodiment of the application, a method for controlling screen projection is introduced, and in the above manner, in the process that a source device projects an episode to a destination device for playing, the destination device actively initiates a video continuous playing request to the source device, and the source device reports a playing address of a subsequent video to the destination device based on the video continuous playing request, so that video continuous playing can be realized without perception of a user, a playing operation flow is simplified, signaling overhead between the source device and the destination device is reduced, and power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided in this application embodiment, when the source device plays the first video, before the source device sends a screen projection instruction to the destination device, the method for controlling screen projection may further include the following steps:
the source end equipment acquires a video playing instruction aiming at a first video through a video playing interface, wherein the video playing interface at least comprises a video list, the first video belongs to any one of videos in the video list, and the video playing instruction carries a video identifier of the first video;
the source end equipment sends a video playing instruction to the server, wherein the video playing instruction is used for indicating the server to determine a playing address of the first video;
the source device receives a play address of a first video sent at the server.
In this embodiment, a method for a user to actively select different videos is introduced, where a video list is also displayed on a video playing interface of a source device, and a first video being played belongs to one of videos in the video list. It can be understood that a user can select any one of videos through a video list in a video playing interface to play, in this application, taking a first video selected by the user as an example for description, a source end device obtains a video playing instruction for the first video based on a user operation, where the video playing instruction carries a video identifier of the first video. For example, if the first video is a first episode of a series a, the video identifier of the first video may be "01", and if the first video is a second episode of the series a, the video identifier of the first video may be "02", and the specific video identifier needs to be determined flexibly according to actual conditions.
It is understood that each first video may correspond to the same definition, or may have different definitions. Assuming that each video in the video list corresponds to the same definition, the user does not need to select the definition of the video. For example, the default definition in the video list is high definition (i.e., 480P), then the definition of the first video is high definition (i.e., 480P). For example, the default definition of the video list is blue light (i.e. 1080P), and then the definition of the first video is blue light (i.e. 1080P), it should be noted that in practical applications, the default definition of the video list may also be standard definition (i.e. 270P) or super definition (i.e. 720P).
Specifically, after obtaining a video playing instruction, the source device sends it to a server (e.g., a video server). The server determines the first video based on the video identifier carried in the instruction, obtains the playing address of the first video, and sends that playing address back to the source device. The playing address may specifically be a URL address, from which the location where the first video is stored can be determined.
For ease of understanding, please refer to fig. 5, which is a schematic diagram of a video playing interface in an embodiment of the present application. As shown in the figure, B1 indicates the video playing interface, B2 a video list on that interface, B3 the first video selected by the user in the video list, and B4 the video playing area. Taking the TV series "cherry blossom" as an example, the video playing interface displays a video list containing entries for episodes one through six. If the user selects episode four, the first video is "episode four of cherry blossom" and is played in the video playing area of the video playing interface. After the source device obtains a video playing instruction for the first video, it sends the instruction to the server; the instruction carries the video identifier "04" corresponding to "episode four of cherry blossom", so the server can determine the first video from the identifier "04" and obtain the corresponding playing address. The server then sends the playing address of the first video to the source device.
It can be understood that this embodiment describes how to obtain the playing address of the first video; in practice, the source device may also send the server the video identifiers of every video in the list, or of only part of them. For example, if the video list contains 6 videos with identifiers "01" through "06", the source device may send a video playing instruction carrying all of "01", "02", "03", "04", "05", and "06"; the server determines the playing address corresponding to each identifier and returns them, and after obtaining them the source device may send first playing information to the destination device that includes the playing address of the first video only, of part of the videos, or of all videos in the list. When the first playing information contains two or more playing addresses, the destination device can directly use the playing address of the next video after the first video finishes, without requesting it from the source device, which saves the signaling of a video continuous playing request and thus reduces the power consumption of both the source device and the destination device.
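The destination-side decision described above, reuse a batched playing address when it was already delivered in the first playing information and otherwise fall back to a video continuous playing request, might look like this sketch; the field names and the request callback are assumptions.

```python
# Illustrative destination-side logic: after finishing one video, reuse the next
# playing address if it was batched into the first playing information,
# otherwise fall back to a video continuous playing request. Names are assumptions.
def next_play_address(play_info, finished_index, request_resume):
    urls = play_info["urls"]                 # list of batched playing addresses
    if finished_index + 1 < len(urls):
        return urls[finished_index + 1]      # no extra signaling needed
    return request_resume()                  # ask the source device for more

# Usage with a fake callback standing in for the source device.
info = {"urls": ["http://v.example/ep1.mp4", "http://v.example/ep2.mp4"]}
print(next_play_address(info, 0, lambda: "http://v.example/ep3.mp4"))  # batched address
print(next_play_address(info, 1, lambda: "http://v.example/ep3.mp4"))  # resume request
```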
Illustratively, if the playing addresses of the plurality of videos are included in the first playing information, the data structure of the first playing information may be represented as follows:
- episode 1
  - definition 1 (URL 1)
- episode 2
  - definition 1 (URL 2)

Here, episode 1 represents the first video, URL 1 its playing address, episode 2 another video in the video list, URL 2 that video's playing address, and definition 1 the definition of each video (the same for both in this example).
It should be noted that this embodiment is described with the first video and the other video having the same definition; in practical applications their definitions may differ, and the specific definition should be determined flexibly according to the actual situation.
In the embodiment of the application, a method for enabling a user to actively select different videos is provided, and through the above manner, the user can actively select a played video at a source end device side and feed the video back to a destination device, so that convenience in operation is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided in this application embodiment, the receiving, by the source device, the play address of the first video sent by the server specifically includes the following steps:
a source end device receives an address to be played and an address failure identification corresponding to the address to be played, wherein the address to be played corresponds to a first video;
if the address failure identifier meets the video transmission condition, the source end equipment determines that the address to be played is the playing address of the first video;
if the address failure identifier does not meet the video transmission condition, the source device sends an address retransmission request to the server, so that the server sends the playing address of the first video to the source device, wherein the address retransmission request carries the video identifier of the first video.
In this embodiment, a way to handle expiration of the playing address is introduced. After the source device sends a video playing instruction to the server, the server can determine the address to be played corresponding to the first video and an address invalidation identifier corresponding to that address. The address invalidation identifier may be "-1", "0", or an actual aging duration in seconds. If the identifier is "-1", the address to be played corresponding to the first video never expires. If the identifier is "0", it is unknown whether that address has expired. If the identifier is an actual aging duration, for example 300 seconds (i.e., the identifier is "300"), the address to be played expires 300 seconds after it is obtained.
After receiving the address to be played and the address invalidation identifier fed back by the server, the source device needs to determine whether the identifier meets the video transmission condition. If the identifier is "-1", it meets the video transmission condition. If the identifier is "0", it may also be taken to meet the video transmission condition. If the identifier is an actual aging duration and the current time still falls within that duration, the identifier meets the video transmission condition.
Based on this, when the identifier is "0" it can be considered to satisfy the video transmission condition, so the address to be played is taken as the playing address of the first video; alternatively, an identifier of "0" may instead be treated as not satisfying the condition.
In the case that the address invalidation flag is "-1", the address invalidation flag may be considered to satisfy the video transmission condition, and thus, the address to be played is taken as the playing address of the first video.
When the identifier is an actual aging duration, if the time elapsed since the identifier was obtained is less than or equal to that duration, the identifier is considered to meet the video transmission condition, so the address to be played is taken as the playing address of the first video. Otherwise, if the elapsed time exceeds the indicated duration, the identifier does not meet the video transmission condition, and the source device needs to request the playing address of the first video again: it sends an address retransmission request carrying the video identifier of the first video to the server, so that the server re-determines the playing address of the first video and sends it to the source device. The retransmitted playing address may again carry a corresponding address invalidation identifier, and the source device may again check whether the video transmission condition is satisfied, which is not repeated here.
It is understood that if the address invalidation identifier is "-1", the address to be played can simply be treated as the playing address of the first video. Alternatively, when the identifier is an actual aging duration, the source device starts a countdown upon obtaining it; for example, with an aging duration of 300 seconds the countdown is 300 seconds. Before the countdown ends, the address to be played is treated as the playing address of the first video; after the countdown ends, the address is considered expired, so the playing address of the first video must be requested from the server again.
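The three cases of the address invalidation identifier described above ("-1" never expires, "0" unknown, otherwise an aging duration in seconds) could be checked roughly as in the sketch below. Treating "0" as satisfying the video transmission condition follows one of the two readings given in the text, and the timestamp handling is an assumption.

```python
import time

def address_still_valid(expiry_flag, obtained_at, now=None):
    """Illustrative check of the video transmission condition.

    expiry_flag: "-1" never expires, "0" unknown (treated here as valid),
    otherwise the actual aging duration in seconds.
    obtained_at: time.time() value recorded when the address was received.
    """
    now = time.time() if now is None else now
    if expiry_flag == "-1":
        return True
    if expiry_flag == "0":
        return True  # one of the two interpretations described in the text
    return (now - obtained_at) <= float(expiry_flag)

t0 = time.time()
print(address_still_valid("-1", t0))          # True: never expires
print(address_still_valid("300", t0))         # True while within 300 seconds
print(address_still_valid("300", t0 - 301))   # False: request the address again
```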
This embodiment thus provides a way to handle playing-address expiration: the source device checks whether the address invalidation identifier meets the video transmission condition before using the playing address, and since a playing address may expire after some time, the source device requests it again in that case. Giving playing addresses a limited lifetime effectively improves the security of playing links and reduces address abuse.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided in this application embodiment, when the source device plays the first video, before the source device sends a screen projection instruction to the destination device, the method for controlling screen projection may further include the following steps:
the source end equipment acquires a video playing instruction aiming at a first video through a video playing interface, wherein the video playing interface at least comprises a video list, the first video belongs to any one of videos in the video list, and the video playing instruction carries a video identifier of the first video and a definition identifier of the first video;
the source end equipment sends a video playing instruction to the server, wherein the video playing instruction is used for indicating the server to determine a playing address of the first video;
the source end device receives a playing address of a first video sent by the server.
In this embodiment, a method for setting video definition is introduced. A user may further select or adjust the definition of the first video, and the source device obtains the user's video playing instruction for the first video, where the video playing instruction carries the video identifier of the first video and the definition identifier of the first video. The source device then sends the video playing instruction to the server. The server parses the video identifier of the first video and the definition identifier of the first video from the video playing instruction, and then determines the playing address of the video at that definition, namely the playing address of the first video.
In particular, each first video may correspond to at least one definition, including but not limited to standard definition, high definition, super definition, and Blu-ray. For example, the definition identifier of standard definition is "11", the definition identifier of high definition is "12", the definition identifier of super definition is "13", the definition identifier of Blu-ray is "14", and the definition identifier of 4K is "15". Assuming that the video identifier of the first video is "01" and the definition identifier of the first video is "12", the video playing instruction of the first video carries the identifier "01" and the identifier "12", based on which it is determined that the user has selected the first video corresponding to the video identifier "01" and that the first video is to be played in high definition.
For easy understanding, please refer to fig. 6, which is another schematic diagram of a video playing interface in an embodiment of the present application. As shown in the drawing, C1 indicates the video playing interface, C2 indicates a video list on the video playing interface, C3 indicates the first video selected by the user in the video list, C4 indicates a video playing area in the video playing interface, C5 indicates a definition list in the video playing area, and C6 indicates the definition selected by the user in the definition list. Taking the TV drama "cherry blossom" as an example, the video playing interface of the source end device displays a video list that includes the video entries of the first episode to the sixth episode. If the user chooses to watch the fourth episode, the first video is "the fourth episode of cherry blossom". The user may also select the definition of the first video; if the definition selected by the user is "super definition (720P)", then the first video is played at super definition in the video playing area of the video playing interface.
Assuming that the video identifier of "the fourth episode of cherry blossom" is "04" and the super definition identifier is "13", the source device sends a video playing instruction to the server, where the video playing instruction carries the video identifier "04" and the definition identifier "13", and the server thus obtains the corresponding playing address based on the video playing instruction.
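As a purely illustrative sketch, the video playing instruction in the example above could be modelled as follows; the class name, the parameter names ("vid", "defn"), and the query-string encoding are assumptions, not a format defined by this application.

```kotlin
// Hypothetical payload of a video playing instruction sent to the server.
data class VideoPlayInstruction(val videoId: String, val definitionId: String)

// Encodes the instruction as simple request parameters; the parameter names are illustrative only.
fun VideoPlayInstruction.toQuery(): String = "vid=$videoId&defn=$definitionId"

fun main() {
    // The example above: episode "04" requested at super definition "13".
    val instruction = VideoPlayInstruction(videoId = "04", definitionId = "13")
    println(instruction.toQuery())   // prints: vid=04&defn=13
}
```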
It can be understood that this embodiment describes how to obtain the playing address of the first video. In an actual application, the source device may further send, to the server, the video identifier corresponding to each video in the entire video list, or the video identifiers corresponding to a part of the videos in the video list, where each video corresponds to at least one definition identifier.
For example, the video list includes 6 videos, and the video identifiers corresponding to the videos are "01", "02", "03", "04", "05", and "06". Each video identifier may correspond to at least one definition identifier; for example, the definition identifiers corresponding to the video identifier "01" are "11", "12", "13", "14", and "15". The source device may send to the server a video playing instruction carrying all the video identifiers and their corresponding definition identifiers; for example, the video playing instruction may carry the video identifier "01" and its corresponding definition identifier "12", and may also carry the video identifier "02" and its corresponding definition identifier "13". Based on this, the server determines the playing address corresponding to each pair of video identifier and definition identifier and sends these playing addresses to the source device. After acquiring the playing addresses corresponding to all the videos, the source device may send the first playing information to the destination device, where the first playing information may include the playing address of the first video in the video list, the playing addresses of a part of the videos in the video list, or the playing addresses of all the videos in the video list. In the case that the first playing information includes two or more playing addresses, the destination device can directly use the playing address of the next video in the first playing information after the first video is played, without requesting the playing address of the next video from the source device, which saves the signaling consumed by the video continuous playing request and saves the power consumption of the source device and the destination device.
Illustratively, if the first playing information includes the playing addresses of a plurality of videos, and each video has a plurality of definitions, the data structure of the first playing information may be expressed as follows:
- - Episode 1
- - - Definition 1 (URL 1, definition identifier 1)
- - - Definition 2 (URL 2, definition identifier 2)
- - - Definition 3 (URL 3, definition identifier 3)
- - Episode 2
- - - Definition 1 (URL 4, definition identifier 1)
- - - Definition 2 (URL 5, definition identifier 2)
- - - Definition 3 (URL 6, definition identifier 3)
Here, definition identifier 1 may indicate one definition (e.g., standard definition), definition identifier 2 indicates another definition (e.g., high definition), and definition identifier 3 indicates yet another definition (e.g., super definition). Episode 1 may represent one video (e.g., the first video), and episode 1 has a URL address at each of the three definitions. For example, URL 1 may represent the playing address of episode 1 at the standard definition, URL 2 may represent the playing address of episode 1 at the high definition, and URL 3 may represent the playing address of episode 1 at the super definition. Episode 2 represents another video in the video list (i.e., not the first video), and episode 2 also has a URL address at each of the three definitions. For example, URL 4 may represent the playing address of episode 2 at the standard definition, URL 5 may represent the playing address of episode 2 at the high definition, and URL 6 may represent the playing address of episode 2 at the super definition.
It is understood that the data structure of the first playing information may further include a plurality of video identifiers and corresponding definition identifiers, and the corresponding playing addresses are different for the same video at different definitions.
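The nested structure above can be captured with a small set of classes. The sketch below is only one possible in-memory representation; the class and field names are assumptions made for illustration.

```kotlin
// One definition variant of an episode: its identifier and play address.
data class DefinitionEntry(
    val definitionId: String,   // e.g. "11" standard definition ... "15" 4K
    val url: String             // play address of the episode at this definition
)

// One episode carried in the first playing information.
data class EpisodeEntry(
    val videoId: String,                    // e.g. "01" .. "06"
    val definitions: List<DefinitionEntry>  // at least one definition per episode
)

// The first playing information may carry one, several, or all episodes of the video list.
data class FirstPlayInfo(val episodes: List<EpisodeEntry>)
```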
In the embodiment of the application, a method for setting video definition is provided, and through the above manner, a user can also select the definition of a played video on source end equipment according to requirements, and finally, a target equipment presents the video with corresponding definition, so that the flexibility of video playing is increased, and the video playing effect is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided in this application embodiment, after the source device sends the first play information to the destination device in response to the screen-casting instruction, the method for controlling screen casting may further include the following steps:
if the destination device determines that the first video is successfully played, the source device receives first playing state information sent by the destination device and displays the first playing state information;
and if the destination device determines that the first video playing fails, the source device receives second playing state information sent by the destination device and displays the second playing state information, wherein the second playing state information comprises a playing error code.
In this embodiment, a method for the destination device to actively feed back playing state information to the source device is introduced. The destination device may start playing the first video after responding to the screen projection instruction and starting the screen projection function. When the destination device determines that the first video is played successfully, it may send the first playing state information to the source device to notify the source device that the first video has been played successfully. After receiving the first playing state information, the source device may display it; the text corresponding to the first playing state information may be "screen-casting playing is successful", "screen-casting is successful", or the like, and the text content corresponding to the first playing state information may be set according to the actual situation. For easy understanding, please refer to fig. 7, which is a schematic diagram of an interface showing the first playing state information in an embodiment of the present application. As shown in the drawing, D1 indicates the video playing interface, and D2 indicates the first playing state information shown on the video playing interface.
In practical applications, if the playing address becomes invalid, the destination device may be unable to play the first video, resulting in a failure to play the first video. A failure of the network connection between the source device and the destination device may also result in a failure to play the first video, as may a crash of the video client or player on the destination device. Therefore, when the destination device determines that playing of the first video fails, it actively sends the second playing state information to the source device, and the source device displays the second playing state information after receiving it. The text corresponding to the second playing state information may be "screen-casting playing failed", "screen-casting failed", or the like, and the text content corresponding to the second playing state information may be set according to the actual situation. For easy understanding, please refer to fig. 8, which is a schematic diagram of an interface showing the second playing state information in the embodiment of the present application. As shown in the drawing, E1 indicates the video playing interface, and E2 indicates the second playing state information shown on the video playing interface.
It should be noted that the second playing status information further includes a playing error code, where the playing error code includes but is not limited to "401", "503", "504", "4000", "4100", "4200", "4300", "4400", "4500", "4600", "6000", "6100", and "6200", and each playing error code is described below:
the play error code "401" is used to indicate that the error type is not compliant with the HTTP request parameters.
The playback error code "503" is used to indicate that the error type is that the calling amount has exceeded the limit.
A play error code "504" is used to indicate that the error type is a service failure.
The playback error code "4000" is used to indicate that the error type is illegal to adjust the request parameters, that the necessary parameters are missing, or that the parameter values are in an incorrect format.
The play error code "4100" is used to indicate that the error type is authentication failure.
The play error code "4200" is used to indicate that the error type is request expired.
A play error code of "4300" is used to indicate that the error type is access denied, that the account is blocked, or that the user is not within the scope of the interface.
The broadcast error code "4400" is used to indicate that the error type is over quota, which indicates that the number of requests exceeds quota limit, please refer to the document request quota portion.
The playing error code "4500" is used to indicate that the type of the error is replay attack, and a temporary parameter and a timestamp (timestamp) parameter of the request are used to ensure that each request is executed only once on the server side, so the temporary parameter of the current time and the last time cannot be repeated, and the timestamp parameter cannot be different from the server by more than 2 hours.
The play error code "4600" is used to indicate that the error type is protocol unsupported.
The play error code "6000" is used to indicate that the error type is a server internal error.
The playback error code "6100" is used to indicate that the error type is temporarily not supported by the version, which indicates that the interface is not supported by the version or that the interface is in a maintenance state.
The playback error code "6200" is used to indicate that the error type is that the interface is temporarily inaccessible, which indicates that the current interface is in the service-suspended state.
In the embodiment of the present application, a method for actively feeding back play status information to a source device by a destination device is provided, and in this way, no matter a first video is successfully or unsuccessfully played, the destination device actively feeds back the play status information to the source device, and the source device does not need to initiate a polling request to the destination device, so that signaling overhead between the source device and the destination device is saved, and power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided in this application embodiment, after the source device receives the first play state information sent by the destination device and displays the first play state information, the method for controlling screen projection may further include the following steps:
the source end device sends a play pause instruction to the target device, so that the target device pauses playing the first video in response to the play pause instruction;
the source end device receives third play state information sent by the target device, wherein the third play state information comprises at least one of play pause prompt information, total video length information and current video progress information;
and the source end equipment displays the third play state information.
In this embodiment, a method for a destination device to actively push a pause state to a source device is introduced, where when the destination device determines that a first video is successfully played, the destination device may send first play state information to the source device, so as to notify the source device that the first video has been successfully played. When a user needs to pause playing of the first video, the source device can control the first video to pause playing, that is, the source device sends a playing pause instruction to the destination device, and the destination device pauses playing of the first video according to the playing pause instruction. The destination device also sends third play state information to the source device, where the third play state information includes at least one of play pause prompt information, total video length information, and current video progress information, and the source device displays the third play state information after receiving the third play state information. The text information corresponding to the play pause prompt information may be that "the screen projection play is paused", or "the screen projection is paused", or "the play is paused", and the like, the total length information of the video is the full length of the first video, for example, 90 minutes, or 5400 seconds, and the like, and the current progress information of the video is the length of time that the first video has been played.
For easy understanding, please refer to fig. 9, which is a schematic interface diagram illustrating the third playing state information in the embodiment of the present application. As shown in the drawing, F1 indicates the video playing interface, F2 indicates the play pause prompt information displayed on the video playing interface, F3 indicates the total video length information displayed on the video playing interface, and F4 indicates the current video progress information displayed on the video playing interface. The play pause prompt information indicates that the destination device has paused playing the first video, the total length information indicates that the length of the first video is 54 minutes 08 seconds, and the current progress information indicates that the first video had completed a playing progress of 8 minutes 03 seconds when paused.
In the embodiment of the present application, a method for the destination device to actively push the pause state to the source device is provided. In this way, the video paused by the destination device can be controlled by the source device, and the destination device can actively feed back the corresponding playing state to the source device, which is beneficial to increasing the feasibility of the scheme.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment provided by the embodiment of the present application, after the source device displays the third play state information, the method for controlling screen projection may further include the following steps:
the source end equipment sends a play starting instruction to the target equipment so that the target equipment responds to the play starting instruction and continues to play the first video;
the source end device receives fourth play state information sent by the destination device, wherein the fourth play state information comprises at least one of play start prompt information, total video length information and current video progress information;
and the source end equipment displays the fourth play state information.
In this embodiment, a method for a destination device to actively push an open state to a source device is introduced, where when a user needs to continue playing a paused first video, a play open instruction may be sent to the destination device by the source device, and the destination device responds to the play open instruction and continues playing the first video. The destination device also sends fourth play state information to the source device, where the fourth play state information includes at least one of play start prompt information, total video length information, and current video progress information, and the source device displays the fourth play state information after receiving the fourth play state information. The text information corresponding to the play start prompt information may be "the video has been continuously played", or "the screen projection has been continued", and the like, the total length information of the video is the full duration of the first video, for example, 90 minutes, or 5400 seconds, and the current progress information of the video is the duration of the first video that has been played.
For convenience of understanding, please refer to fig. 10, which is a schematic interface diagram illustrating the fourth playing state information in an embodiment of the present application. As shown in the figure, G1 indicates the video playing interface, G2 indicates the play start prompt information displayed on the video playing interface, G3 indicates the total video length information displayed on the video playing interface, and G4 indicates the current video progress information displayed on the video playing interface. The play start prompt information indicates that the destination device continues to play the first video. As can be seen from the total video length information, the length of the first video is 54 minutes 08 seconds. As can be seen from the current video progress information, the first video had completed a playing progress of 8 minutes 03 seconds when playing was resumed.
In the embodiment of the present application, a method for a destination device to actively push an open state to a source device is provided, and in this way, the source device can control the destination device to continue playing a video, and the destination device can actively feed back a corresponding play state to the source device, thereby facilitating to increase the feasibility of a scheme.
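A compact way to model the four kinds of playing state information pushed by the destination device is sketched below. The state names and fields are assumptions made only for illustration.

```kotlin
// Hypothetical play state message pushed from the destination device to the source device.
sealed class PlayStateInfo {
    object PlaySucceeded : PlayStateInfo()                          // first playing state information
    data class PlayFailed(val errorCode: String) : PlayStateInfo()  // second playing state information
    data class Paused(val totalSeconds: Int, val progressSeconds: Int) : PlayStateInfo()   // third
    data class Resumed(val totalSeconds: Int, val progressSeconds: Int) : PlayStateInfo()  // fourth
}

// Example: the paused state shown in fig. 9 (54 min 08 s total, 8 min 03 s played).
val pausedExample = PlayStateInfo.Paused(totalSeconds = 54 * 60 + 8, progressSeconds = 8 * 60 + 3)
```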
With reference to the above description, a method for screen projection control in the present application will be described below from the perspective of a destination device, please refer to fig. 11, where fig. 11 is a schematic diagram of an embodiment of a method for screen projection playing in an embodiment of the present application, and as shown in the drawing, an embodiment of the method for screen projection playing in the embodiment of the present application includes:
201. when a source end device plays a first video, a target device receives a screen projection instruction sent by the source end device, wherein the screen projection instruction is used for indicating that the playing content of the source end device is projected to the target device to be displayed;
in this embodiment, when the source device is playing a first video and a user has a screen projection requirement, a screen projection selection operation may be performed on the source device side, and the source device generates a corresponding screen projection instruction according to the screen projection selection operation and sends the screen projection instruction to the destination device, where the screen projection instruction is used to project the playing content of the source device to the destination device for display.
It is understood that step 201 is similar to step 101, and therefore will not be described herein.
202. When the target device responds to a screen projection instruction and starts a screen projection function, the target device receives first playing information sent by the source device, wherein the first playing information comprises a playing address of a first video;
in this embodiment, when the destination device starts the screen projecting function, that is, it indicates that the destination device has already made a screen projecting preparation, therefore, the source device may continue to send the first play information to the destination device, where the first play information at least includes a play address of the first video. The destination device analyzes the first playing information to obtain a playing address of the first video, and then uses the playing address to download the corresponding first video from the server and play the first video.
It is understood that step 202 is similar to step 102, and therefore will not be described herein.
203. The target equipment plays the first video according to the first playing information;
in this embodiment, the destination device may acquire the first video from the corresponding path based on the play address of the first video in the first play information, and play the first video.
For easy understanding, referring to fig. 12, which is a schematic diagram of an embodiment of playing the first video by the destination device in the embodiment of the present application, H1 shown in (A) in fig. 12 is used to indicate the first video selected on the source device, and H2 shown in (B) in fig. 12 is used to indicate the first video played by the destination device. That is, the source device determines the first video to be played through a user operation, and after the destination device starts the screen projection function, the first video can be played according to the first playing information.
204. If the destination device sends a video continuous playing request to the source device, the destination device receives second playing information sent by the source device, wherein the second playing information comprises a playing address of a second video;
in this embodiment, when the first video has been played completely or is about to be played completely, the destination device may actively send a video resume request to the source device, and after the source device receives the video resume request sent by the destination device, the source device determines to continue playing the next video based on the video resume request, so that the source device sends second play information to the destination device, where the second play information includes a play address of the second video.
205. And the destination equipment plays the second video according to the second playing information.
In this embodiment, the destination device may obtain the playing address of the second video after parsing the second playing information, and then download the corresponding second video from the server using the playing address and play the second video. It is understood that, in practical applications, the second playing information may further include an address invalidation identifier, title information, and a definition identifier, which is not limited herein.
It can be understood that the display manner of the second video on the destination device is similar to the display manner of the first video, and is not described herein again.
In the embodiment of the present application, another screen projection control method is introduced, and in the foregoing manner, in a process that a source device projects an episode to a destination device for playing, the destination device actively initiates a video resume request to the source device, and the source device reports a play address of a subsequent video to the destination device based on the video resume request, so that video resume can be realized without perception by a user, a flow of a play operation is simplified, signaling overhead between the source device and the destination device is reduced, and power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 11, in an optional embodiment provided by the embodiment of the present application, after the destination device plays the first video according to the first play information, the method for playing by screen projection may further include the following steps:
if the first playing information further comprises a playing address of a second video, the destination device plays the second video according to the playing address of the second video, wherein the second video is a video adjacent to the first video;
and if the first playing information does not comprise the playing address of the second video, the destination device executes the step of sending a video continuous playing request to the source device.
In this embodiment, a method for the destination device to actively request the source device for continued playing is introduced. If the first playing information further includes the playing address of the second video, then when the first video has been played completely or is about to be played completely, the second video may be directly obtained based on the playing address of the second video, and the second video is played next.
Illustratively, if the first playing information includes a playing address of the first video and a playing address of the second video, the data structure of the first playing information may be represented as follows:
- - Episode 1
- - - Definition 1 (URL 1)
- - Episode 2
- - - Definition 1 (URL 2)
Here, episode 1 represents the first video, definition 1 represents the definition (e.g., high definition) of both the first video and the second video, URL 1 represents the playing address of the first video, episode 2 represents the second video, and URL 2 represents the playing address of the second video.
If the first playing information does not include the playing address of the second video, the playing address of the second video needs to be requested from the source end device when the first video has been played completely or is about to be played completely.
In the embodiment of the present application, a method for actively requesting a source device for continuous playing by a target device is provided, and in this way, the source device does not need to initiate a polling request to the target device, so that signaling overhead between the source device and the target device is saved, and power consumption of the source device is saved.
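The decision the destination device makes when the first video is finishing can be written as a small branch. In the sketch below, `playByAddress` and `sendResumeRequest` are placeholder callbacks introduced only for illustration, not interfaces defined by this application.

```kotlin
// Decides how the destination device obtains the next episode when the current video
// is finishing. nextAddressInFirstPlayInfo is the play address of the second video if
// the first playing information already carried it, otherwise null.
fun continueToNextEpisode(
    nextAddressInFirstPlayInfo: String?,
    currentVideoId: String,
    playByAddress: (String) -> Unit,      // placeholder: acquire and play a video from its address
    sendResumeRequest: (String) -> Unit   // placeholder: video continuous playing request to the source device
) {
    if (nextAddressInFirstPlayInfo != null) {
        // The address is already available locally, so no extra signalling is needed.
        playByAddress(nextAddressInFirstPlayInfo)
    } else {
        // Ask the source device for the second playing information.
        sendResumeRequest(currentVideoId)
    }
}
```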
Optionally, on the basis of the embodiment corresponding to fig. 11, in an optional embodiment provided in this application embodiment, the playing, by the destination device, the first video according to the first playing information specifically includes the following steps:
if the first playing information further comprises an address invalidation identification, and the address invalidation identification meets the video transmission condition, the destination device acquires the first video according to the playing address of the first video included in the first playing information;
the target equipment plays the first video;
the destination device plays the first video according to the first play information, and may include the following steps:
if the first playing information further comprises an address failure identifier, and the address failure identifier does not meet the video transmission condition, the destination device sends an address retransmission request to the source device, so that the source device requests the server for the playing address of the first video, wherein the address retransmission request carries the video identifier of the first video;
the target device receives a play address of a first video sent by a source device;
the target equipment acquires a first video according to the playing address of the first video;
the destination device plays the first video.
In this embodiment, a processing manner for the case of playing address invalidation is introduced. The first playing information may further include an address invalidation identifier, so the destination device may further determine whether the address invalidation identifier satisfies the video transmission condition; for the determination manner, reference may be made to the foregoing embodiments, which is not repeated here.
Illustratively, if the playing address of the second video is also included in the first playing information, and each video has an address invalidation identifier, the data structure of the first playing information may be represented as follows:
- - Episode 1
- - - Definition 1 (URL 1, address invalidation identifier 1)
- - Episode 2
- - - Definition 1 (URL 2, address invalidation identifier 2)
Here, episode 1 represents the first video, definition 1 represents the definition of the first video and of the second video, URL 1 represents the playing address of the first video, and address invalidation identifier 1 represents the address invalidation identifier corresponding to the first video. Episode 2 represents the second video, URL 2 represents the playing address of the second video, and address invalidation identifier 2 represents the address invalidation identifier corresponding to the second video. In this example, the definition of the first video is the same as that of the second video; in practical applications, the definitions of the first video and the second video may differ, which is not limited herein.
And if the address failure identifier meets the video transmission condition, the destination device acquires the first video according to the playing address of the first video. If the address failure identifier does not meet the video transmission condition, the destination device cannot acquire the first video, and therefore, an address retransmission request is required to be sent to the source device, the address retransmission request carries the video identifier of the first video, the source device can know the playing requirement of the destination device according to the address retransmission request, and then requests the server for the playing address of the first video again, and the destination device can acquire the first video according to the playing address of the first video and play the first video.
In the embodiment of the application, a processing mode for the case of playing address invalidation is provided. In the above manner, the destination device needs to judge whether the address invalidation identifier satisfies the video transmission condition before using the playing address, since the playing address may become invalid after a period of time, in which case the playing address needs to be requested again through the source end device. By giving the playing address a limited validity period, the security of the playing link can be effectively improved and address abuse is reduced.
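A minimal sketch of the destination-side branch described above follows. `fetchVideo`, `renderVideo`, and `requestAddressRetransmission` are placeholder callbacks introduced only for illustration, and the validity check itself mirrors the one sketched earlier for the source device.

```kotlin
// Destination-side handling of a play address that carries an address invalidation identifier.
fun playFirstVideo(
    playAddress: String,
    addressStillValid: Boolean,                        // result of the invalidation-identifier check
    videoId: String,
    fetchVideo: (String) -> ByteArray,                 // placeholder: download the video from its play address
    renderVideo: (ByteArray) -> Unit,                  // placeholder: play the downloaded video
    requestAddressRetransmission: (String) -> String   // placeholder: ask the source device for a fresh address
) {
    val usableAddress = if (addressStillValid) {
        playAddress
    } else {
        // The address has expired: the retransmission request carries the video identifier.
        requestAddressRetransmission(videoId)
    }
    renderVideo(fetchVideo(usableAddress))
}
```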
Optionally, on the basis of the embodiment corresponding to fig. 11, in an optional embodiment provided in this application embodiment, the playing, by the destination device, the first video according to the first playing information specifically includes the following steps:
if the first playing information also comprises title information, the destination device plays the first video according to the first playing information and displays the title information;
and if the first playing information also comprises the definition identification, the destination device plays the first video corresponding to the definition identification according to the first playing information.
In this embodiment, a method for displaying title information and definition information by a destination device is introduced, that is, the first playing information may further include at least one of the title information and the definition identifier.
Illustratively, if the address invalidation identifier and the title information are also included in the first playing information, the data structure of the first playing information may be represented as follows:
- - Episode 1
- - - Definition 1 (URL 1, definition identifier 1, title information 1, address invalidation identifier 1)
- - - Definition 2 (URL 2, definition identifier 2, title information 1, address invalidation identifier 2)
- - Episode 2
- - - Definition 1 (URL 3, definition identifier 1, title information 2, address invalidation identifier 3)
- - - Definition 2 (URL 4, definition identifier 2, title information 2, address invalidation identifier 4)
Here, definition identifier 1 indicates one definition (e.g., standard definition) and definition identifier 2 indicates another definition (e.g., high definition). Episode 1 represents the first video and title information 1 represents the title of the first video; the first video has two definitions, URL 1 represents the playing address of the first video at one definition (e.g., standard definition), and address invalidation identifier 1 represents the address invalidation identifier of the first video at that definition. URL 2 indicates the playing address of the first video at the other definition (e.g., high definition), and address invalidation identifier 2 indicates the address invalidation identifier of the first video at that definition. Similarly, episode 2 represents the second video and title information 2 represents the title of the second video; the second video also has two definitions, URL 3 represents the playing address of the second video at one definition (e.g., standard definition), and address invalidation identifier 3 represents the address invalidation identifier of the second video at that definition. URL 4 indicates the playing address of the second video at the other definition (e.g., high definition), and address invalidation identifier 4 indicates the address invalidation identifier of the second video at that definition.
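Continuing the illustrative data shapes used earlier, the destination device's choice of which variant to play and which title to show could look like the following sketch; all class and field names are assumptions.

```kotlin
// One definition variant of an episode as carried in the first playing information.
data class VariantEntry(
    val definitionId: String,
    val url: String,
    val title: String,
    val invalidationFlag: String
)

// Picks the variant matching the requested definition identifier, falling back to the
// first available variant when that definition is not offered for the episode.
fun selectVariant(variants: List<VariantEntry>, wantedDefinitionId: String): VariantEntry? =
    variants.firstOrNull { it.definitionId == wantedDefinitionId } ?: variants.firstOrNull()
```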
For easy understanding, please refer to fig. 13, which is a schematic diagram of another embodiment of playing the first video by the destination device in the embodiment of the present application, where I1 shown in (A) in fig. 13 is used to indicate the title information, and I2 shown in (B) in fig. 13 is used to indicate the title information and the definition information.
It should be noted that the first playing state information, the second playing state information, the third playing state information, and the fourth playing state information are the same as those described in the above embodiments, and therefore, the description thereof is omitted here.
In the embodiment of the application, a method for displaying title information and definition information by a target device is provided, and in the above manner, a user can check information related to a currently played video at the target device side without using a source device, so that convenience of a scheme is improved.
For convenience of description, a flow of the screen projection playing method provided by the present application will be described below with reference to fig. 14, please refer to fig. 14, where fig. 14 is a schematic flow diagram of a screen projection playing method in an embodiment of the present application, and as shown in the figure, specifically:
in step J1, when the destination device starts the screen-projection function, the source device sends first playing information to the destination device, where the first playing information includes a playing address of the first video, an address invalidation identifier, title information, and a definition identifier.
In step J2, the destination device determines whether the address invalidation flag satisfies the video transmission condition, if so, the destination device performs step J3, and if not, the destination device performs step J4.
In step J3, the destination device acquires the first video according to the playing address of the first video, and plays the first video.
In step J4, the destination device sends an address retransmission request to the source device, and the source device feeds back the playing address of the first video to the destination device again.
In step J5, when the first video has been played or is about to be played, the destination device may further query whether the first playing information includes the playing address of the second video, if the playing address of the second video is not included, the destination device performs step J6, and if the playing address of the second video is included, the destination device performs step J8.
In step J6, the destination device sends a video resume request to the source device.
In step J7, the source device sends second playing information to the destination device based on the video resume request, where the second playing information may include a playing address, an address invalidation identifier, title information, and a definition identifier of the second video.
In step J8, the destination device obtains the playing address of the second video (from the first playing information if step J5 found it there, or from the second playing information received in step J7), obtains the second video according to the playing address, and plays the second video.
In step J9, the user controls, through the source device, the destination device to pause or resume playing the first video, and the destination device actively sends playing state information to the source device, so that the source device knows the playing state of the video. It is understood that step J9 can be performed at any point between step J1 and step J8, and the timing of step J9 is not limited here.
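The J1 to J8 flow above can be summarised on the destination side roughly as follows. Every helper here is a placeholder name introduced for illustration, error handling is omitted, and the play state pushes of step J9 are not shown.

```kotlin
// High-level sketch of the destination-device flow of fig. 14 (steps J1 to J8).
fun screenCastSession(
    receiveFirstPlayInfo: () -> Pair<String, String?>,  // J1: (address of first video, optional address of second video)
    addressValid: (String) -> Boolean,                   // J2: check of the address invalidation identifier
    requestRetransmission: () -> String,                 // J4: ask the source device for a fresh first-video address
    play: (String) -> Unit,                              // J3 / J8: acquire and play a video from its address
    sendResumeRequest: () -> String                      // J6 / J7: obtain the address of the second video
) {
    val (firstAddress, secondAddress) = receiveFirstPlayInfo()                                          // J1
    val usableFirstAddress = if (addressValid(firstAddress)) firstAddress else requestRetransmission()  // J2, J4
    play(usableFirstAddress)                                                                            // J3
    val nextAddress = secondAddress ?: sendResumeRequest()                                              // J5, J6, J7
    play(nextAddress)                                                                                   // J8
}
```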
Referring to fig. 15, fig. 15 is a schematic view of an embodiment of a screen projection control apparatus in an embodiment of the present application, and as shown in the figure, the screen projection control apparatus 30 includes:
a sending module 301, configured to send a screen projection instruction to a destination device when a source device plays a first video, where the screen projection instruction is used to instruct that the playing content of the source device is projected into the destination device for display;
the sending module 301 is further configured to send first playing information to the destination device when the destination device starts the screen projection function in response to the screen projection instruction, where the first playing information includes a playing address of the first video, and the first playing information is used to instruct the destination device to play the first video;
the sending module 301 is further configured to send second playing information to the destination device if a video continuous playing request sent by the destination device is received, where the second playing information includes a playing address of a second video, and the second playing information is used to instruct the destination device to play the second video.
In the embodiment of the application, in the process that the source device projects the episode to the destination device for playing, the destination device actively initiates the video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the destination device based on the video continuous playing request, so that the video continuous playing can be realized under the condition that a user does not sense, the playing operation flow is simplified, the signaling overhead between the source device and the destination device is reduced, and the power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application, the screen projection control device 30 further includes an obtaining module 302 and a receiving module 303;
the obtaining module 302 is configured to, when the source device plays the first video and before the screen projection instruction is sent to the destination device, obtain a video playing instruction for the first video through a video playing interface, where the video playing interface at least includes a video list, the first video belongs to any one of the videos in the video list, and the video playing instruction carries the video identifier of the first video;
the sending module 301 is further configured to send the video playing instruction to the server, where the video playing instruction is used to instruct the server to determine the playing address of the first video;
the receiving module 303 is configured to receive the playing address of the first video sent by the server.
In the embodiment of the application, through the manner, the user can actively select the played video at the source device side and feed the video back to the destination device, so that the convenience of operation is improved.
Alternatively, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application,
the receiving module 303 is specifically configured to receive an address to be played and an address invalidation identifier corresponding to the address to be played, where the address to be played corresponds to the first video, and the address invalidation identifier is sent by the server;
if the address failure identification meets the video transmission condition, determining that the address to be played is the playing address of the first video;
and if the address failure identifier does not meet the video transmission condition, sending an address retransmission request to the server so that the server sends the playing address of the first video to the source end device, wherein the address retransmission request carries the video identifier of the first video.
In the embodiment of the application, in the above manner, the source device needs to determine whether the address invalidation identifier satisfies the video transmission condition to determine the play address, so that the play address may be invalidated in a period of time, and in this case, the source device needs to request the play address again, and by setting the timeliness of the play address, the security of the play link can be effectively improved, and the address abuse situation is reduced.
Alternatively, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application,
the obtaining module 302 is further configured to, when the source device plays the first video and before the screen projection instruction is sent to the destination device, obtain a video playing instruction for the first video through a video playing interface, where the video playing interface at least includes a video list, the first video belongs to any one of the videos in the video list, and the video playing instruction carries the video identifier of the first video and the definition identifier of the first video;
the sending module 301 is further configured to send the video playing instruction to the server, where the video playing instruction is used to instruct the server to determine the playing address of the first video;
the receiving module 303 is further configured to receive the playing address of the first video sent by the server.
In the embodiment of the application, through the above manner, the user can also select the definition of the played video on the source end device according to the requirement, and finally the target device presents the video with the corresponding definition, so that the flexibility of video playing is increased, and the video playing effect is improved.
Alternatively, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application,
the receiving module 303 is further configured to, after the first playing information is sent to the destination device in response to the screen projection instruction, if the destination device determines that the first video is played successfully, receive the first playing state information sent by the destination device and display the first playing state information;
the receiving module 303 is further configured to, after the first playing information is sent to the destination device in response to the screen projection instruction, if the destination device determines that playing of the first video fails, receive the second playing state information sent by the destination device and display the second playing state information, where the second playing state information includes a playing error code.
In the embodiment of the application, in the above manner, no matter the first video is played successfully or unsuccessfully, the destination device actively feeds back the playing state information to the source device, and the source device does not need to initiate a polling request to the destination device, so that signaling overhead between the source device and the destination device is saved, and power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application, the screen projection control device 30 further includes a display module 304;
the sending module 301 is further configured to, after the first playing state information sent by the destination device has been received and displayed, send a play pause instruction to the destination device, so that the destination device pauses playing the first video in response to the play pause instruction;
the receiving module 303 is further configured to receive third playing state information sent by the destination device, where the third playing state information includes at least one of play pause prompt information, total video length information, and current video progress information;
the display module 304 is configured to display the third playing state information.
In the embodiment of the application, in the above manner, the video that is tentatively played by the destination device can be controlled by the source device, and the destination device can actively feed back a corresponding playing state to the source device, so that the feasibility of the scheme is increased.
Alternatively, on the basis of the embodiment corresponding to fig. 15, in another embodiment of the screen projection control device 30 provided in the embodiment of the present application,
the sending module 301 is further configured to send a play start instruction to the destination device after the third playing state information is displayed, so that the destination device continues to play the first video in response to the play start instruction;
the receiving module 303 is further configured to receive fourth playing state information sent by the destination device, where the fourth playing state information includes at least one of play start prompt information, total video length information, and current video progress information;
the display module 304 is further configured to display the fourth playing state information.
In the embodiment of the application, the source device can control the destination device to continue playing the paused video, and the destination device can actively feed back the corresponding playing state to the source device, so that the feasibility of the scheme is increased.
Referring to fig. 16, fig. 16 is a schematic view of an embodiment of a screen projection playing apparatus in an embodiment of the present application, and as shown in the drawing, the screen projection playing apparatus 40 includes:
the receiving module 401 is configured to receive a screen projection instruction sent by a source end device when the source end device plays a first video, where the screen projection instruction is used to instruct that the playing content of the source end device is projected to a destination device for display;
the receiving module 401 is further configured to receive first playing information sent by the source end device when a screen projection function is started in response to the screen projection instruction, where the first playing information includes a playing address of the first video;
a playing module 402, configured to play the first video according to the first playing information;
the receiving module 401 is further configured to receive second playing information sent by the source end device if a video continuous playing request is sent to the source end device, where the second playing information includes a playing address of a second video;
the playing module 402 is further configured to play the second video according to the second playing information.
In the embodiment of the application, in the process that the source device projects the episode to the destination device for playing, the destination device actively initiates the video continuous playing request to the source device, and the source device reports the playing address of the subsequent video to the destination device based on the video continuous playing request, so that the video continuous playing can be realized under the condition that a user does not sense, the playing operation flow is simplified, the signaling overhead between the source device and the destination device is reduced, and the power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 16, in another embodiment of the screen projection playing apparatus 40 provided in the embodiment of the present application,
the playing module 402 is further configured to, after playing the first video according to the first playing information, if the first playing information further includes the playing address of the second video, play the second video according to the playing address of the second video, where the second video is a video adjacent to the first video;
the playing module 402 is further configured to, after playing the first video according to the first playing information, if the first playing information does not include the playing address of the second video, execute the step of sending a video continuous playing request to the source device.
In the embodiment of the application, in the above manner, the source device does not need to initiate a polling request to the destination device, so that signaling overhead between the source device and the destination device is saved, and power consumption of the source device is saved.
Optionally, on the basis of the embodiment corresponding to fig. 16, in another embodiment of the screen projection playing apparatus 40 provided in the embodiment of the present application,
the playing module 402 is specifically configured to, if the first playing information further includes an address invalidation identifier and the address invalidation identifier meets the video transmission condition, obtain the first video according to the playing address of the first video included in the first playing information;
playing the first video;
the playing module 402 is specifically configured to send an address retransmission request to the source device if the first playing information further includes an address invalidation identifier and the address invalidation identifier does not satisfy the video transmission condition, so that the source device requests the server for the playing address of the first video, where the address retransmission request carries the video identifier of the first video;
receiving a play address of a first video sent by source-end equipment;
acquiring a first video according to the playing address of the first video;
the first video is played.
In the embodiment of the application, in the above manner, the destination device needs to determine whether the address invalidation identifier satisfies the video transmission condition before using the playing address, since the playing address may become invalid after a period of time; in this case the playing address needs to be requested again through the source device. By giving the playing address a limited validity period, the security of the playing link can be effectively improved and address abuse is reduced.
Optionally, on the basis of the embodiment corresponding to fig. 16, in another embodiment of the screen projection playing apparatus 40 provided in the embodiment of the present application,
the playing module 402 is specifically configured to play the first video according to the first playing information and display the title information if the first playing information further includes the title information;
and if the first playing information also comprises the definition identification, playing the first video corresponding to the definition identification according to the first playing information.
In the embodiment of the application, through the above manner, the user can check the information related to the currently played video at the destination device side without using the source device, so that the convenience of the scheme is improved.
As shown in fig. 17, for convenience of description, only the parts related to the embodiments of the present application are shown; for technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The source device includes but is not limited to any terminal device such as a smartphone, a tablet computer, a notebook computer, a palmtop computer, or a personal computer. The following description takes the source device being a smartphone as an example:
fig. 17 is a block diagram illustrating a partial structure of a smartphone according to an embodiment of the present application. Referring to fig. 17, the smartphone includes: a Radio Frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (WiFi) module 570, a processor 580, and a power supply 590. Those skilled in the art will appreciate that the smartphone structure shown in fig. 17 does not constitute a limitation of the smartphone, and the smartphone may include more or fewer components than shown, combine some components, or use a different arrangement of components.
The following describes each component of the smartphone in detail with reference to fig. 17:
The RF circuit 510 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, the RF circuit 510 forwards it to the processor 580 for processing, and in addition transmits uplink data to the base station. In general, the RF circuit 510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 520 may be used to store software programs and modules, and the processor 580 executes the various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the smartphone, and the like. Further, the memory 520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the smartphone. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect touch operations of a user on or near the touch panel 531 (for example, operations performed by the user on or near the touch panel 531 with any suitable object or accessory such as a finger or a stylus) and drive a corresponding connection device according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 580, and can receive and execute commands sent by the processor 580. In addition, the touch panel 531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 531, the input unit 530 may include other input devices 532. Specifically, the other input devices 532 include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the user or information provided to the user, as well as various menus of the smartphone. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, the touch operation is transmitted to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in fig. 17 the touch panel 531 and the display panel 541 are two independent components implementing the input and output functions of the smartphone, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the smartphone.
The smartphone may also include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the smartphone moves to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the smartphone (such as horizontal and vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping). As for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor that can also be configured on the smartphone, further description is omitted here.
The audio circuit 560, a speaker 561, and a microphone 562 may provide an audio interface between the user and the smartphone. The audio circuit 560 may transmit the electrical signal converted from the received audio data to the speaker 561, and the speaker 561 converts the electrical signal into a sound signal for output; on the other hand, the microphone 562 converts a collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; the audio data is then output to the processor 580 for processing, and is either sent through the RF circuit 510 to, for example, another smartphone, or output to the memory 520 for further processing.
WiFi belongs to a short-distance wireless transmission technology. Through the WiFi module 570, the smartphone can help the user to receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 17 shows the WiFi module 570, it can be understood that it is not an essential component of the smartphone.
The processor 580 is the control center of the smartphone, connects various parts of the entire smartphone by using various interfaces and lines, and performs the various functions of the smartphone and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the smartphone as a whole. Optionally, the processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 580.
The smartphone also includes a power supply 590 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically connected to the processor 580 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the smartphone may further include a camera, a Bluetooth module, and the like, which are not described herein.
In this embodiment, the processor 580 included in the source device may perform the functions in the foregoing embodiments, and details are not described here again.
The embodiment of the present application further provides a destination device. As shown in fig. 18, for convenience of description, only the parts related to the embodiments of the present application are shown; for details of the specific technology that are not disclosed, please refer to the method part of the embodiments of the present application. The destination device includes but is not limited to a smart television or a screen projection device. The following takes the destination device being a smart television as an example:
fig. 18 is a block diagram illustrating a partial structure of a smart television provided in an embodiment of the present application. Referring to fig. 18, the smart television includes: an RF circuit 610, a memory 620, an input unit 630, a display unit 640, an audio circuit 650, a WiFi module 660, a processor 670, and a power supply 680. Those skilled in the art will appreciate that the smart television structure shown in fig. 18 does not constitute a limitation of the smart television, and the smart television may include more or fewer components than shown, combine some components, or use a different arrangement of components.
The following specifically describes each component of the smart television with reference to fig. 18:
In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, an LNA, a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM, GPRS, CDMA, WCDMA, LTE, email, SMS, and the like.
The memory 620 may be used to store software programs and modules, and the processor 670 executes the various functional applications and data processing of the smart television by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 620 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the smart television. In particular, the input unit 630 may include other input devices 631. Specifically, the other input devices 631 include, but are not limited to, one or more of a remote control, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by or provided to the user, as well as various menus of the smart television. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The audio circuit 650, a speaker 651, and a microphone 652 may provide an audio interface between the user and the smart television. The audio circuit 650 may transmit the electrical signal converted from the received audio data to the speaker 651, and the speaker 651 converts the electrical signal into a sound signal for output; on the other hand, the microphone 652 converts a collected sound signal into an electrical signal, which is received by the audio circuit 650 and converted into audio data; the audio data is then output to the processor 670 for processing, and is either transmitted through the RF circuit 610 to, for example, another smart television, or output to the memory 620 for further processing.
WiFi belongs to a short-distance wireless transmission technology. Through the WiFi module 660, the smart television can help the user to receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 18 shows the WiFi module 660, it can be understood that it is not an essential component of the smart television.
The processor 670 is the control center of the smart television, connects various parts of the entire smart television by using various interfaces and lines, and performs the various functions of the smart television and processes data by running or executing the software programs and/or modules stored in the memory 620 and calling the data stored in the memory 620, thereby monitoring the smart television as a whole. Optionally, the processor 670 may include one or more processing units; preferably, the processor 670 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 670.
The smart television further includes a power supply 680 (such as a battery) for supplying power to the various components. Preferably, the power supply is logically connected to the processor 670 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the smart television may further include a camera, a Bluetooth module, and the like, which are not described herein.
In this embodiment, the processor 670 included in the destination device may perform the functions in the foregoing embodiments, which are not described herein again.
Embodiments of the present application also provide a computer-readable storage medium, which stores a computer program, and when the computer program runs on a computer, the computer program causes the computer to execute the steps executed by a source device in the method described in the foregoing embodiments, or causes the computer to execute the steps executed by a destination device in the method described in the foregoing embodiments.
Embodiments of the present application also provide a computer program product including a program, which, when run on a computer, causes the computer to perform the steps performed by the source device in the method described in the foregoing embodiments, or causes the computer to perform the steps performed by the destination device in the method described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.


Priority Applications (1)

Application Number: CN202010657216.0A (CN111741352A) | Priority Date: 2020-07-09 | Filing Date: 2020-07-09 | Title: Screen projection control method, screen projection playing method and related device


Publications (1)

Publication Number: CN111741352A | Publication Date: 2020-10-02

Family

ID=72655814

Family Applications (1)

Application Number: CN202010657216.0A | Status: Pending | Publication: CN111741352A (en) | Priority Date: 2020-07-09 | Filing Date: 2020-07-09

Country Status (1)

Country: CN (1) | Document: CN111741352A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102843366A (en) * | 2012-08-13 | 2012-12-26 | 北京百度网讯科技有限公司 | Network resource access permission control method and device
CN103763303A (en) * | 2013-12-20 | 2014-04-30 | 百度在线网络技术(北京)有限公司 | Method and device for drama series playing
CN103747326A (en) * | 2014-01-14 | 2014-04-23 | 华为技术有限公司 | Continuous playing method and device for multimedia file
US20190278472A1 (en) * | 2018-03-08 | 2019-09-12 | Canon Kabushiki Kaisha | Communication apparatus, communication method, and recording medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114679610A (en) * | 2020-12-24 | 2022-06-28 | 花瓣云科技有限公司 | Screen projection method, device and system for continuously playing video
CN113825007A (en) * | 2021-09-27 | 2021-12-21 | 海信视像科技股份有限公司 | Video playing method and device and display equipment
CN113918110A (en) * | 2021-12-13 | 2022-01-11 | 荣耀终端有限公司 | Screen projection interaction method, device, system, storage medium and product
CN114501089A (en) * | 2022-01-30 | 2022-05-13 | 深圳创维-Rgb电子有限公司 | Screen-casting call method, device, electronic device and storage medium
CN114501089B (en) * | 2022-01-30 | 2023-05-05 | 深圳创维-Rgb电子有限公司 | Screen-throwing communication method and device, electronic equipment and storage medium
CN115665469A (en) * | 2022-10-20 | 2023-01-31 | 湖南快乐阳光互动娱乐传媒有限公司 | Screen projection method and system
CN115665469B (en) * | 2022-10-20 | 2025-06-17 | 湖南快乐阳光互动娱乐传媒有限公司 | Screen projection method and system
WO2024244224A1 (en) * | 2023-06-02 | 2024-12-05 | 亿咖通(湖北)技术有限公司 | Video synchronous display method and apparatus, device and medium
WO2025123763A1 (en) * | 2023-12-15 | 2025-06-19 | 海信视像科技股份有限公司 | Display device and screen mirroring playback resuming method


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
REG | Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40030856)
RJ01 | Rejection of invention patent application after publication (Application publication date: 2020-10-02)

