CN116193176B - Screen projection method, device, equipment and storage medium - Google Patents

Screen projection method, device, equipment and storage medium

Info

Publication number
CN116193176B
Authority
CN
China
Prior art keywords
video stream
coordinate information
terminal device
view
virtual screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310151175.1A
Other languages
Chinese (zh)
Other versions
CN116193176A (en)
Inventor
黄启立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202310151175.1A
Publication of CN116193176A
Application granted
Publication of CN116193176B
Legal status: Active
Anticipated expiration

Abstract


The present disclosure provides a screen projection method, device, equipment and storage medium, which relate to the field of autonomous driving technology, and in particular to the field of vehicle networking technology. The specific implementation scheme is: obtaining a first video stream and a second video stream of a terminal device, wherein the first video stream is generated according to the view content of a first virtual screen in the terminal device, and the second video stream is generated according to the view content of a second virtual screen in the terminal device; superimposing the first video stream and the second video stream to obtain a composite video stream; and sending the composite video stream to a display device so that the display device displays the composite video stream.

Description

Screen projection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the technical field of autonomous driving, and in particular to the technical field of the Internet of Vehicles.
Background
As automobiles evolve, user demand for interconnecting the in-vehicle infotainment system (head unit) with a mobile phone keeps growing. For example, many users want to connect the head unit to a mobile phone to use a screen projection function, which projects the content displayed on a terminal device such as a mobile phone onto the in-vehicle display. Screen projection can enrich the vehicle's application ecosystem. Based on this, how to improve the screen projection function is a problem to be solved.
Disclosure of Invention
The present disclosure provides a screen projection method, apparatus, device, storage medium, and program product.
According to an aspect of the present disclosure, there is provided a screen projection method, including: acquiring a first video stream and a second video stream of a terminal device, wherein the first video stream is generated according to the view content of a first virtual screen in the terminal device, and the second video stream is generated according to the view content of a second virtual screen in the terminal device; superimposing the first video stream and the second video stream to obtain a composite video stream; and sending the composite video stream to a display device so that the display device presents the composite video stream.
According to another aspect of the present disclosure, there is provided a screen projection method, including: generating a first video stream according to view contents of the first virtual screen; generating a second video stream according to the view content of the second virtual screen; and transmitting the first video stream and the second video stream to a conversion device.
According to another aspect of the present disclosure, there is provided a screen projection method, including: receiving a composite video stream from a conversion device, wherein the composite video stream is obtained by superposing a first video stream and a second video stream, the first video stream is generated according to view contents of a first virtual screen in a terminal device, and the second video stream is generated according to view contents of a second virtual screen in the terminal device; and displaying the composite video stream.
According to another aspect of the present disclosure, there is provided a screen projection apparatus, including: an acquisition module configured to acquire a first video stream and a second video stream of a terminal device, wherein the first video stream is generated according to the view content of a first virtual screen in the terminal device and the second video stream is generated according to the view content of a second virtual screen in the terminal device; a superposition module configured to superimpose the first video stream and the second video stream to obtain a composite video stream; and a first sending module configured to send the composite video stream to a display device so that the display device can present the composite video stream.
According to another aspect of the present disclosure, there is provided a screen projection apparatus including: the first generation module is used for generating a first video stream according to the view content of the first virtual screen; the second generation module is used for generating a second video stream according to the view content of the second virtual screen; and a second sending module, configured to send the first video stream and the second video stream to a conversion device.
According to another aspect of the present disclosure, there is provided a screen projection apparatus including: the receiving module is used for receiving a synthesized video stream from the conversion equipment, wherein the synthesized video stream is obtained by superposing a first video stream and a second video stream, the first video stream is generated according to the view content of a first virtual screen in the terminal equipment, and the second video stream is generated according to the view content of a second virtual screen in the terminal equipment; and the display module is used for displaying the synthesized video stream.
Another aspect of the present disclosure provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods shown in the embodiments of the present disclosure.
According to another aspect of the disclosed embodiments, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the methods shown in the disclosed embodiments.
According to another aspect of the disclosed embodiments, there is provided a computer program product comprising a computer program/instruction, characterized in that the computer program/instruction, when executed by a processor, implements the steps of the method shown in the disclosed embodiments.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a system architecture of a method, apparatus, electronic device, and storage medium for screen projection according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a screen casting method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a screen projection method according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flowchart of a conversion device acquiring a first video stream and a second video stream of a terminal device according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of a method of acquiring a first video stream and a second video stream of a terminal device according to another embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of a method of obtaining a composite video stream according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a flow chart of a method of obtaining a composite video stream in accordance with an embodiment of the disclosure;
FIG. 10 schematically illustrates a schematic diagram of a screen projection method according to an embodiment of the present disclosure;
FIG. 11 schematically illustrates a block diagram of a screen-casting device according to an embodiment of the disclosure;
FIG. 12 schematically illustrates a block diagram of a screen-casting device according to another embodiment of the present disclosure;
FIG. 13 schematically illustrates a block diagram of a screen-casting device according to another embodiment of the present disclosure;
FIG. 14 schematically illustrates a block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The system architecture of the screen projection method and apparatus provided in the present disclosure will be described below with reference to fig. 1.
Fig. 1 is a schematic diagram of a system architecture of a screen projection method, apparatus, electronic device, and storage medium according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include a display device 110, a terminal device 120, and a conversion device 130. The display device 110 may be used to display picture data input into it, and the terminal device 120 may include a display screen for displaying a local display interface. The conversion device 130 may be, for example, a CarLife conversion box, where CarLife is an Internet of Vehicles standard.
The display device 110 may be various electronic devices having a display screen and supporting communication with other terminals, such as an in-vehicle infotainment device (also referred to as a car machine). Terminal device 120 may be a variety of electronic devices that have a display screen and support communication with other terminals, including but not limited to smartphones, tablets, laptop and desktop computers, and the like. Various communication client applications, such as a car networking application, a web browser application, a search class application, an instant messaging tool, etc., may be installed on the display device 110 and the terminal device 120.
A wired or wireless connection may be established between the display device 110 and the conversion device 130. The wired connection may include, for example, a USB (Universal Serial Bus) connection or an HDMI (High-Definition Multimedia Interface) connection, and the wireless connection may include, for example, a WLAN (Wireless Local Area Network) connection, an NFC (Near Field Communication) connection, a Bluetooth connection, and the like.
A wired or wireless connection may likewise be established between the terminal device 120 and the conversion device 130. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a Bluetooth connection, etc.
Interaction between the terminal device 120 and the display device 110 may be achieved through the conversion device 130. For example, the terminal device 120 may transmit the screen displayed on its display screen 121 to the conversion device 130 through its connection with the conversion device 130. The conversion device 130 may then transmit the screen from the terminal device 120 to the display device 110. After receiving the screen from the conversion device 130, the display device 110 may display it on its display screen 111 to realize screen projection. For another example, the user may perform operations such as touch control on the display device 110; the display device 110 may send operation information corresponding to those operations to the conversion device 130, and the conversion device 130 then forwards the operation information to the terminal device 120. The terminal device 120 may perform a corresponding operation according to the operation information.
Those skilled in the art will appreciate that the numbers of terminal devices and screen projection devices in fig. 1 are merely illustrative. There may be any number of terminal devices and screen projection devices, as required by the implementation.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and application of the user's personal information all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
In the technical scheme of the disclosure, the authorization or consent of the user is obtained before the personal information of the user is obtained or acquired.
The screen projection method provided by the present disclosure will be described below in connection with fig. 2-4.
Fig. 2 schematically illustrates a flowchart of a screen projection method according to an embodiment of the present disclosure. The screen projection method can be applied to the above-described conversion apparatus, for example.
As shown in fig. 2, the screen projection method 200 includes acquiring a first video stream and a second video stream of a terminal device in operation S210.
According to an embodiment of the present disclosure, a first video stream is generated from view content of a first virtual screen in a terminal device, and a second video stream is generated from view content of a second virtual screen in the terminal device.
Then, in operation S220, the first video stream is superimposed with the second video stream to obtain a composite video stream.
According to an embodiment of the present disclosure, for example, a picture of the first video stream may be superimposed on top of the corresponding picture of the second video stream, or a picture of the second video stream may be superimposed on top of the corresponding picture of the first video stream, so as to obtain the composite video stream.
According to another embodiment of the present disclosure, the pictures of the first video stream and the pictures of the second video stream may also be displayed side by side, resulting in a composite video stream.
According to embodiments of the present disclosure, the resolutions of the first video stream and the second video stream may be the same or different.
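As an illustration of the two compositing options above, the following Java sketch superimposes one decoded frame on top of the other, or places the two frames side by side. The use of Android Bitmap/Canvas and the class name FrameCompositor are illustrative assumptions; the patent does not prescribe a particular drawing API.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

// Illustrative compositor for two decoded frames (not the patented implementation).
public final class FrameCompositor {

    // Draws a frame of the first video stream on top of the corresponding
    // frame of the second video stream.
    public static Bitmap overlay(Bitmap first, Bitmap second) {
        Bitmap out = second.copy(Bitmap.Config.ARGB_8888, true);
        Canvas canvas = new Canvas(out);
        canvas.drawBitmap(first, 0f, 0f, null);  // upper-left placement is arbitrary
        return out;
    }

    // Places the two frames side by side on one wider canvas.
    public static Bitmap sideBySide(Bitmap first, Bitmap second) {
        int height = Math.max(first.getHeight(), second.getHeight());
        Bitmap out = Bitmap.createBitmap(
                first.getWidth() + second.getWidth(), height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(out);
        canvas.drawBitmap(first, 0f, 0f, null);
        canvas.drawBitmap(second, first.getWidth(), 0f, null);
        return out;
    }
}
```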
In operation S230, the composite video stream is transmitted to the display device so that the display device presents the composite video stream.
According to embodiments of the present disclosure, a wired or wireless connection may be established in advance between the conversion device and the display device. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a bluetooth connection, etc. Based on this, the conversion device can transmit the composite video stream to the display device through the connection established in advance. The display device may play the composite video stream after receiving the composite video stream.
According to an embodiment of the present disclosure, the first video stream and the second video stream may correspond to the interfaces of two different applications. By transmitting the first video stream and the second video stream through the conversion device, the display device can display the interfaces of multiple applications at the same time, that is, a multi-channel video function is realized, which improves the user experience. For example, a news application interface may be displayed while the navigation application interface is displayed. In addition, only the conversion device needs to be updated for the display device to support the multi-channel video function; since the user does not need to update the display device itself, this saves both money and time.
Fig. 3 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure. The screen projection method can be applied to the terminal equipment.
As shown in fig. 3, the screen projection method 300 includes generating a first video stream according to view contents of a first virtual screen in operation S340.
According to the embodiment of the disclosure, the terminal device may pre-establish a first virtual screen (Display), then acquire view content of the first virtual screen, and then encode the view content to obtain a first video stream.
In operation S350, a second video stream is generated according to view contents of the second virtual screen.
According to an embodiment of the present disclosure, the terminal device may also pre-establish the second virtual screen, then acquire the view content of the second virtual screen, and then encode that view content to obtain the second video stream.
In operation S360, the first video stream and the second video stream are transmitted to the conversion device.
According to the embodiments of the present disclosure, a wired or wireless connection may be pre-established between the terminal device and the conversion device. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a bluetooth connection, etc. Based on this, the terminal device can transmit the first video stream and the second video stream to the conversion device through the connection established in advance.
According to an embodiment of the present disclosure, the terminal device may display both the first virtual screen and the second virtual screen. Alternatively, only one of the first virtual screen and the second virtual screen may be displayed while the other runs hidden in the background, or both virtual screens may be hidden in the background and not displayed at all.
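On an Android terminal device, one plausible way to implement "create a virtual screen, capture its view content, and encode it into a video stream" is to back a VirtualDisplay with a MediaCodec encoder's input Surface. The sketch below illustrates that pattern under that assumption; the bit rate, frame rate, and class name are illustrative choices, not taken from the patent.

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Illustrative pairing of a virtual screen with an H.264 encoder.
public final class VirtualScreenEncoder {
    private MediaCodec encoder;
    private VirtualDisplay virtualScreen;

    public void start(DisplayManager dm, String name, int width, int height, int dpi)
            throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface input = encoder.createInputSurface();
        encoder.start();

        // Views shown on this virtual screen are rendered onto the encoder's
        // input Surface, so draining the encoder yields the video stream.
        virtualScreen = dm.createVirtualDisplay(
                name, width, height, dpi, input,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }

    public MediaCodec encoder() { return encoder; }

    public VirtualDisplay virtualScreen() { return virtualScreen; }
}
```

Draining encoder() with dequeueOutputBuffer would then produce the encoded frames that operation S360 sends to the conversion device.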
Fig. 4 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure. The screen projection method can be applied to the display device.
As shown in fig. 4, the screen projection method 400 includes receiving a composite video stream from a conversion apparatus in operation S470.
According to embodiments of the present disclosure, the display device may receive the composite video stream, for example, through a connection with the conversion device.
In operation S480, a composite video stream is presented.
According to embodiments of the present disclosure, after receiving the composite video stream, the display device may, for example, play the composite video stream to present the composite video stream.
A method for the conversion device provided in the present disclosure to acquire the first video stream and the second video stream of the terminal device will be described below with reference to fig. 5.
Fig. 5 schematically illustrates a flowchart of a conversion device acquiring a first video stream and a second video stream of a terminal device according to an embodiment of the present disclosure.
As shown in fig. 5, the method 500 of the conversion device acquiring the first video stream and the second video stream of the terminal device includes, in operation S510, the terminal device creating a first transmission channel and a second transmission channel.
According to an embodiment of the present disclosure, the first transmission channel may include, for example, a Video Socket (Video channel).
In operation S520, the terminal device transmits the first video stream to the conversion device through the first transmission channel.
According to an embodiment of the present disclosure, the conversion device may be connected to the first transmission channel in advance, so that the terminal device can transmit the first video stream to the conversion device through the first transmission channel.
In operation S530, the conversion device acquires the first video stream from the terminal device through the first transmission channel.
In operation S540, the terminal device transmits the second video stream to the conversion device through the second transmission channel.
According to an embodiment of the present disclosure, the second transmission channel may include, for example, a Video Socket.
In operation S550, the conversion device acquires the second video stream from the terminal device through the second transmission channel.
According to an embodiment of the present disclosure, the conversion device may be connected to the second transmission channel in advance, so that the terminal device can transmit the second video stream to the conversion device through the second transmission channel.
According to an embodiment of the present disclosure, transmitting over separate first and second channels keeps the logic clear at the service layer and easy for developers to understand: the first transmission channel carries only the first video stream and the second transmission channel carries only the second video stream, so the two streams do not need to be distinguished from each other.
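A minimal sketch of the two-channel scheme follows, assuming each "Video Socket" transmission channel is an ordinary stream socket and that each encoded frame is written with a simple length prefix (an assumption, not specified in the patent).

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Illustrative two-channel sender: one stream socket per video stream.
public final class TwoChannelSender implements AutoCloseable {
    private final DataOutputStream firstChannel;   // carries only the first video stream
    private final DataOutputStream secondChannel;  // carries only the second video stream

    public TwoChannelSender(String host, int portA, int portB) throws IOException {
        firstChannel = new DataOutputStream(new Socket(host, portA).getOutputStream());
        secondChannel = new DataOutputStream(new Socket(host, portB).getOutputStream());
    }

    // Writes one encoded frame of the first video stream (length-prefixed).
    public void sendFirst(byte[] frame) throws IOException {
        firstChannel.writeInt(frame.length);
        firstChannel.write(frame);
        firstChannel.flush();
    }

    // Writes one encoded frame of the second video stream (length-prefixed).
    public void sendSecond(byte[] frame) throws IOException {
        secondChannel.writeInt(frame.length);
        secondChannel.write(frame);
        secondChannel.flush();
    }

    @Override
    public void close() throws IOException {
        firstChannel.close();
        secondChannel.close();
    }
}
```

Because each frame travels on the channel dedicated to its stream, the conversion device can treat whatever arrives on the first channel as the first video stream without inspecting the payload.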
Another method for the conversion device provided in the present disclosure to acquire the first video stream and the second video stream of the terminal device will be described below with reference to fig. 6.
Fig. 6 schematically illustrates a flowchart of a method of acquiring a first video stream and a second video stream of a terminal device according to another embodiment of the present disclosure.
As shown in fig. 6, the method 600 of acquiring the first video stream and the second video stream of the terminal device includes the terminal device creating a third transmission channel in operation S610.
According to the disclosed embodiments, the third transmission channel may comprise, for example, a Video Socket.
In operation S620, the terminal device generates original video data according to the first video stream and the second video stream.
According to embodiments of the present disclosure, for example, the first video stream and the second video stream may be encapsulated together to obtain the original video data. Wherein the original video data may comprise an identification indicating the location of the first video stream and the second video stream in the original video data. Illustratively, the identification may be recorded at the head of the original video data.
In operation S630, the terminal device transmits the original video data to the conversion device through the third transmission channel.
In operation S640, the conversion apparatus acquires original video data from the terminal apparatus through the third transmission channel.
In operation S650, the conversion apparatus extracts the first video stream and the second video stream from the original video data according to the identification in the original video data.
According to embodiments of the present disclosure, the conversion device may determine a position of the first video stream and a position of the second video stream in the original video data according to the identification, and extract the first video stream and the second video stream from the positions.
According to the embodiment of the disclosure, the first video stream and the second video stream are transmitted through one transmission channel, so that resources such as memory, a CPU (central processing unit) and the like can be saved.
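The single-channel variant can be sketched as a tiny framing layer: each encoded frame is prefixed with an identifier naming its virtual screen, which is how the conversion device splits the original video data back into the two streams. The one-byte identifier and the length prefix are illustrative assumptions; the patent only requires that an identifier indicate where each stream sits in the original video data.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative framing for the single-channel ("original video data") variant.
public final class MuxedChannel {
    public static final byte STREAM_FIRST = 1;   // frame belongs to the first virtual screen
    public static final byte STREAM_SECOND = 2;  // frame belongs to the second virtual screen

    public static final class Frame {
        public final byte streamId;
        public final byte[] data;
        Frame(byte streamId, byte[] data) {
            this.streamId = streamId;
            this.data = data;
        }
    }

    // Terminal-device side: tag each encoded frame with its stream identifier.
    public static void writeFrame(DataOutputStream out, byte streamId, byte[] frame)
            throws IOException {
        out.writeByte(streamId);
        out.writeInt(frame.length);
        out.write(frame);
        out.flush();
    }

    // Conversion-device side: read one frame and learn which stream it belongs to.
    public static Frame readFrame(DataInputStream in) throws IOException {
        byte streamId = in.readByte();
        byte[] data = new byte[in.readInt()];
        in.readFully(data);
        return new Frame(streamId, data);
    }
}
```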
The method of obtaining a composite video stream provided by the present disclosure will be described below in connection with fig. 7.
Fig. 7 schematically illustrates a flow chart of a method of obtaining a composite video stream according to an embodiment of the disclosure.
As shown in fig. 7, the method 720 of obtaining a composite video stream includes generating a first view component from a first video stream in operation S721.
According to embodiments of the present disclosure, the first view component may include, for example, a SurfaceView component. SurfaceView is a subclass of View that embeds a Surface dedicated to rendering, and a Surface is a handle to a native buffer managed by the screen compositor.
For example, the first video stream may be decoded and the decoded frames rendered into a SurfaceView component to obtain the first view component.
In operation S722, a second view component is generated from the second video stream.
According to embodiments of the present disclosure, the second view component may include, for example, a SurfaceView component. For example, the second video stream may be decoded and the decoded frames rendered into a SurfaceView component to obtain the second view component.
According to an embodiment of the present disclosure, operations S721 and S722 may be performed in either order.
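Assuming H.264 payloads and Android's MediaCodec on the conversion device, the following sketch shows the key step of rendering a received stream into a SurfaceView: configuring the decoder with the SurfaceView's Surface makes decoded frames appear directly in that view. Buffer feeding and error handling are omitted, and the class name is illustrative.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.SurfaceView;

// Illustrative decoder setup: decoded frames are rendered directly into the SurfaceView.
public final class StreamRenderer {

    public static MediaCodec startDecoder(SurfaceView view, int width, int height)
            throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        MediaCodec decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        // Binding the SurfaceView's Surface here is what makes each decoded
        // frame of the stream appear inside this view component.
        decoder.configure(format, view.getHolder().getSurface(), null, 0);
        decoder.start();
        return decoder;  // the caller feeds encoded input buffers to this codec
    }
}
```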
In operation S723, the first view component and the second view component are placed in a stacked manner, resulting in a target view.
According to embodiments of the present disclosure, for example, the first view component may be stacked on top of the second view component, or the second view component may be stacked on top of the first view component.
In operation S724, a composite video stream is determined from the target view.
According to embodiments of the present disclosure, for example, a target view may be captured at a predetermined frequency, resulting in a plurality of images. A composite video stream is then generated from the plurality of images. Wherein, the predetermined frequency can be set according to actual needs.
For example, the conversion device may intercept the view of the entire SurfaceView area at a frequency of 30 frames per second to obtain a corresponding image, and then soft-encode the image to obtain encoded binary data, i.e., a composite video stream. Wherein soft coding may be implemented based on ffmpeg, for example.
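A hedged sketch of the periodic screenshot step follows: the target view is captured at a fixed interval and each resulting image is handed to an encoder (the text describes soft-encoding with ffmpeg; the encoder itself is omitted here). The use of PixelCopy, the 30 fps interval, and capturing a single SurfaceView rather than the whole stacked window are illustrative assumptions.

```java
import android.graphics.Bitmap;
import android.os.Handler;
import android.view.PixelCopy;
import android.view.SurfaceView;

// Illustrative fixed-frequency capture of a SurfaceView's content.
public final class TargetViewCapturer {
    private static final long INTERVAL_MS = 1000 / 30;  // the predetermined frequency

    public interface FrameSink {
        void onFrame(Bitmap frame);  // e.g. hand the screenshot to an encoder
    }

    public static void start(final SurfaceView target, final Handler handler,
                             final FrameSink sink) {
        Runnable capture = new Runnable() {
            @Override
            public void run() {
                final Bitmap frame = Bitmap.createBitmap(
                        target.getWidth(), target.getHeight(), Bitmap.Config.ARGB_8888);
                PixelCopy.request(target, frame, copyResult -> {
                    if (copyResult == PixelCopy.SUCCESS) {
                        sink.onFrame(frame);
                    }
                }, handler);
                handler.postDelayed(this, INTERVAL_MS);  // schedule the next screenshot
            }
        };
        handler.post(capture);
    }
}
```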
Fig. 8 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure.
As shown in fig. 8, the screen projection method 800 may further include determining coordinate information of a control operation in response to receiving a control instruction for the composite video stream at operation S880.
According to embodiments of the present disclosure, coordinate information may be used to represent the location of a control operation. The user may input control instructions for the composite video stream to the display device. For example, a user may perform a touch operation on the display device, generate a corresponding control instruction, and obtain coordinate information corresponding to the touch operation. The coordinate information may include, for example, (x, y), where x may be a horizontal axis coordinate and y may be a vertical axis coordinate.
In operation S890, the display apparatus transmits the coordinate information to the conversion apparatus.
In operation S8100, the conversion device transmits the coordinate information to the terminal device.
In operation S8110, the terminal device receives the coordinate information from the conversion device, and performs an operation corresponding to the first virtual screen or the second virtual screen according to the coordinate information.
According to embodiments of the present disclosure, operations may include, for example, application startup or shutdown, function startup or shutdown, system settings, and so forth.
For example, the position indicated by the coordinate information may be a start button of the application a, and an operation of triggering the button may be performed accordingly to start the application a.
According to another embodiment of the present disclosure, the conversion apparatus may further set a manipulation area and a content area in the target view. The first view component or the second view component is then placed in the content area. Next, in response to receiving the coordinate information from the display device, the coordinate information may be transmitted to the terminal device in a case where the coordinate information matches the content area. And when the coordinate information is matched with the control area, adjusting the position of the content area according to the coordinate information. For example, if the location indicated by the coordinate information is located within the content area, it is determined that the coordinate information matches the content area, otherwise it is determined that the coordinate information does not match the content area.
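The coordinate routing just described can be sketched as a simple hit test, assuming the manipulation area and content area are tracked as rectangles in the target view's coordinate space. The TouchRouter name, the TerminalLink callback, and the lateral-drag handling are illustrative assumptions, not the patented implementation.

```java
import android.graphics.Rect;

// Illustrative routing of received coordinates between the two areas of the target view.
public final class TouchRouter {
    private final Rect manipulationArea;
    private final Rect contentArea;

    public TouchRouter(Rect manipulationArea, Rect contentArea) {
        this.manipulationArea = manipulationArea;
        this.contentArea = contentArea;
    }

    public interface TerminalLink {
        void sendCoordinates(int x, int y);  // forward to the terminal device
    }

    public void onCoordinates(int x, int y, TerminalLink terminal) {
        if (contentArea.contains(x, y)) {
            // Matches the content area: hand the coordinates to the terminal device.
            terminal.sendCoordinates(x, y);
        } else if (manipulationArea.contains(x, y)) {
            // Matches the manipulation area: consume locally and move the content area.
            int dx = x - contentArea.centerX();
            contentArea.offset(dx, 0);  // simplistic lateral drag handling
        }
        // Coordinates outside both areas are ignored in this sketch.
    }
}
```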
The composite video stream shown above is further described in connection with the exemplary embodiment with reference to fig. 9. Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 9 schematically illustrates a flow chart of a method of obtaining a composite video stream according to an embodiment of the disclosure.
As shown in fig. 9, in this embodiment, the first video stream may be, for example, a vertical-screen video stream (a video stream whose horizontal resolution is smaller than its vertical resolution), and the second video stream may be a horizontal-screen video stream (a video stream whose vertical resolution is smaller than its horizontal resolution). Based on this, the conversion device may generate a first view component 901 from the first video stream and a second view component 902 from the second video stream. The first view component 901 may then be stacked on top of the second view component 902. The conversion device may also generate a manipulation area 903. When the user drags the manipulation area 903, the conversion device may adjust the position of the content area 904 according to the coordinate information of the drag operation. For example, if the user drags the manipulation area 903 laterally, the conversion device may move the content area 904 laterally accordingly.
According to another embodiment of the present disclosure, the conversion device may acquire the resolution of the display device in advance. Based on this, the terminal device can also acquire the resolution of the display device through the conversion device and generate the first virtual screen and the second virtual screen according to that resolution. For example, the vertical resolution of the first virtual screen may be set according to the vertical resolution of the display device, and the horizontal resolution of the first virtual screen may be set according to the video proportion corresponding to the first virtual screen and that vertical resolution. Similarly, the horizontal resolution of the second virtual screen may be set according to the horizontal resolution of the display device, and the vertical resolution of the second virtual screen may be set according to the video proportion corresponding to the second virtual screen and that horizontal resolution.
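The resolution derivation above amounts to simple arithmetic: fix one axis to the display device's resolution and derive the other from the video proportion. A sketch under that reading follows; the method names and the 9:16 example ratio are assumptions for illustration.

```java
// Illustrative derivation of the two virtual screens' resolutions.
public final class VirtualScreenSizes {

    // First (vertical-screen) virtual screen: match the display device's vertical
    // resolution and derive the horizontal resolution from the video proportion.
    public static int[] verticalScreenSize(int displayHeight, double videoRatio) {
        int width = (int) Math.round(displayHeight * videoRatio);  // e.g. ratio = 9.0 / 16.0
        return new int[] { width, displayHeight };
    }

    // Second (horizontal-screen) virtual screen: match the display device's horizontal
    // resolution and derive the vertical resolution from the video proportion.
    public static int[] horizontalScreenSize(int displayWidth, double videoRatio) {
        int height = (int) Math.round(displayWidth * videoRatio);  // e.g. ratio = 9.0 / 16.0
        return new int[] { displayWidth, height };
    }
}
```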
The screen projection method shown above is further described with reference to fig. 10 in conjunction with the specific embodiment. Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 10 schematically illustrates a schematic diagram of a screen projection method according to an embodiment of the present disclosure.
In fig. 10, it is shown that the conversion device and the display device may establish a wired connection in operation S1001. For example, the conversion device and the display device may be connected through USB.
In operation S1002, the conversion apparatus and the terminal apparatus may establish a wireless connection. For example, the terminal device may detect a bluetooth signal of the switching device, and connect with the switching device according to the bluetooth signal.
After the conversion device has successfully connected to both the display device and the terminal device, the conversion device notifies the terminal device to enable the multi-channel video function.
Then, in operation S1003, the terminal device may start transmitting a Video stream of the vertical screen, i.e., a first Video stream, through the Video Socket a channel.
Before transmission, the terminal device may create a vertical-screen virtual screen (Display) at the agreed resolution. Initially, an initial page is displayed on the virtual screen; this page may contain a number of icons, each corresponding to an application, from which the user can select a target application. After the user selects the target application, the interface of that application is displayed on the virtual screen.
The terminal device may then monitor page element changes on the vertical-screen virtual screen via MediaCodec, where MediaCodec is an audio/video encoding and decoding tool. Once a page element change is detected, the terminal device starts capturing the view content of the vertical-screen virtual screen and encoding it to generate a video stream. The terminal device then transmits the vertical-screen video stream through the Video Socket A channel.
Similarly, the terminal device creates a horizontal-screen virtual screen at the agreed resolution and monitors its page element changes via MediaCodec. Once a change is detected, it starts capturing the view content of the horizontal-screen virtual screen and encoding it to generate a video stream. Then, in operation S1004, the terminal device starts transmitting the horizontal-screen video stream, i.e., the second video stream, through the Video Socket B channel.
In operation S1005, after the conversion device establishes a connection with the terminal device, it may create two SurfaceView components: one for the vertical screen and one for the horizontal screen. The vertical-screen SurfaceView component is used to present the vertical-screen video stream, and the horizontal-screen SurfaceView component is used to present the horizontal-screen video stream. The vertical-screen SurfaceView component may then be stacked on top of the horizontal-screen SurfaceView component. After the vertical-screen video stream is received, it is decoded and rendered into the vertical-screen SurfaceView; after the horizontal-screen video stream is received, it is decoded and rendered into the horizontal-screen SurfaceView.
Next, the conversion device may capture the view of the entire SurfaceView area at a rate of 30 frames per second and obtain a video stream, i.e., the composite video stream, by soft-encoding the captured images with ffmpeg.
The conversion device may then transmit the composite video stream to the display device in real time in operation S1006.
In operation S1007, when the display device receives a touch signal, for example, a click signal or a slide signal, the display device may transmit real-time touch coordinates (x, y) to the conversion device.
After receiving the touch coordinates, the conversion device can determine whether the coordinates fall within the content area of the vertical screen or the content area of the horizontal screen. If so, in operation S1008 the coordinates are forwarded directly to the terminal device and are consumed and acted upon by the corresponding virtual screen in the terminal device: coordinates in the content area of the vertical screen are consumed by the vertical-screen virtual screen, and coordinates in the content area of the horizontal screen are consumed by the horizontal-screen virtual screen.
The conversion device may also provide a manipulation area in the vertical-screen SurfaceView component. Based on this, in operation S1009, if the received coordinates fall within the manipulation area, the conversion device consumes them itself and adjusts the position of the vertical-screen SurfaceView component according to the coordinates, so that the position of the vertical-screen content area changes in response to the touch.
The screen projection apparatus provided by the present disclosure will be described below with reference to fig. 11.
Fig. 11 schematically illustrates a block diagram of a screen-casting device according to an embodiment of the disclosure. The screen projection device can be applied to a conversion device.
As shown in fig. 11, the screen projection device 1100 includes an acquisition module 1110, a superposition module 1120, and a first transmission module 1130.
An acquiring module 1110, configured to acquire a first video stream and a second video stream of the terminal device, where the first video stream is generated according to view content of a first virtual screen in the terminal device, and the second video stream is generated according to view content of a second virtual screen in the terminal device.
And the superposition module 1120 is configured to superimpose the first video stream and the second video stream to obtain a composite video stream.
A first sending module 1130 is configured to send the composite video stream to a display device, so that the display device displays the composite video stream.
According to an embodiment of the present disclosure, the superposition module may include: the first component generating sub-module is used for generating a first view component according to the first video stream; a second component generating sub-module for generating a second view component from the second video stream; the stacking sub-module is used for placing the first view assembly and the second view assembly in a stacking manner to obtain a target view; and a synthesis submodule for determining a synthesized video stream according to the target view.
According to an embodiment of the present disclosure, the synthesis submodule may include: the screenshot unit is used for screenshot the target view at a preset frequency to obtain a plurality of images; and a video stream generating unit for generating a composite video stream from the plurality of images.
According to an embodiment of the present disclosure, the screen projection device 1100 may further include: the setting module is used for setting a control area and a content area in the target view; and a placement module for placing the first view component or the second view component in the content area.
According to an embodiment of the present disclosure, the screen projection device 1100 may further include: the forwarding module is used for responding to the received coordinate information from the display equipment and sending the coordinate information to the terminal equipment under the condition that the coordinate information is matched with the content area; and the adjusting module is used for adjusting the position of the content area according to the coordinate information under the condition that the coordinate information is matched with the control area.
According to an embodiment of the present disclosure, the acquiring module may include: the first acquisition submodule is used for acquiring a first video stream from the terminal equipment through a first transmission channel; and a second obtaining sub-module, configured to obtain a second video stream from the terminal device through a second transmission channel.
According to another embodiment of the present disclosure, the acquisition module may include: the third acquisition submodule is used for acquiring original video data from the terminal equipment through a third transmission channel; and an extraction sub-module for extracting the first video stream and the second video stream from the original video data according to the identification in the original video data, wherein the identification is used for indicating the positions of the first video stream and the second video stream in the original video data.
The screen projection apparatus provided by the present disclosure for the terminal device will be described below with reference to fig. 12.
Fig. 12 schematically illustrates a block diagram of a screen-casting device according to another embodiment of the present disclosure. The screen projection device can be applied to terminal equipment.
As shown in fig. 12, the screen projection device 1200 includes a first generation module 1210, a second generation module 1220, and a second transmission module 1230. The first generation module 1210 is configured to generate a first video stream according to view content of a first virtual screen.
The second generating module 1220 is configured to generate a second video stream according to the view content of the second virtual screen.
The second transmitting module 1230 is configured to transmit the first video stream and the second video stream to the conversion device.
According to an embodiment of the present disclosure, the second transmitting module may include: a first creation sub-module configured to create a first transmission channel and a second transmission channel; a first video stream transmitting sub-module configured to send the first video stream to the conversion device through the first transmission channel; and a second video stream transmitting sub-module configured to send the second video stream to the conversion device through the second transmission channel.
According to an embodiment of the present disclosure, the second transmitting module may include: a second creation sub-module configured to create a third transmission channel; an original video data generation sub-module configured to generate original video data according to the first video stream and the second video stream, wherein the original video data includes an identifier indicating the positions of the first video stream and the second video stream in the original video data; and a third video stream transmitting sub-module configured to send the original video data to the conversion device through the third transmission channel.
According to an embodiment of the present disclosure, the screen projection device 1200 may further include: the resolution obtaining module is used for obtaining the resolution of the display device through the conversion device; and the virtual screen generating module is used for generating a first virtual screen and a second virtual screen according to the resolution.
According to an embodiment of the present disclosure, the screen projection device 1200 may further include: the coordinate receiving module is used for receiving the coordinate information from the conversion equipment; and the execution module is used for executing the operation corresponding to the first virtual screen or the second virtual screen according to the coordinate information.
The screen projection device provided by the present disclosure will be described below with reference to fig. 13.
Fig. 13 schematically illustrates a block diagram of a screen-casting device according to another embodiment of the present disclosure. The screen projection device can be applied to a display device.
As shown in fig. 13, the screen projection device 1300 includes a receiving module 1310 and a presentation module 1320.
And a receiving module 1310, configured to receive a composite video stream from the conversion device, where the composite video stream is obtained by superimposing a first video stream and a second video stream, where the first video stream is generated according to view content of a first virtual screen in a terminal device, and the second video stream is generated according to view content of a second virtual screen in the terminal device.
And a display module 1320 for displaying the composite video stream.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 14 schematically illustrates a block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the apparatus 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the respective methods and processes described above, for example, the screen projection method. For example, in some embodiments, the screen projection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When a computer program is loaded into RAM 1403 and executed by computing unit 1401, one or more steps of the screen projection method described above may be performed. Alternatively, in other embodiments, computing unit 1401 may be configured to perform the screen projection method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (18)

1. A screen projection method, comprising: acquiring a first video stream and a second video stream of a terminal device, wherein the first video stream is generated according to the view content of a first virtual screen in the terminal device, and the second video stream is generated according to the view content of a second virtual screen in the terminal device; superimposing the first video stream and the second video stream to obtain a composite video stream, the composite video stream comprising a target view; and sending the composite video stream to a display device so that the display device presents the composite video stream; wherein the target view comprises a manipulation area and a content area, and the method further comprises: in response to receiving coordinate information from the display device, sending the coordinate information to the terminal device in a case where the coordinate information matches the content area; and adjusting the position of the content area according to the coordinate information in a case where the coordinate information matches the manipulation area.

2. The method according to claim 1, wherein superimposing the first video stream and the second video stream to obtain the composite video stream comprises: generating a first view component according to the first video stream; generating a second view component according to the second video stream; placing the first view component and the second view component in a stacked manner to obtain the target view; and determining the composite video stream according to the target view.

3. The method according to claim 2, wherein determining the composite video stream according to the target view comprises: taking screenshots of the target view at a predetermined frequency to obtain a plurality of images; and generating the composite video stream according to the plurality of images.

4. The method according to claim 2, further comprising: placing the first view component or the second view component in the content area.

5. The method according to claim 1, wherein acquiring the first video stream and the second video stream of the terminal device comprises: acquiring the first video stream from the terminal device through a first transmission channel; and acquiring the second video stream from the terminal device through a second transmission channel.

6. The method according to claim 1, wherein acquiring the first video stream and the second video stream of the terminal device comprises: acquiring original video data from the terminal device through a third transmission channel; and extracting the first video stream and the second video stream from the original video data according to an identifier in the original video data, wherein the identifier indicates the positions of the first video stream and the second video stream in the original video data.

7. A screen projection method, comprising: generating a first video stream according to the view content of a first virtual screen; generating a second video stream according to the view content of a second virtual screen; and sending the first video stream and the second video stream to a conversion device; wherein the method further comprises: receiving coordinate information from the conversion device; and performing an operation corresponding to the first virtual screen or the second virtual screen according to the coordinate information; wherein the conversion device superimposes the first video stream and the second video stream to obtain a composite video stream, the composite video stream comprises a target view, the target view comprises a manipulation area and a content area, and the coordinate information matches the content area.

8. The method according to claim 7, wherein sending the first video stream and the second video stream to the conversion device comprises: creating a first transmission channel and a second transmission channel; sending the first video stream to the conversion device through the first transmission channel; and sending the second video stream to the conversion device through the second transmission channel.

9. The method according to claim 7, wherein sending the first video stream and the second video stream to the conversion device comprises: creating a third transmission channel; generating original video data according to the first video stream and the second video stream, wherein the original video data comprises an identifier indicating the positions of the first video stream and the second video stream in the original video data; and sending the original video data to the conversion device through the third transmission channel.

10. The method according to claim 7, further comprising: acquiring the resolution of a display device through the conversion device; and generating the first virtual screen and the second virtual screen according to the resolution.

11. A screen projection method, comprising: receiving a composite video stream from a conversion device, wherein the composite video stream is obtained by superimposing a first video stream and a second video stream, the composite video stream comprises a target view, the first video stream is generated according to the view content of a first virtual screen in a terminal device, and the second video stream is generated according to the view content of a second virtual screen in the terminal device; and presenting the composite video stream; wherein the target view comprises a manipulation area and a content area; in response to received coordinate information, the conversion device sends the coordinate information to the terminal device in a case where the coordinate information matches the content area, and adjusts the position of the content area according to the coordinate information in a case where the coordinate information matches the manipulation area.

12. The method according to claim 11, further comprising: in response to receiving a control instruction for the composite video stream, determining coordinate information of a control operation; and sending the coordinate information to the conversion device.

13. A screen projection device, comprising: an acquisition module configured to acquire a first video stream and a second video stream of a terminal device, wherein the first video stream is generated according to the view content of a first virtual screen in the terminal device, and the second video stream is generated according to the view content of a second virtual screen in the terminal device; a superposition module configured to superimpose the first video stream and the second video stream to obtain a composite video stream, the composite video stream comprising a target view; and a first sending module configured to send the composite video stream to a display device so that the display device presents the composite video stream; wherein the target view comprises a manipulation area and a content area, and the screen projection device further comprises a module configured to: in response to receiving coordinate information from the display device, send the coordinate information to the terminal device in a case where the coordinate information matches the content area; and adjust the position of the content area according to the coordinate information in a case where the coordinate information matches the manipulation area.

14. A screen projection device, comprising:
A screen projection device, comprising:第一生成模块,用于根据第一虚拟屏的视图内容,生成第一视频流;A first generating module, used to generate a first video stream according to the view content of the first virtual screen;第二生成模块,用于根据第二虚拟屏的视图内容,生成第二视频流;以及A second generating module, configured to generate a second video stream according to the view content of the second virtual screen; and第二发送模块,用于向转换设备发送所述第一视频流和所述第二视频流;A second sending module, used for sending the first video stream and the second video stream to a conversion device;其中,所述投屏设备还包括模块,用于:The screen projection device further includes a module for:接收来自所述转换设备的坐标信息;以及receiving coordinate information from the conversion device; and根据所述坐标信息,执行与所述第一虚拟屏或所述第二虚拟屏对应的操作;According to the coordinate information, executing an operation corresponding to the first virtual screen or the second virtual screen;其中,所述转换设备将所述第一视频流与所述第二视频流相叠加,得到合成视频流,所述合成视频流包括目标视图,所述目标视图包括操控区域和内容区域,所述坐标信息与所述内容区域匹配。The conversion device superimposes the first video stream and the second video stream to obtain a composite video stream, wherein the composite video stream includes a target view, the target view includes a control area and a content area, and the coordinate information matches the content area.15.一种投屏设备,包括:15. A screen projection device, comprising:接收模块,用于接收来自转换设备的合成视频流,其中,合成视频流是通过将第一视频流与第二视频流相叠加得到的,所述合成视频流包括目标视图,所述第一视频流是根据终端设备中第一虚拟屏的视图内容生成的,所述第二视频流是根据所述终端设备中第二虚拟屏的视图内容生成的;以及a receiving module, configured to receive a composite video stream from a conversion device, wherein the composite video stream is obtained by superimposing a first video stream and a second video stream, the composite video stream includes a target view, the first video stream is generated according to a view content of a first virtual screen in a terminal device, and the second video stream is generated according to a view content of a second virtual screen in the terminal device; and展示模块,用于展示所述合成视频流;A display module, used for displaying the synthesized video stream;其中,所述目标视图包括操控区域和内容区域;所述转换设备响应于接收到的坐标信息,在所述坐标信息与所述内容区域匹配的情况下,将所述坐标信息发送至所述终端设备;以及在所述坐标信息与操控区域匹配的情况下,根据所述坐标信息,调整所述内容区域的位置。Wherein, the target view includes a control area and a content area; the conversion device responds to the received coordinate information, and when the coordinate information matches the content area, sends the coordinate information to the terminal device; and when the coordinate information matches the control area, adjusts the position of the content area according to the coordinate information.16.一种电子设备,包括:16. An electronic device comprising:至少一个处理器;以及at least one processor; and与所述至少一个处理器通信连接的存储器;其中,a memory communicatively connected to the at least one processor; wherein,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-11中任一项所述的方法。The memory stores instructions that can be executed by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 11.17.一种存储有计算机指令的非瞬时计算机可读存储介质,其中,所述计算机指令用于使所述计算机执行根据权利要求1-12中任一项所述的方法。17. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the method according to any one of claims 1 to 12.18.一种计算机程序产品,包括计算机程序/指令,其特征在于,该计算机程序/指令被处理器执行时实现权利要求1-12中任一项所述方法的步骤。18. 
A computer program product, comprising a computer program/instruction, characterized in that when the computer program/instruction is executed by a processor, the steps of the method according to any one of claims 1 to 12 are implemented.
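
The claims above describe the composition and touch-routing behaviour only at the level of method language. Purely as a non-authoritative illustration, the following Python sketch shows one way such logic could be organised on the conversion-device side: two view components are placed in a stacked manner inside a target view, the view is split into a control area and a content area, and coordinates reported by the display device are either forwarded to the terminal device or used to move the content area. All names in the sketch (Rect, TargetView, route_touch, send_to_terminal) are hypothetical and do not come from the patent; a real implementation would additionally capture the composed view at a predetermined frequency and encode the captured images into the composite video stream.

# Illustrative sketch only -- not part of the claims. It assumes the conversion device
# has already decoded the two incoming streams into per-frame images; all names below
# are hypothetical.
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


class TargetView:
    """Stacks a first and a second view component into one target view that is split
    into a control area (a bar used to drag the window) and a content area."""

    def __init__(self, width: int, height: int, bar_height: int = 80) -> None:
        self.control_area = Rect(0, 0, width, bar_height)
        self.content_area = Rect(0, bar_height, width, height - bar_height)

    def compose(self, first_frame: object, second_frame: object) -> dict:
        # Place the two view components in a stacked (layered) manner: the first
        # component fills the view, the second is drawn on top inside the content area.
        return {
            "layers": [first_frame, second_frame],
            "content_rect": self.content_area,
        }

    def move_content(self, dx: int, dy: int) -> None:
        # Adjust the position of the content area, e.g. while the control bar is dragged.
        self.content_area.x += dx
        self.content_area.y += dy


def route_touch(
    view: TargetView,
    point: Tuple[int, int],
    last_point: Tuple[int, int],
    send_to_terminal: Callable[[int, int], None],
) -> None:
    """Dispatch display-device coordinates: content-area touches are forwarded to the
    terminal device, control-area touches reposition the content area."""
    x, y = point
    if view.content_area.contains(x, y):
        send_to_terminal(x, y)
    elif view.control_area.contains(x, y):
        view.move_content(x - last_point[0], y - last_point[1])


if __name__ == "__main__":
    view = TargetView(width=1920, height=720)
    frame = view.compose(first_frame="map_frame", second_frame="music_frame")
    route_touch(view, point=(100, 40), last_point=(60, 40),
                send_to_terminal=lambda x, y: print("forward", x, y))
    print(frame["content_rect"], view.control_area)
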
CN202310151175.1A | 2023-02-13 | 2023-02-13 | Screen projection method, device, equipment and storage medium | Active | CN116193176B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310151175.1A | CN116193176B (en) | 2023-02-13 | 2023-02-13 | Screen projection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310151175.1A | CN116193176B (en) | 2023-02-13 | 2023-02-13 | Screen projection method, device, equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN116193176A (en) | 2023-05-30
CN116193176B (en) | 2024-11-19

Family

ID=86436230

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310151175.1A | Active | CN116193176B (en) | 2023-02-13 | 2023-02-13 | Screen projection method, device, equipment and storage medium

Country Status (1)

Country | Link
CN (1) | CN116193176B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112214186A (en) * | 2019-07-11 | 2021-01-12 | 上海博泰悦臻网络技术服务有限公司 | Information sharing method and vehicle-mounted terminal
CN114153542A (en) * | 2021-11-30 | 2022-03-08 | 阿波罗智联(北京)科技有限公司 | Screen projection method and device, electronic equipment and computer readable storage medium
CN115550498A (en) * | 2022-08-03 | 2022-12-30 | 阿波罗智联(北京)科技有限公司 | Screen projection method, device, equipment and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2016206289A (en) * | 2015-04-17 | 2016-12-08 | 三菱電機株式会社 | Vehicle-purposed display device
CN109992231B (en) * | 2019-03-28 | 2021-07-23 | 维沃移动通信有限公司 | Screen projection method and terminal
CN114006625B (en) * | 2019-08-26 | 2023-03-28 | 华为技术有限公司 | Split-screen display method and electronic equipment
CN111324327B (en) * | 2020-02-20 | 2022-03-25 | 华为技术有限公司 | Screen projection method and terminal equipment
CN111432070B (en) * | 2020-03-17 | 2022-04-08 | 阿波罗智联(北京)科技有限公司 | Application screen projection control method, device, equipment and medium
CN112118558B (en) * | 2020-06-30 | 2023-05-16 | 上汽通用五菱汽车股份有限公司 | Vehicle-mounted screen display method, vehicle and computer readable storage medium
CN112202967B (en) * | 2020-09-01 | 2021-08-03 | 武汉卡比特信息有限公司 | Split screen display method based on mobile phone interconnection
CN114173184B (en) * | 2020-09-10 | 2024-10-18 | 华为终端有限公司 | Screen projection method and electronic equipment
CN115209167A (en) * | 2021-04-13 | 2022-10-18 | 腾讯科技(深圳)有限公司 | Video data processing method and device and storage medium
CN113282260B (en) * | 2021-06-09 | 2024-04-23 | 深圳康佳电子科技有限公司 | Screen projection control method and device, intelligent terminal and computer readable storage medium
CN114035973A (en) * | 2021-10-08 | 2022-02-11 | 阿波罗智联(北京)科技有限公司 | Application program screen projection method, device, electronic device and storage medium
CN114884990B (en) * | 2022-05-06 | 2025-03-25 | 亿咖通(湖北)技术有限公司 | Screen projection method and device based on virtual screen
CN115098052B (en) * | 2022-06-15 | 2023-10-13 | 阿波罗智联(北京)科技有限公司 | Screen projection method, device, equipment and storage medium
CN115623243A (en) * | 2022-09-30 | 2023-01-17 | 海信视像科技股份有限公司 | Display device, terminal device and action following method

Also Published As

Publication number | Publication date
CN116193176A (en) | 2023-05-30

Similar Documents

Publication | Publication Date | Title
WO2021143182A1 (en) | Game processing method and apparatus, electronic device, and computer-readable storage medium
JP6368033B2 (en) | Terminal, server, and terminal control method
CN105900074A (en) | Method and device for screen sharing
CN110517214A (en) | Method and device for generating images
US9801146B2 (en) | Terminal and synchronization control method among terminals
US20240292089A1 (en) | Video sharing method and apparatus, electronic device, and storage medium
US11372658B2 (en) | Cross-device mulit-monitor setup for remote desktops via image scanning
US10887195B2 (en) | Computer system, remote control notification method and program
CN112040468A (en) | Method, computing device, and computer storage medium for vehicle interaction
US12271415B2 (en) | Method, apparatus, device, readable storage medium and product for media content processing
WO2023143299A1 (en) | Message display method and apparatus, device, and storage medium
CN102541499B (en) | The management process of local computer equipment and device
CN110928509B (en) | Display control method, display control device, storage medium, communication terminal
CN110178111B (en) | Image processing method and device for terminal
WO2021143310A1 (en) | Animation generation method and apparatus, electronic device, and computer-readable storage medium
JP2019102001A (en) | Program, information processing method, and information processing device
CN107851096A (en)For providing the user terminal apparatus and its control method of translation service
EP4170588B1 (en) | Video photographing method and apparatus, and device and storage medium
CN116193176B (en) | Screen projection method, device, equipment and storage medium
JP2023538825A (en) | Methods, devices, equipment and storage media for picture to video conversion
WO2024027819A1 (en) | Image processing method and apparatus, device, and storage medium
CN113641439B (en) | Text recognition and display method, device, electronic equipment and medium
CN115767469A (en) | Interaction system and method for vehicle-mounted terminal and mobile terminal, vehicle-mounted terminal and medium
CN114581294A (en) | Brow shape transformation method, device, client, server and storage medium
WO2023273084A1 (en) | Method and apparatus for transmitting picture data, device, and storage medium

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
