Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The system architecture of the screen projection method and apparatus provided in the present disclosure will be described below with reference to fig. 1.
Fig. 1 is a schematic diagram of a system architecture of a screen projection method, apparatus, electronic device, and storage medium according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include a display device 110, a terminal device 120, and a conversion device 130. The display device 110 may be used to display picture data input into the display device 110. The terminal device 120 may include a display screen for displaying a local display interface. The conversion device 130 may include, for example, a CarLife conversion box, where CarLife is an Internet of Vehicles standard.
The display device 110 may be various electronic devices having a display screen and supporting communication with other terminals, such as an in-vehicle infotainment device (also referred to as a car machine). The terminal device 120 may be various electronic devices that have a display screen and support communication with other terminals, including but not limited to smartphones, tablets, laptop computers, desktop computers, and the like. Various communication client applications, such as an Internet of Vehicles application, a web browser application, a search application, an instant messaging tool, and the like, may be installed on the display device 110 and the terminal device 120.
A wired or wireless connection may be established between the display device 110 and the conversion device 130. The wired connection may include, for example, a USB (Universal Serial Bus) connection or an HDMI (High-Definition Multimedia Interface) connection, and the wireless connection may include, for example, a WLAN (Wireless Local Area Network) connection, an NFC (Near Field Communication) connection, a Bluetooth connection, and the like.
A wired or wireless connection may be established between the terminal device 120 and the conversion device 130. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a Bluetooth connection, etc.
Interaction between the terminal device 120 and the display device 110 may be achieved through the conversion device 130. For example, the terminal device 120 may transmit the picture displayed on the display screen 121 to the conversion device 130 through its connection with the conversion device 130. The conversion device 130 may transmit the picture from the terminal device 120 to the display device 110. After receiving the picture from the conversion device 130, the display device 110 may display it on the display screen 111 of the display device 110, thereby realizing screen casting. For another example, the user may perform operations such as touch control on the display device 110, and the display device 110 may send operation information corresponding to those operations to the conversion device 130, which then sends the operation information to the terminal device 120. The terminal device 120 may perform a corresponding operation according to the operation information.
Those skilled in the art will appreciate that the numbers of terminal devices and screen-casting devices in fig. 1 are merely illustrative. There may be any number of terminal devices and screen-casting devices, as required by the implementation.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, application, and other processing of the user's personal information all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
In the technical solution of the present disclosure, the user's authorization or consent is obtained before the user's personal information is acquired.
The screen projection method provided by the present disclosure will be described below in connection with fig. 2-4.
Fig. 2 schematically illustrates a flowchart of a screen projection method according to an embodiment of the present disclosure. The screen projection method can be applied to the above-described conversion apparatus, for example.
As shown in fig. 2, the screen projection method 200 includes acquiring a first video stream and a second video stream of a terminal device in operation S210.
According to an embodiment of the present disclosure, a first video stream is generated from view content of a first virtual screen in a terminal device, and a second video stream is generated from view content of a second virtual screen in the terminal device.
Then, in operation S220, the first video stream is superimposed with the second video stream to obtain a composite video stream.
According to the embodiment of the disclosure, for example, a picture of the first video stream may be superimposed on an upper layer of the corresponding picture in the second video stream, or a picture of the second video stream may be superimposed on an upper layer of the corresponding picture in the first video stream, so as to obtain the composite video stream.
According to another embodiment of the present disclosure, the pictures of the first video stream and the pictures of the second video stream may also be displayed side by side, resulting in a composite video stream.
According to embodiments of the present disclosure, the resolutions of the first video stream and the second video stream may be the same or different.
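The two composition modes above (layered superposition and side-by-side placement) can be sketched as follows. This is an illustrative sketch only: frames are modeled as 2-D lists of pixel values rather than real decoded image buffers, and the `overlay` and `side_by_side` helper names are hypothetical, not part of any implementation described above.

```python
def overlay(base, top, x0=0, y0=0):
    """Superimpose `top` on the upper layer of `base` at offset (x0, y0)."""
    out = [row[:] for row in base]
    for dy, row in enumerate(top):
        for dx, pixel in enumerate(row):
            out[y0 + dy][x0 + dx] = pixel
    return out


def side_by_side(left, right):
    """Place corresponding rows of the two frames next to each other."""
    return [l + r for l, r in zip(left, right)]


# A 4x3 frame of the second stream (all zeros) and a smaller 2x2 frame
# of the first stream (all ones), composed both ways:
base = [[0] * 4 for _ in range(3)]
top = [[1] * 2 for _ in range(2)]

composited = overlay(base, top, x0=1, y0=1)   # first stream on the upper layer
paired = side_by_side(base, base)             # side-by-side placement
```

As the sketch shows, the two streams need not have the same resolution for the layered mode; the side-by-side mode simply concatenates corresponding rows.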
In operation S230, the composite video stream is transmitted to the display device so that the display device presents the composite video stream.
According to embodiments of the present disclosure, a wired or wireless connection may be established in advance between the conversion device and the display device. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a bluetooth connection, etc. Based on this, the conversion device can transmit the composite video stream to the display device through the connection established in advance. The display device may play the composite video stream after receiving the composite video stream.
According to an embodiment of the present disclosure, the first video stream and the second video stream may correspond to the interfaces of two applications, respectively. By transmitting the first video stream and the second video stream through the conversion device, the display device can display the interfaces of a plurality of applications at the same time, i.e., a multi-channel video function is realized, which improves the user experience. For example, a news application interface may be displayed while a navigation application interface is displayed. In addition, the display device can support the multi-channel video function simply by updating the conversion device. Since the user does not need to update the display device, money and time are saved.
Fig. 3 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure. The screen projection method can be applied to the terminal equipment.
As shown in fig. 3, the screen projection method 300 includes generating a first video stream according to view contents of a first virtual screen in operation S340.
According to the embodiment of the disclosure, the terminal device may pre-establish a first virtual screen (Display), then acquire view content of the first virtual screen, and then encode the view content to obtain a first video stream.
In operation S350, a second video stream is generated according to view contents of the second virtual screen.
According to the embodiment of the disclosure, the terminal device may further pre-establish the second virtual screen, then acquire the view content of the second virtual screen, and then encode the view content to obtain the second video stream.
In operation S360, the first video stream and the second video stream are transmitted to the conversion device.
According to the embodiments of the present disclosure, a wired or wireless connection may be pre-established between the terminal device and the conversion device. The wired connection may include, for example, a USB connection, an HDMI connection, etc., and the wireless connection may include, for example, a WLAN connection, an NFC connection, a bluetooth connection, etc. Based on this, the terminal device can transmit the first video stream and the second video stream to the conversion device through the connection established in advance.
According to the embodiment of the disclosure, the terminal device may display both the first virtual screen and the second virtual screen. Alternatively, only one of the first virtual screen and the second virtual screen may be displayed while the other is hidden in the background, or both may be hidden in the background and not displayed.
Fig. 4 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure. The screen projection method can be applied to the display device.
As shown in fig. 4, the screen projection method 400 includes receiving a composite video stream from a conversion apparatus in operation S470.
According to embodiments of the present disclosure, the display device may receive the composite video stream, for example, through a connection with the conversion device.
In operation S480, a composite video stream is presented.
According to embodiments of the present disclosure, after receiving the composite video stream, the display device may, for example, play the composite video stream to present the composite video stream.
A method for the conversion device provided in the present disclosure to acquire the first video stream and the second video stream of the terminal device will be described below with reference to fig. 5.
Fig. 5 schematically illustrates a flowchart of a conversion device acquiring a first video stream and a second video stream of a terminal device according to an embodiment of the present disclosure.
As shown in fig. 5, the method 500 of the conversion device acquiring the first video stream and the second video stream of the terminal device includes the terminal device creating a first transmission channel and a second transmission channel in operation S510.
According to an embodiment of the present disclosure, the first transmission channel may include, for example, a Video Socket (Video channel).
In operation S520, the terminal device transmits the first video stream to the conversion device through the first transmission channel.
According to an embodiment of the present disclosure, the conversion device may be connected to the first transmission channel in advance, for example, so that the terminal device may transmit the first video stream to the conversion device through the first transmission channel.
In operation S530, the conversion device acquires the first video stream from the terminal device through the first transmission channel.
In operation S540, the terminal device transmits the second video stream to the conversion device through the second transmission channel.
According to an embodiment of the present disclosure, the second transmission channel may include, for example, a Video Socket.
In operation S550, the conversion device acquires the second video stream from the terminal device through the second transmission channel.
According to an embodiment of the present disclosure, the conversion device may be connected to the second transmission channel in advance, for example, so that the terminal device may transmit the second video stream to the conversion device through the second transmission channel.
According to the embodiment of the disclosure, transmitting over the first transmission channel and the second transmission channel keeps the logic clear at the service layer and easy for developers to understand. Since the first video stream is carried by the first transmission channel and the second video stream is carried by the second transmission channel, the conversion device does not need to distinguish between the two video streams.
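A minimal sketch of the two-channel scheme, assuming in-process socket pairs stand in for the wired or wireless link between the terminal device and the conversion device; the channel names and payload bytes are illustrative only:

```python
import socket

# One socket pair per transmission channel: (terminal end, conversion end).
term_a, conv_a = socket.socketpair()   # first transmission channel
term_b, conv_b = socket.socketpair()   # second transmission channel

first_stream = b"\x01\x02\x03"   # stand-in for the encoded first video stream
second_stream = b"\x0a\x0b"      # stand-in for the encoded second video stream

# The terminal side writes each stream on its own channel (S520/S540).
term_a.sendall(first_stream)
term_b.sendall(second_stream)

# The conversion side reads each channel independently (S530/S550), so it
# never has to distinguish the two streams itself.
received_first = conv_a.recv(1024)
received_second = conv_b.recv(1024)
```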
Another method for the conversion device provided in the present disclosure to acquire the first video stream and the second video stream of the terminal device will be described below with reference to fig. 6.
Fig. 6 schematically illustrates a flowchart of a method of acquiring a first video stream and a second video stream of a terminal device according to another embodiment of the present disclosure.
As shown in fig. 6, the method 600 of acquiring the first video stream and the second video stream of the terminal device includes the terminal device creating a third transmission channel in operation S610.
According to the disclosed embodiments, the third transmission channel may comprise, for example, a Video Socket.
In operation S620, the terminal device generates original video data according to the first video stream and the second video stream.
According to embodiments of the present disclosure, for example, the first video stream and the second video stream may be encapsulated together to obtain the original video data. Wherein the original video data may comprise an identification indicating the location of the first video stream and the second video stream in the original video data. Illustratively, the identification may be recorded at the head of the original video data.
In operation S630, the terminal device transmits the original video data to the conversion device through the third transmission channel.
In operation S640, the conversion apparatus acquires original video data from the terminal apparatus through the third transmission channel.
In operation S650, the conversion apparatus extracts the first video stream and the second video stream from the original video data according to the identification in the original video data.
According to embodiments of the present disclosure, the conversion device may determine a position of the first video stream and a position of the second video stream in the original video data according to the identification, and extract the first video stream and the second video stream from the positions.
According to the embodiment of the disclosure, the first video stream and the second video stream are transmitted through one transmission channel, so that resources such as memory, a CPU (central processing unit) and the like can be saved.
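The single-channel encapsulation can be sketched as follows. The exact layout of the identification is not specified above, so this sketch assumes a hypothetical fixed header at the head of the original video data that records the lengths of the two streams, from which their positions follow:

```python
import struct

# Hypothetical identification: two big-endian 32-bit lengths at the head
# of the original video data.
HEADER = struct.Struct(">II")


def encapsulate(first, second):
    """Pack the two encoded streams into one block of original video data."""
    return HEADER.pack(len(first), len(second)) + first + second


def extract(original):
    """Recover the two streams from their positions given by the header."""
    n1, n2 = HEADER.unpack_from(original, 0)
    start = HEADER.size
    return original[start:start + n1], original[start + n1:start + n1 + n2]
```

A round trip through `encapsulate` and `extract` mirrors operations S620 and S650: the terminal device packs, the conversion device unpacks.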
The method of obtaining a composite video stream provided by the present disclosure will be described below in connection with fig. 7.
Fig. 7 schematically illustrates a flow chart of a method of obtaining a composite video stream according to an embodiment of the disclosure.
As shown in fig. 7, the method 720 of obtaining a composite video stream includes generating a first view component from a first video stream in operation S721.
According to embodiments of the present disclosure, the first view component may include, for example, a SurfaceView component. SurfaceView is a subclass of View and has embedded in it a dedicated Surface for rendering. A Surface is a handle to a native buffer managed by the screen display content compositor (screen compositor).
For example, the first video stream may be decoded, and the decoded video stream may be rendered into a SurfaceView component to obtain the first view component.
In operation S722, a second view component is generated from the second video stream.
According to embodiments of the present disclosure, the second view component may include, for example, a SurfaceView component. For example, the second video stream may be decoded, and the decoded video stream may be rendered into a SurfaceView component to obtain the second view component.
According to an embodiment of the present disclosure, operations S721 and S722 may be performed in any order.
In operation S723, the first view component and the second view component are placed in a stacked manner, resulting in a target view.
According to embodiments of the present disclosure, for example, the first view component may be stacked on top of the second view component, or the second view component may be stacked on top of the first view component.
In operation S724, a composite video stream is determined from the target view.
According to embodiments of the present disclosure, for example, a target view may be captured at a predetermined frequency, resulting in a plurality of images. A composite video stream is then generated from the plurality of images. Wherein, the predetermined frequency can be set according to actual needs.
For example, the conversion device may capture the view of the entire SurfaceView area at a frequency of 30 frames per second to obtain corresponding images, and then soft-encode the images to obtain encoded binary data, i.e., the composite video stream. Soft encoding may be implemented based on ffmpeg, for example.
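The capture cadence of operation S724 can be sketched as follows. Each "capture" is reduced to a timestamp, since a real implementation would grab the rendered SurfaceView contents and encode them; the helper name is illustrative:

```python
def capture_times(duration_s, fps):
    """Timestamps (in seconds) at which the target view is captured."""
    interval = 1.0 / fps
    return [i * interval for i in range(int(duration_s * fps))]


# At the predetermined frequency of 30 frames per second, one second of
# the target view yields 30 captured images to be soft-encoded.
frames = capture_times(1.0, 30)
```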
Fig. 8 schematically illustrates a flowchart of a screen projection method according to another embodiment of the present disclosure.
As shown in fig. 8, the screen projection method 800 may further include determining coordinate information of a control operation in response to receiving a control instruction for the composite video stream at operation S880.
According to embodiments of the present disclosure, coordinate information may be used to represent the location of a control operation. The user may input control instructions for the composite video stream to the display device. For example, a user may perform a touch operation on the display device, generate a corresponding control instruction, and obtain coordinate information corresponding to the touch operation. The coordinate information may include, for example, (x, y), where x may be a horizontal axis coordinate and y may be a vertical axis coordinate.
In operation S890, the display apparatus transmits the coordinate information to the conversion apparatus.
In operation S8100, the conversion device transmits the coordinate information to the terminal device.
In operation S8110, the terminal device receives the coordinate information from the conversion device, and performs an operation corresponding to the first virtual screen or the second virtual screen according to the coordinate information.
According to embodiments of the present disclosure, operations may include, for example, application startup or shutdown, function startup or shutdown, system settings, and so forth.
For example, the position indicated by the coordinate information may be a start button of the application a, and an operation of triggering the button may be performed accordingly to start the application a.
According to another embodiment of the present disclosure, the conversion apparatus may further set a manipulation area and a content area in the target view. The first view component or the second view component is then placed in the content area. Next, in response to receiving the coordinate information from the display device, the coordinate information may be transmitted to the terminal device in a case where the coordinate information matches the content area. And when the coordinate information is matched with the control area, adjusting the position of the content area according to the coordinate information. For example, if the location indicated by the coordinate information is located within the content area, it is determined that the coordinate information matches the content area, otherwise it is determined that the coordinate information does not match the content area.
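The dispatch decision described above can be sketched as follows, assuming hypothetical rectangles for the manipulation (control) area and the content area of the target view; the names, layout, and return labels are illustrative only:

```python
def in_rect(x, y, rect):
    """Whether point (x, y) lies inside rect = (left, top, width, height)."""
    left, top, width, height = rect
    return left <= x < left + width and top <= y < top + height


# Hypothetical layout: a control strip above a content area.
MANIPULATION_AREA = (0, 0, 400, 100)
CONTENT_AREA = (0, 100, 400, 500)


def dispatch(x, y):
    """Decide where received coordinate information is consumed."""
    if in_rect(x, y, CONTENT_AREA):
        return "forward-to-terminal"   # terminal device performs the operation
    if in_rect(x, y, MANIPULATION_AREA):
        return "adjust-content-area"   # conversion device moves the content area
    return "ignore"
```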
The composite video stream shown above is further described in connection with the exemplary embodiment with reference to fig. 9. Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 9 schematically illustrates a flow chart of a method of obtaining a composite video stream according to an embodiment of the disclosure.
As shown in fig. 9, in this embodiment, the first video stream may be, by way of example, a vertical-screen video stream (a video stream whose horizontal resolution is smaller than its vertical resolution), and the second video stream may be a horizontal-screen video stream (a video stream whose vertical resolution is smaller than its horizontal resolution). Based on this, the conversion device may generate a first view component 901 from the first video stream and a second view component 902 from the second video stream. The first view component 901 may then be stacked on top of the second view component 902. The conversion device may also generate a manipulation area 903. When the user drags the manipulation area 903, the conversion device may adjust the position of the content area 904 accordingly according to the coordinate information of the drag operation. For example, if the user drags the manipulation area 903 laterally, the conversion device may move the content area 904 laterally accordingly.
According to another embodiment of the present disclosure, the conversion device may acquire the resolution of the display device in advance. Based on this, the terminal device can also acquire the resolution of the display device through the conversion device, and generate the first virtual screen and the second virtual screen according to that resolution. For example, the vertical resolution of the first virtual screen may be set according to the vertical resolution of the display device, and the horizontal resolution of the first virtual screen may be set according to the video aspect ratio corresponding to the first virtual screen and that vertical resolution. Alternatively, the horizontal resolution of the second virtual screen may be set according to the horizontal resolution of the display device, and the vertical resolution of the second virtual screen may be set according to the video aspect ratio corresponding to the second virtual screen and that horizontal resolution.
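The resolution derivation above can be sketched as follows. The 9:16 and 16:9 aspect ratios and the 1920x720 display are assumptions chosen for illustration, not values given in the text:

```python
def vertical_virtual_screen(display_h, aspect_w_over_h):
    """Vertical resolution follows the display; horizontal follows the ratio."""
    return round(display_h * aspect_w_over_h), display_h


def horizontal_virtual_screen(display_w, aspect_w_over_h):
    """Horizontal resolution follows the display; vertical follows the ratio."""
    return display_w, round(display_w / aspect_w_over_h)


# For a hypothetical 1920x720 in-vehicle display:
first_screen = vertical_virtual_screen(720, 9 / 16)     # vertical-screen stream
second_screen = horizontal_virtual_screen(1920, 16 / 9)  # horizontal-screen stream
```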
The screen projection method shown above is further described with reference to fig. 10 in conjunction with the specific embodiment. Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 10 schematically illustrates a schematic diagram of a screen projection method according to an embodiment of the present disclosure.
In fig. 10, it is shown that the conversion device and the display device may establish a wired connection in operation S1001. For example, the conversion device and the display device may be connected through USB.
In operation S1002, the conversion device and the terminal device may establish a wireless connection. For example, the terminal device may detect a Bluetooth signal of the conversion device and connect with the conversion device according to the Bluetooth signal.
After the conversion device has successfully connected to both the display device and the terminal device, the conversion device notifies the terminal device to enable the multi-channel video function.
Then, in operation S1003, the terminal device may start transmitting a Video stream of the vertical screen, i.e., a first Video stream, through the Video Socket a channel.
Before transmission, the terminal device may create a vertical-screen virtual screen (Display) according to the agreed resolution. Initially, an initial page is displayed in the virtual screen; the initial page may contain a plurality of icons that correspond to a plurality of applications respectively, and the user may select a target application from among them. After the user selects the target application, the interface of the target application is displayed in the virtual screen.
The terminal device may then monitor page element changes of the vertical-screen virtual screen via MediaCodec, where MediaCodec is an audio/video encoding and decoding tool. After a page element change is detected, the terminal device starts to acquire the view content of the vertical-screen virtual screen and then encodes it to generate a video stream. The terminal device then transmits the generated vertical-screen video stream through the Video Socket A channel.
Similarly, the terminal device establishes a horizontal-screen virtual screen according to the agreed resolution. The terminal device may then monitor page element changes of the horizontal-screen virtual screen via MediaCodec. After a page element change is detected, it starts to acquire the view content of the horizontal-screen virtual screen and then encodes it to generate a video stream. Then, in operation S1004, transmission of the horizontal-screen video stream, i.e., the second video stream, is started through the Video Socket B channel.
In operation S1005, after the conversion device establishes a connection with the terminal device, two SurfaceView components may be created: a SurfaceView component for the vertical screen and a SurfaceView component for the horizontal screen. The vertical-screen SurfaceView component may be used to present the vertical-screen video stream, and the horizontal-screen SurfaceView component may be used to present the horizontal-screen video stream. The vertical-screen SurfaceView component may then be stacked over the horizontal-screen SurfaceView component. After the vertical-screen video stream is received, it is decoded and rendered into the vertical-screen SurfaceView; after the horizontal-screen video stream is received, it is decoded and rendered into the horizontal-screen SurfaceView.
Next, the conversion device may capture the view of the entire SurfaceView area at a rate of 30 frames per second and soft-encode the captured images through ffmpeg to obtain a video stream, i.e., the composite video stream.
The conversion device may then transmit the composite video stream to the display device in real time in operation S1006.
In operation S1007, when the display device receives a touch signal, for example, a click signal or a slide signal, the display device may transmit real-time touch coordinates (x, y) to the conversion device.
After receiving the touch coordinates, the conversion device can determine whether the coordinates belong to the content area of the vertical screen or the content area of the horizontal screen. If so, in operation S1008, the coordinates are handed directly to the terminal device and are consumed and reacted to directly by the corresponding virtual screen in the terminal device. Coordinates in the content area of the vertical screen are consumed by the vertical-screen virtual screen, and coordinates in the content area of the horizontal screen are consumed by the horizontal-screen virtual screen.
The conversion device may also provide a manipulation area in the vertical-screen SurfaceView component. Based on this, in operation S1009, if the received coordinates fall within the manipulation area, the conversion device consumes them itself and adjusts the position of the vertical-screen SurfaceView component according to the coordinates, so that the position of the vertical-screen content area changes with the touch.
The screen projection apparatus for the conversion device provided in the present disclosure will be described below with reference to fig. 11.
Fig. 11 schematically illustrates a block diagram of a screen projection device according to an embodiment of the disclosure. The screen projection device can be applied to a conversion device.
As shown in fig. 11, the screen projection device 1100 includes an acquisition module 1110, a superposition module 1120, and a first transmission module 1130.
An acquiring module 1110, configured to acquire a first video stream and a second video stream of the terminal device, where the first video stream is generated according to view content of a first virtual screen in the terminal device, and the second video stream is generated according to view content of a second virtual screen in the terminal device.
And the superposition module 1120 is configured to superimpose the first video stream and the second video stream to obtain a composite video stream.
A first sending module 1130 is configured to send the composite video stream to a display device, so that the display device displays the composite video stream.
According to an embodiment of the present disclosure, the superposition module may include: a first component generating sub-module for generating a first view component according to the first video stream; a second component generating sub-module for generating a second view component according to the second video stream; a stacking sub-module for placing the first view component and the second view component in a stacked manner to obtain a target view; and a synthesis sub-module for determining the composite video stream according to the target view.
According to an embodiment of the present disclosure, the synthesis submodule may include: the screenshot unit is used for screenshot the target view at a preset frequency to obtain a plurality of images; and a video stream generating unit for generating a composite video stream from the plurality of images.
According to an embodiment of the present disclosure, the screen projection device 1100 may further include: the setting module is used for setting a control area and a content area in the target view; and a placement module for placing the first view component or the second view component in the content area.
According to an embodiment of the present disclosure, the screen projection device 1100 may further include: the forwarding module is used for responding to the received coordinate information from the display equipment and sending the coordinate information to the terminal equipment under the condition that the coordinate information is matched with the content area; and the adjusting module is used for adjusting the position of the content area according to the coordinate information under the condition that the coordinate information is matched with the control area.
According to an embodiment of the present disclosure, the acquiring module may include: the first acquisition submodule is used for acquiring a first video stream from the terminal equipment through a first transmission channel; and a second obtaining sub-module, configured to obtain a second video stream from the terminal device through a second transmission channel.
According to another embodiment of the present disclosure, the acquiring module may include: a third acquiring sub-module for acquiring original video data from the terminal device through a third transmission channel; and an extraction sub-module for extracting the first video stream and the second video stream from the original video data according to an identifier in the original video data, wherein the identifier indicates the positions of the first video stream and the second video stream in the original video data.
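The extraction sub-module can be illustrated with a minimal sketch, assuming the original video data carries an identifier recording the offset and length of each embedded stream within a single payload; this exact layout is an assumption, not the disclosed format.

```python
def extract_streams(original):
    """Split original video data into the first and second video streams."""
    ident = original["identifier"]   # positions of both streams in the payload
    payload = original["payload"]
    first = payload[ident["first_offset"]:ident["first_offset"] + ident["first_len"]]
    second = payload[ident["second_offset"]:ident["second_offset"] + ident["second_len"]]
    return first, second

# Example: 4 bytes of the first stream followed by 3 bytes of the second.
packed = {
    "identifier": {"first_offset": 0, "first_len": 4,
                   "second_offset": 4, "second_len": 3},
    "payload": b"AAAABBB",
}
first_stream, second_stream = extract_streams(packed)
```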
Another screen projection device provided by the present disclosure will be described below with reference to fig. 12.
Fig. 12 schematically illustrates a block diagram of a screen projection device according to another embodiment of the present disclosure. The screen projection device can be applied to a terminal device.
As shown in fig. 12, the screen projection device 1200 includes a first generation module 1210, a second generation module 1220, and a second transmitting module 1230. The first generation module 1210 is configured to generate a first video stream according to the view content of a first virtual screen.
The second generation module 1220 is configured to generate a second video stream according to the view content of a second virtual screen.
The second transmitting module 1230 is configured to transmit the first video stream and the second video stream to the conversion device.
According to an embodiment of the present disclosure, the second transmitting module may include: a first creation sub-module for creating a first transmission channel and a second transmission channel; a first video stream transmitting sub-module for transmitting the first video stream to the conversion device through the first transmission channel; and a second video stream transmitting sub-module for transmitting the second video stream to the conversion device through the second transmission channel.
According to another embodiment of the present disclosure, the second transmitting module may include: a second creation sub-module for creating a third transmission channel; an original video data generating sub-module for generating original video data according to the first video stream and the second video stream, wherein the original video data includes an identifier indicating the positions of the first video stream and the second video stream in the original video data; and a third video stream transmitting sub-module for transmitting the original video data to the conversion device through the third transmission channel.
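On the terminal side, the original video data generating sub-module is the counterpart of the extraction described earlier: both streams are packed into one buffer and an identifier recording their positions is attached, so a single (third) transmission channel suffices. The dict layout below is an illustrative assumption.

```python
def pack_streams(first, second):
    """Generate original video data carrying both streams plus an identifier."""
    payload = first + second
    identifier = {"first_offset": 0, "first_len": len(first),
                  "second_offset": len(first), "second_len": len(second)}
    # the identifier lets the receiver locate each stream inside the payload
    return {"identifier": identifier, "payload": payload}

original = pack_streams(b"\x01\x02", b"\x03\x04\x05")
```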
According to an embodiment of the present disclosure, the screen projection device 1200 may further include: a resolution obtaining module for obtaining the resolution of the display device through the conversion device; and a virtual screen generating module for generating the first virtual screen and the second virtual screen according to the resolution.
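One plausible way to size the two virtual screens from the obtained resolution is to split the display width between them; the 80/20 split and the function name below are hypothetical choices for illustration, not dictated by the disclosure.

```python
def virtual_screen_sizes(display_w, display_h, content_ratio=0.8):
    """Derive (width, height) for two virtual screens from the display resolution."""
    first = (int(display_w * content_ratio), display_h)   # first virtual screen
    second = (display_w - first[0], display_h)            # second virtual screen
    return first, second

# Example: a 1920x720 car-machine display split 80/20 between the two screens.
first_vs, second_vs = virtual_screen_sizes(1920, 720)
```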
According to an embodiment of the present disclosure, the screen projection device 1200 may further include: a coordinate receiving module for receiving coordinate information from the conversion device; and an execution module for executing an operation corresponding to the first virtual screen or the second virtual screen according to the coordinate information.
The screen projection device provided by the present disclosure will be described below with reference to fig. 13.
Fig. 13 schematically illustrates a block diagram of a screen projection device according to another embodiment of the present disclosure. The screen projection device can be applied to a display device.
As shown in fig. 13, the screen projection device 1300 includes a receiving module 1310 and a display module 1320.
The receiving module 1310 is configured to receive a composite video stream from the conversion device, where the composite video stream is obtained by superimposing a first video stream and a second video stream, the first video stream is generated according to view content of a first virtual screen in a terminal device, and the second video stream is generated according to view content of a second virtual screen in the terminal device.
The display module 1320 is configured to display the composite video stream.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 14 schematically illustrates a block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. The RAM 1403 can also store various programs and data required for the operation of the device 1400. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the respective methods and processes described above, for example, the screen projection method. In some embodiments, the screen projection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the screen projection method described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the screen projection method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system intended to overcome the defects of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server ("VPS") services. The server may also be a server of a distributed system, or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.