CN116126176A - Interaction method and device - Google Patents

Interaction method and device

Info

Publication number
CN116126176A
Authority
CN
China
Prior art keywords
window
image
area
window area
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111349438.7A
Other languages
Chinese (zh)
Inventor
范振印
李斌飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd
Priority to CN202111349438.7A
Priority to PCT/CN2022/128921 (WO2023083052A1)
Publication of CN116126176A
Status: Pending

Abstract

The application relates to an interaction method and device, wherein the method comprises the following steps: a first device obtains a first image and position information of a window area in the first image, wherein the first image represents a picture currently displayed by the first device, and the window area represents an area in the first image that the first device shares and displays with a second device; the first device encodes the first image based on the position information of the window area to obtain a second image, so that the transparency of the area except the window area in the second image is greater than or equal to a preset value; the first device sends the second image to the second device; the second device receives and displays the second image. The interaction method and device can improve the operation smoothness of the electronic equipment during screen content sharing, thereby improving the user experience.

Description

Interaction method and device
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to an interaction method and apparatus.
Background
When a user browses appealing content on the display screen of one electronic device, the user may wish to share the content to other electronic devices for display; this is screen content sharing. The operating systems of the electronic devices that perform screen content sharing may be the same or different. For example, screen content sharing may be performed between a tablet computer running an Android system and a personal computer running a Windows system. When the screen content of the tablet computer is shared and displayed on the personal computer, the user can operate the content shared to the personal computer through input devices such as a touch screen and a stylus (for example, creating a window, moving or closing a window, and the like), and meanwhile the user can operate the tablet computer's applications, like other applications on the personal computer, through input devices such as a keyboard and a mouse.
Fig. 1 shows an interactive system architecture diagram in the related art. As shown in fig. 1, the interactive system includes a personal computer and a tablet computer, and the interactive system may be used to realize screen content sharing between the personal computer and the tablet computer. In a scenario in which the tablet computer is used as a virtual extension screen of the personal computer and the personal computer shares and displays a local window on the tablet computer, a data transmission channel is established between the personal computer and the tablet computer. The personal computer detects the local window by invoking an application window detection interface. When a change of the local window is detected, the personal computer captures the screen, encodes the captured image, and sends the encoded image to the tablet computer; each window to be shared is transmitted in a separate channel. After receiving the image, the tablet computer decodes and displays it, and creates a separate floating window for each window. When a user operates a floating window on the tablet computer, the tablet computer transmits the event corresponding to the user operation (such as moving, closing, scaling, and the like) and the coordinate position information to the personal computer, so as to reversely control the window displayed on the personal computer.
In the related art, each window needs a separate encoding and transmission channel, so the number of encoding channels increases with the number of windows. In addition, the tablet computer needs to create a separate floating window for each shared window, and the number of floating windows also keeps accumulating. This affects the performance of the personal computer and the tablet computer, such as functionality and latency, so the applicable scenarios are limited to cases with few shared windows. How to improve the running smoothness of the devices during screen content sharing is a problem to be solved.
Disclosure of Invention
In view of the above, an interaction method and device are provided, which can improve the operation smoothness of the electronic device during the sharing of screen content.
In a first aspect, embodiments of the present application provide an interaction method, the method including: acquiring a first image and position information of a window area in the first image, wherein the first image represents a picture currently displayed by a first device, and the window area represents an area in the first image in which the first device and a second device are in shared display; encoding the first image based on the position information of the window area to obtain a second image, so that the transparency of the area except the window area in the second image is larger than or equal to a preset value; and sending the second image.
In the embodiment of the application, the first image, which represents the picture currently displayed by the first device, is encoded to obtain the second image, so that the transparency of the region of the second image outside the region that the first device and the second device share and display is greater than or equal to the preset value. Window region sharing can therefore be realized by transmitting the second image between the first device and the second device and displaying the second image on the second device; a separate encoding channel does not need to be established for each window in the window region, and a floating window does not need to be created for each window in the window region, which reduces time delay and resource occupation, thereby improving the running smoothness of the electronic device during screen content sharing and improving user experience. For scenes with many windows in the window area, the smoothness of operation of the electronic equipment can be improved especially effectively, which enlarges the application scenarios of screen content sharing.
In a first possible implementation manner of the method according to the first aspect, the method further includes: and sending the position information of the window area.
In the embodiment of the application, the first device sends the position information of the window area to the second device, so that the second device can conveniently determine the reverse control area and the automatic control area, and further whether the user operation is reverse control on the first device or local control on the second device is determined, and user experience is improved.
In a second possible implementation manner of the method according to the first aspect or the first possible implementation manner of the first aspect, the acquiring of the first image and the position information of the window area in the first image includes: acquiring the first image and the position information of the window area in response to a window control instruction, wherein the window control instruction is used for controlling the window area displayed on the first device.
In the embodiment of the application, when the window area is controlled to change, the first image is acquired again, so that a new second image is obtained, and the update of the second image on the second device can be realized.
In a third possible implementation manner of the method according to the second possible implementation manner of the first aspect, the method further includes: acquiring first coordinate information and a first event type in response to a first user operation, wherein the first user operation represents a user operation performed on the window area displayed on the first device; generating the window control instruction based on the first coordinate information and a first event type; and controlling the window area displayed on the first device according to the window control instruction.
In the embodiment of the application, the user can operate the shared window area on the first device, respond to the operation of the user, change the window area and correspondingly update the second image.
In a fourth possible implementation manner of the method according to the second possible implementation manner of the first aspect, the method further includes: in response to receiving a countercontrol message, obtaining second coordinate information and a second event type from the countercontrol message, the countercontrol message being generated according to a second user operation, the second user operation representing a user operation performed on the window area displayed on the second device; performing coordinate transformation on the second coordinate information; generating the window control instruction based on the transformed second coordinate information and the second event type; and controlling the window area displayed on the first device according to the window control instruction.
In the embodiment of the application, the user can operate the shared window area on the second device, respond to the operation of the user, change the window area and correspondingly update the second image.
In one possible implementation manner, the acquiring the first image and the position information of the window area in the first image includes: and acquiring the first image and the position information of the window area in response to a screen content sharing operation.
In one possible implementation, the window area includes one or more windows.
In the embodiment of the application, sharing of one or more windows can be achieved by transmitting the second image, so that the application scene is enlarged.
In a second aspect, embodiments of the present application provide an interaction method, the method including: receiving a second image, wherein the transparency of an area except a window area in the second image is greater than or equal to a preset value, and the window area represents an area shared with a second device in a picture currently displayed by a first device; and displaying the second image on the second device.
In a first possible implementation manner of the method according to the second aspect, the method further includes: receiving position information of the window area; determining a countercontrol area in the second device according to the position information of the window area; acquiring second coordinate information and a second event type in response to a second user operation for the countercontrol area; generating a countercontrol message based on the second coordinate information and the second event type; and returning the countercontrol message to control the first device.
In a second possible implementation manner of the method according to the first possible implementation manner of the second aspect, the method further includes: acquiring third coordinate information and a third event type in response to a third user operation for an autonomous region, the autonomous region representing a region other than the countercontrol region in a screen currently displayed by the second device; and controlling the second device based on the third coordinate information and the third event type.
In a third possible implementation manner of the method according to the second aspect or any one of the possible implementation manners of the second aspect above, the window area includes one or more windows.
In a third aspect, embodiments of the present application provide an interaction device, the device including:
the first acquisition module is used for acquiring a first image and position information of a window area in the first image, wherein the first image represents a picture currently displayed by first equipment, and the window area represents an area in the first image in which the first equipment and second equipment are in shared display;
the encoding module is used for encoding the first image acquired by the first acquisition module based on the position information of the window area acquired by the first acquisition module to obtain a second image, so that the transparency of the area except the window area in the second image is larger than or equal to a preset value;
And the first sending module is used for sending the second image obtained by the encoding module.
In one possible implementation, the apparatus further includes:
and the second sending module is used for sending the position information of the window area.
In one possible implementation manner, the first obtaining module is further configured to:
and responding to a window control instruction, acquiring the first image and the position information of the window area, wherein the window control instruction is used for controlling the window area displayed on the first device.
In one possible implementation, the apparatus further includes:
a second acquisition module configured to acquire first coordinate information and a first event type in response to a first user operation, the first user operation representing a user operation performed on the window area displayed on the first device;
the first generation module is used for generating the window control instruction based on the first coordinate information and the first event type;
and the first control module is used for controlling the window area displayed on the first device according to the window control instruction.
In one possible implementation, the apparatus further includes:
The third acquisition module is used for responding to the received countercontrol message, acquiring second coordinate information and a second event type from the countercontrol message, wherein the countercontrol message is generated according to a second user operation, and the second user operation represents a user operation performed on the window area displayed on the second equipment;
the transformation module is used for carrying out coordinate transformation on the second coordinate information;
the second generation module is used for generating the window control instruction based on the transformed second coordinate information and the second event type;
and the second control module is used for controlling the window area displayed on the first device according to the window control instruction.
In one possible implementation manner, the first obtaining module is further configured to:
and acquiring the first image and the position information of the window area in response to a screen content sharing operation.
In one possible implementation, the window area includes one or more windows.
In a fourth aspect, embodiments of the present application provide an interaction device, the device including:
the first receiving module is used for receiving a second image, the transparency of the area except for a window area in the second image is larger than or equal to a preset value, and the window area represents an area shared with the second device in a picture currently displayed by the first device;
And the display module is used for displaying the second image received by the first receiving module on the second device.
In one possible implementation, the apparatus further includes:
the second receiving module is used for receiving the position information of the window area;
the determining module is used for determining a reverse control area in the second device according to the position information of the window area;
the first acquisition module is used for responding to a second user operation aiming at the countercontrol area and acquiring second coordinate information and a second event type;
the generation module is used for generating a countercontrol message based on the second coordinate information and the second event type;
and the return module is used for returning the countercontrol message so as to control the first device.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for responding to a third user operation aiming at an automatic control area, and acquiring third coordinate information and a third event type, wherein the automatic control area represents an area except for the reverse control area in a picture currently displayed by the second equipment;
and the control module is used for controlling the second equipment based on the third coordinate information and the third event type.
In one possible implementation, the window area includes one or more windows.
In a fifth aspect, embodiments of the present application provide an electronic device, which may perform the interaction method of the first aspect or one or several of the multiple possible implementations of the first aspect, or perform the interaction method of the second aspect or one or several of the multiple possible implementations of the second aspect.
In a sixth aspect, embodiments of the present application provide an electronic device including a display screen for displaying a screen; a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the interaction method of the first aspect or one or more of the plurality of possible implementations of the first aspect or implement the interaction method of the second aspect or one or more of the plurality of possible implementations of the second aspect when executing the instructions.
In a seventh aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the interaction method of the first aspect or one or more of the possible implementations of the first aspect, or perform the interaction method of the second aspect or one or more of the possible implementations of the second aspect.
In an eighth aspect, embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the interaction method of the first aspect or one or more of the possible implementations of the first aspect, or performs the interaction method of the second aspect or one or more of the possible implementations of the second aspect.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
FIG. 1 illustrates a diagram of an interactive system architecture in the related art;
fig. 2 illustrates an application scenario schematic diagram of an interaction method provided in an embodiment of the present application;
FIG. 3 illustrates an architecture diagram of an interactive system provided by an embodiment of the present application;
FIG. 4 shows an interaction flow chart of an interaction method provided by an embodiment of the present application;
FIG. 5a shows an exemplary schematic of a first image of an embodiment of the present application;
FIGS. 5b, 5c and 5d illustrate an exemplary schematic view of a window region in an embodiment of the present application, respectively;
FIG. 6 shows an interaction flow chart of an interaction method provided by an embodiment of the present application;
fig. 7 shows a schematic structural diagram of an interaction device provided in an embodiment of the present application;
fig. 8 shows a schematic structural diagram of an interaction device provided in an embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
Fig. 2 shows an application scenario schematic diagram of an interaction method provided in an embodiment of the present application. As shown in fig. 2, screen content sharing is possible between a personal computer 11, a tablet computer 12, a mobile phone 13, a digital television 14, and other electronic devices having a display screen, such as an outdoor large screen (not shown) and a projector (not shown). The electronic device of the application may refer to a device with a wireless connection function, where the wireless connection function may refer to communication through wireless connection modes such as wireless fidelity (Wireless Fidelity, wiFi) and Bluetooth; the electronic device of the application may also have a wired connection function. Operating systems running on the electronic device of the present application include, but are not limited to, the Android system, the Windows system, and the like. The electronic device may have a touch screen or a non-touch screen; a touch-screen device can be controlled by clicking, sliding, and the like on the display screen with a finger, a stylus, and so on; a non-touch-screen device can be connected with input devices such as a mouse, a keyboard, and a touch panel, and be controlled through these input devices. The embodiments of the application do not limit the electronic devices that perform screen content sharing, the operating systems running on the electronic devices, the communication connection established between the electronic devices, or the type of display screen of the electronic devices.
In the embodiment of the application, the electronic devices can share screen content through interaction. The picture displayed on the display screen of one electronic device may be shared with one or more other electronic devices. For example, as shown in fig. 2, the picture displayed by the tablet computer 12 may be shared on the mobile phone 13; the picture displayed by the tablet computer 12 can also be shared and displayed simultaneously on the personal computer 11 and the mobile phone 13. In the embodiment of the application, when one electronic device performs screen content sharing with other electronic devices, it can share the entire picture currently displayed on its display screen, or only part of the picture (for example, one or more windows). A window can be an instant chat application window, a web page window, a video playing window, an audio playing window, an image display window, a text display window, or the like. The positions of the windows may overlap. The embodiment of the application does not limit the number of electronic devices that share screen content or the shared content.
Fig. 3 shows a schematic architecture diagram of an interactive system provided in an embodiment of the present application. As shown in fig. 3, the interactive system comprises a first device 31 and a second device 32. The first device 31 and the second device 32 include, but are not limited to, the electronic devices shown in fig. 2. In the embodiment of the present application, the first device 31 may include a first negotiation module 311, a driving module 312, a first display module 313, an image processing module 314, and a first transmission module 315. The second device 32 may include a second negotiation module 321, a second display module 322, and a second transmission module 323.
As shown in fig. 3, the first device 31 and the second device 32 perform service negotiation through the first negotiation module 311 and the second negotiation module 321. After the negotiation is completed, the initialization work of the screen content sharing is completed between the first device 31 and the second device 32, and the screen content sharing can be performed.
The user performs a screen content sharing operation (for example, may click a screen content sharing button, or issue a voice instruction for screen content sharing, or the like) with respect to the first device 31. In response to the screen content sharing operation, the driving module 312 of the first device 31 may enable the virtual display driver such that the first display module 313 enters the extended screen mode. At this time, the image processing module 314 of the first device 31 may acquire the screen currently displayed by the first display module 313 as the first image, and acquire the position information of the window area for sharing in the first image. The image processing module 314 may encode the first image based on the position information of the window area to obtain the second image, so that the transparency of the area except the window area in the second image is greater than or equal to a preset value. Thereafter, the first device 31 may transmit the second image to the second transmission module 323 of the second device 32 through the first transmission module 315. After the second transmission module 323 receives the second image, the second display module 322 may display the second image.
In the embodiment of the present application, since the transparency of the area other than the window area in the second image is greater than or equal to the preset value, the second device 32 can achieve the effect of displaying the window area shared by the first device 31 on top of the original image (i.e., the image displayed before the screen content sharing) when displaying the second image, thereby realizing the screen content sharing between the first device 31 and the second device 32. In the above screen content sharing process, only the second image needs to be transmitted between the first device 31 and the second device 32, and a separate encoding channel does not need to be established for each window in the window area; meanwhile, the second device 32 only needs to display the second image, without creating a floating window for each window in the window area. Therefore, in the embodiment of the application, the smoothness of operation of the electronic equipment during screen content sharing is improved, and the user experience is improved.
In one possible implementation, the first device 31 shown in fig. 3 may further include a window management module 316. The user may perform an operation on the first device 31 or the second device 32 with respect to the window area (e.g., may close a window in the window area, move a window in the window area, change the size of a window in the window area, etc.). In response to the user operation performed on the window area, the window management module 316 of the first device 31 may detect that the window area has changed; at this time, the image processing module 314 of the first device 31 may re-acquire the picture currently displayed by the first display module 313 as the first image, and acquire the position information of the window area used for sharing in the first image, so as to obtain a new second image, which is displayed in the second display module 322 of the second device 32, thereby updating the shared content displayed on the second device.
Fig. 4 shows an interaction flow chart of an interaction method provided in an embodiment of the present application. The method may be applied to the interactive system shown in fig. 3. As shown in fig. 4, the method may include:
in step S401, the first device acquires a first image, and position information of a window area in the first image.
The first image represents a picture currently displayed by the first device, and the window area represents an area in the first image that the first device shares and displays with the second device. In one possible implementation, the window area may include one or more windows. A window may be a window of an instant chat application, a browser window, a document window, a picture window, a player window, or the like. A window may or may not be displayed full screen. In the case that the window area includes a plurality of windows, the windows may overlap, that is, any window may be partially or fully covered by other windows, or may partially or fully cover other windows; the windows may also not overlap, i.e., no window is covered by, or covers, another window. The user can select the windows to be shared according to need, and the first device can determine the position information of the window area according to the position information of the windows selected by the user and the layering relation of the windows selected by the user.
Fig. 5a shows an exemplary schematic of a first image of an embodiment of the present application. As shown in fig. 5a, three windows, window 1, window 2 and window 3, are displayed in the first image. Fig. 5b, 5c and 5d show an exemplary schematic view of a window area in an embodiment of the present application, respectively. In one example, the user selects window 1 for sharing display, where the window area in step S401 is the area where window 1 is located (as shown in fig. 5 b). In yet another example, the user selects window 1 and window 2 for sharing display, where the window area in step S401 is the union of the area where window 1 is located and the area where window 2 is located (as shown in fig. 5 c). In another example, the user selects window 1 and window 3 for sharing display, where the window area in step S401 is the union of the area where window 1 is located and the area where window 3 is located (as shown in fig. 5 d). It will be appreciated that the above is merely an exemplary illustration of a window area and is not intended to limit the window area.
In one example, the location information of the window region may be the coordinates of the respective vertices of the window region. Taking fig. 5b as an example, the position information of the window area may be the coordinates of its four vertices. Taking fig. 5c as an example, the position information of the window area may be the coordinates of its eight vertices. Taking fig. 5d as an example, the position information of the window area may include two sets of coordinates, each set including the coordinates of four vertices. In yet another example, the location information of the window region may be the coordinates of a vertex of the window region and the lengths of its edges. Taking fig. 5b as an example, the position information of the window area may be the coordinates of the lower-left corner vertex, together with the width and the height. The above is merely an example of the position information of the window area, and is not intended to limit the position information of the window area.
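Purely for illustration, the position information of a window area could be carried as a small list of rectangles, matching the vertex or vertex-plus-width/height encodings just described. The following Python sketch shows one such representation; the names WindowRect and window_area_position and the concrete coordinates are assumptions for the example, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WindowRect:
    """One shared window as an axis-aligned rectangle (illustrative representation only)."""
    x: int       # x coordinate of the lower-left corner vertex
    y: int       # y coordinate of the lower-left corner vertex
    width: int
    height: int

    def vertices(self) -> List[tuple]:
        """The four vertex coordinates -- one possible encoding of the position information."""
        return [(self.x, self.y),
                (self.x + self.width, self.y),
                (self.x + self.width, self.y + self.height),
                (self.x, self.y + self.height)]

def window_area_position(selected_windows: List[WindowRect]) -> List[WindowRect]:
    """Position information of the window area: here simply the list of rectangles of the
    windows selected for sharing (cf. the union areas of Fig. 5c and Fig. 5d)."""
    return list(selected_windows)

# Example: window 1 and window 3 are selected for sharing (two disjoint rectangles, cf. Fig. 5d).
area = window_area_position([WindowRect(100, 100, 400, 300), WindowRect(700, 500, 300, 200)])
```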
In one possible implementation, step S401 may include: the first device responds to a window control instruction to acquire the first image and position information of the window area.
The window control instruction can be used for controlling a window area displayed on the first device. In one example, the window control instructions may be used to implement one or more of moving a window in a window area, closing a window in a window area, removing a window in a window area, adding an existing window to a window area, and creating and adding a created window to a window area. For example, as shown in fig. 5d, the window control instruction is used to remove the left window from the window area (i.e., the left window is no longer shared for display), and in response to the window control instruction, the first device may acquire the first image, and the position information of the window area shown in fig. 5 b.
In one example, the window control instruction may be generated based on a first user operation. Wherein the first user operation represents a user operation performed with respect to a window area displayed on the first device. That is, the first user operation is an operation performed on the first device by the user through an input device such as a mouse, a keyboard, a touch screen, or a stylus pen, and the first user operation is an operation performed with respect to the window area. The coordinate information corresponding to the first user operation may be referred to as first coordinate information. For example, the first coordinate information may be coordinate information of a position clicked by a mouse on the first device, coordinate information of a touched position on a touch screen of the first device, and the like. The event type corresponding to the first user operation may be referred to as a first event type. For example, the first event type may include, but is not limited to: one or more of creating a window, moving a window, closing a window, removing a shared window, adding a shared window, and the like.
In the embodiment of the application, the first device responds to a first user operation to acquire first coordinate information and a first event type; generating the window control instruction based on the first coordinate information and a first event type; and controlling the window area displayed on the first device according to the window control instruction. In a specific implementation, the first device may control a window area displayed on the first device according to the first coordinate information and the first event type corresponding to the window control instruction. For example, the position indicated by the first coordinate information is located on a close button of the window 1 displayed by the first device, the first event type is a closed window, and the first device may close the window 1 in the window area displayed on the first device according to the first coordinate information and the first event type.
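As an illustration only of how a window control instruction could be assembled from the first coordinate information and the first event type, the sketch below defines an illustrative event-type enumeration and instruction structure; all names (EventType, WindowControlInstruction, on_first_user_operation) and the example coordinates are assumptions, not part of the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class EventType(Enum):
    """Event types named in the text; the enumeration itself is an assumption for the example."""
    CREATE_WINDOW = auto()
    MOVE_WINDOW = auto()
    CLOSE_WINDOW = auto()
    REMOVE_SHARED_WINDOW = auto()
    ADD_SHARED_WINDOW = auto()

@dataclass
class WindowControlInstruction:
    """A window control instruction carrying coordinate information and an event type."""
    x: int
    y: int
    event: EventType

def on_first_user_operation(x: int, y: int, event: EventType) -> WindowControlInstruction:
    """First device: turn the first coordinate information and first event type of a local
    user operation into a window control instruction (applying it is not shown here)."""
    return WindowControlInstruction(x=x, y=y, event=event)

# Example: the user clicks the close button of window 1 at pixel (480, 390).
instruction = on_first_user_operation(480, 390, EventType.CLOSE_WINDOW)
```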
In yet another example, the window control instruction may be generated based on a second user operation. The second user operation represents a user operation performed with respect to the window area displayed on the second device. That is, the second user operation is an operation performed on the second device by the user through an input device such as a mouse, a keyboard, a touch screen, or a stylus, and it is an operation performed with respect to the window area. The coordinate information corresponding to the second user operation may be referred to as second coordinate information. For example, the second coordinate information may be the coordinate information of a position clicked by a mouse on the second device, the coordinate information of a touched position on a touch screen of the second device, and the like. The event type corresponding to the second user operation may be referred to as a second event type. For example, the second event type may include, but is not limited to, one or more of moving a window, closing a window, removing a window from sharing, and the like. The second device may generate a countercontrol message based on the second coordinate information and the second event type and send the countercontrol message to the first device.
In the embodiment of the application, the first device responds to a received countercontrol message, and acquires second coordinate information and a second event type from the countercontrol message, wherein the countercontrol message is generated according to a second user operation; performing coordinate transformation on the second coordinate information; generating the window control instruction based on the transformed second coordinate information and the second event type; and controlling the window area displayed on the first device according to the window control instruction. In a specific implementation, the first device may control the window area displayed on the first device according to the transformed second coordinate information and the second event type corresponding to the window control instruction. For example, the position indicated by the second coordinate information is located on a closing button of the window 1 displayed by the second device, the second event type is closing the window, and the first device may close the window 1 in the window area displayed on the first device according to the transformed second coordinate information and the second event type. It will be appreciated that the display of the first device and the display of the second device may be of different sizes and/or resolutions, and therefore the first device may need to transform the second coordinate information before control takes place. In the embodiment of the present application, the method for transforming the second coordinate information is not limited.
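The embodiment deliberately leaves the coordinate transformation method open. As one possible assumption, simple proportional scaling between the two display resolutions could be used, as in the following sketch; the function name and the resolutions are illustrative only.

```python
def transform_coordinates(x2: float, y2: float,
                          second_resolution: tuple,
                          first_resolution: tuple) -> tuple:
    """Map a point reported by the second device into the first device's coordinate space.
    Proportional scaling is only one possible choice; the embodiment does not restrict
    how the transformation is performed."""
    scale_x = first_resolution[0] / second_resolution[0]
    scale_y = first_resolution[1] / second_resolution[1]
    return x2 * scale_x, y2 * scale_y

# Example: the second device's display is 1920x1080 and the first device's is 2560x1600.
x1, y1 = transform_coordinates(960, 540, (1920, 1080), (2560, 1600))  # -> (1280.0, 800.0)
```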
Then, the first device may obtain the first image and the position information of the window area in response to the window control instruction generated based on the first user operation or the window control instruction generated based on the second user operation, and further may execute steps S402 to S405 to implement updating of the window area displayed in a sharing manner on the second device.
In one possible implementation, step S401 may include: the first device acquires the first image and the position information of the window area in response to a screen content sharing operation.
Wherein the screen content sharing operation may be used to share display one or more windows displayed on the first device. After the user selects a window for sharing, the screen content sharing operation may be performed by triggering (e.g., clicking, double clicking, etc.) a screen content sharing button, making a screen content sharing gesture (e.g., palm covering the screen, circling a finger on the display screen, etc.), or dragging the selected window out of the display screen, etc. The first device detects the screen content sharing operation, which indicates that the user wants to perform window sharing display, so that the first device may acquire the first image and the position information of the window area in response to the screen content sharing operation, and may further perform steps S402 to S405 to implement sharing display of the window area on the second device.
In step S402, the first device encodes the first image based on the position information of the window area, so as to obtain a second image, so that the transparency of the area except the window area in the second image is greater than or equal to a preset value.
The preset value can be set as required. Taking 0 as complete opacity and 100% as complete transparency as an example, the preset value may be set to 90%, 95%, 100%, or the like. It will be appreciated that the larger the preset value, the more blurred (i.e. less visible to the user) the non-window region (i.e. the region other than the window area) in the second image, and the more clearly (i.e. more visibly to the user) the picture covered by the non-window region in the second image is shown. In the case where the preset value is set to 100%, the area other than the window area in the second image is not displayed at all; the user does not see the non-window area in the second image, and the picture covered by the non-window area in the second image is displayed normally (i.e., for a given picture, the effect seen by the user is the same whether or not the picture is covered by the non-window area of the second image). In the case where the preset value is set to 0, the area except the window area in the second image is displayed normally; the user can clearly see the area except the window area in the second image, and cannot see the picture covered by that area.
When the transparency of the area except the window area in the second image is greater than or equal to the preset value, the user cannot see, or cannot clearly see, the area except the window area in the second image; accordingly, the user can clearly see the picture covered by that area. Therefore, when the first device sends the second image to the second device for display, the window area is shared and displayed on both devices, while information leakage that would be caused by clearly displaying the non-window area of the second image on the second device is avoided, improving the privacy security of the first device and the flexibility of the shared content. At the same time, when the second device displays the second image, the user can clearly see the picture covered by the non-window area of the second image, which reduces the impact of screen content sharing on the user's use of the second device and improves the user's experience with it. In addition, the shared display of the window area is realized merely by transmitting the second image between the first device and the second device: a separate encoding channel does not need to be established for each window, and the second device does not need to create a separate floating window for each window, which reduces sharing time delay and saves resources, thereby improving the running smoothness of the electronic devices during screen content sharing and improving user experience. The improvement is especially obvious when the window area contains many windows, which enlarges the application scenarios beyond cases with only a few shared windows.
In one example, the first device may encode the first image using an ARGB encoding scheme to obtain the second image. In the ARGB coding scheme, A represents the Alpha channel, used to indicate transparency; R represents the Red channel; G represents the Green channel; and B represents the Blue channel. When the ARGB coding mode is adopted to encode the first image, the value of the Alpha channel of the non-window area in the first image can be set to a specified value while the RGB channels (including the Red, Green, and Blue channels) are encoded normally; the Alpha channel and the RGB channels of the window area in the first image are all encoded normally. In practical applications, the value of the Alpha channel represents opacity: a value of 0 indicates complete transparency, a value of 100% indicates complete opacity, and a value between 0 and 100% indicates semi-transparency. In this embodiment of the present application, the value of the Alpha channel of the non-window area may be set to a value not larger than (100% − the preset value), so that the transparency of the non-window area in the second image is greater than or equal to the preset value.
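As a rough illustration of the Alpha handling described above (not of the actual ARGB codec), the following NumPy sketch lowers the Alpha channel of all pixels outside the window area so that their transparency is at least the preset value. The function name, the array layout (H×W×4 with channel 0 as Alpha), and the 0–255 alpha range are assumptions for the example.

```python
import numpy as np

def encode_with_transparency(first_image_argb: np.ndarray,
                             window_rects: list,
                             preset_transparency: float = 1.0) -> np.ndarray:
    """Produce the 'second image': keep the window area unchanged and lower the Alpha
    (opacity) channel of every other pixel so that its transparency is at least the
    preset value.

    first_image_argb: H x W x 4 array with channel 0 as Alpha (0 = transparent, 255 = opaque).
    window_rects: (x, y, width, height) rectangles of the window area, origin at the top-left.
    preset_transparency: required minimum transparency outside the window area, in [0, 1].
    """
    second = first_image_argb.copy()
    # Mark the pixels that belong to the window area.
    window_mask = np.zeros(second.shape[:2], dtype=bool)
    for x, y, w, h in window_rects:
        window_mask[y:y + h, x:x + w] = True
    # Outside the window area, opacity must not exceed (1 - preset transparency).
    max_alpha = int(round((1.0 - preset_transparency) * 255))
    outside = ~window_mask
    second[outside, 0] = np.minimum(second[outside, 0], max_alpha)
    return second
```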
In step S403, the first device transmits the second image to the second device.
In step S404, the second device receives the second image.
Step S405, the second device displays the second image.
In the embodiment of the application, the first image, which represents the picture currently displayed by the first device, is encoded to obtain the second image, so that the transparency of the region of the second image outside the region that the first device and the second device share and display is greater than or equal to the preset value. Window region sharing can therefore be realized by transmitting the second image between the first device and the second device and displaying the second image on the second device; a separate encoding channel does not need to be established for each window in the window region, and a floating window does not need to be created for each window in the window region, which reduces time delay and resource occupation, thereby improving the running smoothness of the electronic device during screen content sharing and improving user experience. For scenes with many windows in the window area, the smoothness of operation of the electronic equipment can be improved especially effectively, which enlarges the application scenarios of screen content sharing.
Fig. 6 shows an interaction flow chart of an interaction method provided in an embodiment of the present application. The method may be applied to the interactive system shown in fig. 3. As shown in fig. 6, the method may include:
In step S501, the first device acquires a first image, and position information of a window area in the first image.
This step may refer to step S401, and will not be described here again.
In step S502, the first device encodes the first image based on the position information of the window area, so as to obtain a second image, so that the transparency of the area except the window area in the second image is greater than or equal to a preset value.
The present step may refer to step S402, and will not be described herein.
In step S503, the first device sends the second image and the position information of the window area to the second device.
In this step, the first device may first send the second image to the second device, and then send the position information of the window area to the second device; or the position information of the window area is sent to the second device, and then the second image is sent to the second device; the second image and the location information of the window area may also be transmitted simultaneously to the second device.
In step S504, the second device receives the second image and the position information of the window area.
In step S505, the second device displays the second image.
In step S506, the second device determines the countercontrol area according to the position information of the window area.
In this step, the second device may first transform the position information of the window area, and then determine the countercontrol area according to the transformed position information. Specifically, the area indicated by the transformed position information of the window area within the picture displayed by the second device is the countercontrol area, and the area other than the countercontrol area may be called the automatic control area. The countercontrol area indicates a region in which the first device is reversely controlled. The automatic control area represents a region in which the second device itself is directly controlled. It can be understood that when a user operation received by the second device falls in the countercontrol area, it indicates that the user wants to operate a window of the first device, and the first device needs to be controlled; when a user operation received by the second device falls in the automatic control area, it indicates that the user wants to operate a local window, and the second device needs to be controlled. A user operation in the countercontrol area may be denoted as a second user operation, and a user operation in the automatic control area may be denoted as a third user operation.
The second device can determine whether a user operation is in the countercontrol area or in the automatic control area according to the coordinate information corresponding to the user operation, the position information of the countercontrol area, and the position information of the automatic control area, i.e., whether the user operation is a second user operation for the countercontrol area or a third user operation for the automatic control area. In the case where the user operation is the second user operation, steps S507 to S514 may be performed; in the case where the user operation is the third user operation, step S515 may be performed.
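A minimal sketch of this hit test follows, assuming the countercontrol area is described by one or more rectangles in the second device's coordinate space; the function name, return values, and example numbers are illustrative only.

```python
def classify_user_operation(x: int, y: int, countercontrol_rects: list) -> str:
    """Second device: decide whether a user operation falls in the countercontrol area
    (the shared window area, after transforming its position information into the second
    device's coordinates) or in the automatic control area (everything else on its screen).
    Rectangles are (x, y, width, height) tuples."""
    for rx, ry, rw, rh in countercontrol_rects:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return "countercontrol"   # second user operation: back-control the first device
    return "autocontrol"              # third user operation: control the second device locally

# Example: the shared window area occupies a single rectangle on the second device's screen.
region = classify_user_operation(200, 150, [(100, 100, 400, 300)])  # -> "countercontrol"
```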
In step S507, the second device obtains second coordinate information and a second event type in response to a second user operation for the countercontrol area.
The second coordinate information represents coordinate information corresponding to the second user operation, the second event type represents an event type corresponding to the second user operation, and the second coordinate information and the second event type may refer to step S401, which is not described herein.
In step S508, the second device generates a countercontrol message based on the second coordinate information and the second event type.
The countercontrol message may refer to step S401, and will not be described herein again.
In step S509, the second device returns a countercontrol message to the first device.
In step S510, the first device obtains the second coordinate information and the second event type from the countercontrol message in response to the received countercontrol message.
In step S511, the first device performs coordinate transformation on the second coordinate information.
In step S512, the first device generates a window control instruction based on the transformed second coordinate information and the second event type.
Wherein the window control instruction may be used to control a window in the first device.
In step S513, the first device controls the displayed window area according to the window control instruction.
In this step, the first device may control the window area displayed on the first device according to the transformed second coordinate information and the second event type corresponding to the window control instruction. For example, the position indicated by the second coordinate information is located on a closing button of the window 1 displayed by the second device, the second event type is closing the window, and the first device may close the window 1 in the window area displayed on the first device according to the transformed second coordinate information and the second event type.
Step S514, the first device performs step S501 in response to the window control instruction.
In this step, the first device controls the displayed window area according to the window control instruction, for example, moving a window in the window area, closing a window in the window area, moving one or more windows out of the window area, and the like. As a result, the window area in the first device changes and the displayed picture changes, so the first device can re-acquire the first image and the position information of the window area in the first image in response to the window control instruction, encode the first image based on the position information of the window area to obtain a new second image, and then send the new second image and the position information of the new window area to the second device. The second device displays the new second image, thereby updating the window area of the shared display. The second device can also determine a new countercontrol area and a new automatic control area according to the position information of the new window area, so that the countercontrol area and the automatic control area are updated and user operations are handled more accurately.
In step S515, the second device acquires third coordinate information and a third event type in response to a third user operation for the autonomous region, and controls the second device based on the third coordinate information and the third event type.
In this step, the second device can control its local windows, so that the use and operation of the second device are not affected while the shared display is performed, and the user experience is improved.
In one example, the second device may use a virtual driver technique to inject the third coordinate information and the third event type directly into the second device, so as not to affect the operation and experience of the second device's own applications.
In the embodiment of the application, the reverse control area and the automatic control area in the second device are determined based on the position information of the window area, and the first device is reversely controlled or the second device is locally controlled according to whether the user operation is in the reverse control area or the automatic control area, so that the effect that the second device reversely controls the window area in the first device is achieved, the operation of the application of the second device is not influenced, and the user experience is improved.
Fig. 7 shows a schematic structural diagram of an interaction device provided in an embodiment of the present application. The apparatus may be applied to the first device 31 shown in fig. 3. As shown in fig. 7, the apparatus 70 may include: a first acquisition module 71, configured to acquire a first image and position information of a window area in the first image, where the first image represents a picture currently displayed by a first device, and the window area represents an area in the first image that the first device shares and displays with a second device; an encoding module 72, configured to encode the first image acquired by the first acquisition module 71 based on the position information of the window area acquired by the first acquisition module 71, so as to obtain a second image, so that the transparency of the area except the window area in the second image is greater than or equal to a preset value; and a first sending module 73, configured to send the second image obtained by the encoding module 72.
In the embodiment of the application, the first image, which represents the picture currently displayed by the first device, is encoded to obtain the second image, so that the transparency of the region of the second image outside the region that the first device and the second device share and display is greater than or equal to the preset value. Window region sharing can therefore be realized by transmitting the second image between the first device and the second device and displaying the second image on the second device; a separate encoding channel does not need to be established for each window in the window region, and a floating window does not need to be created for each window in the window region, which reduces time delay and resource occupation, thereby improving the running smoothness of the electronic device during screen content sharing and improving user experience. For scenes with many windows in the window area, the smoothness of operation of the electronic equipment can be improved especially effectively, which enlarges the application scenarios of screen content sharing.
In one possible implementation, the apparatus further includes: a second sending module, configured to send the position information of the window area.

In one possible implementation, the first obtaining module is further configured to: acquire the first image and the position information of the window area in response to a window control instruction, where the window control instruction is used to control the window area displayed on the first device.

In one possible implementation, the apparatus further includes: a second acquisition module, configured to acquire first coordinate information and a first event type in response to a first user operation, where the first user operation represents a user operation performed on the window area displayed on the first device; a first generation module, configured to generate the window control instruction based on the first coordinate information and the first event type; and a first control module, configured to control the window area displayed on the first device according to the window control instruction.

In one possible implementation, the apparatus further includes: a third acquisition module, configured to acquire, in response to a received counter-control message, second coordinate information and a second event type from the counter-control message, where the counter-control message is generated according to a second user operation, and the second user operation represents a user operation performed on the window area displayed on the second device; a transformation module, configured to perform coordinate transformation on the second coordinate information; a second generation module, configured to generate the window control instruction based on the transformed second coordinate information and the second event type; and a second control module, configured to control the window area displayed on the first device according to the window control instruction.
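The coordinate transformation performed by the transformation module can be pictured with the following sketch, which assumes the mapping between the window area as shown on the second device and as laid out on the first device is a simple offset plus scale; the message format and the rectangle values are illustrative assumptions.

```python
# Illustrative sketch (assumed offset-plus-scale mapping): transforming the
# coordinates carried in a counter-control message from the second device's
# screen coordinates into the first device's screen coordinates before the
# window control instruction is generated.

def transform_coordinates(x2, y2, area_on_second, area_on_first):
    sx, sy, sw, sh = area_on_second   # window area as shown on the second device
    fx, fy, fw, fh = area_on_first    # window area as laid out on the first device
    # Normalize within the area on the second device, then rescale to the first.
    x1 = fx + (x2 - sx) * fw / sw
    y1 = fy + (y2 - sy) * fh / sh
    return round(x1), round(y1)

def handle_counter_control_message(msg, area_on_second, area_on_first):
    x1, y1 = transform_coordinates(msg["x"], msg["y"], area_on_second, area_on_first)
    # The window control instruction reuses the original event type.
    return {"x": x1, "y": y1, "event": msg["event"]}

msg = {"x": 420, "y": 260, "event": "click"}
# Window area shown at (100, 80, 640, 480) on the second device, laid out at
# (0, 0, 1280, 960) on the first device -> prints {'x': 640, 'y': 360, 'event': 'click'}.
print(handle_counter_control_message(msg, (100, 80, 640, 480), (0, 0, 1280, 960)))
```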
In one possible implementation, the first obtaining module is further configured to: acquire the first image and the position information of the window area in response to a screen content sharing operation.
In one possible implementation, the window area includes one or more windows.
Fig. 8 shows a schematic structural diagram of an interaction apparatus provided in an embodiment of the present application. The apparatus may be applied to the second device 32 shown in fig. 3. As shown in fig. 8, the apparatus 80 may include:

a first receiving module 81, configured to receive a second image, where the transparency of the area in the second image other than a window area is greater than or equal to a preset value, and the window area represents an area shared with the second device in the picture currently displayed by the first device;

and a display module 82, configured to display, on the second device, the second image received by the first receiving module 81.
In this embodiment of the application, the second device displays the second image, in which the transparency of the region other than the region that the first device shares and displays with the second device is greater than or equal to the preset value. Window area sharing can therefore be achieved by transmitting the second image between the first device and the second device and displaying the second image on the second device, without establishing a separate encoding channel for each window in the window area or creating a floating window for each window in the window area. This reduces latency and resource usage, improves the running smoothness of the electronic device during screen content sharing, and improves the user experience. For scenes with many windows in the window area, the smoothness of operation of the electronic device can be effectively improved, which broadens the application scenarios of screen content sharing.
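To make the effect of displaying the second image concrete, the sketch below alpha-composites a received RGBA second image over the second device's own picture: only the opaque window area is visible, while the fully transparent remainder lets the local content show through. The array shapes, pixel values, and use of NumPy are assumptions made for this example.

```python
# Illustrative sketch: what displaying the second image amounts to visually on the
# second device. Both inputs are H x W x 4 uint8 RGBA frames.
import numpy as np

def composite(local_picture, second_image):
    """Blend the received second image over the second device's own picture."""
    alpha = second_image[..., 3:4].astype(np.float32) / 255.0
    blended = second_image[..., :3] * alpha + local_picture[..., :3] * (1.0 - alpha)
    out = local_picture.copy()
    out[..., :3] = blended.astype(np.uint8)
    return out

local = np.zeros((1080, 1920, 4), dtype=np.uint8)     # second device's own picture
shared = np.zeros((1080, 1920, 4), dtype=np.uint8)    # received second image
shared[80:560, 100:740] = (255, 255, 255, 255)        # opaque shared window area
result = composite(local, shared)
assert (result[300, 300, :3] == 255).all()   # window area comes from the first device
assert (result[0, 0, :3] == 0).all()         # elsewhere the local picture shows through
```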
In one possible implementation, the apparatus further includes: a second receiving module, configured to receive the position information of the window area; a determining module, configured to determine a counter-control area in the second device according to the position information of the window area; a first acquisition module, configured to acquire second coordinate information and a second event type in response to a second user operation for the counter-control area; a generation module, configured to generate a counter-control message based on the second coordinate information and the second event type; and a return module, configured to return the counter-control message so as to control the first device.
In one possible implementation, the apparatus further includes: a second acquisition module, configured to acquire third coordinate information and a third event type in response to a third user operation for an autonomous control area, where the autonomous control area represents the area other than the counter-control area in the picture currently displayed by the second device; and a control module, configured to control the second device based on the third coordinate information and the third event type.
In one possible implementation, the window area includes one or more windows.
Fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. Taking a mobile phone as an example of the electronic device, fig. 9 shows a schematic structural diagram of the mobile phone 200.
The mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, a SIM card interface 295, and the like. The sensor module 280 may include a gyroscope sensor, an acceleration sensor, a proximity sensor, a fingerprint sensor, a touch sensor, a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric sensor, a bone conduction sensor, etc. (not shown).
It should be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the mobile phone 200. In other embodiments of the present application, the mobile phone 200 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller may be the neural center and command center of the mobile phone 200. The controller can generate operation control signals according to instruction operation codes and timing signals, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache. The memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are thereby avoided and the waiting time of the processor 210 is reduced, which improves the efficiency of the system.
The processor 210 may run the method provided in the embodiments of the present application, so as to improve the smoothness of the mobile phone 200 during screen content sharing. The processor 210 may include different devices. For example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to perform the interaction method provided in the embodiments of the present application; for instance, part of the algorithm of the interaction method is performed by the CPU and another part is performed by the GPU, so as to obtain higher processing efficiency.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the mobile phone 200 may include 1 or N displays 294, where N is a positive integer greater than 1. The display 294 may be used to display information entered by or provided to the user, as well as various graphical user interfaces (graphical user interface, GUI). For example, the display 294 may display photographs, videos, web pages, or files. For another example, the display 294 may display a graphical user interface that includes a status bar, a navigation bar that can be hidden, a time and weather widget, and application icons such as a browser icon. The status bar includes the operator name (for example, China Mobile), the mobile network (for example, 4G), the time, and the remaining battery level. The navigation bar includes a back key icon, a home screen key icon, and a forward key icon. Further, it can be understood that in some embodiments, the status bar may also include a Bluetooth icon, a Wi-Fi icon, an external device icon, and the like. It can also be understood that in other embodiments, the graphical user interface may include a Dock bar, and the Dock bar may include commonly used application icons and the like. When the processor 210 detects a touch event of a user's finger (or a stylus, etc.) on an application icon, in response to the touch event, the user interface of the application corresponding to the application icon is opened and displayed on the display 294.
In this embodiment of the present application, the display 294 may be an integral flexible display, or a tiled display formed of two rigid screens and a flexible screen located between the two rigid screens.
After the processor 210 runs the method provided in the embodiments of the present application, the mobile phone 200 may establish a connection with another electronic device through the antenna 1, the antenna 2, or the USB interface, transmit data according to the method provided in the embodiments of the present application, and control the display 294 to display a corresponding graphical user interface. In this embodiment of the application, the second image, the position information of the window area, the counter-control message, and the like may be transmitted, and the display 294 may be controlled to display the first image or the second image.
The camera 293 (a front camera or a rear camera, or one camera that can serve as both a front camera and a rear camera) is used to capture still images or video. In general, the camera 293 may include a photosensitive element, such as a lens group and an image sensor. The lens group includes a plurality of lenses (convex lenses or concave lenses) for collecting optical signals reflected by the object to be photographed and transmitting the collected optical signals to the image sensor. The image sensor generates an original image of the object to be photographed according to the optical signals.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the mobile phone 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store, among other things, code for an operating system, an application program (e.g., a camera application, a WeChat application, etc.), and so on. The storage data area may store data created during use of the mobile phone 200 (e.g., images, video, etc. captured by the camera application), etc.
The internal memory 221 may also store one or more computer programs 1310 corresponding to the interaction method provided in the embodiments of the present application. The one or more computer programs 1310 are stored in the internal memory 221 and configured to be executed by the one or more processors 210, and the one or more computer programs 1310 include instructions.
In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (universal flash storage, UFS), and the like.
Of course, the code of the interaction method provided in the embodiments of the present application may also be stored in an external memory. In this case, the processor 210 may run the code of the interaction method stored in the external memory through the external memory interface 220.
The touch sensor 280K is also referred to as a "touch panel". The touch sensor 280K may be disposed on the display 294, and the touch sensor 280K and the display 294 form a touchscreen, also referred to as a "touch screen". The touch sensor 280K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type, and visual output related to the touch operation may be provided through the display 294. In other embodiments, the touch sensor 280K may also be disposed on the surface of the mobile phone 200 at a location different from that of the display 294.
Illustratively, the display 294 of the mobile phone 200 displays a main interface that includes icons of a plurality of applications (e.g., a camera application, a WeChat application, etc.). The user clicks the icon of the camera application in the main interface through the touch sensor 280K, which triggers the processor 210 to launch the camera application and open the camera 293. The display 294 then displays an interface of the camera application, such as a viewfinder interface.
The wireless communication function of the mobile phone 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 251 may provide a solution for wireless communication including 2G/3G/4G/5G applied to the mobile phone 200. The mobile communication module 251 may include at least one filter, a switch, a power amplifier, a low noise amplifier (low noise amplifier, LNA), and the like. The mobile communication module 251 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 251 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be provided in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be disposed in the same device as at least some of the modules of the processor 210. In this embodiment of the application, the mobile communication module 251 may also be used for information interaction with other electronic devices, for example, sending the second image and the position information of the window area to the second device and receiving the counter-control message sent by the second device, or receiving the second image and the position information of the window area sent by the first device and sending the counter-control message to the first device.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 270A, the receiver 270B, etc.), or displays images or video through the display 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 251 or another functional module, independent of the processor 210.
The wireless communication module 252 may provide solutions for wireless communication applied to the mobile phone 200, including a wireless local area network (wireless local area networks, WLAN) (for example, a wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), and the like. The wireless communication module 252 may be one or more devices that integrate at least one communication processing module. The wireless communication module 252 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 252 may also receive a signal to be transmitted from the processor 210, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation through the antenna 2. In this embodiment of the application, the wireless communication module 252 is configured to transmit data with other electronic devices under the control of the processor 210; for example, when the processor 210 runs the interaction method provided in the embodiments of the present application, the processor may control the wireless communication module 252 to send data such as the second image to the second device.
In addition, the mobile phone 200 may implement audio functions, such as music playing and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, the application processor, and the like. The mobile phone 200 may receive inputs from the keys 290 and generate key signal inputs related to user settings and function control of the mobile phone 200. The mobile phone 200 may use the motor 291 to generate a vibration alert (for example, an incoming call vibration alert). The indicator 292 in the mobile phone 200 may be an indicator light, which may be used to indicate the charging state and power change, or to indicate a message, a missed call, a notification, and the like. The SIM card interface 295 in the mobile phone 200 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to achieve contact with and separation from the mobile phone 200.
It should be understood that in practical applications, the mobile phone 200 may include more or fewer components than shown in fig. 9, and this is not limited in the embodiments of the present application. The illustrated mobile phone 200 is only an example; the mobile phone 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different arrangement of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
An embodiment of the present application provides an electronic device, including: a display screen for displaying a picture, a processor, and a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described method when executing the instructions.
Embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product including computer readable code, or a non-transitory computer readable storage medium carrying computer readable code; when the computer readable code runs in a processor of an electronic device, the processor performs the above method.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Electrically Programmable Read-Only Memory, EPROM, or flash memory), a static random access memory (Static Random-Access Memory, SRAM), a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a digital versatile disc (Digital Video Disc, DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punched card or an in-groove protrusion structure on which instructions are stored, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or it may be connected to an external computer (for example, through the internet using an internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (Field-Programmable Gate Array, FPGA), or programmable logic arrays (Programmable Logic Array, PLA), with state information of computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., circuits or ASICs (Application Specific Integrated Circuit, application specific integrated circuits)) which perform the corresponding functions or acts, or combinations of hardware and software, such as firmware, etc.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
