Detailed Description
The window control method provided by the embodiments of the present application is applied to an electronic device, where the electronic device includes, but is not limited to, an AR device or a VR device. The AR device adopts augmented reality technology, a technology that "seamlessly" integrates real-world information and virtual-world information. The VR device adopts virtual reality technology, which mainly involves a simulated environment, perception, natural skills, and sensing equipment. The simulated environment is a computer-generated, dynamic, three-dimensional realistic image. Perception means that an ideal VR system should provide all the perceptions a person has: in addition to the visual perception generated by computer graphics technology, there are auditory, tactile, kinesthetic, and other perceptions. Natural skills refer to head rotation, eye movement, gestures, or other human behaviors; the computer processes the data corresponding to the participant's actions, responds to the user's inputs in real time, and feeds the results back to the user's senses.
Referring to fig. 1A and fig. 1B, fig. 1A and fig. 1B are schematic views of an electronic device and a pointing device according to an embodiment of the present application.
The scene diagrams shown in fig. 1A and fig. 1B include, but are not limited to, an electronic device 10, exemplified by a VR device, a pointing device 11, exemplified by a remote control handle, and a plurality of windows 1000 presented within a viewable area 100 of the VR device (e.g., the area between dashed line 101 and dashed line 102 in fig. 1A). The VR device and the remote control handle are associated such that the VR device controls the windows selected by the remote control handle. After the user wears the VR device, the plurality of windows 1000 are presented within the viewable area 100 of the VR device. The user can select some of the windows as target windows to be controlled using the pointing device 11. After the target windows are selected, the VR device performs the operation indicated by the pointing device 11 on each target window at the same time.
It is to be noted that, although the pointing device 11 is exemplified above by a remote control handle, the present application is not limited thereto; the pointing device 11 may also be a glove as shown in fig. 1B, a wristwatch as shown in fig. 1C, a terminal such as a bracelet as shown in fig. 1D, or a hand of the user. Likewise, although the pointing device 11 is explained above as being used to control windows displayed by the VR device, the present application is not limited thereto, and the pointing device may also be used for other functional operations, such as controlling the movement of an object in a game, performing an action, and the like.
According to the embodiments of the present application, the selection modes of the target window include, but are not limited to, point selection (click selection: a clicked window is selected as a target window), area selection (drawing a closed figure: the windows covered by the closed figure are selected as target windows), line selection (drawing a line: the windows crossed by the line are selected as target windows), and the like.
Fig. 2 is a schematic diagram of selection manners of a target window according to an embodiment of the present application. As illustrated in fig. 2a on the left of fig. 2, a user may click one or more windows presented in the viewable area of the VR device using keys provided on the remote control handle, or by holding the remote control handle and performing a tap- or click-like action; the clicked windows become the selected target windows. As illustrated in fig. 2b in the middle of fig. 2, the user may tap or click one or more windows with a hand, or swipe a hand so that the windows in the area the swipe passes through are selected as target windows. The hand can move within the viewable area of the VR device; when the movement track forms a closed figure, the windows covered by the closed figure are the target windows. When the movement track forms a curve or a straight line, the windows crossed by the curve or straight line can be selected as target windows. It can be understood that the user may also move the remote control handle within the viewable area of the VR device to establish a closed figure for selecting target windows. As illustrated in fig. 2c of fig. 2, the user may grab some of the windows by hand to serve as target windows; when grabbing by hand, the windows covered by the hand are the selected target windows.
Fig. 3A to fig. 3C are schematic diagrams of selection results corresponding to the selection manners of a target window according to an embodiment of the present application. As illustrated in fig. 3A, the user clicks window A, window C, and window D among window A, window B, window C, and window D using the remote control handle or a finger. Window A, window C, and window D, being the selected target windows, may show a selected identifier 1001 in the upper right corner. The selected identifier 1001 may be a black dot as shown in fig. 3A; identifiers of other shapes and types, such as a triangle, a diamond, or a square, are also possible. The embodiments of the present application are not limited herein.
As illustrated in fig. 3B, the user moves a hand or the remote control handle within the viewable area of the VR device. The closed figure formed by the movement track may be a regular rectangle 1002, a triangle 1003, an irregular shape 1004, or the like, as shown in fig. 3B. Window A, window B, and window C, covered by the regular rectangle 1002, the triangle 1003, and the irregular shape 1004 of fig. 3B, are the selected target windows. Of course, the shape of the closed figure is not limited to those shown in fig. 3B and may be any other shape; the embodiments of the present application are not limited herein. It will be appreciated that a window may be determined to be a selected target window when it lies entirely within the closed figure formed by the movement track, or when a sufficient part of it lies within the closed figure (e.g., when at least 50% of the window's area is within the closed figure, the window is considered a selected target window).
As illustrated in fig. 3C, the user moves a hand or the remote control handle within the viewable area of the VR device, and the movement track forms a straight line 1005 or a curve 1006 as shown in fig. 3C. Window A, window B, and window C, through which the straight line 1005 or the curve 1006 passes, are the selected target windows.
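For concreteness, the three selection modes can be sketched as simple hit tests. The following is a minimal 2-D sketch in Python, assuming axis-aligned rectangular windows, a rectangular selection region, and a stroke sampled as a list of points; the names (Window, select_by_region, and so on) and the 50% coverage default are illustrative, not mandated by the embodiments.

```python
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    x: float  # left edge
    y: float  # bottom edge
    w: float  # width
    h: float  # height

    def area(self) -> float:
        return self.w * self.h

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def overlap_area(win: Window, region: tuple) -> float:
    """Intersection area between a window and an (x, y, w, h) region."""
    rx, ry, rw, rh = region
    dx = min(win.x + win.w, rx + rw) - max(win.x, rx)
    dy = min(win.y + win.h, ry + rh) - max(win.y, ry)
    return max(dx, 0.0) * max(dy, 0.0)

def select_by_click(windows, px, py):
    """Point selection: clicked windows become target windows."""
    return [w for w in windows if w.contains(px, py)]

def select_by_region(windows, region, threshold=0.5):
    """Area selection: select windows whose covered fraction reaches the threshold."""
    return [w for w in windows if overlap_area(w, region) / w.area() >= threshold]

def select_by_stroke(windows, stroke):
    """Line selection: select windows that any sampled stroke point falls inside."""
    return [w for w in windows if any(w.contains(px, py) for px, py in stroke)]

wins = [Window("A", 0, 0, 2, 1), Window("B", 3, 0, 2, 1), Window("C", 6, 0, 2, 1)]
print([w.name for w in select_by_click(wins, 1.0, 0.5)])               # ['A']
print([w.name for w in select_by_region(wins, (0, 0, 4.5, 1))])        # ['A', 'B']
print([w.name for w in select_by_stroke(wins, [(1, 0.5), (7, 0.5)])])  # ['A', 'C']
```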
The plurality of windows 1000 within the viewable area 100 of the VR device are located in three-dimensional space. The three-dimensional space is composed of three dimensions of length, width, and height, corresponding to the x-axis (horizontal axis), the y-axis (vertical axis), and the z-axis (depth axis) of the three-dimensional space. The windows 1000 may be located in the same planar region (the same hierarchy) of the three-dimensional space, or in different planar regions (different hierarchies). For example, as shown in fig. 4, window A, window B, window C, and window D are all in the same planar region (the first planar region); that is, window A, window B, window C, and window D are windows in the same hierarchy (denoted as Layer-1 in the embodiments of the present application). Window E and window F are in a second planar region behind the first planar region; that is, window E and window F are in a second hierarchy (denoted as Layer-2), while window A, window B, window C, and window D are in the first hierarchy, so window E and window F are at a different level from window A, window B, window C, and window D. Window G, window H, and window I are in a third planar region behind the second planar region; that is, window G, window H, and window I are in a third hierarchy (denoted as Layer-3). The windows in the three planar regions are thus at different levels, respectively.
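The hierarchy described above can be modeled as windows that carry a level index and share a depth coordinate per planar region. A minimal sketch, with invented names and coordinates for illustration only:

```python
from dataclasses import dataclass

@dataclass
class LayeredWindow:
    name: str
    layer: int   # 1 = Layer-1 (front); larger values lie farther back
    z: float     # depth coordinate shared by the layer's planar region
    rect: tuple  # (x, y, w, h) within that plane

def windows_at_level(scene, layer):
    """All windows lying in the planar region of the given hierarchy level."""
    return [w for w in scene if w.layer == layer]

scene = [
    LayeredWindow("A", 1, 0.0, (0, 0, 2, 1)), LayeredWindow("B", 1, 0.0, (3, 0, 2, 1)),
    LayeredWindow("C", 1, 0.0, (6, 0, 2, 1)), LayeredWindow("D", 1, 0.0, (9, 0, 2, 1)),
    LayeredWindow("E", 2, -1.0, (1, 0, 3, 1)), LayeredWindow("F", 2, -1.0, (6, 0, 3, 1)),
    LayeredWindow("G", 3, -2.0, (0, 0, 3, 1)), LayeredWindow("H", 3, -2.0, (4, 0, 3, 1)),
    LayeredWindow("I", 3, -2.0, (8, 0, 3, 1)),
]
print([w.name for w in windows_at_level(scene, 2)])  # ['E', 'F']
```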
Of course, the plurality of windows 1000 may be displayed within the viewable area 100 of the VR device in other arrangements, and the embodiments of the present application are not limited in this respect.
According to some embodiments of the present application, the VR device and the remote control handle are associated with each other, and the VR device can control the windows selected by the remote control handle. When the user uses the VR device, a plurality of windows 1000 at the same level or at different levels, as shown in fig. 1A and fig. 1B, are presented within the viewable area 100 of the VR device. The user can select some of the windows as target windows to be controlled using the pointing device 11. The selected target windows may be windows at the same level or at different levels. After the target windows are selected, the VR device performs the operation indicated by the pointing device 11 on each target window at the same time. Operations indicated by the pointing device 11 include, but are not limited to, a zoom operation to zoom the target windows in or out, a replace operation to exchange the positions of selected target windows, a delete operation to delete the selected target windows, and the like. The user thus selects a plurality of target windows with the pointing device, which facilitates batch operation of the VR device on the target windows and improves the user experience.
The following describes the structure of a VR device implementing the window control method exemplified in the above embodiment of the present application:
As shown in fig. 5A, the VR device 50 includes, but is not limited to, a processor 11. The processor 11 is configured to generate corresponding operation control signals for the corresponding components in the device, and to read and process data in software, in particular the data and programs in the memory, so that each functional module in the device performs its corresponding function and the corresponding components act as required by the instructions. It is used, for example, for various media processing algorithms, including human-machine interaction, motion tracking/prediction (e.g., tracking user hand movement, and remote control handle movement and rotation, in the embodiments of the present application), rendering display, audio processing, window reduction or enlargement, window replacement, window deletion, and the like.
The sensor system 12 is used to collect, acquire, or transmit information, including image information and distance information, such as hand information and ray-click information of the remote control handle in the embodiments of the present application. The sensor system of the embodiments of the present application may include a 3-axis or 6-axis sensor for acquiring motion information of the VR device, such as angular velocity and linear acceleration, while positioning, tracking, and identifying hand motion and acquiring static and dynamic characteristics of the hand. Static characteristic information includes, for example, fingertip pointing, palm centroid, and hand joints; such features are typically acquired from single-frame data. Dynamic characteristic information includes, for example, displacement vectors and motion speed; such features are typically obtained from multi-frame data. The sensor system may also store some specific program instructions.
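As a worked example of the dynamic characteristics mentioned above, a displacement vector and a speed can be derived from two consecutive tracking frames. This sketch assumes 3-D hand positions in meters and a fixed frame interval; it is illustrative rather than the device's actual tracking pipeline:

```python
import math

def hand_dynamics(p_prev, p_curr, dt):
    """Displacement vector and speed of a tracked hand point across two frames.

    p_prev, p_curr: (x, y, z) positions (meters) from consecutive sensor
    frames; dt: frame interval in seconds. Illustrative only.
    """
    # Round for display; raw floating-point differences carry tiny residue.
    disp = tuple(round(c - p, 6) for p, c in zip(p_prev, p_curr))
    speed = math.sqrt(sum(d * d for d in disp)) / dt
    return disp, speed

disp, speed = hand_dynamics((0.10, 0.20, 0.50), (0.13, 0.24, 0.50), dt=1 / 60)
print(disp, round(speed, 2))  # (0.03, 0.04, 0.0) 3.0  (i.e., 3 m/s)
```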
The memory 13 is used for storing programs and various data. It mainly stores software elements such as an operating system, applications, and functional instructions, or a subset or an extended set thereof. It may also include a non-volatile random access memory that provides the processor 11 with control software and applications, including managing the hardware, software, and data resources in the computing processing device. The memory 13 is also used for storing the selection ranges of multiple windows established by the user, as well as running programs and applications. As shown in fig. 5B, at least one storage unit may be disposed in the memory 13 of the VR device 50, each storage unit having its own storage function; for example, a first storage unit stores software units such as the operating system, applications, and functional instructions, a second storage unit stores applications and running programs, and a third storage unit stores the selection ranges established by the user for multiple windows.
The display element 14 typically includes a display screen and associated optics for content display; a display interface is typically presented on the display screen for human-machine interaction and window viewing.
Acoustic elements 15, such as microphones, speakers, and headphones, are used for sound input and output.
Physical hardware 16 includes physical function keys such as power keys, volume keys, and mechanical control keys.
The device may also include components 17 other than the above components 11 to 16, which enrich the functions and appearance of the device.
The above hardware 11 to 16, and part of the hardware 17, may be communicatively coupled via electrical bus connections.
The following describes a window control method according to an embodiment of the present application with reference to the accompanying drawings:
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at the same level. The window control method of the embodiments of the present application is applied to control a plurality of windows at the same level to be zoomed in or out.
As shown in fig. 6A, the windows at the same level (Layer-1) include, but are not limited to, window A, window B, window C, and window D. The user establishes a selection area in any of the ways shown in fig. 2. For example, the user creates a rectangular movement track by moving a hand, and the rectangle formed by the movement track covers window A, window B, window C, and window D, which are thus selected as target windows. The selected window A, window B, window C, and window D present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations.
After window A, window B, window C, and window D are selected, the user can simultaneously zoom them in or out through a retracting operation of the hand. As shown in fig. 6B, when the fingers of the user's hand close, window A, window B, window C, and window D are reduced together. The scale by which they are reduced may be determined by the degree of closure of the user's fingers relative to their original positions. For example, the VR device tracks the movement of the user's fingers in real time; when the fingers are fully closed, window A, window B, window C, and window D are scaled to a minimum, and when the fingers are closed to half of the original state, the windows are scaled to half of their original size. The scaling of window A, window B, window C, and window D may also be determined in other manners, and the embodiments of the present application are not limited in this respect.
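One plausible mapping from finger closure to window scale, matching the half-closed/half-size example above, is a linear mapping clamped to a minimum scale. The function name and the minimum-scale floor are assumptions for illustration:

```python
def window_scale(closure, min_scale=0.1):
    """Map finger closure to a uniform scale factor for the selected windows.

    closure: 0.0 = fingers at their original (open) positions, 1.0 = fully
    closed. Half-closed fingers give half the original size; fully closed
    fingers give the minimum scale (an assumed floor) rather than zero.
    """
    closure = max(0.0, min(1.0, closure))
    return max(1.0 - closure, min_scale)

original = (400, 300)  # illustrative window size in pixels
for closure in (0.0, 0.5, 1.0):
    s = window_scale(closure)
    print(closure, (original[0] * s, original[1] * s))
# 0.0 -> (400.0, 300.0); 0.5 -> (200.0, 150.0); 1.0 -> (40.0, 30.0)
```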
In addition, it is understood that, in the context of using a remote control handle, the reduction or enlargement of the windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
It should be noted that the reduced window A, window B, window C, and window D may still be displayed in the planar areas where they were located before being reduced; that is, the levels of window A, window B, window C, and window D after reduction may be the same as their levels before reduction. In addition, the reduced windows may be presented in the form of icons at a non-central location within the viewable area of the VR device (e.g., the lower right or lower left), so that the user can enlarge them again for display.
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows in the same hierarchy, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
Although the above embodiment explains that all of windows A to D in the first-level display area are selected and reduced, the present application is not limited thereto; some of the windows may be selected instead, such as windows A and D, or windows A, B, and C.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels. The window control method of the embodiments of the present application is applied to control a plurality of windows at different levels to be zoomed in or out. The method will be described taking as an example windows respectively at a first level (denoted as Layer-1 in the embodiments of the present application), a second level (denoted as Layer-2), and a third level (denoted as Layer-3). Here, the plurality of windows at different levels refer to a window of the first level, a window of the second level, and a window of the third level that may be located at the same position on the z-axis of the three-dimensional space coordinates. It will be appreciated that the levels of the remaining windows in the VR device may differ from the first to third levels mentioned here, and that there may be more levels in the visible area of the VR device; the embodiments of the present application are not limited in this respect.
In three-dimensional space, as shown in fig. 7A, the windows of the first level, the windows of the second level, and the windows of the third level may be at the same position on the z-axis of the three-dimensional space coordinates. That is, the vertex Layer-10 of the planar area formed by the windows of the first level, the vertex Layer-20 of the planar area formed by the windows of the second level, and the vertex Layer-30 of the planar area formed by the windows of the third level have the same z-axis coordinate, namely z1. The windows at the first level include, but are not limited to, window A, window B, window C, and window D. The windows at the second level include, but are not limited to, window E and window F. The windows at the third level include, but are not limited to, window G, window H, and window I.
As shown in fig. 7B, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window A, window B, and window D in the first hierarchy as the target windows of the first hierarchy (a first selection area) by clicking or by establishing a selection area with finger movement, selects window F of the second level as a selected target window (a second selection area) in the same manner, and selects window I in the third level as a selected target window (a third selection area). The selected window A, window B, window D, window F, and window I present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations. Notably, when the user clicks windows of different hierarchies with a finger to select target windows, for example clicking window A, window B, and window D of the first hierarchy as the target windows of the first hierarchy, a window of the second level (the level behind the first) whose area overlapping, as seen through the first level, with the selected window A, window B, and window D of the first level exceeds a threshold (e.g., an overlapping area of 80%) may be directly selected as a target window as well.
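The overlap-threshold rule can be sketched as follows: project the selected first-level windows straight back onto the next level and select any deeper window whose own area is covered beyond the threshold. Rectangles are given as (x, y, w, h) tuples, the selected windows are assumed not to overlap one another, and the 80% default mirrors the example above; the layout is invented for illustration:

```python
def rect_overlap(a, b):
    """Intersection area of two (x, y, w, h) rectangles."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

def propagate_selection(selected, next_level, threshold=0.8):
    """Select next-level windows covered by the selected first-level windows
    (projected straight back) over at least `threshold` of their own area.
    Assumes the selected first-level windows do not overlap one another."""
    picked = []
    for name, deep in next_level.items():
        covered = sum(rect_overlap(deep, s) for s in selected)
        if covered / (deep[2] * deep[3]) >= threshold:
            picked.append(name)
    return picked

# Windows A, B, D selected on Layer-1; E on Layer-2 sits almost directly behind A.
selected = [(0, 0, 2, 1), (3, 0, 2, 1), (9, 0, 2, 1)]
layer2 = {"E": (0.1, 0, 2, 1), "F": (6, 0, 2, 1)}
print(propagate_selection(selected, layer2))  # ['E'] (95% covered); F stays unselected
```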
After window A, window B, window D, window F, and window I are selected, the user can simultaneously zoom them in or out through a retracting operation of the hand. As shown in fig. 7C, when the fingers of the user's hand close, window A, window B, window D, window F, and window I are reduced together. The scale by which they are reduced may be determined by the degree of closure of the user's fingers relative to their original positions. For example, the VR device tracks the movement of the user's fingers in real time; when the fingers are fully closed, the windows are scaled to a minimum, and when the fingers are closed to half of the original state, the windows are scaled to half of their original size. The scaling of window A, window B, window D, window F, and window I may also be determined in other manners, and the embodiments of the present application are not limited in this respect.
Notably, the reduced window A, window B, and window D may still be displayed in the planar area of the first level where they were located before being reduced; that is, the levels of window A, window B, and window D after reduction may be the same as their levels before reduction. The reduced window F may still be displayed in the planar area of the second level where it was located before being reduced, and the reduced window I may still be displayed in the planar area of the third level where it was located before being reduced. Reducing the windows of the first level in this way allows more of the windows of the deeper levels to be displayed. Further, the reduced windows may be presented in the form of icons at a non-central location within the viewable area of the VR device (e.g., the lower right or lower left corner of the viewable area), so that the user can enlarge them again.
In addition, it is understood that, in the context of using a remote control handle, the reduction or enlargement of the windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows at different levels, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
Although the above embodiment explains that window A, window B, window D, window F, and window I in the first- to third-level display areas are all selected and reduced, the present application is not limited thereto; some of the windows may be selected instead, for example windows A and B of the first level, windows E and F of the second level, and windows H and I of the third level.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels but in the same longitudinal space. The window control method of the embodiments of the present application is applied to control a plurality of windows at different levels but in the same longitudinal space to be zoomed in or out. The method will be described taking as an example windows respectively at a first level (denoted as Layer-1 in the embodiments of the present application), a second level (denoted as Layer-2), and a third level (denoted as Layer-3). Here, the plurality of windows at different levels refer to a window of the first level, a window of the second level, and a window of the third level located at the same position on the y-axis of the three-dimensional space coordinates (the same position means that the overlapping area of the windows of the three levels reaches a threshold, which may be, for example, 80% or 85%; the embodiments of the present application are not limited in this respect). It will be appreciated that the levels of the remaining windows in the VR device may differ from the first to third levels mentioned here, and that there may be more levels in the visible area of the VR device; the embodiments of the present application are not limited in this respect.
In three-dimensional space, as shown in fig. 8A, the windows of the first level, the windows of the second level, and the windows of the third level may be at the same position on the y-axis of the three-dimensional space coordinates. That is, the vertex Layer-40 of the planar area formed by the windows of the first level, the vertex Layer-50 of the planar area formed by the windows of the second level, and the vertex Layer-60 of the planar area formed by the windows of the third level have the same y-axis coordinate, namely y1. The windows at the first level include, but are not limited to, window A, window B, window C, and window D. The windows at the second level include, but are not limited to, window E, window F, window G, and window H. The windows at the third level include, but are not limited to, window I, window J, and window K.
As shown in fig. 8B, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window A and window B in the first level, window E in the second level, and window I in the third level, by clicking or by establishing a selection area with finger movement, as the target windows of a first selection area (denoted group1 in the embodiments of the present application). The user likewise selects window D in the first level, window M and window G in the second level, and window K in the third level as the target windows of a second selection area (denoted group2). The selected window A, window B, window E, window I, window D, window M, window G, and window K present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations.
After window A, window B, window E, window I, window D, window M, window G, and window K are selected, the user can zoom them in or out through a retracting operation of the hand. As shown in fig. 8C, when the fingers of the user's hand close, these windows are reduced together, with window A, window B, window E, and window I reduced as a first set of windows and window D, window M, window G, and window K reduced as a second set of windows. The scale by which they are reduced may be determined by the degree of closure of the user's fingers relative to their original positions. For example, the VR device tracks the movement of the user's fingers in real time; when the fingers are fully closed, the windows are scaled to a minimum, and when the fingers are closed to half of the original state, the windows are scaled to half of their original size. The scaling may also be determined in other manners, and the embodiments of the present application are not limited in this respect.
In addition, it is understood that, in the context of using a remote control handle, the reduction or enlargement of the windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
Notably, the reduced window A, window B, window E, window I, window D, window M, window G, and window K may still be displayed in the planar areas of the hierarchies where they were located before being reduced; that is, their levels after reduction may be the same as their levels before reduction. Reducing the windows of the first level in this way allows more of the windows of the deeper levels to be displayed. Further, the reduced windows may be presented in the form of icons at a non-central location within the viewable area of the VR device (e.g., the lower right or lower left corner of the viewable area), so that the user can enlarge them again.
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows at different levels but in the same longitudinal space, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels. The window control method of the embodiments of the present application is applied to control a plurality of windows at different levels to be zoomed in or out. The method will be described taking as an example windows respectively at a first level (denoted as Layer-1 in the embodiments of the present application), a second level (denoted as Layer-2), and a third level (denoted as Layer-3). Here, the plurality of windows at different levels refer to a window of the first level, a window of the second level, and a window of the third level that may be located at the same position on the x-axis of the three-dimensional space coordinates. It will be appreciated that the levels of the remaining windows in the VR device may differ from the first to third levels mentioned here, and that there may be more levels in the visible area of the VR device; the embodiments of the present application are not limited in this respect.
In three-dimensional space, as shown in fig. 9A, the windows of the first level, the windows of the second level, and the windows of the third level may overlap at the same position on the x-axis of the three-dimensional space coordinates. That is, the vertex Layer-70 of the planar area formed by the windows of the first level, the vertex Layer-80 of the planar area formed by the windows of the second level, and the vertex Layer-90 of the planar area formed by the windows of the third level have the same or partially coinciding x-axis coordinates, namely x1. For example, when the user clicks window D of the first hierarchy with a finger, window D is selected, and the coordinates of window D coincide with the coordinates of window F of the second hierarchy and the coordinates of window I of the third hierarchy. It will be appreciated that, when the coordinates of window F of the second level and the coordinates of window I of the third level overlap with the coordinates of window D, the window of the first level, the window of the second level, and the window of the third level may be considered to overlap at the same position on the x-axis of the three-dimensional space coordinates; the embodiments of the present application are not limited herein. The windows at the first level include, but are not limited to, window A, window B, window C, and window D. The windows at the second level include, but are not limited to, window E and window F. The windows at the third level include, but are not limited to, window G, window H, and window I.
As shown in fig. 9B, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window D in the first hierarchy as the target window of the first hierarchy (a first selection area) by clicking or by establishing a selection area with finger movement, selects window F of the second level as a selected target window (a second selection area) in the same manner, and selects window I in the third level as a selected target window (a third selection area). The selected window D, window F, and window I present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations. Notably, when the user clicks windows of different hierarchies with a finger to select target windows, for example clicking window D of the first hierarchy as the target window of the first hierarchy, a window of the second level (such as window F) whose area overlapping, as seen through the first level, with the selected window of the first level exceeds a threshold (e.g., an overlapping area of 80%) may be directly selected as a target window as well. The selection mode of the target window is not limited in the embodiments of the present application.
After window D, window F, and window I are selected, the user can simultaneously zoom them in or out through a retracting operation of the hand. As shown in fig. 9C, when the fingers of the user's hand close, window D, window F, and window I are reduced together. The scale by which they are reduced may be determined by the degree of closure of the user's fingers relative to their original positions. For example, the VR device tracks the movement of the user's fingers in real time; when the fingers are fully closed, window D, window F, and window I are scaled to a minimum, and when the fingers are closed to half of the original state, the windows are scaled to half of their original size. The scaling of window D, window F, and window I may also be determined in other manners, and the embodiments of the present application are not limited in this respect.
In addition, it is understood that, in the context of using a remote control handle, the reduction or enlargement of the windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
It is noted that the reduced window D may still be displayed in the planar area of the first level where it was located before being reduced; that is, the level of window D after reduction may be the same as its level before reduction. The reduced window F may still be displayed in the planar area of the second level where it was located before being reduced, and the reduced window I may still be displayed in the planar area of the third level where it was located before being reduced. Further, the reduced windows may be presented in the form of icons at a non-central location within the viewable area of the VR device (e.g., the lower right or lower left corner of the viewable area), so that the user can enlarge them again.
In addition, the selection areas established for the target windows can be stored in the memory, so that they can be conveniently recalled and directly reused. This avoids establishing the selection areas again and improves the efficiency of operating the windows.
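A minimal sketch of such reuse, assuming the stored selection is simply the set of selected window names kept in the third storage unit of the memory 13 described earlier (the labels and helpers are illustrative):

```python
# Hypothetical store for reusable selection areas; names are illustrative.
saved_selections = {}

def save_selection(label, window_names):
    """Persist the set of windows a selection area resolved to."""
    saved_selections[label] = tuple(window_names)

def recall_selection(label):
    """Reuse a stored selection instead of re-establishing it by hand."""
    return saved_selections.get(label, ())

save_selection("group1", ["D", "F", "I"])
print(recall_selection("group1"))  # ('D', 'F', 'I')
```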
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows at different levels, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
Although the above embodiment explains that window D, window F, and window I in the first- to third-level display areas are all selected and reduced, the present application is not limited thereto; some of the windows may be selected instead, for example windows A and B of the first level, windows E and F of the second level, and windows H and I of the third level.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels or at the same level. The window control method of the embodiments of the present application is applied to control the replacement between two windows at the same level. Windows at the first level are taken as an example to describe how the window control method provided by the embodiments of the present application controls the replacement between two windows at the same level.
In the embodiments of the present application, windows at different levels are exemplified by windows respectively at a first level (denoted as Layer-1 in the embodiments of the present application), a second level (denoted as Layer-2), and a third level (denoted as Layer-3). The plurality of windows at different levels refer to a window of the first level, a window of the second level, and a window of the third level that may overlap at the same position on the x-axis of the three-dimensional space coordinates. That is, the coordinates of the planar area formed by the windows of the first level, the planar area formed by the windows of the second level, and the planar area formed by the windows of the third level are the same or partially coincide on the x-axis. For example, when the user clicks window D of the first hierarchy with a finger, window D is selected. The coordinates of the clicked position on the plane of the first hierarchy are mapped onto the plane of the second hierarchy and the plane of the third hierarchy, where they coincide with window F of the second hierarchy and window I of the third hierarchy, respectively; thus, window F of the second level and window I of the third level are also selected. In addition, it is understood that, when the coordinates of window F of the second level and window I of the third level overlap the coordinates covered by window D to a preset extent, the window of the first level, the window of the second level, and the window of the third level may be considered to overlap at the same position on the x-axis of the three-dimensional space coordinates. For example, when the user clicks window D with a finger, window D is selected; when it is determined that the range covered by window D, mapped onto the plane of the second hierarchy and the plane of the third hierarchy, coincides with window F of the second hierarchy and window I of the third hierarchy over more than 50% of the area covered by window D, window F and window I are also selected. The embodiments of the present application are not limited herein. It will be appreciated that the levels of the remaining windows in the VR device may differ from the first to third levels mentioned here, and that there may be more levels in the visible area of the VR device; the embodiments of the present application are not limited in this respect.
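The mapping of a click through successive planes can be sketched as projecting the clicked point along the viewing axis and testing containment on each level; the region-based variant would replace the point test with the 50% area-coverage check shown earlier. The layout data below is invented for illustration:

```python
def project_click(px, py, levels):
    """Map a click on the front plane straight back along the viewing axis and
    collect the window it lands inside on every level."""
    hits = []
    for layer in sorted(levels):
        for name, (x, y, w, h) in levels[layer].items():
            if x <= px <= x + w and y <= py <= y + h:
                hits.append((layer, name))
    return hits

# Invented layout: D, F, and I are stacked nearly in line along the x-axis.
levels = {
    1: {"D": (9.0, 0, 2, 1)},
    2: {"F": (8.5, 0, 2, 1)},
    3: {"I": (9.2, 0, 2, 1)},
}
print(project_click(10.0, 0.5, levels))  # [(1, 'D'), (2, 'F'), (3, 'I')]
```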
In three-dimensional space, the positions of the windows of the first level, the windows of the second level, and the windows of the third level may be as shown in fig. 9A, and are not described again herein.
As shown in fig. 10A, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window A in the first hierarchy as a first target window by clicking or by establishing a selection area with finger movement, and selects window B in the first level as a second target window in the same manner. The selected window A and window B present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations.
After window A and window B are selected, as shown in fig. 10B, the user moves window B with a finger toward the display area where window A is located, so as to move window B to the position of window A and window A to the position of window B.
As shown in fig. 10C, after the replacement of the positions of window A and window B is completed, window B is located in the display area where window A was, and window A is located in the display area where window B was. It is noted that, when the area of the display area of window A is not identical to that of window B, after window A and window B are interchanged, the window area of window B is adjusted to be the same as the display area at window A's former position, and the window area of window A is adjusted to be the same as the display area at window B's former position. In other words, the area of the first target window is adapted to the display area at the position of the second target window, and the area of the second target window is adapted to the display area at the position of the first target window.
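Because a display area carries both a position and a size, the swap-and-adapt behavior reduces to exchanging the two windows' display rectangles. A minimal sketch, with illustrative data structures:

```python
def swap_windows(a, b):
    """Exchange two windows' display areas. Because a display area carries both
    position and size, each window automatically adopts the size of the area it
    moves into, as described above."""
    a["rect"], b["rect"] = b["rect"], a["rect"]

win_a = {"name": "A", "rect": (0, 0, 4, 3)}  # illustrative (x, y, w, h) areas
win_b = {"name": "B", "rect": (5, 0, 2, 1)}
swap_windows(win_a, win_b)
print(win_a["rect"], win_b["rect"])  # (5, 0, 2, 1) (0, 0, 4, 3)
```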
The above triggers the replacement by moving one window toward another with a finger until they approach each other. According to an embodiment of the present application, the replacement may also be triggered with a specific gesture, such as rotating the palm, after the two windows to be replaced are selected. In addition, it is understood that, in the context of using a remote control handle, the replacement of windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows at the same level, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
Although the above embodiment explains that window A and window B are both selected in the first-level display area and their positions are exchanged, the present application is not limited thereto; other windows of the first hierarchy, or windows of other hierarchies, may be selected instead, for example windows A and D of the first hierarchy, windows E and F of the second hierarchy, or windows H and I of the third hierarchy.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels or at the same level. The window control method of the embodiments of the present application is applied to control the replacement among a plurality of windows at different levels. Windows at the first level and a window at the second level are taken as an example to describe how the method controls the replacement among windows at different levels.
In the embodiment of the present application, windows in different levels are exemplified by windows in a first level (which is denoted as Layer-1 in the embodiment of the present application), a second level (which is denoted as Layer-2 in the embodiment of the present application), and a third level (which is denoted as Layer-3 in the embodiment of the present application), respectively.
The windows respectively located at the different levels may be selected in a manner of selecting the windows of the different levels as explained in connection with fig. 9A to 9B above.
In three-dimensional space, the positions of the windows of the first level, the windows of the second level, and the windows of the third level may be as shown in fig. 9A, and are not described again herein.
As shown in fig. 11A, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window A, window B, and window C in the first hierarchy as a first target window by clicking or by establishing a selection area with finger movement, and selects window E in the second level as a second target window in the same manner. The selected window A, window B, window C, and window E present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations.
After window A, window B, window C, and window E are selected, the user moves window A, window B, and window C with a finger toward the display area where window E is located, so as to move window A, window B, and window C to the position of window E, and window E to the position of the display area formed by window A, window B, and window C.
As shown in fig. 11B, after the replacement is completed, window E is located in the display area where window A, window B, and window C were, and window A, window B, and window C are located in the display area where window E was.
It is noted that, when the area of the display area formed by window A, window B, and window C is inconsistent with the area of the display area of window E, after their positions are interchanged, the window area of window E is adjusted to be the same as the display area at the former position of window A, window B, and window C, and the planar area formed by window A, window B, and window C is adjusted to be the same as the display area at window E's former position. In other words, the area of the first target window is adapted to the display area at the position of the second target window, and the area of the second target window is adapted to the display area at the position of the first target window.
The above triggers the replacement by moving some windows toward other windows with a finger until they approach each other. According to an embodiment of the present application, the replacement may also be triggered with a specific gesture, such as rotating the palm, after the windows to be replaced are selected. In addition, it is understood that, in the context of using a remote control handle, the replacement of windows may be controlled by a key of the remote control handle or by waving the remote control handle in a particular manner.
Therefore, when the window control method provided by the embodiments of the present application is applied to control a plurality of windows at different levels, the windows can be operated in batches, which improves the user experience compared with operating a single window at a time.
Although the above embodiment explains that window A, window B, and window C of the first hierarchy and window E of the second hierarchy are selected and their positions are exchanged, the present application is not limited thereto; other windows of the first level or windows of other levels may be selected for position replacement, for example windows A, B, C, and D of the first level may be exchanged with windows E and F of the second level, or window G of the third level may be exchanged with window E of the second level.
According to some embodiments of the present application, the plurality of windows within the viewable area of the VR device are windows at different levels or at the same level. The window control method of the embodiments of the present application is applied to control the replacement among a plurality of windows that are at different levels but in the same longitudinal space. Windows at the first level, the second level, and the third level are taken as an example to describe how the method controls the replacement among windows at different levels.
In the embodiments of the present application, windows at different levels are exemplified by windows respectively at a first level (denoted as Layer-1 in the embodiments of the present application), a second level (denoted as Layer-2), and a third level (denoted as Layer-3). The plurality of windows at different levels refer to a window of the first level, a window of the second level, and a window of the third level that may overlap at the same position on the y-axis of the three-dimensional space coordinates (i.e., the windows of the three levels are in the same longitudinal space). That is, the coordinates of the planar area formed by the windows of the first level, the planar area formed by the windows of the second level, and the planar area formed by the windows of the third level are the same or partially coincide on the y-axis. For example, when the user clicks window D with a finger, window D is selected, and the coordinates of window D are identical to or partially overlap the coordinates of window H of the second hierarchy and the coordinates of window K of the third hierarchy. It is to be understood that, when the coordinates of window H of the second level and window K of the third level overlap the coordinates of window D, the window of the first level, the window of the second level, and the window of the third level may be considered to overlap at the same position on the y-axis of the three-dimensional space coordinates, which is not limited herein. It will be appreciated that the levels of the remaining windows in the VR device may differ from the first to third levels mentioned here, and that there may be more levels in the visible area of the VR device; the embodiments of the present application are not limited in this respect.
In three-dimensional space, the positions of the windows of the first level, the windows of the second level, and the windows of the third level may be as shown in fig. 8A, and are not described again herein.
As shown in fig. 12A, the user establishes selection areas in any of the ways shown in fig. 2. For example, the user selects window B and window C in the first level, window F of the second level, and window J of the third level as a first target window (denoted group1 in the figure) by clicking or by establishing a selection area with finger movement. The user likewise selects window D of the first level, windows G and H of the second level, and window K of the third level as a second target window (denoted group2 in the figure). The selected windows B, C, F, J, D, G, H, and K present a selected identifier 1001 for prompting the user which windows are currently in the selected state and awaiting subsequent operations.
After window B, window C, window F, window J, and window D, window G, window H, and window K are selected, the user moves the window B, window C, window F, and window J with a finger toward the display area where window D, window G, window H, and window K are located. To move windows B and C to the display area where window D is located, window F to the display area where window H and G are located, and window J to the display area where window K is located. Window D moves to the display area where window B and window C are located, window H and window G move to the display area where window F is located, window K moves to the display area where window J is located.
As shown in fig. 12B, after the replacement of the positions of window B, window C, window F, and window J with window D, window G, window H, and window K is completed, window B and window C are located in the display area where window D was, window F is located in the display area where windows H and G were, and window J is located in the display area where window K was. Window D is located in the display area at the former positions of windows B and C, windows H and G are located in the display area at the former position of window F, and window K is located in the display area at the former position of window J.
It is noted that after the window B, the window C and the window D are interchanged, when the area of the display area formed by the window B and the window C is inconsistent with the area of the display area of the window D, the area of the window B and the area of the display area formed by the window C are adjusted to be the same as the area of the display area at the position of the window D. The window area of the window D is adjusted to be the same as the area of the display area of the plane area formed by both windows B and C.
After the window F and the windows G and H are interchanged, when the window area of the window F is inconsistent with the area of the plane area formed by the window G and the window H, the window area of the window F and the window G and the window H are adjusted, and the display areas of the plane area formed by the positions of the window F and the window G and the window H are identical. The area of the plane area formed by the two adjustment windows G and H is the same as the area of the display area at the position of the window F.
After window K and window J are interchanged, when the window area of window K does not match the area of the display region at window J's position, the window area of window K is adjusted to equal the area of the display region at window J's position, and the window area of window J is adjusted to equal the area of the display region at window K's position.
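For illustration only, the following Python sketch shows one possible implementation of the area adjustment described above; the Window class, the per-axis scaling, and all names are assumptions of this sketch rather than part of the embodiment. It exchanges the display regions of two window groups and rescales each group so that it exactly fills the region vacated by the other.

```python
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    x: float       # left edge in the display plane
    z: float       # bottom edge in the display plane
    width: float
    height: float

def bounding_region(windows):
    """Axis-aligned bounding rectangle (x, z, width, height) of a window group."""
    x0 = min(w.x for w in windows)
    z0 = min(w.z for w in windows)
    x1 = max(w.x + w.width for w in windows)
    z1 = max(w.z + w.height for w in windows)
    return x0, z0, x1 - x0, z1 - z0

def fit_group_to_region(windows, region):
    """Translate and scale a window group so its bounding box matches `region`."""
    rx, rz, rw, rh = region
    gx, gz, gw, gh = bounding_region(windows)
    sx, sz = rw / gw, rh / gh              # per-axis scale factors
    for w in windows:
        w.x = rx + (w.x - gx) * sx         # remap each window into the region
        w.z = rz + (w.z - gz) * sz
        w.width *= sx
        w.height *= sz

def swap_groups(group1, group2):
    """Exchange the display regions of two groups, resizing each to fit."""
    r1, r2 = bounding_region(group1), bounding_region(group2)
    fit_group_to_region(group1, r2)
    fit_group_to_region(group2, r1)

# Example in the spirit of fig. 12A/12B: windows B and C swap with window D.
b = Window("B", 0.0, 0.0, 2.0, 2.0)
c = Window("C", 2.0, 0.0, 2.0, 2.0)
d = Window("D", 10.0, 0.0, 3.0, 3.0)
swap_groups([b, c], [d])
```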
In addition, it is understood that, in the scenario of using a remote control handle, the window replacement may be controlled by a key on the remote control handle or by a particular manner of waving the remote control handle. Further, after the positions of windows B, C, F, and J and windows D, G, H, and K have been exchanged, the identifier 1001 may continue to be displayed on the selected windows to prompt the user which windows are currently selected and have been replaced.
Therefore, when the window control method provided by the embodiment of the present application is applied to control a plurality of windows at different levels, the plurality of windows can be operated in batches, which improves the user experience compared with operating windows one at a time.
The following describes a procedure of applying the window control method provided by the embodiment of the present application to control the scaling of windows at different levels or at the same level.
Referring to fig. 13, fig. 13 is a schematic flow chart of the window control method applied to control the scaling of windows at different levels or at the same level according to an embodiment of the present application.
The method includes steps S130-S133.
Step S130, receiving a batch operation instruction, where the batch operation instruction is used to instruct scaling (enlarging or reducing) of a plurality of windows in the visible area of the VR device. The plurality of windows may be windows at the same level, for example, window A, window B, window C, and window D at the same level shown in fig. 6A. The plurality of windows may also be windows at different levels, such as window A, window B, window C, and window D, window E and window F, and window G, window H, and window I of the first level to the third level shown in fig. 7A; window A, window B, window C, and window D of the first level, window E, window F, window G, and window H of the second level, and window I, window J, and window K of the third level shown in fig. 8A; or window A, window B, window C, and window D of the first level, window E and window F of the second level, and window G, window H, and window I of the third level shown in fig. 9A. The batch operation instruction may be issued by the user using a gesture or a pointing component such as a remote control handle. For example, the user triggers a zoom-in instruction or a zoom-out instruction by spreading or closing five fingers within the visible area of the VR device (the batch operation instruction includes a zoom-in instruction or a zoom-out instruction), or the user triggers a zoom-in instruction or a zoom-out instruction by manipulating a key on the remote control handle.
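As a non-limiting sketch of how the batch operation instruction received in step S130 might be represented, the following Python snippet maps a five-finger spread gesture to a zoom-in or zoom-out instruction; the enum values, field names, and delta semantics are illustrative assumptions, not a definitive implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class BatchOp(Enum):
    ZOOM_IN = auto()
    ZOOM_OUT = auto()

@dataclass
class BatchInstruction:
    op: BatchOp
    source: str          # "gesture" or "handle_key"
    ratio: float = 1.0   # scale ratio carried by a zoom instruction

def instruction_from_gesture(spread_delta: float) -> BatchInstruction:
    """Map a change in five-finger spread to a zoom instruction.

    A positive delta (fingers spreading) is read as zoom in; a negative
    delta (fingers closing) as zoom out, as described in step S130.
    """
    if spread_delta >= 0:
        return BatchInstruction(BatchOp.ZOOM_IN, "gesture", 1.0 + spread_delta)
    return BatchInstruction(BatchOp.ZOOM_OUT, "gesture", max(0.0, 1.0 + spread_delta))
```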
Step S131, identifying the batch operation instruction. The batch operation instruction includes, but is not limited to, a zoom-in instruction to enlarge windows in the visible area of the VR device or a zoom-out instruction to reduce them. The enlargement ratio or reduction ratio of the instruction may be determined by the degree to which the user's fingers are spread or closed, or by keys on the remote control handle. For example, the VR device tracks the movement of the fingers of the user's hand in real time: when the fingers are fully closed, the selected target windows are scaled to a minimum; when the fingers are closed to half of their original spread, the selected target windows are scaled to half of their original size. The scaling ratio of the selected target windows may also be determined in other manners, which is not limited in the embodiments of the present application.
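The finger-closure ratio rule in step S131 can be sketched as follows; the function name, the minimum-scale clamp, and the fingertip-spread input are assumptions made for illustration.

```python
def scale_from_closure(spread: float, rest_spread: float,
                       min_scale: float = 0.1) -> float:
    """Return a window scale factor from tracked finger spread.

    spread:       current distance between tracked fingertips
    rest_spread:  spread in the original (fully open) state
    A spread of rest_spread / 2 yields a 0.5 scale, matching the example
    in step S131; a fully closed hand is clamped to min_scale.
    """
    ratio = spread / rest_spread
    return max(min_scale, min(ratio, 1.0))

assert scale_from_closure(5.0, 10.0) == 0.5   # half closed -> half size
assert scale_from_closure(0.0, 10.0) == 0.1   # fully closed -> minimum
```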
Step S132, determining a selection range. The selection range may be the planar area where the selected target windows are located. The target windows may be determined by the user establishing a selection area with the hand or a remote control handle, or by clicking individual windows. For example, the user may click one or more windows presented in the visible area of the VR device using keys provided on the remote control handle, or by holding the remote control handle and performing a tap or click action; the clicked windows become the selected target windows. For example, as shown in fig. 7B, the user selects window A, window B, and window D of the first level as the target windows selected in the first level (first selection area) by clicking or by creating a selection area with the movement of a finger, selects window F of the second level as a selected target window (second selection area) in the same manner, and selects window I of the third level as a selected target window (third selection area) in the same manner.
Alternatively, the user swipes a hand, and the windows in the area through which the swipe passes become the selected target windows. The hand can move within the visible area of the VR device; when the movement track forms a closed figure, the windows covered by the closed figure are the target windows, and when the movement track forms a curve or a straight line, the windows covered by the curve or straight line may be selected as the target windows. For example, as shown in fig. 6A, the user creates a rectangular movement track with the movement of the hand, and window A, window B, window C, and window D covered by the rectangle formed by the movement track become the selected target windows.
It will be appreciated that the user may also move the remote control handle within the visible area of the VR device to create a closed figure to select target windows. The closed figure formed may be a regular rectangle 1002 or triangle 1003, an irregular shape 1004 as shown in fig. 3B, or the like. It will also be appreciated that a window entirely within the closed figure formed by the movement track is determined to be a target window, and a window partially within the closed figure may also be determined to be a target window when it is sufficiently covered (for example, if at least 50% of the window's area is within the closed figure, the window is regarded as a target window).
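A minimal sketch of the coverage rule above, assuming rectangular windows in a plane and a grid-sampling approximation of the covered area; the 50% threshold follows the example in the text, while every name is illustrative.

```python
def point_in_polygon(px, pz, polygon):
    """Ray-casting test; polygon is a list of (x, z) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        if (z1 > pz) != (z2 > pz):
            if px < (x2 - x1) * (pz - z1) / (z2 - z1) + x1:
                inside = not inside
    return inside

def coverage_ratio(window, polygon, samples=20):
    """Approximate fraction of the window's area inside the closed figure."""
    x, z, w, h = window
    hits = 0
    for i in range(samples):
        for j in range(samples):
            px = x + (i + 0.5) * w / samples   # sample point centers on a grid
            pz = z + (j + 0.5) * h / samples
            hits += point_in_polygon(px, pz, polygon)
    return hits / (samples * samples)

def is_selected(window, polygon, threshold=0.5):
    return coverage_ratio(window, polygon) >= threshold

# A window whose left half is covered by the drawn rectangle is selected:
print(is_selected((0, 0, 2, 2), [(-1, -1), (1, -1), (1, 3), (-1, 3)]))  # True
```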
Step S133, executing the operation indicated by the batch operation instruction. After the selection range is determined, the target windows within the selection range are enlarged or reduced according to the zoom-in operation or zoom-out operation identified in the batch operation instruction. The zoom-in operation or zoom-out operation above is merely an example of an operation instruction, which may also include instructions for other window controls, such as closing a window.
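The batch nature of step S133 can be sketched as a single pass over the selected target windows; the Window shape and the choice to scale each window about its own center are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class Window:           # same illustrative shape as in the earlier sketch
    name: str
    x: float
    z: float
    width: float
    height: float

def apply_batch_zoom(target_windows, ratio):
    """Scale every selected window about its own center by `ratio`."""
    for w in target_windows:
        cx, cz = w.x + w.width / 2, w.z + w.height / 2
        w.width *= ratio
        w.height *= ratio
        w.x, w.z = cx - w.width / 2, cz - w.height / 2   # keep window centered

# Reduce all selected windows to half size in one batch operation.
selected = [Window("A", 0.0, 0.0, 2.0, 2.0), Window("B", 3.0, 0.0, 2.0, 2.0)]
apply_batch_zoom(selected, 0.5)
```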
Therefore, when the window control method provided by the embodiment of the present application is applied to control a plurality of windows at different levels or at the same level, the plurality of windows can be operated in batches, which improves the user experience compared with operating windows one at a time.
Next, a procedure of applying the window control method provided by the embodiment of the present application to control the replacement of windows at different levels or at the same level is described.
Referring to fig. 14A, fig. 14A is a schematic flow chart of the window control method applied to control the replacement of windows at different levels or at the same level according to an embodiment of the present application.
The method includes steps S140-S145.
Step S140, receiving a batch operation instruction, where the batch operation instruction is used to instruct a replacement operation on a plurality of windows in the visible area of the VR device. The plurality of windows may be windows at the same level, or windows at different levels, such as window A, window B, window C, and window D of the first level, window E and window F of the second level, and window G, window H, and window I of the third level shown in fig. 9A. The batch operation instruction may be issued by the user using a gesture or a pointing component such as a remote control handle. For example, the user may trigger a replacement instruction by moving a finger within the visible area of the VR device (the batch operation instruction includes the replacement instruction), or trigger the replacement instruction by manipulating a key on the remote control handle.
Step S141, identifying the batch operation instruction. The batch operation instruction includes, but is not limited to, an instruction to replace at least two windows within the visible area of the VR device with each other. For example, the VR device tracks the movement of the fingers of the user's hand in real time; when the user long-presses window B as shown in fig. 10B and moves a finger toward window A until reaching the display area where window A is located, the batch operation instruction at this time is a replacement instruction between target windows.
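A possible recognition rule for the long-press-and-drag replacement gesture in step S141 is sketched below; the press-duration threshold and all names are illustrative assumptions.

```python
LONG_PRESS_SECONDS = 0.5   # illustrative threshold, not from the text

def recognize_replacement(press_duration, drag_end, windows, pressed):
    """Return the (source, destination) pair for a replacement, or None.

    press_duration: seconds the user held the source window
    drag_end:       (x, z) point where the drag finished
    windows:        mapping of name -> (x, z, width, height)
    pressed:        name of the long-pressed (source) window
    """
    if press_duration < LONG_PRESS_SECONDS:
        return None
    ex, ez = drag_end
    for name, (x, z, w, h) in windows.items():
        if name != pressed and x <= ex <= x + w and z <= ez <= z + h:
            return (pressed, name)    # e.g. ("B", "A") as in fig. 10B
    return None
```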
Step S142, determining a selection range. The selection range may be the planar area where the selected first target window and second target window, at the same or different levels, are located. The target windows may be determined by the user establishing a selection area with the hand or a remote control handle, or by clicking individual windows. For example, the user may click one or more windows presented in the visible area of the VR device using keys provided on the remote control handle, or by holding the remote control handle and performing a tap or click action; the clicked windows become the selected target windows. For example, as shown in fig. 10A, the user selects window A of the first level as the first target window by clicking or by creating a selection area with the movement of a finger, and selects window B of the first level as the second target window in the same manner. Or, as shown in fig. 11A, the user selects window A, window B, and window C of the first level as the first target windows, and selects window E of the second level as the second target window. Or, as shown in fig. 12A, the user selects window B and window C of the first level, window F of the second level, and window J of the third level as the first target windows (indicated by group1 in the figure), and selects window D of the first level, windows G and H of the second level, and window K of the third level as the second target windows (indicated by group2 in the figure).
Step S143, identifying the coordinates of the first target window. Referring to fig. 14B, window A and window B at the same level serve as the selected first target window and second target window. In the corresponding three-dimensional space, window A and window B both lie in the region formed by the positive x-axis and the positive z-axis. For the rectangular window A, its four vertices have four coordinates in the three-dimensional space: A1(x1, 0, z1), A2(x1, 0, z2), A3(x2, 0, z1), and A4(x2, 0, z2). Of course, the first target window may also be located elsewhere in the three-dimensional space, and the number of first target windows is not limited to one, which is not limited in this embodiment of the present application.
Step S144, identifying the coordinates of the second target window. Referring to fig. 14B, for the rectangular window B in the three-dimensional space, its four vertices have four coordinates: B1(x3, 0, z3), B2(x3, 0, z4), B3(x4, 0, z3), and B4(x4, 0, z4). Of course, the second target window may also be located elsewhere in the three-dimensional space, the number of second target windows is not limited to one, and the first target window and the second target window may also be windows at different levels, which is not limited herein.
Step S145, executing the operation indicated by the batch operation instruction. After the coordinates of the first target window and the second target window within the selection range are determined, the positions of the first target window and the second target window are exchanged according to the replacement operation identified in the batch operation instruction. That is, the first target window is moved to the coordinates where the second target window is located, and the second target window is moved to the coordinates where the first target window is located. When the coordinates of the first target window do not coincide with those of the second target window, the area of the first target window is adjusted so that its vertex coordinates coincide with the vertex coordinates of the second target window's original position; likewise, the area of the second target window is adjusted so that its vertex coordinates coincide with the vertex coordinates of the first target window's original position.
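Steps S143 to S145 can be sketched by representing each window with its four vertex coordinates in the x-z plane (y = 0), as in fig. 14B, and exchanging the vertex sets; because each window takes over the other's exact vertices, the swap adjusts the window areas at the same time. The dictionary representation and function names are assumptions of this sketch.

```python
def rect_vertices(x_min, x_max, z_min, z_max):
    """Four vertices in the order A1, A2, A3, A4 used above (y = 0)."""
    return [(x_min, 0.0, z_min), (x_min, 0.0, z_max),
            (x_max, 0.0, z_min), (x_max, 0.0, z_max)]

def replace_windows(window_a, window_b):
    """Swap the vertex sets of two windows (step S145).

    Each window takes over the other's exact vertices, so the vertex
    coordinates coincide after the swap even when the two windows
    originally had different areas.
    """
    window_a["vertices"], window_b["vertices"] = (
        window_b["vertices"], window_a["vertices"])

a = {"name": "A", "vertices": rect_vertices(1.0, 3.0, 1.0, 2.0)}
b = {"name": "B", "vertices": rect_vertices(5.0, 9.0, 1.0, 4.0)}
replace_windows(a, b)
# Window A now occupies B's (larger) region and vice versa.
```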
Therefore, when the window control method provided by the embodiment of the present application is applied to control a plurality of windows at different levels or at the same level, the plurality of windows can be operated in batches, which improves the user experience compared with operating windows one at a time.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a window control device according to an embodiment of the present application.
The window control device 2 shown in fig. 15 includes a receiving module 150, an acquisition module 151, a collision detection module 152, a region identification module 153, a grouping module 154, a display module 155, and an output module 156.
The receiving module 150 is configured to receive the batch operation instructions shown in figs. 13 and 14A, or to receive the position, detected by the collision detection module 152, of the selected target window in the visible area 100. The batch operation instruction may be a window scaling instruction or a window replacement instruction issued by the user using the movement of a hand, a remote control handle, or the like.
The acquisition module 151 is configured to acquire gestures, actions, and the areas pointed to by the pointing component, so as to determine the movement track of the pointing component.
The collision detection module 152 is configured to detect the portion of the windows 1000 in the visible area 100 that is struck by a ray emitted from the remote control handle into the visible area 100 shown in fig. 1A or 1B, or to detect the window 1000 within the visible area 100 that the user points to with a glove or gesture. The window struck by the ray emitted from the remote control handle, or the window in the direction pointed to by the user's finger, is the target window. Alternatively, when the ray emitted from the remote control handle forms a closed figure as shown in fig. 3B, the collision detection module 152 may also identify the windows covered by the closed figure and determine them to be target windows.
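For illustration, the ray test performed by the collision detection module 152 might look like the following sketch, which intersects a ray cast from the remote control handle with the plane of an axis-aligned window and checks the hit point against the window's bounds; the window representation and tolerance are assumptions.

```python
def ray_hits_window(origin, direction, window):
    """Return True if the ray strikes the window rectangle.

    origin, direction: (x, y, z) tuples; window: dict with the plane's y
    value and x/z bounds, consistent with the x-z layout of fig. 14B.
    """
    oy, dy = origin[1], direction[1]
    if abs(dy) < 1e-9:                    # ray parallel to the window plane
        return False
    t = (window["y"] - oy) / dy
    if t <= 0:                            # window is behind the handle
        return False
    hx = origin[0] + t * direction[0]     # hit point in the window plane
    hz = origin[2] + t * direction[2]
    return (window["x_min"] <= hx <= window["x_max"]
            and window["z_min"] <= hz <= window["z_max"])

window_b = {"y": 0.0, "x_min": 5.0, "x_max": 9.0, "z_min": 1.0, "z_max": 4.0}
print(ray_hits_window((7.0, 2.0, 2.0), (0.0, -1.0, 0.0), window_b))  # True
```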
The area identifying module 153 is configured to identify the selection area formed when the user selects windows with the hand or the remote control handle. For example, when the user creates an area as shown in fig. 3B with the movement of the hand, the area identifying module 153 identifies the shape and size of the closed figure forming the selection area and the windows covered by it. Or, when the user creates a straight line or curve as shown in fig. 3C with the movement of the hand, the area identifying module 153 identifies the length of the straight line or curve and the windows through which it passes.
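The linear-selection check of the area identifying module 153 can be sketched by sampling points along the drawn stroke, which handles straight lines and curves uniformly; the sampling approach and names are illustrative assumptions.

```python
def stroke_selects_window(stroke_points, window):
    """True if any sampled stroke point lies inside the window rectangle.

    stroke_points: list of (x, z) samples along the drawn line or curve
    window:        (x, z, width, height) in the display plane
    """
    wx, wz, ww, wh = window
    return any(wx <= px <= wx + ww and wz <= pz <= wz + wh
               for px, pz in stroke_points)

stroke = [(0.0, 1.0), (1.0, 1.2), (2.0, 1.4), (3.0, 1.6)]   # a drawn curve
print(stroke_selects_window(stroke, (1.5, 0.5, 1.0, 2.0)))  # True: passes through
```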
The grouping module 154 is configured to group the windows so that the user can manage the windows in groups.
The display module 155 is configured to display the selected target windows, for example, by presenting the identifier 1001 shown in fig. 3A on each selected target window; or to display the positions of the target windows after scaling or replacement is completed, such as the display positions in the visible area 100 after the windows shown in figs. 6B, 7C, 8C, and 9C are reduced, or the display positions after the target windows or target window groups are replaced as shown in figs. 10C, 11B, and 12B.
The output module 156 is configured to perform the actions of window batch operation interactions. The actions of window batch operation interactions include, but are not limited to, moving windows, stitching windows, closing windows, maximizing and minimizing windows, and the like.
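One possible way for the output module 156 to dispatch these interaction actions is a simple action table, sketched below with placeholder handlers; the action names and handler bodies are assumptions made for illustration.

```python
def move_windows(ws): print(f"moving {len(ws)} windows")
def close_windows(ws): print(f"closing {len(ws)} windows")
def maximize_windows(ws): print(f"maximizing {len(ws)} windows")
def minimize_windows(ws): print(f"minimizing {len(ws)} windows")

ACTIONS = {
    "move": move_windows,
    "close": close_windows,
    "maximize": maximize_windows,
    "minimize": minimize_windows,
}

def output_batch_action(action, target_windows):
    """Apply one interaction action to every window in the batch."""
    ACTIONS[action](target_windows)

output_batch_action("close", ["B", "C", "F", "J"])
```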
In some embodiments of the present application, an electronic device is further provided, and an electronic device in the embodiment of the present application is described below with reference to fig. 16. Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
For at least one embodiment, the controller hub 804 communicates with the processor 801 via a multi-drop bus such as a Front Side Bus (FSB), a point-to-point interface such as a QuickPath Interconnect (QPI), or a similar connection. The processor 801 executes instructions that control data processing operations of a general type. In one embodiment, the controller hub 804 includes, but is not limited to, a Graphics Memory Controller Hub (GMCH) (not shown) and an Input/Output Hub (IOH) (which may be on separate chips) (not shown), where the GMCH includes the memory and graphics controllers and is coupled to the IOH.
The electronic device 800 may also include a coprocessor 806 and a memory 802 that are coupled to the controller hub 804. Alternatively, one or both of the memory 802 and the GMCH may be integrated within the processor 801 (as described in the present application); in that case, the memory 802 and the coprocessor 806 are coupled directly to the processor 801, and the controller hub 804 is in a single chip with the IOH.
In one embodiment, the memory 802 may be, for example, a Dynamic Random Access Memory (DRAM), a Phase Change Memory (PCM), or a combination of the two. The memory 802 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. The computer-readable storage medium has instructions stored therein, specifically temporary and permanent copies of the instructions.
In one embodiment, the coprocessor 806 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor (GPU), an embedded processor, or the like. The optional nature of the coprocessor 806 is shown in dashed lines in fig. 16.
In one embodiment, the electronic device 800 may further include a network interface (NIC) 803. The network interface 803 may include a transceiver to provide a radio interface for the electronic device 800 to communicate with any other suitable device (e.g., a front-end module, an antenna, etc.). In various embodiments, the network interface 803 may be integrated with other components of the electronic device 800. The network interface 803 can realize the functions of the communication unit in the above-described embodiments.
In one embodiment, as shown in fig. 16, the electronic device 800 may further include an input/output (I/O) device 805. The input/output (I/O) device 805 may include a user interface designed to enable a user to interact with the electronic device 800, a peripheral component interface designed to enable peripheral components to interact with the electronic device 800, and/or sensors designed to determine environmental conditions and/or location information associated with the electronic device 800.
It is noted that fig. 16 is merely exemplary. That is, although the electronic device 800 is shown in fig. 16 as including a plurality of components such as the processor 801, the controller hub 804, and the memory 802, in practical applications a device using the methods of the present application may include only some of the components of the electronic device 800, for example, only the processor 801 and the NIC 803. The optional nature of the components is shown in dashed lines in fig. 16.
In some embodiments of the present application, the computer-readable storage medium of the electronic device 800 may have instructions stored therein that, when executed by at least one unit in the processor 801, cause the electronic device to implement the window control method mentioned in the above embodiments; that is, the instructions, when executed on a computer, cause the computer to perform the window control method.
Referring now to fig. 17, fig. 17 is a schematic diagram of an SoC (System on Chip) according to an embodiment of the present application, shown as a block diagram of an exemplary SoC 1000. In fig. 17, similar parts have the same reference numerals, and the dashed boxes represent optional features of a more advanced SoC. The SoC may be used in an electronic device according to an embodiment of the present application and may implement corresponding functions according to the instructions stored therein.
In fig. 17, soC 1000 includes an interconnect unit 1002 coupled to processor 1001, a system agent unit 1006, a bus controller unit 1005, an integrated memory controller unit 1003, a set of one or more coprocessors 1007 that may include integrated graphics logic, an image processor, an audio processor, and a video processor, a Static Random Access Memory (SRAM) unit 1008, and a Direct Memory Access (DMA) unit 1004. In one embodiment, coprocessor 1007 includes a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPU, high-throughput MIC processor, embedded processor, or the like.
The Static Random Access Memory (SRAM) unit 1008 may include one or more computer-readable media for storing data and/or instructions. The computer-readable storage medium may have instructions stored therein, specifically temporary and permanent copies of the instructions.
When the SoC 1000 is applied to an electronic device in accordance with the present application, the instructions stored in the computer-readable storage medium may include instructions that, when executed by at least one unit in the processor, cause the electronic device to implement the window control method mentioned in the above embodiments; that is, the instructions, when executed on a computer, cause the computer to perform the window control method.
In addition, an embodiment of the present application further discloses a computer-readable storage medium having a processing program stored thereon, where the processing program, when executed by a processor, implements the window control method mentioned in the above embodiments.
The computer readable storage medium may be a read-only memory, a random access memory, a hard disk, an optical disk, or the like.