Detailed Description
The following describes the embodiments in further detail with reference to the accompanying drawings.
An embodiment of the present invention provides an information display method, as shown in fig. 1, including:
step 101: determining at least one piece of background information, wherein different pieces of background information can be used as backgrounds to be displayed by the same or different display screens;
step 102: when a first background is displayed in a display area and content information of a first display screen is displayed in the display area, detecting at least one type of input information; wherein the first background is one piece of the at least one piece of background information; the content information of the display screen comprises at least one application icon;
step 103: when at least one type of input information is detected, whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area is determined based on the input information.
Here, the scheme provided by the embodiment may be applied to an electronic device, especially an electronic device with a touch display screen, such as a smart phone, a tablet computer, and the like.
In the foregoing step 101, a background to be displayed on each of at least one display screen is determined; specifically, the display screen may include a background and content information when the electronic device enters the icon display state. That is, when step 101 is executed, the number of display screens currently provided and the background corresponding to each display screen are set.
Further, in the background to be displayed on each display screen, the pictures of the backgrounds corresponding to two adjacent display screens may be related or unrelated, for example:
Scene one:
A preset image is divided to obtain at least one sub-image, and two adjacent sub-images are respectively set as the backgrounds to be displayed on two adjacent display screens.
For example, a complete image A may be divided to obtain a plurality of sub-images; assume there are 3 sub-images, sub-images 1 to 3. Adjacent sub-images 1 and 2 can be selected and respectively set as the background of the 1st screen and the background of the 2nd screen. In this way, two adjacent display screens can be set to have related pictures as backgrounds. For example, as shown in fig. 2, when related pictures are used as backgrounds, related images may be displayed as the backgrounds of the negative-one screen, the default first screen, and the second screen.
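Scene one can be sketched as follows. This is an illustrative Python sketch, not part of the original disclosure: the image is modeled as a 2D list of pixel values, and the function name `split_into_subimages` and the 3-screen layout are assumptions for the example.

```python
def split_into_subimages(image, num_screens):
    """Divide an image (a list of pixel rows) into num_screens vertical strips.

    Adjacent strips become the backgrounds of adjacent display screens,
    so neighboring screens show related pictures.
    """
    width = len(image[0])
    strip_w = width // num_screens
    subimages = []
    for i in range(num_screens):
        left = i * strip_w
        # The last strip absorbs any remainder columns.
        right = (i + 1) * strip_w if i < num_screens - 1 else width
        subimages.append([row[left:right] for row in image])
    return subimages

# Example: a 4x6 "image" split for 3 screens
# (negative-one screen, default first screen, second screen).
image = [[c for c in range(6)] for _ in range(4)]
backgrounds = split_into_subimages(image, 3)
screen_backgrounds = {"screen_%d" % (i + 1): sub
                      for i, sub in enumerate(backgrounds)}
```

Because adjacent screens receive adjacent strips of the same source image, sliding between screens reveals a continuous background, matching the related-picture effect described for fig. 2.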
Scene two:
A background to be displayed on each display screen is set separately, along with position information of that background within the display screens; wherein the backgrounds to be presented by different display screens are not related.
That is to say, when setting the background to be displayed on each display screen, the setting can be performed according to the content information corresponding to different display screens, so that the backgrounds to be displayed on different display screens are not related. Alternatively, the background to be displayed on different display screens may be set not to be related to the content information, but to be set according to the personal interests of the user, which is not exhaustive.
In addition, the background of each screen may be set according to the type or content of the applications on that screen. For example, referring to fig. 3, if the applications on the negative-one screen 31 include real-estate applications, the background may be set to a house; the default first screen 32 may include a game, and content from the game may be set as the background, such as the plant shown in the figure; the applications on the first screen 33 may be car applications, and the background is set to a car.
It should be understood that the schematic diagram given in fig. 3 is only an example, and other pictures may exist in the actual processing, which is not exhaustive here.
In the step 102, the detecting at least one type of input information includes at least one of:
when a touch operation on the display screen is detected, determining that input information is detected;
sensing parameters are obtained through detection of at least one sensor, and the sensing parameters are used as input information;
detecting and recognizing voice input information, and using the voice input information as input information;
and acquiring information of the target type from the server side as input information.
For example, the touch operation may be as shown in fig. 3: the user performs a sliding operation on the display screen, for example sliding rightward as shown in the figure; sliding in other directions, such as upward, downward, or leftward, is also possible, and sliding in any of these directions may be determined as detected input information;
the sensing parameters are detected by at least one sensor; the sensor may be of a plurality of types, such as an acceleration sensor, a temperature sensor, and the like, so that different types of sensing parameters, such as acceleration parameters and temperature parameters, can be acquired accordingly;
recognizing the voice input information may include collecting voice through an audio collector (such as a microphone), and then recognizing the collected voice to obtain voice input information, so as to use the voice input information as input information;
the information of the target type is obtained from the server side, and may be information of a specified type obtained from a preset website, for example, latest current news information is requested to be pulled from a network (server side) as input information.
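The four input sources above can be normalized into a single event stream before step 103 consumes them. The following Python sketch is illustrative only; the function name `classify_input` and the dictionary-based raw-signal format are assumptions, not part of the original disclosure.

```python
def classify_input(raw):
    """Map a raw signal to one of the four input types described above.

    `raw` is a dict that carries at most one of the four sources.
    Returns a (kind, payload) pair, or (None, None) if nothing matched.
    """
    if raw.get("touch") is not None:
        return ("touch", raw["touch"])        # touch operation on the display screen
    if raw.get("sensor") is not None:
        return ("sensor", raw["sensor"])      # sensing parameter from a sensor
    if raw.get("voice") is not None:
        return ("voice", raw["voice"])        # recognized voice input information
    if raw.get("network") is not None:
        return ("network", raw["network"])    # target-type info pulled from the server
    return (None, None)

kind, payload = classify_input({"sensor": {"type": "acceleration", "value": 9.8}})
```

Step 103 can then branch on `kind` to decide whether to replace the background, adjust its display effect, or switch screen content.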
On the basis of the foregoing, when at least one type of input information is detected, determining whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area in step 103 may include the following cases:
firstly, determining, based on the input information, to replace the first background displayed in the display area with a second background;
secondly, determining to update the display effect of the first background displayed in the display area based on the input information to obtain an updated first background;
and thirdly, updating the content information of the first display screen in the display area to the content information of the second display screen based on the input information.
The first case and the third case may exist simultaneously, and the second case and the third case may also exist simultaneously.
Specifically, the first case may include:
when at least one type of input information is detected, whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area is determined based on the at least one type of input information, wherein the determining includes:
when a touch operation on the display screen is detected, determining, based on the touch operation, to update the first background displayed in the display area to a second background, and updating the content information of the first display screen displayed in the display area to the content information of the second display screen; the second background is at least partially different from the first background, and the content information of the second display screen is the same as or different from that of the first display screen.
Determining the direction of the touch operation based on the start and end positions of the touch operation; selecting one background information adjacent to the first background information from the at least one background information as a background of a second display screen based on the direction of the touch operation; and selecting content information to be displayed on one display screen adjacent to the first display screen as content information of a second display screen based on the direction of the touch operation.
With reference to fig. 3, assume the default first screen 32 is currently displayed. When the touch operation is detected to be a rightward sliding operation, it may be determined that the preset next piece of background information adjacent to the current one is selected from the background information, that is, background 2 of the first screen 33 in the drawing, and background 2 is used as the second background information. It is understood that the illustration is only one example of a scene; when the touch operation is a leftward sliding operation, the previous piece of background information adjacent to the current first background may instead be selected as the second background information. Correspondingly, in the scene corresponding to the touch operation, the content information displayed on the screen also needs to be updated to the content information of the second display screen; that is, the first case is combined with the third case. Of course, if the screen has already slid to the leftmost side and no other content information is available, the first case may be performed alone, i.e., without updating the content information of the first display screen.
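The adjacency logic above can be sketched as follows, a minimal Python example assuming, as in the scenario described, that a rightward slide selects the next screen. The clamping at the edges mirrors the "slid to the leftmost side" case, where no further content change occurs; the function names are illustrative.

```python
def swipe_direction(start_x, end_x):
    """Derive the slide direction from the start and end touch positions."""
    return "right" if end_x > start_x else "left"

def next_screen_index(current, direction, num_screens):
    """Pick the adjacent screen index; clamp at the edges instead of wrapping."""
    step = 1 if direction == "right" else -1
    return max(0, min(num_screens - 1, current + step))

# Currently on the default first screen (index 1 of 3); the user swipes right,
# so the adjacent background (background 2, index 2) becomes the second background.
direction = swipe_direction(start_x=100, end_x=400)
target = next_screen_index(1, direction, num_screens=3)
```

At the leftmost screen, a further leftward swipe leaves the index unchanged, so only the background update (the first case) would apply.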
In relation to the second case, the method further comprises: when one of the sensing parameters, the voice input information and the target type information is detected as the input information, whether the display effect of the first background displayed in the display area is updated or not is determined based on the input information.
The display effect can be adjusting display parameters or adding special effects; for example, the display parameter of the first background may be adjusted to a higher brightness or a lower brightness, and the specific adjustment manner may be related to a preset policy; the adding of the special effect may be adding a preset certain type of special effect on the basis of the first background, including adding an aperture and the like, and specifically, may also be set according to a preset strategy.
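A brightness adjustment of the kind described can be sketched as below. This is a hedged example, not the disclosed implementation: pixels are modeled as 0-255 grayscale values, and the scaling factor stands in for whatever preset policy the embodiment uses.

```python
def adjust_brightness(background, factor):
    """Scale every pixel of the background by `factor`, clamped to 0-255.

    factor < 1.0 lowers brightness; factor > 1.0 raises it.
    """
    return [[max(0, min(255, int(p * factor))) for p in row]
            for row in background]

first_background = [[100, 200], [50, 255]]
dimmed = adjust_brightness(first_background, 0.5)      # lower brightness
brightened = adjust_brightness(first_background, 1.5)  # higher brightness
```

The same pattern extends to other display-effect updates, e.g. compositing a preset overlay onto the first background rather than replacing it.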
It is to be understood that in addition to the foregoing one implementation, another implementation may be included: when one of the sensing parameters, the voice input information and the target type information is detected as the input information, whether the first background displayed in the display area is updated to the second background is determined based on the input information.
That is, when the sensing parameter indicates shaking to the right, it may be determined that the first background is updated to the preset background of the next screen, that is, to a second background;
alternatively, when the voice input information is some preset voice, such as "next screen background", the next screen background can be directly selected as the second background for display.
In addition, whether to switch to the next screen background as the second background can be determined based on the information pulled by the network and the characteristic image and/or the keyword in the information.
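The keyword-based decision on pulled information can be sketched as follows. The keyword list and the simple substring match are assumptions for illustration; the embodiment leaves the matching rule (and any feature-image matching) to a preset strategy.

```python
def should_switch_background(pulled_text, trigger_keywords):
    """Return True if any trigger keyword appears in the pulled information.

    A True result means the next-screen background is selected as
    the second background.
    """
    text = pulled_text.lower()
    return any(kw.lower() in text for kw in trigger_keywords)

switch = should_switch_background(
    "Breaking: star X releases new album",
    ["star", "breaking news"],
)
```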
In summary, the processing method provided in this embodiment may include: starting a dynamic wallpaper program, generating the wallpaper content of each screen to be displayed through a screen content constructor, initializing the positions and display states of the contents, and then drawing the wallpaper through a content renderer; after the drawing is finished, waiting for user event input (sliding events, sensor events, voice input events, and the like), with the sliding event processor and the logic processor updating the wallpaper content information in real time according to these events; after the update is finished, the content renderer is notified to redraw the wallpaper, and the program continues to wait for the user's event input.
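The build-draw-wait-update cycle above can be condensed into a small event loop. This Python sketch is a simplification under stated assumptions: screen contents are strings, events are string tokens, and the component boundaries (constructor, handlers, renderer) are marked only by comments.

```python
def run_live_wallpaper(events):
    """Drive the wallpaper through a sequence of user events.

    Returns the final screen position and the list of rendered frames
    (one initial frame plus one per event).
    """
    # Screen content constructor: build the content for each of 3 screens.
    contents = ["wallpaper_screen_%d" % i for i in range(3)]
    frames = [list(contents)]          # content renderer: initial draw
    position = 0                       # initialized display position

    for event in events:               # wait for user event input
        if event == "slide_right":     # sliding event handler
            position = min(len(contents) - 1, position + 1)
        elif event == "slide_left":
            position = max(0, position - 1)
        elif event.startswith("sensor:"):
            # Logic handler: a sensor event updates the current content.
            contents[position] += "+effect"
        frames.append(list(contents))  # renderer redraws after each update
    return position, frames

pos, frames = run_live_wallpaper(["slide_right", "sensor:shake", "slide_left"])
```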
Specifically, referring to fig. 4, first, generating background content of each screen, that is, generating content of each screen in the dynamic wallpaper, and recording the position and state of the screen where each content is located, and the required presentation effect;
then, according to the moving position of the current dynamic wallpaper, the position and the state of each content and the current time point, the content which should be displayed on the current mobile phone screen can be drawn; that is, when the dynamic wallpaper is slid to a different position, the wallpaper content at the current position is drawn and displayed;
Monitoring the user's screen-sliding operation, and changing the display position of the dynamic wallpaper in real time according to the user's sliding events (sliding left and right, sliding up and down), so that the user feels the wallpaper sliding as it is dragged; for example, when the default first-screen background and content information are displayed, if the screen is slid rightward, the second-screen background and content information are displayed.
In addition, as shown in fig. 4, the dynamic wallpaper has very high functional extensibility: it may also monitor sensor events, such as various broadcasts of the mobile phone and real-time sensor information, may monitor other input events (such as the user's voice input information), and may even issue a network request to pull the latest news information.
Further, whether a new event exists is determined in real time based on the monitoring of the sensor and other input events, if so, the position and the state of each content in the background can be recalculated, that is, the information can be processed in real time and the content and the state of the wallpaper display can be changed; otherwise, the slip event may continue to be monitored.
In addition, based on a monitored event or sensor input, the currently displayed background picture may be changed. The specific change may include adjusting information such as the gray scale of the background picture based on the content detected by a sensor. For example, if the light sensor detects that the current light is dark, the brightness of the background picture may be reduced; when the sound sensor detects many current input sound sources, the background picture may be adjusted to gray scale (for example, the user may currently be in a meeting scene, where the mobile phone should not be too colorful). The display content may also be adjusted according to a monitored event: if the latest star news is detected for a current news application, the background of the screen where the news application is located may be adjusted to a related picture of that star. That is, the background picture of the screen where an application is located is adjusted based on the latest information pulled for the content of different screens.
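The two sensor-driven adjustments above can be sketched in a few lines. The thresholds (light level below 0.3, three or more sound sources) and the dimming factor are illustrative assumptions; pixels are modeled as RGB triples.

```python
def adapt_background(pixel_rgb, light_level, sound_sources):
    """Adjust one background pixel based on sensor readings.

    Many simultaneous sound sources (a likely meeting scene) switch the
    pixel to grayscale; a dark ambient-light reading reduces brightness.
    """
    r, g, b = pixel_rgb
    if sound_sources >= 3:                       # meeting scene: go grayscale
        gray = (r + g + b) // 3
        r, g, b = gray, gray, gray
    if light_level < 0.3:                        # dark environment: dim
        r, g, b = (int(c * 0.6) for c in (r, g, b))
    return (r, g, b)

quiet_bright = adapt_background((120, 60, 30), light_level=0.8, sound_sources=0)
meeting_dark = adapt_background((120, 60, 30), light_level=0.1, sound_sources=5)
```

Applying this per pixel over the current screen's background yields the behavior described: unchanged wallpaper in a bright, quiet setting; gray, dimmed wallpaper in a dark meeting room.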
It should be understood that the foregoing is merely an example, and that in actual processing, there may be more processing modes, and the description is not exhaustive.
The technical scheme of multi-screen, multi-scene dynamic wallpaper has the advantage of using different scenes to display richer content while avoiding disturbing the user. Two simple examples are given: displaying current news at the negative-one-screen position of the dynamic wallpaper, so that the news display does not interfere with the user's normal use of the desktop Launcher; and constructing a wallpaper-type game, such as a pet-raising game, in a multi-screen, multi-scene manner, where different screen scenes can interact with the user in different ways.
Therefore, by adopting the scheme, multi-screen background information can be constructed, and then when input information is detected, whether current background information is updated or not is judged based on the input information, and whether currently displayed content information is updated or not is determined; therefore, the background information can be dynamically adjusted by combining the input information so as to meet more scene requirements and provide more interactive effects.
On the basis of the flow of the information displaying method, the embodiment further provides an electronic device, as shown in fig. 5, including:
an information constructing unit 51, configured to determine at least one piece of background information, where different pieces of background information can be used as a background to be displayed on the same or different display screens;
an input detection unit 52, configured to detect at least one type of input information when a first background is displayed in a display area and content information of a first display screen is displayed in the display area; wherein the first background is one piece of the at least one piece of background information; the content information of the display screen comprises at least one application icon;
the presentation control unit 53 is configured to determine whether to update the first background presented in the display area and/or whether to update the content information of the first display screen presented in the display area based on at least one type of input information when the input information is detected.
Here, the scheme provided by the embodiment may be applied to an electronic device, especially an electronic device with a touch display screen, such as a smart phone, a tablet computer, and the like.
The aforementioned information constructing unit 51 is configured to determine a background to be presented on each of at least one display screen; specifically, the display screen may include a background and content information when the electronic device enters the icon display state. That is, the number of display screens currently provided and the background corresponding to each display screen are set.
Further, in the background to be displayed on each display screen, the pictures of the backgrounds corresponding to two adjacent display screens may be related or unrelated, for example:
Scene one:
A preset image is divided to obtain at least one sub-image, and two adjacent sub-images are respectively set as the backgrounds to be displayed on two adjacent display screens.
For example, a complete image A may be divided to obtain a plurality of sub-images; 4 sub-images, sub-images 1 to 4, may be shown in the figure. Adjacent sub-images 1 and 2 can be selected and respectively set as the background of the 1st screen and the background of the 2nd screen. In this way, two adjacent display screens can be set to have related pictures as backgrounds.
Scene two:
A background to be displayed on each display screen is set separately, along with position information of that background within the display screens; wherein the backgrounds to be presented by different display screens are not related.
That is to say, when setting the background to be displayed on each display screen, the setting can be performed according to the content information corresponding to different display screens, so that the backgrounds to be displayed on different display screens are not related. Alternatively, the background to be displayed on different display screens may be set not to be related to the content information, but to be set according to the personal interests of the user, which is not exhaustive.
The above-mentioned input detection unit is configured to execute at least one of the following:
when a touch operation on the display screen is detected, determining that input information is detected;
sensing parameters are obtained through detection of at least one sensor, and the sensing parameters are used as input information;
detecting and recognizing voice input information, and using the voice input information as input information;
and acquiring information of the target type from the server side as input information.
For example, the touch operation may be a sliding operation performed by the user on the display screen, for example sliding rightward as shown in the figure; sliding in other directions, such as upward, downward, or leftward, is also possible, and sliding in any of these directions may be determined as detected input information;
the sensing parameters are detected by at least one sensor; the sensor may be of a plurality of types, such as an acceleration sensor, a temperature sensor, and the like, so that different types of sensing parameters, such as acceleration parameters and temperature parameters, can be acquired accordingly;
recognizing the voice input information may include collecting voice through an audio collector (such as a microphone), and then recognizing the collected voice to obtain voice input information, so as to use the voice input information as input information;
the information of the target type is obtained from the server side, and may be information of a specified type obtained from a preset website, for example, latest current news information is requested to be pulled from a network (server side) as input information.
On the basis of the foregoing, when at least one type of input information is detected, the display control unit is configured to determine whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area based on the input information, where the following conditions are included:
firstly, determining, based on the input information, to replace the first background displayed in the display area with a second background;
secondly, determining to update the display effect of the first background displayed in the display area based on the input information to obtain an updated first background;
and thirdly, updating the content information of the first display screen in the display area to the content information of the second display screen based on the input information.
The first case and the third case may exist simultaneously, and the second case and the third case may also exist simultaneously.
Specifically, the first case may include:
the display control unit is configured to, when a touch operation on the display screen is detected, determine, based on the touch operation, to update the first background displayed in the display area to a second background, and update the content information of the first display screen displayed in the display area to the content information of the second display screen; the second background is at least partially different from the first background, and the content information of the second display screen is the same as or different from that of the first display screen.
Determining the direction of the touch operation based on the start and end positions of the touch operation; selecting one background information adjacent to the first background information from the at least one background information as a background of a second display screen based on the direction of the touch operation; and selecting content information to be displayed on one display screen adjacent to the first display screen as content information of a second display screen based on the direction of the touch operation.
When the touch operation is a rightward sliding operation, it may be determined that the preset next piece of background information adjacent to the first background, that is, background 2, is selected from the background information, and background 2 is used as the second background information. It is understood that the illustration is only one example of a scene; when the touch operation is a leftward sliding operation, the previous piece of background information adjacent to the current first background may instead be selected as the second background information. Correspondingly, in the scene corresponding to the touch operation, the content information displayed on the screen also needs to be updated to the content information of the second display screen; that is, the first case is combined with the third case. Of course, if the screen has already slid to the leftmost side and no other content information is available, the first case may be performed alone, i.e., without updating the content information of the first display screen.
In relation to the second case, the presentation control unit is configured to determine whether to update the presentation effect of the first background presented in the display area based on one of the sensing parameter, the voice input information, and the target type information when the one of the sensing parameter, the voice input information, and the target type information is detected as the input information.
The display effect can be adjusting display parameters or adding special effects; for example, the display parameter of the first background may be adjusted to a higher brightness or a lower brightness, and the specific adjustment manner may be related to a preset policy; the adding of the special effect may be adding a preset certain type of special effect on the basis of the first background, including adding an aperture and the like, and specifically, may also be set according to a preset strategy.
It is to be understood that in addition to the foregoing one implementation, another implementation may be included: when one of the sensing parameters, the voice input information and the target type information is detected as the input information, whether the first background displayed in the display area is updated to the second background is determined based on the input information.
That is, when the sensing parameter indicates shaking to the right, it may be determined that the first background is updated to the preset background of the next screen, that is, to a second background;
alternatively, when the voice input information is some preset voice, such as "next screen background", the next screen background can be directly selected as the second background for display.
In addition, whether to switch to the next screen background as the second background can be determined based on the information pulled by the network and the characteristic image and/or the keyword in the information.
In summary, the processing method provided in this embodiment may include: starting a dynamic wallpaper program, generating the wallpaper content of each screen to be displayed through a screen content constructor, initializing the positions and display states of the contents, and then drawing the wallpaper through a content renderer; after the drawing is finished, waiting for user event input (sliding events, sensor events, voice input events, and the like), with the sliding event processor and the logic processor updating the wallpaper content information in real time according to these events; after the update is finished, the content renderer is notified to redraw the wallpaper, and the program continues to wait for the user's event input.
Specifically, firstly, generating background content of each screen, namely generating content of each screen in the dynamic wallpaper, and recording the position and state of the screen where each content is located and a required display effect;
then, according to the moving position of the current dynamic wallpaper, the position and the state of each content and the current time point, the content which should be displayed on the current mobile phone screen can be drawn; that is, when the dynamic wallpaper is slid to a different position, the wallpaper content at the current position is drawn and displayed;
Receiving the user's screen-sliding operation, and changing the display position of the dynamic wallpaper in real time according to the user's sliding events (sliding left and right, sliding up and down), so that the user feels the wallpaper sliding as it is dragged; for example, when the default first-screen background and content information are displayed, if the screen is slid rightward, the second-screen background and content information are displayed.
The dynamic wallpaper has strong functional expandability: the logic processor can receive various broadcasts of the mobile phone, real-time sensor information, and the user's voice input information, and can even issue a network request to pull the latest news information. There, the information can be processed in real time to change the content and state of the wallpaper presentation.
The technical scheme of multi-screen, multi-scene dynamic wallpaper has the advantage of using different scenes to display richer content while avoiding disturbing the user. Two simple examples are given: displaying current news at the negative-one-screen position of the dynamic wallpaper, so that the news display does not interfere with the user's normal use of the desktop Launcher; and constructing a wallpaper-type game, such as a pet-raising game, in a multi-screen, multi-scene manner, where different screen scenes can interact with the user in different ways.
Specifically, the processing units provided by this embodiment may be embodied in the form shown in fig. 6, and the information constructing unit may be specifically a content constructor, configured to generate content of each screen in the dynamic wallpaper, and record the position and state of the screen where each content is located, and a required presentation effect;
the display control unit can be specifically a content renderer and is used for drawing the content to be displayed on the current mobile phone screen according to the moving position of the current dynamic wallpaper, the position and the state of each content and the current time point; that is, when the dynamic wallpaper is slid to a different position, the wallpaper content at the current position is drawn and displayed;
an input detection unit, which may be specifically the sliding event processor in fig. 6, is configured to receive an operation of a user sliding a screen, and change a display position of the dynamic wallpaper in real time according to a sliding event (sliding left and right, sliding up and down) of the user, so that the user feels that the user slides while dragging the wallpaper;
The input detection unit may also be embodied as the logic processor in fig. 6: the dynamic wallpaper has strong functional expandability, and the logic processor can receive various broadcasts of the mobile phone, real-time sensor information, and the user's voice input information, and can even issue a network request to pull the latest news information. There, the information can be processed in real time to change the content and state of the wallpaper presentation.
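The unit-to-component mapping can be sketched as a minimal set of classes. This is an illustrative object model under stated assumptions: the class names mirror the components described for fig. 6, but all method signatures are hypothetical, and the logic processor is omitted for brevity.

```python
class ContentConstructor:
    """Information constructing unit: builds per-screen wallpaper content."""
    def build(self, num_screens):
        return [{"screen": i, "state": "idle", "effect": None}
                for i in range(num_screens)]

class ContentRenderer:
    """Display control unit: draws the content at the current position."""
    def draw(self, contents, position):
        return contents[position]

class SlideEventProcessor:
    """Input detection unit (touch path): maps a slide to a new position."""
    def handle(self, position, direction, num_screens):
        step = 1 if direction == "right" else -1
        return max(0, min(num_screens - 1, position + step))

contents = ContentConstructor().build(3)
position = SlideEventProcessor().handle(0, "right", 3)
visible = ContentRenderer().draw(contents, position)
```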
The technical scheme of the dynamic wallpaper multi-screen multi-scene has the advantages that different scenes are used for displaying richer contents, and the user is prevented from being disturbed. The usage scenario on the product side gives two simple examples:
the method for displaying the current news by using the dynamic wallpaper minus one screen position prevents the interference of normal use of the desktop Launcher by a user when displaying the news.
a wallpaper-style game, for example a pet-raising game, may be constructed in the multi-screen, multi-scene mode, with different screen scenes supporting different interactions with the user.
Therefore, with this scheme, multi-screen background information can be constructed; when input information is detected, whether to update the current background information and whether to update the currently displayed content information are determined based on the input information. The background information can thus be dynamically adjusted in combination with the input information to meet more scene requirements and provide more interactive effects.
An embodiment of the present invention further provides an electronic device, which may be as shown in fig. 7 and includes: a processor 701 and a memory 703 for storing a computer program capable of running on the processor.
The memory 703 may be used to store software programs and modules, such as the program instructions and modules corresponding to the units in the embodiments of the present invention;
the processor 701 executes the software programs and modules stored in the memory 703 to perform various functional applications and data processing, that is, to implement the information display method described above. The memory 703 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 703 may further include memory located remotely from the processor 701, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input/output device shown in the figure is configured to receive and transmit data via a network, and may also be used for data transmission between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the input/output device includes a network interface card (NIC), which can be connected to a router via a network cable to communicate with the internet or a local area network. In another example, the input/output device is a radio frequency (RF) module configured to communicate with the internet wirelessly.
Among other things, the memory 703 is used to store application programs.
The processor 701 may perform the following steps by calling an application program stored in the memory 703: determining at least one piece of background information, wherein different pieces of background information can be used as backgrounds to be displayed by the same or different display screens;
when a first background is displayed in a display area and content information of a first display screen is displayed in the display area, detecting at least one type of input information; wherein the first background is one piece of the at least one piece of background information, and the content information of the display screen comprises at least one application icon;
when at least one type of input information is detected, determining, based on the input information, whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area.
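The three steps above can be sketched as a single decision routine. The function names and the mapping from input types to update decisions are illustrative assumptions, not definitions from the embodiment:

```python
def determine_backgrounds(n):
    """Step 101: one piece of background information per display screen."""
    return [f"background-{i}" for i in range(n)]

def decide_update(input_info):
    """Step 103: map a detected input to (update_background, update_content)."""
    kind = input_info["type"]
    if kind == "touch":
        return True, True    # switch to the adjacent background and content
    if kind in ("sensor", "voice", "target"):
        return True, False   # only the background's display effect changes
    return False, False      # unknown input: leave the display unchanged

backgrounds = determine_backgrounds(3)
first_background = backgrounds[0]         # step 102: currently displayed
print(decide_update({"type": "touch"}))   # -> (True, True)
```

This makes the structure explicit: the background and the screen's content information can be updated independently of each other, depending on the kind of input detected.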
The processor 701 is further configured to perform the following steps: when a touch operation on the display screen is detected, determining, based on the touch operation, to update the first background displayed in the display area to a second background, and updating the content information of the first display screen displayed in the display area to content information of a second display screen;
wherein the second background is at least partially different from the first background, and the content information of the second display screen is the same as or different from the content information of the first display screen.
The processor 701 is further configured to perform the following step: when one of a sensing parameter, voice input information, and target-type information is detected as the input information, determining, based on the input information, whether to update the display effect of the first background displayed in the display area.
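One possible interpretation of mapping such non-touch inputs to display-effect updates is sketched below. The thresholds, effect names, and input encoding are purely illustrative assumptions:

```python
def effect_for_input(input_info):
    """Return a new display effect for the first background, or None to keep it."""
    kind, value = input_info
    if kind == "sensor" and value < 10:          # e.g. a low ambient-light reading
        return "dim"
    if kind == "voice" and value == "brighter":  # e.g. a recognized voice command
        return "bright"
    if kind == "target":                         # e.g. target-type info arrived
        return "highlight"
    return None                                  # input does not affect the effect

effect = "normal"
for info in [("sensor", 5), ("voice", "brighter"), ("sensor", 50)]:
    new = effect_for_input(info)
    if new is not None:      # only applicable inputs update the display effect
        effect = new
print(effect)
```

Note that the last sensor reading returns `None`, so the previously applied effect is kept: detecting an input does not necessarily mean the display effect is updated, which matches the "whether to update" decision described above.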
The processor 701 is further configured to perform the following steps: determining a direction of the touch operation based on the start and end positions of the touch operation;
selecting, based on the direction of the touch operation, one piece of background information adjacent to the first background information from the at least one piece of background information as the background of a second display screen; and selecting, based on the direction of the touch operation, the content information to be displayed on a display screen adjacent to the first display screen as the content information of the second display screen.
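The direction detection and adjacent-screen selection above can be sketched as follows. The function names and the coordinate convention (x grows rightward, a leftward slide reveals the next screen) are illustrative assumptions:

```python
def touch_direction(start, end):
    """Infer the slide direction from the touch operation's start/end positions."""
    return "left" if end[0] < start[0] else "right"

def adjacent_screen(current, direction, num_screens):
    """Pick the display screen adjacent to the current one in the slide direction."""
    step = 1 if direction == "left" else -1  # sliding left reveals the next screen
    return max(0, min(num_screens - 1, current + step))

backgrounds = ["bg-0", "bg-1", "bg-2"]
direction = touch_direction(start=(300, 500), end=(80, 500))
second = adjacent_screen(current=0, direction=direction, num_screens=3)
print(backgrounds[second])   # background of the second display screen -> bg-1
```

Clamping the index to the valid range is one simple policy for the edge screens; a wrap-around (circular) arrangement of screens would be an equally plausible choice.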
The processor 701 is further configured to perform the following steps: dividing a preset image to obtain at least one sub-image, and setting two adjacent sub-images as the backgrounds to be displayed by two adjacent display screens, respectively;
or,
setting a background to be displayed on each display screen, and setting position information of the background to be displayed by each display screen among the display screens; wherein the backgrounds to be presented by different display screens are unrelated.
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may store program code for executing the information display method described above.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
determining at least one piece of background information, wherein different pieces of background information can be used as backgrounds to be displayed by the same or different display screens;
when a first background is displayed in a display area and content information of a first display screen is displayed in the display area, detecting at least one type of input information; wherein the first background is one piece of the at least one piece of background information, and the content information of the display screen comprises at least one application icon;
when at least one type of input information is detected, determining, based on the input information, whether to update the first background displayed in the display area and/or whether to update the content information of the first display screen displayed in the display area.
Optionally, the storage medium is further arranged to store program code for performing the following steps: when a touch operation on the display screen is detected, determining, based on the touch operation, to update the first background displayed in the display area to a second background, and updating the content information of the first display screen displayed in the display area to content information of a second display screen;
wherein the second background is at least partially different from the first background, and the content information of the second display screen is the same as or different from the content information of the first display screen.
Optionally, the storage medium is further arranged to store program code for performing the following step: when one of a sensing parameter, voice input information, and target-type information is detected as the input information, determining, based on the input information, whether to update the display effect of the first background displayed in the display area.
Optionally, the storage medium is further arranged to store program code for performing the following steps: determining a direction of the touch operation based on the start and end positions of the touch operation;
selecting, based on the direction of the touch operation, one piece of background information adjacent to the first background information from the at least one piece of background information as the background of a second display screen; and selecting, based on the direction of the touch operation, the content information to be displayed on a display screen adjacent to the first display screen as the content information of the second display screen.
Optionally, the storage medium is further arranged to store program code for performing the following steps: dividing a preset image to obtain at least one sub-image, and setting two adjacent sub-images as the backgrounds to be displayed by two adjacent display screens, respectively;
or,
setting a background to be displayed on each display screen, and setting position information of the background to be displayed by each display screen among the display screens; wherein the backgrounds to be presented by different display screens are unrelated.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a logical functional division; in actual implementation there may be other divisions, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be completed by program instructions directing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the embodiments of the present invention, if implemented in the form of a software functional module and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, or the part thereof contributing to the prior art, may essentially be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.