CN113849142B - Image display method, device, electronic equipment and computer readable storage medium - Google Patents

Info

Publication number
CN113849142B
CN113849142B (application CN202111131215.3A)
Authority
CN
China
Prior art keywords
image
dynamic
display
static image
current target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111131215.3A
Other languages
Chinese (zh)
Other versions
CN113849142A (en)
Inventor
李禹
张聪
胡震宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huole Science and Technology Development Co Ltd
Original Assignee
Shenzhen Huole Science and Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huole Science and Technology Development Co Ltd
Priority to CN202111131215.3A
Publication of CN113849142A
Application granted
Publication of CN113849142B
Legal status: Active

Abstract

The present disclosure provides an image display method, apparatus, electronic device, and computer-readable storage medium. The method first detects a first position parameter of a target user's focus of attention in a display plane in which static images are displayed, then determines a current target static image according to the first position parameter and the second position parameter of each static image in the display plane, and finally dynamically processes the current target static image so that it becomes a current dynamic image or video, which is then played and displayed. By detecting the target user's focus of attention and dynamically displaying the display objects within its range, the disclosure diversifies image display and makes the user experience more engaging.

Description

Image display method, device, electronic equipment and computer readable storage medium
Technical Field
The disclosure relates to the technical field of image display, and in particular relates to an image display method, an image display device, electronic equipment and a computer readable storage medium.
Background
With the development of computer technology and multimedia technology, the multimedia resources people encounter are becoming increasingly rich. People are now used to shooting photos/videos with terminal devices such as mobile phones and tablet computers, making photos/videos themselves, or downloading them, and then storing them in an electronic album for convenient later use and viewing. However, the current display manner of the electronic album is single: the pictures/videos within the current sight range cannot be displayed dynamically according to the user's focus of attention.
Therefore, it is necessary to provide an image display method to alleviate the technical problem that the current image display mode is single.
Disclosure of Invention
The disclosure provides an image display method, an image display device, an electronic device, and a computer-readable storage medium to alleviate the technical problem that the current image display mode is single.
In order to solve the technical problems, the present disclosure provides the following technical solutions:
the present disclosure provides an image display method, including:
detecting a first position parameter of the focus of attention of the target user in the display plane; a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and performing dynamic processing on the current target static image so that it becomes a current dynamic image or video, which is played and displayed.
Meanwhile, the present disclosure provides an image display apparatus, including:
A first position parameter detection module for detecting a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
The target determining module is used for determining a current target static image according to the first position parameter and the second position parameter of each static image in the display plane;
And the dynamic display module is used for dynamically processing the current target static image, so that the current target static image is changed into a current dynamic image or video and is displayed.
Optionally, the first location parameter detection module includes:
the information acquisition module is used for acquiring the binocular position information and the head posture information of the target user;
The sight line generating module is used for inputting the binocular position information and the head gesture information into a sight line assessment model to obtain binocular sight lines of the target user;
the focus determining module is used for determining the focus of attention of the target user according to the binocular vision line and the display plane;
And the position parameter determining module is used for determining a first position parameter of the attention focus in the display plane according to the display plane coordinate system and the attention focus.
Optionally, the image display device further comprises:
The prediction module is used for predicting a third position parameter according to the position prediction model and the first position parameter;
A future target determining module, configured to determine a future target still image according to the third location parameter and the second location parameter of each still image in the display plane;
And the dynamic preprocessing module is used for dynamically preprocessing the future target static image so that it becomes a future dynamic image or video, which is played and displayed after the current dynamic image or video has finished playing.
Optionally, the targeting module includes:
The second position parameter determining module is used for determining the second position parameter of each static image on the display plane according to the relative position of each static image on the display plane;
the user attention area generating module is used for generating a user attention area according to the first position parameter;
The association parameter judging module is used for determining the association parameter between the second position parameter of each static image on the display plane and the user attention area;
and the object determining module is used for determining the current target static image from the static images according to the association parameters.
Optionally, the dynamic display module includes:
the attribute information acquisition module is used for acquiring attribute information of the current target static image;
the associated image determining module is used for determining a second image associated with the current target static image according to the attribute information;
And the first video generation module is used for generating a dynamic video based on the current target static image and the second image and playing and exhibiting the dynamic video.
Optionally, the dynamic display module includes:
A similar image acquisition module, configured to acquire a third image similar to the current target still image;
And the second video generation module is used for generating a dynamic image based on the current target static image and the third image and playing and displaying the dynamic image.
Optionally, the dynamic display module includes:
The feature point acquisition module is used for acquiring target feature points of the current target static image;
and the dynamic image generation module is used for dynamically processing the current target static image based on the target characteristic points, generating a dynamic image and playing and displaying the dynamic image.
Furthermore, the present disclosure provides an electronic device comprising a processor and a memory, the memory being configured to store a computer program and the processor being configured to run the computer program in the memory so as to perform the steps of the image display method described above.
Furthermore, the present disclosure provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the image display method described above.
The beneficial effects are that: the present disclosure provides an image display method, apparatus, electronic device, and computer-readable storage medium. The method first detects a first position parameter of the target user's focus of attention in a display plane in which static images are displayed, then determines a current target static image according to the first position parameter and the second position parameter of each static image in the display plane, and finally dynamically processes the current target static image so that it becomes a current dynamic image or video that is played and displayed. In essence, the disclosure determines the target static image from the first position parameter of the user's focus of attention on the display plane and the second position parameters of the static images on the display plane, then dynamically processes the target static image and displays the result, so that the static image the user is looking at is converted into a dynamic display. This enriches the image display mode and makes the user experience more engaging.
Drawings
The technical solution and other advantageous effects of the present disclosure will be made apparent by the following detailed description of the specific embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a system architecture of an image display system according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of an image display method according to an embodiment of the disclosure.
Fig. 3 is a schematic view of an area of an album display template according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a first display screen provided in an embodiment of the disclosure.
Fig. 5 is a schematic diagram of a user region of interest provided by an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Reference numerals illustrate:
101-a cloud server; 102-a display device; 103-control terminal.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of the disclosure.
The terms "first", "second", and the like in the description, the claims, and the above figures of the embodiments of the disclosure are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus. The division of modules in the embodiments of the disclosure is only one kind of logical division; in an actual implementation, multiple modules may be combined or integrated into another system, some features may be omitted or not implemented, and the couplings, direct couplings, or communication connections between modules shown or discussed may be indirect couplings or communication connections through interfaces, which may be electrical or in other forms; none of these variations limits the embodiments of the disclosure. In addition, modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purpose of the embodiments of the disclosure.
In the disclosed embodiment, the still image includes the picture itself and an image of a certain frame extracted from the video.
In the embodiments of the present disclosure, the target user refers to a user in the current projection display environment detected by a sensor (camera, Time of Flight (TOF) sensor, infrared sensor, UWB (Ultra Wide Band) wireless ranging sensor, millimeter wave sensor, etc.) built into or external to the display device.
In the embodiment of the present disclosure, the focus of attention refers to a point where the attention of the user falls in the display plane, where the attention may be determined according to the line of sight of the user, or may be determined according to the gesture, face orientation/body orientation, etc. of the user.
In the embodiment of the disclosure, the display plane may be a picture projected by the display device or a picture displayed by the display screen.
The present disclosure provides an image display method, an image display device, an electronic device, and a computer-readable storage medium.
Referring to fig. 1, fig. 1 is a schematic system architecture diagram of an image display system provided by the present disclosure, as shown in fig. 1, the image display system at least includes a cloud server 101, a display device 102, and a control terminal 103, wherein:
Communication links are arranged among the cloud server 101, the display device 102 and the control terminal 103 so as to realize information interaction. The type of communication link may include a wired, wireless communication link, or fiber optic cable, etc., and the disclosure is not limited herein.
The cloud server 101 may be an independent server, or a server network or server cluster composed of multiple servers. For example, the servers described in this disclosure include, but are not limited to, computers, network hosts, database servers, storage servers, application servers, and cloud servers, where a cloud server consists of a large number of computers or network servers based on cloud computing.
The display device 102 is a device capable of projecting an image or video onto a display plane. It may be connected to a computer, mobile phone, game machine, DV (Digital Video), etc. through different interfaces or networks to play the corresponding video or image signal. The display device 102 may be a projector or micro-projector with a projection function, an electronic display screen, a liquid crystal display screen, etc.
The control terminal 103 may be a smart phone, a tablet computer, a notebook computer, a wearable device, a remote controller, or the like, which may transmit a signal.
The disclosure provides an image display system comprising a cloud server, a display device, and a control terminal. Specifically, the display device 102 obtains the display objects from the cloud server 101 or the control terminal 103 and preprocesses each display object (picture/video) to obtain the static images, for example, extracting a suitable frame of a video as its static cover. A sensor built into or external to the display device 102 then detects the first position parameter of the target user's focus of attention in the display plane, and the current target static image is determined from the static images according to the first position parameter and the second position parameters of the static images in the display plane. The display device 102 then dynamically processes the current target static image so that it becomes a current dynamic image or video, and the control terminal 103 performs display control operations so that it is displayed in the display plane.
In this approach, a sensor built into or external to the display device 102 detects whether a user appears in the current projection display environment and then determines the user's focus of attention. According to the position parameter of the focus and the position parameters of the static images in the display plane, the static image within, or most relevant to, the user's attention range is selected as the current target static image, which is then dynamically processed; the dynamic processing includes playing a preprocessed video, displaying a preprocessed picture's dynamic special effect, and the like.
In one embodiment, the display device 102 is in communication with the control terminal 103 to implement aspects of the embodiments of the present disclosure.
It should be noted that the system architecture shown in Fig. 1 is only an example; the servers, terminals, devices, and scenarios described in the embodiments of the present disclosure serve to describe the technical solutions more clearly and do not limit them. Those skilled in the art will appreciate that, as the system evolves and new service scenarios emerge, the technical solutions provided by the embodiments of the present disclosure are equally applicable to similar technical problems. Each aspect is described in detail below; the order of description is not intended as a limitation on the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of the image display method provided by the present disclosure, and the image display method at least includes the following steps:
Step 201: detecting a first position parameter of the focus of attention of the target user in the display plane; the display plane has a still image displayed therein.
The static images displayed in the display plane are obtained either by the user uploading videos/pictures to the cloud server for storage through an application on the display device/control terminal, after which the cloud server preprocesses them, or by the display device/control terminal directly processing locally stored videos/pictures.
Specifically, a picture is already static and needs no further static processing, but its contrast, brightness, color temperature, and the like can be simply adjusted according to the user's preference or the current environment, and static stickers and the like can be added on top of the original picture. A video, in contrast, consists of a sequence of frames, so a suitable frame can be extracted as a static cover to be shown in the display plane according to the user's preference; the same simple adjustments and static stickers can likewise be applied.
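To make this preprocessing concrete, here is a minimal sketch of extracting a cover frame from a video and applying a simple brightness/contrast adjustment. It assumes OpenCV and Pillow are available; the frame index and enhancement factors are illustrative placeholders, not values prescribed by the disclosure.

```python
import cv2
from PIL import Image, ImageEnhance

def make_static_cover(video_path: str, frame_index: int = 0,
                      brightness: float = 1.0, contrast: float = 1.0) -> Image.Image:
    """Extract one frame of a video as a static cover and adjust its tone."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to the chosen frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"could not read frame {frame_index} from {video_path}")
    img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # BGR -> RGB
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    return img
```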
In one embodiment, the still images may be presented in the form of an electronic album. Specifically, a preset album display template is fused with the still images obtained from the cloud server to obtain a first display picture, which is displayed in the display plane. The preset album display template, i.e., a template provided by the cloud server for displaying display objects (pictures or videos), can comprise a plurality of different display frames, each used for displaying one display object. Fig. 3 is an area schematic diagram of an album display template according to an embodiment of the present disclosure; the template shown in Fig. 3 includes 6 display frames, and the backgrounds (i.e., the areas other than the display frames) of different album display templates also differ; in the template shown in Fig. 3 the background is gray. The preset album display template can be a basic template preset by the system or one customized by the user according to personal preference; in either case it is uploaded to the cloud server for storage and retrieved by the display device when needed. For example, filling each still image into each display frame of the album display template shown in Fig. 3 yields the first display picture shown in Fig. 4, which consists of display object A, display object B, display object C, display object D, display object E, display object F, and the background of the album display template; each display frame displays one display object, and the final first display picture is projected and displayed in the display plane.
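As an illustration of the template fusion, a minimal sketch that pastes still images into the display frames of a template to form the first display picture, assuming Pillow; the frame boxes are hypothetical pixel coordinates standing in for the six display frames of Fig. 3.

```python
from PIL import Image

# Hypothetical (left, top, right, bottom) boxes of the six display frames,
# in template pixel coordinates.
FRAME_BOXES = [(40, 40, 340, 260), (380, 40, 680, 260), (720, 40, 1020, 260),
               (40, 300, 340, 520), (380, 300, 680, 520), (720, 300, 1020, 520)]

def fuse_album(template_path: str, image_paths: list) -> Image.Image:
    """Fill each display frame of the album template with one still image."""
    canvas = Image.open(template_path).convert("RGB")
    for (left, top, right, bottom), path in zip(FRAME_BOXES, image_paths):
        still = Image.open(path).convert("RGB").resize((right - left, bottom - top))
        canvas.paste(still, (left, top))
    return canvas
```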
The display device can detect the current display environment through a built-in or external sensor (camera, TOF sensor, infrared sensor, UWB wireless ranging sensor, millimeter wave sensor, etc.) to judge whether a user is present in it. If no user is present, all images in the current display plane remain static; when a user appears, the sensor is used to determine the user's focus of attention and, from it, the first position parameter of that focus in the display plane. Methods for determining the focus of attention include estimation from the angle of the user's body posture or face orientation, line-of-sight estimation, estimation from the pointing of the user's fingers, and the like.
In one embodiment, the specific steps of estimating the user's focus of attention by line of sight comprise: acquiring binocular position information and head posture information of the target user; inputting the binocular position information and the head posture information into a line-of-sight assessment model to obtain the binocular lines of sight of the target user; determining the focus of attention of the target user according to the binocular lines of sight and the display plane; and determining a first position parameter of the focus of attention in the display plane according to the display plane coordinate system and the focus of attention. The binocular position information is a photograph of the user's eyes taken by the camera, and the head posture information is a photograph of the user's head; the binocular lines of sight are rays that start at the eyes and extend in the directions the eyes are looking.
Specifically, photographs of the user's eyes and head can be captured by a camera of the display device and then input into a line-of-sight evaluation model to obtain the binocular lines of sight of the target user. Gaze estimation methods generally fall into two classes. Geometry-based methods read features of the eye (e.g., key points such as eye corners and pupil positions) from an eye photograph and compute the binocular lines of sight by combining these eye features with the head pose, since the direction of gaze depends not only on the state of the eye (position of the eyeball, degree of eye opening and closing, etc.) but also on the head pose. Appearance-based methods directly learn a model that maps the appearance of the eyes and head to the line of sight, so the binocular lines of sight can be obtained directly from the model. The point where the two lines of sight intersect on the display plane is the attention focus of the target user; finally, the first position parameter of the focus in the display plane coordinate system can be determined from the relative position of the focus in the display plane, as described below.
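For the geometric part of this step, here is a minimal sketch under simplifying assumptions: each eye's line of sight is modeled as a ray, the attention focus is taken as the midpoint of the two rays' plane intersections (in practice two estimated lines of sight rarely meet in a single point), and the first position parameter is the focus expressed in a coordinate system spanned by two unit-length edges of the display plane. All geometry values below are invented for illustration.

```python
import numpy as np

def ray_plane_hit(eye, gaze, plane_point, plane_normal):
    """Intersect one eye's line of sight (origin eye, direction gaze) with the
    display plane; returns the 3-D hit point, or None if parallel or behind."""
    denom = float(np.dot(plane_normal, gaze))
    if abs(denom) < 1e-9:
        return None  # gaze parallel to the plane
    t = float(np.dot(plane_normal, plane_point - eye)) / denom
    return eye + t * gaze if t > 0 else None

def plane_coords(point, origin, u_axis, v_axis):
    """First position parameter: the focus expressed in the display plane
    coordinate system spanned by two unit-length edges of the plane."""
    d = point - origin
    return float(np.dot(d, u_axis)), float(np.dot(d, v_axis))

# Hypothetical geometry: plane corner at the origin, edges along x and y.
origin = np.array([0.0, 0.0, 0.0])
u, v, n = np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])
left = ray_plane_hit(np.array([0.28, 0.40, 1.5]), np.array([0.10, -0.05, -1.0]), origin, n)
right = ray_plane_hit(np.array([0.34, 0.40, 1.5]), np.array([0.06, -0.05, -1.0]), origin, n)
focus = (left + right) / 2                 # attention focus on the plane
print(plane_coords(focus, origin, u, v))   # first position parameter
```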
Optionally, when the user points clearly with a finger, the attention focus can be estimated from the pointing of the user's finger. The specific steps include: acquiring gesture information of the target user; recognizing the gesture information with a gesture recognition model to obtain a first attention-pointing line; determining the attention focus of the target user according to the first attention-pointing line and the display plane; and determining the first position parameter of the attention focus in the display plane according to the display plane coordinate system and the attention focus. The gesture information can be a photograph of the user's hand taken by the camera; the first attention-pointing line is a ray that starts from a finger with a pointing characteristic and extends in the finger's direction.
Specifically, a photograph of the user's hand can be taken by a camera of the display device, the gesture information is input into a gesture recognition model, and the first attention-pointing line is obtained through gesture detection, gesture segmentation, gesture analysis, and static or dynamic gesture recognition. Gesture segmentation is a key part of the recognition process, and its quality directly influences the subsequent gesture analysis and final recognition; the most commonly used methods are gesture segmentation based on monocular vision and gesture segmentation based on stereoscopic vision. Gesture analysis is one of the key technologies of a gesture recognition system; the shape features or motion trajectory of a gesture can be obtained through it, mainly by multi-feature combination methods such as edge contour extraction and centroid-plus-finger features, as well as knuckle tracking methods. Gesture recognition is the process of classifying a trajectory (or point) in a model parameter space into a subset of that space; it includes static and dynamic gesture recognition, where dynamic recognition can ultimately be reduced to static recognition, typically via template matching, neural networks, or hidden Markov models. The user's finger pointing direction is obtained through gesture recognition; the ray starting from the finger in that direction is the first attention-pointing line, and the point where it intersects the display plane is the attention focus of the target user. Finally, the first position parameter of the focus in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
Optionally, if a photograph of the user's eyes cannot be taken and the user shows no clear finger pointing, the attention focus can be estimated from the angle of the user's body posture or the orientation of the user's face. The specific steps include: acquiring orientation information of the target user; determining a second attention-pointing line according to an attention discrimination model and the orientation information; determining the attention focus of the target user according to the second attention-pointing line and the display plane; and determining the first position parameter of the attention focus in the display plane according to the display plane coordinate system and the attention focus. The orientation information of the user includes the angle of the user's body posture and the orientation of the user's face.
Specifically, the user's body or face can be photographed by a camera of the display device, and the body posture or face orientation obtained from the captured image through posture angle recognition or face orientation recognition. Treating the center of the user's body or face as a point, the ray from that point in the direction of the body or face is the second attention-pointing line, and the point where it intersects the display plane is the attention focus of the target user. Finally, the first position parameter of the focus in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
In one embodiment, in addition to determining the user's current focus of attention from the sensor as above, the first position parameter can be fed to a position prediction model to determine the user's focus of attention at a future time. The specific steps include: predicting a third position parameter according to the position prediction model and the first position parameter; determining a future target static image according to the third position parameter and the second position parameter of each static image in the display plane; and dynamically preprocessing the future target static image so that it becomes a future dynamic image or video, which is played and displayed, or cached, after the current dynamic image or video has finished playing. This improves response speed and thus user experience. The third position parameter is the position parameter of the focus of attention at the future time in the display plane.
Since the attention focus is judged from the user's line of sight, body posture/face orientation, finger pointing, or the like, and the dynamic display content of the target display object acquired from the cloud may be limited by the network speed, image display can suffer from delay or sluggishness. If the attention focus can be predicted in advance, the static images within the predicted focus range can be dynamically preprocessed ahead of time, alleviating these problems. Specifically, the position prediction model is trained on a training set consisting of a number of historical attention focuses and can predict the attention focus at a future time from the focuses at recent times, i.e., predict where the user may look next. Assume the relevant content is scanned every second to acquire the attention focus, i.e., the acquisition period of the attention focus is 1 s; if the current time is 9:00 and the preset period is 10, then 10 attention focuses are acquired within the preset period, at 8:51, 8:52, 8:53, 8:54, 8:55, 8:56, 8:57, 8:58, 8:59, and 9:00. From these 10 focuses, the attention focus at the future time 9:01 can be predicted. Finally, the position parameter of the predicted focus in the display plane coordinate system can be determined from its relative position in the display plane, as described below.
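The disclosure's position prediction model is learned from historical attention focuses; as a simple stand-in, the sketch below extrapolates the next focus by fitting a line per axis over the recent samples. The sample coordinates are invented for illustration.

```python
import numpy as np

def predict_next_focus(history: np.ndarray) -> np.ndarray:
    """history: (N, 2) array of recent focus coordinates, oldest first.
    Fit a degree-1 polynomial per axis over time and extrapolate one step."""
    n = history.shape[0]
    t = np.arange(n)
    x_fit = np.polyfit(t, history[:, 0], 1)
    y_fit = np.polyfit(t, history[:, 1], 1)
    return np.array([np.polyval(x_fit, n), np.polyval(y_fit, n)])

# Ten focuses acquired from 8:51 to 9:00 predict the 9:01 focus.
hist = np.array([[0.40, 0.30], [0.42, 0.31], [0.44, 0.31], [0.45, 0.32],
                 [0.47, 0.33], [0.49, 0.33], [0.50, 0.34], [0.52, 0.35],
                 [0.54, 0.35], [0.55, 0.36]])
print(predict_next_focus(hist))  # approximate third position parameter
```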
After the user's focus of attention is acquired, its position in the display plane must be determined. To quantify this position, the present disclosure introduces the concept of a position parameter. The specific steps for acquiring it include: modeling the display plane to obtain a display plane coordinate system; and determining the first position parameter of the focus of attention in that coordinate system according to the relative position of the focus in the display plane.
Specifically, as shown in Fig. 5, a display plane coordinate system is established with two sides of the display plane as coordinate axes; the position coordinates of the attention focus can then be obtained from its relative position in this coordinate system, and these coordinates are the position parameter of the attention focus in the display plane coordinate system.
Step 202: The current target still image is determined based on the first position parameter and the second position parameter of each still image in the display plane.
In one embodiment, after the first position parameter of the focus of attention in the display plane and the second position parameter of each static image in the display plane are acquired, a static image within the user's attention range, or the one most relevant to it, can be selected as the current target static image from the static images displayed in the current display plane according to the positional relation between the first and second position parameters. The specific steps include: determining the second position parameter of each static image according to its relative position on the display plane; generating a user attention area according to the first position parameter; determining the association parameter between the second position parameter of each static image and the user attention area; and determining the current target static image from the static images according to the association parameters.
Taking display through an electronic album as an example: since the electronic album is displayed in the current display plane, the coordinate information of each static image in the album can be determined according to the display plane coordinate system. Because each static image occupies a relatively large area of the display plane, it cannot be treated as a single point; as shown in Fig. 5, the coordinates of the four corners of each display frame together with the coordinates of its center can be used as the second position parameter of the display object. The display device draws a virtual circular area centered on the first position parameter of the current attention focus with a preset attention radius R; this area is the user attention area. After the user attention area is determined, the association parameter between each display object's second position parameter and the user attention area is computed, i.e., the distances from the four corner coordinates and the center coordinate of each display frame to the attention focus. If any of these coordinates falls within the user attention area, the display object it represents is a target static image; otherwise it is a non-target static image. As shown in Fig. 5, the user's attention focus is the point S, the radius of the user attention area is R, and the second position parameters of display object A, display object C, and display object D fall into the user attention area, so display objects A, C, and D are target static images while display objects B, E, and F are non-target static images.
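A sketch of this selection rule, assuming each display frame's second position parameter is its four corner coordinates plus its center, and a frame counts as a current target when any of those five points lies inside the circular attention region; the frame geometry below is invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class DisplayFrame:
    name: str
    corners: list   # four (x, y) corner coordinates
    center: tuple   # (x, y) center coordinate

def select_targets(frames, focus, radius):
    """Return the names of frames whose corner or center points fall inside
    the circular user attention area around the attention focus."""
    targets = []
    for f in frames:
        if any(math.dist(p, focus) <= radius for p in f.corners + [f.center]):
            targets.append(f.name)
    return targets

frames = [DisplayFrame("A", [(0, 0), (3, 0), (3, 2), (0, 2)], (1.5, 1.0)),
          DisplayFrame("B", [(4, 0), (7, 0), (7, 2), (4, 2)], (5.5, 1.0))]
print(select_targets(frames, focus=(2.5, 1.0), radius=1.2))  # ['A']
```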
Step 203: Performing dynamic processing on the current target still image so that it becomes a current dynamic image or video, which is played and displayed.
After screening out the target static images within (or most relevant to) the user's attention range and the non-target static images outside it, the target static images are dynamically processed on the basis of the first display picture, so that they become dynamic images or videos and are played, while the non-target static images remain statically displayed; this yields a second display picture, which is displayed in the display plane. As shown in Fig. 5, the current display plane shows dynamic images or videos for display objects A, C, and D and static images for display objects B, E, and F.
In one embodiment, the dynamic processing of the target static image may consist of finding other images associated with it and composing them into a video. The specific steps include: acquiring attribute information of the current target static image; determining a second image associated with the current target static image according to the attribute information; and generating a dynamic video based on the current target static image and the second image, then playing and displaying it. The attribute information includes the location, the persons depicted, the time of shooting/acquisition, and the like; association means that at least one piece of attribute information is the same or similar.
Specifically, the location information of the current target still image is obtained; for example, if its location is City A, at least one second image whose location is City A is downloaded from the cloud server according to that information, and the target still image and all downloaded second images are then composed into a video for playback. As another example, if the person in the current target still image is XX, the time is May 6, 2021, and the place is City B, at least one second image of XX shot in City B on May 6, 2021 can be found locally or on the Internet via the cloud server, and the target still image and all such second images are composed into a video for playback.
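A minimal sketch of this attribute-based association, assuming each image carries location, person, and date metadata and that an image counts as associated when at least one attribute matches; all names and values are placeholders.

```python
from dataclasses import dataclass

@dataclass
class ImageMeta:
    path: str
    location: str
    person: str
    date: str  # "YYYY-MM-DD"

def find_associated(target: ImageMeta, library: list) -> list:
    """Second images share at least one attribute with the target still image."""
    return [m.path for m in library
            if m.path != target.path
            and (m.location == target.location
                 or m.person == target.person
                 or m.date == target.date)]

target = ImageMeta("a.jpg", "City A", "XX", "2021-05-06")
library = [ImageMeta("b.jpg", "City A", "YY", "2020-01-01"),
           ImageMeta("c.jpg", "City B", "ZZ", "2019-07-09")]
print(find_associated(target, library))  # ['b.jpg'] (same location)
```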
In one embodiment, an image similar to the target static image can also be found and used to make a dynamic image. The specific steps include: acquiring a third image similar to the current target static image; and generating a dynamic image based on the current target static image and the third image, then playing and displaying it. Specifically, a third image whose features (image features) are similar to those of the target still image is obtained, and the dynamic image is produced by dynamic processing, which may combine the target still image and the third image to achieve an effect such as image shaking.
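As one possible realization of this combination, a minimal sketch that alternates the target still image and a similar third image as the frames of a GIF, producing a simple shaking-style dynamic image; it assumes Pillow, and the file names are placeholders.

```python
from PIL import Image

target = Image.open("target.jpg").convert("RGB")    # current target still image
similar = Image.open("similar.jpg").convert("RGB")  # similar third image
similar = similar.resize(target.size)               # align sizes before combining
target.save("dynamic.gif", save_all=True, append_images=[similar],
            duration=150, loop=0)                   # alternate the two frames indefinitely
```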
In one embodiment, the dynamic image can be obtained by dynamically transforming the target feature points of the target static image, which specifically comprises the following steps: obtaining target feature points of a current target static image; and dynamically processing the current target static image based on the target feature points, generating a dynamic image and playing and displaying the dynamic image. The target feature points, such as the five sense organs (eyebrows, eyes, ears, nose and mouth) in the face, are processed to achieve the moving effect of the face, so as to obtain the dynamic image corresponding to the target static image.
In the process of dynamically processing a picture, corresponding dynamic transformation parameters can be obtained according to the current environment or the user's preference, and the picture is dynamically transformed based on them (for example, a dynamic special effect is added, or dynamic rendering is performed based on dynamic rendering parameters) to finally obtain a picture with a dynamic effect. For example, the content of the picture can be identified and transformed in a targeted manner (adding a drifting effect to static clouds, a ripple effect to a static water surface, and so on); for a picture whose content is hard to identify, a layer of dynamic border can be superimposed around it; and dynamic special effects can also simply be added to the picture. The dynamic transformation parameters can include dynamic special effects (explosion, smoke, liquid, lighting, distortion, color change, blurring, shading, shaking, added speckle, etc.), a 90-degree clockwise rotation, a 10-pixel rightward shift, up-down flipping, and the like. The dynamic transformation parameters may also be dynamic rendering parameters obtained by algorithms such as NeX (Real-time View Synthesis with Neural Basis Expansion), LLFF (Local Light Field Fusion), NeRF (Neural Radiance Fields), and SRN (Scene Representation Networks).
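A sketch combining a few of the transformation parameters named above (90-degree clockwise rotation, a 10-pixel rightward shift, up-down flipping, mild blurring) into per-frame transforms and writing the result as a GIF, assuming Pillow; the step schedule and file names are invented for illustration.

```python
from PIL import Image, ImageChops, ImageFilter

def dynamic_frame(img: Image.Image, step: int) -> Image.Image:
    """Build one frame of a simple dynamic effect from the listed parameters."""
    frame = img.rotate(-90 * (step % 4))                       # clockwise 90 degrees per step
    frame = ImageChops.offset(frame, 10 * step, 0)             # shift right 10 pixels per step
    if step % 2:
        frame = frame.transpose(Image.FLIP_TOP_BOTTOM)         # up-down flip on odd steps
    return frame.filter(ImageFilter.GaussianBlur(radius=0.5))  # mild blur

still = Image.open("target.jpg").convert("RGB")
frames = [dynamic_frame(still, s) for s in range(8)]
frames[0].save("effect.gif", save_all=True, append_images=frames[1:],
               duration=120, loop=0)
```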
The above takes the case where the target still image is a picture itself. When the target still image is a frame of a video, the video corresponding to that frame can be pulled directly from the cloud server and then played in the current display plane, either in full or as a selected segment. In addition, to make video playback more engaging, dynamic special effects can be added to the video according to the current environment or the user's preference (for example, inserting effects/layers between frames so the video looks smoother, or processing the brightness/color of each frame so they look more uniform) to obtain a richer video picture.
Based on the foregoing, embodiments of the present disclosure provide an image display apparatus. The image display device is configured to execute the image display method provided in the above method embodiment, specifically referring to fig. 6, the device includes:
a first position parameter detection module 601, configured to detect a first position parameter of a focus of attention of a target user in a display plane; a static image is displayed in the display plane;
A target determining module 602, configured to determine a current target still image according to the first location parameter and a second location parameter of each still image in the display plane;
and the dynamic display module 603 is configured to dynamically process the current target still image, so that the current target still image is changed into a current dynamic image or video, and play and display.
In one embodiment, the first location parameter detection module 601 includes:
the information acquisition module is used for acquiring the binocular position information and the head posture information of the target user;
The sight line generating module is used for inputting the binocular position information and the head gesture information into a sight line assessment model to obtain binocular sight lines of the target user;
the focus determining module is used for determining the focus of attention of the target user according to the binocular vision line and the display plane;
And the position parameter determining module is used for determining a first position parameter of the attention focus in the display plane according to the display plane coordinate system and the attention focus.
In one embodiment, the image display apparatus further comprises:
The prediction module is used for predicting a third position parameter according to the position prediction model and the first position parameter;
A future target determining module, configured to determine a future target still image according to the third location parameter and the second location parameter of each still image in the display plane;
And the dynamic preprocessing module is used for dynamically preprocessing the future target static image so that it becomes a future dynamic image or video, which is played and displayed after the current dynamic image or video has finished playing.
In one embodiment, the targeting module 602 includes:
The second position parameter determining module is used for determining the second position parameter of each static image on the display plane according to the relative position of each static image on the display plane;
the user attention area generating module is used for generating a user attention area according to the first position parameter;
The association parameter judging module is used for determining the association parameter between the second position parameter of each static image on the display plane and the user attention area;
and the object determining module is used for determining the current target static image from the static images according to the association parameters.
In one embodiment, the dynamic presentation module 603 includes:
the attribute information acquisition module is used for acquiring attribute information of the current target static image;
the associated image determining module is used for determining a second image associated with the current target static image according to the attribute information;
And the first video generation module is used for generating a dynamic video based on the current target static image and the second image and playing and exhibiting the dynamic video.
In one embodiment, the dynamic presentation module 603 includes:
A similar image acquisition module, configured to acquire a third image similar to the current target still image;
And the second video generation module is used for generating a dynamic image based on the current target static image and the third image and playing and displaying the dynamic image.
In one embodiment, the dynamic presentation module 603 includes:
The feature point acquisition module is used for acquiring target feature points of the current target static image;
and the dynamic image generation module is used for dynamically processing the current target static image based on the target characteristic points, generating a dynamic image and playing and displaying the dynamic image.
The image display device of the embodiment of the present disclosure may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and are not repeated here.
Unlike existing technology, the image display device provided by the present disclosure is equipped with a target determining module and a dynamic display module: the target determining module determines the target static image within the user's attention range from the static images, and the dynamic display module then dynamically processes the target static image so that it becomes a dynamic image or video and is played and displayed. In this way, image display is diversified and the user experience becomes more engaging.
Correspondingly, the embodiments of the disclosure also provide an electronic device. As shown in Fig. 7, the electronic device may include a processor 701 with one or more processing cores, a wireless (WiFi, Wireless Fidelity) module 702, a memory 703 with one or more computer-readable storage media, an audio circuit 704, a display unit 705, an input unit 706, a sensor 707, a power supply 708, and a radio frequency (RF) circuit 709. Those skilled in the art will appreciate that the configuration shown in Fig. 7 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange components differently. Wherein:
the processor 701 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 703, and calling data stored in the memory 703, thereby performing overall monitoring of the electronic device. In one embodiment, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
WiFi belongs to a short-distance wireless transmission technology, and the electronic equipment can help a user to send and receive e-mails, browse web pages, access streaming media and the like through the wireless module 702, so that wireless broadband Internet access is provided for the user. Although fig. 7 shows a wireless module 702, it is to be understood that it does not belong to the essential constitution of the terminal and can be omitted entirely as required within a range not changing the essence of the invention.
The memory 703 may be used to store software programs and modules, and the processor 701 performs various functional applications and data processing by executing the computer programs and modules stored in the memory 703. The memory 703 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the terminal, etc. In addition, the memory 703 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 703 may also include a memory controller to provide access to the memory 703 by the processor 701 and the input unit 706.
The audio circuit 704 includes a speaker and can provide an audio interface between the user and the electronic device. The audio circuit 704 can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, a microphone converts collected sound signals into electrical signals, which the audio circuit 704 receives and converts into audio data; the audio data are processed by the processor 701 and then sent, for example, to another device via the radio frequency circuit 709, or output to the memory 703 for further processing. The audio circuit 704 may also include an earbud jack to allow peripheral headphones to communicate with the electronic device.
The display unit 705 may be used to display information input by a user or information provided to the user and various graphic user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof.
The input unit 706 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In one embodiment, the input unit 706 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, can collect touch operations by the user on or near it (such as operations performed with a finger, stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. In one embodiment, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 701, and it can also receive and execute commands sent by the processor 701. Touch-sensitive surfaces can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 706 may also include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The electronic device may also include at least one sensor 707, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display area according to the brightness of ambient light; the motion sensor may generate corresponding instructions from gestures or other actions of the user. Other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the electronic device are not described in detail herein.
The electronic device also includes a power supply 708 (e.g., a battery) that provides power to the various components, preferably in logical communication with the processor 701 via a power management system, to manage charging, discharging, and power consumption. The power supply 708 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The radio frequency circuit 709 can be used to receive and transmit signals during the sending and receiving of information or during a call; in particular, after receiving downlink information from a base station, it hands the information to one or more processors 701 for processing, and it also transmits uplink data to the base station. Typically, the radio frequency circuit 709 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the radio frequency circuit 709 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
Although not shown, the electronic device may further include a camera, a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the processor 701 in the electronic device loads the executable files corresponding to the processes of one or more application programs into the memory 703 according to the following instructions, and the processor 701 runs the application programs stored in the memory 703 so as to implement the following functions:
detecting a first position parameter of the focus of attention of the target user in the display plane, wherein a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and performing dynamic processing on the current target static image so that the current target static image becomes a current dynamic image or video, which is then played and displayed.
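For illustration only, the three functions above can be sketched in Python as a gaze point hit-tested against the display rectangle of each static image. StaticImage, find_current_target, and the rectangle hit test are constructions invented for this sketch; the disclosure does not prescribe a particular gaze-tracking method, data structure, or matching rule.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class StaticImage:
        image_id: str
        x: float       # second position parameter: top-left corner
        y: float
        width: float
        height: float

        def contains(self, px: float, py: float) -> bool:
            """True when the attention point falls inside this image."""
            return (self.x <= px <= self.x + self.width
                    and self.y <= py <= self.y + self.height)

    def find_current_target(gaze: Tuple[float, float],
                            images: List[StaticImage]) -> Optional[StaticImage]:
        """Match the first position parameter (the gaze point) against the
        second position parameter of each static image in the display plane."""
        px, py = gaze
        for img in images:
            if img.contains(px, py):
                return img
        return None

    gaze_point = (420.0, 150.0)  # first position parameter from gaze detection
    album = [StaticImage("photo_1", 0, 0, 400, 300),
             StaticImage("photo_2", 400, 0, 400, 300)]
    target = find_current_target(gaze_point, album)
    if target is not None:
        print(f"dynamically display {target.image_id}")  # hand off to the player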
In the foregoing embodiments, each embodiment is described with its own emphasis; for any part of an embodiment that is not described in detail, reference may be made to the detailed descriptions of the other embodiments above, which are not repeated herein.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructions, or by instructions controlling the associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present disclosure provide a computer-readable storage medium in which a plurality of instructions are stored, the instructions being capable of being loaded by a processor to perform the following functions:
detecting a first position parameter of the focus of attention of the target user in the display plane, wherein a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and performing dynamic processing on the current target static image so that the current target static image becomes a current dynamic image or video, which is then played and displayed.
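The concrete dynamization technique is likewise left open by the disclosure. One simple possibility is to render a short zoom-in sequence from the still image and play the frames back as a moving picture; the sketch below does this with the Pillow imaging library, and zoom_frames together with the file names are assumptions made for this example, not the claimed method.

    from PIL import Image

    def zoom_frames(path: str, n_frames: int = 30, max_zoom: float = 1.2):
        """Yield progressively zoomed crops of a still image, resized back to
        the original resolution so they can be played as video frames."""
        src = Image.open(path)
        w, h = src.size
        for i in range(n_frames):
            zoom = 1.0 + (max_zoom - 1.0) * i / (n_frames - 1)
            cw, ch = w / zoom, h / zoom  # the crop window shrinks as zoom grows
            left, top = (w - cw) / 2, (h - ch) / 2
            yield src.crop((int(left), int(top),
                            int(left + cw), int(top + ch))).resize((w, h))

    # Play-and-display stand-in: write the frames out as an animated GIF.
    frames = list(zoom_frames("photo.jpg"))  # hypothetical file name
    frames[0].save("photo.gif", save_all=True, append_images=frames[1:],
                   duration=40, loop=0)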
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not repeated herein.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Since the instructions stored in the computer-readable storage medium can perform the steps of any method provided in the embodiments of the present disclosure, they can achieve the beneficial effects achievable by any such method; for details, see the previous embodiments, which are not repeated herein.
Meanwhile, the embodiments of the present disclosure provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various optional implementations described above. For example, the following functions are implemented:
detecting a first position parameter of the focus of attention of the target user in the display plane, wherein a static image is displayed in the display plane;
determining a current target static image according to the first position parameter and a second position parameter of each static image in the display plane;
and performing dynamic processing on the current target static image so that the current target static image becomes a current dynamic image or video, which is then played and displayed.
The image display method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present disclosure have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present disclosure, and the above description of the embodiments is intended only to help in understanding the method of the present disclosure and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in accordance with the idea of the present disclosure. In view of the above, the contents of this specification should not be construed as limiting the present disclosure.

Claims (10)

CN202111131215.3A | Priority 2021-09-26 | Filed 2021-09-26 | Image display method, device, electronic equipment and computer readable storage medium | Active | CN113849142B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111131215.3A (CN113849142B (en)) | 2021-09-26 | 2021-09-26 | Image display method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number | Publication Date
CN113849142A (en) | 2021-12-28
CN113849142B (en) | 2024-05-28

Family

ID=78980206

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111131215.3A (CN113849142B (en), Active) | Image display method, device, electronic equipment and computer readable storage medium | 2021-09-26 | 2021-09-26

Country Status (1)

Country | Link
CN (1) | CN113849142B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114842390A (en)* | 2022-05-13 | 2022-08-02 | Bank of China Co., Ltd. | Bank product pushing method and device
CN116579963A (en)* | 2023-05-19 | 2023-08-11 | Shanghai Xinsai Cloud Computing Technology Co., Ltd. | Processing system and method for generating a dynamic image from a static image

Citations (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2010068347A (en)* | 2008-09-11 | 2010-03-25 | Ricoh Co Ltd | Image forming apparatus, image forming method, and image forming program
CN106851114A (en)* | 2017-03-31 | 2017-06-13 | Nubia Technology Co., Ltd. | Photo display and photo generation apparatus and method, and terminal
CN107436879A (en)* | 2016-05-25 | 2017-12-05 | Guangzhou Dongjing Computer Technology Co., Ltd. | Loading method and loading system for a dynamic picture
CN108139799A (en)* | 2016-04-22 | 2018-06-08 | SZ DJI Technology Co., Ltd. | System and method for processing image data based on a user's region of interest (ROI)
CN108271021A (en)* | 2016-12-30 | 2018-07-10 | Axis AB | Block level update rate control based on gaze sensing
CN109544667A (en)* | 2018-11-08 | 2019-03-29 | Samsung Electronics (China) R&D Center | Picture preview method and device
CN110022445A (en)* | 2019-02-26 | 2019-07-16 | Vivo Software Technology Co., Ltd. | Content output method and terminal device
CN110019897A (en)* | 2017-08-01 | 2019-07-16 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and device for displaying pictures
CN110245250A (en)* | 2019-06-11 | 2019-09-17 | Guangdong Oppo Mobile Telecommunications Co., Ltd. | Image processing method and related device
KR20190122461A (en)* | 2018-04-20 | 2019-10-30 | Kakao Corp. | Method and apparatus of displaying preview image
CN110853073A (en)* | 2018-07-25 | 2020-02-28 | Beijing Samsung Telecommunications Technology Research Co., Ltd. | Method, device, equipment, and system for determining a point of attention, and information processing method
CN111046744A (en)* | 2019-11-21 | 2020-04-21 | Shenzhen Intellifusion Technologies Co., Ltd. | Method, device, readable storage medium, and terminal device for detecting a region of interest
CN111083553A (en)* | 2019-12-31 | 2020-04-28 | Lenovo (Beijing) Co., Ltd. | Image processing method and image output device
CN111309146A (en)* | 2020-02-10 | 2020-06-19 | Guangdong Oppo Mobile Telecommunications Co., Ltd. | Image display method and related products
CN111432278A (en)* | 2020-02-27 | 2020-07-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Video control method, device, terminal, and storage medium
CN111768352A (en)* | 2020-06-30 | 2020-10-13 | Guangdong Oppo Mobile Telecommunications Co., Ltd. | Image processing method and device
CN111970566A (en)* | 2020-08-26 | 2020-11-20 | Beijing Dajia Internet Information Technology Co., Ltd. | Video playing method and device, electronic device, and storage medium
WO2021139353A1 (en)* | 2020-01-10 | 2021-07-15 | Tencent Technology (Shenzhen) Co., Ltd. | Item display method and apparatus, computer device, and storage medium
CN113268622A (en)* | 2021-04-21 | 2021-08-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Picture browsing method and device, electronic device, and storage medium
CN113313072A (en)* | 2021-06-28 | 2021-08-27 | Ping An Life Insurance Company of China, Ltd. | Method, device, equipment, and storage medium for constructing an intelligent dynamic page

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP5363259B2 (en)* | 2009-09-29 | 2013-12-11 | Fujifilm Corporation | Image display device, image display method, and program
EP3746912B1 (en)* | 2018-01-31 | 2023-08-16 | Nureva Inc. | Method, apparatus and computer-readable media for converting static objects into dynamic intelligent objects on a display device

Also Published As

Publication number | Publication date
CN113849142A (en) | 2021-12-28

Similar Documents

Publication | Title
US12136210B2 (en) | Image processing method and apparatus
US10891799B2 (en) | Augmented reality processing method, object recognition method, and related device
WO2020216054A1 (en) | Sight line tracking model training method, and sight line tracking method and device
WO2020177582A1 (en) | Video synthesis method, model training method, device and storage medium
WO2019184889A1 (en) | Method and apparatus for adjusting augmented reality model, storage medium, and electronic device
CN108520552A (en) | Image processing method and device, storage medium, and electronic device
CN108712603B (en) | Image processing method and mobile terminal
CN108234882B (en) | Image blurring method and mobile terminal
CN103871051A (en) | Image processing method, device, and electronic device
CN111010508B (en) | Shooting method and electronic device
CN108491775A (en) | Image correction method and mobile terminal
CN107592466A (en) | Photographing method and mobile terminal
CN110290426B (en) | Method, device, and equipment for displaying resources, and storage medium
CN109272473B (en) | Image processing method and mobile terminal
CN109495616B (en) | Photographing method and terminal device
CN109104578B (en) | Image processing method and mobile terminal
CN111401463B (en) | Method for outputting detection result, electronic device, and medium
CN108156374A (en) | Image processing method, terminal, and readable storage medium
CN111385481A (en) | Image processing method and device, electronic device, and storage medium
CN113849142B (en) | Image display method, device, electronic equipment and computer readable storage medium
CN103869977B (en) | Image display method, device, and electronic device
CN115375835A (en) | Three-dimensional model establishing method based on two-dimensional key points, computer, and storage medium
CN109639981B (en) | Image shooting method and mobile terminal
CN109104573B (en) | Method for determining focusing point and terminal device
CN113705309B (en) | Method, device, electronic device, and storage medium for determining scene type

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
