Background
With the rapid development of internet technology, the era of live broadcasting has arrived: many enterprises promote their brands through live broadcasts, and the traffic brought by live broadcasting has become the preferred channel of many merchants. Live broadcasting is an information distribution mode over a network in which content is produced and distributed synchronously with the occurrence and development of an event, with a bidirectional on-site circulation process. In some live broadcasts, the anchor recommends commodities to users and interacts with them around those commodities; compared with image-and-text recommendation, such live broadcast recommendation greatly improves the user's shopping experience.
However, in existing technical solutions, a user in a live broadcast room generally sees only the complete picture when watching the anchor, which cannot meet the requirements of different users; the interaction mode therefore needs further improvement.
Disclosure of Invention
Embodiments of the present application provide an image magnification method, an image magnification device, and a storage medium, which are used to meet the different requirements of different users when watching live broadcasts.
In a first aspect, an embodiment of the present application provides an image magnification method, where the method includes:
performing image recognition on an image to be processed to obtain at least one detail area in the image to be processed;
responding to a magnification instruction of a user, and displaying magnification points on the image to be processed, wherein each magnification point corresponds to one detail area;
and responding to a display instruction of the user, and magnifying and displaying the magnification point corresponding to the display instruction.
According to the method, detail areas are identified in the image, and the corresponding magnified area is displayed according to the user's operation. Because the areas the user may locally magnify are determined in advance, the picture is divided into a complete picture and several partial pictures. When watching a live broadcast, a user who wants a magnified view of a picture of interest can obtain that partial content from the complete live picture through a magnification operation. This meets the requirements of different users for different points of interest in the live picture and improves the viewing experience.
In a possible implementation manner, before performing image recognition on the image to be processed to obtain the at least one detail area in the image to be processed, the method further includes:
responding to a playing instruction of a user, and playing a live video;
and taking the currently played live video picture as the image to be processed.
According to the method, a live video picture is taken as the image to be processed, so the method can be applied to the live broadcasting field and gives the user a better experience when watching live broadcasts.
In a possible implementation manner, the magnifying and displaying, in response to the display instruction of the user, the magnification point corresponding to the display instruction includes:
responding to a display instruction of the user, and displaying the magnification point corresponding to the display instruction in full screen; or
responding to a display instruction of the user, magnifying the magnification point corresponding to the display instruction by a preset multiple, and displaying the magnified picture floating over the image to be processed.
According to the method, the magnified area can be displayed flexibly through different magnification modes, so that the requirements of different users can be met.
In one possible implementation, the method further includes:
and responding to a display instruction of the user: while the magnification point corresponding to the display instruction is displayed in full screen, displaying the image to be processed as a thumbnail floating over the magnified picture.
According to the method, when a partial picture is displayed in full screen, the user can still follow the current live picture through the thumbnail, which improves the user experience.
In a possible implementation manner, after displaying the image to be processed as a thumbnail, the method further includes:
responding to a restore instruction of the user, and displaying the image to be processed in full screen.
According to the method, the complete picture is restored through the restore instruction, so the user can watch the complete picture again after viewing a partial picture, which improves the user experience.
In a possible implementation manner, the performing image recognition on the image to be processed to obtain at least one detail area in the image to be processed includes:
performing image recognition on the image to be processed, and determining the image content of the image to be processed;
and determining the detail areas in the image to be processed according to the positions of objects in the image content.
According to the method, the detail areas are determined according to the positions of objects in the image content, so the partial pictures better match what the user wants to see, which improves the user experience.
In a second aspect, an image magnification device provided in an embodiment of the present application includes:
the identification module is used for carrying out image identification on the image to be processed to obtain at least one detail area in the image to be processed;
the display module is used for responding to a magnification instruction of a user and displaying magnification points on the image to be processed, wherein each magnification point corresponds to one detail area;
and the amplifying module is used for responding to a display instruction of the user and magnifying and displaying the magnification point corresponding to the display instruction.
In one possible implementation, the apparatus further includes:
the playing module is used for playing a live video in response to a playing instruction of a user, before the identification module performs image recognition on the image to be processed to obtain the at least one detail area in the image to be processed;
and the determining module is used for taking the currently played live video picture as the image to be processed.
In one possible implementation, the amplifying module includes:
the first amplification unit is used for responding to a display instruction of the user and displaying the magnification point corresponding to the display instruction in full screen;
and the second amplification unit is used for responding to a display instruction of the user, magnifying the magnification point corresponding to the display instruction by a preset multiple, and displaying the magnified picture floating over the image to be processed.
In one possible implementation, the apparatus further includes:
and the suspension display unit is used for displaying the image to be processed as a thumbnail floating over the magnified picture while the first amplification unit, in response to a display instruction of the user, displays the corresponding magnification point in full screen.
In one possible implementation, the apparatus further includes:
and the restoring unit is used for responding to a restore instruction of the user and displaying the image to be processed in full screen after the suspension display unit displays the image to be processed as a thumbnail.
In one possible implementation, the identification module includes:
the image content determining unit is used for carrying out image recognition on an image to be processed and determining the image content of the image to be processed;
and the detail area determining unit is used for determining a detail area in the image to be processed according to the position of the object in the image content.
In a third aspect, a computing device is provided, comprising at least one processing unit, and at least one memory unit, wherein the memory unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the steps of any of the image magnification methods described above.
In one embodiment, the computing device may be a server or a terminal device.
In a fourth aspect, there is provided a computer readable medium storing a computer program executable by a terminal device, the program, when run on the terminal device, causing the terminal device to perform the steps of any of the image magnification methods described above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Detailed Description
In order to meet the different requirements of different users when watching live broadcasts, the embodiments of the present application provide an image magnification method, an image magnification device, and a storage medium. To better understand the technical solution provided by the embodiments of the present application, its basic principle is briefly described below:
When a user enters a live broadcast room to watch a live broadcast, the live picture is fixed: the anchor operates the camera and decides what the picture shows. Many elements appear in the picture during live shopping, mainly the background, the anchor, and the products. A viewer may be interested in the anchor or in the commodities, but cannot decide what the camera shoots, and therefore may not see the content he or she wants to focus on.
In existing technical solutions, a user in a live broadcast room generally sees only the complete picture when watching the anchor. When the anchor needs to show local details of the complete picture, the interface can no longer present the complete picture, so some users who want the complete picture cannot obtain it. Conversely, when users want to enlarge local details, different users want to see different content, and the anchor cannot meet all of their requirements; the interaction mode therefore needs further improvement.
To solve the above problem, the live picture may be partially magnified. In view of this, embodiments of the present application provide an image magnification method, device, and storage medium, in which detail areas are determined by recognizing the image and the corresponding magnified area is displayed according to the user's operation. Because the areas the user may locally magnify are determined in advance, the picture is divided into a complete picture and several partial pictures. When watching a live broadcast, a user who wants a magnified view of a picture of interest can obtain that partial content from the complete live picture through a magnification operation, which meets the requirements of different users for different points of interest in the live picture and improves the viewing experience.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein merely illustrate and explain the present application and are not intended to limit it, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The image magnification method provided in the embodiments of the present application is further explained below. As shown in fig. 1, the method comprises the following steps:
S101: performing image recognition on the image to be processed to obtain at least one detail area in the image to be processed.
The image to be processed is obtained from the live broadcast picture. This may be specifically implemented as follows:
responding to a playing instruction of a user, and playing a live video; and taking the currently played live video picture as the image to be processed.
In the embodiment of the application, a user can watch live broadcasts through an application program on an intelligent terminal; after entering a live broadcast room, a live picture is obtained. The current live picture is then taken as the image to be processed, and image recognition is performed on it to determine the detail areas.
Therefore, the method can be applied to the live broadcast field by taking the live broadcast video picture as the image to be processed, so that the user experience is better when watching the live broadcast.
It should be noted that image recognition is performed in real time as the live broadcast plays. That is, if there are 10 frames of live video, image recognition is performed on each of the 10 frames as the frames are played.
After the image is recognized, the detail areas of the image can be determined, where content-rich parts of the picture serve as detail areas. This may be specifically implemented as follows:
carrying out image recognition on an image to be processed, and determining the image content of the image to be processed; and determining a detail area in the image to be processed according to the position of the object in the image content.
In the embodiment of the application, the composition of an image may comprise a background, a person, and articles. After image recognition, the content of the image, that is, the positional relationship between the objects, can be obtained; for example, the person may be in the center of the image and an article to its left or right. After the position of each object is determined, those positions may be used as the criterion for determining the detail areas; for example, the center, left, and right positions of the image serve as its detail areas.
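As a minimal sketch of the position-based criterion described above, suppose a detector has already produced bounding boxes of the form (x, y, w, h); the one-third split of the frame below is an illustrative heuristic, not the claimed implementation.

```python
def region_of(bbox, image_width):
    """Classify a bounding box as a 'left', 'center', or 'right' detail area
    by which horizontal third of the image its center falls into."""
    x, y, w, h = bbox
    cx = x + w / 2
    if cx < image_width / 3:
        return "left"
    if cx < 2 * image_width / 3:
        return "center"
    return "right"

# A person centered in a 900-pixel-wide frame, an article on the left:
print(region_of((400, 100, 100, 300), 900))  # center (cx = 450)
print(region_of((20, 200, 120, 120), 900))   # left (cx = 80)
```

In practice the classification could be finer-grained (a grid, or the raw box itself), but any such rule turns detected object positions into candidate detail areas.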
In a live broadcast that the user simply watches, the detail areas focus on the positions of persons; in a live broadcast room in which the anchor recommends commodities to the audience, the detail areas may focus on the positions of articles.
It should be noted that one article or person may yield multiple detail areas. For example, the head, upper body, and lower body of the anchor are each a detail area, so one person has 3 detail areas.
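The head / upper body / lower body split can be sketched as simple box arithmetic; the 1/5 and 2/5 proportions below are illustrative assumptions, not figures from the application.

```python
def split_person(bbox):
    """Split a person's bounding box (x, y, w, h) into three detail areas:
    head (top ~1/5), upper body (next ~2/5), and lower body (the rest)."""
    x, y, w, h = bbox
    head_h = h // 5
    upper_h = 2 * h // 5
    return [
        (x, y, w, head_h),                                    # head
        (x, y + head_h, w, upper_h),                          # upper body
        (x, y + head_h + upper_h, w, h - head_h - upper_h),   # lower body
    ]

print(split_person((100, 0, 80, 300)))  # three detail areas from one person
```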
Thus, because the detail areas are determined by the positions of objects in the image content, the partial pictures better match what the user wants to see, which improves the user experience.
After the detail areas are determined, partial pictures are divided according to them. For example, after a detail area is determined, the image to be processed is enlarged by a preset magnification factor, and an area the size of the display screen that contains the detail area is selected on the enlarged image as the partial picture.
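The selection of a screen-sized window on the enlarged image can be sketched as follows; the zoom factor and sizes are illustrative, and the clamping rule (keep the window inside the enlarged image) is an assumption about the intended behavior.

```python
def local_picture(detail, zoom, image_size, screen_size):
    """Return a screen-sized crop window (left, top, w, h), in coordinates of
    the image enlarged by `zoom`, centered on the detail area and clamped to
    the enlarged image bounds."""
    dx, dy, dw, dh = detail
    img_w, img_h = image_size
    scr_w, scr_h = screen_size
    cx = (dx + dw / 2) * zoom          # detail-area center on the enlarged image
    cy = (dy + dh / 2) * zoom
    left = min(max(cx - scr_w / 2, 0), img_w * zoom - scr_w)
    top = min(max(cy - scr_h / 2, 0), img_h * zoom - scr_h)
    return (int(left), int(top), scr_w, scr_h)

print(local_picture((100, 100, 50, 50), 2, (1000, 600), (400, 300)))
```

A renderer would then crop this window from the enlarged frame to obtain the partial picture containing the detail area.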
S102: responding to a magnification instruction of a user, and displaying magnification points on the image to be processed, wherein each magnification point corresponds to one detail area.
S103: and responding to a display instruction of the user, and magnifying and displaying the magnification point corresponding to the display instruction.
In the embodiment of the application, after the detail areas of the image to be processed are obtained, the magnification effect is realized through the user's operations. Fig. 2 shows a live broadcast display screen with a magnifying key on the right side of the image; if the user clicks the magnifying key while watching the live broadcast, magnification points are displayed on the picture. As shown in fig. 3, there are three magnification points, on the hat, the person's face, and the animal, and the user can select the desired local area from the displayed magnification points.
After the user clicks a magnification point, the detail area can be magnified in different modes.
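The S101-S103 flow can be sketched end to end; the detector and all coordinates below are hypothetical placeholders standing in for real image recognition.

```python
def detect_detail_areas(image):
    # Stand-in for S101 image recognition: boxes for the hat, face, and animal.
    return [(120, 20, 60, 40), (110, 70, 80, 80), (300, 200, 100, 120)]

def magnification_points(image):
    # S102: one magnification point per detail area, anchored at its center.
    return [(x + w // 2, y + h // 2) for (x, y, w, h) in detect_detail_areas(image)]

def select_point(points, index):
    # S103: return the point chosen by the user's display instruction.
    return points[index]

points = magnification_points(None)
print(select_point(points, 2))  # center of the third detail area (the animal)
```

The selected point would then drive one of the two display modes described below.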
In the first mode, in response to a display instruction of the user, the magnification point corresponding to the display instruction is displayed in full screen.
In the embodiment of the application, the partial picture corresponding to the detail area is displayed in a full screen mode, so that the picture of the partial area can be watched more clearly by a user.
So that the user can follow the current live picture while watching the clearer partial picture, the complete picture can be displayed as a thumbnail. Specifically: in response to a display instruction of the user, while the magnification point corresponding to the display instruction is displayed in full screen, the image to be processed is displayed as a thumbnail floating over the magnified picture.
Fig. 4 shows such a display screen: the user has clicked the magnification point on the animal, the corresponding partial picture is displayed in full screen, and the complete current live picture floats at the lower right corner. Thus, when a partial picture is displayed in full screen, the user can still follow the current live picture through the thumbnail, which improves the user experience.
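Placing the floating thumbnail at the lower-right corner reduces to a small layout computation; the 25% scale and 16-pixel margin below are illustrative assumptions, not values from the application.

```python
def thumbnail_rect(screen_size, scale=0.25, margin=16):
    """Return (left, top, w, h) of the floating thumbnail of the complete
    picture, anchored at the lower-right corner of the screen."""
    sw, sh = screen_size
    tw, th = int(sw * scale), int(sh * scale)
    return (sw - tw - margin, sh - th - margin, tw, th)

print(thumbnail_rect((1920, 1080)))  # lower-right corner of a 1080p screen
```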
If the user wants to restore the initial state, the user can click the thumbnail of the floating state, and the specific implementation can be as follows: and responding to a restoring instruction of a user, and displaying the image to be processed in a full screen mode.
Therefore, the complete picture is restored through the restoration instruction, the user can watch the complete picture again after watching the partial picture, and the user experience is improved.
In the second mode, in response to a display instruction of the user, the magnification point corresponding to the display instruction is magnified by a preset multiple, and the magnified picture is displayed floating over the image to be processed.
In the embodiment of the application, after the user clicks the magnification point, the complete picture is still displayed in full screen, while the partial picture corresponding to the magnification point is displayed in a floating window.
The user may click on the partial screen in the floating state to restore the original state.
In this way, because the areas the user may locally magnify are determined in advance, the picture is divided into a complete picture and several partial pictures. When watching a live broadcast, a user who wants a magnified view of a picture of interest can obtain that partial content from the complete live picture through a magnification operation, which meets the requirements of different users for different points of interest in the live picture and improves the viewing experience.
In another embodiment of the application, after the user clicks the magnifying key on the display screen, the entire live picture can remain displayed instead of showing magnification points.
The user then clicks anywhere on the live picture, and a response area around the click is magnified. In this way, the user decides where to magnify according to his or her own needs, which makes the magnified area more flexible.
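The click-to-zoom variant above can be sketched as centering a response area on the tap and clamping it to the screen; the default region size is an illustrative assumption.

```python
def region_around_click(click, screen_size, region_size=(200, 150)):
    """Return a zoom region (left, top, w, h) centered on the user's click,
    clamped so it stays inside the screen."""
    cx, cy = click
    rw, rh = region_size
    sw, sh = screen_size
    left = min(max(cx - rw // 2, 0), sw - rw)
    top = min(max(cy - rh // 2, 0), sh - rh)
    return (left, top, rw, rh)

print(region_around_click((400, 300), (800, 600)))  # centered region
print(region_around_click((10, 10), (800, 600)))    # clamped to the corner
```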
Based on the same inventive concept, the embodiment of the application also provides an image amplifying device. As shown in fig. 5, the apparatus includes:
the identification module 501 is configured to perform image recognition on an image to be processed to obtain at least one detail area in the image to be processed;
the display module 502 is configured to respond to a magnification instruction of a user and display magnification points on the image to be processed, wherein each magnification point corresponds to one detail area;
and the amplifying module 503 is configured to respond to a display instruction of the user, and magnify and display the magnification point corresponding to the display instruction.
In one possible implementation, the apparatus further includes:
the playing module is used for playing a live video in response to a playing instruction of a user, before the recognition module 501 performs image recognition on the image to be processed to obtain the at least one detail area in the image to be processed;
and the determining module is used for taking the currently played live video picture as the image to be processed.
In one possible implementation, the amplifying module 503 includes:
the first amplification unit is used for responding to a display instruction of the user and displaying the magnification point corresponding to the display instruction in full screen;
and the second amplification unit is used for responding to a display instruction of the user, magnifying the magnification point corresponding to the display instruction by a preset multiple, and displaying the magnified picture floating over the image to be processed.
In one possible implementation, the apparatus further includes:
and the suspension display unit is used for displaying the image to be processed as a thumbnail floating over the magnified picture while the first amplification unit, in response to a display instruction of the user, displays the corresponding magnification point in full screen.
In one possible implementation, the apparatus further includes:
and the restoring unit is used for responding to a restore instruction of the user and displaying the image to be processed in full screen after the suspension display unit displays the image to be processed as a thumbnail.
In one possible implementation, the identification module 501 includes:
the image content determining unit is used for carrying out image recognition on an image to be processed and determining the image content of the image to be processed;
and the detail area determining unit is used for determining a detail area in the image to be processed according to the position of the object in the image content.
Based on the same technical concept, the present application further provides a terminal device 600. As shown in fig. 6, the terminal device 600 is configured to implement the methods described in the above method embodiments, for example the embodiment shown in fig. 1, and may include a memory 601, a processor 602, an input unit 603, and a display panel 604.
The memory 601 is configured to store computer programs executed by the processor 602. The memory 601 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the terminal device 600, and the like. The processor 602 may be a central processing unit (CPU), a digital processing unit, or the like. The input unit 603 may be configured to obtain a user instruction input by a user. The display panel 604 is configured to display information input by or provided to the user; in this embodiment of the present application, the display panel 604 is mainly used to display the display interface of each application program in the terminal device and the controls displayed in each display interface. Optionally, the display panel 604 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The embodiment of the present application does not limit the specific connection medium among the memory 601, the processor 602, the input unit 603, and the display panel 604. In the embodiment of the present application, the memory 601, the processor 602, the input unit 603, and the display panel 604 are connected by a bus 605 in fig. 6, shown as a thick line; the connection manner between other components is merely illustrative and not limiting. The bus 605 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean there is only one bus or one type of bus.
The memory 601 may be a volatile memory, such as a random-access memory (RAM); the memory 601 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 601 may also be a combination of the above memories.
The processor 602 is configured to invoke the computer program stored in the memory 601 to perform the embodiment shown in fig. 1.
The embodiment of the present application further provides a computer-readable storage medium, which stores the computer-executable instructions required by the above processor, including the program to be executed by the processor.
In some possible embodiments, aspects of an image magnification method provided by the present application may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps of an image magnification method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the terminal device. For example, the terminal device may perform the embodiment as shown in fig. 1.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
An image magnification program product for an embodiment of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be executable on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.