CN113742183A - Screen recording method, terminal and storage medium - Google Patents

Screen recording method, terminal and storage medium

Info

Publication number
CN113742183A
CN113742183A (application CN202010472681.7A)
Authority
CN
China
Prior art keywords
image frame
target area
image
terminal
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010472681.7A
Other languages
Chinese (zh)
Inventor
李冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202010472681.7A
Publication of CN113742183A
Legal status: Pending

Links

Images

Classifications

Landscapes

Abstract



The invention discloses a screen recording method, a terminal and a storage medium for protecting users' private information during screen recording. In an embodiment of the present invention, the terminal records the user interface currently displayed on the display screen in response to a screen recording instruction triggered by the user, obtaining a recorded video that includes at least one image frame; for any image frame in the recorded video, if the frame contains a preset type of target area, that target area is blurred; the processed target recorded video is then generated from the blurred image frames. The embodiment of the present invention can determine whether an image frame contains a target area holding the user's private information and blur that area, thereby covering the private information without requiring the user to process the recorded video manually. This protects user privacy, improves the security of screen recording, and improves the user experience.


Description

Screen recording method, terminal and storage medium
Technical Field
The invention relates to the technical field of terminal display, in particular to a screen recording method, a terminal and a storage medium.
Background
The screen recording means that the operation process and the display content of a user on a terminal display interface are recorded and stored locally in a video form.
The existing screen recording technology is to obtain each frame of original image of a terminal display interface, compress and encode the obtained original image, and obtain a final screen recording video.
During screen recording, the original images are directly compressed and encoded, so if a user performs privacy-related operations such as entering a password or chatting, an original image containing that private information becomes a frame of the recorded video. When the user publishes the recorded video to a network, the private information can be revealed, creating a potential security hazard. In conclusion, how to protect user privacy during screen recording is a problem to be solved.
Disclosure of Invention
The invention provides a screen recording method, a terminal and a storage medium, which are used for protecting privacy information of a user in a screen recording process.
According to a first aspect of the exemplary embodiments, there is provided a terminal comprising a display screen, a processor, and a receiving unit:
the receiving unit is configured to receive a screen recording instruction triggered by a user;
the processor is configured to respond to a screen recording instruction triggered by a user, record a user interface currently displayed in a display screen, and obtain a recorded video containing at least one image frame; for any image frame in the recorded video, if the image frame comprises a preset type target area, performing fuzzification processing on the target area in the image frame; generating a processed target recording video according to the image frame subjected to the fuzzification processing;
the display screen is configured to display a user interface.
In the embodiment of the invention, for any image frame in the recorded video, it can be judged whether the frame comprises a preset type of target area, i.e. whether the frame contains user privacy information that needs to be blurred. Once the frame is confirmed to comprise such a target area, that area is blurred so that the user's privacy information is covered, and the processed target recorded video is generated from the blurred image frames. The user does not need to edit the recorded video manually: the target areas containing private content are blurred automatically, so that the privacy of the user is protected, the security of screen recording is improved, and the user experience is improved.
In some exemplary embodiments, the processor is configured to:
determine whether the image frame comprises a preset type of target area as follows:
determining similarity between the image frame and a reference image in a preset image set;
and if the similarity between the image frame and the reference image is determined to be larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
In the above embodiment, whether the image frame includes the preset type of target area is determined from its similarity to a reference image. Since each reference image is an image containing a preset type of target area, a similarity greater than the preset threshold indicates that the image frame also includes such an area, i.e. that an area containing user privacy information exists in the frame and blurring is necessary.
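As an illustration of this similarity check, the following Python sketch compares a frame against a set of reference images using a normalized pixel-difference score. The metric, the 0.9 threshold, and the grayscale list-of-lists image representation are all illustrative assumptions, not the patent's prescribed implementation:

```python
def similarity(frame, reference):
    """Return a similarity score in [0, 1] between two equally sized
    grayscale images, given as 2-D lists of 0-255 pixel values."""
    total = 0
    count = 0
    for row_f, row_r in zip(frame, reference):
        for pf, pr in zip(row_f, row_r):
            total += abs(pf - pr)  # accumulate per-pixel difference
            count += 1
    return 1.0 - total / (255.0 * count)

def frame_needs_blurring(frame, reference_set, threshold=0.9):
    """True if the frame matches any reference image closely enough,
    i.e. it is deemed to contain a preset type of target area."""
    return any(similarity(frame, ref) > threshold for ref in reference_set)
```

In practice the comparison would run on full-resolution frames, and a perceptual or feature-based similarity measure could replace the raw pixel difference.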
In some exemplary embodiments, the processor is configured to:
determine a target region in the image frame as follows:
determining the position information of a target area corresponding to a reference image according to the corresponding relation between the preset reference image and the position information of the target area;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In the above embodiment, the target area in the image frame is determined from the position information of the target area of the reference image whose similarity to the frame exceeds the preset threshold, so that the area containing the user privacy information can be located for the subsequent blurring.
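The lookup described above can be sketched as a simple correspondence table from reference image to target-area position; the reference-image identifiers and (left, top, right, bottom) coordinates below are hypothetical:

```python
# Hypothetical correspondence table: reference-image id -> target-area
# position (left, top, right, bottom) within the frame.
TARGET_AREAS = {
    "login_page": (40, 120, 280, 160),   # e.g. a password input box
    "chat_page": (0, 200, 320, 480),     # e.g. a message list
}

def locate_target_area(matched_reference_id):
    """Look up the target-area position for the reference image that the
    current frame matched; None if no area is registered for it."""
    return TARGET_AREAS.get(matched_reference_id)
```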
In some exemplary embodiments, the processor is further configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
In this embodiment, the determined blurring layer covers the target area in the image frame, so that the area containing user privacy information presents a blurred visual effect; the user privacy information is thereby protected and the security of screen recording is improved.
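A minimal stand-in for this covering step, assuming grayscale frames stored as 2-D lists: replacing the region with its average value crudely mimics covering it with a blurring layer. A real implementation would composite a shader-generated layer over the frame rather than edit pixels in Python:

```python
def blur_region(image, area):
    """Cover the target area of a grayscale image (2-D list of ints)
    with a single averaged value, a crude stand-in for compositing a
    blurring layer over the area."""
    left, top, right, bottom = area
    pixels = [image[y][x] for y in range(top, bottom) for x in range(left, right)]
    avg = sum(pixels) // len(pixels)  # one flat value covers the region
    for y in range(top, bottom):
        for x in range(left, right):
            image[y][x] = avg
    return image
```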
According to a second aspect of the exemplary embodiments, there is provided a terminal including a display screen, a processor, and a receiving unit:
the receiving unit is configured to receive a screen recording instruction triggered by a user;
the processor is configured to record, in response to a screen recording instruction triggered by a user, the user interface currently displayed on the display screen, and obtain a recorded video containing at least one image frame; if it is determined that the application program currently running in the foreground is in a preset application program set, then for any image frame in the recorded video, if the image frame comprises a preset type of target area, blur the target area in the image frame; and generate a processed target recorded video from the blurred image frames; the application programs in the preset application program set are application programs containing user personal information and/or user security information;
the display screen is configured to display a user interface.
In some exemplary embodiments, the processor is configured to:
determine a target region in the image frame as follows:
determining the position information of the target area corresponding to the reference image according to the preset correspondence between reference images and target-area position information, the reference image being the one in the preset image set whose similarity to the image frame is greater than the preset threshold;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the processor is configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
According to a third aspect of the exemplary embodiments, there is provided a screen recording method, including:
responding to a screen recording instruction triggered by a user, and recording a user interface currently displayed in a display screen by the terminal to obtain a recorded video containing at least one image frame;
for any image frame in the recorded video, if the image frame comprises a preset type target area, the terminal fuzzifies the target area in the image frame;
and the terminal generates a processed target recording video according to the image frame after the fuzzification processing.
In some exemplary embodiments, whether a preset type of target area is included in the image frame is determined according to the following manner:
the terminal determines the similarity between the image frame and a reference image in a preset image set;
and if the similarity between the image frame and the reference image is determined to be larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
In some exemplary embodiments, the target region in the image frame is determined according to the following:
determining the position information of a target area corresponding to a reference image according to the corresponding relation between the preset reference image and the position information of the target area;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the blurring, by the terminal, the target region in the image frame includes:
the terminal determines a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
and the terminal covers a target area in the image frame by using the determined fuzzification layer through a shader.
According to a fourth aspect of the exemplary embodiments, there is provided a screen recording method, including:
responding to a screen recording instruction triggered by a user, and recording a user interface currently displayed in a display screen by the terminal to obtain a recorded video containing at least one image frame;
if the terminal determines that the application program currently running in the foreground is in a preset application program set, then for any image frame in the recorded video, if the image frame comprises a preset type of target area, the terminal blurs the target area in the image frame;
the terminal generates a processed target recording video according to the image frame after the fuzzification processing;
and the application programs in the preset application program set are application programs containing user personal information and/or user security information.
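The foreground-application gate in this fourth aspect can be sketched as a simple set-membership test; the package names below are hypothetical:

```python
# Hypothetical preset set of privacy-sensitive applications.
SENSITIVE_APPS = {"com.example.bank", "com.example.chat"}

def should_scan_frames(foreground_package):
    """Only run the per-frame target-area detection when the foreground
    application is in the preset set of privacy-sensitive applications."""
    return foreground_package in SENSITIVE_APPS
```

Gating on the foreground application avoids the cost of checking every frame when no sensitive application is on screen.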
In some exemplary embodiments, the target region in the image frame is determined according to the following:
the terminal determines the position information of the target area corresponding to the reference image according to the preset correspondence between reference images and target-area position information, the reference image being the one in the preset image set whose similarity to the image frame is greater than the preset threshold;
and the terminal determines a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the blurring, by the terminal, the target region in the image frame includes:
the terminal determines a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
and the terminal covers a target area in the image frame by using the determined fuzzification layer through a shader.
According to a fifth aspect of the exemplary embodiments, there is provided a screen recording apparatus configured to perform the screen recording method according to the third or fourth aspect.
According to a sixth aspect of the exemplary embodiments, there is provided a computer storage medium having stored therein computer program instructions, which when run on a computer, cause the computer to perform the screen recording method according to the third or fourth aspect.
On the basis of common knowledge in the field, the above preferred features can be combined arbitrarily to obtain the preferred embodiments of the present application.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a software architecture of a terminal according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a user interface of a terminal provided by an embodiment of the present invention;
fig. 4 is a flowchart illustrating a screen recording method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an account login page in accordance with an embodiment of the present invention;
FIG. 6 is a diagram illustrating a user triggered screen recording instruction according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a chat interface of an instant messaging application in accordance with an embodiment of the invention;
FIG. 8 is a schematic diagram illustrating a reference image according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating the determination of a target region in an image frame according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating an image frame after a blurring process according to an embodiment of the invention;
fig. 11 is a flowchart illustrating another screen recording method provided by the embodiment of the present invention;
FIG. 12 is a flowchart illustrating a complete screen recording method according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram illustrating a first terminal according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram illustrating a second terminal according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram schematically illustrating a first screen recording device according to an embodiment of the present invention;
fig. 16 schematically shows a structure of a second screen recording apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" in the text only describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Some terms appearing herein are explained below:
1. the term "screen recording" in the embodiment of the present invention means that an operation process and display content of a user on a terminal display interface are recorded and stored locally in the form of a video.
2. The term "blurring processing" in the embodiment of the present invention refers to adjusting the pixel values of the pixels in the image, so that the image exhibits a blurring effect, and the content in the original image is covered.
3. The term "pixel" in the embodiments of the present invention refers to the small squares that make up the image, and these small squares all have a definite position and assigned color value, and the color and position of the small squares determine the appearance of the image.
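Given these definitions, blurring in the sense of term 2 can be illustrated with a simple 3x3 box blur that adjusts each pixel value toward the average of its neighbours; the grayscale list-of-lists representation is an assumption for illustration only:

```python
def box_blur(image):
    """3x3 box blur on a grayscale image (2-D list of 0-255 ints):
    each output pixel is the average of itself and its in-bounds
    neighbours, which smears detail and covers the original content."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out
```

Stronger blurring is obtained by enlarging the kernel or applying the filter repeatedly.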
Fig. 1 shows a schematic structural diagram of a terminal 100.
The following describes an embodiment specifically by taking the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 1 is merely an example, and that the terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of the terminal 100 according to an exemplary embodiment is shown in fig. 1. As shown in fig. 1, the terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; it may receive downlink data from a base station and then send the downlink data to the processor 180 for processing, and may transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs the various functions of the terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate, and may also store various application programs and code for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal 100. In particular, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal 100, which can collect touch operations of a user on or near it, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and the various menus of the terminal 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display the various graphical user interfaces described herein.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal 100; after integration, they may be referred to simply as a touch display screen. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signals into electrical signals, which are then passed to the processor 180 for conversion into digital image signals.
The terminal 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, and a temperature sensor 154. The terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, etc.
The audio circuit 160, speaker 161, and microphone 162 may provide an audio interface between a user and the terminal 100. The audio circuit 160 may transmit the electrical signal converted from received audio data to the speaker 161, which converts the electrical signal into a sound signal for output. The terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. In the other direction, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the RF circuit 110 to be transmitted to, for example, another terminal, or output to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal 100 can help a user to send and receive e-mails, browse webpages, access streaming media, and the like through the Wi-Fi module 170, which provides wireless broadband internet access for the user.
The processor 180 is the control center of the terminal 100; it connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, the processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor may not be integrated into the processor 180. In the present application, the processor 180 may run the operating system, application programs, user interface display, and touch response, as well as the processing methods described in the embodiments of the present application. Further, the processor 180 is coupled with the display unit 130.
The bluetooth module 181 is configured to exchange information with other bluetooth devices having a bluetooth module through the bluetooth protocol. For example, the terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The terminal 100 also includes a power supply 190 (e.g., a battery) for powering the various components. The power supply may be logically connected to the processor 180 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The terminal 100 may also be configured with power buttons for powering the terminal on and off, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal vibrates, an indicator light flashes, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part consists of the functions that the java language needs to call, and the other part is the core library of android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files, and performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the terminal 100 software and hardware in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch click operation whose corresponding control is the camera application icon as an example: the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or video through the camera 140.
The terminal 100 in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a television, and the like.
Fig. 3 is a schematic diagram for illustrating a user interface on a terminal (e.g., terminal 100 of fig. 1). In some implementations, a user can open a corresponding application by touching an application icon on the user interface, or can open a corresponding folder by touching a folder icon on the user interface.
The screen recording means that the content displayed on the terminal display interface is recorded and stored locally in the form of video.
In the existing screen recording technology, each original image frame of the terminal's display interface is captured, and the captured original images are compressed and encoded to obtain the final screen recording video. The screen recording video can be stored in a preset storage space of the terminal, and the user can view it at any time or publish it to a network.
Because the existing screen recording technology directly compresses the acquired original images to generate the screen recording video, when the user performs operations involving privacy during recording, such as entering a password or chatting, the original image containing the user's private information is used directly as a frame of the screen recording video. When the user then wants to upload the screen recording video to a network, the only way to protect privacy is to edit the video or manually add mosaics with a video editing tool.
Based on the above problem, an embodiment of the present invention provides a screen recording method, which is used for protecting privacy information of a user in a screen recording process.
As shown in fig. 4, a screen recording method according to an embodiment of the present invention includes the following steps:
step S401, responding to a screen recording instruction triggered by a user, and recording a user interface currently displayed in a display screen by the terminal to obtain a recorded video comprising at least one image frame;
step S402, aiming at any image frame in the recorded video, if the image frame comprises a preset type target area, the terminal fuzzifies the target area in the image frame;
and S403, the terminal generates a processed target recording video according to the image frame after the fuzzification processing.
According to the screen recording method provided by the embodiment of the present invention, the terminal responds to a screen recording instruction triggered by the user and records the user interface displayed on the display screen to obtain a recorded video containing at least one image frame. After confirming that an image frame includes a preset type of target area, the terminal blurs the target area in the image frame so that the user's private information is covered, and generates a processed target recorded video from the blurred image frames. The target area containing private content can thus be blurred without the user manually processing the recorded video data, which protects user privacy, improves the security of screen recording, and improves the user experience.
In an optional implementation manner of the screen recording method provided in the embodiment of the present invention, the terminal pre-stores a preset image set composed of reference images for which recorded image frames need to be blurred, as well as a correspondence between each reference image and the position information of the target area to be blurred.
For example, the preset image set that needs to be subjected to the blurring process may include: an account login page reference image, a payment page reference image, a balance page reference image and the like in the third-party payment application program, and the corresponding relation between the reference image and the position information of the target area is stored in advance; as shown in fig. 5, taking the account login page reference image as an example, the target area corresponding to the account login page reference image may be an area within a dashed line frame in the figure, including a password area and an input keyboard area of the account.
It should be noted that the preset image set composed of reference images pre-stored in the terminal, as well as the correspondence between each reference image and the position information of the target area, can be maintained and updated by technicians; the locally pre-stored content of the terminal is updated through terminal system updates.
In response to a screen recording instruction triggered by a user, the terminal records the currently displayed content in the display screen to obtain a recorded video comprising at least one image frame;
in an alternative embodiment, the user may trigger the screen recording instruction by clicking an icon corresponding to the screen recording function, for example, as shown in fig. 6, the user clicks the icon corresponding to the screen recording function to trigger the screen recording instruction.
In a specific implementation, after receiving a screen recording instruction triggered by the user, the terminal records the content currently displayed on the display screen, obtaining the original data of the currently displayed content from the drawing tool Graphic to produce a recorded video containing at least one image frame.
Judging whether any image frame in the recorded video comprises a preset type target area or not;
an optional implementation manner is that whether the image frame includes the preset type of target area is judged according to the following manner:
determining similarity between an image frame and a reference image in a preset image set corresponding to an application program; and if the similarity between the image frame and the reference image is larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
It should be noted that a preset type of target area is an area containing preset display content belonging to the user's privacy, for example, the area on a user chat page in an instant messaging application that includes the user's chat content, the password area when the user logs in to an account, the keyboard area used for inputting a password, and the like.
In the embodiment of the present invention, any method for determining the similarity between two images may be used to determine the similarity between an image frame and a reference image. Several such methods are briefly described below; it should be noted that the method for determining the similarity between an image frame and a reference image in the embodiment of the present invention is not limited to the following.
1. Histogram matching method
Histograms of the image frame and the reference image are calculated respectively, and a normalized correlation coefficient of the two histograms (Bhattacharyya distance, histogram intersection distance, etc.) is calculated, thereby determining the similarity between the image frame and the reference image.
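The histogram intersection distance mentioned above can be sketched in a few lines. The following pure-Python sketch is illustrative only and not part of the embodiment; it assumes grayscale frames given as nested lists of 0–255 pixel values, and the function names and bin count are assumptions:

```python
def histogram(pixels, bins=16):
    """Build a normalized grayscale histogram (pixel values 0-255)."""
    counts = [0] * bins
    flat = [p for row in pixels for p in row]
    for p in flat:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(flat)
    return [c / total for c in counts]

def histogram_intersection(frame, reference, bins=16):
    """Similarity in [0, 1]: sum of the bin-wise minima of the two
    normalized histograms; identical images score 1.0."""
    h1 = histogram(frame, bins)
    h2 = histogram(reference, bins)
    return sum(min(a, b) for a, b in zip(h1, h2))
```

An image frame compared with itself yields a similarity of 1.0, while frames with very different brightness distributions score much lower; this scalar is what is compared against the preset threshold described below.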
2. Matrix decomposition method
The pixel values corresponding to the pixel points in the image are taken as a matrix, and robust features characterizing the matrix element values and their distribution are obtained through matrix decomposition, so as to calculate the similarity between the image frame and the reference image. Commonly used matrix decomposition algorithms are Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF).
3. Image similarity calculation based on feature points
Each image has feature points that characterize some important positions in the image, such as the commonly used Harris corners and SIFT feature points.
Respectively determining the characteristic points of the image frame and the reference image, comparing the characteristic points of the image frame and the reference image, and determining the similarity between the image frame and the reference image.
For example, the Harris corners of the image frame and of the reference image are determined respectively and then compared; if the number of similar Harris corners is large, the similarity between the image frame and the reference image is high.
After the similarity between the image frame and the reference image is determined, it is compared with a preset threshold; if the similarity is greater than the preset threshold, it is determined that the image frame includes a preset type of target area. For example, if the preset threshold is 90%, the image frame is determined to include a preset type of target area when its similarity to the reference image is greater than 90%.
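The threshold comparison above can be sketched as follows. This is an illustrative sketch rather than the embodiment's implementation: the similarity function is injected as a parameter, and the reference-image identifiers and the 0.90 threshold are stand-ins for the terminal's pre-stored values:

```python
PRESET_THRESHOLD = 0.90  # "greater than 90%" from the example above

def contains_target_area(frame, reference_images, similarity):
    """Return the first reference image whose similarity to the frame
    exceeds the preset threshold, or None if the frame contains no
    preset type of target area."""
    for ref in reference_images:
        if similarity(frame, ref) > PRESET_THRESHOLD:
            return ref
    return None
```

Returning the matched reference image (rather than a bare boolean) is convenient because the next step looks up that reference image's target-area position information.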
After determining that the image frame comprises the preset type of target area, determining the target area in the image frame according to the following modes:
determining the position information of a target area corresponding to a reference image with the similarity of the image frame being greater than a preset threshold value according to the corresponding relation between the preset reference image and the position information of the target area; and determining the target area in the image frame according to the position information of the target area corresponding to the reference image with the similarity of the image frame being greater than the preset threshold value.
In an alternative embodiment, the position information of the target area may be the coordinate information of the target area. For example, assume that the image frame is the chat interface of the instant messaging application shown in fig. 7 and that the reference image is as shown in fig. 8. After determining that the similarity between the image frame and the reference image is greater than the preset threshold, the position information of the target area corresponding to the reference image is determined: for example, the target area corresponding to the reference image is the region within the dashed-line frame in fig. 8, and its position information may be the coordinates of all pixel points in that target area. The target area in the image frame, determined according to this position information, is shown within the dashed-line frame in fig. 9.
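The lookup from a matched reference image to the coordinates of its target area can be sketched as follows. The reference identifiers and rectangles below are hypothetical placeholders, since the actual correspondence table is pre-stored in the terminal:

```python
# Hypothetical correspondence table: reference image id -> target-area
# rectangle (left, top, right, bottom) in pixel coordinates.
TARGET_AREAS = {
    "login_page_ref": (40, 300, 680, 900),   # password + keyboard area
    "chat_page_ref": (0, 120, 720, 1000),    # chat-content area
}

def target_area_for(matched_ref):
    """Look up the target-area rectangle for the matched reference image."""
    return TARGET_AREAS.get(matched_ref)

def pixels_in_target(frame_width, frame_height, rect):
    """Clamp the rectangle to the frame and yield its pixel coordinates,
    i.e. the set of pixel points that the blurred layer must cover."""
    left, top, right, bottom = rect
    for y in range(max(0, top), min(frame_height, bottom)):
        for x in range(max(0, left), min(frame_width, right)):
            yield (x, y)
```

Storing a rectangle and expanding it to pixel coordinates on demand is equivalent to storing "the coordinates of all pixel points in the target area", but far more compact.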
After determining the target area in the image frame, performing fuzzification processing on the target area in the image frame;
an optional implementation manner is that a blurred image layer corresponding to a target area in an image frame is determined according to a preset pixel unit with a blurring effect; and covering a target area in the image frame by using the determined fuzzified layer through a shader.
In a specific implementation, the preset pixel unit with a blurring effect is a pixel unit of a preset size whose three primary color (Red, Green, Blue, RGB) values produce the blur. A corresponding blurred layer is determined according to the size of the target area, and the determined blurred layer is overlaid on the target area in the image frame through an OpenGL shader. For example, assuming that the preset pixel unit with a blurring effect has a resolution of 100 × 100 and the resolution of the target area is 300 × 200, the preset pixel unit is tiled 6 times into a blurred layer with a resolution of 300 × 200, and the resulting blurred layer is overlaid on the target area in the image frame through the OpenGL shader.
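The tiling arithmetic in the example above (a 100 × 100 blur unit tiled 6 times to cover a 300 × 200 target area) can be sketched as follows. This covers only the coordinate computation, not the OpenGL shader overlay itself; the function name is an assumption:

```python
import math

def blur_layer_tiles(area_w, area_h, unit_w=100, unit_h=100):
    """Return the top-left corners of the blur-unit tiles needed to cover
    a target area of area_w x area_h pixels. Tiles at the right/bottom
    edge may extend past the area and would be cropped when overlaid."""
    cols = math.ceil(area_w / unit_w)
    rows = math.ceil(area_h / unit_h)
    return [(c * unit_w, r * unit_h) for r in range(rows) for c in range(cols)]
```

For the 300 × 200 target area this yields a 3 × 2 grid of 6 tiles, matching the example; for target areas that are not exact multiples of the unit size, rounding up guarantees full coverage at the cost of cropped edge tiles.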
For example, assume that the target area in the chat interface of the instant messaging application shown in fig. 7 is blurred, resulting in the blurred image frame shown in fig. 10.
After all image frames needing blurring in the recorded video have been processed, the blurred image frames are compressed and encoded by an encoder, and the compressed and encoded video data and audio data are packaged by a packager to generate the target recorded video.
If there are image frames in the recorded video that do not need blurring, the blurred image frames and the image frames that do not include a target area are compressed and encoded together, and the compressed and encoded video data and audio data are packaged by the packager to generate the target recorded video.
After the target recorded video is generated, the terminal stores the target recorded video into a preset storage space, so that a user can check or share the target recorded video at any time; for example, a user can search for a target recorded video in an album of a terminal system and play the target recorded video; or the user can publish the target recorded video to a network social platform for sharing; alternatively, the user may share the target recorded video to other users through the instant messaging application.
As shown in fig. 11, another screen recording method according to an embodiment of the present invention includes the following steps:
step S1101, responding to a screen recording instruction triggered by a user, and recording a user interface currently displayed in a display screen by the terminal to obtain a recorded video comprising at least one image frame;
step S1102, if the terminal determines that the current foreground running application program is in a preset application program set, aiming at any image frame in the recorded video, if the image frame comprises a preset type target area, blurring the target area in the image frame;
step S1103, the terminal generates a processed target recording video according to the image frame after the fuzzification processing;
the application programs in the preset application program set are application programs containing user personal information and/or user safety information.
In an optional implementation manner of the screen recording method provided in this embodiment of the present invention, the terminal pre-stores a package name set of applications whose recorded image frames need to be blurred, a preset image set composed of the reference images to be blurred corresponding to each application, and a correspondence between each reference image and the position information of the target area to be blurred.
For example, the application program that needs to perform the blurring process on the recorded image frame is a third party payment application program, and the preset image set that needs to be performed the blurring process in the third party payment application program may include: the account login page reference image, the payment page reference image, the balance page reference image and the like are stored, and the corresponding relation between the reference image and the position information of the target area is stored in advance.
It should be noted that the package name set of applications whose recorded image frames need to be blurred, the preset image set composed of the corresponding reference images, and the correspondence between each reference image and the position information of the target area, all pre-stored in the terminal, can be maintained and updated by technicians; the locally pre-stored content of the terminal is updated through terminal system updates.
After a recorded video containing at least one image frame is obtained, the terminal determines that the application currently running in the foreground is in the preset application set, and thereby determines that the application currently running in the foreground is an application whose recorded image frames need to be blurred;
an optional implementation manner is to obtain the package name of the application currently running in the foreground and determine whether it belongs to the package name set, pre-stored in the terminal, of applications whose recorded image frames need to be blurred; if it does, the application currently running in the foreground is determined to be an application whose recorded image frames need to be blurred.
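The package-name check described above reduces to a set-membership test. The package names below are illustrative placeholders, not taken from a real device or application:

```python
# Hypothetical package-name set for applications whose recorded
# image frames must be blurred (personal/security information apps).
BLUR_PACKAGE_NAMES = {
    "com.example.pay",
    "com.example.chat",
}

def needs_blurring(foreground_package):
    """True if the foreground app's package name is in the preset set,
    i.e. its recorded image frames need blurring."""
    return foreground_package in BLUR_PACKAGE_NAMES
```

Performing this O(1) check once per foreground app, before any per-frame image matching, lets the terminal skip the reference-image comparison entirely for apps that carry no private information.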
After determining that the application program running in the current foreground is the application program which needs to perform blurring processing on the recorded image frame, determining a target area in the image frame according to the following mode for any image frame in the recorded video:
determining the position information of the target area corresponding to the reference image according to the preset correspondence between the reference image and the position information of the target area; the reference image is a reference image in the preset image set whose similarity to the image frame is greater than a preset threshold;
and determining the target area in the image frame according to the position information of the target area corresponding to the reference image.
After determining the target area in the image frame, performing fuzzification processing on the target area in the image frame;
an optional implementation manner is that a blurred image layer corresponding to a target area in an image frame is determined according to a preset pixel unit with a blurring effect; and covering a target area in the image frame by using the determined fuzzified layer through a shader.
After all image frames needing blurring in the recorded video have been processed, the blurred image frames are compressed and encoded by an encoder, and the compressed and encoded video data and audio data are packaged by a packager to generate the target recorded video.
If there are image frames in the recorded video that do not need blurring, the blurred image frames and the image frames that do not include a target area are compressed and encoded together, and the compressed and encoded video data and audio data are packaged by the packager to generate the target recorded video.
As shown in fig. 12, a flowchart of a complete screen recording method according to an embodiment of the present invention includes the following steps:
step S1201, responding to a screen recording instruction triggered by a user, and recording a user interface currently displayed in a display screen to obtain a recorded video containing at least one image frame;
step S1202, determining that the application program running in the current foreground is in a preset application program set, and determining that the application program running in the current foreground is an application program needing to perform fuzzification processing on the recorded image frame;
step S1203, determining similarity between an image frame and a reference image in a preset image set corresponding to an application program aiming at any image frame in the recorded video;
step S1204, judging whether the similarity between the reference image and the image frame is greater than the preset threshold; if yes, go to step S1205; otherwise, go to step S1207;
step S1205, determining a target area in the image frame according to the position information of the target area corresponding to the reference image with the similarity degree of the image frame being greater than a preset threshold value;
step S1206, fuzzifying a target area in the image frame;
and step S1207, generating a processed target recorded video according to the image frame after the fuzzification processing.
Based on the same inventive concept, the embodiment of the present invention further provides a terminal, and as the principle of solving the problem of the terminal is similar to the screen recording method of the embodiment of the present invention, the implementation of the terminal may refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 13, an embodiment of the present invention provides a terminal, which includes a display screen 1301, a processor 1302, and a receiving unit 1303.
The receiving unit 1303 is configured to receive a screen recording instruction triggered by a user;
the processor 1302 is configured to respond to a screen recording instruction triggered by a user, and record a user interface currently displayed in a display screen to obtain a recorded video including at least one image frame; for any image frame in the recorded video, if the image frame comprises a preset type target area, performing fuzzification processing on the target area in the image frame; generating a processed target recording video according to the image frame subjected to the fuzzification processing;
the display screen 1301 is configured to display a user interface.
In some exemplary embodiments, the processor 1302 is specifically configured to:
judging whether the image frame comprises a preset type of target area according to the following modes:
determining similarity between the image frame and a reference image in a preset image set;
and if the similarity between the image frame and the reference image is determined to be larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
In some exemplary embodiments, the processor 1302 is specifically configured to:
determining a target region in the image frame according to:
determining the position information of a target area corresponding to a reference image according to the corresponding relation between the preset reference image and the position information of the target area;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the processor 1302 is specifically configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
As shown in fig. 14, the embodiment of the present invention provides a second terminal including a display 1401, a processor 1402, and a receiving unit 1403.
The receiving unit 1403 is configured to receive a screen recording instruction triggered by a user;
the processor 1402 is configured to respond to a screen recording instruction triggered by a user, record a user interface currently displayed in a display screen, and obtain a recorded video including at least one image frame; if it is determined that the application program running in the current foreground is in a preset application program set, then for any image frame in the recorded video, if the image frame comprises a preset type target area, fuzzifying the target area in the image frame; generating a processed target recording video according to the image frame subjected to the fuzzification processing; the application programs in the preset application program set are application programs containing user personal information and/or user safety information;
the display 1401 is configured to display a user interface.
In some exemplary embodiments, the processor 1402 is specifically configured to:
determining a target region in the image frame according to:
determining the position information of the target area corresponding to the reference image according to the preset correspondence between the reference image and the position information of the target area; the reference image is a reference image in the preset image set whose similarity to the image frame is greater than a preset threshold;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the processor 1402 is specifically configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
As shown in fig. 15, an embodiment of the present invention provides a first screen recording apparatus, including:
the first recording module 1501, configured to respond to a screen recording instruction triggered by a user, and record a user interface currently displayed in a display screen to obtain a recorded video including at least one image frame;
a first processing module 1502, configured to, for any image frame in the recorded video, perform blurring processing on a target area in the image frame if the image frame includes the preset type of target area;
a first generation module 1503 configured to generate a processed target recorded video from the image frames after the blurring process.
In some exemplary embodiments, the first processing module 1502 is specifically configured to:
judging whether the image frame comprises a preset type of target area according to the following modes:
determining similarity between the image frame and a reference image in a preset image set;
and if the similarity between the image frame and the reference image is determined to be larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
In some exemplary embodiments, the first processing module 1502 is specifically configured to:
determining a target region in the image frame according to:
determining the position information of a target area corresponding to a reference image according to the corresponding relation between the preset reference image and the position information of the target area;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the first processing module 1502 is specifically configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
As shown in fig. 16, an embodiment of the present invention provides a second screen recording apparatus, including:
the second recording module 1601, configured to respond to a screen recording instruction triggered by a user, record a user interface currently displayed in a display screen, and obtain a recorded video including at least one image frame;
a second processing module 1602, configured to, if it is determined that the application program currently running in the foreground is in a preset application program set, then for any image frame in the recorded video, perform blurring processing on a target area in the image frame if the image frame includes a preset type of target area;
a second generating module 1603 configured to generate a processed target recorded video from the image frames after the blurring processing;
and the application programs in the preset application program set are application programs containing user personal information and/or user safety information.
In some exemplary embodiments, the second processing module 1602 is specifically configured to:
determining a target region in the image frame according to:
determining the position information of the target area corresponding to the reference image according to the preset correspondence between the reference image and the position information of the target area; the reference image is a reference image in the preset image set whose similarity to the image frame is greater than a preset threshold;
and determining a target area in the image frame according to the position information of the target area corresponding to the reference image.
In some exemplary embodiments, the second processing module 1602 is specifically configured to:
determining a fuzzification image layer corresponding to a target area in the image frame according to a preset pixel unit with a fuzzy effect;
covering, by a shader, a target area in the image frame with the determined blurred layer.
Since the computer storage medium in the embodiment of the present invention may be applied to the screen recording method, reference may also be made to the above method embodiment for obtaining technical effects, and details of the embodiment of the present invention are not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks. While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (10)

1. A terminal, comprising a display, a processor, and a receiving unit:
the receiving unit is configured to receive a screen recording instruction triggered by a user;
the processor is configured to respond to a screen recording instruction triggered by a user, record a user interface currently displayed in a display screen, and obtain a recorded video containing at least one image frame; for any image frame in the recorded video, if the image frame comprises a preset type target area, performing fuzzification processing on the target area in the image frame; generating a processed target recording video according to the image frame subjected to the fuzzification processing;
the display screen is configured to display a user interface.
2. The terminal of claim 1, wherein the processor is configured to:
judging whether the image frame comprises a preset type of target area according to the following modes:
determining similarity between the image frame and a reference image in a preset image set;
and if the similarity between the image frame and the reference image is determined to be larger than a preset threshold value, determining that the image frame comprises a preset type of target area.
3. The terminal of claim 2, wherein the processor is configured to determine the target area in the image frame in the following manner:
determining position information of the target area corresponding to the reference image according to a preset correspondence between reference images and target-area position information; and
determining the target area in the image frame according to the position information of the target area corresponding to the reference image.
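A hypothetical sketch of the lookup in claims 3 and 6: the terminal keeps a preset table mapping each reference image to the position of the sensitive area it contains, so that once a frame matches a reference, the area to blur is found directly. The keys and rectangles below are invented for illustration; the claim only requires some stored correspondence between reference images and position information.

```python
# (left, top, width, height) of the target area, in pixels.
# Entries are illustrative, not taken from the patent.
TARGET_AREA_BY_REFERENCE = {
    "payment_screen": (40, 120, 200, 30),  # e.g. a card-number field
    "chat_screen": (0, 60, 320, 400),      # e.g. the message list
}

def target_area_for(matched_reference):
    """Return the target-area rectangle preset for a matched reference image."""
    return TARGET_AREA_BY_REFERENCE[matched_reference]
```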
4. The terminal of claim 1, wherein the processor is configured to:
determine a blur layer corresponding to the target area in the image frame according to a preset pixel unit with a blur effect; and
cover the target area in the image frame with the determined blur layer by means of a shader.
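As an illustration of the blur-layer step in claims 4 and 7, the sketch below stands in a simple box blur for the unspecified "preset pixel unit with a blur effect" and pastes the blurred layer over the target rectangle, mimicking the shader overlay on the CPU; the kernel choice and function name are assumptions.

```python
def blur_target_area(frame, rect, radius=1):
    """Return a copy of `frame` (a 2-D list of pixel values) in which the
    rectangle `rect` = (left, top, width, height) is replaced by a
    box-blurred layer of the same area; pixels outside `rect` are kept."""
    left, top, width, height = rect
    out = [row[:] for row in frame]
    for y in range(top, top + height):
        for x in range(left, left + width):
            # Average the (2*radius+1)^2 neighborhood, clipped to the frame.
            acc, n = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < len(frame) and 0 <= xx < len(frame[0]):
                        acc += frame[yy][xx]
                        n += 1
            out[y][x] = acc // n
    return out
```

In a real terminal this per-pixel loop would run in a fragment shader on the GPU, as the claim indicates; the Python version only shows the data flow.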
5. A terminal, comprising a display screen, a processor, and a receiving unit, wherein:
the receiving unit is configured to receive a screen recording instruction triggered by a user;
the processor is configured to: in response to the screen recording instruction triggered by the user, record the user interface currently displayed on the display screen to obtain a recorded video containing at least one image frame; if it is determined that the application currently running in the foreground belongs to a preset application set, then for any image frame in the recorded video, if the image frame includes a target area of a preset type, blur the target area in the image frame; and generate a processed target recorded video from the blurred image frame; wherein the applications in the preset application set are applications containing user personal information and/or user security information;
the display screen is configured to display the user interface.
6. The terminal of claim 5, wherein the processor is configured to determine the target area in the image frame in the following manner:
determining position information of the target area corresponding to a reference image according to a preset correspondence between reference images and target-area position information, wherein the reference image is a reference image in a preset image set whose similarity to the image frame is greater than a preset threshold; and
determining the target area in the image frame according to the position information of the target area corresponding to the reference image.
7. The terminal of claim 1, wherein the processor is configured to:
determine a blur layer corresponding to the target area in the image frame according to a preset pixel unit with a blur effect; and
cover the target area in the image frame with the determined blur layer by means of a shader.
8. A screen recording method, characterized by comprising:
in response to a screen recording instruction triggered by a user, recording, by a terminal, the user interface currently displayed on a display screen to obtain a recorded video containing at least one image frame;
for any image frame in the recorded video, if the image frame includes a target area of a preset type, blurring, by the terminal, the target area in the image frame; and
generating, by the terminal, a processed target recorded video from the blurred image frame.
9. A screen recording method, characterized by comprising:
in response to a screen recording instruction triggered by a user, recording, by a terminal, the user interface currently displayed on a display screen to obtain a recorded video containing at least one image frame;
if the terminal determines that the application currently running in the foreground belongs to a preset application set, then for any image frame in the recorded video, if the image frame includes a target area of a preset type, blurring the target area in the image frame; and
generating, by the terminal, a processed target recorded video from the blurred image frame;
wherein the applications in the preset application set are applications containing user personal information and/or user security information.
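Putting the method claims together, a hedged end-to-end sketch (all helper names and the application set are invented for illustration): each recorded frame is processed only when the foreground application is in the preset sensitive set; matching frames have their target area blurred, and the result forms the processed target video of claim 9.

```python
# Assumed set of applications containing user personal/security information.
SENSITIVE_APPS = {"com.example.bank", "com.example.chat"}

def process_recording(frames, foreground_app, find_target_area, blur):
    """Return the processed target video as a list of frames.

    `find_target_area(frame)` returns a rectangle or None (claims 2-3);
    `blur(frame, rect)` returns the frame with that area blurred (claim 4).
    """
    if foreground_app not in SENSITIVE_APPS:
        return frames  # no sensitive information expected: record as-is
    processed = []
    for frame in frames:
        rect = find_target_area(frame)
        processed.append(blur(frame, rect) if rect else frame)
    return processed
```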
10. A computer storage medium having computer program instructions stored therein which, when run on a computer, cause the computer to perform the method of claim 8 or claim 9.
CN202010472681.7A | Priority date 2020-05-29 | Filing date 2020-05-29 | Screen recording method, terminal and storage medium | Pending | CN113742183A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010472681.7A | 2020-05-29 | 2020-05-29 | Screen recording method, terminal and storage medium


Publications (1)

Publication Number | Publication Date
CN113742183A (en) | 2021-12-03

Family

ID=78724401

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010472681.7A | CN113742183A (en), Pending | 2020-05-29 | 2020-05-29

Country Status (1)

Country | Link
CN (1) | CN113742183A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN116347168A (en)* | 2023-03-22 | 2023-06-27 | China Merchants Shekou Digital City Technology Co., Ltd. | Encryption method, device, equipment and storage medium for privacy video

Citations (7)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN106406710A (en)* | 2016-09-30 | 2017-02-15 | Vivo Mobile Communication Co., Ltd. | Screen recording method and mobile terminal
CN110087123A (en)* | 2019-05-15 | 2019-08-02 | Tencent Technology (Shenzhen) Co., Ltd. | Video file production method, device, equipment and readable storage medium
CN110211029A (en)* | 2019-05-14 | 2019-09-06 | Nubia Technology Co., Ltd. | Screen recording protection method based on a prediction mode, mobile terminal and computer-readable storage medium
CN110298862A (en)* | 2018-03-21 | 2019-10-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Video processing method, video processing device, computer-readable storage medium and computer equipment
CN110446097A (en)* | 2019-08-26 | 2019-11-12 | Vivo Mobile Communication Co., Ltd. | Screen recording method and mobile terminal
CN110602544A (en)* | 2019-09-12 | 2019-12-20 | Tencent Technology (Shenzhen) Co., Ltd. | Video display method and device, electronic equipment and storage medium
CN110719402A (en)* | 2019-09-24 | 2020-01-21 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and terminal device

Patent Citations (8)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN106406710A (en)* | 2016-09-30 | 2017-02-15 | Vivo Mobile Communication Co., Ltd. | Screen recording method and mobile terminal
CN110298862A (en)* | 2018-03-21 | 2019-10-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Video processing method, video processing device, computer-readable storage medium and computer equipment
CN110211029A (en)* | 2019-05-14 | 2019-09-06 | Nubia Technology Co., Ltd. | Screen recording protection method based on a prediction mode, mobile terminal and computer-readable storage medium
CN110087123A (en)* | 2019-05-15 | 2019-08-02 | Tencent Technology (Shenzhen) Co., Ltd. | Video file production method, device, equipment and readable storage medium
CN110446097A (en)* | 2019-08-26 | 2019-11-12 | Vivo Mobile Communication Co., Ltd. | Screen recording method and mobile terminal
CN110602544A (en)* | 2019-09-12 | 2019-12-20 | Tencent Technology (Shenzhen) Co., Ltd. | Video display method and device, electronic equipment and storage medium
CN110769305A (en)* | 2019-09-12 | 2020-02-07 | Tencent Technology (Shenzhen) Co., Ltd. | Video display method and device, block chain system and storage medium
CN110719402A (en)* | 2019-09-24 | 2020-01-21 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and terminal device


Similar Documents

Publication | Title
US11412153B2 (en) | Model-based method for capturing images, terminal, and storage medium
CN111597000B (en) | Small window management method and terminal
CN111508039B (en) | Word processing method of ink screen and communication terminal
CN111526232B (en) | Camera control method based on double-screen terminal and double-screen terminal
CN116095413B (en) | Video processing method and electronic device
CN113709026B (en) | Method, device, storage medium and program product for processing instant communication message
CN112184595A (en) | Mobile terminal and image display method thereof
CN111193874B (en) | Image display parameter adjusting method and mobile terminal
CN114020379B (en) | Terminal equipment, information feedback method and storage medium
CN113038141B (en) | Video frame processing method and electronic equipment
CN111176766A (en) | Communication terminal and component display method
CN111031377B (en) | Mobile terminal and video production method
CN112000411B (en) | Mobile terminal and display method of recording channel occupation information thereof
CN113642010B (en) | Method for acquiring data of extended storage device and mobile terminal
CN111479075B (en) | Photographing terminal and image processing method thereof
CN113742183A (en) | Screen recording method, terminal and storage medium
CN114449171B (en) | Method for controlling camera, terminal device, storage medium and program product
CN113542711B (en) | Image display method and terminal
CN113760164B (en) | Display device and response method of its control operation
CN111163220B (en) | Display method, communication terminal and computer storage medium
CN111556249A (en) | Image processing method based on ink screen, terminal and storage medium
CN111159734A (en) | Communication terminal and multi-application data inter-access processing method
CN111142648B (en) | Data processing method and intelligent terminal
CN115277940B (en) | Notification message prompting method and device and computer readable storage medium
CN115334239B (en) | Front camera and rear camera photographing fusion method, terminal equipment and storage medium

Legal Events

Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information
    Country or region after: China
    Address after: No. 11, Jiangxi Road, Qingdao, Shandong Province
    Applicant after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.
    Address before: No. 11, Jiangxi Road, Qingdao, Shandong Province
    Applicant before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
    Country or region before: China
RJ01 | Rejection of invention patent application after publication
    Application publication date: 2021-12-03
