Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, or in parallel. Furthermore, method embodiments may include additional steps or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
Example one
In order to solve the technical problem of low image rendering efficiency in the prior art, an embodiment of the present disclosure provides a method for implementing rendering isolation by multiple processes. As shown in FIG. 1, the method for implementing rendering isolation by multiple processes mainly includes the following steps S11 to S13.
Step S11: acquiring an initial image through a first process, and sending the initial image to a second process; the first process and the second process are two processes which run independently;
The first process may be a main process, the second process may be a sub-process of the first process, and the second process may run in the background. The main process and the sub-process may share resources. The first process and the second process exchange information by way of Inter-Process Communication (IPC); that is, the first process sends the initial image to the second process via IPC.
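By way of a non-limiting illustration only, the following sketch shows one possible way for the first process to serialize the initial image and hand it to the second process over a local-socket IPC channel using Qt. The channel name "effect_worker" and the function name are assumptions introduced for this sketch and are not part of the disclosure; shared memory or pipes could equally serve as the IPC mechanism.

```cpp
// Illustrative sketch (first-process side): send the initial image to the
// second process over a named local socket. The channel name and framing
// are assumptions, not part of the disclosure.
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>
#include <QImage>
#include <QLocalSocket>

bool sendInitialImageToSecondProcess(const QImage &initialImage)
{
    QLocalSocket socket;
    socket.connectToServer("effect_worker");       // assumed IPC channel name
    if (!socket.waitForConnected(3000))
        return false;

    QByteArray payload;
    QDataStream out(&payload, QIODevice::WriteOnly);
    out << initialImage;                           // QDataStream can serialize a QImage

    socket.write(payload);
    return socket.waitForBytesWritten(3000);
}
```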
In addition, to facilitate the special effect processing of the image by the second process, the first process may, after acquiring the image, preprocess the image, including denoising, feature extraction, target recognition and the like, and then send the preprocessed image to the second process.
Specifically, an initial image is obtained from an image input source through the first process, and the initial image may be a video image input in real time, for example, a live video in a short video application, or a video image stored in the terminal in advance. The image may also be a still image, i.e. a picture. The terminal may be a mobile terminal, such as a smart phone or a tablet computer, or a fixed terminal, such as a desktop computer.
The image input source may be a local storage space or a network storage space, and the image is obtained from the local storage space or the network storage space through the first process. Regardless of where the image is obtained from, the storage address of the image is preferably obtained first, and the image is then retrieved from that storage address.
The image input source may also be an image sensor, and the image is collected from the image sensor through the first process. The image sensor may be any of various devices for acquiring images; typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on the mobile terminal, such as a front-facing or rear-facing camera on a smart phone, and an image acquired by the camera may be directly displayed on the display screen of the smart phone.
This step can be implemented by custom functions. Specifically, the first process may include the following custom (user-defined) objects: VideoStream, a custom video stream module; EMVideoPreviewModel, a custom video preview display object; EMVideoFilterRunable, a custom video frame preprocessing object; and EffectSDKClient, a custom special effect SDK interface object. In operation, a video frame is read through VideoStream as the initial image and transmitted to the EMVideoPreviewModel object; the EMVideoPreviewModel is accessed through the QML layer and sends the image data to EMVideoFilterRunable; EMVideoFilterRunable preprocesses the video frame and sends the image data to be displayed to the EffectSDKClient processing module; and the EffectSDKClient module transmits the image data, through inter-process communication, to an independent sub-process, namely the second process, for special effect processing.
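A purely illustrative sketch of this data flow follows. The class names below are taken from the disclosure; every method name, signature, and body is an assumption made for illustration only.

```cpp
// Sketch of the first-process pipeline; class names come from the disclosure,
// method names and bodies are illustrative placeholders.
#include <QImage>

struct EffectSDKClient {
    // Forwards the frame to the second process via IPC
    // (e.g., as in the local-socket sketch after step S11).
    void submitForEffectProcessing(const QImage &frame) { /* serialize and send */ }
};

struct EMVideoFilterRunable {
    EffectSDKClient *client = nullptr;
    void run(const QImage &frame) {
        QImage preprocessed = frame;               // placeholder for denoising, etc.
        client->submitForEffectProcessing(preprocessed);
    }
};

struct EMVideoPreviewModel {
    EMVideoFilterRunable *filter = nullptr;
    // Invoked (via the QML layer) with each frame read by VideoStream.
    void onFrame(const QImage &frame) { filter->run(frame); }
};
```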
Step S12: responding to the special effect editing operation of the user on the initial image, performing special effect processing on the initial image through the second process to obtain a special effect image, and sending the special effect image to the first process.
The special effect processing may be, for example, extracting a target object (e.g., a human face), adding a sticker (e.g., a human face sticker), and the like.
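A minimal sketch of the second-process side is given below, assuming the same hypothetical local-socket channel as in the sketch after step S11; the actual Effect SDK call is replaced by a trivial placeholder filter.

```cpp
// Illustrative sketch (second-process side): receive the initial image,
// apply a placeholder "special effect", and send the result back.
#include <QCoreApplication>
#include <QDataStream>
#include <QImage>
#include <QLocalServer>
#include <QLocalSocket>

static QImage applyEffect(const QImage &in)
{
    // Placeholder standing in for the real special effect processing.
    return in.convertToFormat(QImage::Format_Grayscale8);
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QLocalServer server;
    server.listen("effect_worker");                // assumed IPC channel name

    QObject::connect(&server, &QLocalServer::newConnection, [&server]() {
        QLocalSocket *conn = server.nextPendingConnection();
        QObject::connect(conn, &QLocalSocket::readyRead, [conn]() {
            QDataStream in(conn);
            QImage initial;
            in >> initial;                         // real code would handle partial reads
            if (initial.isNull())
                return;
            QDataStream out(conn);
            out << applyEffect(initial);           // return the special effect image
        });
    });

    return app.exec();
}
```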
Step S13: displaying the special effect image through the first process.
Specifically, the special effect image can be displayed in real time through an Output control of a general-purpose video playing component in QML.
In this embodiment, image rendering and special effect processing are implemented by two separate processes, so that the time consumed by the special effect processing of the second process does not affect the image rendering of the first process, and the image rendering efficiency of the first process can be improved; in addition, when one of the processes is blocked, the operation of the other process is not affected, so that the image processing efficiency is further improved.
In an alternative embodiment, the first process and the second process are implemented based on the ANGLE framework.
The Almost Native Graphics Layer Engine (ANGLE) project implements the OpenGL ES 2.0 API, on which WebGL is based, as a layer on top of the DirectX 9.0c API. DirectX (DX) is a multimedia Application Programming Interface (API) created by Microsoft Corporation. OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL three-dimensional graphics API. OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering two-dimensional and three-dimensional vector graphics.
Specifically, the image rendering and special effect processing functions are extracted and implemented in two separate processes: the first process implements image rendering, and the second process performs special effect processing. Both processes are implemented based on ANGLE.
In an alternative embodiment, the first process is a rendering thread of the declarative scripting language QML, and the second process is a special effect software development kit (Effect SDK).
In the prior art, under the ANGLE mode, Qt switches the rendering thread of the declarative scripting language QML to the main thread, and OpenGL is also used to call the Effect SDK, which is a special effect software development kit. Due to multithreading limitations, the Effect SDK must be called in the main thread; however, the special effect processing of the Effect SDK is very time-consuming, which greatly reduces the QML rendering frame rate.
Therefore, in this embodiment, the main thread where the QML rendering is located is taken as the first process, and the Effect SDK is isolated from the main thread as an independent sub-process, i.e., the second process. In this way, the time consumed by the Effect SDK special effect processing does not affect the QML image rendering, so the image rendering efficiency can be improved; in addition, even when the main thread crashes, the sub-process can still run.
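As one hedged illustration of this isolation, the main (QML) process could launch the effect worker as a child process, for example with QProcess; the worker executable name below is hypothetical and not part of the disclosure.

```cpp
// Illustrative sketch: start the Effect SDK worker as an isolated sub-process.
// "effect_worker_process" is a hypothetical executable name.
#include <QObject>
#include <QProcess>

QProcess *startEffectWorker(QObject *parent)
{
    auto *worker = new QProcess(parent);
    // If the worker blocks or crashes, only this child process is affected;
    // QML rendering in the main process keeps running.
    QObject::connect(worker, &QProcess::errorOccurred, parent,
                     [](QProcess::ProcessError) { /* a restart policy could go here */ });
    worker->start("effect_worker_process");
    return worker;
}
```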
In an optional embodiment, the first process and the second process are implemented based on OpenGL.
Here, OpenGL may be native OpenGL. The method may be implemented based on conventional OpenGL by extracting the image rendering function and the image special effect processing function and implementing them in separate processes: the first process implements the image rendering function, the second process implements the image special effect processing function, and during its execution the first process invokes the processing result of the second process for rendering.
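As a minimal sketch under these assumptions, the first process could upload the special effect image returned by the second process into an OpenGL texture for rendering. Qt's QOpenGLTexture wrapper is used here purely for brevity; a current OpenGL context is assumed, and the quad-drawing code is omitted.

```cpp
// Illustrative sketch: turn the returned effect image into an OpenGL texture
// that the first process can draw.
#include <QImage>
#include <QOpenGLTexture>

QOpenGLTexture *uploadEffectImage(const QImage &effectImage)
{
    // QOpenGLTexture creates and fills a GL texture from the QImage
    // (a current OpenGL context must already exist on this thread).
    auto *texture = new QOpenGLTexture(effectImage.mirrored());
    texture->setMinificationFilter(QOpenGLTexture::Linear);
    texture->setMagnificationFilter(QOpenGLTexture::Linear);
    return texture;        // bind() this texture when rendering the on-screen quad
}
```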
It will be appreciated by those skilled in the art that obvious modifications (for example, combinations of the modes listed above) or equivalent substitutions may be made on the basis of the various embodiments described above.
In the above, although the steps in the embodiment of the method for implementing rendering isolation by multiple processes are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that sequence, and may also be performed in other sequences, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add other steps. These obvious modifications or equivalent alternatives should also be included in the protection scope of the present disclosure, and are not described in detail herein.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, please refer to the method embodiments of the present disclosure.
Example two
In order to solve the technical problem of low image rendering efficiency in the prior art, an embodiment of the present disclosure provides an apparatus for implementing rendering isolation by multiple processes. The apparatus may perform the steps in the method embodiment for implementing rendering isolation by multiple processes described in the first embodiment. As shown in FIG. 2, the apparatus mainly includes: an image acquisition module 21, a special effect processing module 22, and a special effect image rendering module 23.
The image acquisition module 21 is configured to acquire an initial image through a first process and send the initial image to a second process;
the specialeffect processing module 22 is configured to, in response to a special effect editing operation of the user on the initial image, perform special effect processing on the initial image through the second process to obtain a special effect image, and send the special effect image to the first process;
the special effectimage rendering module 23 displays the special effect image through the first process.
Further, the first process is a main process, and the second process is a sub-process.
Further, the first process and the second process are implemented based on an ANGLE framework.
Further, the first process is a rendering thread of the declarative scripting language QML, and the second process is a special effect software development kit (Effect SDK).
Further, the first process and the second process are implemented based on OpenGL.
For detailed descriptions of the working principle, the technical effect, and the like of the embodiment of the apparatus for implementing rendering isolation by multiple processes, reference may be made to the related descriptions in the foregoing embodiment of the method for implementing rendering isolation by multiple processes, and no further description is given here.
Example three
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 3, the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 3 illustrates an electronic device 300 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure can be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an initial image through a first process, and send the initial image to a second process, where the first process and the second process are two processes which run independently; in response to the special effect editing operation of the user on the initial image, perform special effect processing on the initial image through the second process to obtain a special effect image, and send the special effect image to the first process; and display the special effect image through the first process.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, a method for implementing rendering isolation by multiple processes is provided, which includes:
acquiring an initial image through a first process, and sending the initial image to a second process; the first process and the second process are two processes which run independently;
responding to the special effect editing operation of the user on the initial image, carrying out special effect processing on the initial image through the second process to obtain a special effect image, and sending the special effect image to the first process;
displaying the special effect image through the first process.
Further, the first process is a main process, and the second process is a sub-process.
Further, the first process and the second process are implemented based on an ANGLE framework.
Further, the first process is a rendering thread of the declarative scripting language QML, and the second process is a special effect software development kit (Effect SDK).
Further, the first process and the second process are implemented based on OpenGL.
According to one or more embodiments of the present disclosure, there is provided an apparatus for implementing rendering isolation by multiple processes, including:
the image acquisition module is used for acquiring an initial image through a first process and sending the initial image to a second process; the first process and the second process are two processes which run independently;
the special effect processing module is used for responding to the special effect editing operation of the user on the initial image, carrying out special effect processing on the initial image through the second process to obtain a special effect image, and sending the special effect image to the first process;
and the special effect image rendering module is used for displaying the special effect image through the first process.
Further, the first process is a main process, and the second process is a sub-process.
Further, the first process and the second process are implemented based on an ANGLE framework.
Further, the first process is a rendering thread of the declarative scripting language QML, and the second process is a special effect software development kit (Effect SDK).
Further, the first process and the second process are implemented based on OpenGL.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor, when executing the instructions, implements the above-described method for implementing rendering isolation by multiple processes.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the above-described method for implementing rendering isolation by multiple processes.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.