Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The background blurring processing method, apparatus, and device according to the embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a flowchart of a background blurring processing method according to an embodiment of the present application. As shown in Fig. 1, the method includes:
Step 101, acquiring a main image shot by a main camera and a sub-image shot by a sub-camera.
Step 102, detecting whether a preset target object exists in the main image.
Specifically, the dual-camera system calculates the depth of field information through the main image and the sub-image. The dual-camera system includes a main camera for acquiring the main image of the subject to be photographed and a sub-camera for assisting in acquiring the depth of field information of the main image, and the main camera and the sub-camera may be arranged along a horizontal direction or along a vertical direction. To describe more clearly how the dual cameras acquire the depth of field information, the principle by which the dual cameras acquire the depth of field information is explained below with reference to the accompanying drawings:
in practical application, the depth of field information resolved by human eyes mainly depends on binocular vision, which follows the same principle as resolving depth of field with two cameras and is realized mainly by the principle of triangulation shown in fig. 2. Fig. 2 shows, in actual space, the imaging object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras. The distance between the focal plane and the plane where the two cameras are located is f, and the two cameras form images at the focal plane, so that two captured images are obtained.
Where P and P' are the positions of the same subject in the two captured images, respectively. The distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T denote the two cameras, which are located on the same plane at a distance B from each other.
Based on the principle of triangulation, the distance Z between the object and the plane where the two cameras are located in fig. 2 satisfies the following relationship:

B / Z = (B - (X_R - X_T)) / (Z - f)
based on this, it can be derived that:

Z = (B × f) / (X_R - X_T) = (B × f) / d
Where d = X_R - X_T is the difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
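For illustration only, the above relationship can be expressed as a short computation; this sketch is not part of the claimed method, and it assumes that the disparity d is measured in pixels and that the focal length f has been converted to the same pixel units:

    def depth_from_disparity(baseline_m, focal_px, disparity_px):
        """Return the distance Z of an object from the camera plane, i.e. Z = B * f / d.

        baseline_m   -- distance B between the two cameras, in meters
        focal_px     -- focal length f expressed in pixels
        disparity_px -- displacement difference d = X_R - X_T, in pixels
        """
        if disparity_px <= 0:
            return float("inf")  # no measurable disparity: the object is effectively at infinity
        return baseline_m * focal_px / disparity_px

    # Example: B = 0.02 m, f = 1000 px, d = 4 px  ->  Z = 5 m
    print(depth_from_disparity(0.02, 1000, 4))  # 5.0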
Of course, in addition to the triangulation method, other methods may also be used to calculate the depth of field information of the main image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is related to the displacement difference, the posture difference, and the like between the images formed by the main camera and the sub-camera; therefore, in an embodiment of the present application, the distance Z may be obtained according to this relationship.
For example, as shown in fig. 3, a map of the differences between the main image captured by the main camera and the sub-image captured by the sub-camera is calculated, represented here by a disparity map. This map represents the displacement difference between the same points in the two images; since the displacement difference in triangulation is inversely proportional to Z, a disparity map is often used directly as the depth of field information map.
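As an illustrative sketch only, such a disparity map could be computed with a standard block-matching routine; the use of OpenCV and the specific parameter values below are assumptions of this sketch and are not required by the present application:

    import cv2

    def disparity_map(main_gray, sub_gray):
        """Compute a disparity map between the main image and the sub-image.

        Both inputs are assumed to be rectified, single-channel (grayscale) images of the
        same size; larger disparity values correspond to closer objects.
        """
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(main_gray, sub_gray)  # fixed-point result, scaled by 16
        return disparity.astype("float32") / 16.0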
When blurring the background region of an image, a dual-camera system may blur the image of a target object that is not desired to be blurred. Therefore, in order to ensure that target objects the user does not want blurred are not blurred, it is detected whether a target object exists in the main image, where the target object may include a specific gesture (such as a scissors/V-sign gesture, a cheering gesture, and the like), a famous landmark (such as the Great Wall, Mount Huangshan, and the like), or an object of a specific shape (such as a circular object, a triangular object, and the like).
It should be understood that, depending on the application scenario, the detection of whether the preset target object exists in the main image may be implemented in different manners, for example as follows:
As an example:
in this example, template information including the contour edge of the target object is preset. The contour edge of the scene captured in the foreground region of the main image is detected, the preset template information is matched against the detected contour edge, and if the matching is successful, it is detected that the preset target object exists in the main image.
In this example, the contour edge in the preset template information may include the coordinate values of the contour edge of the target object, the positional relationship between pixel points, and the like.
It can be understood that, in this example, whether the target object exists in the foreground region is recognized only through the contour edge of the captured scene, so the detection efficiency is improved, thereby further improving the image processing efficiency.
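A minimal sketch of this contour-edge matching is given below for illustration; the use of OpenCV's Canny edge detector and Hu-moment shape matching, as well as the threshold value, are assumptions of this sketch rather than requirements of the method:

    import cv2

    def target_in_foreground(foreground_gray, template_contour, max_distance=0.15):
        """Return True if any contour edge detected in the foreground region matches the
        preset template contour of the target object."""
        edges = cv2.Canny(foreground_gray, 50, 150)  # contour edges of the captured scene
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # Hu-moment based similarity: smaller values mean a closer match.
            distance = cv2.matchShapes(contour, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
            if distance < max_distance:
                return True
        return False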
As another example:
in this example, template information including shape information of the target object is preset, where the shape information of the target object includes external contour information and internal filling pattern information of the target object. The shape information of the captured scene in the foreground region of the main image is detected, the preset template information is matched against the detected shape information, and if the matching is successful, it is detected that the preset target object exists in the main image.
It can be understood that, in this example, whether the target object exists in the foreground region is identified through the shape information of the captured scene, so misjudgment of subjects whose external contours are similar is avoided, and the accuracy of identification is improved.
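For illustration, such shape matching could check the external contour and the internal filling pattern separately; the Hu-moment contour comparison, the histogram-based pattern comparison, and the threshold values below are all assumptions of this sketch:

    import cv2

    def shape_matches_template(region_gray, region_contour, template_contour, template_hist,
                               contour_tol=0.15, pattern_tol=0.5):
        """Match both the external contour and the internal filling pattern of a candidate
        subject against the preset shape information of the target object."""
        # External contour: Hu-moment distance, smaller means more similar.
        contour_dist = cv2.matchShapes(region_contour, template_contour,
                                       cv2.CONTOURS_MATCH_I1, 0.0)
        # Internal filling pattern: compare a grayscale histogram of the region interior
        # with the histogram stored in the template information.
        hist = cv2.calcHist([region_gray], [0], None, [32], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        pattern_dist = cv2.compareHist(hist, template_hist, cv2.HISTCMP_BHATTACHARYYA)
        return contour_dist < contour_tol and pattern_dist < pattern_tol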
In some examples, the above two examples may be combined: the contour edge is identified first, and the shape information is then identified, so as to further improve the identification accuracy.
Step 103, if it is detected that the target object exists, determining a target area corresponding to the target object in the main image.
Step 104, calculating first depth of field information of the target area by applying a preset first depth of field algorithm according to the main image and the auxiliary image.
Step 105, acquiring second depth of field information of the non-target area in the main image by applying a preset second depth of field algorithm.
Specifically, if it is detected that the target object exists, in order to avoid blurring the target object, a target area corresponding to the target object is determined. A preset first depth of field algorithm is applied to calculate first depth of field information of the target area according to the main image and the auxiliary image, and a preset second depth of field algorithm is applied to acquire second depth of field information of the non-target area, where the calculation precision of the first depth of field algorithm is higher than that of the second depth of field algorithm. On the one hand, the depth of field of the background area corresponding to the non-target object is calculated with the second depth of field algorithm, whose calculation precision is relatively lower and whose calculation amount is smaller than that of the first depth of field algorithm; this reduces the operation pressure on the terminal device and avoids a long blurring processing time that would increase the time consumed for image processing. On the other hand, the depth of field of the target area corresponding to the target object is calculated with the first depth of field algorithm, whose calculation precision is higher, so the target object area is guaranteed not to be mistakenly blurred; and since the first depth of field algorithm is applied only to the target area corresponding to the target object, the influence on the operation pressure of the processor of the terminal device is small, and the image processing time is not obviously increased.
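The two-tier strategy described above can be summarized by the following sketch; the callable arguments standing in for the two depth of field algorithms and the idea of refining only the bounding box of the target area are assumptions made for illustration, since the embodiments do not prescribe specific algorithms:

    import numpy as np

    def compute_depth_two_tier(main_img, sub_img, target_mask, precise_depth, coarse_depth):
        """Compute depth with high precision only inside the target area.

        precise_depth -- callable implementing the first (high-precision, costly) algorithm
        coarse_depth  -- callable implementing the second (lower-precision, cheap) algorithm
        target_mask   -- boolean array, True inside the target area
        """
        depth = coarse_depth(main_img, sub_img)  # whole frame, low cost
        ys, xs = np.nonzero(target_mask)
        if ys.size:  # refine only the bounding box of the target area
            y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
            fine = precise_depth(main_img[y0:y1, x0:x1], sub_img[y0:y1, x0:x1])
            region = target_mask[y0:y1, x0:x1]
            depth[y0:y1, x0:x1][region] = fine[region]
        return depth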
Of course, in the specific implementation process, in order to meet the personalized requirements of the user and achieve an interesting image processing effect, the calculation accuracy of the first depth-of-field algorithm may also be equal to or lower than that of the second depth-of-field algorithm, which is not limited herein.
In an embodiment of the present application, if it is detected that the preset target object does not exist in the main image, the second depth of field algorithm is applied to calculate third depth of field information of the main image, and the background area of the main image is blurred according to the third depth of field information, so as to reduce the processing pressure of the system.
Step 106, blurring the background area of the target area according to the first depth of field information.
Step 107, blurring the background area of the non-target area according to the second depth of field information.
Specifically, after the depth of field information of the target area and that of the non-target area are calculated with different calculation accuracies, the background area of the target area is blurred according to the first depth of field information, and the background area of the non-target area is blurred according to the second depth of field information, so that the target object in the blurred image is protected.
Specifically, in practical applications, different manners may be adopted according to different application scenes to implement blurring of a background area of a target area according to first depth information, and blurring of a background area of the non-target area according to second depth information, which is described as follows:
The first example:
as shown in fig. 4, blurring the background area of the target area according to the first depth of field information in step 106 may include:
Step 201, determining first foreground region depth information and first background region depth information of the target region according to the first depth information and the focusing region of the main image.
It can be understood that the target area may include a foreground area where the target object is located and a background area other than the target object. Therefore, in order to further process the area where the target object is located, the first foreground area depth information and the first background area depth information of the target area are determined according to the first depth information and the focusing area of the main image, where the range of clear imaging in the target area before the focusing area is the first foreground area and the range of clear imaging in the target area after the focusing area is the first background area.
It should be noted that, depending on the application scenario, the manner of separating the first foreground area depth information from the first background area depth information for the target area differs, which is exemplified as follows:
The first example:
shooting-related parameters can be acquired so as to calculate the depth of field information of the image area outside the focus area in the target area according to the formula of the shooting camera.
In this example, parameters of the shooting camera such as the permissible circle of confusion diameter, the aperture value, the focal length, and the focusing distance can be acquired, so that the following formulas can be applied:

first foreground area depth information = (aperture value × permissible circle of confusion diameter × square of the focusing distance) / (square of the focal length + aperture value × permissible circle of confusion diameter × focusing distance)

first background area depth information = (aperture value × permissible circle of confusion diameter × square of the focusing distance) / (square of the focal length - aperture value × permissible circle of confusion diameter × focusing distance)

The first foreground area, in which the foreground is separated, is calculated according to the former formula, and the first background area depth information of the background of the target area is calculated according to the latter formula.
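An illustrative sketch of this first example is given below; the parameter names and units are assumptions of the sketch, and the expressions simply restate the two formulas above:

    def depth_of_field_spans(aperture, coc_diameter, focal_length, focus_distance):
        """Return (foreground_span, background_span), i.e. the first foreground area depth
        information and the first background area depth information, in the same length
        units as the inputs.

        aperture       -- aperture value (f-number) of the shooting camera
        coc_diameter   -- permissible circle of confusion diameter
        focal_length   -- focal length of the lens
        focus_distance -- focusing distance (distance to the focus plane)
        """
        k = aperture * coc_diameter * focus_distance
        foreground = (k * focus_distance) / (focal_length ** 2 + k)
        if focal_length ** 2 <= k:
            background = float("inf")  # beyond the hyperfocal distance
        else:
            background = (k * focus_distance) / (focal_length ** 2 - k)
        return foreground, background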
The second example:
a depth of field map of the image area outside the focus area is determined according to the depth of field data information of the current target area respectively acquired by the two cameras, and the first foreground area before the focus area and the first background area after the focus area are determined according to the depth of field map.
Specifically, in this example, since the two cameras are not located at the same position, the two rear cameras have a certain angle difference and distance difference with respect to the target object to be photographed, and thus the preview image data acquired by the two cameras have a certain phase difference.
For example, for a point A on the imaged target object, the coordinates of the pixel point corresponding to point A are (30, 50) in the preview image data of camera 1, while the coordinates of the pixel point corresponding to point A are (30, 48) in the preview image data of camera 2; the phase difference between the pixel points corresponding to point A in the two sets of preview image data is therefore 50 - 48 = 2.
In this example, the relationship between the depth of field information and the phase difference may be established in advance according to experimental data or camera parameters, and then, the corresponding depth of field information may be searched for according to the phase difference of each pixel point in the target image in the preview image data acquired by the two cameras.
For example, for the phase difference 2 corresponding to point A, if the corresponding depth of field is found to be 5 meters according to the preset correspondence, the depth of field information corresponding to point A in the target area is 5 meters. In this way, the depth of field information of each pixel point in the current target area can be obtained, that is, a depth of field map of the image area outside the focus area is obtained.
Furthermore, after obtaining the depth map of the image area outside the focal area, the first foreground area depth information of the image area before the focal area and the first background area depth information after the focal area can be further determined.
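A minimal sketch of the phase-difference lookup in this second example follows; the use of a pre-calibrated table with linear interpolation, and the table values themselves, are assumptions, since the embodiments only require that some correspondence between phase difference and depth of field be established in advance:

    import numpy as np

    # Pre-established correspondence between phase difference (pixels) and depth (meters),
    # e.g. derived from experimental data or camera parameters; the values are illustrative.
    PHASE_DIFF_TABLE = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    DEPTH_TABLE = np.array([10.0, 5.0, 2.5, 1.25, 0.625])

    def depth_from_phase_diff(phase_diff_map):
        """Look up the per-pixel depth of field from the per-pixel phase difference map."""
        # np.interp expects increasing x values; depth decreases as the phase difference grows.
        return np.interp(phase_diff_map, PHASE_DIFF_TABLE, DEPTH_TABLE)

    # A phase difference of 2 maps to 5 meters, matching the example above.
    print(depth_from_phase_diff(2.0))  # 5.0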
Step 202, obtaining a basic value of the first blurring degree according to the depth of field information of the first foreground area and the depth of field information of the first background area.
The basic value of the first blurring degree may specify a degree level of blurring, such as strong or weak. A larger difference between the first foreground region depth information and the first background region depth information indicates that the foreground and the background in the target region are more clearly distinguished, so the blurring degree may be smaller and the basic value of the first blurring degree is smaller; conversely, a smaller difference indicates that the foreground and the background in the target region are less clearly distinguished, so the blurring degree may be larger and the basic value of the first blurring degree is larger.
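One possible way to realize this inverse relationship is sketched below; the specific mapping and its bounds are assumptions made purely for illustration, as the embodiments do not fix a formula:

    def blur_base_value(foreground_depth, background_depth,
                        min_base=1.0, max_base=10.0, scale=5.0):
        """Map the foreground/background depth separation to a blurring degree basic value.

        A large separation means the foreground and background are clearly distinguished,
        so a small basic value is returned; a small separation yields a large basic value.
        """
        separation = abs(background_depth - foreground_depth)
        base = max_base - scale * separation
        return max(min_base, min(max_base, base))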
Step 203, determining a blurring coefficient of each pixel in the background area of the target area according to the basic value of the first blurring degree and the depth information of the first background area.
In the embodiment of the present application, the blurring coefficient of each pixel in the background region of the target region is determined according to the basic value of the first blurring degree and the depth information of the first background region.
Step 204, performing Gaussian blur processing on the background area of the target area according to the blurring coefficient of each pixel.
Specifically, the Gaussian blur processing is performed on the background area of the target area according to the blurring coefficient of each pixel, so that the larger the depth of field information of the background area in the target area is, that is, the farther the background area is from the focusing area, the larger the blurring degree is.
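The per-pixel blurring of Steps 203 and 204 could be realized, for example, by scaling the basic value with the normalized depth, quantizing the resulting blurring coefficients into a few levels, and blending several Gaussian-blurred copies of the image; the OpenCV-based sketch below is only one possible realization of this idea:

    import cv2
    import numpy as np

    def blur_background(image, background_depth, background_mask, base_value):
        """Gaussian-blur the background area, more strongly where the depth of field is larger.

        background_depth -- per-pixel depth of the background area (same height/width as image)
        background_mask  -- boolean array, True for pixels belonging to the background area
        base_value       -- blurring degree basic value obtained in Step 202
        """
        # Blurring coefficient of each pixel: basic value scaled by the normalized depth.
        depth = np.where(background_mask, background_depth, 0.0)
        coeff = base_value * depth / max(float(depth.max()), 1e-6)

        # Quantize the coefficients into three levels and blend pre-blurred copies.
        result = image.copy()
        for level, ksize in ((1, 5), (2, 11), (3, 21)):
            blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
            sel = background_mask & (np.ceil(coeff * 3 / max(base_value, 1e-6)) == level)
            result[sel] = blurred[sel]
        return result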
Further, as shown in fig. 5, blurring the background area of the non-target area according to the second depth of field information in step 107 may include:
Step 301, determining second foreground region depth information and second background region depth information of the non-target region according to the second depth information and the focused region of the main image.
It is to be understood that the non-target area may include a foreground area and a background area. Therefore, in order to further facilitate processing of the background area of the image, the second foreground area depth information and the second background area depth information of the non-target area are determined according to the second depth information and the focused area of the main image. The manner of doing so is similar to the manner of determining the first foreground area depth information and the first background area depth information of the target area according to the first depth information and the focused area of the main image, and is not described herein again.
Step 302, obtaining a base value of the second blurring degree according to the second foreground region depth information and the second background region depth information.
The basic value of the second blurring degree may specify the blurring degree. A larger difference between the second foreground region depth information and the second background region depth information indicates that the foreground and the background in the non-target region are more clearly distinguished, so the blurring degree may be smaller and the basic value of the second blurring degree is smaller; conversely, a smaller difference indicates that the foreground and the background in the non-target region are less clearly distinguished, so the blurring degree may be larger and the basic value of the second blurring degree is larger.
Step 303, performing Gaussian blur processing on the background area of the non-target area according to the basic value of the second blurring degree.
Specifically, the Gaussian blur processing is performed on the background area of the non-target area according to the basic value of the second blurring degree, so that the larger the depth of field information of the background area in the non-target area is, that is, the farther the background area is from the focusing area, the larger the blurring degree is.
In order to enable those skilled in the art to understand the implementation process and the processing effect of the background blurring processing of the present application more clearly, the following description is given in conjunction with a specific application scenario:
specifically, as shown in fig. 6, when the preset target object is a preset gesture, after the main image is acquired, it is detected whether a preset gesture image exists in the main image. If so, using the background blurring processing method described in the above embodiments, the target area where the preset gesture image is located is processed in a refined manner with a depth of field algorithm of higher precision than the default depth of field algorithm of the system, while the other areas are processed with the lower-precision depth of field algorithm set by the system for normal background blurring. In this way, the blurring effect of some specific scenes can be improved without adding too much processing time.
Continuing with the above scenario as an example, as shown in fig. 7(a), after background blurring is performed by a background blurring processing method in the prior art, the image area corresponding to the preset gesture may be blurred due to the limited calculation accuracy of the depth of field information of the terminal device, resulting in a poor blurring effect. After the background blurring processing method of the present application is used, as shown in fig. 7(b), the target area where the gesture image is located is subjected to refined background blurring processing, so that the hand gesture remains prominent and is not blurred, and the image blurring effect is better.
To sum up, the background blurring processing method according to the embodiment of the present application obtains a main image captured by a main camera and a sub-image captured by a sub-camera, detects whether a preset target object exists in the main image, determines a target area corresponding to the target object if it is detected that the target object exists, calculates first depth-of-field information of the target area by using a preset first depth-of-field algorithm according to the main image and the sub-image, obtains second depth-of-field information of a non-target area by using a preset second depth-of-field algorithm, and further performs blurring processing on a background area of the target area according to the first depth-of-field information and performs blurring processing on a background area of the non-target area according to the second depth-of-field information. Therefore, the target object is protected from being blurred during blurring processing, and the visual effect of image processing is improved.
In order to implement the foregoing embodiments, the present application further provides a background blurring processing apparatus. Fig. 8 is a schematic structural diagram of the background blurring processing apparatus according to an embodiment of the present application. As shown in fig. 8, the background blurring processing apparatus includes a first obtaining module 100, a detecting module 200, a determining module 300, a second obtaining module 400, and a processing module 500.
The first obtaining module 100 is configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera.
The detecting module 200 is configured to detect whether a preset target object exists in the main image.
In one embodiment of the present application, as shown in fig. 9, the detecting module 200 includes a detection unit 210 and an acquisition unit 220.
The detection unit 210 is configured to detect a contour edge of the captured scene in the foreground region of the main image.
The acquisition unit 220 is configured to match preset template information with the contour edge, and if the matching is successful, to detect that a preset target object exists in the main image.
The determining module 300 is configured to determine a target area corresponding to the target object in the main image when it is detected that the target object exists.
In one embodiment of the present application, as shown in fig. 10, the determining module 300 includes a first determining unit 310, an obtaining unit 320, a second determining unit 330, and a processing unit 340, wherein:
The first determining unit 310 is configured to determine first foreground area depth information and first background area depth information of the target area according to the first depth information and the focused area of the main image.
The obtaining unit 320 is configured to obtain a basic value of the first blurring degree according to the first foreground region depth information and the first background region depth information.
The second determining unit 330 is configured to determine a blurring coefficient of each pixel in the background region of the target region according to the basic value of the first blurring degree and the depth information of the first background region.
The processing unit 340 is configured to perform Gaussian blurring processing on the background area of the target area according to the blurring coefficient of each pixel.
The second obtaining module 400 is configured to calculate first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the secondary image, and to obtain second depth-of-field information of the non-target area by applying a preset second depth-of-field algorithm.
In one embodiment of the present application, the calculation accuracy of the first depth of field algorithm is higher than that of the second depth of field algorithm.
The processing module 500 is configured to perform blurring on the background area of the target area according to the first depth-of-field information, and to perform blurring on the background area of the non-target area according to the second depth-of-field information.
It should be noted that the foregoing description of the method embodiments is also applicable to the apparatus in the embodiments of the present application, and the implementation principles thereof are similar and will not be described herein again.
The division of each module in the background blurring processing apparatus is only used for illustration, and in other embodiments, the background blurring processing apparatus may be divided into different modules as needed to complete all or part of the functions of the background blurring processing apparatus.
To sum up, the background blurring processing apparatus according to the embodiment of the present application acquires a main image captured by a main camera and a sub-image captured by a sub-camera, detects whether a preset target object exists in the main image, determines a target area corresponding to the target object if it is detected that the target object exists, calculates first depth-of-field information of the target area by using a preset first depth-of-field algorithm according to the main image and the sub-image, acquires second depth-of-field information of a non-target area by using a preset second depth-of-field algorithm, and further performs blurring processing on a background area of the target area according to the first depth-of-field information and performs blurring processing on a background area of the non-target area according to the second depth-of-field information. Therefore, the target object is protected from being blurred during blurring processing, and the visual effect of image processing is improved.
In order to implement the above embodiments, the present application further proposes a computer device. The computer device is any device including a memory for storing a computer program and a processor for running the computer program, such as a smart phone or a personal computer. The computer device further includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 11, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 11, the image processing circuit includes an ISP processor 1040 and control logic 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera having one or more lenses 1012 and an image sensor 1014; to implement the background blurring processing methods of the present application, the imaging device 1010 includes two sets of cameras. With continued reference to fig. 11, the imaging device 1010 may simultaneously capture images of a scene based on a primary camera and a secondary camera. The image sensor 1014 may include a color filter array (e.g., a Bayer filter), and the image sensor 1014 may acquire the light intensity and wavelength information captured with each imaging pixel of the image sensor 1014 and provide a set of raw image data that may be processed by the ISP processor 1040. The sensor 1020 may provide the raw image data to the ISP processor 1040 based on the sensor 1020 interface type, wherein the ISP processor 1040 may calculate depth of field information and the like based on the raw image data acquired by the image sensor 1014 in the primary camera and the raw image data acquired by the image sensor 1014 in the secondary camera, both provided by the sensor 1020. The sensor 1020 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
The ISP processor 1040 may also receive pixel data from the image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for additional processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1070 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Furthermore, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1070 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1040 may be transmitted to the encoder/decoder 1060 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 1070. The encoder/decoder 1060 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistical data may include image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 1012 shading correction, and the like. The control logic 1050 may include a processor and/or a microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 1010 as well as ISP control parameters based on the received statistical data. For example, the control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 1012 shading correction parameters.
The background blurring processing method is realized with the image processing technique in fig. 11 through the following steps (an illustrative sketch combining these steps is given after the steps):
acquiring a main image shot by a main camera and an auxiliary image shot by an auxiliary camera;
detecting whether a preset target object exists in the main image or not;
if the target object is detected and known to exist, determining a target area corresponding to the target object in the main image;
according to the main image and the auxiliary image, calculating first depth of field information of the target area by applying a preset first depth of field algorithm;
acquiring second depth-of-field information of a non-target area in the main image by applying a preset second depth-of-field algorithm;
and blurring the background area of the target area according to the first depth of field information, and blurring the background area of the non-target area according to the second depth of field information.
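Purely for illustration, the steps above can be combined as follows; every callable argument is a hypothetical stand-in for one of the operations described in the foregoing embodiments and does not denote an actual interface of the device:

    def background_blurring(main_image, sub_image, detect_target, locate_target,
                            precise_depth, coarse_depth, blur_region):
        """End-to-end sketch of the background blurring processing method.

        detect_target  -- returns True if the preset target object exists (Step 102)
        locate_target  -- returns a boolean mask of the target area (Step 103)
        precise_depth  -- first (high-precision) depth of field algorithm (Step 104)
        coarse_depth   -- second depth of field algorithm (Step 105)
        blur_region    -- blurs the background of the given area using the given depth
        """
        if not detect_target(main_image):
            # No target object: blur the whole background with the second algorithm only.
            return blur_region(main_image, None, coarse_depth(main_image, sub_image))

        mask = locate_target(main_image)
        first_depth = precise_depth(main_image, sub_image)
        second_depth = coarse_depth(main_image, sub_image)

        result = blur_region(main_image, mask, first_depth)   # Step 106
        result = blur_region(result, ~mask, second_depth)     # Step 107
        return result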
To achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium in which instructions, when executed by a processor, enable the processor to perform the background blurring processing method of the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.