Summary of the Invention
To this end, the present invention provides a depth-image processing method and a mobile terminal, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the invention, there is provided a depth-image processing method suitable for execution in a mobile terminal. The mobile terminal has a dual camera adapted to photograph the same scene simultaneously to obtain a first image and a second image. The method includes the steps of: down-sampling the first image and the second image respectively to generate a corresponding first down-sampled image and second down-sampled image; computing a first depth map and a second depth map from the first down-sampled image and the second down-sampled image by a binocular matching algorithm; obtaining the valid region and invalid region of each of the first depth map and the second depth map by a consistency check between the two depth maps; filling the pixel information of pixels in the invalid regions with the pixel information of pixels in the valid regions to obtain a filled first depth map and a filled second depth map; up-sampling the filled first depth map and second depth map respectively to obtain a first up-sampled image and a second up-sampled image; performing guided filtering on the first up-sampled image using the first image to obtain a detail-enhanced first image; and performing guided filtering on the second up-sampled image using the second image to obtain a detail-enhanced second image.
Optionally, in the depth-image processing method according to the present invention, the step of obtaining the valid and invalid regions of the first and second depth maps by a consistency check includes: computing, pixel by pixel, the difference between the value of each pixel in the first depth map and that of the corresponding pixel in the second depth map; if the absolute value of the difference is greater than a threshold, assigning the two pixels to the invalid region of the first depth map and the invalid region of the second depth map, respectively; and if the absolute value of the difference is not greater than the threshold, assigning the two pixels to the valid region of the first depth map and the valid region of the second depth map, respectively.
Optionally, in the depth-image processing method according to the present invention, the step of filling the pixel information of pixels in the invalid regions with the pixel information of valid-region pixels includes: for each pixel in the invalid region of the first depth map, smoothing it using the pixel information of the valid-region pixels within its neighborhood to obtain pixel information for that invalid-region pixel, thereby generating a preliminarily filled first depth map; and for each pixel in the invalid region of the second depth map, smoothing it using the pixel information of the valid-region pixels within its neighborhood to obtain pixel information for that invalid-region pixel, thereby generating a preliminarily filled second depth map; wherein the pixel information of a pixel includes the depth information and the image information of the pixel.
Optionally, in the depth-image processing method according to the present invention, the step of filling the pixel information of pixels in the invalid regions with the pixel information of valid-region pixels further includes: for each pixel in the invalid regions of the preliminarily filled first depth map and second depth map that remains unfilled, searching in the four directions above, below, left and right for the nearest valid-region pixel in each direction; computing the depth information of the invalid-region pixel as a weighted combination of the depth information of the four pixels found; and generating the filled first depth map and second depth map respectively.
Optionally, in the depth-image processing method according to the present invention, the step of computing, by weighting, the depth information of the invalid-region pixel from the depth information of the four pixels found includes: computing the difference between the pixel position of each of the four pixels found and the pixel position of the invalid-region pixel, yielding four pixel distances; computing the sum of the four pixel distances as the pixel-distance sum; computing a distance weight for each of the four pixels from the four pixel distances and the pixel-distance sum; and computing the depth information of the invalid-region pixel as the weighted sum of the four pixels' depth information using the four corresponding distance weights.
Optionally, in the depth-image processing method according to the present invention, the step of down-sampling the first image and the second image respectively includes: down-sampling the first image and the second image respectively using a bicubic interpolation algorithm.
Optionally, in the depth-image processing method according to the present invention, the step of up-sampling the filled first depth map and second depth map respectively to obtain the first up-sampled image and the second up-sampled image includes: up-sampling the filled first depth map and second depth map respectively using a nearest-neighbor interpolation algorithm to obtain the first up-sampled image and the second up-sampled image.
Optionally, in the depth-image processing method according to the present invention, the threshold is set to 5.
According to another aspect of the present invention, there is provided a mobile terminal, including: a dual camera adapted to photograph the same scene simultaneously to obtain two images; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
In accordance with a further aspect of the present invention, there is provided a computer-readable storage medium storing one or more programs, the one or more programs including instructions which, when executed by a mobile terminal, cause the mobile terminal to perform any of the methods described above.
According to the depth-image processing scheme of the present invention, the two color images captured by the dual camera are processed to ultimately generate two high-precision, high-definition depth maps. Compared with existing schemes, the present invention obtains depth maps of higher precision through a depth-enhancement method, and obtains large-size high-definition depth maps through a super-resolution method.
In addition, the scheme of the present invention first down-samples the first image and the second image to a small size and performs the binocular matching and depth-map enhancement computations on the small images; the algorithmic complexity is therefore low, making the scheme suitable for mobile terminals and highly general.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that its scope can be conveyed fully to those skilled in the art.
Fig. 1 is a structural block diagram of a mobile terminal 100.
The mobile terminal 100 may include a memory interface 102, one or more data processors, image processors and/or central processing units 104, and a peripheral interface 106.
The memory interface 102, the one or more processors 104 and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 106 to facilitate a variety of functions.
For example, a motion sensor 110, a light sensor 112 and a distance sensor 114 may be coupled to the peripheral interface 106 to facilitate functions such as orientation, illumination and ranging. Other sensors 116 may likewise be connected to the peripheral interface 106, such as a positioning system (e.g. a GPS receiver), an acceleration sensor, a temperature sensor, a biometric sensor or other sensing device, to facilitate related functions.
A camera subsystem 120 and an optical sensor 122 may be used to facilitate camera functions such as capturing photographs and video clips, where the optical sensor may be, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor. According to one implementation of the present invention, the camera subsystem 120 may be provided as a dual camera with identical pixel counts, and the two cameras of the dual camera may be arranged in parallel horizontally (side by side) or in parallel vertically (one above the other), for photographing the same scene simultaneously to obtain two images.
Communication functions may be facilitated by one or more wireless communication subsystems 124, which may include radio-frequency receivers and transmitters and/or optical (e.g. infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support LTE, 3G, GSM, GPRS, EDGE, Wi-Fi or WiMax networks and Bluetooth™ networks.
An audio subsystem 126 may be coupled to a speaker 128 and a microphone 130 to facilitate voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony. An I/O subsystem 140 may include a touch-screen controller 142 and/or one or more other input controllers 144. The touch-screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and the touch-screen controller 142 may detect contact and its movement or pause using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared and surface-acoustic-wave technologies.
The one or more other input controllers 144 may be coupled to other input/control devices 148, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 128 and/or the microphone 130.
The memory interface 102 may be coupled to a memory 150. The memory 150 may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic-disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 150 may store an operating system 152, such as Android, iOS or Windows Phone. The operating system 152 may include instructions for handling basic system services and for performing hardware-dependent tasks. In some embodiments, the operating system 152 contains instructions for performing the depth-image processing method. The memory 150 may also store applications 154. While the mobile terminal is running, the operating system 152 is loaded from the memory 150 and executed by the processor 104. Applications 154, when run, are likewise loaded from the memory 150 and executed by the processor 104. Applications run on top of the operating system and use the interfaces it provides over the underlying hardware to implement the various functions desired by the user, such as instant messaging, web browsing, picture management and video playback. Applications 154 may be provided independently of the operating system or bundled with it, including various social applications and video-playback applications, as well as system applications such as an album, a calculator or a voice recorder. In addition, a driver module may be added to the operating system when an application 154 is installed in the mobile terminal 100.
The present invention provides a depth-image processing method in which one or more corresponding programs (including the aforementioned instructions) stored in the memory 150 of the mobile terminal 100 process two pending images to generate detail-enhanced depth images. The two pending images may be obtained by the camera subsystem 120, the dual camera photographing the same scene simultaneously to obtain two images, referred to as the first image and the second image. Of course, two images of the same scene captured by a dual camera external to the mobile terminal 100 may also be transmitted to the mobile terminal 100, which then performs the depth-image processing method; the embodiments of the present invention are not limited in this respect.
Fig. 2 shows a flow diagram of a depth-image processing method 200 according to an embodiment of the present invention.
As shown in Fig. 2, the method 200 begins at step S210 by down-sampling the acquired first image (denoted IL) and second image (denoted IR) respectively, generating a corresponding first down-sampled image IL' and second down-sampled image IR'.
As noted above, the first image and the second image are obtained by the dual camera photographing the same scene simultaneously, which is not repeated here.
Three interpolation algorithms are common in image processing: nearest-neighbor, bilinear and bicubic interpolation. According to one embodiment of the present invention, the first image IL and the second image IR are down-sampled respectively using a bicubic interpolation algorithm to obtain a first down-sampled image IL' and a second down-sampled image IR' of a predetermined size, the predetermined size being set, for example, to 640*480.
The bicubic interpolation can be expressed by the following formula:

f(i', j') = Σ_{m=i−1}^{i+2} Σ_{n=j−1}^{j+2} f(m, n) · R(m − i − dx) · R(n − j − dy)

The formula expresses that the value of pixel (i', j') in the interpolated image is the weighted sum of the 16 pixels nearest the corresponding position (i, j) in the original image. dx and dy denote the fractional coordinates in the X direction and the Y direction respectively, m and n range over the interpolation window (i.e. [i−1, i+2] and [j−1, j+2]), and R(x) denotes the interpolation kernel.
In particular embodiments, different interpolation kernels may be selected as required, for example kernels based on the triangle function, on the Bell curve, or on B-spline curves. Taking the triangle-based kernel as an example, the interpolation kernel R(x) is expressed as:

R(x) = 1 − |x| for |x| ≤ 1, and R(x) = 0 otherwise.
The present invention does not limit the choice of the specific interpolation kernel.
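As a concrete illustration only (not part of the claimed method), the sketch below evaluates the 16-pixel weighted sum above at one fractional source position; the Catmull-Rom cubic (a = −0.5) is used as one possible choice of R(x), and all function names are assumptions of this sketch.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Catmull-Rom cubic interpolation kernel, one common choice of R(x)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

def bicubic_at(img, y, x):
    """Weighted sum of the 16 pixels around the fractional position (y, x)."""
    i, j = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - i, x - j
    val = 0.0
    for m in range(i - 1, i + 3):        # rows of the 4x4 window [i-1, i+2]
        for n in range(j - 1, j + 3):    # columns of the 4x4 window [j-1, j+2]
            val += img[m, n] * cubic_kernel(m - i - dy) * cubic_kernel(n - j - dx)
    return val

# Down-sampling maps each target pixel to a fractional source position:
src = np.add.outer(np.arange(8.0), np.arange(8.0))  # ramp image f(y, x) = y + x
print(bicubic_at(src, 2.5, 3.25))  # a cubic kernel reproduces a linear ramp: 5.75
```

Down-sampling the full image just repeats this evaluation at a grid of fractional positions spaced by the scale factor.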
Next, in step S220, the corresponding first depth map DL and second depth map DR are computed from the first down-sampled image IL' and the second down-sampled image IR' by a binocular matching algorithm (also called a binocular stereo-vision matching algorithm).
The basic principle of a binocular matching algorithm is to find, for each pixel in one image, the corresponding pixel in the image from the other viewpoint, compute a disparity image, and then estimate the depth image. Many binocular matching algorithms exist in the art; they can broadly be divided into local stereo-matching algorithms and global stereo-matching algorithms. Local stereo-matching algorithms mainly use a sliding window and estimate disparity with a locally optimal cost function; common examples are the sum-of-absolute-differences algorithm (SAD) and the sum-of-squared-differences algorithm (SSD). Global stereo-matching algorithms mainly estimate disparity by global optimization: a global energy function is established and the optimal disparity map is obtained by minimizing it; a common example is the SGBM algorithm. Generally, global stereo matching gives more accurate results and thus a better matching effect than local stereo matching, but its complexity is also much greater than that of local stereo matching and its running time long. In particular embodiments, a suitable stereo-matching algorithm may be selected as required to compute the first depth map DL and the second depth map DR.
It should be noted that the present invention does not limit which matching algorithm is used to compute the depth maps of the images; any binocular matching algorithm may be combined with the embodiments of the present invention to process depth images.
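As a hedged illustration of the local (SAD) approach described above — a toy sketch on a synthetic pair, not the SGBM algorithm and not a production implementation — the following computes a disparity map by sliding-window absolute-difference matching:

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    """Local stereo matching: for each left pixel, pick the disparity d that
    minimizes the sum of absolute differences over a (2*win+1)^2 window."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    for y in range(win, H - win):
        for x in range(max_disp + win, W - win):
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                lw = left[y-win:y+win+1, x-win:x+win+1].astype(float)
                rw = right[y-win:y+win+1, x-d-win:x-d+win+1]
                cost = np.abs(lw - rw).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right view is the left view shifted by a disparity of 3,
# i.e. right[y, x] == left[y, x + 3].
xs = np.add.outer(np.arange(20), np.arange(40))   # textured ramp, left image
left, right = xs, xs + 3
d = sad_disparity(left, right)
print(d[10, 20])  # recovers the true disparity: 3
```

A real pipeline would add cost aggregation, sub-pixel refinement and occlusion handling, which is exactly what algorithms such as SGBM provide.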
Next, in step S230, the valid region and invalid region of each of the first depth map DL and the second depth map DR are obtained by a consistency check between the two depth maps.
According to one implementation, the consistency check includes the steps of: (1) computing, pixel by pixel, the difference between the value of each pixel in the first depth map DL and that of the corresponding pixel in the second depth map DR; (2) taking the absolute value of the difference; if the absolute value is greater than a threshold, the two pixels belong to the invalid region of the first depth map and the invalid region of the second depth map respectively; (3) if the absolute value is not greater than the threshold, the two pixels belong to the valid region of the first depth map and the valid region of the second depth map respectively.
Let the valid region of the first depth map be DL_Validmap and the valid region of the second depth map be DR_Validmap. The consistency check can then be expressed by the following formulas:

when |DL_i − DR_j| > T:  i ∉ DL_Validmap, j ∉ DR_Validmap;

when |DL_i − DR_j| ≤ T:  i ∈ DL_Validmap, j ∈ DR_Validmap.

Here DL_i denotes the value of pixel i in the first depth map DL, DR_j denotes the value of the pixel j in the second depth map DR corresponding to pixel i, and T denotes the threshold. Optionally, the threshold is set to T = 5.
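The threshold test above amounts to an element-wise comparison of the two depth maps; a minimal NumPy sketch (names are illustrative, and T = 5 follows the optional setting above):

```python
import numpy as np

def consistency_masks(DL, DR, T=5):
    """Return boolean valid-region masks for both depth maps: a pixel pair is
    valid when the absolute difference of corresponding values is <= T."""
    valid = np.abs(DL.astype(int) - DR.astype(int)) <= T
    return valid, valid.copy()   # DL_Validmap and DR_Validmap coincide per pair

DL = np.array([[10, 40], [22, 7]])
DR = np.array([[12, 90], [25, 7]])
dl_valid, dr_valid = consistency_masks(DL, DR)
print(dl_valid.tolist())  # [[True, False], [True, True]]
```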
Next, in step S240, for the first depth map DL, the pixel information of pixels in its invalid region is filled using the pixel information of pixels in its valid region, yielding a filled first depth map FDL; likewise, for the second depth map DR, the pixel information of pixels in its invalid region is filled using the pixel information of pixels in its valid region, yielding a filled second depth map FDR.
According to an implementation of the present invention, step S240 may be divided into two sub-steps, as follows.
First step: for each pixel in the invalid region of the first depth map DL, it is smoothed using the pixel information of the pixels within its neighborhood that belong to the valid region, yielding pixel information for that invalid-region pixel and thus generating a preliminarily filled first depth map F1DL.

Likewise, for each pixel in the invalid region of the second depth map DR, it is smoothed using the pixel information of the pixels within its neighborhood that belong to the valid region, yielding pixel information for that invalid-region pixel and thus generating a preliminarily filled second depth map F1DR.
According to an implementation of the present invention, as shown in Fig. 3, the hatched parts represent the valid regions 31 and 32 of the image, and the remaining area is the invalid region. Let a be a pixel in the invalid region: all pixels belonging to the valid region are searched for within the neighborhood 33 of pixel a, and a filtering method such as bilateral filtering, guided filtering or Gaussian filtering may be selected to smooth pixel a using the pixel information of the pixels found (that is, the hatched pixels inside the neighborhood 33). Optionally, the pixel information of a pixel includes the depth information and the image information of the pixel, the image information being, for example, the pixel's RGB value.
According to embodiments of the present invention, the neighborhood radius may take any integer between 1 and 20; in one embodiment of the present invention, the neighborhood radius is set to 5, though the invention is not limited in this regard.
According to yet another embodiment of the present invention, the pixels filled in the first step are merged into the valid region.
In an embodiment according to the present invention, for an invalid region of larger area, no pixel belonging to the valid region may be found within the neighborhood of a given invalid-region pixel; in that case, after the first step of filling, certain pixels in the invalid region remain unfilled.
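The first filling step can be sketched as follows; a plain neighborhood mean over valid pixels is used here purely for illustration (in place of the bilateral, guided or Gaussian filters mentioned above), and the radius and names are assumptions:

```python
import numpy as np

def preliminary_fill(depth, valid, r=5):
    """For each invalid pixel, average the valid pixels inside its
    (2r+1)x(2r+1) neighborhood; pixels with no valid neighbor stay unfilled."""
    H, W = depth.shape
    filled = depth.astype(float).copy()
    newly_valid = valid.copy()
    for y in range(H):
        for x in range(W):
            if valid[y, x]:
                continue
            ys = slice(max(0, y - r), y + r + 1)
            xs = slice(max(0, x - r), x + r + 1)
            nb_vals, nb_mask = depth[ys, xs], valid[ys, xs]
            if nb_mask.any():
                filled[y, x] = nb_vals[nb_mask].mean()
                newly_valid[y, x] = True   # merged into the valid region
    return filled, newly_valid

depth = np.array([[4., 0., 8.],
                  [4., 0., 8.]])
valid = np.array([[True, False, True],
                  [True, False, True]])
f, v = preliminary_fill(depth, valid, r=1)
print(f[0, 1])  # mean of the valid neighbors 4, 8, 4, 8 -> 6.0
```

Pixels whose neighborhood contains no valid pixel are left for the second, cross-value step described next.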
Therefore, in the second step, for each pixel in the invalid regions of the preliminarily filled first depth map F1DL and second depth map F1DR that remains unfilled, the nearest valid-region pixel in each of the four directions above, below, left and right is searched for; the depth information of the invalid-region pixel is then computed as a weighted combination of the depth information of the four pixels found, yielding the filled first depth map FDL and second depth map FDR.
According to one implementation, the second step may also be summarized as filling by a cross-value method. Let i be an unfilled pixel in the invalid region: first, the valid-region pixels above, below, to the left of and to the right of pixel i are searched for, denoted itop, ibottom, ileft and iright respectively; next, the depth value of pixel i is computed with a distance-weighted algorithm.
Optionally, the depth value of pixel i is computed as follows: 1) compute the difference between the pixel position of each of the four pixels found (i.e. itop, ibottom, ileft, iright) and the pixel position of pixel i in the invalid region, yielding four pixel distances; 2) compute the sum of the four pixel distances as the pixel-distance sum; 3) compute a distance weight for each of the four pixels from the four pixel distances and the pixel-distance sum; 4) compute the depth information (that is, the depth value) of the invalid-region pixel as the weighted sum of the four pixels' depth information using the four corresponding distance weights.
Taking the computation of the depth information of a pixel i in the first depth map FDL as an example, the depth value FDL(i) may be written as:

FDL(i) = Σ_k w_k · F1DL(i_k),  k ∈ {top, bottom, left, right},  with w_k = (S − d_k) / (3S)

where itop, ibottom, ileft and iright denote the nearest valid-region pixels above, below, to the left of and to the right of pixel i, and F1DL(itop), F1DL(ibottom), F1DL(ileft) and F1DL(iright) denote the depth information of those pixels in the preliminarily filled first depth map F1DL. Here d_k is the pixel distance from pixel i to the pixel found in direction k and S is the pixel-distance sum, so that a nearer pixel receives a larger weight and the four weights sum to 1.
According to embodiments of the present invention, the image information (i.e. RGB values) of pixels is taken into account in the preliminary filling of the first step; the second step, which further fills the pixels left unfilled by the first step, need only perform the cross filling on the depth information (i.e. depth values) of the pixels, without further considering their image information. This keeps the filling effect good while reducing computational complexity as far as possible.
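The cross-value step can be sketched as below, under the assumption that the distance weight takes the normalized form w_k = (S − d_k)/(3S), so nearer pixels count more; the weight form, border handling and names are assumptions of this sketch:

```python
import numpy as np

def cross_fill(depth, valid):
    """Fill each invalid pixel from the nearest valid pixel above, below,
    left and right, weighted by w_k = (S - d_k) / (3 * S)."""
    H, W = depth.shape
    out = depth.astype(float).copy()

    def nearest(y, x, dy, dx):
        d = 1
        while 0 <= y + d*dy < H and 0 <= x + d*dx < W:
            if valid[y + d*dy, x + d*dx]:
                return depth[y + d*dy, x + d*dx], d
            d += 1
        return None

    for y in range(H):
        for x in range(W):
            if valid[y, x]:
                continue
            hits = [nearest(y, x, dy, dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            hits = [h for h in hits if h is not None]
            if len(hits) < 4:
                continue                      # sketch: borders handled elsewhere
            S = sum(d for _, d in hits)
            out[y, x] = sum(v * (S - d) / (3 * S) for v, d in hits)
    return out

depth = np.array([[0., 0., 10., 0., 0.],
                  [0., 0.,  0., 0., 0.],
                  [0., 30., 0., 40., 0.],
                  [0., 0.,  0., 0., 0.],
                  [0., 0., 20., 0., 0.]])
valid = depth > 0
f = cross_fill(depth, valid)
print(f[2, 2])  # distances (2, 2, 1, 1): weighted value 470/18, about 26.11
```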
Next, in step S250, the filled first depth map FDL and second depth map FDR are up-sampled respectively to obtain a first up-sampled image BFDL and a second up-sampled image BFDR.
According to one embodiment of the present invention, the filled first depth map FDL and second depth map FDR are up-sampled respectively using a nearest-neighbor interpolation algorithm, so that the up-sampled image sizes agree with those of the first image IL and the second image IR; the up-sampled images are denoted the first up-sampled image BFDL and the second up-sampled image BFDR respectively.
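For integer scale factors, nearest-neighbor up-sampling reduces to repeating rows and columns; a minimal NumPy sketch (the integer factor is an assumption made for simplicity):

```python
import numpy as np

def nearest_upsample(img, fy, fx):
    """Nearest-neighbor up-sampling by integer factors (fy, fx): every depth
    value is simply replicated into an fy x fx block."""
    return np.repeat(np.repeat(img, fy, axis=0), fx, axis=1)

FDL = np.array([[1, 2],
                [3, 4]])
BFDL = nearest_upsample(FDL, 2, 2)
print(BFDL.shape)  # (4, 4)
print(BFDL[0])     # [1 1 2 2]
```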
After the processing of step S250, what is obtained are coarse, large-size depth maps BFDL and BFDR. According to one implementation of the present invention, a super-resolution (SR) method may be used to preserve image detail while enlarging the image size. Common super-resolution methods include sparse-coding-based SR algorithms; alternatively, deep-learning-based super-resolution algorithms may be used, such as SRCNN, which introduces deep convolutional neural networks into SR. Preserving image detail with super-resolution methods is a relatively mature technique in this field and is not elaborated further here.
In an embodiment according to the present invention, in step S260, guided filtering is performed on the first up-sampled image BFDL using the first image IL, obtaining a detail-enhanced first image SDL. Correspondingly, in step S270, guided filtering is performed on the second up-sampled image BFDR using the second image IR, obtaining a detail-enhanced second image SDR.
The present invention does not restrict the specific guided-filtering method. Taking the guided filtering of the first up-sampled image BFDL as an example, with IL as the guide image and SDL as the filtered result of BFDL after guided filtering, the guided filter can be expressed by the following formula:

SDL = A·IL + B,
where A and B are guided-filter parameters computed from the pixel values of the first image IL and the first up-sampled image BFDL. Optionally, the guided-filter parameters are obtained by computing the means and standard deviations of the pixels in the first image IL and the first up-sampled image BFDL, as described in the following steps:
(a) in the first up-sampled image BFDL, construct a block of size (2r+1) × (2r+1) centered on each pixel p, r being the filter radius, and compute the mean μBFDL and standard deviation σBFDL for each pixel p;
(b) in the guide image IL, construct a block of size (2r+1) × (2r+1) centered on each pixel p, r being the filter radius, and compute the mean μIL and standard deviation σIL for each pixel p;
(c) multiply the corresponding pixel values of the first up-sampled image BFDL and the guide image IL at each pixel position p to form the image BFDLIL; construct a block of size (2r+1) × (2r+1) centered on each pixel p and compute the mean μBFDLIL and standard deviation σBFDLIL for each pixel p in BFDLIL;
(d) compute A and B for each pixel p:

A = (μBFDLIL − μIL·μBFDL) / (σIL² + ε)

B = μBFDL − A·μIL
where ε is a fixed constant that determines the final effect of the filtering; for images represented in the 0–255 range, this constant usually takes a value of ε = 50–500.
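The per-pixel statistics of steps (a)–(d) can be sketched as follows. The box-mean helper, radius and ε value are illustrative assumptions, and a flat input is used to check the degenerate case: zero variance gives A = 0, so B reproduces the input unchanged.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window centered on each pixel (clipped at borders)."""
    H, W = img.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = img[max(0, y-r):y+r+1, max(0, x-r):x+r+1].mean()
    return out

def guided_filter(IL, BFDL, r=2, eps=100.0):
    """SDL = A*IL + B, with A = (muBFDLIL - muIL*muBFDL) / (sigmaIL^2 + eps)
    and B = muBFDL - A*muIL, computed per pixel over (2r+1)x(2r+1) blocks."""
    mu_I = box_mean(IL, r)                       # muIL
    mu_p = box_mean(BFDL, r)                     # muBFDL
    mu_Ip = box_mean(IL * BFDL, r)               # muBFDLIL
    var_I = box_mean(IL * IL, r) - mu_I * mu_I   # sigmaIL^2
    A = (mu_Ip - mu_I * mu_p) / (var_I + eps)
    B = mu_p - A * mu_I
    return A * IL + B

IL = np.full((6, 6), 2.0)       # flat guide image
BFDL = np.full((6, 6), 2.0)     # flat up-sampled depth map
SDL = guided_filter(IL, BFDL)
print(SDL[3, 3])  # a flat input passes through unchanged: 2.0
```

On real image pairs the edges of the guide IL are transferred into SDL, which is the detail-enhancement effect sought in steps S260 and S270.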
Of course, different regions of the image may also use different guided-filter parameters; this is not elaborated further here.
Detail enhancement and deblurring are performed on the first up-sampled image BFDL and the second up-sampled image BFDR by the guided-filtering algorithm, obtaining high-precision depth images that satisfy the visual effect: namely, the detail-enhanced first image SDL and the detail-enhanced second image SDR.
According to the depth-image processing scheme of the present invention, the two color images captured by the dual camera (i.e. IL and IR) are processed to ultimately generate two high-precision, high-definition depth maps (i.e. SDL and SDR). Compared with existing schemes, the present invention obtains depth maps of higher precision through the depth-enhancement algorithm, and obtains large-size high-definition depth maps through the super-resolution method.
In addition, the scheme of the present invention first down-samples the first image and the second image to a small size and performs the binocular matching and depth-map enhancement computations on the small images; the algorithmic complexity is therefore low, making the scheme suitable for mobile terminals and highly general.
It should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the method of the disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively positioned in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or may furthermore be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components of the embodiments may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various techniques described herein may be implemented in connection with hardware or software, or a combination thereof. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e. instructions) embedded in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
Where the program code executes on a programmable computer, the computing device generally includes a processor, a processor-readable storage medium (including volatile and non-volatile memory and/or storage elements), at least one input device and at least one output device, wherein the memory is configured to store the program code and the processor is configured to perform the methods of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information-delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or method element forms a means for carrying out the method or method element. Furthermore, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments can be devised which fall within the scope of the invention thus described. Furthermore, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.