CN107613208A - Focusing area adjusting method, terminal, and computer-readable storage medium - Google Patents

Focusing area adjusting method, terminal, and computer-readable storage medium

Info

Publication number: CN107613208A (application CN201710909825.9A; granted as CN107613208B)
Authority: CN (China)
Prior art keywords: focusing, area, interface, target area, view
Prior art date:
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201710909825.9A
Other languages: Chinese (zh)
Other versions: CN107613208B (en)
Inventor: 赵蕴泽
Current assignee: Nubia Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Nubia Technology Co Ltd
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date:
Publication date:
Application filed by Nubia Technology Co Ltd
Priority to CN201710909825.9A
Publication of CN107613208A
Application granted; publication of CN107613208B
Legal status: Active; anticipated expiration noted

Abstract

The invention discloses a method for adjusting a focusing area. The method includes: obtaining a focusing trajectory input in a target area of a viewfinder interface; determining M sub-regions of the target area based on the focusing trajectory, and defining the M sub-regions as the focusing area; and focusing on the subject within the focusing area. The invention also discloses a terminal and a computer storage medium. With the adjusting method provided by the invention, the focusing area can be resized in a user-defined way, which reduces unnecessary computation during focusing and speeds up focusing while also ensuring focusing quality.

Description

Focusing area adjusting method, terminal, and computer-readable storage medium
Technical field
The present invention relates to the field of camera shooting technology, and in particular to a method for adjusting a focusing area, a terminal, and a computer storage medium.
Background art
With the development of intelligent terminals, the performance of their cameras has steadily improved. To satisfy users' pursuit of high-quality shots, cameras provide a Continuous Auto Focus function. A common continuous auto-focus scheme works as follows: changes in ambient light are detected, changes in the camera's pose data are detected, and changes of the objects in the viewfinder interface are detected; when one or more of these three changes reaches its corresponding threshold, the camera is triggered to refocus.
Whichever change triggers refocusing, the measurement region of continuous auto-focus (that is, the focusing area) is always a block at the center of the terminal screen, because the central area of the screen is normally where the subject is framed when the user shoots.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide a method for adjusting a focusing area, a terminal, and a computer storage medium.
An embodiment of the present invention provides a method for adjusting a focusing area. The method includes: obtaining a focusing trajectory input in a target area of a viewfinder interface; determining M sub-regions of the target area based on the focusing trajectory, and defining the M sub-regions as the focusing area; and focusing on the subject within the focusing area.
Optionally, the step of obtaining the focusing trajectory input in the target area of the viewfinder interface includes: obtaining the coordinate value of each touch point of the input trajectory in the viewfinder interface; obtaining the coordinate value range of the target area; comparing the coordinate value of each touch point with the coordinate value range of the target area; and defining the touch points that lie within the coordinate value range of the target area as the focusing trajectory.
Optionally, the step of determining the M sub-regions of the target area based on the focusing trajectory includes: dividing the target area of the viewfinder interface into N sub-regions, each determined by a coordinate value range; comparing the coordinate value of each touch point of the focusing trajectory with the coordinate value ranges of the N sub-regions; and determining the sub-regions that contain touch points of the focusing trajectory as the M sub-regions of the target area.
Optionally, the step of determining the sub-regions containing touch points of the focusing trajectory as the M sub-regions of the target area includes: obtaining a size-adjustment operation directed at the sub-regions of the target area that contain touch points of the focusing trajectory; determining a corresponding zoom parameter based on the size-adjustment operation; based on the zoom parameter, adjusting the sub-regions containing touch points of the focusing trajectory from a first area range to a second area range; and determining the sub-regions adjusted to the second area range as the M sub-regions of the target area.
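The patent gives no formulas for the size adjustment; a minimal sketch of one plausible interpretation — scaling a rectangular sub-region about its center by the zoom parameter — follows. The coordinate convention and all names here are assumptions for illustration, not part of the disclosure.

```python
def scale_region(rect, zoom):
    """Scale a rectangular sub-region about its own center by `zoom`
    (zoom > 1 enlarges, zoom < 1 shrinks), turning the first area
    range into the second area range."""
    x_min, y_min, x_max, y_max = rect
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) / 2 * zoom
    half_h = (y_max - y_min) / 2 * zoom
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

For example, a pinch-out gesture mapped to zoom = 2 would double the width and height of each selected sub-region while keeping it centered.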
Optionally, the step of focusing on the subject within the focusing area includes: performing feature extraction on the image corresponding to the viewfinder interface to obtain feature parameters; based on the extracted feature parameters, determining a target object in the focusing area whose feature parameters meet a preset condition; and focusing on the target object in the focusing area.
Optionally, the feature parameter is depth information, and the step of determining, based on the extracted feature parameters, the target object in the focusing area whose feature parameters meet the preset condition includes: based on the depth information of each object in the viewfinder interface, determining the target object in the focusing area whose depth information meets the condition.
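The patent does not define the depth condition; as one hedged illustration, the condition could be membership in a (near, far) depth interval. The data shape and names below are invented for the sketch.

```python
def select_targets_by_depth(objects, depth_range):
    """Keep the objects in the focusing area whose depth information
    satisfies the preset condition, modeled here (as an assumption)
    as lying inside a (near, far) depth interval in metres."""
    near, far = depth_range
    return [obj for obj in objects if near <= obj["depth"] <= far]
```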
Optionally, the viewfinder interface includes one or more focusing areas, and the method further includes: marking out the focusing areas in the picture corresponding to the viewfinder interface.
The present invention also provides a terminal, which includes: a camera, configured to perform image acquisition on the framed region to obtain a viewfinder interface; an input unit, configured to obtain a focusing trajectory input in a target area of the viewfinder interface; a memory, configured to store a focusing-area adjustment program; and a processor, configured to execute the focusing-area adjustment program in the memory to perform the following operations: determining M sub-regions of the target area based on the focusing trajectory, defining the M sub-regions as the focusing area, and focusing on the subject within the focusing area.
Optionally, the input unit is configured to obtain input operations on the interface, and the processor is further configured to execute the focusing-area adjustment program in the memory to perform the following operations: obtaining the coordinate value of each touch point of the input trajectory in the viewfinder interface; obtaining the coordinate value range of the target area; comparing the coordinate value of each touch point with the coordinate value range of the target area; and defining the touch points that lie within the coordinate value range of the target area as the focusing trajectory.
The present invention also provides a computer storage medium storing one or more programs, which can be executed by one or more processors to implement the method steps of any one of claims 1 to 7.
In the technical solution of the embodiments of the present invention, the size of the target area of the viewfinder interface is adjusted by a size-adjustment operation. Because the focusing area is a part of the target area, adjusting the size of the target area dynamically adapts it to the size of the current subject. This user-defined adjustment of the focusing area size reduces unnecessary computation during focusing and speeds up focusing while also ensuring focusing quality.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;
Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the viewfinder interface of the camera according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of the focusing area adjusting method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the division of the target area of the camera according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the user inputting, in the framed region, an input trajectory T passing through the target area according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of determining the focusing area according to the input trajectory according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the display of the focusing area when the camera focuses according to an embodiment of the present invention;
Fig. 9 is a schematic flowchart of the focusing area adjusting method according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of changing the size of the focusing area according to an embodiment of the present invention;
Fig. 11 is a schematic flowchart of the focusing area adjusting method according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a focusing area containing a target object according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of the structure of the terminal according to an embodiment of the present invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module", "part", and "unit" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDAs), portable media players (PMPs), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example. Those skilled in the art will understand that, except for elements specifically intended for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal; a mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and transmitting signals during messaging or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help users send and receive e-mail, browse web pages, and access streaming media, providing users with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 may, when the mobile terminal 100 is in a call-signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like, convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a telephone call mode, recording mode, or speech recognition mode, and can process such sound into audio data. The processed audio (voice) data can, in the case of the telephone call mode, be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensors include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be configured with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which will not be described here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed on or near the touch panel 1071 with a finger, stylus, or any other suitable object or accessory) and drives the corresponding connection devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and a power switch), a trackball, a mouse, and a joystick, which are not limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input from external devices (for example, data information or electric power) and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, application programs required by at least one function (such as a sound playback function and an image playback function), and the like, and the data storage area can store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the whole mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of universal mobile communication technology. The LTE system includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204.
Specifically, the UE 201 may be the above-described terminal 100, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 can be connected with the other eNodeBs 2022 through backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides some register functions, such as those of a home location register (not shown), and stores user-specific information about service features, data rates, and so on. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed.
Fig. 3 is a first schematic diagram of the viewfinder interface of the camera. As shown in Fig. 3, the measurement region of the camera's continuous auto-focus (that is, the focusing area) is the central area of the screen, and this central area occupies 1/4 of the whole screen. The subject (that is, the target object the user is shooting) is located in the upper-right corner of the central area. Denote the central area as A, the region of the subject as A1, and the region of the central area other than the subject as A2; obviously, A = A1 + A2.
During continuous auto-focus, the measurement region is the 1/4-sized region at the center of the screen (that is, region A in Fig. 3). Because the proportion of the subject within region A is small (that is, region A1 in Fig. 3), the continuous auto-focus process must also take into account the imaging of the other objects in region A (that is, region A2 in Fig. 3), so the subject may still be blurred after continuous auto-focus. For this and similar situations, the embodiments of the present invention propose an improved continuous auto-focus scheme.
Fig. 4 is a first schematic flowchart of the focusing method of an embodiment of the present invention. As shown in Fig. 4, the focusing method includes the following steps:
Step S401: obtain the focusing trajectory input in the target area of the viewfinder interface.
In the embodiments of the present invention, the region captured by the camera is called the framed region, and when the scene in the framed region is presented on the screen, the picture on the screen is called the viewfinder interface. Thus the viewfinder interface corresponds to the framed region, that is, the viewfinder interface is the picture in which the framed region is presented on the screen.
In the embodiments of the present invention, it is assumed that the size of the viewfinder interface is the same as that of the terminal screen; of course, the viewfinder interface may also be smaller than the terminal screen, that is, it occupies only part of the screen. To distinguish the main subject of the viewfinder interface from the background, and considering that the subject is normally located in the central area of the viewfinder interface, the central area of the viewfinder interface is taken as the target area, which is also the subject area. In one embodiment, the target area occupies 1/4 of the viewfinder interface; of course, the size and position of the target area are not limited to this.
In the embodiments of the present invention, because the subject may be small relative to the target area, the target area is refined in order to focus on the subject more accurately. Specifically, the target area is divided into N sub-regions. In one embodiment, every sub-region of the N sub-regions has the same size; in another embodiment, the sub-regions differ in size, or some of them do. Fig. 5 is a second schematic diagram of the viewfinder interface of the camera of an embodiment of the present invention. As shown in Fig. 5, the 1/4 region at the center of the viewfinder interface (that is, the target area) is divided into 5 × 5 = 25 sub-regions (that is, N = 25). Those skilled in the art will understand that the division of the target area is not limited to the manner shown in Fig. 5 and can also be done in other ways.
As shown in Fig. 6, the terminal presents the current picture of the framed region, and the user, for the object to be precisely focused, inputs through the touch screen an input trajectory T that approximates the outline of the object. The terminal obtains the coordinate value of each touch point on the input trajectory T and compares each obtained coordinate value with the coordinate range of the target area: the touch points that fall within the coordinate range are defined as the focusing trajectory T2, and the touch points that do not fall within the coordinate range are defined as the out-of-range trajectory T1.
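The touch-point classification just described can be sketched in a few lines. This is an illustrative sketch only, not code from the patent; the coordinate convention (a screen-space bounding rectangle for the target area) and all names are assumptions.

```python
def split_trajectory(touch_points, target_rect):
    """Split an input trajectory into the focusing trajectory T2 (touch
    points inside the target area) and the out-of-range trajectory T1.

    touch_points -- list of (x, y) screen coordinates along the stroke
    target_rect  -- (x_min, y_min, x_max, y_max) of the target area
    """
    x_min, y_min, x_max, y_max = target_rect
    t2, t1 = [], []
    for x, y in touch_points:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            t2.append((x, y))  # inside the target area: focusing trajectory
        else:
            t1.append((x, y))  # outside the target area: ignored for focusing
    return t2, t1
```

In practice the touch points would come from the touch panel's contact-coordinate stream as the user traces the subject's outline.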
Step S402: determine M sub-regions of the target area based on the focusing trajectory, and define the M sub-regions as the focusing area.
As shown in Fig. 7, the target area is divided into N sub-regions, and the position of each sub-region on the screen is determined by its coordinate range. By comparing the coordinate value of each touch point on the focusing trajectory with the coordinate range of each sub-region, the M sub-regions that contain touch points of the focusing trajectory are selected from the N sub-regions, and these M sub-regions are defined as the focusing area.
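For a uniform grid like the 5 × 5 division of Fig. 5, the per-sub-region coordinate comparison reduces to an index computation. The sketch below assumes equally sized cells and invented names; it is an illustration, not the patent's implementation.

```python
def select_focus_subregions(focus_trajectory, target_rect, rows=5, cols=5):
    """Return the set of sub-region indices (row, col) of the target
    area's rows x cols grid that contain at least one touch point of
    the focusing trajectory; these are the M sub-regions forming the
    focusing area."""
    x_min, y_min, x_max, y_max = target_rect
    cell_w = (x_max - x_min) / cols
    cell_h = (y_max - y_min) / rows
    cells = set()
    for x, y in focus_trajectory:
        col = min(int((x - x_min) / cell_w), cols - 1)  # clamp right edge
        row = min(int((y - y_min) / cell_h), rows - 1)  # clamp bottom edge
        cells.add((row, col))
    return cells
```

Here M is simply `len(cells)`; unequal sub-region sizes would instead require the explicit range comparison described in the text.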
Step S403: focus on the subject in the focusing area.
In the embodiments of the present invention, focusing on the objects in the focusing area uses auto-focus technology. In terms of basic principle, auto-focus can be divided into two major classes: one is ranging auto-focus, based on measuring the distance between the lens and the subject; the other is focus-detection auto-focus, based on achieving a sharp image on the focusing screen.
The focusing process is illustrated below taking focus-detection auto-focus as an example: the focal length of the camera is adjusted, and the image of the viewfinder interface corresponding to each focal length is obtained; when the sharpness of the image within the focusing area of the viewfinder interface is at its maximum, the current focal length is determined as the focusing focal length corresponding to the focusing area.
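The sweep-and-pick-the-sharpest procedure can be sketched generically. The callbacks, names, and linear sweep strategy are assumptions for illustration (real systems typically use a coarse-to-fine or hill-climbing search), not the patent's implementation.

```python
def autofocus_sweep(capture_at, focal_positions, sharpness):
    """Focus-detection autofocus as a sweep: step the lens through the
    candidate focal positions, score the focusing-area image at each,
    and return the position with maximal sharpness.

    capture_at(f)  -- returns the focusing-area image at focal position f
    sharpness(img) -- scalar sharpness metric (larger means sharper)
    """
    best_f, best_score = None, float("-inf")
    for f in focal_positions:
        score = sharpness(capture_at(f))
        if score > best_score:
            best_f, best_score = f, score
    return best_f
```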
Above-mentioned focus detection auto-focusing mainly has contrast method and phase method, wherein:
1) contrast method, this method are to realize auto-focusing by the contour edge of detection image.The profile side of imageEdge is more clear, then its brightness step is bigger, and the contrast between edge scenery and background is bigger in other words.Conversely,Image out of focus, contour edge is smudgy, and brightness step or contrast decline;Out of focus more remote, contrast is lower.Utilize thisPrinciple, two photoelectric detectors are placed on equidistance before and after charge coupled cell (CCD, Charge-coupled Device)Place, by the image of photography thing by light splitting simultaneously into the contrast on the two detectors, exporting its imaging respectively.When twoDuring the absolute value minimum for the contrast difference that detector is exported, illustrate the image planes of focusing just among two detectors, i.e.,Approached with CCD imaging surface, then focusing is completed.
2) Phase method. This method achieves auto-focusing by detecting the offset of the image. A grating plate made up of parallel lines, alternately transparent and opaque, is placed at the position of the CCD. Behind the grating plate, two photo detectors are placed at appropriate positions, symmetrically about the optical axis. The grating plate oscillates back and forth in the direction perpendicular to the optical axis. When the focusing plane coincides with the grating plate, the light passing through the transparent lines of the plate reaches the two photo detectors behind it simultaneously. When out of focus, the light beam reaches the two photo detectors one after the other, so there is a phase difference between their output signals. After circuit processing, the two out-of-phase signals control an actuator to adjust the position of the objective lens so that the focusing plane coincides with the plane of the grating plate.
In one embodiment, the focusing method further comprises the following step:
marking out the focusing area in the picture corresponding to the viewfinder interface.
As shown in Fig. 8, the viewfinder interface includes 4 focusing areas, which are marked out by frames in the picture corresponding to the viewfinder interface, so that the user can easily know the position of the region that is in sharp focus. A focusing area is displayed in the viewfinder interface only after focusing on it is completed, so the number of displayed focusing areas prompts the user as to whether focusing is currently complete.
With the focusing method provided by the embodiment of the present invention, the user can predefine the shape of the detection region used for focusing; when the terminal is focusing, the focusing calculation is carried out according to the shape of the focusing area, and the user is prompted via the display interface as to whether the subject in the current focusing area is in focus. Compared with existing automatic methods, the focusing method of this embodiment improves the focusing accuracy while reducing the amount and time of focusing calculation.
As shown in Fig. 9, in the second embodiment provided by the present invention, by obtaining an adjustment of the area of the focusing area, the focusing area and the focusing speed for the photographed object are further refined. This is carried out as follows.
Step S901: obtain the focusing track input in the target area of the viewfinder interface.
Step S902: determine the M subregions in the target area based on the focusing track.
Step S903: obtain the size adjustment operation directed at the subregions in the target area that contain touch points of the focusing track.
In the embodiment of the present invention, because the size of the subject may be much smaller or much larger than the focusing area, the size of the focusing area is adjusted in order to focus on the subject adaptively.
In the embodiment of the present invention, the size adjustment of the focusing area is realized based on a size adjustment operation. Here, size adjustment operations fall into two classes: enlargement operations and reduction operations. In one embodiment, the enlargement or reduction operation can be performed by gesture, for example: a pinch with two fingers realizes the reduction operation, and a spread with two fingers realizes the enlargement operation, where the distance the fingers pinch or spread characterizes the degree of reduction or enlargement. As another example: sliding one finger upward realizes the reduction operation, and sliding one finger downward realizes the enlargement operation, where the distance the finger slides up or down characterizes the degree of reduction or enlargement.
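One hedged illustration of how a pinch or spread gesture could be turned into a zoom parameter, assuming the gesture is reduced to the finger separation at its start and end (`zoom_parameter` is an illustrative name, not part of any touch framework):

```python
# Sketch: map a two-finger gesture to a zoom parameter E. A pinch
# (fingers end closer together) yields E < 1 and shrinks the focusing
# area; a spread (fingers end further apart) yields E > 1 and enlarges it.

def zoom_parameter(start_distance, end_distance):
    """Ratio of final to initial finger separation."""
    if start_distance <= 0:
        raise ValueError("finger distance must be positive")
    return end_distance / start_distance
```

The same shape works for the one-finger variant by substituting slide distance for finger separation, with the slide direction choosing between reduction and enlargement.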
It will be appreciated by those skilled in the art that the implementation of the enlargement or reduction operation of the embodiment of the present invention is not limited to the above; it can also be realized in other ways, for example, long-pressing the left direction button to realize the enlargement operation and long-pressing the right direction button to realize the reduction operation.
Step S904: determine the corresponding zooming parameter based on the size adjustment operation.
In the embodiment of the present invention, the zooming parameter corresponding to the focusing area can be determined based on the size adjustment operation on the focusing area. Here, the operational attribute of the size adjustment operation characterizes the zooming parameter.
In one embodiment, a pinch with two fingers realizes the reduction operation and a spread with two fingers realizes the enlargement operation; the distance the fingers pinch or spread characterizes the reduction parameter or the enlargement parameter.
In another embodiment, sliding one finger upward realizes the reduction operation and sliding one finger downward realizes the enlargement operation; the distance the finger slides up or down characterizes the reduction parameter or the enlargement parameter.
In yet another embodiment, long-pressing the left direction button realizes the enlargement operation and long-pressing the right direction button realizes the reduction operation; the pressing duration of the left or right direction button characterizes the enlargement parameter or the reduction parameter.
In a specific application, assuming the zooming parameter is E, the area of the focusing area before the size adjustment is A, and the area of the focusing area after the size adjustment is a, the following relationship holds:
a = E × A
Here, when E is greater than 0 and less than 1, the focusing area is reduced from A to a; when E is greater than 1, the focusing area is enlarged from A to a.
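A small numeric check of the relation a = E × A, using the notation above (A is the area before the size adjustment, a the area after, E the zoom parameter; `adjusted_area` is an illustrative name):

```python
# Sketch: area of the focusing area after applying zoom parameter E.
# 0 < E < 1 reduces the area, E > 1 enlarges it.

def adjusted_area(A, E):
    """Return a = E * A for a positive zoom parameter E."""
    if E <= 0:
        raise ValueError("zoom parameter must be positive")
    return E * A
```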
Step S905: based on the zooming parameter, adjust the subregions containing touch points of the focusing track from the first area range to the second area range.
As shown in Fig. 10, in the embodiment of the present invention, suppose the range of the focusing area before the size adjustment is the first area range (corresponding to area A); then, based on the zooming parameter, the original first area range can be adjusted to the second area range. Here, the second area range (corresponding to area a) is the range of the focusing area after the size adjustment.
In the embodiment of the present invention, the area range of the focusing area (such as the first area range or the second area range) can be characterized by pixel coordinates of the display screen. Taking the first area range as an example, it can be characterized by (x1, y1) and (x2, y2), where (x1, y1) represents the coordinate of the upper-left corner of the first area range and (x2, y2) represents the coordinate of its lower-right corner. Similarly, the second area range can be characterized by (x3, y3) and (x4, y4), where (x3, y3) represents the coordinate of the upper-left corner of the second area range and (x4, y4) represents the coordinate of its lower-right corner.
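A sketch of adjusting a corner-defined area range, under the assumptions that the rectangle is scaled about its own center and that E scales the area, so each side scales by sqrt(E), keeping the result consistent with a = E × A (the helper name is illustrative):

```python
import math

# Sketch: compute the second area range (x3, y3, x4, y4) from the first
# area range (x1, y1)-(x2, y2) and zoom parameter E, scaling about the
# rectangle's center so the area becomes E times the original.

def scale_range(x1, y1, x2, y2, E):
    s = math.sqrt(E)                        # per-side scale factor
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # keep the center fixed
    hw = (x2 - x1) / 2 * s                  # scaled half-width
    hh = (y2 - y1) / 2 * s                  # scaled half-height
    return (cx - hw, cy - hh, cx + hw, cy + hh)
```

With E = 1 the range is unchanged; with E = 4 each side doubles, quadrupling the area.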
In the embodiment of the present invention, adjusting the area range of the focusing area based on the zooming parameter means that the area of the focusing area is adjusted from large to small or from small to large.
Step S906: determine the subregions containing touch points of the focusing track, now adjusted to the second area range, to be the focusing area in the target area.
Step S907: focus on the subject in the focusing area.
In this embodiment, the content of steps S901, S902 and S907 is identical to that of steps S401 to S403 in the first embodiment and is therefore not repeated here.
Further, sometimes when focusing, the subject in the focusing area includes two parts: one part is the content the user really wants to shoot, while the other part is content the user does not want to shoot. In order to avoid focusing on content the user does not want, step S907 further comprises the following steps, by which accurate focusing is realized, specifically as follows, as shown in Fig. 11.
Step S9071: perform feature extraction on the image corresponding to the viewfinder interface to obtain characteristic parameters.
In the embodiment of the present invention, feature extraction refers to: extracting the information of each pixel in the image, determining whether each pixel belongs to an image feature, and classifying contiguous pixels belonging to the same image feature into one class. The result of feature extraction is that all pixels of the image are divided into different subsets, which often correspond to isolated points, continuous curves or continuous regions. The characteristic parameters obtained by performing feature extraction on the image include but are not limited to: edges, corners, regions and ridges.
Step S9072: based on the extracted characteristic parameters, determine the target object in the focusing area whose characteristic parameters meet a preset condition.
As shown in Fig. 12, in the embodiment of the present invention, the subject usually stands out relative to the background, so the characteristic parameters of the subject mark out a more prominent contour. Based on this understanding, a contour can be determined from the extracted characteristic parameters; this contour represents the contour of the subject. Based on the coordinate position of each pixel of the contour, the position of the subject in the viewfinder interface can be determined; the determined position information in the viewfinder interface is then compared with the coordinate range of the focusing area, so as to judge whether the focusing area contains a target object that meets the preset condition. Optionally, the characteristic parameter is depth information: based on the depth information of each object in the viewfinder interface, the target object in the focusing area containing qualified depth information is determined; for example, a qualified target object can be a subject whose depth information is less than a specific value.
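The optional depth-information condition can be sketched as follows (assumptions: each candidate object is reduced to a representative position and a depth value, and an object qualifies as a target when it lies inside the focusing area and its depth is below a threshold, i.e. it is close to the camera; all names are illustrative):

```python
# Sketch: pick target objects in the focusing area by depth. An object is
# (name, depth, (x, y)); focusing_area is an (x1, y1, x2, y2) rectangle.

def select_targets(objects, focusing_area, max_depth):
    """Return the names of objects inside the focusing area whose depth
    is less than max_depth (the preset condition on depth information)."""
    x1, y1, x2, y2 = focusing_area
    return [name for name, depth, (x, y) in objects
            if depth < max_depth and x1 <= x <= x2 and y1 <= y <= y2]
```

A near subject inside the area qualifies; a distant background object inside the area, or any object outside the area, does not.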
Step S9073: focus on the target object in the focusing area.
As shown in Fig. 12, the target object that meets the condition is determined by step S9072, and the target object in the focusing area is focused on. Since the focusing area is composed of M subregions, auto-focusing is performed only on those of the M subregions that contain the target object.
In this way, the focusing area is further refined on the basis of the focusing area input by the user.
Fig. 13 is a schematic diagram of the structure of the terminal of the embodiment of the present invention. As shown in Fig. 13, the terminal includes:
a camera 1301, configured to perform image acquisition on the view area to obtain a viewfinder picture;
an input unit 1302, configured to obtain the focusing track input in the target area of the viewfinder interface;
a memory 1303, configured to store an adjustment program of the focusing area; and
a processor 1304, configured to execute the adjustment program of the focusing area in the memory to realize the following operations:
determining the M subregions in the target area based on the focusing track, and determining the M subregions to be the focusing area;
focusing on the subject in the focusing area.
The processor 1304 is further configured to execute the adjustment program of the focusing area in the memory to realize the following operations: obtaining the coordinate value of each touch point of the input trajectory in the viewfinder interface; obtaining the coordinate value range of the target area; comparing the coordinate value of each touch point with the coordinate value range of the target area, and determining the touch points located within the coordinate value range of the target area to be the focusing track.
It will be appreciated by those skilled in the art that the function of each component of the terminal in the embodiment of the present invention can be understood with reference to the foregoing description of the adjustment method of the focusing area.
If the above terminal of the embodiment of the present invention is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on such understanding, the technical scheme of the embodiment of the present invention, or in other words the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to perform all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes: a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, an optical disc and various other media that can store program code. Thus, the embodiment of the present invention is not restricted to any specific combination of hardware and software.
Correspondingly, the embodiment of the present invention also provides a computer storage medium in which a computer program is stored, the computer program being configured to perform the adjustment method of the focusing area of the embodiment of the present invention.
It should be noted that herein, the terms "comprising", "including" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly set out, or also includes elements inherent to such a process, method, article or device. In the absence of more restrictions, an element limited by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The serial numbers of the embodiments of the present invention are for description only and do not represent the merits of the embodiments.
The embodiments of the invention are described above in conjunction with the accompanying drawings, but the invention is not limited to the above specific embodiments. The above embodiments are only illustrative rather than restrictive; under the enlightenment of the present invention, those of ordinary skill in the art can also make many forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.

Claims (10)

CN201710909825.9A | priority 2017-09-29 | filed 2017-09-29 | Focusing area adjusting method, terminal and computer storage medium | Active | granted as CN107613208B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710909825.9A (CN107613208B) | 2017-09-29 | 2017-09-29 | Focusing area adjusting method, terminal and computer storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710909825.9A (CN107613208B) | 2017-09-29 | 2017-09-29 | Focusing area adjusting method, terminal and computer storage medium

Publications (2)

Publication Number | Publication Date
CN107613208A (en) | 2018-01-19
CN107613208B (en) | 2020-11-06

Family

ID=61067458

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201710909825.9A | Active | CN107613208B (en) | 2017-09-29 | 2017-09-29 | Focusing area adjusting method, terminal and computer storage medium

Country Status (1)

Country | Link
CN (1) | CN107613208B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20090015703A1 (en)* | 2007-07-11 | 2009-01-15 | Lg Electronics Inc. | Portable terminal having touch sensing based image capture function and image capture method therefor
CN103813098A (en)* | 2012-11-12 | 2014-05-21 | 三星电子株式会社 | Method and apparatus for shooting and storing multi-focused image in electronic device
US20140152883A1 (en)* | 2008-07-24 | 2014-06-05 | Apple Inc. | Image capturing device with touch screen for adjusting camera settings
CN104363377A (en)* | 2014-11-28 | 2015-02-18 | 广东欧珀移动通信有限公司 | Method and apparatus for displaying focus frame as well as terminal
CN105681657A (en)* | 2016-01-15 | 2016-06-15 | 广东欧珀移动通信有限公司 | Method and terminal device for photographing and focusing
CN105704375A (en)* | 2016-01-29 | 2016-06-22 | 广东欧珀移动通信有限公司 | Image processing method and terminal
CN106131403A (en)* | 2016-06-30 | 2016-11-16 | 北京小米移动软件有限公司 | Touch focusing method and device
CN106534619A (en)* | 2016-11-29 | 2017-03-22 | 努比亚技术有限公司 | Method and apparatus for adjusting focusing area, and terminal


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108055470A (en)* | 2018-01-22 | 2018-05-18 | 努比亚技术有限公司 | A method of focusing, terminal and storage medium
WO2020024196A1 (en)* | 2018-08-01 | 2020-02-06 | 深圳市大疆创新科技有限公司 | Method for adjusting parameters of photographing device, control device and photographing system
CN110771147A (en)* | 2018-08-01 | 2020-02-07 | 深圳市大疆创新科技有限公司 | Method for adjusting parameters of shooting device, control equipment and shooting system
CN109447927A (en)* | 2018-10-15 | 2019-03-08 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium
CN109447927B (en)* | 2018-10-15 | 2021-01-22 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, computer-readable storage medium
CN112235563A (en)* | 2019-07-15 | 2021-01-15 | 北京字节跳动网络技术有限公司 | Focusing test method and device, computer equipment and storage medium
CN112235563B (en)* | 2019-07-15 | 2023-06-30 | 北京字节跳动网络技术有限公司 | Focusing test method and device, computer equipment and storage medium
CN111131703A (en)* | 2019-12-25 | 2020-05-08 | 北京东宇宏达科技有限公司 | Whole-process automatic focusing method of continuous zooming infrared camera
CN113784045A (en)* | 2021-08-31 | 2021-12-10 | 北京安博盛赢教育科技有限责任公司 | Focusing interaction method, device, medium and electronic equipment
CN113784045B (en)* | 2021-08-31 | 2023-08-22 | 北京安博盛赢教育科技有限责任公司 | Focusing interaction method, device, medium and electronic equipment

Also Published As

Publication number | Publication date
CN107613208B (en) | 2020-11-06


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
