Detailed description of embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
With the continuous development of machine learning and deep learning, methods that use machine learning models to recognize and classify image scenes have been widely applied in many fields.
When taking photos with a mobile phone, people are no longer satisfied with mechanical picture records; they also want the photos they take to be aesthetically pleasing and worth keeping. Various camera applications for mobile terminals are now widely used. By stabilizing the image in shooting mode and rendering the captured image, they can significantly improve photo quality, and they are popular with photography enthusiasts.
However, after collecting people's photography needs and analyzing existing camera applications, the inventor found that existing mobile phone cameras are optimized in terms of stabilization or image enhancement. Although these optimizations can improve photo quality to a certain extent, it remains very difficult for the large group of amateur photographers to take professional photos with good composition or color aesthetics, because such shooting skills require long-term training in shooting angle and position.
In the course of research, the inventor studied how to guide a user to take photos and how to provide real-time feedback of guidance information according to the current shooting picture of the camera, and proposes the shooting method, apparatus, mobile terminal, and computer-readable storage medium of the embodiments of the present application.
The shooting method, apparatus, mobile terminal, and storage medium provided by the embodiments of the present application are described in detail below through specific embodiments.
First embodiment
Referring to Fig. 1, Fig. 1 shows a flow diagram of the shooting method provided by the first embodiment of the present application. In the shooting method, common shooting positions are first collected as a sample set to train a convolutional neural network, which is then deployed in a mobile terminal. When a user takes a photo with a hand-held mobile terminal, the trained convolutional neural network outputs the optimal shooting position corresponding to the current camera picture, and guidance information from the current position of the current picture to the optimal shooting position is output, guiding the user to move the mobile terminal toward the optimal shooting position. This can effectively improve the quality of photos and enhance the user's shooting experience. In a specific embodiment, the shooting method can be applied to the shooting apparatus 300 shown in Fig. 3 and to the mobile terminal 100 (Fig. 5) configured with the shooting apparatus 300; the shooting method is used to improve the user's experience when taking photos. The flow shown in Fig. 1 is explained in detail below, taking a mobile phone as an example. The shooting method may specifically include the following steps:
Step S101: obtain the optimal shooting position of the current picture based on a convolutional neural network.
In the embodiment of the present application, the current picture may be an image acquired by the lens or other photoelectric components of the mobile phone camera during shooting; it may be a two-dimensional image or a three-dimensional image.
The optimal shooting position may be a shooting position at which the current picture has aesthetic characteristics, according to the preferences of an individual user or a consensus in photographic art criticism or among the general public. For example, the optimal shooting position may be a shooting position with optimal composition features or with optimal color features, where composition can be understood as the distribution of features such as lines, target edges, and positions in the image, and color can be understood as the distribution of features such as brightness, chroma, and contrast of the image. It can be understood that when the mobile phone is located at the optimal shooting position, the photo taken has artistic quality, and this quality can be quantified by the preferences of an individual user, by photographic art, or by the common perception of the general public. It should be noted that the optimal shooting position may include the spatial coordinates and angle of the mobile terminal (camera) at which the optimal current picture can be captured.
In this embodiment, before step S101 is executed, multiple classes of images with different aesthetic characteristics (shooting positions) may be collected in advance as a sample set, for example, images of a variety of different composition types or images of a variety of different color distribution classes; a convolutional neural network, such as ResNet, is then trained on this sample set; finally, the trained convolutional neural network is deployed in the mobile terminal.
In this embodiment, the current picture captured by the camera is input into the trained convolutional neural network, which outputs the optimal shooting position adapted to the current camera picture. For example, when an image is shot with the mobile phone camera, the image obtained through the lens may contain two subjects, sky and grassland, but the horizon between them is tilted, while the training set of the convolutional neural network contains only compositions with horizontal lines and no compositions with tilted lines. By inputting the current picture with the tilted horizon into the convolutional neural network, the obtained optimal shooting position may correspond to the level-horizon composition closest to the current picture, that is, the position of the mobile terminal (including spatial coordinates and angle) at which a current picture with a level horizon can be captured.
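The inference step described above can be sketched as follows. The embodiment does not specify the network's output format, so this sketch assumes a pose regressor returning a 6-vector (spatial coordinates plus angles); the stand-in `tilt_correcting_model` is invented purely for illustration and is not part of the embodiment.

```python
import numpy as np

def predict_best_pose(frame, model):
    """Run the trained composition network on the current frame.

    `model` is assumed to be a callable returning, per frame, a 6-vector
    (x, y, z, pitch, yaw, roll): the camera pose from which the scene
    would match the closest learned composition.
    """
    x = frame.astype(np.float32) / 255.0        # normalize to [0, 1]
    return np.asarray(model(x[None, ...]))[0]   # add, then drop, the batch dim

# Hypothetical stand-in for a trained network: for a frame whose horizon
# is tilted, it would suggest rolling the camera back to level.
def tilt_correcting_model(batch):
    return np.zeros((batch.shape[0], 6))        # "already level" pose

frame = np.zeros((224, 224, 3), dtype=np.uint8)
pose = predict_best_pose(frame, tilt_correcting_model)
print(pose.shape)  # (6,)
```

In a deployment, `tilt_correcting_model` would be replaced by the trained convolutional neural network (for example, a ResNet backbone with a regression head) exported for on-device inference.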
Step S102: according to the optimal shooting position, obtain guidance information from the current position of the current picture to the optimal shooting position.
In this embodiment, after the optimal shooting position is obtained, guidance information from the current position of the current picture to the optimal shooting position can be generated.
When the mobile terminal is operated manually by the user to take photos, the guidance information from the current position of the current picture to the optimal shooting position may be prompt information transmitted by the mobile terminal to the user, which guides the user to move the mobile terminal (including its spatial coordinates and angle) through visual, auditory, or tactile prompts. For example, visually, an arrow or a curve from the current position to the optimal shooting position may be displayed on the display screen of the mobile terminal, or a thumbnail of a simulated image of the camera view at the optimal shooting position may be shown in a partial region of the display interface, guiding the user to move the mobile phone in the direction prompted on the display interface. Audibly, a prompt tone of a certain frequency may be played continuously through the loudspeaker of the mobile terminal, and the frequency of the prompt tone may change as the distance between the current position and the optimal shooting position changes, thereby indicating the direction of the optimal shooting position to the user. Tactilely, the user may be alerted to a wrong moving direction by means such as vibration of the mobile phone, guiding the user to move the mobile phone in the correct direction.
When the mobile terminal takes photos automatically under mechanical control (for example, a robot taking photos through a camera), the guidance information from the current position of the current picture to the optimal shooting position may be a control signal sent by the mobile terminal to a positioning device of the mobile terminal (for example, the robot camera obtains an image, a CPU calculates the guidance information, and the guidance information is then sent to an MCU that controls the movement of the robot camera in three-dimensional space). The control signal can control the mobile terminal to move to the spatial coordinates of the optimal shooting position and to rotate to the spatial angle required by the optimal shooting position.
In the shooting method provided by the first embodiment of the present application, shooting positions with artistic quality are collected as a sample set to train a convolutional neural network, which is deployed in the mobile terminal. When the user takes a photo, the trained convolutional neural network outputs the optimal shooting position corresponding to the current camera picture, and the user is guided to move the mobile terminal toward the optimal shooting position, which can effectively improve the quality of photos and enhance the user's shooting experience.
Second embodiment
Referring to Fig. 2, Fig. 2 shows a flow diagram of the shooting method provided by the second embodiment of the present application. The flow shown in Fig. 2 is explained in detail below, taking a mobile phone as an example. The shooting method may specifically include the following steps:
Step S201: obtain, based on the convolutional neural network, the likelihood probability of the composition of the current picture relative to a preset composition.
In this embodiment, the current picture may be an image obtained through photoelectric components such as the mobile phone camera in the image acquisition mode during shooting, and the composition of the current picture may be the line distribution of the subject in the image. There are various methods for obtaining the composition of the current picture. For example, the gray value of each pixel in the current picture image can be collected; if the gray value of a pixel is significantly lower than that of its neighboring pixels, the pixel may lie on a line in the current picture, so the composition of the current picture can be determined by obtaining the gray-level distribution of the entire current picture image. Under this gray-level composition analysis, the likelihood probability of the composition of the current picture relative to the preset composition is the similarity between the gray-level distribution of the current picture image and that of the preset composition image.
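The "pixel darker than its neighbours" test mentioned above might be sketched as follows. The embodiment does not fix a neighbourhood or a margin, so the horizontal comparison and the `margin` value here are illustrative assumptions only.

```python
import numpy as np

def line_mask(gray, margin=30):
    """Mark pixels noticeably darker than both horizontal neighbours,
    a crude proxy for the 'pixel on a line' test described above.
    `margin` is a hypothetical darkness threshold."""
    left = np.roll(gray, 1, axis=1)
    right = np.roll(gray, -1, axis=1)
    g = gray.astype(int)
    return (g + margin < left) & (g + margin < right)

# A dark vertical stripe on a bright background is picked out:
img = np.full((5, 9), 200, dtype=np.uint8)
img[:, 4] = 50
print(bool(line_mask(img)[:, 4].all()))  # True
```

A real implementation would more likely use an edge detector or learned features; this sketch only makes the gray-value intuition in the text concrete.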
In this embodiment, the preset composition may be one of the common compositions used in the training set of the convolutional neural network, for example, an upper rule-of-thirds composition, a lower rule-of-thirds composition, a left rule-of-thirds composition, a right rule-of-thirds composition, a rule-of-thirds composition, a diagonal composition, a leading-line composition, an S-curve composition, a triangle composition, or another type of composition. It can be understood that the more types of preset compositions the training set contains, the finer the convolutional neural network's judgment of the composition most suitable for the current picture.
In this embodiment, the composition of the current picture can be compared with each preset composition in turn to obtain the likelihood probability between the composition of the current picture and each preset composition.
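One plausible reading of the per-preset comparison, under the gray-level analysis described above, is histogram intersection between the two frames' intensity distributions. The function name and bin count below are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def composition_likelihood(gray, preset_gray, bins=16):
    """Similarity of two frames' gray-level distributions in [0, 1],
    computed as the intersection of their normalized histograms."""
    h1, _ = np.histogram(gray, bins=bins, range=(0, 256))
    h2, _ = np.histogram(preset_gray, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())

a = np.random.default_rng(0).integers(0, 256, (64, 64))
print(round(composition_likelihood(a, a), 6))  # 1.0 for identical frames
```

In the embodiment itself this comparison is learned by the convolutional neural network rather than computed by a fixed formula; the sketch only illustrates the notion of a likelihood probability per preset composition.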
Step S202: judge whether the likelihood probability is greater than a preset threshold.
If the likelihood probability is greater than the preset threshold, step S203 is executed; if the likelihood probability is not greater than the preset threshold, the flow returns to step S201.
In this embodiment, the preset threshold may be the criterion for judging the likelihood probability between the composition of the current picture and a preset composition, and it needs to be set according to the number of types of preset compositions. For example, when there are few types of preset compositions, the similarity between the preset composition types is inherently low, and the preset threshold can be set to 60%; if there are many types of preset compositions, the similarity between the preset composition types is itself higher, and finer discrimination is needed to find the most suitable match for the composition of the current picture, in which case the threshold can be set to 80%. In particular, when the likelihood probability between the composition of the current picture and a preset composition is 100%, the composition of the current picture can be considered to meet the standard of that preset composition; in this case, the user is directly instructed to take the photo, obtaining a photo with the preset composition.
Step S203: output the type of the preset composition as the composition type of the current picture.
In this embodiment, when the likelihood probability is greater than the preset threshold, the composition type of the current picture can be considered to match the type of the preset composition, and the type of the preset composition can then be output as the composition type of the current picture.
In this embodiment, if the likelihood probabilities of multiple preset compositions are greater than the preset threshold at the same time, the preset composition with the highest likelihood probability can be selected and output as the composition type of the current picture. If the likelihood probabilities of multiple preset compositions are all greater than the preset threshold and identical, one of the qualifying preset compositions (those whose likelihood probability is greater than the preset threshold) can be randomly selected as the composition type of the current picture, and the corresponding optimal shooting position generated; alternatively, all the preset compositions with identical likelihood probabilities can be output and screened through steps S204 to S205.
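The selection logic of steps S202 and S203, including the tie case that is passed on to steps S204 and S205, can be sketched as follows (the function name and the composition labels are illustrative):

```python
def select_composition(likelihoods, threshold):
    """Per steps S202-S203: keep presets above the threshold and return
    the single best one, or all tied leaders for the further screening
    of S204-S205. An empty list means 'return to S201'."""
    passing = {k: p for k, p in likelihoods.items() if p > threshold}
    if not passing:
        return []                        # wait for a new current picture
    best = max(passing.values())
    return [k for k, p in passing.items() if p == best]

probs = {"rule_of_thirds": 0.85, "diagonal": 0.85, "s_curve": 0.60}
print(select_composition(probs, 0.8))   # ['rule_of_thirds', 'diagonal']
print(select_composition(probs, 0.9))   # []
```

Returning every tied leader, rather than picking one at random immediately, matches the variant in which multiple identically scored presets are screened by the preference type.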
Step S204: judge whether a preference type exists among the composition types.
In this embodiment, the preference type may be a personally preferred composition type that the user selects and sets among the multiple types of preset compositions, or a composition type of user preference that the mobile terminal derives from the user's usage habits after analyzing the user's historical shooting data.
In this embodiment, if a preference type exists among the composition types output in step S203, step S205 is executed; if no preference type exists, the currently output composition types do not match the user's preference, and the flow can return to step S201 to wait for a new current picture to be collected, or step S206 can be executed. If no preference type exists, it may also be that the user or the mobile terminal has not set one, in which case step S206 can be executed regardless of whether step S203 outputs one composition type or multiple composition types.
Step S205: generate the optimal shooting position of the current picture according to the preference type.
In this embodiment, if step S203 outputs multiple preset compositions with identical likelihood probabilities as the composition of the current picture, and multiple preference types exist among these preset compositions, one of the preference-type compositions can be randomly selected to generate the corresponding optimal shooting position of the current picture; if only one preference type exists among these preset compositions, the optimal shooting position of the current picture is preferentially generated according to that preference type.
Step S206: generate the optimal shooting position of the current picture according to any one of the composition types.
If there is only one composition type, the optimal shooting position of the current picture is generated according to that unique composition type, and step S207 is executed; if there are multiple composition types, one of them is randomly selected to generate the optimal shooting position of the current picture, and step S207 is executed.
Step S207: obtain the position coordinates of the optimal shooting position and the current position.
In this embodiment, the position coordinates of the current position can be obtained from the position of each pixel of the current picture in a spatial coordinate system; the position coordinates of the optimal shooting position can be calculated from the displacement difference, in the spatial coordinate system, between a pixel of the preset composition corresponding to the optimal shooting position and the corresponding pixel of the current picture, yielding the spatial coordinates of the pixels at the optimal shooting position.
Step S208: obtain the vector from the current position to the optimal shooting position based on the position coordinates.
In this embodiment, after the coordinates of each pixel at the optimal shooting position and at the current position are obtained, the vector from the current position to the optimal shooting position is obtained based on those coordinates.
Step S209: generate the guidance information from the current position to the optimal shooting position based on the vector.
In this embodiment, the direction of the vector from the current position to the optimal shooting position is the direction from the current position to the optimal shooting position, and its length is the distance from the current position to the optimal shooting position.
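Steps S207 to S209 reduce to a simple vector difference; a minimal sketch, assuming 3-D spatial coordinates for both positions (the function name is illustrative):

```python
import numpy as np

def guidance_vector(current_pos, best_pos):
    """Vector from the current camera position to the optimal shooting
    position: its direction is the direction to move, its norm the
    distance to cover (steps S207-S209)."""
    v = np.asarray(best_pos, dtype=float) - np.asarray(current_pos, dtype=float)
    return v, float(np.linalg.norm(v))

v, dist = guidance_vector((0.0, 0.0, 1.5), (0.3, 0.4, 1.5))
print(round(dist, 9))  # 0.5
```

In the manual-operation case this vector would drive the on-screen arrow or the prompt-tone frequency; in the mechanical-control case it would be sent to the positioning device as a control signal.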
In this embodiment, the following steps can also be performed before step S204.
Step S210: obtain the composition type that appears most frequently in the historical shooting data.
In this embodiment, the historical shooting data may be the composition types of photos recorded each time the mobile terminal camera shoots, or the composition types of photos saved in the local album or in the cloud.
Step S211: judge whether the most frequent composition type in the historical shooting data belongs to the preset composition types.
If the most frequent composition type in the historical shooting data belongs to the preset composition types, steps S212 to S213 are executed; if the most frequent composition type in the historical shooting data does not belong to the preset composition types, step S214 is executed.
Step S212: generate user preference information based on the most frequent composition type in the historical shooting data.
The composition type that appears most frequently in the historical shooting data can be considered the composition type preferred by the user, and user preference information can be generated according to this most frequent composition type.
Step S213: set the preference type among the preset composition types based on the user preference information.
In this embodiment, after the preference type is set in step S213, step S204 can be performed.
Step S214: train the convolutional neural network using the most frequent composition type in the historical shooting data as a sample set.
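The preference-setting branch of steps S210 to S214 can be sketched as follows. The `Counter`-based tally and the function name are illustrative; the embodiment does not prescribe a particular counting method:

```python
from collections import Counter

def preference_from_history(history, preset_types):
    """Steps S210-S214: take the most frequent composition type in the
    shooting history; if it is one of the preset types it becomes the
    preference type (S212-S213), otherwise it is returned as material
    for retraining the network (S214)."""
    most_common, _ = Counter(history).most_common(1)[0]
    if most_common in preset_types:
        return most_common, None      # set as preference type
    return None, most_common          # extend training with this type

history = ["diagonal", "s_curve", "diagonal", "rule_of_thirds", "diagonal"]
print(preference_from_history(history, {"diagonal", "s_curve"}))
# ('diagonal', None)
```

The second return slot corresponds to the case where the user's habitual composition is not yet in the preset set, which triggers the extended training of step S214.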
Compared with the shooting method provided by the first embodiment of the present application, the second embodiment can set the preference type according to the user's preferences or usage habits and automatically select and output the composition type that matches them; it can also obtain newly added composition types from the user's usage habits and perform extended training on the convolutional neural network, making the solution more intelligent and user-friendly.
Third embodiment
Referring to Fig. 3, Fig. 3 shows a block diagram of the shooting apparatus 300 provided by the third embodiment of the present application. The block diagram shown in Fig. 3 is explained below. The shooting apparatus 300 includes an acquisition module 310 and a guidance module 320, wherein:
The acquisition module 310 is configured to obtain the optimal shooting position of the current picture based on the convolutional neural network.
The guidance module 320 is configured to obtain, according to the optimal shooting position, guidance information from the current position of the current picture to the optimal shooting position.
In the shooting apparatus provided by the third embodiment of the present application, when the user takes a photo, the trained convolutional neural network outputs the optimal shooting position corresponding to the current camera picture, and the user is guided to move the mobile terminal toward the optimal shooting position, which can effectively improve the quality of photos and enhance the user's shooting experience.
Fourth embodiment
Referring to Fig. 4, Fig. 4 shows a block diagram of the shooting apparatus 400 provided by the fourth embodiment of the present application. The block diagram shown in Fig. 4 is explained below. The shooting apparatus 400 includes an acquisition module 410, a guidance module 420, a preference module 430, a setting module 440, a judgment module 450, and a training module 460, wherein:
The acquisition module 410 is configured to obtain the optimal shooting position of the current picture based on the convolutional neural network. Further, the acquisition module 410 includes an acquiring unit 411 and a generation unit 412, wherein:
The acquiring unit 411 is configured to obtain the composition type of the current picture based on the convolutional neural network. Further, the acquiring unit 411 includes a probability subunit, a threshold subunit, and a composition subunit, wherein:
The probability subunit is configured to obtain, based on the convolutional neural network, the likelihood probability of the composition of the current picture relative to a preset composition.
The threshold subunit is configured to judge whether the likelihood probability is greater than a preset threshold.
The composition subunit is configured to output the type of the preset composition as the composition type of the current picture when the likelihood probability is greater than the preset threshold.
The generation unit 412 is configured to generate the optimal shooting position of the current picture according to the composition type. Further, the generation unit 412 includes a preference subunit and a position subunit, wherein:
The preference subunit is configured to judge whether a preference type exists among the composition types;
The position subunit is configured to generate the optimal shooting position of the current picture according to the preference type when a preference type exists among the composition types.
The guidance module 420 is configured to obtain, according to the optimal shooting position, guidance information from the current position of the current picture to the optimal shooting position. Further, the guidance module 420 includes a coordinate unit 421, a vector unit 422, and a guidance unit 423, wherein:
The coordinate unit 421 is configured to obtain the position coordinates of the optimal shooting position and the current position.
The vector unit 422 is configured to obtain the vector from the current position to the optimal shooting position based on the position coordinates.
The guidance unit 423 is configured to generate the guidance information from the current position to the optimal shooting position based on the vector.
The preference module 430 is configured to obtain user preference information. Further, the preference module 430 includes a history unit 431 and a counting unit 432, wherein:
The history unit 431 is configured to obtain the most frequent composition type in the historical shooting data.
The counting unit 432 is configured to generate user preference information based on the most frequent composition type in the historical shooting data.
The setting module 440 is configured to set the preference type among the preset composition types based on the user preference information.
The judgment module 450 is configured to judge whether the most frequent composition type in the historical shooting data belongs to the preset composition types.
The training module 460 is configured to, when the most frequent composition type in the historical shooting data does not belong to the preset composition types, train the convolutional neural network using the most frequent composition type in the historical shooting data as a sample set.
Compared with the shooting apparatus provided by the third embodiment of the present application, the fourth embodiment can set the preference type according to the user's preferences or usage habits and automatically select and output the composition type that matches them; it can also obtain newly added composition types from the user's usage habits and perform extended training on the convolutional neural network, making the solution more intelligent and user-friendly.
Fifth embodiment
The fifth embodiment of the present application provides a mobile terminal, which includes a display, a memory, and a processor, the display and the memory being coupled to the processor. The memory stores instructions that, when executed by the processor, cause the processor to:
obtain the optimal shooting position of the current picture based on a convolutional neural network;
according to the optimal shooting position, obtain guidance information from the current position of the current picture to the optimal shooting position.
Sixth embodiment
The sixth embodiment of the present application provides a computer-readable storage medium storing program code executable by a processor, the program code causing the processor to:
obtain the optimal shooting position of the current picture based on a convolutional neural network;
according to the optimal shooting position, obtain guidance information from the current position of the current picture to the optimal shooting position.
In conclusion, in the shooting method, apparatus, mobile terminal, and computer-readable storage medium provided by the present application, the optimal shooting position of the current picture is first obtained based on a convolutional neural network, and guidance information from the current position of the current picture to the optimal shooting position is then obtained according to the optimal shooting position. Compared with the prior art, the embodiments of the present application collect common shooting positions as a sample set to train a convolutional neural network and deploy it in a mobile terminal; when the user takes a photo, the trained convolutional neural network outputs the optimal shooting position corresponding to the current camera picture while the user is guided to move the mobile terminal toward the optimal shooting position, which can effectively improve the quality of photos and enhance the user's shooting experience.
It should be noted that the embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for the same or similar parts, the embodiments can refer to one another. Since the apparatus embodiments are basically similar to the method embodiments, their description is relatively brief, and relevant details can be found in the description of the method embodiments. Any processing described in a method embodiment can be implemented by the corresponding processing module in an apparatus embodiment and need not be repeated there.
Referring to Fig. 5, based on the above shooting method and apparatus, the embodiment of the present application also provides a mobile terminal 100, which includes an electronic body portion 10. The electronic body portion 10 includes a housing 12 and a main display 120 arranged on the housing 12. The housing 12 can be made of metal, such as steel or an aluminum alloy. In this embodiment, the main display 120 generally includes a display panel 111 and may also include a circuit for responding to touch operations on the display panel 111. The display panel 111 can be a liquid crystal display (LCD) panel; in some embodiments, the display panel 111 is also a touch screen 109.
Referring to Fig. 6, in an actual application scenario, the mobile terminal 100 can be used as a smart mobile phone terminal, in which case the electronic body portion 10 also typically includes one or more (only one is shown in the figure) processors 102, a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. Those skilled in the art will understand that the structure shown in Fig. 5 is only illustrative and does not limit the structure of the electronic body portion 10. For example, the electronic body portion 10 may include more or fewer components than shown in Fig. 5, or have a configuration different from that shown in Fig. 5.
Those skilled in the art will understand that, with respect to the processor 102, all the other components are peripherals, and the processor 102 is coupled to these peripherals through a plurality of peripheral interfaces 124. The peripheral interface 124 can be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but is not limited to these standards. In some examples, the peripheral interface 124 may include only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example a display controller for connecting the display panel 111 or a storage controller for connecting a memory. These controllers may also be detached from the peripheral interface 124 and integrated in the processor 102 or in the corresponding peripheral.
The memory 104 can be used to store software programs and modules, and the processor 102 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 104. The memory 104 may include a high-speed random access memory and may also include a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include a memory remotely located relative to the processor 102; these remote memories can be connected to the electronic body portion 10 or the main display 120 through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The RF module 106 is used to receive and transmit electromagnetic waves, converting between electromagnetic waves and electrical signals so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, and memory. The RF module 106 can communicate with various networks, such as the Internet, an intranet, or a wireless network, or communicate with other devices through a wireless network. The wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (such as the IEEE standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for mail, instant messaging, and short messages, and any other suitable communication protocol, and may even include protocols that have not yet been developed.
The audio circuit 110, the earpiece 101, the audio jack 103, and the microphone 105 together provide an audio interface between the user and the electronic body portion 10 or the main display 120. Specifically, the audio circuit 110 receives audio data from the processor 102, converts the audio data into an electrical signal, and transmits the electrical signal to the earpiece 101. The earpiece 101 converts the electrical signal into sound waves audible to the human ear. The audio circuit 110 also receives electrical signals from the microphone 105, converts the electrical signals into audio data, and transmits the audio data to the processor 102 for further processing. The audio data may be obtained from the memory 104 or through the RF module 106. In addition, the audio data may also be stored in the memory 104 or transmitted through the RF module 106.
The sensor 114 is disposed in the electronic body portion 10 or in the main display 120. Examples of the sensor 114 include, but are not limited to: an optical sensor, a motion sensor, a pressure sensor, a gravity acceleration sensor, and other sensors.
Specifically, the sensor 114 may include a light sensor 114F and a pressure sensor 114G. The pressure sensor 114G can detect pressure generated by pressing on the mobile terminal 100. That is, the pressure sensor 114G detects pressure generated by contact or pressing between the user and the mobile terminal, for example contact or pressing between the user's ear and the mobile terminal. Thus, the pressure sensor 114G may be used to determine whether contact or pressing has occurred between the user and the mobile terminal 100, as well as the magnitude of the pressure.
Referring to Fig. 5, in the embodiment shown in Fig. 5, the light sensor 114F and the pressure sensor 114G are disposed adjacent to the display panel 111. When an object approaches the main display 120, for example when the electronic body portion 10 is moved close to the user's ear, the light sensor 114F can cause the processor 102 to turn off the display output.
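The display-off behavior described above can be illustrated with a minimal sketch. The class, its method names, and the near/far threshold are all hypothetical and chosen for illustration only; they do not correspond to any API named in this application.

```python
class DisplayPowerManager:
    """Illustrative sketch: turn the display output off when the light/
    proximity sensor reports an object near the screen (e.g. the terminal
    raised to the ear), and restore it when the object moves away."""

    NEAR_THRESHOLD_CM = 5.0  # assumed near/far boundary, hypothetical value

    def __init__(self):
        self.display_on = True

    def on_proximity_reading(self, distance_cm):
        # An object closer than the threshold counts as "near the screen".
        near = distance_cm < self.NEAR_THRESHOLD_CM
        self.display_on = not near
        return self.display_on

mgr = DisplayPowerManager()
print(mgr.on_proximity_reading(1.0))   # object at the ear -> display off (False)
print(mgr.on_proximity_reading(20.0))  # object removed -> display on (True)
```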
As a motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when the device is static. It can therefore be used in applications that identify the posture of the mobile terminal 100 (such as portrait/landscape switching, related games, and magnetometer pose calibration) and in vibration-recognition functions (such as a pedometer or tap detection). In addition, the electronic body portion 10 may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, and a thermometer, which are not described in detail here.
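The posture identification mentioned above can be sketched as follows: when the device is static, the three-axis reading approximates the gravity vector, so the angle between gravity and the screen's vertical axis indicates how the device is held. This is a minimal illustration under assumed axis conventions (y up the screen, z out of the screen) and a hypothetical tilt threshold; it is not the implementation of this application.

```python
import math

def detect_orientation(ax, ay, az, threshold_deg=30.0):
    """Classify device posture from a static 3-axis accelerometer
    reading (m/s^2), which approximates the gravity vector."""
    g = math.sqrt(ax**2 + ay**2 + az**2)
    if g < 1e-6:
        return "unknown"
    # Angle between gravity and the screen's vertical (y) axis.
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, ay / g))))
    if tilt < threshold_deg:
        return "portrait"
    if abs(90.0 - tilt) < threshold_deg and abs(ax) > abs(az):
        return "landscape"
    return "flat"

print(detect_orientation(0.0, 9.81, 0.0))  # upright: gravity along +y -> portrait
print(detect_orientation(9.81, 0.0, 0.0))  # on its side: gravity along +x -> landscape
print(detect_orientation(0.0, 0.0, 9.81))  # face-up on a table: gravity along +z -> flat
```

A real portrait/landscape switch would additionally debounce the reading over time, but the gravity-angle test above is the core of the technique.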
In the present embodiment, the input module 118 may include the touch screen 109 disposed on the main display 120. The touch screen 109 collects touch operations by the user on or near it (for example, operations performed on the touch screen 109 or near the touch screen 109 by the user using a finger, a stylus, or any other suitable object or accessory), and drives the corresponding connected device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 102, and can also receive and execute commands sent by the processor 102. Furthermore, the touch detection function of the touch screen 109 may be implemented using resistive, capacitive, infrared, surface-acoustic-wave, and other types of technology. In addition to the touch screen 109, in other variant embodiments, the input module 118 may also include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for triggering control functions. Examples of the control keys include a "return to home screen" key, a power on/off key, and the like.
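The detection-device → touch-controller → processor flow described above can be sketched as follows. All class names, the raw ADC range, and the dispatch stub are assumptions made for illustration; a real controller would be implemented in firmware, not Python.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """Contact coordinates as produced by the touch controller."""
    x: int
    y: int

class TouchController:
    """Converts raw touch-detection signals into contact coordinates,
    mirroring the detector -> controller -> processor flow in the text."""

    def __init__(self, width_px, height_px):
        self.width_px = width_px
        self.height_px = height_px

    def to_coordinates(self, raw_x, raw_y, raw_max=4095):
        # Raw readings (e.g. from a resistive/capacitive ADC, assumed
        # 12-bit here) are scaled into the panel's pixel space before
        # being handed to the processor.
        x = round(raw_x / raw_max * (self.width_px - 1))
        y = round(raw_y / raw_max * (self.height_px - 1))
        return TouchEvent(x, y)

class Processor:
    """Receives contact coordinates and dispatches a (stub) command."""
    def handle(self, event):
        return f"tap at ({event.x}, {event.y})"

controller = TouchController(width_px=1080, height_px=1920)
event = controller.to_coordinates(raw_x=2048, raw_y=1024)
print(Processor().handle(event))
```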
The main display 120 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic body portion 10. These graphical user interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to form an integral whole with the display panel 111.
The power module 122 is used to supply power to the processor 102 and the other components. Specifically, the power module 122 may include a power management system, one or more power sources (such as a battery or mains power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator, and any other components related to the generation, management, and distribution of electric power within the electronic body portion 10 or the main display 120.
The mobile terminal 100 further includes a locator 119, and the locator 119 is used to determine the physical location of the mobile terminal 100. In this embodiment, the locator 119 uses a positioning service to locate the mobile terminal 100. The positioning service should be understood as a technology or service that obtains the location information of the mobile terminal 100 (for example, latitude and longitude coordinates) by means of a specific positioning technique and marks the position of the located object on an electronic map.
It should be understood that the mobile terminal 100 described above is not limited to a smart phone; the term refers to any computer device that can be used while mobile. Specifically, the mobile terminal 100 refers to a mobile computer device equipped with an intelligent operating system, and includes, but is not limited to, a smart phone, a smart watch, a tablet computer, and the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine and unite the different embodiments or examples described in this specification, as well as the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance, or as implicitly indicating the number of the technical features indicated. Thus, a feature qualified as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes other implementations, in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flow chart, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection portion having one or more wirings (a mobile terminal), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be appreciated that each part of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented using software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies well known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and that the program can be stored in a computer-readable storage medium. When executed, the program performs one of the steps of the method embodiments or a combination thereof. In addition, each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application, and those skilled in the art may make changes, modifications, replacements, and variations to the above embodiments within the scope of the present application.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.