Embodiment
It should be appreciated that the specific embodiments described herein are only intended to explain the present invention, and are not intended to limit the present invention.
The mobile terminal implementing each embodiment of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are employed only to facilitate the explanation of the present invention and have no specific meaning in themselves. Therefore, "module" and "part" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal. However, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows the mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 generally includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals, and the like, and may further include broadcast signals combined with TV or radio broadcast signals. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signals may exist in various forms; for example, they may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems. In particular, the broadcast receiving module 111 may receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the media forward link only (MediaFLO) data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be configured to be suitable for various broadcast systems providing broadcast signals, as well as for the above-mentioned digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (wireless local area network, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), the Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information in terms of longitude, latitude and altitude. Currently, the method used for calculating location and time information uses three satellites and corrects the errors of the calculated location and time information by using one additional satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location information in real time.
The A/V input unit 120 is used for receiving audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display module 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the configuration of the mobile terminal. The microphone 122 may receive sound (audio data) in an operation mode such as a phone call mode, a recording mode or a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog switch, and the like. In particular, when the touch pad is superimposed on the display module 151 as a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the opened or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in conjunction with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device with the identification module (hereinafter referred to as the "identification device") may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connecting means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or the power input from the cradle can serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner. The output unit 150 may include a display module 151, an audio output module 152, an alarm module 153, and the like.
The display module 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display module 151 may display a user interface (UI) or a graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in the video call mode or the image capture mode, the display module 151 may display captured images and/or received images, a UI or GUI showing the video or images and related functions, and so on.
Meanwhile, when the display module 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display module 151 can serve as both an input device and an output device. The display module 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow the user to view through them from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display modules (or other display units); for example, the mobile terminal may include an external display module (not shown) and an internal display module (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
When the mobile terminal is in a mode such as a call signal receiving mode, a call mode, a recording mode, a voice recognition mode or a broadcast receiving mode, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a loudspeaker, a buzzer, and the like.
The alarm module 153 may provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm module 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm module 153 may provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm module 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm module 153 may also provide output notifying the occurrence of an event via the display module 151 or the audio output module 152.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various modes that are output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Moreover, the mobile terminal 100 may cooperate, through a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be configured within the controller 180 or may be configured separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented using, for example, computer software, hardware, or a computer-readable medium combining them. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code can be implemented as a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals. However, the present invention can be applied to any type of mobile terminal, and is not limited to the slide-type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 may be configured to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to systems of other types.
Referring to Fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be configured according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be appreciated that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or by an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such cases, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. The base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 assist in locating at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking technology, other technologies capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 are typically engaged in calls, messaging and other types of communication. Each reverse link signal received by a given base station 270 is processed within that particular BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above-described mobile terminal hardware configuration and communication system, the respective embodiments of the method of the present invention are proposed.
As shown in Fig. 3, the split-screen method for a mobile terminal proposed by the first embodiment of the present invention includes the following steps:
S10, receiving a split-screen instruction;
Referring to Fig. 4, the mobile terminal split-screen method provided by the present invention is particularly applicable to a mobile terminal with a narrow bezel or no bezel. The mobile terminal includes a touch area, and the touch area includes a display area 10 and frame areas 20 located on both sides of the display area 10.
Further, during driver initialization the touch screen of the mobile terminal registers two input devices (input) through input_register_device() calls, for example input device 0 (input0) and input device 1 (input1), and allocates one input device for each sub-area through input_allocate_device() calls; for example, the display area corresponds to input device 0 and the frame area corresponds to input device 1.
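By way of illustration only, a minimal kernel-driver sketch of this two-device registration, using the standard Linux input API named above, could look as follows; the device names and axis ranges are assumptions rather than values taken from the present disclosure:

```c
/* Sketch: register one logical input device per touch sub-area. */
#include <linux/init.h>
#include <linux/input.h>
#include <linux/module.h>

static struct input_dev *area_dev[2];   /* [0] display area, [1] frame area */

static int __init split_touch_init(void)
{
	static const char * const names[2] = { "touch-display-area", "touch-frame-area" };
	int i, err;

	for (i = 0; i < 2; i++) {
		area_dev[i] = input_allocate_device();        /* one device per sub-area */
		if (!area_dev[i])
			return -ENOMEM;

		area_dev[i]->name = names[i];
		set_bit(EV_ABS, area_dev[i]->evbit);
		set_bit(EV_KEY, area_dev[i]->evbit);
		set_bit(BTN_TOUCH, area_dev[i]->keybit);
		input_set_abs_params(area_dev[i], ABS_X, 0, 1079, 0, 0);
		input_set_abs_params(area_dev[i], ABS_Y, 0, 1919, 0, 0);

		err = input_register_device(area_dev[i]);     /* exposes a separate device node */
		if (err) {
			input_free_device(area_dev[i]);
			return err;
		}
	}
	return 0;
}
module_init(split_touch_init);
MODULE_LICENSE("GPL");
```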
After these two input devices are registered, the upper layer identifies, according to the name of the input device reported by the driver layer, whether the area the user is currently touching is the display area or the frame area; different sub-areas are processed differently by the upper layer.
The upper layer in the present invention generally refers to the framework (Framework) layer, the application layer, and so on. The system of a mobile terminal, such as Android, iOS or a customized system, generally includes a lower layer (a physical layer and a driver layer) and an upper layer (a framework layer and an application layer). The signal flow is as follows: the physical layer (the touch panel) receives the user's touch operation, and the physical press is converted by the touch panel (TP) into an electrical signal; the TP signal is passed to the driver layer, which parses the pressed position and obtains parameters such as the specific coordinates of the touch point, the duration and the pressure; these parameters are uploaded to the framework layer, which communicates with the driver layer through corresponding interfaces. The framework layer receives the input device (input) reported by the driver layer and parses it, so as to select whether or not to respond to this input device, and passes the valid input upward to the specific application concerned, so that the application layer performs different application operations according to different events.
Specifically, the split-screen instruction in this embodiment is a touch operation, which includes a pressing operation and a sliding operation. Referring to Fig. 5 and Fig. 6, step S10 specifically includes the following steps:
S11, acquiring a pressing parameter of the pressing operation;
The mobile terminal may receive the pressing operation through the driver layer. Specifically, the duration parameter of the user's pressing operation in the frame area 20 is acquired. When the touch point of the split-screen instruction falls within the frame area 20, the driver layer of the mobile terminal reports the touch point via the input device corresponding to the frame area 20.
After the framework (Framework) layer receives a reported event (the reported event includes the input device, the touch point parameters, etc.), it first identifies which area is concerned according to the name of the input device. If the driver layer (kernel) identifies that the touch occurred in the frame area 20, the input device reported by the driver layer to the framework layer is input1 rather than input0; that is, the framework layer does not need to determine in which sub-area the current touch point lies, nor does it need to determine the size and position of the sub-area, because these determinations are completed at the driver layer. Moreover, besides reporting which specific input device is concerned, the driver layer also reports the parameters of the touch point, such as the pressing time and the position coordinates, to the framework layer. Further, if the driver layer identifies that the touch occurred in the display area 10, the input device reported to the framework layer is input0 rather than input1, and the split-screen instruction is ignored.
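Continuing the sketch above, the per-touch routing decision made at the driver layer could look roughly like the following; the frame-area width used here is an assumed illustrative value:

```c
/* Sketch: route each touch point to input0 (display area) or input1 (frame area). */
#define LCD_WIDTH  1080
#define FRAME_W    40    /* assumed width in pixels of each side frame area */

static void report_touch(int x, int y, bool down)
{
	/* index 1 (frame area) if the point lies in either side strip, else index 0 */
	int idx = (x < FRAME_W || x >= LCD_WIDTH - FRAME_W) ? 1 : 0;
	struct input_dev *dev = area_dev[idx];

	input_report_key(dev, BTN_TOUCH, down);
	input_report_abs(dev, ABS_X, x);
	input_report_abs(dev, ABS_Y, y);
	input_sync(dev);   /* the framework layer only needs to see which device reported */
}
```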
It should be noted that, after receiving the reported event, the framework layer reports it to the application layer through a single-channel to multi-channel mechanism. Specifically, a channel is first registered and the reported event is transmitted through this channel; the event is monitored by a listener and is passed through different channels to the corresponding application modules, producing different application operations. The application modules include common applications such as the camera and the contacts. Different application operations are produced; for example, under the camera application, when the user clicks in the special sub-area, different operations such as focusing and adjusting camera parameters for shooting can be produced. It should be noted that, before the reported event reaches the listener, it travels on a single channel; after the listener monitors it, the reported event travels on multiple channels that exist simultaneously. The benefit is that the event can be passed to different application modules at the same time, and different application modules produce different operation responses.
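The single-channel to multi-channel mechanism can be illustrated with the following hedged sketch; the structure and function names are hypothetical and are not actual framework APIs:

```c
/* Sketch: one inbound channel fanned out to several application-module listeners. */
#include <stddef.h>

struct touch_event { int x, y; long press_ms; int device_id; };

typedef void (*listener_fn)(const struct touch_event *ev);

#define MAX_LISTENERS 8
static listener_fn listeners[MAX_LISTENERS];   /* e.g. camera, contacts, ... */
static size_t listener_count;

/* each interested application module registers one outgoing channel */
static void register_listener(listener_fn fn)
{
	if (listener_count < MAX_LISTENERS)
		listeners[listener_count++] = fn;
}

/* the single inbound channel: every reported event enters here ... */
static void on_reported_event(const struct touch_event *ev)
{
	/* ... and is fanned out to all registered modules (multi-channel) */
	for (size_t i = 0; i < listener_count; i++)
		listeners[i](ev);
}
```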
Alternatively, the above steps may be implemented as follows: using an object-oriented approach, classes and implementations for the display area and the frame area are defined; after it is determined that the touch falls in the special sub-area, the touch point coordinates of different resolutions are converted into LCD coordinates by an EventHub function; single-channel functions (e.g., a server channel and a client channel) are defined whose purpose is, after a reported event is received, to pass the event through this channel to an event manager (TouchEventManager); through the monitoring of the listener, the event is then passed via multiple channels, simultaneously or one by one, to the application modules of multiple responders, or only to one of the application modules, such as the camera or the gallery, and different application modules produce corresponding operations. Of course, the above steps may also be implemented in other ways, and the embodiments of the present invention do not limit this.
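For example, the coordinate conversion mentioned above, from the touch panel's own resolution to LCD coordinates, might reduce to a simple scaling of this kind; the resolutions are assumed values used only for illustration:

```c
/* Sketch: scale touch-panel coordinates to LCD coordinates. */
#define TP_MAX_X   4095
#define TP_MAX_Y   4095
#define LCD_MAX_X  1079
#define LCD_MAX_Y  1919

struct point { int x, y; };

static struct point tp_to_lcd(struct point tp)
{
	struct point lcd = {
		.x = tp.x * LCD_MAX_X / TP_MAX_X,
		.y = tp.y * LCD_MAX_Y / TP_MAX_Y,
	};
	return lcd;
}
```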
S12, determining whether the pressing parameter falls within a trigger range; if yes, proceeding to step S13, otherwise returning to step S11;
In this embodiment, the pressing parameter is a pressing time value, and the detection of the pressing time value is realized by setting a fixed detection frequency; for example, the detection frequency is set such that the current time parameter is detected once every 1/85 second, until the duration parameter of the pressing operation falls within the time trigger range. In other embodiments, the pressing parameter is a pressing pressure value; the detection of the pressing pressure value can be realized by providing a pressure sensor, and the subsequent steps are performed after it is detected that the pressing pressure value falls within the trigger range. In still other embodiments, the pressing parameter may be both the pressing time value and the pressing pressure value, and the subsequent steps are performed only after it is detected that both values fall within their trigger ranges, so as to prevent accidental touches. The trigger range is preset in a configuration table.
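As an illustration, the trigger-range check could be sketched as follows; the numeric thresholds are assumptions and would in practice be read from the configuration table:

```c
/* Sketch: decide whether a press qualifies as a trigger. */
#include <stdbool.h>

#define POLL_PERIOD_MS   (1000 / 85)   /* sample roughly every 1/85 s */
#define MIN_PRESS_MS     800           /* assumed lower bound of the time trigger range */
#define MIN_PRESS_FORCE  50            /* assumed lower bound of the pressure trigger range */

static bool press_in_trigger_range(long press_duration_ms, int press_force)
{
	/* requiring both time and pressure helps reject accidental touches */
	return press_duration_ms >= MIN_PRESS_MS && press_force >= MIN_PRESS_FORCE;
}
```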
S13, receiving a sliding operation while triggering the pressing operation;
When the duration of the pressing operation falls within the trigger range, the pressing operation is triggered and, at the same time, the mobile terminal receives the sliding operation through the driver layer; each sliding operation is made up of a plurality of touch points.
S14, acquiring an initial position parameter of the sliding operation;
The mobile terminal detects the area in which the touch point of the sliding operation falls and obtains the coordinates (X0, Y0) of the touch point, so as to obtain the initial position parameter of the sliding operation.
Specifically, if the driver layer (kernel) identifies that the touch occurred in the frame area 20, the input device reported by the driver layer to the framework layer is input1 rather than input0; that is, the framework layer does not need to determine in which sub-area the current touch point lies, nor does it need to determine the size and position of the sub-area, because these determinations are completed at the driver layer. Moreover, besides reporting which specific input device is concerned, the driver layer also reports the parameters of the touch point, such as the pressing time and the position coordinates, to the framework layer. Further, if the driver layer identifies that the touch occurred in the display area 10, the input device reported to the framework layer is input0 rather than input1, and the split-screen instruction is ignored.
S15, acquiring a current position parameter of the sliding operation;
At each preset time, the mobile terminal detects the area in which the touch point of the sliding operation falls and obtains the current coordinates (X1, Y1) or (X1', Y1') of the touch point, so as to obtain the current position parameter of the sliding operation. In this embodiment, the preset detection time is such that the current position parameter is detected once every 1/85 second.
S16, comparing the magnitudes of the initial position parameter and the current position parameter;
Specifically, comparing the magnitudes of the initial position parameter and the current position parameter is done by calculating the distance between the coordinates (X0, Y0) of the touch point and the current coordinates (X1, Y1) or (X1', Y1'), for example |Y0 - Y1|, or by the formula √((X0 - X1)² + (Y0 - Y1)²).
S17, determining the direction of the sliding operation according to the compared magnitudes.
Specifically, if Y0 is greater than Y1, the direction of the sliding operation is upward;
if Y0 is less than Y1', the direction of the sliding operation is downward.
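The comparison and the direction decision of steps S16 and S17 can be summarised in a short sketch such as the following; the minimum-distance threshold is an assumption:

```c
/* Sketch: compute the slide distance and decide the slide direction. */
#include <math.h>

enum slide_dir { SLIDE_NONE, SLIDE_UP, SLIDE_DOWN };

static enum slide_dir slide_direction(int x0, int y0, int x1, int y1,
                                      double min_distance)
{
	/* Euclidean distance between the initial and the current touch point */
	double d = sqrt((double)(x0 - x1) * (x0 - x1) +
	                (double)(y0 - y1) * (y0 - y1));

	if (d < min_distance)          /* too small a movement: no direction yet */
		return SLIDE_NONE;
	return (y0 > y1) ? SLIDE_UP : SLIDE_DOWN;
}
```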
S20, determining whether the split-screen instruction meets a preset split-screen trigger threshold;
Specifically, after the direction of the sliding operation has been determined, that is, after it has been determined that the sliding operation slid from (X0, Y0) to (X1, Y1) or (X1', Y1'), it is determined that the split-screen instruction meets the preset split-screen trigger threshold.
S30, dividing the display area into M split-screen regions.
Referring to Fig. 7, in this embodiment M is not less than 2. When M equals 2, the split-screen instruction is triggered and one dividing line appears in the display area; the sliding operation is received at the same time, and the user can pull or drag this dividing line. When it is identified that the sliding operation has slid from A to B, the display area is divided into split-screen regions 1 and 2. These two split-screen regions can directly display the interfaces of applications already running on the mobile terminal, or can display the main menu interface so that the user can start a new operation by reopening an application.
Specifically, the user produces the split-screen instruction by a gesture; for example, one finger presses and holds the frame area to trigger the pressing operation, while another finger slides in the frame area from A to B to perform the sliding operation, thereby producing the split-screen instruction. The position of the dividing line is controlled by sliding, which changes the size of the split-screen regions, making the operation flexible and convenient for the user and improving the operating experience.
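By way of example, the division of the display area for M equal to 2 could be sketched as follows; the rectangle type, the field names and the horizontal orientation of the dividing line are illustrative assumptions:

```c
/* Sketch: split the display rectangle into two regions at the dividing line. */
struct rect { int left, top, right, bottom; };

static void split_into_two(struct rect display, int split_y,
                           struct rect *region1, struct rect *region2)
{
	*region1 = display;
	region1->bottom = split_y;      /* split-screen region 1 above the line */

	*region2 = display;
	region2->top = split_y;         /* split-screen region 2 below the line */
}
```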
Further, referring to Fig. 8, when M equals 3, the pressing operation is received, the split-screen instruction is triggered and a dividing line appears in the display area; the sliding operation is received at the same time, and the user can pull or drag this dividing line. When it is identified that the sliding operation passes from A through B to C, the display area is divided into split-screen regions 1, 2 and 3. These three split-screen regions can directly display the interfaces of applications already running on the mobile terminal, or can display the main menu interface so that the user can start a new operation by reopening an application.
Specifically, one finger of the user presses and holds the frame area to trigger the pressing operation, while another finger slides in the frame area from A to B to perform the sliding operation, and the first dividing line appears in the display area; the user then continues to slide from B to C, and the second dividing line appears in the display area, thereby producing the split-screen instruction. The position of the dividing lines is controlled by sliding, which changes the size of the split-screen regions, making the operation flexible and convenient for the user and improving the operating experience.
Further, referring to Fig. 9, the touch operation flow of the present invention is described further in another way. For simplicity, in Fig. 9 the display area is abbreviated as area A and the frame area as area C. The reporting flow of a touch event is as follows:
The driver layer receives the touch operation through the physical hardware, such as the touch screen, determines whether the touch operation occurs in area A or in area C, and reports the event through the device file node of area A or area C. The Native layer reads the events from the device files of area A and area C and processes them, for example by calculating coordinates; it distinguishes the events of areas A and C by device ID, and finally dispatches the area A events and the area C events separately. The area A events follow the original flow, that is, they are processed in the ordinary way through the multi-channel mechanism. The area C events are dispatched from the dedicated area C channel registered with the Native layer in advance: they are input through the Native port, output through the system port to the area C event receiving end of the system service, monitored by a listener, and then reported to each application through the external area C event receiving interface.
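A user-space sketch of this Native-layer read-and-route loop is given below; the device node paths and the two dispatch helpers are assumptions used only for illustration:

```c
/* Sketch: read touch events from the two device nodes and route them by origin. */
#include <fcntl.h>
#include <linux/input.h>
#include <poll.h>
#include <unistd.h>

static void dispatch_area_a(const struct input_event *ev) { (void)ev; /* ordinary multi-channel flow */ }
static void dispatch_area_c(const struct input_event *ev) { (void)ev; /* dedicated area C channel */ }

int main(void)
{
	/* fd[0]: area A (display), fd[1]: area C (frame); node paths are assumed */
	int fd[2] = { open("/dev/input/event3", O_RDONLY),
	              open("/dev/input/event4", O_RDONLY) };
	struct pollfd pfd[2] = { { fd[0], POLLIN, 0 }, { fd[1], POLLIN, 0 } };
	struct input_event ev;

	for (;;) {
		if (poll(pfd, 2, -1) <= 0)
			continue;
		for (int i = 0; i < 2; i++) {
			if ((pfd[i].revents & POLLIN) &&
			    read(fd[i], &ev, sizeof(ev)) == (ssize_t)sizeof(ev))
				(i == 0 ? dispatch_area_a : dispatch_area_c)(&ev);
		}
	}
}
```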
The second embodiment of the present invention proposes a split-screen method for a mobile terminal. The second embodiment differs from the first embodiment mainly in the concrete implementation of step S30. In the second embodiment, when M equals 3, one finger of the user presses and holds the frame area 20, the driver layer receives the pressing operation and the split-screen instruction is triggered, and the first dividing line appears in the display area; two fingers then slide in opposite directions in the frame areas 20 on the two sides of the dividing line, so that a second dividing line is produced on the screen and the display area is divided into split-screen regions 1, 2 and 3. These three split-screen regions can directly display the interfaces of applications already running on the mobile terminal, or can display the main menu interface so that the user can start a new operation by reopening an application.
Referring to Fig. 10, the third embodiment of the present invention proposes a split-screen method for a mobile terminal. The third embodiment specifically includes the following steps:
S50, receiving a screen-merging instruction;
First, the duration parameter of the user's pressing operation in the frame area 20 is acquired; when the touch point of the instruction falls within the frame area 20, the driver layer of the mobile terminal reports the touch point via the input device corresponding to the frame area 20.
Then, the detection frequency is set such that the current time parameter is detected once every 1/85 second, until the duration parameter of the pressing operation falls within the time trigger range. In this embodiment, the trigger range is preset in a configuration table.
After the framework (Framework) layer receives a reported event (the reported event includes the input device, the touch point parameters, etc.), it first identifies which area is concerned according to the name of the input device. If the driver layer (kernel) identifies that the touch occurred in the frame area 20, the input device reported by the driver layer to the framework layer is input1 rather than input0; that is, the framework layer does not need to determine in which sub-area the current touch point lies, nor does it need to determine the size and position of the sub-area, because these determinations are completed at the driver layer. Moreover, besides reporting which specific input device is concerned, the driver layer also reports the parameters of the touch point, such as the pressing time and the position coordinates, to the framework layer. Further, if the driver layer identifies that the touch occurred in the display area 10, the input device reported to the framework layer is input0 rather than input1, and the instruction is ignored.
It should be noted that, after receiving the reported event, the framework layer reports it to the application layer through a single-channel to multi-channel mechanism. Specifically, a channel is first registered and the reported event is transmitted through this channel; the event is monitored by a listener and is passed through different channels to the corresponding application modules, producing different application operations. The application modules include common applications such as the camera and the contacts. Different application operations are produced; for example, under the camera application, when the user clicks in the special sub-area, different operations such as focusing and adjusting camera parameters for shooting can be produced. It should be noted that, before the reported event reaches the listener, it travels on a single channel; after the listener monitors it, the reported event travels on multiple channels that exist simultaneously. The benefit is that the event can be passed to different application modules at the same time, and different application modules produce different operation responses.
Alternatively, the above steps may be implemented as follows: using an object-oriented approach, classes and implementations for the display area and the frame area are defined; after it is determined that the touch falls in the special sub-area, the touch point coordinates of different resolutions are converted into LCD coordinates by an EventHub function; single-channel functions (e.g., a server channel and a client channel) are defined whose purpose is, after a reported event is received, to pass the event through this channel to an event manager (TouchEventManager); through the monitoring of the listener, the event is then passed via multiple channels, simultaneously or one by one, to the application modules of multiple responders, or only to one of the application modules, such as the camera or the gallery, and different application modules produce corresponding operations. Of course, the above steps may also be implemented in other ways, and the embodiments of the present invention do not limit this.
S60, determining whether the screen-merging instruction meets a preset screen-merging trigger threshold;
When the duration of the pressing operation meets the time trigger range, the screen-merging instruction is triggered, and M-1 dividing lines are produced in the display area.
S70, merging the M split-screen regions into N split-screen regions.
In this embodiment, M is not less than 2 and N is less than M. When M equals 2, the screen-merging instruction is triggered and one dividing line appears in the display area; then the user's sliding operation is received, and the user can pull or drag this dividing line with one finger until the position parameter of the dividing line meets a set threshold, whereupon one of the split-screen regions is displayed. Specifically, the dividing line is dragged from split-screen region 1 toward split-screen region 2; when split-screen region 1 occupies 90% of the display area, split-screen region 2 is merged into split-screen region 1, that is, the interface of split-screen region 1 is displayed in the display area. Alternatively, the dividing line may be dragged from split-screen region 2 toward split-screen region 1; when split-screen region 2 occupies 90% of the display area, split-screen region 1 is merged into split-screen region 2, that is, the interface of split-screen region 2 is displayed in the display area. Alternatively, the dividing line is dragged from split-screen region 1 toward split-screen region 2 until split-screen region 2 disappears, and then the interface of split-screen region 1 is displayed in the display area.
When M equals 3, the pressing operation is received, the screen-merging instruction is triggered, and two dividing lines appear in the display area; then the user's sliding operation is received, and the user can pull or drag one of the dividing lines with one finger toward the other dividing line until the position parameter of the dividing line meets a preset threshold, whereupon one of the split-screen regions is displayed. Specifically, the dividing line between split-screen region 1 and split-screen region 2 is dragged so that split-screen region 1 slides toward split-screen region 2 while the dividing line between split-screen region 2 and split-screen region 3 remains stationary; when split-screen region 1 occupies 90% of the area covered by split-screen regions 1 and 2, split-screen region 2 is merged into split-screen region 1. Then the dividing line between the merged split-screen region 1 and split-screen region 3 is dragged so that the merged split-screen region 1 slides toward split-screen region 3; when the merged split-screen region 1 occupies 90% of the display area, split-screen region 3 is merged into split-screen region 1.
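The 90% merge rule used in this embodiment could, for example, be sketched as follows; the names, the rectangle layout and the exact ratio handling are illustrative assumptions:

```c
/* Sketch: merge a neighbouring region once one region covers 90% of their shared span. */
#include <stdbool.h>

#define MERGE_RATIO 0.90

struct split_region { int left, right; /* horizontal extent in pixels */ };

/* returns true and widens `keep` over `gone` once `keep` covers >= 90% of the
 * span currently shared by the two regions */
static bool try_merge(struct split_region *keep, struct split_region *gone)
{
	int total  = (keep->right - keep->left) + (gone->right - gone->left);
	int keep_w = keep->right - keep->left;

	if ((double)keep_w < MERGE_RATIO * total)
		return false;

	keep->left  = keep->left  < gone->left  ? keep->left  : gone->left;
	keep->right = keep->right > gone->right ? keep->right : gone->right;
	return true;   /* caller removes `gone` and shows `keep` at full size */
}
```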
The fourth embodiment of the present invention proposes a split-screen method for a mobile terminal. The fourth embodiment differs from the third embodiment mainly in the concrete implementation of step S70. In the fourth embodiment, when M equals 3, one finger of the user presses and holds the frame area 20, the driver layer receives the pressing operation, the screen-merging instruction is triggered, and two dividing lines appear in the display area; two fingers then slide toward each other in the frame areas 20 near split-screen region 1 and split-screen region 3 respectively, so that split-screen region 2 disappears and one dividing line, split-screen region 1 and split-screen region 3 appear in the display area. In this embodiment, the step of merging split-screen region 1 and split-screen region 3 into split-screen region 1 or split-screen region 3 is similar to that in the third embodiment, and is not repeated here.
With the split-screen method for a mobile terminal of this embodiment, after a split-screen instruction from the user is received, it is determined whether the split-screen instruction meets a preset split-screen trigger threshold; when the split-screen instruction meets the preset split-screen trigger threshold, the display area is divided into a plurality of split-screen regions. The user can adjust the size of the split-screen regions as needed, which makes it convenient to split and merge the screen and improves the user's operating experience.
The fifth embodiment of the present invention further provides a split-screen apparatus for a mobile terminal. Referring to Fig. 11, the split-screen apparatus includes:
a first receiving unit 101, configured to receive a split-screen instruction;
The split-screen apparatus for a mobile terminal provided by the present invention is particularly applicable to a mobile terminal with a narrow bezel or no bezel. The mobile terminal includes a touch area, and the touch area includes a display area 10 and frame areas 20 located on both sides of the display area 10.
During driver initialization the touch screen of the mobile terminal registers two input devices (input) through input_register_device() calls, for example input device 0 (input0) and input device 1 (input1), and allocates one input device for each sub-area through input_allocate_device() calls; for example, the display area corresponds to input device 0 and the frame area corresponds to input device 1.
After these two input devices are registered, the upper layer identifies, according to the name of the input device reported by the driver layer, whether the area the user is currently touching is the display area or the frame area; different sub-areas are processed differently by the upper layer.
The upper layer in the present invention generally refers to the framework (Framework) layer, the application layer, and so on. The system of a mobile terminal, such as Android, iOS or a customized system, generally includes a lower layer (a physical layer and a driver layer) and an upper layer (a framework layer and an application layer). The signal flow is as follows: the physical layer (the touch panel) receives the user's touch operation, and the physical press is converted by the touch panel (TP) into an electrical signal; the TP signal is passed to the driver layer, which parses the pressed position and obtains parameters such as the specific coordinates of the touch point, the duration and the pressure; these parameters are uploaded to the framework layer, which communicates with the driver layer through corresponding interfaces. The framework layer receives the input device (input) reported by the driver layer and parses it, so as to select whether or not to respond to this input device, and passes the valid input upward to the specific application concerned, so that the application layer performs different application operations according to different events.
The split-screen instruction in this embodiment is a touch operation, which includes a pressing operation and a sliding operation. The first receiving unit 101 specifically includes:
an acquisition module, configured to acquire a pressing parameter of the pressing operation, specifically the duration parameter of the user's pressing operation in the frame area 20. When the touch point of the split-screen instruction falls within the frame area 20, the driver layer of the mobile terminal reports the touch point via the input device corresponding to the frame area 20.
After the framework (Framework) layer receives a reported event (the reported event includes the input device, the touch point parameters, etc.), it first identifies which area is concerned according to the name of the input device. If the driver layer (kernel) identifies that the touch occurred in the frame area 20, the input device reported by the driver layer to the framework layer is input1 rather than input0; that is, the framework layer does not need to determine in which sub-area the current touch point lies, nor does it need to determine the size and position of the sub-area, because these determinations are completed at the driver layer. Moreover, besides reporting which specific input device is concerned, the driver layer also reports the parameters of the touch point, such as the pressing time and the position coordinates, to the framework layer. Further, if the driver layer identifies that the touch occurred in the display area 10, the input device reported to the framework layer is input0 rather than input1, and the split-screen instruction is ignored.
It should be noted that, after receiving the reported event, the acquisition module reports it to the application layer through a single-channel to multi-channel mechanism. Specifically, a channel is first registered and the reported event is transmitted through this channel; the event is monitored by a listener and is passed through different channels to the corresponding application modules, producing different application operations. The application modules include common applications such as the camera and the contacts. Different application operations are produced; for example, under the camera application, when the user clicks in the special sub-area, different operations such as focusing and adjusting camera parameters for shooting can be produced. It should be noted that, before the reported event reaches the listener, it travels on a single channel; after the listener monitors it, the reported event travels on multiple channels that exist simultaneously. The benefit is that the event can be passed to different application modules at the same time, and different application modules produce different operation responses.
Alternatively, using an object-oriented approach, classes and implementations for the display area and the frame area are defined; after it is determined that the touch falls in the special sub-area, the touch point coordinates of different resolutions are converted into LCD coordinates by an EventHub function; single-channel functions (e.g., a server channel and a client channel) are defined whose purpose is, after a reported event is received, to pass the event through this channel to an event manager (TouchEventManager); through the monitoring of the listener, the event is then passed via multiple channels, simultaneously or one by one, to the application modules of multiple responders, or only to one of the application modules, such as the camera or the gallery, and different application modules produce corresponding operations. Of course, this may also be implemented in other ways, and the embodiments of the present invention do not limit this.
a trigger module, configured to determine whether the pressing parameter falls within a trigger range;
In this embodiment, the pressing parameter is a pressing time value, and the detection of the pressing time value is realized by setting a fixed detection frequency; for example, the detection frequency is set such that the current time parameter is detected once every 1/85 second, until the duration parameter of the pressing operation falls within the time trigger range. In other embodiments, the pressing parameter is a pressing pressure value; the detection of the pressing pressure value can be realized by providing a pressure sensor, and the subsequent steps are performed after it is detected that the pressing pressure value falls within the trigger range. In still other embodiments, the pressing parameter may be both the pressing time value and the pressing pressure value, and the subsequent steps are performed only after it is detected that both values fall within their trigger ranges, so as to prevent accidental touches. The trigger range is preset in a configuration table.
When the duration of the pressing operation falls within the trigger range, the pressing operation is triggered and, at the same time, the mobile terminal receives the sliding operation through the driver layer; each sliding operation is made up of a plurality of touch points.
a positioning module, configured to acquire an initial position parameter of the sliding operation;
Specifically, the mobile terminal detects the area in which the touch point of the sliding operation falls and obtains the coordinates (X0, Y0) of the touch point, so as to obtain the initial position parameter of the sliding operation.
If the driver layer (kernel) identifies that the touch occurred in the frame area 20, the input device reported by the driver layer to the framework layer is input1 rather than input0; that is, the framework layer does not need to determine in which sub-area the current touch point lies, nor does it need to determine the size and position of the sub-area, because these determinations are completed at the driver layer. Moreover, besides reporting which specific input device is concerned, the driver layer also reports the parameters of the touch point, such as the pressing time and the position coordinates, to the framework layer. Further, if the driver layer identifies that the touch occurred in the display area 10, the input device reported to the framework layer is input0 rather than input1, and the split-screen instruction is ignored.
The positioning module is also configured to acquire a current position parameter of the sliding operation.
At each preset time, the mobile terminal detects the area in which the touch point of the sliding operation falls and obtains the current coordinates (X1, Y1) or (X1', Y1') of the touch point, so as to obtain the current position parameter of the sliding operation. In this embodiment, the preset detection time is such that the current position parameter is detected once every 1/85 second.
a comparison module, configured to compare the magnitudes of the initial position parameter and the current position parameter;
Specifically, comparing the magnitudes of the initial position parameter and the current position parameter is done by calculating the distance between the coordinates (X0, Y0) of the touch point and the current coordinates (X1, Y1) or (X1', Y1'), for example |Y0 - Y1|, or by the formula √((X0 - X1)² + (Y0 - Y1)²).
an identification module, configured to determine the direction of the sliding operation according to the compared magnitudes.
Specifically, if Y0 is greater than Y1, the direction of the sliding operation is upward;
if Y0 is less than Y1', the direction of the sliding operation is downward.
a judging unit 103, configured to determine whether the split-screen instruction meets a preset split-screen trigger threshold;
Specifically, after the direction of the sliding operation has been determined, that is, after it has been determined that the sliding operation slid from (X0, Y0) to (X1, Y1) or (X1', Y1'), it is determined that the split-screen instruction meets the preset split-screen trigger threshold.
a split-screen unit 104, configured to divide the display area into M split-screen regions.
In this embodiment M is not less than 2. When M equals 2, the split-screen unit 104 triggers the split-screen instruction and one dividing line appears in the display area; the sliding operation is received at the same time, and the user can pull or drag this dividing line. When it is identified that the sliding operation has slid from A to B, the display area is divided into split-screen regions 1 and 2. These two split-screen regions can directly display the interfaces of applications already running on the mobile terminal, or can display the main menu interface so that the user can start a new operation by reopening an application.
Specifically, the user produces the split-screen instruction by a gesture; for example, one finger presses and holds the frame area to trigger the pressing operation, while another finger slides in the frame area from A to B to perform the sliding operation, thereby producing the split-screen instruction. The position of the dividing line is controlled by sliding, which changes the size of the split-screen regions, making the operation flexible and convenient for the user and improving the operating experience.
Further, when M equals 3, the first receiving unit 101 receives the pressing operation, the split-screen unit 104 triggers the split-screen instruction and a dividing line appears in the display area; the sliding operation is received at the same time, and the user can pull or drag this dividing line. When it is identified that the sliding operation passes from A through B to C, the display area is divided into split-screen regions 1, 2 and 3. These three split-screen regions can directly display the interfaces of applications already running on the mobile terminal, or can display the main menu interface so that the user can start a new operation by reopening an application.
Specifically, one finger of the user presses and holds the frame area to trigger the pressing operation, while another finger slides in the frame area from A to B to perform the sliding operation, and the first dividing line appears in the display area; the user then continues to slide from B to C, and the second dividing line appears in the display area, thereby producing the split-screen instruction. The position of the dividing lines is controlled by sliding, which changes the size of the split-screen regions, making the operation flexible and convenient for the user and improving the operating experience.
In the sixth embodiment, the split-screen apparatus for the mobile terminal further includes:
a second receiving unit 102, configured to receive a screen-merging instruction;
Specifically, the second receiving unit 102 first acquires the duration parameter of the user's pressing operation in the frame area 20; when the touch point of the instruction falls within the frame area 20, the driver layer of the mobile terminal reports the touch point via the input device corresponding to the frame area 20.
Then, the second receiving unit 102 sets the detection frequency such that the current time parameter is detected once every 1/85 second, until the duration parameter of the pressing operation falls within the time trigger range. In this embodiment, the trigger range is preset in a configuration table.
The judging unit 103 is also configured to determine whether the screen-merging instruction meets a preset screen-merging trigger threshold.
When the duration of the pressing operation meets the time trigger range, the judging unit 103 determines that the screen-merging instruction is triggered, and M-1 dividing lines are produced in the display area.
a screen-merging unit 105, configured to merge the M split-screen regions into N split-screen regions.
In this embodiment, M is not less than 2 and N is less than M. When M equals 2, the screen-merging unit 105 triggers the screen-merging instruction and one dividing line appears in the display area; then the user's sliding operation is received, and the user can pull or drag this dividing line with one finger until the position parameter of the dividing line meets a set threshold, whereupon one of the split-screen regions is displayed. Specifically, the dividing line is dragged from split-screen region 1 toward split-screen region 2; when split-screen region 1 occupies 90% of the display area, split-screen region 2 is merged into split-screen region 1, that is, the interface of split-screen region 1 is displayed in the display area. Alternatively, the dividing line may be dragged from split-screen region 2 toward split-screen region 1; when split-screen region 2 occupies 90% of the display area, split-screen region 1 is merged into split-screen region 2, that is, the interface of split-screen region 2 is displayed in the display area. Alternatively, the dividing line is dragged from split-screen region 1 toward split-screen region 2 until split-screen region 2 disappears, and then the interface of split-screen region 1 is displayed in the display area.
When M equals 3, the screen-merging unit 105 triggers the screen-merging instruction and two dividing lines appear in the display area; then the user's sliding operation is received, and the user can pull or drag one of the dividing lines with one finger toward the other dividing line until the position parameter of the dividing line meets a preset threshold, whereupon one of the split-screen regions is displayed. Specifically, the dividing line between split-screen region 1 and split-screen region 2 is dragged so that split-screen region 1 slides toward split-screen region 2 while the dividing line between split-screen region 2 and split-screen region 3 remains stationary; when split-screen region 1 occupies 90% of the area covered by split-screen regions 1 and 2, split-screen region 2 is merged into split-screen region 1. Then the dividing line between the merged split-screen region 1 and split-screen region 3 is dragged so that the merged split-screen region 1 slides toward split-screen region 3; when the merged split-screen region 1 occupies 90% of the display area, split-screen region 3 is merged into split-screen region 1.
With the split-screen apparatus for a mobile terminal of this embodiment, after a split-screen instruction from the user is received, it is determined whether the split-screen instruction meets a preset split-screen trigger threshold; when the split-screen instruction meets the preset split-screen trigger threshold, the display area is divided into a plurality of split-screen regions. The user can adjust the size of the split-screen regions as needed, which makes it convenient to split and merge the screen and improves the user's operating experience.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus that comprises a series of elements not only comprises those elements, but also comprises other elements not explicitly listed, or further comprises elements inherent to such a process, method, article or apparatus. Without further restriction, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that comprises that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, can essentially be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.