CN107479725A - Character input method and device, virtual keyboard, electronic equipment and storage medium - Google Patents

Character input method and device, virtual keyboard, electronic equipment and storage medium

Info

Publication number
CN107479725A
Authority
CN
China
Prior art keywords
touch
touch point
character
input
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710828402.4A
Other languages
Chinese (zh)
Other versions
CN107479725B (en)
Inventor
李凡智
刘旭国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201710828402.4A
Publication of CN107479725A
Application granted
Publication of CN107479725B
Legal status: Active (current)
Anticipated expiration

Abstract

The application provides a character input method and device, a virtual keyboard, an electronic device and a storage medium. The method includes: obtaining a first position of a first touch point on a touch screen, and determining a first input character corresponding to the first position; obtaining a second position of a second touch point on the touch screen; obtaining, according to the second position, the position relationship between the second touch point and a point on the touch screen; determining a second input character corresponding to the second position according to the position relationship and the first input character; combining the matched characters according to the touch order of the touch points to obtain a plurality of words; and selecting a correct word from the plurality of words as the input word. In other words, the electronic device can determine characters according to the relative positions between points, so when performing a character input operation it does not need to display a virtual keyboard on the touch screen. This avoids occupying the display area of the touch screen, so actual content can be displayed in the whole display area of the touch screen.

Description

Character input method and device, virtual keyboard, electronic equipment and storage medium
This application is a divisional application of the Chinese patent application filed on October 15, 2012, with application number 201210390927.1 and entitled "A character input method, device, virtual keyboard and electronic equipment".
Technical Field
The present disclosure relates to the field of character input technologies, and in particular, to a character input method and apparatus, a virtual keyboard, an electronic device, and a storage medium.
Background
With the progress of scientific technology, electronic devices are becoming more and more intelligent, and thus electronic devices having touch screens are gradually emerging.
When a user inputs words through the touch screen, a virtual keyboard is displayed on the touch screen so that the user can type the needed words. Specifically, the absolute position of the key clicked on the virtual keyboard is obtained, and the required word is matched and input according to that absolute position.
However, the display of the virtual keyboard occupies a part of the display area of the touch screen, so that the effective display area of the touch screen is reduced, and the display of the actual content is affected. For example, currently, the electronic device stores information interaction between a user and the same contact in the same short message list, and after the virtual keyboard is turned on, the virtual keyboard can block a short message display area, which affects a display area of effective content.
Disclosure of Invention
The technical problem to be solved by the application is to provide a character input method, a character input device, a virtual keyboard, an electronic device and a storage medium, so as to solve the problem that in the prior art, after the virtual keyboard is started, the virtual keyboard can shield a short message display area and influence the display area of effective content.
According to one aspect of the present application, a character input method is provided, which is applied to an electronic device including a touch screen, and the method includes:
obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position;
obtaining a second position of a second touch point on the touch screen;
obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
determining a second input character corresponding to the second position according to the position relation and the first input character;
combining the matched characters according to the touch sequence of the touch points to obtain a plurality of words;
and selecting a correct vocabulary from the plurality of vocabularies as an input vocabulary.
Preferably, obtaining the position relationship between the second touch point and one point in the touch screen according to the second position includes: and acquiring the position relation between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
Preferably, obtaining the position relationship between the second touch point and one point in the touch screen according to the second position includes: and acquiring the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
Preferably, obtaining the position relationship between the second touch point and one point in the touch screen according to the second position includes: and acquiring the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
Preferably, determining a second input character corresponding to the second position according to the position relationship and the first input character includes: and selecting a character associated with the first input character from a preset vocabulary table according to the position relation and the first input character, and determining the character as a second input character, wherein the preset vocabulary table comprises words existing in a standard dictionary, or the preset vocabulary table comprises words existing in the standard dictionary and recorded words input by the user before.
Preferably, selecting a correct vocabulary from the plurality of vocabularies as the input vocabulary comprises:
comparing the plurality of vocabularies with a preset vocabulary table respectively, and selecting one vocabulary included in the preset vocabulary table as an input vocabulary; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
Preferably, under the condition that a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, obtaining a first position of the first touch point on the touch screen and determining a first input character corresponding to the first position are performed, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of the other touch point, and the two touch points are touch points touched twice adjacently.
Preferably, the preset command operation is performed when a time difference value of a single touch point is not less than a preset time, or the preset command operation is performed when a first time difference value of the single touch point is less than a first preset time and a second time difference value of two touch points is not less than a second preset time, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of the other touch point, and the two touch points are touch points touched twice in a neighboring sequence.
Preferably, the method further comprises the following steps:
displaying input characters on the touch screen.
According to another aspect of the present application, there is also provided a character input device applied to an electronic device, the electronic device including a touch screen, the device including:
the first character determining unit is used for obtaining a first position of a first touch point on the touch screen and determining a first input character corresponding to the first position;
the position acquisition unit is used for acquiring a second position of a second touch point on the touch screen;
the position relation obtaining unit is used for obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
the second character determining unit is used for determining a second input character corresponding to the second position according to the position relation and the first input character;
the matching unit is used for combining the matched characters according to the touch sequence of the touch points to obtain a plurality of vocabularies;
and the selecting unit is used for selecting a correct vocabulary from the vocabularies as an input vocabulary.
Preferably, the position relationship obtaining unit is specifically configured to obtain a position relationship between the second touch point and a positioning point in the touch screen according to a second position of the second touch point relative to the positioning point.
Preferably, the position relation obtaining unit is specifically configured to obtain a position relation between the second touch point and the first touch point according to a second position of the second touch point relative to the first touch point.
Preferably, the position relation acquiring unit is specifically configured to acquire a position relation between the second touch point and the first touch point according to a second position of the second touch point relative to the first touch point.
Preferably, the second character determination unit is specifically configured to select a character associated with the first input character from a preset vocabulary according to the position relationship and the first input character, and determine the character as the second input character, where the preset vocabulary includes words already in the standard dictionary, or the preset vocabulary includes words already in the standard dictionary and recorded words that are input by the user before.
Preferably, the selecting unit is specifically configured to compare the plurality of vocabularies with a preset vocabulary table, and select one vocabulary included in the preset vocabulary table as an input vocabulary; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
Preferably, the first character determining unit is specifically configured to, when a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, obtain a first position of the first touch point on the touch screen, and determine a first input character corresponding to the first position, where the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of another touch point, and the two touch points are touch points touched twice adjacently.
Preferably, the device further comprises a preset command executing unit, configured to execute a preset command operation if a time difference value of a single touch point is not less than a preset time, or execute a preset command operation if a first time difference value of a single touch point is less than a first preset time and a second time difference value of two touch points is not less than a second preset time, where the first time difference value is the difference between the start time and the end time of the single touch point, the second time difference value is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched touch points.
Preferably, the method further comprises the following steps: and the display unit is used for displaying input characters on the touch screen.
According to still another aspect of the present application, there is provided a virtual keyboard including the above character input device.
According to another aspect of the present application, an electronic device is also provided, which comprises a touch screen and the above virtual keyboard, wherein the virtual keyboard is connected with the touch screen.
Based on still another aspect of the present application, there is also provided an electronic device, including: the character input device comprises a touch screen and a processor, wherein the processor is used for obtaining touch points on the touch screen to execute the character input method.
Based on still another aspect of the present application, a storage medium is further provided, where the storage medium stores a computer program, and the computer program is used to implement the above character input method.
Compared with the prior art, the method has the following advantages:
in this application, the electronic device may first obtain a first position of a first touch point on the touch screen, and determine a first input character corresponding to the first position. After the second position of the second touch point is obtained and the position relationship between the second touch point and one point in the touch screen is obtained, the second input character corresponding to the second position can be determined according to the position relationship and the first input character. That is to say, the electronic device can determine the characters according to the relative positions between the points, so that when the electronic device performs the character input operation, the virtual keyboard is not displayed on the touch screen, the display area of the touch screen is prevented from being occupied, and the actual content can be displayed in the whole display area of the touch screen.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a character input method provided herein;
FIG. 2 is another flow chart of a character input method provided herein;
FIG. 3 is a flow chart of another character input method provided by the present application;
FIG. 4 is a flow chart of another method for inputting characters provided by the present application;
FIG. 5 is a flow chart of another character input method provided by the present application;
FIG. 6 is a schematic diagram of a structure of a character input device provided in the present application;
FIG. 7 is a schematic diagram of another structure of a character input device provided in the present application;
fig. 8 is a schematic structural diagram of a character input device provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
When an existing electronic device uses a virtual keyboard to input words, the absolute position of the pressed key on the virtual keyboard is obtained first, and character matching is then carried out according to that absolute position. As a result, during character input the display of the virtual keyboard occupies part of the display area of the touch screen, which reduces the effective display area of the touch screen and affects the display of the actual content.
The character input method provided by the application determines characters by relative positions, so the electronic device no longer needs to display a virtual keyboard during character input, which avoids occupying the display area of the touch screen. The character input method provided by the present application is described in detail below through specific embodiments.
One embodiment
Referring to fig. 1, a flow chart of a character input method provided by the present application is shown, where the character input method is applied to an electronic device including a touch screen, and may include the following steps:
step 101: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 102: a second location of a second touch point on the touch screen is obtained.
When an object touches the touch screen of the electronic device, for example when a user's finger touches the touch screen, a touch point is formed on the touch screen. A camera in the electronic device captures images of the touch screen, and an image recognition chip in the electronic device analyzes the captured images to recognize the touch points on the touch screen, thereby obtaining the positions of the touch points on the touch screen. The second touch point is the touch point formed on the touch screen at the current moment, and the first touch point is the touch point formed on the touch screen at the previous moment.
In this embodiment, the position of each touch point may be the position of the touch point relative to the center point of the touch screen or relative to a certain corner of the touch screen. Of course, the position of the touch point may also be its position relative to the positioning point. The positioning point is a point formed on the touch screen by a certain key of a virtual keyboard; the position of the touch point relative to the positioning point may be directly above the positioning point, directly below the positioning point, 30 degrees to the left of the positioning point, and so on, and the specific position may be set according to the application scenario.
The positioning point is displayed on the touch screen when the electronic device starts a character input operation. The positioning point may be the point formed on the touch screen by the H key of an existing virtual keyboard, or the point formed on the touch screen by the Enter key. The position of the positioning point on the touch screen is preset in the electronic device, and it may be computed in the same way as the absolute position of the corresponding key is computed after an existing virtual keyboard is opened.
Step 103: and obtaining the position relation between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
In this embodiment, when the position of the touch point is the position of the touch point relative to the positioning point, the position relationship is the position relationship between the second touch point and the positioning point. The second position is the position of the second touch point relative to the positioning point, and the position indicates the position relationship between the second touch point and the positioning point, so that the electronic device can obtain the position relationship between the second touch point and the positioning point after knowing the second position of the second touch point. If the second position is directly above the positioning point, the position relationship between the second touch point and the positioning point is as follows: the second touch point is located right above the positioning point.
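As an illustrative sketch only, assuming the touch screen reports each touch point as (x, y) pixel coordinates with the origin at the top-left corner and assuming directions are quantized into eight 45-degree sectors (neither convention is fixed by the description above), the position relationship of the second touch point relative to a reference point such as the positioning point could be derived as follows:

```python
import math

# Illustrative sketch: the coordinate convention and sector size are assumptions.
DIRECTION_LABELS = {0: "right", 45: "upper right", 90: "directly above",
                    135: "upper left", 180: "left", 225: "lower left",
                    270: "directly below", 315: "lower right"}

def position_relationship(reference, point, sector_deg=45):
    """Return the quantized direction of `point` relative to `reference`.

    Both arguments are (x, y) screen coordinates; the origin is the top-left
    corner of the touch screen, so a smaller y means higher up on the screen.
    """
    dx = point[0] - reference[0]
    dy = reference[1] - point[1]                 # flip y so "up" is positive
    angle = math.degrees(math.atan2(dy, dx)) % 360
    sector = round(angle / sector_deg) * sector_deg % 360
    return DIRECTION_LABELS[sector]

# Second touch point 40 px above the positioning point formed by the H key
positioning_point = (512, 600)
second_touch = (512, 560)
print(position_relationship(positioning_point, second_touch))  # directly above
```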
Step 104: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the second input character determination process may be: and selecting a character associated with the first input character from a preset vocabulary table according to the position relation and the first input character, and determining the character as a second input character, wherein the preset vocabulary table comprises words existing in a standard dictionary, or the preset vocabulary table comprises words existing in the standard dictionary and recorded words input by the user before. The specific operation process is illustrated.
For example, the positioning point is the point formed by the H key on the touch screen, the first input character is a, and the position relationship is "directly above the positioning point"; according to this position relationship, the characters directly above the positioning point can be y and u. Each resulting character is combined with the first input character, yielding the first character strings au and ay. Each first character string is checked against the preset vocabulary table to see whether a word containing that string exists; if so, the corresponding candidate character is determined to be the second input character.
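This character selection can be sketched as follows. The sketch rests on assumptions made only to run the au/ay example: the neighbour table lists which characters sit directly above a given key on a standard QWERTY layout, and the tiny preset vocabulary table stands in for a real dictionary.

```python
# Illustrative sketch: the QWERTY neighbour table and the vocabulary entries are
# assumptions, not data taken from the description above.
CHARS_DIRECTLY_ABOVE = {
    "h": ["y", "u"], "g": ["t", "y"], "j": ["u", "i"],
    "a": ["q", "w"], "s": ["w", "e"], "d": ["e", "r"],
}

PRESET_VOCABULARY = {"aunt", "autumn", "august"}   # standard-dictionary words

def second_input_characters(first_char, relationship, anchor_key="h"):
    """Keep the candidate characters implied by the position relationship whose
    combination with the first input character is a prefix of some word in the
    preset vocabulary table."""
    if relationship != "directly above":
        return []            # other relationships would be handled analogously
    candidates = CHARS_DIRECTLY_ABOVE.get(anchor_key, [])
    return [c for c in candidates
            if any(word.startswith(first_char + c) for word in PRESET_VOCABULARY)]

# First input character "a", second touch point directly above the H key:
# "au" is a prefix of words in the table while "ay" is not, so "u" is selected.
print(second_input_characters("a", "directly above"))   # ['u']
```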
By applying the technical scheme, the electronic equipment can firstly obtain the first position of the first touch point on the touch screen and determine the first input character corresponding to the first position. After the second position of the second touch point is obtained and the position relationship between the second touch point and one point in the touch screen is obtained, the second input character corresponding to the second position can be determined according to the position relationship and the first input character. That is to say, the electronic device can determine the characters according to the relative positions between the points, so that when the electronic device performs the character input operation, the virtual keyboard is not displayed on the touch screen, the display area of the touch screen is prevented from being occupied, and the actual content can be displayed in the whole display area of the touch screen.
Further, an existing virtual keyboard matches characters according to the absolute positions of the keys, that is, when a certain key is touched, the virtual keyboard can only match one character according to the absolute position of that key. When a certain key is intended but an adjacent key is touched by mistake, that is, when the touch point is wrong, the virtual keyboard can only match a wrong character according to the absolute position.
In this embodiment, the electronic device determines the second input character according to the position relationship and the first input character. Therefore, when an adjacent key is touched by mistake, the characters corresponding to the position of the touch point may include both the wrong character and the correct character, so the electronic device can still obtain the correct character by using the character input method provided by the application and thus achieve automatic error correction. The error correction effect is even better when the characters corresponding to the positions of several touch points are matched continuously.
Another embodiment
This embodiment describes the specific character input process when the position of each touch point is taken relative to the first touch point and the position relationship is the position relationship between the second touch point and the first touch point. Referring to fig. 2, another flow chart of a character input method provided in the present application is shown, which may include the following steps:
step 201: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 202: a second location of a second touch point on the touch screen is obtained.
In this embodiment, the position of each touch point may be a position of the touch point relative to a center point of the touch screen or a position of the touch point relative to a certain corner of the touch screen. Of course, the position of each touch point may also be relative to the position of the first touch point.
The first touch point is a first touch point formed on the touch screen in the character input process, the position of each touch point relative to the first touch point can be directly above the first touch point, directly below the first touch point, 30 degrees to the left of the previous touch point, and the like, and the specific position can be set according to an application scene.
The position of the first touch point itself may be a position relative to the center point of the touch screen, relative to a certain corner of the touch screen, or relative to the positioning point. The positioning point has been described in the previous embodiment and is not described again here.
Step 203: and obtaining the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
In this embodiment, the position of the touch point is the position of the touch point relative to the first touch point, and the positional relationship is the positional relationship between the second touch point and the first touch point. Since the second position is the position of the second touch point relative to the first touch point, which indicates the positional relationship between the second touch point and the first touch point, the electronic device can obtain this positional relationship once it knows the second position of the second touch point. If the second position is directly above the first touch point, the position relationship between the second touch point and the first touch point is: the second touch point is located directly above the first touch point.
Step 204: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the second input character determination process refers to the detailed description of the previous embodiment, which will not be described again.
Yet another embodiment
The present embodiment differs from the above embodiments in that here the position of the touch point is the position of the touch point relative to the first touch point, and the position relationship is the position relationship between the second touch point and the first touch point, as shown in fig. 3. Fig. 3 is another flowchart of a character input method provided in the present application, which may include the following steps:
step 301: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 302: a second location of a second touch point on the touch screen is obtained.
In this embodiment, the second touch point is a touch point formed on the touch screen at the current moment, and the first touch point is a touch point formed on the touch screen at the previous moment. And the second position of the second touch point may be a position of the second touch point with respect to a center point of the touch screen or a position of the second touch point with respect to a certain corner of the touch screen. Of course, the position of the second touch point may also be relative to the position of the first touch point.
In this embodiment, the relative position of the second touch point with respect to the first touch point may be a position directly above the first touch point, a position directly below the first touch point, and a position 30 degrees to the left of the first touch point, and the like, and the specific position may be set according to an application scenario.
It should be noted that the position of the first touch point may be a position relative to the center point of the touch screen, relative to a certain corner of the touch screen, or relative to the positioning point. The positioning point is described in the embodiment of fig. 1 and is not described again here.
Step 303: and obtaining the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
In this embodiment, the position of the touch point is the position of the touch point relative to the first touch point, and the positional relationship is the positional relationship between the second touch point and the first touch point. Since the second position is the position of the second touch point relative to the first touch point, which indicates the positional relationship between the second touch point and the first touch point, the electronic device can obtain this positional relationship once it knows the second position of the second touch point. If the second position is directly above the first touch point, the position relationship between the second touch point and the first touch point is: the second touch point is located directly above the first touch point.
Step 304: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the second input character determination process refers to the detailed description of the previous embodiment, which will not be described again.
Yet another embodiment
Referring to fig. 4, a flowchart of a character input method provided in the present application is shown, which may include the following steps:
step 401: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 402: a second location of a second touch point on the touch screen is obtained.
Step 403: and obtaining the position relation between the second touch point and one point in the touch screen according to the second position.
Step 404: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the implementation process of steps 401 to 404 may refer to the implementation process described in any one of the three embodiments, and this embodiment will not be described again.
Step 405: and combining the matched characters according to the touch sequence of the touch points to obtain a plurality of words.
It is to be noted here that: in this embodiment, the electronic device may not combine the characters at will, but combine the characters according to the touch sequence of the touch points. Wherein the touch order is the sequence of touches.
Step 406: and selecting a correct vocabulary from the plurality of vocabularies as an input vocabulary.
In this embodiment, when the electronic device selects a correct vocabulary from the plurality of vocabularies, the plurality of vocabularies may be compared with the preset vocabulary table, and one vocabulary included in the preset vocabulary table is selected as an input vocabulary.
The preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which were input by the user before. The standard dictionary may be, for example, the nationally published Xinhua Dictionary.
Take the word "appearance" as an example. With an existing virtual keyboard, if the adjacent key R is touched by mistake when the character e should be input, the word finally matched and output by the existing virtual keyboard is the mistyped word. With the character input method provided by this embodiment, the matched words include both the mistyped word and "appearance"; comparing these words with the preset vocabulary table shows that the correct word is "appearance", which is exactly the word to be input, so automatic word correction is achieved.
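A minimal sketch of this word-level matching and correction, under the assumption that each touch point yields a small candidate set (the mis-touched position contributing both the wrong and the correct character) and that the preset vocabulary table is a plain set of words:

```python
from itertools import product

# Illustrative preset vocabulary table; a real table would hold a standard
# dictionary plus words the user has previously input.
PRESET_VOCABULARY = {"appearance", "appear", "apple"}

def match_words(candidates_per_touch):
    """Combine one candidate character per touch point, strictly in touch order,
    and keep only those combinations that appear in the preset vocabulary table."""
    combinations = ("".join(chars) for chars in product(*candidates_per_touch))
    return [word for word in combinations if word in PRESET_VOCABULARY]

# Typing "appearance": one touch lands between the E and R keys, so both
# characters are kept as candidates for that position instead of committing to
# a single (possibly wrong) character as an absolute-position keyboard would.
candidates = [["a"], ["p"], ["p"], ["e", "r"], ["a"], ["r"],
              ["a"], ["n"], ["c"], ["e"]]
print(match_words(candidates))   # ['appearance']
```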
In all the above method embodiments, the electronic device may perform character input according to the technical scheme provided by the above method embodiments when it determines that the current operation is a character input operation. One way of judging whether the current operation is a character input operation is: judging whether a first time difference value of a single touch point is smaller than a first preset time and whether a second time difference value of two touch points is smaller than a second preset time, wherein the first time difference value is the difference between the start time and the end time of the single touch point, the second time difference value is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched touch points.
Under the condition that the first time difference value of a single touch point is smaller than the first preset time and the second time difference value of two touch points is smaller than the second preset time, the current operation is judged to be a character input operation, the step of obtaining the first position of the first touch point on the touch screen, determining the first input character corresponding to the first position is executed, and the character input process is further completed, as shown in fig. 5. Fig. 5 is a further flowchart of a character input method provided in the present application, wherein the implementation process of fig. 5 can refer to the implementation process in an embodiment corresponding to any one of the flowcharts of fig. 1 to 3, which is not described again.
And executing a preset command operation under the condition that the first time difference value of the single touch point is less than a first preset time and the second time difference value of the two touch points is not less than a second preset time, wherein the preset command operation can be a single-click operation. And executing a preset command operation under the condition that the first time difference value of the single touch point is not less than the first preset time, wherein the preset command operation can be a screen sliding operation.
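The decision logic of this paragraph and the preceding one can be sketched as below; the concrete threshold values are assumptions chosen only to make the example runnable, and the mapping of the two command cases to a single click and a screen slide follows the description above.

```python
def classify_touch(touch_duration, gap_since_previous,
                   first_preset=0.3, second_preset=0.8):
    """Classify the current touch following the timing rules described above.

    touch_duration: end time minus start time of the single touch point (seconds).
    gap_since_previous: start time of this touch minus end time of the previous
    touch point (seconds). The two preset times are illustrative assumptions.
    """
    if touch_duration >= first_preset:
        return "preset command (screen sliding operation)"
    if gap_since_previous >= second_preset:
        return "preset command (single-click operation)"
    return "character input operation"

print(classify_touch(0.10, 0.40))   # character input operation
print(classify_touch(0.10, 2.00))   # preset command (single-click operation)
print(classify_touch(1.50, 0.40))   # preset command (screen sliding operation)
```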
In addition, in all the above method embodiments, the input characters may be displayed on the touch screen during the character input process. The display mode may be a semi-transparent display mode or a solid display mode, where the semi-transparent display mode means the character display brightness is half of the actual display brightness, and the solid display mode means the character display brightness is the actual display brightness.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Corresponding to the above method embodiment, the present application further provides a character input device, which is applied to an electronic device including a touch screen, and the structure diagram of the character input device is shown in fig. 6, including: a first character determining unit 11, a position acquiring unit 12, a positional relationship acquiring unit 13, and a second character determining unit 14. Wherein,
the first character determining unit 11 is configured to obtain a first position of a first touch point on the touch screen, and determine a first input character corresponding to the first position.
A position obtaining unit 12, configured to obtain a second position of a second touch point on the touch screen.
In this embodiment, the position of each touch point may be a position relative to the positioning point, or a position relative to the first touch point. The second touch point is the touch point formed on the touch screen at the current moment, and the first touch point is the touch point formed on the touch screen at the previous moment.
It should be noted that the position of the first touch point may be a position relative to the center point of the touch screen, relative to a certain corner of the touch screen, or relative to the positioning point. The positioning point is described in the method embodiments and is not described further here.
And a position relation obtaining unit 13, configured to obtain a position relation between the second touch point and one point in the touch screen according to the second position.
In this embodiment, when the second position of the second touch point is a position relative to the positioning point, the position relationship obtaining unit 13 is specifically configured to obtain the position relationship between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
When the second position of the second touch point is a position relative to the first touch point, the position relationship obtaining unit 13 is specifically configured to obtain the position relationship between the second touch point and the first touch point in the touch screen according to the second position of the second touch point relative to the first touch point.
When the second position of the second touch point is a position relative to the first touch point, the position relationship obtaining unit 13 is specifically configured to obtain the position relationship between the second touch point and the first touch point in the touch screen according to the second position of the second touch point relative to the first touch point.
And a second character determining unit 14, configured to determine, according to the position relationship and the first input character, a second input character corresponding to the second position.
In this embodiment, the second character determining unit 14 is specifically configured to select a character associated with the first input character from a preset vocabulary table according to the position relationship and the first input character, and determine that character as the second input character, where the preset vocabulary table includes words already existing in the standard dictionary, or words already existing in the standard dictionary plus recorded words previously input by the user. The specific operation process is illustrated below.
For example, the positioning point is the point formed by the H key on the touch screen, the first input character is a, and the position relationship is "directly above the positioning point"; according to this position relationship, the characters directly above the positioning point can be y and u. Each resulting character is combined with the first input character, yielding the first character strings au and ay. Each first character string is checked against the preset vocabulary table to see whether a word containing that string exists; if so, the corresponding candidate character is determined to be the second input character.
Referring to fig. 7, which shows another schematic structural diagram of a character input device provided in the present application, on the basis of fig. 6, the character input device may further include: a matching unit 15 and a selecting unit 16. Wherein,
and the matching unit 15 is used for combining the matched characters according to the touch sequence of the touch points to obtain a plurality of vocabularies.
It is to be noted here that: in this embodiment, the matching unit 15 may not combine the characters at will, but combine the characters according to the touch sequence of the touch points. Wherein the touch order is the sequence of touches.
A selecting unit 16, configured to select a correct vocabulary from the plurality of vocabularies as an input vocabulary. The selecting unit 16 may be specifically configured to compare a plurality of vocabularies with a preset vocabulary table, and select one vocabulary included in the preset vocabulary table as an input vocabulary; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
Take the word "appearance" as an example again. With an existing virtual keyboard, if the adjacent key R is touched by mistake when the character e should be input, the word finally matched and output by the existing virtual keyboard is the mistyped word. With the character input device provided by this embodiment, the matched words include both the mistyped word and "appearance"; comparing these words with the preset vocabulary table shows that the correct word is "appearance", which is exactly the word to be input, so automatic word correction is achieved.
In all the above device embodiments, the first character determining unit 11 is specifically configured to, when a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, obtain a first position of the first touch point on the touch screen, and determine a first input character corresponding to the first position, where the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of another touch point, and the two touch points are touch points touched twice in adjacent.
And in the case that the time difference value of the single touch point is not less than the preset time, the preset command operation is executed by the preset command execution unit 17, and the preset command operation may be a single click operation. In the case that the first time difference value of a single touch point is less than the first preset time, and the second time difference value of two touch points is not less than the second preset time, the preset command execution unit 17 executes a preset command operation, which may be a screen sliding operation.
In addition, the determined input character may be displayed on the touch screen by the display unit 18 in a manner of a semi-transparent display, in which the character display brightness is half of the actual display brightness, or a solid display, in which the character display brightness is the actual display brightness.
In the present embodiment, please refer to fig. 8 for a character input device including a preset command execution unit 15 and a display unit 16, wherein fig. 8 is a schematic structural diagram of a character input device provided in the present application based on fig. 6. Of course, fig. 8 may also be based on fig. 7, and this embodiment will not be described again.
The device described in this embodiment may be integrated into a virtual keyboard, and the virtual keyboard may be included in an electronic device, and the virtual keyboard may be connected to a touch screen in the electronic device.
The application also provides an electronic device, which comprises a touch screen and a processor, wherein the processor is used for obtaining the touch points on the touch screen to execute the character input method.
The application also provides a storage medium, where the storage medium stores a computer program, and the computer program is used to implement the above character input method.
The electronic device and the storage medium in this embodiment are based on different aspects of the same inventive concept, and the implementation process of the method has been described in detail above, so a person skilled in the art can clearly understand, from the foregoing description, the implementation process of the processor in the electronic device and of the computer program stored in the storage medium in this embodiment; for brevity of the description, details are not repeated here.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The character input method, the character input device, the virtual keyboard, the electronic device and the storage medium provided by the application are introduced in detail, specific examples are applied in the text to explain the principle and the implementation of the application, and the description of the above embodiments is only used to help understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (22)

16. The character input device according to any one of claims 10 to 13, wherein the first character determination unit is specifically configured to, when a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, obtain a first position of the first touch point on the touch screen, and determine the first input character corresponding to the first position, where the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between an end time of one touch point and a start time of another touch point, and the two touch points are touch points touched twice in adjacent.
17. The character input device according to any one of claims 10 to 13, further comprising a preset command executing unit configured to execute a preset command operation in a case where a time difference value of a single touch point is not less than a preset time, or in a case where a first time difference value of a single touch point is less than a first preset time and a second time difference value of two touch points is not less than a second preset time, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between an end time of one touch point and a start time of the other touch point, and the two touch points are touch points touched by two adjacent times.
CN201710828402.4A | priority 2012-10-15 | filed 2012-10-15 | Character input method and device, virtual keyboard, electronic equipment and storage medium | Active | CN107479725B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710828402.4A | 2012-10-15 | 2012-10-15 | Character input method and device, virtual keyboard, electronic equipment and storage medium

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201210390927.1A | 2012-10-15 | 2012-10-15 | A character input method, device, virtual keyboard and electronic equipment
CN201710828402.4A | 2012-10-15 | 2012-10-15 | Character input method and device, virtual keyboard, electronic equipment and storage medium

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210390927.1A (Division, CN103729132B (en)) | A character input method, device, virtual keyboard and electronic equipment | 2012-10-15 | 2012-10-15

Publications (2)

Publication Number | Publication Date
CN107479725A | 2017-12-15
CN107479725B | 2021-07-16

Family

ID=50453227

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
CN201710828402.4A (Active, CN107479725B (en)) | Character input method and device, virtual keyboard, electronic equipment and storage medium | 2012-10-15 | 2012-10-15
CN201210390927.1A (Active, CN103729132B (en)) | A character input method, device, virtual keyboard and electronic equipment | 2012-10-15 | 2012-10-15

Family Applications After (1)

Application Number | Title | Priority Date | Filing Date
CN201210390927.1A (Active, CN103729132B (en)) | A character input method, device, virtual keyboard and electronic equipment | 2012-10-15 | 2012-10-15

Country Status (1)

Country | Link
CN (2) | CN107479725B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114610164A (en)* | 2022-03-17 | 2022-06-10 | Lenovo (Beijing) Ltd | Information processing method and electronic device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106610780A (en)* | 2015-10-27 | 2017-05-03 | ZTE Corporation | Text selection method and intelligent terminal
CN107015727A (en)* | 2017-04-07 | 2017-08-04 | Shenzhen Gionee Communication Equipment Co., Ltd | A kind of method and terminal of control character separator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1591298A (en)* | 1997-01-24 | 2005-03-09 | Tegic Communications Inc | Reduced keyboard disambiguating system
US20080167858A1 (en)* | 2007-01-05 | 2008-07-10 | Greg Christie | Method and system for providing word recommendations for text input
US20090237361A1 (en)* | 2008-03-18 | 2009-09-24 | Microsoft Corporation | Virtual keyboard based activation and dismissal
CN101685342A (en)* | 2008-09-26 | 2010-03-31 | Lenovo (Beijing) Ltd | Method and device for realizing dynamic virtual keyboard

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5620267A (en)* | 1993-10-15 | 1997-04-15 | Keyboard Advancements, Inc. | Keyboard with thumb activated control key
JPH103335A (en)* | 1996-06-16 | 1998-01-06 | Shinichiro Sakamoto | Typing supporting goods for word processor, personal computer or typewriter
US6614422B1 (en)* | 1999-11-04 | 2003-09-02 | Canesta, Inc. | Method and apparatus for entering data using a virtual input device
CN1746825A (en)* | 2003-06-04 | 2006-03-15 | 黄健 | Information input and inputting device by pure orientation method
US20050162402A1 (en)* | 2004-01-27 | 2005-07-28 | Watanachote Susornpol J. | Methods of interacting with a computer using a finger(s) touch sensing input device with visual feedback
US7777728B2 (en)* | 2006-03-17 | 2010-08-17 | Nokia Corporation | Mobile communication terminal
CN102023715B (en)* | 2009-09-10 | 2012-09-26 | 张苏渝 | Induction signal inputting method and apparatus
US8704789B2 (en)* | 2011-02-11 | 2014-04-22 | Sony Corporation | Information input apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1591298A (en)* | 1997-01-24 | 2005-03-09 | Tegic Communications Inc | Reduced keyboard disambiguating system
US20080167858A1 (en)* | 2007-01-05 | 2008-07-10 | Greg Christie | Method and system for providing word recommendations for text input
CN101641661A (en)* | 2007-01-05 | 2010-02-03 | Apple Inc | Method and system for providing word recommendations for text input
US20090237361A1 (en)* | 2008-03-18 | 2009-09-24 | Microsoft Corporation | Virtual keyboard based activation and dismissal
CN101685342A (en)* | 2008-09-26 | 2010-03-31 | Lenovo (Beijing) Ltd | Method and device for realizing dynamic virtual keyboard

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Silpol Sukswai et al.: "Cursorkeyboard: An Input Method for Small Touch Screen Devices", 2012 Ninth International Conference on Computer Science and Software Engineering.*

Also Published As

Publication Number | Publication Date
CN103729132B (en) | 2017-09-29
CN107479725B (en) | 2021-07-16
CN103729132A | 2014-04-16

Similar Documents

Publication | Title
US9703462B2 (en) | Display-independent recognition of graphical user interface control
CN104462437B (en) | The method and system of search are identified based on the multiple touch control operation of terminal interface
US20110201387A1 | Real-time typing assistance
US20080235621A1 | Method and Device for Touchless Media Searching
US20110047514A1 | Recording display-independent computerized guidance
CN105868385B (en) | Method and system for searching based on terminal interface touch operation
EP2575009A2 | User interface method for a portable terminal
US9405558B2 | Display-independent computerized guidance
CN112540740B (en) | Split-screen display method, device, electronic device, and readable storage medium
CN104808903B (en) | Text selection method and device
CN104679278A (en) | Character input method and device
CN104778195A (en) | Terminal and touch operation-based searching method
WO2015043352A1 (en) | Method and apparatus for selecting test nodes on webpages
US20150199171A1 | Handwritten document processing apparatus and method
CN107479725B (en) | Character input method and device, virtual keyboard, electronic equipment and storage medium
CN106845190B (en) | Display control system and method
CN104503679B (en) | Searching method and searching device based on terminal interface touch operation
EP3776161B1 (en) | Method and electronic device for configuring touch screen keyboard
US20160292140A1 | Associative input method and terminal
CN104516632B (en) | Determine and touch the method and device that character scans for
US20150022460A1 | Input character capture on touch surface using cholesteric display
CN103793053B (en) | Gesture projection method and device for mobile terminals
CN104423614B (en) | A kind of keyboard layout method, device and electronic equipment
CN106919558B (en) | Translation method and translation device based on natural conversation mode for mobile equipment
CN119576461A | Interface display method, device, electronic device and storage medium

Legal Events

Code | Event
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
