RELATED APPLICATIONS
Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign application Ser. 1071/CHE/2007 entitled “DATA PROCESSING SYSTEM AND METHOD” by Hewlett-Packard Development Company, L.P., filed on 22 May, 2007, which is herein incorporated in its entirety by reference for all purposes.
BACKGROUND TO THE INVENTION
Input devices such as graphics tablets can be used to create an image by hand, where the image contains handwritten text and/or figures. A user uses a pen to draw or write on the input device, and the input device records the drawing and/or writing and forms an image therefrom. The user may write or draw onto, for example, a sheet of paper positioned on top of the input device, or may write or draw directly onto the input device. The image is provided to an input device application running on a data processing system. The input device application is associated with the input device and collects images from the input device. A user can then manually provide the images to another application, or extract content from the images and provide the content to another application.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 shows an example of a known system 100 for organising data;
FIG. 2 shows an example of a system for organising data according to embodiments of the invention;
FIG. 3 shows an example of an image provided by an input device;
FIG. 4 shows another example of an image provided by an input device;
FIG. 5 shows another example of an image provided by an input device; and
FIG. 6 shows another example of an image provided by an input device.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Embodiments of the invention recognise content within an image, and select a destination for the image or the content. The image is received from an input device such as, for example, a graphics tablet and may comprise, for example, a vector and/or raster representation. The content may comprise, for example, text, gestures, shapes, symbols, colours and/or any other content, data and/or information within the image, and/or any combination of different types of content. Additionally or alternatively, the recognised content may comprise metadata associated with the image that may be found within or separately from the image. For example, the input device may include controls that may be used to specify information about the image, and the information may be metadata associated with the image.
The destination for the image (and/or any of the content therein, which may comprise the recognised content and/or any other content) may be, for example, one or more applications, or may alternatively be some other destination for the image and/or content.
A gesture may comprise a shape, symbol or some other content within the image. For example, the gesture may comprise a shape that is created using one or more strokes by the user. The number of strokes made to create a gesture may be provided with the image or separately from it, for example in a vector image or in metadata associated with the image.
The system 100 of FIG. 1 comprises a graphics tablet 102 and a data processing system 104. A user (not shown) writes or draws on the graphics tablet using a pen or other tool (not shown). The tool may be specifically designed for use with the graphics tablet 102, or the tool may be a general purpose tool. The graphics tablet 102 may have one or more sheets of paper (not shown) placed on top of it for the user to write or draw on, or a user may write or draw directly onto the graphics tablet.
The graphics tablet 102 collects strokes (that is, lines, curves, shapes, handwriting and the like) made by the user. The graphics tablet 102 may provide information relating to the strokes to an input device application running on the data processing system 104 in real-time, as the strokes are made. Alternatively, the graphics tablet 102 may store the strokes without providing information to the data processing system 104. In this case, the graphics tablet 102 may include memory for storing the information, and can be used when not connected to the data processing system 104. The graphics tablet 102 can later be connected to the data processing system 104 to provide (download) all of the stored information to the data processing system 104. Multiple pages may be provided to the data processing system 104.
The information provided to the input device application running on the data processing system 104 may be, for example, one or more raster images (for example, bitmaps) of the page or pages written or drawn on by the user. Alternatively, the information may be in the form of vector graphics that describe the image or images. For example, the vector graphics may be a list of the strokes made by the user which can be assembled into an image of the page or pages.
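Purely as an illustrative sketch, and not as part of the original disclosure, the vector form of such information might be represented as a list of strokes, each stroke being the sequence of pen coordinates captured while the pen was down; the class and method names below are hypothetical.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical minimal representation of the vector information described above:
# each stroke is the ordered list of (x, y) points sampled while the pen was down,
# and a page is simply the ordered list of strokes made by the user.

@dataclass
class Stroke:
    points: List[Tuple[float, float]]  # pen coordinates in tablet units

@dataclass
class Page:
    strokes: List[Stroke]

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Return (min_x, min_y, max_x, max_y) over all strokes, e.g. for rasterising the page."""
        xs = [x for stroke in self.strokes for x, _ in stroke.points]
        ys = [y for stroke in self.strokes for _, y in stroke.points]
        return min(xs), min(ys), max(xs), max(ys)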
Once the information is provided to the input device application running on the data processing system 104 in the form of vector or raster images, the image or images are typically displayed on a display device 106 of the data processing system 104 by the input device application. The user then uses the input device application to manipulate the images, for example, to edit the images, copy and/or delete the images, and/or organise the images by providing the images and/or content therein to another application, such as, for example, a word processing application, email application or graphics application. The input device application may include means for extracting some content from the images such as, for example, optical character recognition (OCR) software. The user may use, for example, a user input device 108 (such as a mouse and/or keyboard) and/or the display device 106 to manipulate the images.
FIG. 2 shows a system 200 for organising data according to embodiments of the invention. The system 200 includes a graphics tablet 202, data processing system 204, display device 206 and/or user input device 208 (such as a mouse and/or keyboard) similar to those shown in FIG. 1. The system 200 also includes an organising application for organising data or information provided to the data processing system 204 by the graphics tablet 202.
When an image is provided to the data processing system 204 by the graphics tablet 202, the image is provided to the organising application 210. The organising application then recognises some content within the image, selects an application based on the recognised content, and sends the image and/or content in the image to the selected application. Therefore, the user does not need to interact manually with the organising application except when there is ambiguity or there are errors in the recognition, thus requiring less skill from the user and accelerating the organising process when compared to known systems.
The organising application selects an appropriate destination application for the image based on recognised content. The organising application first extracts at least some of the content from the image using, for example, optical character recognition (OCR) or handwriting recognition. The organising application 210 examines the content to attempt to recognise some of the content. If the organising application recognises some of the content, it uses the recognised content to select a destination application for the image and/or the content.
FIG. 3 shows an example of an image 300 that is sent by the graphics tablet 202 to the organising application 210. The organising application extracts and recognises the content 302, which reads “To: xyz@hp.com”, as being a destination email address for the content within the image 300. The content may be extracted from the image using, for example, optical character recognition (OCR) or handwriting recognition. Some or all of the content may be extracted from the image before the organising application 210 examines the content to attempt to recognise some of the content.
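A minimal sketch of this selection step is given below, assuming that text has already been extracted from the image (for example by OCR or handwriting recognition); the pattern and the function name are hypothetical and are not taken from the original disclosure.

import re
from typing import Dict, Optional

# Hypothetical sketch: look for a destination marker such as "To: xyz@hp.com"
# in the extracted text and, if one is found, route the image/content to an
# email application addressed to that recipient.

EMAIL_DESTINATION = re.compile(r"To:\s*([\w.+-]+@[\w-]+\.[\w.-]+)", re.IGNORECASE)

def select_destination(extracted_text: str) -> Optional[Dict[str, str]]:
    """Return a routing decision, or None if no destination was recognised."""
    match = EMAIL_DESTINATION.search(extracted_text)
    if match:
        return {"application": "email", "address": match.group(1)}
    return None  # fall back to asking the user, as when recognition is ambiguous

# Example corresponding to FIG. 3:
print(select_destination("To: xyz@hp.com\nMinutes of the meeting ..."))
# {'application': 'email', 'address': 'xyz@hp.com'}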
The organising application 210 may recognise various types of content and select an appropriate application accordingly. For example, where a postal address is recognised (a postal address is typically written in the top right-hand corner of a page and conforms to a format, and is thus recognisable), the content of the image may be sent to a word processing application, as the content is a letter that may, for example, be posted or emailed as an attachment.
FIG. 4 shows another example of an image 400 sent to the organising application 210. The image includes a first area 402 of handwritten text that is selected or highlighted using a gesture 404 shaped as a left brace “{”. Metadata 406 is written alongside the gesture 404. The gesture 404 and the metadata 406 are located to the left of a margin 408 within a region 410. The margin 408 may, in certain embodiments, be imaginary and may not be visible within the image 400. In certain embodiments, the margin is present on paper placed over the graphics tablet 202.
The organising application 210 recognises the gesture 404 within the region 410 using, for example, known gesture recognition technology. Examples of gesture recognition technology that may be used may include one or more of the following, which are incorporated herein by reference for all purposes: “Handwritten Gesture Recognition for Gesture Keyboard”, R. Balaji, V. Deepu, Sriganesh Madhvanath and Jayasree Prabhakaran, Proceedings of the 10th International Workshop on Frontiers in Handwriting Recognition (IWFHR-10), La Baule, France, October 2006; “Scribble Matching”, Hull R., Reynolds D. and Gupta D., Proceedings of the 4th International Workshop on Frontiers in Handwriting Recognition, 1994, pp. 285-295; “Automatic signature verification and writer identification—the state of the art”, Plamondon R., Lorette G., Pattern Recognition, Vol. 22, No. 2, pp. 107-131, 1989; “Character segmentation techniques for handwritten text—a survey”, Dunn C. E., Wang P. S. P., Proceedings of the 11th IAPR International Conference on Pattern Recognition, Pattern Recognition Methodology and Systems, Vol. II, 1992, pp. 577-580.
The organising application 210 recognises the gesture 404 and determines that the area of text 402 is selected. For example, the area of text selected is the area substantially between the upper and lower bounds of the gesture 404 and across the full width of the page to the right of the margin 408. The organising application also extracts the metadata 406 content which reads “Word file:idea.doc”. The organising application determines that the text should be sent to a word processing application due to the presence of “word”, and also that the filename for the text 402 should be “idea.doc”. The organising application may send some or all of the image 400 to the word processing application, or may additionally or alternatively extract the text content from the area of handwritten text 402 and send the extracted content to the word processing application. In embodiments of the invention, the user may interact with the organising application to define keywords such as, for example, “word” and/or other keywords, and to define the application associated with the keywords.
Similarly, the organising application 210 recognises a second gesture 412 within the region 410, and recognises that an area of handwritten text 414 is selected. The organising application 210 extracts the metadata 416 content adjacent to the gesture 412, and recognises that it comprises “Mail to:xyz@hp.com”. The organising application 210 recognises the presence of “Mail” and selects a mail (for example, email) application. The organising application also sends some or all of the image 400 to the mail application, or may additionally or alternatively extract the text content from the area of handwritten text 414 and send the extracted content to the mail application.
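The following sketch illustrates, under the assumption of a small user-definable keyword table, how metadata such as “Word file:idea.doc” or “Mail to:xyz@hp.com” written beside a gesture might be interpreted; the table contents and the function name are hypothetical rather than taken from the original disclosure.

from typing import Dict, Optional

# Hypothetical keyword table; as noted above, the keywords and the applications
# associated with them may be defined by the user.
KEYWORD_TO_APP: Dict[str, str] = {
    "word": "word_processor",
    "mail": "email",
}

def interpret_metadata(metadata: str) -> Dict[str, Optional[str]]:
    """Map metadata written beside a gesture to a destination application and argument."""
    lowered = metadata.lower()
    for keyword, application in KEYWORD_TO_APP.items():
        if keyword in lowered:
            # Anything after the first ':' is treated as the filename or address.
            _, _, argument = metadata.partition(":")
            return {"application": application, "argument": argument.strip()}
    return {"application": None, "argument": metadata}

# Examples corresponding to FIG. 4:
print(interpret_metadata("Word file:idea.doc"))   # word processor, filename 'idea.doc'
print(interpret_metadata("Mail to:xyz@hp.com"))   # email application, address 'xyz@hp.com'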
In certain embodiments, the gestures may be anywhere within the image 400 and may not necessarily be located within the region 410. In certain embodiments, there is no such region 410.
FIG. 5 shows another example of an image 500. The image 500 includes a region 502 to the left of the margin 503 (which may or may not be visible) where gestures may be present. The image 500 also includes a first gesture 504 within the region 502, which selects an area of text 506 and a figure 508, and a second gesture 510 that selects an area of text 512.
The image 500 also includes four icons 514, 516, 518 and 520. The icons may or may not be present within the image 500. The icons may, however, be present on, for example, a sheet of paper placed above the graphics tablet 202. The icons may act as substitutes for writing keywords or commands.
Each of the gestures 504 and 510 is associated with a curve that runs from the gesture to one of the icons 514, 516, 518 and 520. For example, the gesture 504 is associated with a curve 522 that runs from approximately the mid-point 524 of the gesture 504 to the icon 514. Similarly, the gesture 510 is associated with a curve 526 that runs from approximately the mid-point 528 of the gesture 510 to the icon 518. The curves may or may not touch the associated gesture or icon. Therefore, each gesture can be associated with an icon. The organising application 210 recognises which icon is associated with a gesture and uses this recognition to select an application. For example, the organising application 210 sends the area of the image 500 or the associated content, being the area of handwritten text 506 and the figure 508, to an application associated with the icon 514, as this icon is associated with the gesture 504 that selected the area of the image or the content. For example, the icon 514 may be associated with a word processing application, and therefore the organising application 210 sends the area of the image or the extracted content to the word processing application.
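One way to make this association concrete, sketched below using hypothetical names and coordinates, is to reduce each connecting curve to its two endpoints and to pair the gesture whose mid-point lies nearest one endpoint with the icon lying nearest the other endpoint; nearest-neighbour matching tolerates curves that do not quite touch either the gesture or the icon.

import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def _nearest(point: Point, candidates: Dict[str, Point]) -> str:
    """Return the name of the candidate position closest to the given point."""
    return min(candidates, key=lambda name: math.dist(point, candidates[name]))

def associate(curve_endpoints: Tuple[Point, Point],
              gesture_midpoints: Dict[str, Point],
              icon_positions: Dict[str, Point]) -> Tuple[str, str]:
    """Pair a connecting curve with the nearest gesture mid-point and the nearest icon."""
    start, end = curve_endpoints
    return _nearest(start, gesture_midpoints), _nearest(end, icon_positions)

# Example loosely following FIG. 5: curve 522 runs from gesture 504 to icon 514.
gestures = {"gesture_504": (40.0, 120.0), "gesture_510": (40.0, 300.0)}
icons = {"icon_514": (500.0, 40.0), "icon_518": (500.0, 160.0)}
print(associate(((42.0, 118.0), (495.0, 45.0)), gestures, icons))
# ('gesture_504', 'icon_514')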
Similarly, if, for example, the icon 518 is associated with an email application, at least the area of handwritten text 512 or content extracted therefrom is sent by the organising application 210 to the email application.
FIG. 6 shows a further example of an image 600. The image 600 includes an area of handwritten text 602 and a destination email address 604 written as “To: text@hp.com”. The address 604 is surrounded by a gesture 606 shaped as a speech bubble. In embodiments of the invention, the organising application 210 does not consider a particular region for containing gestures, and does not recognise terms and phrases within content within the image. Instead, gestures are used to select portions of the content, and the gesture indicates the nature of the content. For example, the organising application recognises that the gesture 606 surrounds text and that the shape of the gesture 606 suggests that the text is a destination address. The organising application 210 therefore knows the type of data selected by the gesture 606. The organising application 210 determines that the destination is an email address, and thus provides the image and/or the content therein to the email application.
The image600 includes asignature610 that is surrounded by anothergesture612. Thegesture612 is also shaped like a speech bubble, although it is upside-down compared to thegesture606. The shape (and orientation) of the gesture may be used by the organisingapplication210 to determine that thegesture612 surrounds asignature610. Therefore, for example, thesignature610 may not be converted into text and may remain as, for example, an image. Additionally or alternatively, the presence of a signature may be used by the organising application to determine, for example, that an electronic communication such as an email that contains content from within the image600 should be digitally signed.
Thus, in embodiments of the invention as described above, the user does not need to interact with an application to organise data provided using a graphics tablet or other input device unless there are ambiguities or errors in the recognition. Indeed, it is possible for the data provided to be completely processed by a data processing system (for example, processing an email means sending the email to the destination email address without the user ever needing to interact with the data processing system). For example, the user could compose and send emails merely by writing on the graphics tablet or other input device.
Embodiments of the invention are described above with reference to a graphics tablet being the input device. However, other input devices may be used instead, such as, for example, motion sensing tools, touch-sensitive displays and other input devices. Furthermore, alternative embodiments of the invention may integrate certain elements. For example, a PDA may combine an input device (such as a graphics tablet or touch-sensitive display) with a data processing system.
In embodiments of the invention which recognise only text within the content of an image, optical character recognition (OCR), handwriting recognition and/or other text recognition may be sufficient to select an application for the image and/or content. Thus, methods for recognising symbols, drawings, gestures and/or other non-text content may be omitted when recognising content within the image.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, devices or integrated circuits, or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim, and a machine-readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection, and embodiments suitably encompass the same.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.