HK1057117A1 - Computer device - Google Patents

Computer device

Info

Publication number
HK1057117A1
Authority
HK
Hong Kong
Prior art keywords: document, display, page, tool, computer device
Prior art date
Application number
HK03109408A
Other languages
Chinese (zh)
Other versions
HK1057117B (en)
Inventor
Majid Anwar (马希德‧安瓦尔)
Original Assignee
Samsung Electronics Co., Ltd. (三星电子株式会社)
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=26244100&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=HK1057117(A1) ("Global patent litigation dataset" by Darts-ip, licensed under a Creative Commons Attribution 4.0 International License)
Priority claimed from GBGB0009129.8A: GB0009129D0 (en)
Priority claimed from US09/703,502: US7055095B1 (en)
Application filed by Samsung Electronics Co., Ltd. (三星电子株式会社)
Publication of HK1057117A1 (en)
Publication of HK1057117B (en)

Abstract

A computer device having a system for simulating tactile control over a document includes a velocity detector for determining a velocity vector associated with pointer movement across a display screen. In response to a command for dragging a document, the velocity vector is employed in redrawing the document in a series of pictures that portray the document as moving on the display screen, such that a user may drag a document and then release the pointer from it, whereupon either the page continues to move in the direction established by the velocity detector until the user indicates that the document is to stop moving, or the velocity decreases under a page inertia until it reaches zero. A method for providing interface tools for manipulating a document on a display provides a tool that allows the user to adjust the prominence of selected portions of a displayed content document file in a displayed screen document, the selected portions comprising links or highlighted text.

Description

Computer device
The systems and methods described herein relate to earlier-filed UK patent application 0009129.8 and earlier-filed US patent application 09/703,502, and to a US patent application entitled "Method and System for Processing Digital Documents" filed on even date herewith, all of which are by Majid Anwar and the contents of which are incorporated herein by reference.
Technical Field
The systems and methods described herein relate to systems and methods for viewing and processing the display of digital documents, and more particularly to user interface systems and methods that enable a user to manipulate and view digital documents presented on a display, such as the display of a handheld electronic device (e.g., a computer, mobile communication device, or telephone), or on a display device associated with a tactile commander.
Background
Today, efforts are being made to build mobile and handheld computing devices that enable users to view documents, emails, video presentations, and other forms of content. To accomplish this, engineers and scientists have developed systems, including the system described in the above-referenced U.S. patent application entitled "Method and System for Processing Digital Documents," the contents of which are incorporated by reference. As described therein, digital content, whether a document, an audio-visual presentation, or another type of content, is processed by a software system operating on a handheld device, mobile device, or other platform, and converted into a unified internal representation that the software system can process and operate on, so that the system can generate displays of the different types of content for presentation on the screen displays of the respective devices.
These systems, as well as other handheld and mobile computing systems (e.g., the Palm Pilot, Compaq iPAQ, and mobile phones), are therefore capable of providing a display of content to a user. However, these handheld and mobile systems are generally limited to simple input devices, such as the small and limited keyboards typically found on cellular telephones, or small touch-screen systems such as those configured on Palm computing devices. Thus, while these systems are capable of presenting content that may be quite complex, they have limited capabilities for allowing the user to manipulate the display of that content, such as flipping through the different pages of a document or selecting different portions of a document. Accordingly, although these handheld and portable systems may be useful, their use is limited in part by the user interfaces available for operating on and viewing the content presented on these devices.
Accordingly, there is a need in the art for systems and methods that provide improved user interface tools that enable easier manipulation and viewing of content presented by a handheld or portable device.
Additionally, there is a need in the art for user interface tools that allow content to be manipulated when the content is separate from its native applications.
Summary of the Invention
The systems and methods described herein provide advanced user interface tools that enable a user to more easily manipulate and view content presented on a mobile or handheld device. In a particular embodiment, the systems and methods described herein provide a graphical user interface that presents a touch-and-feel user interface experience. More specifically, the systems and methods described herein include a handheld or mobile computer device having a system for simulating tactile control over a document, which may be viewed on the device itself or, through remote instruction or remote display, on another unit. These systems may include a housing supporting a processor, a memory, and a touch-sensitive display (or a display with remote touch-sensitive control), and system code stored in the memory and adapted to be executed by the processor. The system code may generate or provide a digital representation of a document, where the digital representation may contain data content and a page structure representing a layout of the pages of the document. Thus, in some applications, the rendered image can contain the content of the document as well as the layout of the document, thereby providing an image of how the document actually appears. The system may also include a rendering engine, which may include a parser and a renderer for rendering at least a portion of the page layout of the digital representation on the touch-sensitive display. A screen monitor can monitor the touch-sensitive screen to detect movement across its surface, and an interface process can process the detected movement to identify movement representing a command to change the page structure of the digital representation. A navigation module can respond to the interface process and change the rendered portion of the page layout. Thus, by changing the rendered portion of the page layout, the system enables the user to navigate through the digital representation of the document. While the systems and methods of the present invention will have applicability and value in other applications and on other types of systems, for purposes of illustration the present invention will be described below with reference to applications in which such systems facilitate navigation of documents presented on a handheld computing device.
In particular, the systems and methods described herein provide, among other things, a computer device having a system for simulating tactile control over a document. In one embodiment, these systems include a processor, a memory, and a display; system code stored in the memory and adapted to be executed by the processor, the system code providing a digital representation of a document containing data content and a page structure representing a page layout of the document; a rendering engine for rendering at least a portion of the page layout of the digital representation on the display; a screen monitor for monitoring the screen to detect movement of an object across the image presented on the display; an interface process for processing the detected movement to detect motion indicative of a command to change the rendered page structure of the digital representation; and a navigation module, responsive to the interface process, for changing the rendered portion of the page layout, wherein changing the rendered portion of the page layout enables a user to navigate through the digital representation of the document.
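To make the interplay of these components concrete, the following minimal sketch models the monitor, interface, and navigation pipeline just described. It is a sketch only: the class names, the movement encoding, and the threshold logic are illustrative assumptions, not anything prescribed by the patent.

```python
# Illustrative sketch of the rendering/navigation pipeline described above.
# All names (ScreenMonitor, InterfaceProcess, NavigationModule) are invented.

class ScreenMonitor:
    """Collects raw pointer/touch movements from the display."""
    def __init__(self):
        self.events = []

    def detect_movement(self):
        # A real device would poll the touch controller here.
        return self.events.pop(0) if self.events else None


class InterfaceProcess:
    """Maps raw movements to commands that change the rendered page structure."""
    def interpret(self, movement):
        if movement is None:
            return None
        dx, dy = movement
        if abs(dx) > abs(dy):                      # assumed: horizontal drags flip pages
            return "next_page" if dx < 0 else "previous_page"
        return None


class NavigationModule:
    """Changes which portion of the page layout is rendered."""
    def __init__(self, page_count):
        self.page_count = page_count
        self.current = 0

    def apply(self, command):
        if command == "next_page":
            self.current = min(self.current + 1, self.page_count - 1)
        elif command == "previous_page":
            self.current = max(self.current - 1, 0)
        return self.current


monitor = ScreenMonitor()
interface = InterfaceProcess()
navigator = NavigationModule(page_count=10)

monitor.events.append((-12, 3))                    # a leftward drag
command = interface.interpret(monitor.detect_movement())
print(navigator.apply(command))                    # -> 1 (moved to the next page)
```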
These computer devices can include a touch-sensitive display, wherein the screen monitor monitors the touch-sensitive screen to detect movement across its surface, and can include a computer display capable of depicting movement of a cursor across the screen, wherein the screen monitor detects movement of the cursor across the surface of the display. The processor, memory, screen monitor, and display may be arranged as a data processing platform useful in a plurality of applications and devices, including handheld computers, telephones, mobile data terminals, set-top boxes, embedded processors, notebook computers, computer workstations, printers, copiers, and facsimile machines.
In some alternative embodiments, the computer device may further comprise a velocity detector for determining a velocity vector associated with the detected motion across the touch sensitive display surface, and means for applying a velocity characteristic to the document within a display.
In addition, these computer devices can have an interface process that makes it easier to navigate through a document or through a collection of documents and other content. These interface processes can include a page-flip detector for detecting motion across the surface of the touch screen at a location where a portion of the page layout illustrating a corner of a document is presented. The page-flip detector can render the portion of the page layout representing the page immediately adjacent to the currently rendered page. Similarly, the apparatus can include a page-curl detector for presenting a portion of the page layout representing part of a page immediately adjacent to the currently presented page. In addition, the interface process can include a gesturing process for detecting a predetermined movement indicative of a command for selecting a portion of the page layout to be rendered, or for changing the data content of the digital representation of the document. Still further interface controls include processes for controlling the transparency characteristics of a document presented on the display, and for controlling the transparency characteristic of a selected portion of the document to adjust its visibility relative to other portions of the document. Other interface processes can provide tools, including magnification tools, rulers, text input cursors, thumbnail navigation bars, thumbnail views of linked content, and query tools.
In other aspects, the invention provides computer devices, and associated processes, having context-sensitive graphical interface tools. These devices may include a processor, memory, and a touch-sensitive display; a content document file stored in the memory and representing an internal representation of the content; a tool document file stored in the memory and providing an internal representation of a document providing an image representing the graphical interface tool; tool code, capable of running on the processor, associated with the tool document file and capable of processing the content document file to create an internal representation of the content, wherein the internal representation, when rendered, presents the content in a manner that achieves a display effect associated with the tool; parsing code that processes the content document file, the tool document file, and the processed internal representation to produce a screen document for display; and interface code, executable on the processor, for enabling a user to arrange the image of the graphical interface tool into a selected contextual relationship with the presented content and for instructing the tool code to process the portion of the content document file associated with the selected location.
The contextual relationship between the graphical interface tool and the rendered content may vary depending on the application, and may be selected from, for example, the relative positions of the graphical interface tool and the rendered content, the time the graphical interface tool is acting on the rendered content, and the state of the rendered content. These devices are flexible and may be implemented in different forms and in different devices, including but not limited to: handheld computers, telephones, mobile data terminals, set-top boxes, embedded processors, notebook computers, computer workstations, printers, copiers, facsimile machines, and in-car systems and household appliances such as audio players, microwave ovens, refrigerators, and washing machines.
However, those skilled in the art will appreciate that these interface tools may be used in other applications, including applications in which content is displayed on a conventional computer workstation with typical input tools such as a standard keyboard and mouse. Additionally, it should be appreciated that the systems and methods described herein also provide useful tools for providing an interface to an embedded display system, such as an embedded visual display used as an output device. Examples of such embedded display systems include cellular phones and copiers that include a visual touch-screen display allowing a user to select among options for performing a copy job and that may also present the user with an image of the document being copied. Other examples include facsimile machines in which a visual display allows the user to view a depiction of an incoming facsimile. Other embodiments and applications of the user interface systems and methods described herein will be apparent to those of ordinary skill in the art.
In particular, the systems and methods described herein provide user interface tools that allow a user to manipulate content displayed on a screen. Specifically, the systems and methods described herein provide a software system that creates an abstraction layer for information presented on a display. This abstraction layer contains a document object containing the information or content to be displayed on the screen. In one implementation, all information displayed on a screen is treated as one document. Therefore, at the highest level, the entire content of a screen is understood as one document object. To expand on this point, it should be understood that a document object may contain other document objects, each of which may contain a subset of the content displayed to the user. Thus, at the screen level, all displayed information is understood as a single document, and items presented on the screen, such as web pages, streaming video, and graphical icons, are each understood as document objects contained within the high-level screen document object. Accordingly, all content displayed on a screen is treated abstractly as a single document, and this paradigm holds whether the content being displayed is information representing a page of text or information representing a user interface tool or windowing/desktop facilities. The user interface systems and methods described herein therefore provide user interface tools and functionality that enable a user to manipulate document objects presented on a screen display.
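The "everything on screen is a document" paradigm maps naturally onto a composite structure. The sketch below is a guess at how such an abstraction layer could be organized; the patent describes the concept rather than a concrete data structure, so the class and field names are assumptions.

```python
# Minimal sketch of the screen-as-document abstraction layer: the top-level
# screen is itself a document object whose children are sub-documents.

class DocumentObject:
    def __init__(self, kind, children=None):
        self.kind = kind                      # e.g. "screen", "web_page", "icon"
        self.children = children or []

    def walk(self, depth=0):
        """Yield every contained document object, depth-first."""
        yield depth, self.kind
        for child in self.children:
            yield from child.walk(depth + 1)


screen = DocumentObject("screen", [
    DocumentObject("web_page"),
    DocumentObject("streaming_video"),
    DocumentObject("tool", [DocumentObject("magnifier_lens")]),
])

for depth, kind in screen.walk():
    print("  " * depth + kind)
```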
Additionally, in one embodiment, the systems and methods described herein provide a handheld computing device that includes a housing that supports a processor, memory, and a touch-sensitive display. Further, the computing device may contain system code stored in the memory and adapted to be executed by the processor. The system code is capable of processing an input byte stream representing content to be displayed on the touch sensitive display and generating a content document file representing an internal representation of the content. A tool document file may also be stored in the memory and may provide an internal representation of a document that provides an image representing a graphical tool. Tool code that can process the content document file to create an internal representation of the content can be associated with a tool document, where the internal representation presents the content in a manner that achieves a display effect associated with the tool. The device may also include parsing code that processes the content document file, the tool document file, and the processed internal representation to produce a screen document for display on the touch sensitive display in a manner that depicts the display effect.
Brief description of the drawings
The above and other objects and advantages of the present invention will be more fully understood from the following further description thereof with reference to the accompanying drawings, in which:
FIG. 1 provides a functional block diagram of a system in accordance with the present invention;
FIG. 2 depicts an example of a tool produced by a system such as the system depicted in FIG. 1;
FIG. 3 depicts a graphical user interface tool presenting a plurality of thumbnails for navigating through a document having a plurality of pages;
FIG. 4 depicts a magnified graphical user interface tool that provides additional information within a magnified area in accordance with the present invention;
FIG. 5 depicts a translucent, adaptively sized ruler graphical tool;
FIG. 6 depicts a transparent query markup graphical user interface tool;
FIG. 7 depicts a user interface mechanism for activating and deactivating a graphical tool;
FIGS. 8a and 8b depict user interface tools for visually enhancing selected portions of a displayed document;
FIG. 9 illustrates additional user interface tools in accordance with the present invention;
FIGS. 10 and 11 depict a text entry tool in accordance with the present invention;
FIGS. 12a-12g depict a set of strokes used to provide commands to a handheld system; and
FIGS. 13a and 13b illustrate a user interface tool for scrolling through a document by applying a velocity characteristic to the document being displayed.
Detailed description of illustrative embodiments
The systems and methods described herein include systems and methods for manipulating and viewing documents displayed on a viewing surface, such as a computer terminal, display screen, printer, plotter, or any other output device suitable for creating a visual representation of human-readable information. For purposes of illustration, the systems and methods will be described with reference to certain exemplary embodiments, including a handheld computer system that includes a touch screen display and is capable of displaying an integrated view of content produced in different formats. In particular, the systems and methods described herein include a graphical user interface tool capable of presenting tools that can be presented as content to be integrated with other content displayed on the screen.
Fig. 1 depicts a system 10 according to the present invention. System 10 is shown as a functional block diagram of a computer device of the type that typically includes a processor, memory, and a display. However, the system 10 may also be implemented, in whole or in part, as a software system comprising system code executable on a processor to configure the processor as a system according to the present invention. The depicted system 10 includes a computer process 8, a plurality of source documents 11, a tool document file 30, a shape processor 22, and a video display 26. The computer process 8 includes a plurality of document agents 12, a generic data object library 16, an internal representation file 14, a storage buffer or file 15, and a parser/renderer engine 18.
In the described embodiment, the display 26 is capable of presenting images of a plurality of different documents. Each of the representative outputs presented on the display 26 is referred to as a document, and each of the described documents can be associated with a separate application, such as with Word, Netscape Navigator, Real Player, Adobe, Visio, and other types of applications. It should be understood that the term document as used herein is intended to encompass documents, streaming video, web pages, and any other form of data that can be processed and displayed by the computer process 8.
The computer process 8 generates a single output display containing one or more documents within the display. The displayed document set represents content generated by application programs, and this content is displayed within a program window generated by the computer process 8. The program window for the computer process 8 may also contain a set of icons representing tools provided with the graphical user interface, enabling the user to control the operation of the documents appearing in the program window on the display.
For the described embodiment, the display 26 presents content representing different data types in a single integrated display. This is in contrast to conventional approaches, which have each application form its own display, which results in the presentation on the display device 26 containing several program windows, typically one for each application. In addition, each different type of program window will contain a different set of user interface tools for manipulating the content displayed in that window. Thus, the system described in FIG. 1 creates an integrated display that contains visual images of different types of documents. This includes web pages that would normally be viewed in a browser, Word documents or Word processing documents that would normally be viewed in a viewer, PDF documents that would normally be viewed in a vector graphics viewer, and streaming video that would normally be viewed in a video player. Thus, the described system 10 separates the contents of these documents from the underlying application and presents them for display on the screen 26.
To enable the user to manipulate the document, the system 10 depicted in FIG. 1 provides a set of tools that can be used to navigate through an entire batch of documents, whether it be a multi-page text document, the web pages of a website, or a series of time-varying images that make up a video display. To this end, as will be described in more detail below, system 10 creates documents that represent tools and that can be displayed by system 10, just as system 10 displays any other type of document. The system 10 of the present invention therefore has the advantage of providing a consistent user interface, requiring knowledge of only one set of tools for displaying and controlling the different documents.
As discussed above, each source document 11 is associated with a document agent 12, which agent 12 is capable of converting an incoming document into an internal representation of the content of that source document 11. To identify the appropriate document agent 12 to process a given source document 11, the system 10 of FIG. 1 includes an application dispatcher (not shown) that controls the interface between the applications and the system 10. In one implementation, an external Application Programming Interface (API) communicates with the application dispatcher, which passes data, calls the appropriate document agent 12, or otherwise executes requests made by an application program. To select an appropriate document agent 12 for a particular source document 11, the application dispatcher advertises the source document 11 to all loaded document agents 12. These document agents 12 then respond with information about their particular suitability for converting the content of the advertised source document 11. Once the document agents 12 have responded, the application dispatcher selects a document agent 12 and passes it a pointer, such as a URI (uniform resource identifier), to the source document 11.
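A minimal sketch of this selection step follows. The suitability test here is a file-extension heuristic, which is an assumption made for the example; the patent does not specify the criteria, and all names are invented.

```python
# Hypothetical sketch of agent selection by the application dispatcher: each
# loaded document agent scores its suitability for the advertised source
# document, and the dispatcher hands a pointer (here, just the name) to the
# best-scoring agent.

class DocumentAgent:
    def __init__(self, name, extensions):
        self.name = name
        self.extensions = extensions

    def suitability(self, uri):
        # Respond with a score describing applicability for this document.
        return 1.0 if uri.rsplit(".", 1)[-1].lower() in self.extensions else 0.0

    def convert(self, uri):
        return f"internal representation of {uri} (via {self.name})"


def dispatch(uri, agents):
    best = max(agents, key=lambda agent: agent.suitability(uri))
    if best.suitability(uri) == 0.0:
        raise ValueError(f"no loaded agent can handle {uri}")
    return best.convert(uri)


agents = [
    DocumentAgent("word_agent", {"doc"}),
    DocumentAgent("pdf_agent", {"pdf"}),
    DocumentAgent("html_agent", {"html", "htm"}),
]
print(dispatch("report.pdf", agents))   # -> handled by pdf_agent
```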
As shown in FIG. 1, the document agent 12 uses a library 16 of standard object types to generate an internal representation 14 that describes the contents of the source document 11 in terms of a set of document objects defined in the library 16, together with parameters defining the characteristics of each particular instance of each document object within the document. The document object types used in this internal representation 14 will typically include: text; bitmap graphics; vector graphics (which may or may not be animated, and may be two- or three-dimensional); video; audio; and various types of interactive objects, such as buttons and icons. A vector graphics document object may be a path in a page description language such as PostScript, with a specified fill and transparency. A text document object may declare a region of stylized text.
Once the document is converted into an internal representation of document objects, these objects are passed to the parser/renderer 18. The parser/renderer 18 generates a context-specific representation, or "view," of the document represented by the internal representation 14. The required view may be of all the documents, of one complete document, or of parts of one or several of the documents. The parser/renderer 18 receives view control inputs that define the context of the view and any relevant temporal parameters of the particular document view to be generated. For example, the system 10 may be required to generate a magnified view of a portion of a document and then pan or scroll the magnified view to display adjacent portions of the document. The view control input is interpreted by the parser/renderer 18 to determine which parts of the internal representation are required for the particular view and how, when, and for how long the view will be displayed.
The context-specific representation/view is expressed in terms of graphics primitives and parameters. Optionally, there may be a feedback path 42 between the parser/renderer 18 and the internal representation 14, for example for the purpose of triggering an update of the content of the internal representation 14, such as where the source document 11 represented by the internal representation 14 contains a multi-frame animation that varies over time.
Each source document 11 provides a digital representation of a document, such as a text document, a spreadsheet, or some other document. The document agent 12 creates an internal representation of the document. In one implementation, the created digital representation includes information describing the page layout of the document, including information about page size, margins, and other page layout information. The digital representation also contains information about the content of the source document, such as text, drawings, and other content information that appears in the document. Processes for converting a known file structure into another structure are known in the art, including systems that identify page structure and content information. Any suitable technique for performing this operation may be implemented without departing from the scope of the invention.
The output from the parser/renderer 18 expresses the document in terms of graphics primitives. For each document object, the representation from the parser/renderer 18 defines the object at least in terms of a physical rectangular bounding box, the actual shape of the object bounded by the bounding box, the data content of the object, and its transparency. The shape processor 22 interprets the primitive objects and converts them into an output frame format appropriate to the target output device 26; for example, a dot map for a printer, a vector instruction set for a plotter, or a bitmap for a display device. An output control input 44 is connected to the shape processor 22 and can convey user interface control signals to generate an output suitable for the particular output device 26. Thus, the parser/renderer 18 and the shape processor 22 can act as an engine that renders portions of the page layout and page content on the display 26.
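As a rough illustration of this hand-off, the sketch below models a primitive object carrying the four attributes named above, together with a toy shape processor that rasterizes filled rectangles into a bitmap. The field names and the 0/1 "bitmap" are assumptions made for the example.

```python
# Sketch of the primitive-object representation handed from the parser/renderer
# to the shape processor; field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Primitive:
    bbox: tuple          # (x, y, width, height) rectangular bounding box
    shape: str           # actual shape bounded by the box, e.g. "rect", "path"
    content: object      # the object's data content (text, pixels, path data)
    alpha: float         # transparency, 0.0 (invisible) .. 1.0 (opaque)


def to_bitmap(primitives, width, height):
    """Trivial shape processor: rasterize opaque rects into a 0/1 bitmap."""
    frame = [[0] * width for _ in range(height)]
    for p in primitives:
        if p.shape != "rect" or p.alpha == 0.0:
            continue
        x, y, w, h = p.bbox
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                frame[row][col] = 1
    return frame


prims = [Primitive((1, 1, 3, 2), "rect", "ink", 1.0)]
for row in to_bitmap(prims, 6, 4):
    print("".join(str(cell) for cell in row))
```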
In addition, FIG. 1 depicts a tool document file 30. The tool document file 30 may be a computer data file that stores information representing an image, where the image may represent a tool, such as a magnifying glass, a cursor, a ruler, or any other type of tool. For purposes of illustration, the system 10 depicted in FIG. 1 will now be described with reference to an example in which the tool document file 30 includes data representing a graphical image of a magnifying glass. The magnifier image will be associated with a function that allows the user to magnify the image of a document presented on the display 26 by passing the magnifier over the corresponding image. As will be described in greater detail hereinafter, the magnifying glass may include a central lens portion in which the portion of a document that falls under the lens appears magnified to the user, and is thus presented in a magnified format relative to the remainder of the underlying document. While the following examples will be described primarily with reference to the magnifying glass tool, it will be apparent to those of ordinary skill in the art that other types of tools may be provided, all of which will be understood to be within the scope of the systems and methods described herein.
Turning to FIG. 2, the operation of the magnifying glass tool can be seen. Specifically, FIG. 2 depicts the display 26, wherein the display 26 presents a screen document 42 comprising a plurality of sub-units, including a document 44, a thumbnail document 46, a magnifying glass document 48, and a lens document 50. The display 26 presents the screen document 42 as a single integrated document containing the subdocuments 44 to 50. The content used to create the screen document 42 can come from one or more source documents 11, whose content is represented by the document 44 and the thumbnail document 46. The screen document 42 also contains content provided by the tool document file 30, which in this example contains data in the internal representation data format, the data representing an image of the magnifying glass 48. In addition, the tool document file 30 may contain a portal object that, by processing the appropriate portion of the screen document 42, creates a further document that presents the content in an enlarged format, appearing as the enlarged document 50 within the lens of the magnifying glass 48. Thus, the document appearing within the lens 50 is derived from the underlying document, and this derived document therefore changes depending on the context in which the magnifying glass tool 48 is used. The specific behavior of the tool can thus vary with the context of its use. For example, one magnifying glass tool may be associated with tool code that processes the content of a content document containing map data differently from a content document containing text. For a map, the magnifying glass tool may process the associated content document to present detail within the associated file structure that is marked for display only within the view created by a magnifying glass. Thus, the derived document presented within the magnifying glass tool 48 can contain additional information, such as street names, tourist sites, public transportation locations, annotations, or other information. In this operation, the magnifying glass tool reacts to the context of the application, which here is the rendering of a map view. In other applications, where a magnifying glass tool is used on text, the state of the tool can cause a change to the color or style of the text, or can cause the presentation of text editing tools and user interface controls, such as control buttons, drop-down menus, annotation information, text bubbles, or other types of information.
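The portal behavior just described amounts to deriving a new document from the underlying content, with the derivation depending on context. The toy sketch below, with an invented feature model, shows how map features marked for lens-only display could be filtered in and out.

```python
# Illustrative sketch of the context-sensitive lens: features marked
# "lens_only" (e.g. street-name labels) appear only in the derived document
# shown inside the magnifying glass. The data model is invented for the example.

def lens_view(features, under_lens):
    """Select which features of the underlying document are presented."""
    visible = []
    for f in features:
        if f["lens_only"] and not under_lens:
            continue                      # hidden in the normal rendering
        visible.append(f["name"])
    return visible


map_features = [
    {"name": "Main Road",               "lens_only": False},
    {"name": "Elm Street (name label)", "lens_only": True},
    {"name": "Bus stop",                "lens_only": True},
]

print(lens_view(map_features, under_lens=False))  # base map only
print(lens_view(map_features, under_lens=True))   # lens reveals extra detail
```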
Thus, the screen document 42 is an integration and aggregation of information contained within the source document 11 and the tool document file 30. An application associated with the tool document file 30 can process the appropriate content to create the enlarged view 50. The magnification tool 48 and associated source code can identify the portion of the screen document 42 that is to be rendered in a magnified format to create the magnified rendering 50. The tool code is further capable of processing the selected content to create the magnified view 50 and to clip the magnified view within the lens area of the magnifying glass 48 to achieve a display effect of a magnified area of the screen 26. Thus, the tool document and the source document 11 are in the same internal representation and can therefore be merged into a screen document 42, which screen document 42 can be rendered by the parser/renderer 18.
In one embodiment, the graphical tool 50 may be moved over the screen by dragging with a cursor, or by dragging a stylus or other pointer over the display screen if a touch sensitive screen is present. To handle this movement, the display 26 may include a screen monitoring process for monitoring the screen of the display 26 to detect movement of a cursor, stylus or other pointer over the document graphics presented on the screen. Such screen monitoring processes are known in the art, and any suitable process may be used. Thus, the monitoring process enables the user to perceive a touch control on the visual representation of the document 44. The movement detected by the screen monitor process may be passed to an interface process that processes the detected motion to detect motion indicative of a known command. The interface process may be a separate process or may be part of the screen monitoring process, as is common in the art. When the interface component detects a command to move the tool 50, a navigation module can create input signals instructing the parser/renderer 18 to create a new display for presentation to the user in which the tool 50 will be shown repositioned according to the user's needs.
Thus, the system described in FIG. 1 can provide graphical user interface tools that can be integrated into a single screen display representing a single document containing multiple subdocuments, some of which contain the graphical tools themselves. The power of this approach is that it facilitates the development of novel graphical user interface tools that enable a user to manipulate and view documents on a display and that simulate tactile control over the documents. These systems and methods are particularly well suited for use on handheld and mobile computing platforms, which lack traditional input tools. An additional graphical user interface tool that may be provided by the systems and methods described herein is the bubble thumbnail graphical tool depicted in FIG. 3. Specifically, FIG. 3 depicts the screen display 26 comprising a screen document 52, the screen document 52 comprising a plurality of subdocuments including the document 44 and the thumbnail documents 60 through 72. As shown in FIG. 3, the document 44 may be presented as a large document occupying most of the viewing area of the display 26. In this embodiment, the thumbnail documents 60 through 72 are arranged in a vertical column within the screen document 52, immediately to the left on the display 26. The thumbnail documents 60 through 72 vary in size, with the largest thumbnail document 60 centered within the vertical array of thumbnail documents.
As further shown in FIG. 3, as the distance of a document in the vertical array from the center document 60 increases, the size of the document decreases. The measure of distance from the center document may be the distance from the document 44 in pages, or may be some other measure of distance or difference, such as the time elapsed since the document was last viewed, an alphabetical difference, or some other characteristic. The documents 62 and 68 immediately adjacent to the center document 60 are slightly smaller than the document 60. Further, documents 64 and 70, which are immediately adjacent to documents 62 and 68, respectively, and farther from document 60, are smaller again than documents 62 and 68. The reduction in document size continues with documents 66 and 72, each of which is smaller still. The visual impression created by the set of thumbnail documents 60 through 72 indicates that the largest document, 60, represents the document 44 being displayed within the large viewing area of the screen document 52. Documents 62 through 72 become smaller in proportion to their "distance" from the currently viewed page 60. Thus, the vertical column of thumbnail documents 60 through 72 provides a navigation tool that a user can use to select a document to display within the large viewing area of the display 26. In addition, the user can select a document within the vertical array of thumbnails to choose a new document to display within the viewing area. For example, in those applications in which the screen display 26 is a touch-sensitive screen display, the user may cause a new document to appear in the viewing area by touching the corresponding thumbnail document in the array of documents 60 through 72. In those applications where the user is provided with a keyboard or a mouse, the user may use that input device to select which document in the array to display in the viewing area. In an alternative embodiment, the user can scroll through the thumbnails to find a document of interest. Alternatively, scrolling through the thumbnail documents can cause the document 44 to change as the thumbnails are scrolled. Alternatively, scrolling of the thumbnail documents can occur independently of any change to the document 44, with the document 44 changing only when a new thumbnail document is selected.
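The size falloff of the bubble thumbnails is straightforward to compute. The linear falloff below is only one possible choice, made up for illustration; the description requires only that sizes decrease monotonically with distance from the current page.

```python
# Minimal sketch of bubble-thumbnail sizing: thumbnail size falls off with
# distance from the currently viewed page. Constants are assumptions.

def thumbnail_sizes(page_numbers, current, base=100, falloff=20, minimum=20):
    return {p: max(base - falloff * abs(p - current), minimum)
            for p in page_numbers}

print(thumbnail_sizes(range(1, 8), current=4))
# {1: 40, 2: 60, 3: 80, 4: 100, 5: 80, 6: 60, 7: 40}
```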
The systems and processes described herein may use thumbnail images to orient the user during navigation, and for generating the thumbnail images they can include any suitable thumbnail generator process, including those known in the art, including processes that generate static or animated thumbnail images.
FIG. 4 depicts yet another embodiment of the systems and methods described herein, wherein the magnification tool shown earlier in FIG. 2 is associated with tool code that causes information not otherwise presented in the document to appear within the lens region of the magnifier object. More specifically, FIG. 4 depicts the display 26 presenting a screen document 42, which in this view is displayed as a map. FIG. 4 further depicts the magnifying glass tool 48 comprising the lens region 50. As shown in FIG. 4, the magnifying tool 48 is positioned over a portion of the map 42. As described above, the tool code associated with the magnifying glass 48 is capable of presenting a magnified view of the relevant portion of the screen document 42. As further shown in FIG. 4, the magnified portion 50 also contains additional information. For example, in the mapping application depicted in FIG. 4, the magnified rendering 50 may contain additional mapping information, such as secondary roads, locations of interest, or other information related to the content being magnified. Alternatively, the magnifying glass may be associated with tool code that changes the color of the information, or a portion of the information, within the viewing area 50, or that presents user interface information, such as control buttons, drop-down menus, annotation information, text bubbles, or other types of information. Thus, the particular state of the tool can vary depending on the context in which it is used. For example, as described above, the magnifying glass tool may be associated with tool code that processes the content of a content document containing map data differently from one containing text. The system described herein thus provides context-sensitive tools and processes.
FIGS. 5 and 6 depict further embodiments of graphical user interface tools that may be provided by the systems and methods described herein. In particular, FIG. 5 depicts the screen 26 containing a screen document 42, the screen document 42 containing two documents 80 and 81 and a ruler 82. The two documents 80 and 81 are intended to represent similar types of document, each being a text document printed on the same size of sheet, such as A4 paper. However, because the document 81 is rendered at a larger scale than the document 80, FIG. 5 depicts the two documents 80 and 81 as pages of text with one page larger than the other. Thus, documents 80 and 81 are similar documents that have been rendered with different scale factors. As depicted in FIG. 5, the ruler 82 may be a floating translucent ruler that displays the dimensions of each document and that can adapt to the scale of the underlying object. This is shown by the markings of the ruler 82, which increase in size as the length of the ruler passes from document 80 to document 81. FIG. 5 thus depicts that the graduations of the ruler 82 change in proportion to the scale of the underlying document. The ruler 82 therefore provides a context-sensitive user interface tool that can adjust its scale in response to the rendered scale of the content. Turning to FIG. 6, yet another user interface tool is depicted: a floating translucent query tag 84, which can display annotations for an underlying object. FIG. 6 depicts the display 26 containing a screen document 42, wherein the screen document 42 contains a document 80 and a floating translucent query tool 84. When the query tool 84 is activated, by dragging an image of the query tool onto the document or by selecting a query tool icon already located on the document, the query tool 84 presents text 88, which may contain annotation information associated with the underlying document 80.
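For the adaptive ruler, the essential computation is scaling the tick spacing by the rendering scale of whichever document lies beneath the ruler. A minimal sketch follows; the base resolution constant and function names are assumptions made for illustration.

```python
# Hypothetical sketch of the adaptive ruler: tick spacing in screen pixels
# follows the rendering scale of the document under the ruler, so the ruler
# reads in true document units (e.g. millimetres on an A4 page).

PIXELS_PER_MM_AT_FULL_SCALE = 4.0        # assumed base display resolution

def tick_spacing(document_scale, tick_interval_mm=10.0):
    """Screen distance between ruler ticks over a document at this scale."""
    return tick_interval_mm * PIXELS_PER_MM_AT_FULL_SCALE * document_scale

print(tick_spacing(0.5))   # 20.0 px per 10 mm over the smaller document 80
print(tick_spacing(1.0))   # 40.0 px per 10 mm over the larger document 81
```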
FIG. 7a depicts one method for presenting available user interface tools to a user. In particular, FIG. 7a depicts the screen 26 including a tool button 90. The tool button 90 provides a graphical representation of the magnifying tool 48. The user may activate the magnifying tool 48 by clicking the tool button with a mouse, keypad, or touch screen and dragging the image of the tool 48 from the tool button. When the user clicks on the tool button 90, the system processes the information from the tool document file to create the image of the document 48 shown in FIG. 7a. FIG. 7b depicts that, in one practical implementation, the user interface enables the user to push the magnifying tool 48 off the screen 26 (optionally in any direction). By pushing the tool off the screen 26, the user deletes the tool 48 and restores the icon, or tool button 90, to the screen.
FIGS. 8a and 8b depict yet another tool of the type that may be used when viewing documents that contain links or other types of pointers to other documents or other content. In particular, FIG. 8a depicts a tool in which a document 100 contains links 102 to other documents. For these documents, the systems and methods described herein may provide a slider control 104, or buttons, switches, or some other control. The slider control 104 may enhance the user interface view of the document 100, enabling the user to control the prominence of the links 102 within the document by sliding the control 104. Thus, the tool 104 enables a user to adjust the prominence of links within a document (such as document 100) so that the links can be more easily identified by the user. FIGS. 8a and 8b further illustrate that the document 100 may contain highlighted text, such as the highlighted text 108 shown. As with the links 102, the slider control 104 may allow the highlighted text 108 to maintain its transparency while the transparency of the rest of the document 100 changes as the user moves the slider control 104. In operation, the slider control 104 may enable a user to adjust the transparency, or alpha value, of the objects that make up the document 100, other than those that make up the links 102 or the highlighted text 108. However, other techniques for attenuating or enhancing portions of a document may be practiced.
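One way to realize this effect is to drive the alpha value of ordinary objects from the slider while exempting links and highlights. The object fields and the linear mapping below are assumptions made for illustration.

```python
# Sketch of the link-prominence slider: moving the slider fades ordinary
# objects while links and highlighted text keep full opacity, so they stand
# out against the rest of the page. The object model is invented.

def apply_prominence(objects, slider):
    """slider in [0.0, 1.0]; 0.0 leaves the page unchanged."""
    for obj in objects:
        if obj["is_link"] or obj["is_highlight"]:
            obj["alpha"] = 1.0                      # stays fully visible
        else:
            obj["alpha"] = round(1.0 - slider, 3)   # everything else fades

page = [
    {"text": "plain prose", "is_link": False, "is_highlight": False},
    {"text": "click here",  "is_link": True,  "is_highlight": False},
    {"text": "key passage", "is_link": False, "is_highlight": True},
]
apply_prominence(page, slider=0.7)
print([(o["text"], o["alpha"]) for o in page])
# [('plain prose', 0.3), ('click here', 1.0), ('key passage', 1.0)]
```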
Turning to FIG. 9, yet another graphical user interface tool is presented in which a document 100 contains links 102. In addition, link 102 may be associated with a floating thumbnail document 110. As shown in FIG. 9, the user may be presented with a control 112. When the control 112 is activated, those links 102 within the document 100 may be associated with a floating thumbnail representing a page view of the page associated with the respective link 102. Additionally, FIG. 9 shows that in an alternative practice, the display may further contain a set of read-ahead thumbnail documents 114. Thumbnail documents 114 may represent those documents associated with links 102 within document 100, or those documents associated with other pages of document 100 when document 100 is a multi-page document.
FIG. 10 depicts yet another example of a graphical user interface tool in accordance with the present invention. Specifically, FIG. 10 depicts a handheld computing device 120 having a set of characters 122 displayed on its display. As further shown in FIG. 10, a cursor window 124 is displayed over a character within the text display 122. In the illustrated embodiment, the cursor window 124 provides a soft, translucent text entry pad that floats over the current text position. The pad may move as the text position moves, and/or the pad may maintain its position while the text scrolls to the left to accommodate the text being entered beneath the pad 124. As described above, the text pad cursor 124 may be generated from a tool document file 30 processed by the system 10 of FIG. 1. The tool document file may contain an internal representation of the text pad 124 displayed on the device 120. In one embodiment, the handheld device 120 includes a touch-sensitive screen that enables a user to use a stylus to form characters that will be displayed on the screen within the text entry pad 124. The design and development of systems that allow such text input are well known in the art, and the systems and methods described herein may be used with any suitable system. In operation, a user may move a stylus across the screen of the device 120 to form the letters that will appear in the text entry pad 124. This operation is depicted in FIG. 11, which depicts a series of text entry steps 130 through 138.
In particular, FIG. 11 depicts a text entry step 130 in which the cursor pad 124 is displayed on the display of the device 120. A user may trace a letter in the space defined by the cursor 124, or in another area, and, optionally, the traced line may appear in the area defined by the text input cursor 124. The traced line entered by the user may be processed by a character recognition system of a type known in the art to associate the traced strokes with a character, such as the letter L in this example. Once character recognition has been completed, the recognized character L may be presented on the display and the cursor may be moved, or the text may be scrolled, as shown in step 132; in either case, the cursor 124 becomes available for the user to enter more text. The user may thus enter text until a word is formed, as shown in step 134. When a complete word has been entered, the user may move the cursor 124 to a position spaced apart from the written word and begin drawing the characters that will be displayed within the text entry cursor 124, as shown in step 138. Thus, the cursor 124 provides a tool that allows inter-line insertion of content into a document, such as inserting a piece of text into an existing line of text appearing within the document. In other applications, tools may be provided to edit images, such as by erasing content or changing colors, or to perform other application functions.
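The flow of steps 130 through 138 can be caricatured with a toy recognizer. A real system would use proper handwriting recognition; the direction-sequence matcher below is purely a stand-in, with templates invented for the example.

```python
# Toy illustration of the text-pad flow: strokes scribed inside the cursor
# window are handed to a "recognizer", and the recognized character is what
# appears in the pad. Direction templates are invented stand-ins.

def directions(points):
    """Reduce a traced line to its sequence of dominant directions."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if abs(x1 - x0) >= abs(y1 - y0):
            d = "right" if x1 > x0 else "left"
        else:
            d = "down" if y1 > y0 else "up"
        if not out or out[-1] != d:
            out.append(d)
    return tuple(out)

TEMPLATES = {("down", "right"): "L", ("right",): "-"}

def recognize(points):
    return TEMPLATES.get(directions(points), "?")

# An "L": the pen moves down, then right, inside the text-entry pad.
print(recognize([(0, 0), (0, 5), (0, 10), (4, 10), (8, 10)]))  # -> L
```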
As described above, for those systems that include a touch-sensitive display, the systems and methods described herein enable a user to scribe strokes on the display using a stylus, which the system interprets to provide character input. In addition, FIGS. 12a through 12g depict an exemplary set of command strokes that a user may enter by moving a stylus across the touch-sensitive screen. Each of the command strokes shown in FIGS. 12a through 12g may be associated with a user interface command that the user may use to manipulate and view documents. For example, FIG. 12a depicts a stroke in which the user forms a check mark, which the system can associate with a command confirming an action proposed by the system. Similarly, FIG. 12b depicts a stroke forming peaks and valleys on the display that may be associated with a command to delete content from the display. FIG. 12c depicts a clockwise circular stroke that may be associated with returning a document to a home page or restarting it, and FIG. 12d depicts an upward straight diagonal stroke that indicates a delete, clear, or "no" command. FIG. 12e depicts a box stroke proceeding in a counter-clockwise direction that indicates a paragraph selection command, and FIGS. 12f and 12g depict strokes indicating a request by the user to move to the next or, respectively, the previous document. It will be understood by those of ordinary skill in the art that, because the systems and methods described herein work with different types of documents, such as Word documents, web pages, streaming media, and other types of content, the meaning of the different strokes may vary with the application. For example, for a document representing a web page, the clockwise circular stroke of FIG. 12c may indicate a request to return to the home page associated with the web page document. Alternatively, when viewing streaming media content, the clockwise circular stroke of FIG. 12c may indicate a restart request that causes the streaming video to stop and restart from the beginning. Thus, it will be apparent to one of ordinary skill in the art that the stroke commands depicted in FIGS. 12a through 12g may have different meanings depending on the application.
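Because the same stroke can mean different things for different content types, a dispatch step naturally keys on both the recognized stroke and the document type. A hedged sketch follows; the stroke names and the command table are invented for illustration.

```python
# Sketch of context-sensitive stroke dispatch: the clockwise circle means
# "home page" for web content but "restart" for streaming video, matching
# the examples in the text. All identifiers here are assumptions.

STROKE_COMMANDS = {
    ("check_mark", "any"):            "confirm",
    ("zigzag", "any"):                "delete_content",
    ("circle_clockwise", "web"):      "go_to_home_page",
    ("circle_clockwise", "video"):    "restart_stream",
    ("box_counterclockwise", "text"): "select_paragraph",
}

def dispatch_stroke(stroke, doc_type):
    return (STROKE_COMMANDS.get((stroke, doc_type))
            or STROKE_COMMANDS.get((stroke, "any"))
            or "unrecognized")

print(dispatch_stroke("circle_clockwise", "web"))    # go_to_home_page
print(dispatch_stroke("circle_clockwise", "video"))  # restart_stream
print(dispatch_stroke("check_mark", "web"))          # confirm
```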
FIGS. 13a and 13b depict one such command stroke that a user may use to click and drag a document, producing page movement of the document within the viewing area. In the described embodiment, a velocity detector process takes position readings periodically (e.g., every centisecond) during a document drag operation. From these position readings, a page velocity determination may be made. The page velocity determination may be used to enable the user interface to present a more natural way of moving a document through the viewing area. To this end, a process may use the velocity determination to instruct the parser/renderer 18 to redraw the document in a series of pictures that depict the document as it moves across the screen. For example, a user may drag a document at a certain speed and then release the stylus, mouse, or other input device from the document. Optionally, the document may stop moving when released. In an alternative practice, however, the page may continue to move in the established direction until the user indicates that the document is to stop moving, such as by clicking on the document. For a multi-page document, the velocity magnitude can be used to fan different pages of the document across the screen view, with the fanning rate determined by the page velocity set by the user when dragging a page of the document across the screen. Optionally, a constant page-inertia value may be subtracted from the velocity until it reaches zero and page scrolling ceases; further velocity detection may be used to add to (accumulate) the page velocity, and thus the movement against the page inertia, during page panning, allowing smooth, continuous movement of the page between rapid successive drag operations.
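This velocity-and-inertia behavior can be sketched as two small routines: one deriving a velocity vector from periodic position samples, and one coasting the page after release, subtracting a constant inertia term each frame until the speed reaches zero. The sampling interval and inertia constant below are illustrative assumptions.

```python
# Sketch of drag velocity detection and page-inertia coasting, as described
# above. Numeric constants are invented for the example.

def drag_velocity(samples, dt=0.01):
    """Velocity from the last two periodic position readings (pixels/second)."""
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def coast(position, velocity, inertia=400.0, dt=0.01):
    """Advance the page after release, decaying speed by a page-inertia value."""
    vx, vy = velocity
    frames = [position]
    while (vx, vy) != (0.0, 0.0):
        speed = (vx ** 2 + vy ** 2) ** 0.5
        new_speed = max(speed - inertia * dt, 0.0)
        scale = new_speed / speed if speed else 0.0
        vx, vy = vx * scale, vy * scale
        position = (position[0] + vx * dt, position[1] + vy * dt)
        frames.append(position)
    return frames

v = drag_velocity([(0, 0), (3, 0)])   # 300 px/s rightward drag
print(len(coast((0.0, 0.0), v)))      # number of redraw frames until rest
```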
Alternatively, other user interface processes may be provided to enhance the user's experience of tactile control over the document. For example, the user interface may include a page-flip detector for detecting motion on the display 26 at the location associated with the upper right corner of the document 44 in FIG. 2. If the page-flip detector, or the screen monitor, detects a brushing motion across the surface of the document 44, the page-flip detector can instruct the parser/renderer 18 to "flip" the page, causing the next page, chapter, scene, or other segment to be displayed. Movement can be detected in either direction for page flipping backward and forward, and the page-flip detector can be context sensitive, producing a new display appropriate to the application and content type. Optionally, the interface process may include a page-curl detector that can operate similarly to the page-flip detector, except that a movement in the upper right corner of the document 44 can cause the page-curl detector to instruct the parser/renderer 18 to redraw the screen 42 or document 44 so that the corner of the document 44 is curled down and part of the underlying page is presented. Both the page-flip and page-curl detectors may be computer processes capable of generating instructions that cause the parser/renderer 18 to achieve the desired effect. In addition, a page-magnification detector (triggered, for example, by a double click on the page area) can track a subsequent up/down movement to zoom the view in or out. This function may advantageously be combined with a velocity detector to provide an inertial magnification feature.
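A corner-sensitive flip detector needs only the stroke's start point and direction. In the sketch below, the corner region is an assumed fraction of the page and the function names are invented; it is a sketch of the idea, not the patented implementation.

```python
# Illustrative page-flip detection: a brushing motion whose start point falls
# inside the page's top-right corner region triggers a flip whose direction
# follows the stroke. The corner fraction is an assumption.

def detect_page_flip(start, end, page_w, page_h, corner=0.15):
    x, y = start
    in_top_right = x >= page_w * (1 - corner) and y <= page_h * corner
    if not in_top_right:
        return None
    dx = end[0] - start[0]
    return "flip_forward" if dx < 0 else "flip_back"

print(detect_page_flip((190, 10), (120, 30), page_w=200, page_h=300))
# -> flip_forward (leftward brushing motion across the top-right corner)
```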
It will be apparent to those skilled in the art that although fig. 1 illustrates the user interface system 10 as functional block elements, these elements can be implemented as computer programs, or portions of computer programs, which are capable of running on a data processor platform to thereby configure the data processor as a system in accordance with the present invention. Furthermore, although FIG. 1 depicts system 10 as an integrated unit, it will be apparent to those of ordinary skill in the art that this is merely an example and that the present invention can be implemented as a computer program distributed across multiple platforms.
As discussed above, the user interface system described above can be implemented as a software component operating on a data processing system, including handheld computing platforms as well as more conventional computing platforms such as a UNIX workstation. In these embodiments, the user interface system can be implemented as a C-language computer program, or as a computer program written in any high-level language, including C++, Fortran, Java, or BASIC. Further, in embodiments where the platform is primarily a microprocessor, microcontroller, or DSP (digital signal processor), the user interface system can be implemented as a computer program written in microcode, or written in a high-level language and compiled down to microcode that can be executed on the platform in use. The development of such systems is well known to those of ordinary skill in the art, and such techniques are described in the literature, for example in Texas Instruments, Digital Signal Processing Applications with the TMS320 Family, Volumes I, II, and III (1990). In addition, general techniques for high-level programming are known and are described, for example, in Stephen G. Kochan, Programming in C, Hayden Press (1983). It should be noted that DSPs are particularly suited to implementing signal processing functions, including preprocessing functions such as image enhancement through adjustments of contrast, edge sharpness, and brightness. Code for DSP and microcontroller systems is developed according to principles well known in the art.
Additionally, it should be understood that although FIG. 1 illustrates the computer process 8 as comprising functional block elements, these elements can be implemented as computer programs, or portions of computer programs, that are capable of running on the data processing platform to thereby configure the data processing platform as a system in accordance with the present invention. Furthermore, while FIG. 1 depicts system 10 as an integrated unit of a process 8 and a display device 26, it will be apparent to those of ordinary skill in the art that this is merely an example, and that the system described herein can be implemented by other architectures and schemes, including system architectures that separate the document processing functions and user interface functions of process 8 from the document display operations performed by display 26.
Those of ordinary skill in the art will know of, or be able to ascertain using no more than routine experimentation, many equivalents to the embodiments and practices described herein. In addition, the inventive system and process has broad application and can be used in a range of devices including handheld computers, telephones, mobile data terminals, set-top boxes, embedded processors, notebook computers, computer workstations, printers, copiers, facsimile machines and other systems. In addition, those of ordinary skill in the art will appreciate that the systems described herein may be implemented with any suitable interface device, including touch sensitive screens and touch pads, mouse input devices, keyboards and keypads, joysticks, thumb-wheel devices, mice, trackballs, virtual reality input systems, sound control systems, eye movement control systems, and any other suitable device. It will therefore also be appreciated that the system described herein has many applications and advantages over the prior art, including providing a set of interface processes and systems that provide complex operations for different file types.
Therefore, it is to be understood that the invention is not limited to the embodiments disclosed herein, but should be understood in light of the following claims, which are to be interpreted as broadly as allowed under the law.

Claims (38)

HK03109408.4A | Priority 2000-04-14 | Filed 2001-04-17 | Computer device | HK1057117B (en)

Applications Claiming Priority (5)

Application Number | Priority Date | Filing Date | Title
GB0009129.8 | 2000-04-14
GBGB0009129.8A (GB0009129D0, en) | 2000-04-14 | 2000-04-14 | Digital document processing
US09/703,502 | 2000-10-31
US09/703,502 (US7055095B1, en) | 2000-04-14 | 2000-10-31 | Systems and methods for digital document processing
PCT/GB2001/001741 (WO2001079980A1, en) | 2000-04-14 | 2001-04-17 | User interfaces and methods for manipulating and viewing digital documents

Publications (2)

Publication Number | Publication Date
HK1057117A1 (en) | 2004-03-12
HK1057117B (en) | 2006-08-11

Family

ID=26244100

Also Published As

Publication Number | Publication Date
EP2012220A3 (en) | 2009-09-02
KR20030005277A (en) | 2003-01-17
KR100727195B1 (en) | 2007-06-13
CN1848081A (en) | 2006-10-18
HK1057278A1 (en) | 2004-03-19
KR100743797B1 (en) | 2007-07-30
JP2012094176A (en) | 2012-05-17
US20100185975A1 (en) | 2010-07-22
EP2284680A2 (en) | 2011-02-16
US20100192062A1 (en) | 2010-07-29
KR20070007213A (en) | 2007-01-12
AU5645901A (en) | 2001-10-30
EP2012220A2 (en) | 2009-01-07
KR20030044907A (en) | 2003-06-09
US7450114B2 (en) | 2008-11-11
KR20070005028A (en) | 2007-01-09
EP2284679B1 (en) | 2019-11-27
US20010032221A1 (en) | 2001-10-18
KR20030001415A (en) | 2003-01-06
AU2001250497A1 (en) | 2001-10-30
HK1093795A1 (en) | 2007-03-09
US20100185948A1 (en) | 2010-07-22
JP2012059275A (en) | 2012-03-22
US8358290B2 (en) | 2013-01-22
EP2284680A3 (en) | 2013-03-06
KR20030026927A (en) | 2003-04-03
CN1251056C (en) | 2006-04-12
KR100707579B1 (en) | 2007-04-13
KR100707645B1 (en) | 2007-04-13
KR100721634B1 (en) | 2007-05-23
CN1848081B (en) | 2010-05-26
CN1430766A (en) | 2003-07-16
WO2001080178A3 (en) | 2002-01-31
EP1272975B1 (en) | 2005-03-16
KR100707651B1 (en) | 2007-04-13
US8593436B2 (en) | 2013-11-26
EP2284682B1 (en) | 2020-06-03
EP1272975A2 (en) | 2003-01-08
KR100748802B1 (en) | 2007-08-13
EP2284681A2 (en) | 2011-02-16
KR20030039328A (en) | 2003-05-17
EP2284682A2 (en) | 2011-02-16
JP2003531428A (en) | 2003-10-21
EP2284679A3 (en) | 2013-04-10
JP4964386B2 (en) | 2012-06-27
HK1100456A1 (en) | 2007-09-21
US20020011990A1 (en) | 2002-01-31
EP2284682A3 (en) | 2013-04-03
KR20070035105A (en) | 2007-03-29
US7009626B2 (en) | 2006-03-07
EP2284681A3 (en) | 2013-04-03
WO2001080178A2 (en) | 2001-10-25
CN1426551A (en) | 2003-06-25
EP2284679A2 (en) | 2011-02-16
CN1253831C (en) | 2006-04-26
KR100799019B1 (en) | 2008-01-28
JP5265837B2 (en) | 2013-08-14
JP2012022695A (en) | 2012-02-02
JP2012033190A (en) | 2012-02-16
ATE291261T1 (en) | 2005-04-15
EP1272920A1 (en) | 2003-01-08
DE60109434T2 (en) | 2006-02-02
WO2001079980A1 (en) | 2001-10-25
JP2003531445A (en) | 2003-10-21
KR20020087974A (en) | 2002-11-23
US20090063960A1 (en) | 2009-03-05
KR100743781B1 (en) | 2007-07-30
DE60109434D1 (en) | 2005-04-21
JP5306429B2 (en) | 2013-10-02
ES2240451T3 (en) | 2005-10-16
JP5787775B2 (en) | 2015-09-30

Similar Documents

Publication | Title
US8593436B2 (en) | User interface systems and methods for manipulating and viewing digital documents
US9778836B2 (en) | User interface systems and methods for manipulating and viewing digital documents
JP4637455B2 (en) | User interface utilization method and product including computer usable media
US20090315841A1 (en) | Touchpad Module which is Capable of Interpreting Multi-Object Gestures and Operating Method thereof
CN1957320A (en) | Navigation method, electronic device, user interface and computer program product
HK1057117B (en) | Computer device
HK1093795B (en) | User interfaces and methods for manipulating and viewing digital documents
JP2008076667A (en) | Image display device, image display method, and program
HK1100456B (en) | Digital document processing system, data processing system and peripheral device
HK1056636B (en) | Digital document processing system, data processing system and peripheral device
HK1056636A1 (en) | Digital document processing system, data processing system and peripheral device

Legal Events

Date | Code | Title | Description
2021-04-16 | PE | Patent expired | Effective date: 2021-04-16

