Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic structural diagram of a text control for text editing according to an embodiment of the present invention. The text control can be used as a functional plug-in in a manuscript editing tool to realize character editing. As shown in fig. 1, the text control includes: a text input component 11, a text processing component 12 and a text rendering component 13;
the text input component 11 is used as an interactive interface for character editing and receives input information and an operation instruction generated by external triggering;
the text processing component 12 is used for editing and forming the characters to be presented according to the input information received by the text input component, and is also used for analyzing the received operation instruction to determine its form to be presented;
the text rendering component 13 is configured to render the characters to be presented based on a given rendering attribute and present the rendered characters in real time; the text rendering component is also used for responding to each operation instruction and rendering the display in the corresponding form to be presented in real time.
It should be noted that the application context of the text control provided in the first embodiment may be understood as follows: the existing text control for performing text editing is generally a monolithic control, so once its design is completed, the types of function rendering items available during text editing are fixed accordingly, and function rendering items cannot be flexibly added in subsequent use.
Specifically, the text control provided by the present embodiment mainly includes three hierarchical regions, namely a text input component 11, a text processing component 12 and a text rendering component 13. It can be seen from fig. 1 that the three components are in a layer-by-layer containment relationship. The text input component 11 is the outermost layer of the text control provided in this embodiment and is equivalent to a container; the text processing component 12 and the text rendering component 13 are encapsulated inside the text input component 11. The text input component 11 is thus equivalent to a unified entry for the outside, and specifically serves as an interactive interface for text editing to receive input information generated by an external trigger (for example, it can respond to keyboard triggers to acquire characters input via the keyboard, where the keyboard may be a soft keyboard or a hardware keyboard) and operation instructions (for example, it can receive an operation instruction triggered by the user, which is generally generated by a touch, a click, or a drag performed with a mouse).
Meanwhile, the text processing component 12 is equivalent to an intermediate layer of the text control provided in this embodiment: it is encapsulated inside the text input component 11 and in turn encapsulates the text rendering component 13 in its interior. Specifically, this component can obtain, in real time, the input information or operation instruction received by the text input component 11, and is responsible for processing the obtained information. For example, it forms the corresponding characters to be presented according to the input information (at this point the characters are only formed and cannot yet be directly presented on the screen), and it determines the form to be presented corresponding to an operation instruction. Since the functions actually realized by different operation instructions differ, the determined form to be presented differs correspondingly: when an operation instruction generated by a mouse click is processed, the display position to be presented at the click can be determined; when an operation instruction generated by a mouse drag is processed, the region to be presented with the selected effect during the drag can be determined; and the like.
In addition, the text rendering component 13 is equivalent to the innermost layer of the text control provided in this embodiment. It should be noted that the function rendering items required in text editing are actually and specifically encapsulated in the text rendering component, and each function rendering item is mainly responsible for performing the corresponding rendering processing on the edited text and presenting the corresponding rendering effect in real time. For example, the function rendering item responsible for text presentation may render and present the formed characters to be presented in real time based on a given rendering attribute (such as font color, font size, font format, and the like); the function rendering item responsible for cursor presentation may render and present a cursor at the determined display position to be presented in response to the operation instruction corresponding to a mouse click; and the function rendering item responsible for presentation of the selected effect may render and present the determined selected-effect region to be presented in response to the operation instruction corresponding to a mouse drag.
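The layer-by-layer containment described above can be sketched as follows. This is an illustrative sketch only: all class and method names (TextInputComponent, on_keyboard_input, and so on) are assumptions for the sake of the example, and the embodiment does not prescribe a concrete API.

```python
# Illustrative sketch of the three-layer containment relationship.
# Class and method names are hypothetical.

class TextRenderingComponent:
    """Innermost layer: renders text and effects in real time."""
    def render_text(self, text, attributes):
        # Stand-in for actual drawing; returns a description of the call.
        return f"render({text!r}, {attributes})"

class TextProcessingComponent:
    """Middle layer: turns raw input into characters to be presented."""
    def __init__(self, renderer):
        self.renderer = renderer  # encapsulates the rendering component

    def process_input(self, raw_input):
        text_to_present = raw_input.strip()  # stand-in for real editing logic
        return self.renderer.render_text(text_to_present, {"font": "default"})

class TextInputComponent:
    """Outermost layer: the unified entry that receives external input."""
    def __init__(self):
        self.processor = TextProcessingComponent(TextRenderingComponent())

    def on_keyboard_input(self, chars):
        return self.processor.process_input(chars)

control = TextInputComponent()
print(control.on_keyboard_input("  hello  "))  # render('hello', {'font': 'default'})
```

The point of the sketch is the containment: external input enters only through the outermost component, which delegates inward, mirroring the unified-entry design described above.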
Further, the text rendering component 13 includes a set of rendering layers arranged in parallel for implementing rendering with different functions; the rendering layer set comprises a text mask rendering layer and at least one of the following: a text rendering layer, a cursor rendering layer and a selected effect rendering layer.
In this embodiment, the specific implementation of the various function rendering items that may be added in the text rendering component 13 is equivalent to encapsulating, in the text rendering component 13, the function rendering layers corresponding to those function rendering items, and the function rendering layers may be considered parallel presentation layers. One purpose of the text control provided in this embodiment is to realize the presentation of the text mask effect. Realizing the text mask in the text rendering component 13 is therefore equivalent to adding a function rendering item for the text mask; that is, the text rendering component 13 actually includes a text mask rendering layer for realizing the text mask effect. In addition, the text rendering component 13 of this embodiment may further include rendering layers for implementing other functions, such as a text rendering layer, a cursor rendering layer and a selected effect rendering layer.
In order to ensure that the text mask effect is always presented in the uppermost layer of the screen, this embodiment may preferably set the display priority of the text mask rendering layer to be the highest.
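One way to realize parallel rendering layers with the mask layer always on top is a priority-ordered layer list, where the highest-priority layer is drawn last and therefore appears topmost. The class names and priority values below are assumptions for illustration, not part of the embodiment.

```python
# Hypothetical sketch: rendering layers kept in a priority-ordered list.
# The text mask layer is given the highest display priority so it is
# always drawn last (i.e. presented in the uppermost layer).

class RenderLayer:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

class TextRenderingComponent:
    def __init__(self):
        self.layers = []

    def add_layer(self, layer):
        # Layers can be added dynamically; draw order follows priority.
        self.layers.append(layer)
        self.layers.sort(key=lambda l: l.priority)

    def draw_order(self):
        return [l.name for l in self.layers]

renderer = TextRenderingComponent()
renderer.add_layer(RenderLayer("text", 0))
renderer.add_layer(RenderLayer("mask", 100))   # highest priority: topmost
renderer.add_layer(RenderLayer("cursor", 1))
renderer.add_layer(RenderLayer("selection", 2))
print(renderer.draw_order())  # ['text', 'cursor', 'selection', 'mask']
```

Because layers are re-sorted on every addition, new function rendering items can be registered at any time without disturbing the mask layer's topmost position, matching the dynamic-extension goal described above.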
Further, the text control further comprises:
the text management component is used for carrying out attribute management on the edited characters to be presented so as to form rendering attributes including the display form and the display position of the characters to be presented;
and the rendering data component is used for packaging the rendered characters to be presented together with the current rendering data to form new rendering data, and the new rendering data serves as the processing basis of the text processing component.
It should be noted that the text control provided in this embodiment includes a text management component and a rendering data component in addition to the interactive components, namely the text input component 11, the text processing component 12 and the text rendering component 13. Specifically, when text editing is performed based on the text control provided in this embodiment, before the characters to be presented are rendered and presented by the text rendering component 13, attribute management needs to be performed on the edited characters to be presented; that is, the display form and the display position of the characters to be presented, as well as the editing paragraphs to which they belong, need to be managed and determined. These are equivalent to the rendering attributes of the characters to be presented and can be formed by the text management component. Based on the text management component's management of the characters to be presented, the text rendering component 13 can correctly present the characters to be presented on the screen at the correct editing position and in the selected display form.
It can be understood that, when the text management component is designed, an appropriate data structure can be chosen for it so that text attributes are managed according to that data structure. For example, paragraph insertion, deletion and search operations are frequently performed during text editing, so this embodiment can preferably adopt a red-black tree as the data structure for paragraph management. Because the worst-case time complexity of this data structure is O(log n), it is extremely efficient, which ensures fluency when editing and rendering large amounts of text and improves the user experience of text editing.
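The paragraph-lookup pattern above can be sketched as follows. Python's standard library has no red-black tree, so a sorted list with `bisect` stands in here: lookup is O(log n) as in the red-black tree the embodiment prefers, though list insertion itself is O(n). The `ParagraphIndex` class and its methods are illustrative assumptions.

```python
import bisect

# Sketch of paragraph lookup by character offset. A sorted list with
# bisect stands in for the red-black tree preferred by the embodiment
# (search is O(log n); a true red-black tree would also give O(log n)
# insertion and deletion).

class ParagraphIndex:
    def __init__(self):
        self.starts = []   # sorted start offsets of paragraphs
        self.texts = {}    # start offset -> paragraph text

    def insert(self, start, text):
        bisect.insort(self.starts, start)
        self.texts[start] = text

    def find(self, offset):
        """Return the paragraph containing the given character offset."""
        i = bisect.bisect_right(self.starts, offset) - 1
        return self.texts[self.starts[i]]

idx = ParagraphIndex()
idx.insert(0, "first paragraph")
idx.insert(20, "second paragraph")
print(idx.find(25))  # second paragraph
```

In a production implementation the ordered structure would also support deletion and rebalancing, which is exactly where the red-black tree's worst-case guarantees matter for editing fluency.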
In addition, after the text rendering component 13 renders the characters to be presented or renders an operation instruction in its form to be presented, this embodiment further designs the rendering data component to package the newly rendered text data together with the rendering data formed again, so as to form new rendering data. The formed rendering data can be fed back to the text processing component 12 as the data processing basis for text processing; on this basis, the text processing component 12 can again process the input information or operation instruction newly received by the text input component, for example to obtain new characters to be presented, a new cursor, a new selected effect, and the like.
Compared with existing text controls, the text control provided by the embodiment of the present invention realizes a layered design of the components required for text editing. The layered design ensures that each component in the text control can be controlled independently; that is, the function rendering items required in text editing can be dynamically added without affecting text editing, so that the flexibility of text editing is enhanced and the user experience of text editing is improved.
Example two
Fig. 2 is a flowchart of a method for implementing mask layer rendering according to a second embodiment of the present invention. The method is suitable for performing a mask layer rendering operation on characters in a text edit box, and may be executed by an apparatus for implementing mask layer rendering, where the apparatus may be implemented by software and/or hardware and is generally integrated on an intelligent device with a manuscript editing tool.
In this embodiment, the intelligent device may specifically be an intelligent mobile device with a document editing function, such as a mobile phone, a tablet computer, and a notebook, or may also be a fixed electronic device with a document editing function, such as a desktop computer.
It should be noted that in this embodiment the implementation of mask layer rendering still depends on the text control, but the method is described here at the interactive layer of text editing, where the text control is specifically presented in the form of a text edit box. Specifically, the application context of this embodiment may be understood as follows: in a new document page of the manuscript editing tool, a text edit box may be presented after text editing is triggered, so that text editing, the cursor position, the selected effect and the like can be presented in the text edit box. This embodiment corresponds to mask rendering implemented in a text edit box (text control) having a text mask rendering function.
As shown in fig. 2, the method for implementing mask layer rendering according to the second embodiment of the present invention specifically includes the following operations:
S201, after the user triggers entry into the mask layer drawing interface, monitoring the target characters selected by cursor movement in the current edit box.
Specifically, the current edit box may be regarded as the presentation form, on the screen, of the text control currently performing text editing, where the text control is the one provided in the first embodiment. In this embodiment, the text mask rendering function in the text control is embodied at the interactive layer in the form of a function button or a shortcut key, so that a user can trigger an operation instruction for forming the mask rendering.
In this step, the user can enter the mask layer drawing interface by triggering the mask layer rendering function button, after which the user's cursor operations in the current edit box can be monitored. Described from the bottom layer, this can be regarded as the text input component in the text control receiving the cursor movement operation, and the text processing component determining the form to be presented of the cursor movement operation (equivalent to the position actually selected by the cursor movement). In this implementation, the characters selected by cursor movement are recorded as target characters, which can also be understood as the characters to be subjected to mask layer rendering.
S202, determining the region information of each target character relative to the current edit box.
It can be understood that, since text editing in the current edit box is implemented depending on the text control provided in the above embodiment, when the target characters are determined by monitoring the cursor movement, the coordinate information and region information of the positions occupied by the target characters relative to the text control are correspondingly determined as well.
Specifically, determining the region information of each target character relative to the current edit box includes:
acquiring relative coordinate information of each target character relative to the current edit box; determining the region area of each target character in the current edit box according to the length and width values occupied by each target character in the current edit box; and respectively determining the relative coordinate information and the region area as the region information of the corresponding target character relative to the current edit box.
In this embodiment, the region information of each target character selected by cursor movement in the current edit box may be represented by a preset data structure. Exemplarily, the parameter information included in the preset data structure may be: x, y, width and height, wherein x represents the relative abscissa of a single selected character relative to the upper left corner of the current edit box; y represents the relative ordinate of a single selected character relative to the upper left corner of the current edit box; width represents the width value of the selected area corresponding to the single selected character; and height represents the length value (also equivalent to the height value) of the selected area corresponding to the single selected character. Thus, the region information corresponding to each target character obtained in this step can be considered to actually be represented in the above data structure format.
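The per-character region record described above can be sketched as a small data structure. The field names follow the x / y / width / height parameters mentioned in the text; the class name `CharRegion` and the derived `area` property are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of the preset data structure for a single selected character:
# coordinates relative to the upper left corner of the current edit box,
# plus the width and height of the character's selected area.

@dataclass
class CharRegion:
    x: float       # relative abscissa w.r.t. the edit box's upper left corner
    y: float       # relative ordinate w.r.t. the edit box's upper left corner
    width: float   # width of the selected area for this character
    height: float  # height (length) of the selected area

    @property
    def area(self):
        # Region area derived from the occupied length and width values.
        return self.width * self.height

r = CharRegion(x=12, y=4, width=9, height=16)
print(r.area)  # 144
```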
And S203, performing mask layer rendering on each target character based on the region information of each target character.
Described from the bottom layer, the mask layer rendering performed in this step is actually completed by the text rendering component of the text control; specifically, the operation of this step is mainly realized based on the text mask rendering layer in the text rendering component. The process by which the text mask rendering layer realizes the mask rendering can be described as follows: the text mask rendering layer acquires the region information of each target character, analyzes the coordinate information and region area included in the acquired region information, determines the rendering region of each target character, and draws the selected color in the rendering region.
Further, the performing mask layer rendering on each target character based on the region information of each target character includes: determining an actual rendering region corresponding to each target character according to the relative coordinate information and region area of each target character; and performing color covering on each actual rendering region based on the selected mask rendering color, and displaying the result on the current screen in real time.
In this embodiment, the actual rendering region may be understood as the rendering region actually corresponding to the target character in the text mask rendering layer. The mask rendering color in this embodiment may be a default white color, or may be a manually selected color. It can be seen that the covering color of the mask layer is displayed above the other rendering effects of the characters, thereby realizing the overall covering of the various information of the characters.
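The operation of S203 can be sketched as follows: each target character's region is turned into an actual rendering rectangle and covered with the mask color. The `Screen` class is a stand-in for a real drawing surface, and `render_mask` is an illustrative name; neither is prescribed by the embodiment.

```python
# Hypothetical sketch of S203: color covering of each actual rendering
# region with the selected mask color (white by default, per the text).

DEFAULT_MASK_COLOR = "white"  # default; may also be manually selected

class Screen:
    """Stand-in for a real drawing surface; records fill operations."""
    def __init__(self):
        self.fills = []

    def fill_rect(self, x, y, w, h, color):
        self.fills.append((x, y, w, h, color))

def render_mask(screen, regions, color=DEFAULT_MASK_COLOR):
    for (x, y, w, h) in regions:
        # The actual rendering region coincides with the character's
        # region relative to the current edit box.
        screen.fill_rect(x, y, w, h, color)

screen = Screen()
render_mask(screen, [(0, 0, 9, 16), (9, 0, 9, 16)])
print(screen.fills[0])  # (0, 0, 9, 16, 'white')
```

In a real implementation the fill would happen on the topmost rendering layer, so the mask color covers the text, cursor and selection effects beneath it.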
The present embodiment further optimizes and adds the operation of S204 on the basis of the above method steps, as shown in fig. 2, after S203, the method further includes:
and S204, after the mode of entering the manuscript demonstration mode is triggered, when the cursor is monitored to be positioned in a position area with the masking layer rendering on the screen, removing the masking layer rendering on the position area.
In this embodiment, it can be considered that the operations of S201 to S203 are specifically implemented in the text editing mode. According to the setting, after the manuscript demonstration mode is entered, the target characters at the positions where the mask layer is rendered are still presented covered by the mask color. In the manuscript demonstration mode, however, if the user clicks via the cursor on a position with the mask color, then based on the operation of this step, the user's triggering via the cursor of any character with the mask rendering effect can be monitored, and the position area of the specific target character triggered by the user can be determined according to the triggering position of the cursor on the screen. The color covering on the position area corresponding to that target character can thus be removed in this step, and the target character under the color covering can be presented in real time. Specifically, the manuscript demonstration mode is entered according to the user's triggering of a demonstration function button or a demonstration function shortcut key.
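The hit test implied by S204 can be sketched as follows: in demonstration mode, a cursor click inside a masked position area removes that area's mask rendering. The region tuples and function names are simplified assumptions for illustration.

```python
# Sketch of S204: removing the mask on a position area triggered by the
# cursor in demonstration mode. Regions are (x, y, width, height) tuples.

def hit(region, px, py):
    """Whether the cursor position (px, py) falls inside the region."""
    x, y, w, h = region
    return x <= px < x + w and y <= py < y + h

def on_presentation_click(masked_regions, px, py):
    """Remove every masked region that contains the click position."""
    return [r for r in masked_regions if not hit(r, px, py)]

regions = [(0, 0, 9, 16), (9, 0, 9, 16)]
remaining = on_presentation_click(regions, 10, 5)  # click falls in 2nd region
print(remaining)  # [(0, 0, 9, 16)]
```

A click outside every masked region leaves all masks in place, so only the character the user triggers is revealed, as described above.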
In the method for implementing mask layer rendering provided by the second embodiment of the present invention, after the mask layer drawing interface is entered, the target characters selected by cursor movement in the current edit box are monitored; then the region information of each target character relative to the current edit box is determined; and finally, mask layer rendering is performed on each target character according to its region information. With this technical scheme, mask layer rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, and the user experience of the manuscript editing tool in character editing and demonstration is better improved.
Example three
Fig. 3 is a block diagram of an apparatus for implementing mask layer rendering according to a third embodiment of the present invention. The apparatus is suitable for performing a mask layer rendering operation on characters in a text edit box, may be implemented by software and/or hardware, and is generally integrated on an intelligent device having a manuscript editing tool. As shown in fig. 3, the apparatus includes: an information monitoring module 31, an information determining module 32 and a mask layer rendering module 33.
The information monitoring module 31 is configured to monitor the target characters selected by cursor movement in the current edit box after entry into the mask layer drawing interface is triggered;
the information determining module 32 is configured to determine the region information of each target character relative to the current edit box, where the current edit box is the presentation form on the screen of a text control according to the first embodiment of the present invention;
and the mask layer rendering module 33 is configured to perform mask layer rendering on each target character based on the region information of each target character.
In this embodiment, after entry into the mask layer drawing interface is triggered, the apparatus first monitors, through the information monitoring module 31, the target characters selected by cursor movement in the current edit box; then determines the region information of each target character relative to the current edit box through the information determining module 32; and finally performs mask layer rendering on each target character through the mask layer rendering module 33 based on the region information of each target character.
The apparatus for implementing mask layer rendering provided by the third embodiment of the present invention can simply and effectively realize mask layer rendering of any character in the current edit box on the basis of the custom-built current edit box presented by the text control, and better improves the user experience of the manuscript editing tool in character editing and demonstration.
Further, the information determining module 32 is specifically configured to:
acquire relative coordinate information of each target character relative to the current edit box; determine the region area of each target character in the current edit box according to the length and width values occupied by each target character in the current edit box; and respectively determine the relative coordinate information and the region area as the region information of the corresponding target character relative to the current edit box.
Further, the mask layer rendering module 33 is specifically configured to:
determine an actual rendering region corresponding to each target character according to the relative coordinate information and region area of each target character; and perform color covering on each actual rendering region based on the selected mask rendering color, and display the result on the current screen in real time.
Further, the apparatus further comprises:
and the mask layer removing module 34 is configured to, after entry into the manuscript demonstration mode is triggered, remove the mask layer rendering on a position area when the cursor is monitored to be located in that position area with mask layer rendering on the screen.
Example four
Fig. 4 is a schematic diagram of a hardware structure of an intelligent device according to a fourth embodiment of the present invention. As shown in fig. 4, the intelligent device according to the fourth embodiment of the present invention includes: a processor 41 and a storage device 42. The number of processors in the smart device may be one or more; fig. 4 illustrates one processor 41. The processor 41 and the storage device 42 in the smart device may be connected by a bus or in other manners; fig. 4 illustrates connection by a bus.
The storage device 42 in the smart device serves as a computer-readable storage medium and may be configured to store one or more programs, where the programs may be software programs (such as the text control provided in the foregoing embodiments), computer-executable programs and modules, such as the program instructions/modules corresponding to the method for implementing mask layer rendering in the embodiment of the present invention (for example, the modules in the apparatus for implementing mask layer rendering shown in fig. 3, including the information monitoring module 31, the information determining module 32 and the mask layer rendering module 33). The processor 41 implements the editing of text by executing the text control provided in the above embodiments and stored in the storage device; meanwhile, the processor 41 executes various functional applications and data processing of the smart device by running the software programs, instructions and modules stored in the storage device 42, that is, implements the method for implementing mask layer rendering in the above method embodiments.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the device (such as the mask color and region information in the above-described embodiments). Further, the storage device 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage device 42 may further include memory located remotely from the processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
And, when the one or more programs included in the above-mentioned smart device are executed by the one or more processors 41, the programs perform the following operations:
after entry into the mask layer drawing interface is triggered, monitoring the target characters selected by the cursor in the current edit box, wherein the current edit box is the presentation form on the screen of the text control provided by the first embodiment of the present invention; determining the region information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on the region information of each target character.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. The computer program may be a text control; when the text control is executed by a processor, it may implement the editing and presenting of characters. The program may also be an application program of the method for implementing mask layer rendering provided in the second embodiment; when executed by a processor, the application program may implement that method, which includes: after entry into the mask layer drawing interface is triggered, monitoring the target characters selected by the cursor in the current edit box, wherein the current edit box is the presentation form on the screen of the text control provided by the first embodiment of the present invention; determining the region information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on the region information of each target character.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.