CN108279964B - Method and device for realizing covering layer rendering, intelligent equipment and storage medium - Google Patents

Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Info

Publication number
CN108279964B
CN108279964B (application CN201810053788.0A)
Authority
CN
China
Prior art keywords
rendering
text
layer
target character
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810053788.0A
Other languages
Chinese (zh)
Other versions
CN108279964A (en)
Inventor
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd
Priority to CN201810053788.0A
Publication of CN108279964A
Application granted
Publication of CN108279964B
Status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a text control for text editing, and a method and device for implementing mask layer rendering. The implementation method includes: after entry into the mask drawing interface is triggered, monitoring the target characters selected as the cursor moves in the current edit box; determining the area information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on its area information. With this method, on the basis of the custom-built current edit box presented by the text control, mask layer rendering of any characters in the current edit box can be realized simply and effectively, which better improves the user experience of the manuscript editing tool in text editing and presentation.

Description

Method and device for realizing covering layer rendering, intelligent equipment and storage medium
Technical Field
The invention relates to the technical field of computer application, in particular to a method and a device for realizing overlay rendering, intelligent equipment and a storage medium.
Background
A manuscript editing tool is office software that people use frequently at work and in study, such as Microsoft's presentation software (PowerPoint, PPT). A user can edit a manuscript with such a tool and present the edited content to others. A similar manuscript editing tool is also installed in the currently popular intelligent teaching whiteboards, so that a teacher can edit and present teaching content.
Generally, when a document is edited, the text part is edited by relying on a text control in the document editing tool; the text control is in effect a functional plug-in that performs text editing operations within the tool. In practice, the user edits text inside a text edit box backed by the text control, and the user often needs to apply a text mask to the edited text. However, the text control that forms the text edit box is usually designed as a single integral unit, so existing text-mask implementations act on all the characters in the whole edit box, and a mask cannot be applied to only some of the characters. If a user wants to cover only part of the text, the user has to add another element on top of the edit box to directly cover the characters, so as to simulate a mask over those characters.
When the characters to be covered are numerous and scattered, however, setting up masks in this way becomes quite cumbersome, which degrades the user experience of the document editing tool during text editing and presentation.
Disclosure of Invention
The embodiment of the invention provides a text control for character editing and a method and a device for realizing overlay rendering, which can simply and effectively realize overlay rendering of any edited character.
In a first aspect, an embodiment of the present invention provides a text control for text editing, including: a text input component, a text processing component and a text rendering component;
the text input component is used as an interactive interface for character editing and used for receiving input information and operation instructions generated by external triggering;
the text processing component is used for editing and forming characters to be presented according to the input information received by the text input component and analyzing and determining the form to be presented of the received operation instruction;
the text rendering component is used for rendering the characters to be presented based on given rendering attributes and presenting the characters in real time; and the rendering module is also used for responding to each operating instruction and rendering the display in a corresponding to-be-displayed form in real time.
In a second aspect, an embodiment of the present invention provides a method for implementing a mask rendering, including:
after entry into a mask drawing interface is triggered, monitoring the target characters selected as the cursor moves in a current edit box;
determining the area information of each target character relative to the current edit box, wherein the current edit box is the on-screen presentation form of the text control provided by the embodiment of the first aspect of the invention;
and performing layer rendering on each target character based on the region information of each target character.
In a third aspect, an embodiment of the present invention provides an apparatus for implementing a mask rendering, including:
the information monitoring module is used for monitoring the target character selected by the cursor in the current edit box after triggering to enter the masking layer drawing interface;
an information determining module, configured to determine the region information of each target character relative to a current edit box, where the current edit box is the on-screen presentation form of a text control according to the embodiment of the first aspect of the present invention;
and the mask layer rendering module is used for performing mask layer rendering on each target character based on the region information of each target character.
In a fourth aspect, an embodiment of the present invention provides an intelligent device, including:
one or more processors;
the storage device is used for storing the text control provided by the embodiment of the first aspect of the invention and is also used for storing one or more programs;
the text control is executed by the one or more processors such that the one or more processors implement text editing;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the method for implementing the overlay rendering according to the embodiment of the second aspect of the present invention.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements text editing and/or implements an implementation method of a masking layer rendering as provided in the second aspect of the present invention.
In the text control for text editing and the method and device for implementing mask layer rendering, the mask layer rendering is implemented as follows: after entry into a mask drawing interface is triggered, the target characters selected as the cursor moves in the current edit box are monitored; the area information of each target character relative to the current edit box is then determined; and finally, mask layer rendering is performed on each target character according to its area information. With this technical scheme, mask layer rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, which better improves the user experience of the manuscript editing tool in character editing and demonstration.
Drawings
Fig. 1 is a schematic structural diagram of a text control for text editing according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for implementing a mask rendering according to a second embodiment of the present invention;
fig. 3 is a block diagram of an implementation apparatus for overlay rendering according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an intelligent device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic structural diagram of a text control for text editing according to an embodiment of the present invention. The text control can be used as a functional plug-in in a manuscript editing tool to realize character editing. As shown in fig. 1, the text control includes: a text input component 11, a text processing component 12 and a text rendering component 13;
the text input component 11 is used as an interactive interface for character editing and receives input information and an operation instruction generated by external triggering;
the text processing component 12 is used for editing and forming characters to be presented according to the input information received by the text input component, and is also used for analyzing and determining the form to be presented of the received operation instruction;
a text rendering component 13, configured to render the text to be presented based on given rendering attributes and present the rendered text in real time, and also configured to respond to each operation instruction and render the corresponding form to be presented in real time.
It should be noted that the application context of the text control provided in the first embodiment may be understood as follows: an existing text control for text editing is generally an integral control, so once its design is finished, the kinds of function rendering items available during text editing are fixed accordingly, and new function rendering items cannot be flexibly added in subsequent use.
Specifically, the text control provided by this embodiment mainly comprises three hierarchical regions, namely the text input component 11, the text processing component 12 and the text rendering component 13. As can be seen from fig. 1, the three components are in a layer-by-layer containment relationship. The text input component 11 is the outermost layer of the text control and is equivalent to a container: the text processing component 12 and the text rendering component 13 are encapsulated inside it. The text input component 11 therefore serves as a unified entry point for the outside and is used as the interactive interface for text editing, receiving input information generated by external triggering (for example, by responding to keyboard triggers to acquire characters typed on a soft or hardware keyboard) and operation instructions (for example, operation instructions generated by the user through touch, or through clicks or drags performed with a mouse).
Meanwhile, the text processing component 12 is the intermediate layer of the text control provided in this embodiment: it is encapsulated inside the text input component 11 and itself encapsulates the text rendering component 13. Specifically, this component obtains, in real time, the input information or operation instructions received by the text input component 11 and is responsible for processing them. For example, it forms the corresponding characters to be presented from the input information (at this stage the characters exist only as data and cannot yet be presented directly on the screen), and it determines the form to be presented that corresponds to an operation instruction. The form to be presented differs according to the function the operation instruction is meant to realize: when an operation instruction generated by a mouse click is processed, the display position at which the cursor should be presented can be determined; when an operation instruction generated by a mouse drag is processed, the region in which the selected effect should be presented can be determined; and so on.
In addition, the text rendering component 13 is the innermost layer of the text control provided in this embodiment. It should be noted that the function rendering items required during text editing are actually encapsulated in this component, and each function rendering item is responsible for performing the corresponding rendering processing on the edited text and presenting the rendering effect in real time. For example, the function rendering item responsible for text presentation renders and presents the formed text to be presented based on given rendering attributes (such as font color, font size and font format); the function rendering item responsible for cursor presentation responds to the operation instruction corresponding to a mouse click by rendering and presenting a cursor at the determined display position; and the function rendering item responsible for the selected effect responds to the operation instruction corresponding to a mouse drag by rendering and presenting the selected effect in the determined region.
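To make the layer-by-layer containment concrete, the following is a minimal TypeScript sketch of the three components; the class and method names are illustrative assumptions and are not taken from the patent itself.

```typescript
// Minimal sketch of the three nested components (illustrative names, not the patent's API).
interface RenderAttributes {
  fontFamily: string;
  fontSize: number;
  color: string;
}

class TextRenderingComponent {
  // Innermost layer: renders text and responds to operation instructions.
  render(text: string, attrs: RenderAttributes): void {
    console.log(`render "${text}"`, attrs);
  }
  renderCursorAt(x: number, y: number): void {
    console.log(`render cursor at (${x}, ${y})`);
  }
}

class TextProcessingComponent {
  // Middle layer: turns raw input into text to be presented and interprets operation instructions.
  constructor(private renderer: TextRenderingComponent) {}
  handleInput(chars: string, attrs: RenderAttributes): void {
    this.renderer.render(chars, attrs);
  }
  handleClick(x: number, y: number): void {
    this.renderer.renderCursorAt(x, y);
  }
}

class TextInputComponent {
  // Outermost layer: the single external entry point receiving keyboard input and pointer events.
  private processor = new TextProcessingComponent(new TextRenderingComponent());
  onKeyboardInput(chars: string): void {
    this.processor.handleInput(chars, { fontFamily: "sans-serif", fontSize: 16, color: "#000" });
  }
  onPointerDown(x: number, y: number): void {
    this.processor.handleClick(x, y);
  }
}
```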
Further, the text rendering component 13 includes a rendering layer set arranged in parallel for implementing rendering with different functions; the rendering layer set comprises a character mask rendering layer and at least one of the following: a character rendering layer, a cursor rendering layer and a selected effect rendering layer.
In this embodiment, implementing the various function rendering items that may be added to the text rendering component 13 amounts to packaging, inside the text rendering component 13, a function rendering layer corresponding to each function rendering item; these function rendering layers can be regarded as parallel presentation layers. One purpose of the text control provided in this embodiment is to present the text mask effect, so realizing the text mask in the text rendering component 13 amounts to adding a function rendering item for the text mask; that is, the text rendering component 13 actually includes a text mask rendering layer for realizing the text mask effect. In addition, the text rendering component 13 of this embodiment may further include rendering layers for other functions, such as a character rendering layer, a cursor rendering layer and a selected effect rendering layer.
In order to ensure that the text mask effect is always presented in the uppermost layer of the screen, the embodiment may preferably set the mask display priority of the text mask rendering layer to be the highest.
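The parallel rendering layer set and the mask layer's top display priority can be pictured with a small sketch like the following; the zIndex values and layer names are assumptions chosen only to show that the text mask layer draws last and therefore appears above the character, cursor and selected-effect layers.

```typescript
// Illustrative parallel rendering layers; zIndex values are assumptions chosen so the
// text mask layer always draws above the character, cursor and selected-effect layers.
interface RenderLayer {
  readonly name: string;
  readonly zIndex: number;
  draw(ctx: CanvasRenderingContext2D): void;
}

const layerSet: RenderLayer[] = [
  { name: "characters", zIndex: 0, draw: () => { /* draw glyphs */ } },
  { name: "selection",  zIndex: 1, draw: () => { /* draw selected-effect rectangles */ } },
  { name: "cursor",     zIndex: 2, draw: () => { /* draw the caret */ } },
  { name: "mask",       zIndex: 3, draw: () => { /* draw opaque mask rectangles */ } },
];

function renderAll(ctx: CanvasRenderingContext2D): void {
  // Drawing in ascending z-order keeps the mask layer on top of every other effect.
  [...layerSet]
    .sort((a, b) => a.zIndex - b.zIndex)
    .forEach(layer => layer.draw(ctx));
}
```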
Further, the text control further comprises:
the text management component is used for carrying out attribute management on the edited characters to be presented so as to form rendering attributes including the display form and the display position of the characters to be presented;
and the rendering data component is used for packaging the rendered characters to be presented together with the current rendering data to form new rendering data, the new rendering data serving as the processing basis of the text processing component.
It should be noted that, in addition to the interactive components (the text input component 11, the text processing component 12 and the text rendering component 13), the text control provided in this embodiment includes a text management component and a rendering data component. Specifically, when text editing is performed based on this text control, before the text to be presented is rendered and presented by the text rendering component 13, attribute management must be performed on the edited text to be presented; that is, the display form and display position of the text, and the editing paragraph to which it belongs, must be managed and determined. These are the rendering attributes of the text to be presented and are formed by the text management component. Based on this management, the text rendering component 13 can correctly present the text on the screen at the correct editing position and in the selected display form.
It can be understood that, when the text management component is designed, an appropriate data structure can be chosen for it so that text attributes are managed according to that data structure. For example, paragraph insertion, deletion and search are performed frequently during text editing, so this embodiment preferably uses a red-black tree as the data structure for paragraph management: its worst-case time complexity is O(log n), making it extremely efficient, which ensures fluency when large amounts of text are edited and rendered and improves the user experience of text editing.
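As an illustration of why an ordered structure helps paragraph management, the sketch below shows an O(log n) lookup of the paragraph containing a given character offset; the patent names a red-black tree, which would additionally keep insertions and deletions at O(log n), and the Paragraph fields here are assumptions.

```typescript
// Paragraph lookup by character offset. The patent prefers a red-black tree so that
// insert, delete and search all stay at O(log n); this sketch shows only the search
// side, as a binary search over paragraphs kept sorted by starting offset.
interface Paragraph {
  startOffset: number; // index of the paragraph's first character in the document
  text: string;
}

function findParagraphAt(paragraphs: Paragraph[], charOffset: number): Paragraph | undefined {
  let lo = 0;
  let hi = paragraphs.length - 1;
  let found: Paragraph | undefined;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (paragraphs[mid].startOffset <= charOffset) {
      found = paragraphs[mid]; // candidate: starts at or before the offset
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return found;
}
```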
In addition, after the text rendering component 13 renders the text to be presented, or renders an operation instruction in its form to be presented, this embodiment further provides that the rendering data component encapsulates the newly rendered text with the current rendering data to form new rendering data. This new rendering data is fed back to the text processing component 12 as the basis for further text processing, so that the text processing component 12 can process the next input information or operation instruction received by the text input component and, for example, obtain new text to be presented, a new cursor position or a new selected effect.
Compared with existing text controls, the text control provided by the embodiment of the invention realizes a layered design of the components required for text editing. This layered design keeps each component in the text control individually controllable, which means the function rendering items required for text editing can be added dynamically without affecting text editing itself, thereby enhancing the flexibility of text editing and improving the user experience of text editing.
Example two
Fig. 2 is a flowchart of an implementation method for a mask layer rendering according to a second embodiment of the present invention, where the method is suitable for performing a mask layer rendering operation on characters in a text editing box, and the method may be executed by an implementation apparatus for a mask layer rendering, where the apparatus may be implemented by software and/or hardware, and is generally integrated on an intelligent device with a manuscript editing tool.
In this embodiment, the intelligent device may specifically be an intelligent mobile device with a document editing function, such as a mobile phone, a tablet computer, and a notebook, or may also be a fixed electronic device with a document editing function, such as a desktop computer.
It should be noted that in this embodiment, the implementation of the masking layer rendering still depends on the text control, but the method is described in this embodiment based on the interactive layer of text editing, and the text control is specifically presented in the form of a text edit box in the interactive layer. Specifically, the application context of this embodiment may be understood as that in a new document page of the document editing tool, a text editing box may be presented after triggering text editing, so that text editing, cursor position and selection effect, and the like may be presented in the text editing box. The present embodiment corresponds to a mask rendering implemented in a text edit box (text control) having a text mask rendering function.
As shown in fig. 2, a method for implementing a mask layer rendering according to a second embodiment of the present invention specifically includes the following operations:
s201, after the user is triggered to enter a Mongolian drawing interface, monitoring the target character which is selected by the cursor in the current editing frame.
Specifically, the current edit box can be regarded as the on-screen presentation form of the text control that is currently performing text editing, where the text control is the one provided in the first embodiment. In this embodiment, the text mask rendering function of the text control is exposed at the interaction layer as a function button or a shortcut key, so that the user can trigger the operation instruction that initiates mask rendering.
In this step, the user enters the mask drawing interface by triggering the mask rendering function button, after which the user's cursor operations in the current edit box are monitored. Described from the bottom layer, this corresponds to the text input component of the text control receiving the cursor-movement operation and the text processing component determining its form to be presented (that is, the position actually selected by the cursor movement). In this embodiment, the characters selected by the cursor movement are recorded as target characters, which can also be understood as the characters on which mask rendering is to be performed.
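A hedged sketch of this monitoring step might look as follows; the function and field names are illustrative and the selection range is assumed to arrive as start and end character indices.

```typescript
// Once mask-drawing mode is active, the characters swept over by the cursor are
// recorded as target characters (S201). Names and the selection-range arguments
// are illustrative assumptions.
interface TargetCharacter {
  char: string;
  documentIndex: number; // position of the character within the edit box content
}

let maskDrawingMode = false;

function onEnterMaskDrawingMode(): void {
  maskDrawingMode = true;
}

function onCursorSelection(content: string, selStart: number, selEnd: number): TargetCharacter[] {
  if (!maskDrawingMode) return [];
  const targets: TargetCharacter[] = [];
  for (let i = selStart; i < selEnd; i++) {
    targets.push({ char: content[i], documentIndex: i });
  }
  return targets;
}
```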
S202, determining the area information of each target character relative to the current edit box.
It can be understood that, since text editing in the current edit box is implemented by relying on the text control provided in the above embodiment, once the target characters are determined by monitoring the cursor movement, the coordinate information and area information of the positions they occupy relative to the text control are correspondingly determined as well.
Specifically, the determining the area information of each target character relative to the current edit box includes:
acquiring the relative coordinate information of each target character with respect to the current edit box; determining the region area of each target character in the current edit box according to the length and width it occupies in the current edit box; and taking the relative coordinate information and the region area together as the region information of the corresponding target character relative to the current edit box.
In this embodiment, the area information of each target character selected as the cursor moves in the current edit box may be represented by a preset data structure whose parameters are, for example: x, y, width and height, where x is the abscissa of a single selected character relative to the upper-left corner of the current edit box; y is the ordinate of that character relative to the upper-left corner of the current edit box; width is the width of the selected area corresponding to the character; and height is the length (equivalently, the height) of that selected area. The area information obtained for each target character in this step is therefore actually represented in this data structure format.
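Expressed in code, the region-information record and one way to compute it per character on a single text line could look like the sketch below; measureCharWidth is a crude placeholder for whatever text-measurement facility the rendering component actually provides.

```typescript
// The per-character region record described above, plus one way to compute it for a
// single line of text. measureCharWidth is a crude placeholder, not a real API.
interface CharRegion {
  x: number;      // abscissa relative to the upper-left corner of the current edit box
  y: number;      // ordinate relative to the upper-left corner of the current edit box
  width: number;  // width of the character's selected area
  height: number; // height (line height) of the character's selected area
}

const measureCharWidth = (char: string, fontSize: number): number =>
  fontSize * (char.charCodeAt(0) > 0x2e80 ? 1.0 : 0.6); // rough CJK vs Latin estimate

function regionsForLine(
  chars: string[],
  lineTop: number,
  lineHeight: number,
  fontSize: number
): CharRegion[] {
  const regions: CharRegion[] = [];
  let x = 0;
  for (const char of chars) {
    const width = measureCharWidth(char, fontSize);
    regions.push({ x, y: lineTop, width, height: lineHeight });
    x += width; // the next character starts where this one ends
  }
  return regions;
}
```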
And S203, performing covering rendering on each target character based on the area information of each target character.
Described from the bottom layer, the mask rendering performed in this step is carried out by the text rendering component of the text control; specifically, it is realized mainly by the character mask rendering layer in the text rendering component. The process can be described as follows: the character mask rendering layer acquires the area information of each target character, parses the coordinate information and region area contained in it, determines the rendering region of each target character, and draws the selected color in that rendering region.
Further, the performing a masking rendering on each target character based on the region information of each target character includes: determining an actual rendering region corresponding to each target character according to the relative coordinate information and the region area of each target character; and carrying out color covering on each actual rendering area based on the selected masking layer rendering color, and displaying on the current screen in real time.
In this embodiment, the actual rendering region can be understood as the rendering region of the target character as handled by the text mask rendering layer. The mask rendering color may be the default white or a manually selected color; the color drawn by the mask layer is displayed above all other rendering effects of the text, thereby completely covering the various information of the text.
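A minimal sketch of this drawing step, assuming a canvas-style 2D context and the CharRegion record sketched earlier; the default white colour mirrors the description above, the rest is illustrative.

```typescript
// S203 as a drawing routine: fill each target character's rendering region with the
// chosen mask colour. White is the default, matching the description above.
function renderMask(
  ctx: CanvasRenderingContext2D,
  regions: CharRegion[],
  maskColor: string = "#ffffff"
): void {
  ctx.save();
  ctx.fillStyle = maskColor;
  for (const r of regions) {
    // The mask layer has the highest display priority, so these rectangles cover the
    // glyphs, cursor and selection effects already drawn underneath.
    ctx.fillRect(r.x, r.y, r.width, r.height);
  }
  ctx.restore();
}
```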
The present embodiment further optimizes and adds the operation of S204 on the basis of the above method steps, as shown in fig. 2, after S203, the method further includes:
and S204, after the mode of entering the manuscript demonstration mode is triggered, when the cursor is monitored to be positioned in a position area with the masking layer rendering on the screen, removing the masking layer rendering on the position area.
In this embodiment, it can be considered that operations S201 to S203 are carried out in the text editing mode. As configured, after the manuscript presentation mode is entered, the target characters in the mask-rendered positions are still presented covered by the mask color. In the presentation mode, however, if the user clicks with the cursor on a colored position, this step monitors the trigger on any character with a mask rendering effect, determines from the cursor's trigger position on the screen which target character's position area was clicked, removes the color cover from that position area, and presents the previously covered target character in real time. Specifically, the manuscript presentation mode is entered when the user triggers the presentation function button or the presentation shortcut key.
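The removal step in presentation mode amounts to a hit test against the currently masked regions; the sketch below assumes the masked regions and a redraw callback are tracked elsewhere, and all names are illustrative.

```typescript
// S204: in presentation mode, a click inside a masked region removes that region's
// mask and triggers a redraw so the covered characters are revealed in real time.
function onPresentationClick(
  clickX: number,
  clickY: number,
  maskedRegions: CharRegion[],
  redraw: (remaining: CharRegion[]) => void
): void {
  const remaining = maskedRegions.filter(
    r => !(clickX >= r.x && clickX <= r.x + r.width &&
           clickY >= r.y && clickY <= r.y + r.height)
  );
  if (remaining.length !== maskedRegions.length) {
    redraw(remaining); // only redraw when a masked region was actually hit
  }
}
```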
In the method for implementing mask rendering provided by the second embodiment of the present invention, after the mask drawing interface is entered, the target characters selected as the cursor moves in the current edit box are monitored; the area information of each target character relative to the current edit box is then determined; and finally, mask rendering is performed on each target character according to its area information. With this technical scheme, mask rendering of any character in the current edit box can be realized simply and effectively on the basis of the custom-built current edit box presented by the text control, which better improves the user experience of the manuscript editing tool in character editing and demonstration.
EXAMPLE III
Fig. 3 is a block diagram of an apparatus for implementing mask rendering according to a third embodiment of the present invention. The apparatus is suitable for performing mask rendering operations on text in a text edit box, may be implemented by software and/or hardware, and is generally integrated on an intelligent device having a manuscript editing tool. As shown in fig. 3, the apparatus includes: an information monitoring module 31, an information determining module 32 and a mask layer rendering module 33.
The information monitoring module 31 is configured to monitor the target characters selected as the cursor moves in the current edit box after entry into the mask drawing interface is triggered;
an information determining module 32, configured to determine the region information of each target character relative to the current edit box, where the current edit box is the on-screen presentation form of the text control according to the first embodiment of the present invention;
and a mask layer rendering module 33, configured to perform mask layer rendering on each target character based on its region information.
In this embodiment, after entry into the mask drawing interface is triggered, the apparatus first monitors, through the information monitoring module 31, the target characters selected as the cursor moves in the current edit box; the information determining module 32 then determines the area information of each target character relative to the current edit box; finally, the mask layer rendering module 33 performs mask layer rendering on each target character based on its area information.
The apparatus for implementing mask rendering provided by the third embodiment of the invention can simply and effectively realize mask rendering of any character in the current edit box on the basis of the custom-built current edit box presented by the text control, and better improves the user experience of the manuscript editing tool in character editing and demonstration.
Further, the information determining module 32 is specifically configured to:
acquire the relative coordinate information of each target character with respect to the current edit box; determine the region area of each target character in the current edit box according to the length and width it occupies in the current edit box; and take the relative coordinate information and the region area together as the region information of the corresponding target character relative to the current edit box.
Further, the mask layer rendering module 33 is specifically configured to:
determine the actual rendering region corresponding to each target character according to its relative coordinate information and region area; and cover each actual rendering region with the selected mask rendering color, displaying the result on the current screen in real time.
Further, the apparatus further comprises:
and the maskinglayer removing module 34 is configured to, after the entry into the document presentation mode is triggered, remove the masking layer rendering on the position area when the cursor is monitored to be located in the position area with the masking layer rendering on the screen.
Example four
Fig. 4 is a schematic diagram of the hardware structure of an intelligent device according to a fourth embodiment of the present invention. As shown in fig. 4, the intelligent device includes: a processor 41 and a storage device 42. The number of processors in the smart device may be one or more; fig. 4 takes one processor 41 as an example. The processor 41 and the storage device 42 in the smart device may be connected by a bus or in other manners; fig. 4 illustrates a bus connection.
The storage device 42 in the smart device serves as a computer-readable storage medium and may be configured to store one or more programs, which may be software programs (such as the text control provided in the foregoing embodiments), computer-executable programs and modules, for example the program instructions/modules corresponding to the method for implementing mask layer rendering in the embodiments of the present invention (such as the modules of the apparatus shown in fig. 3: the information monitoring module 31, the information determining module 32 and the mask layer rendering module 33). The processor 41 implements text editing by executing the text control, provided in the above embodiments, stored in the storage device; meanwhile, the processor 41 executes the various functional applications and data processing of the smart device by running the software programs, instructions and modules stored in the storage device 42, that is, it implements the method for mask layer rendering in the above method embodiments.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to the use of the device, etc. (such as the mask color and area information in the above-described embodiments). Further, the storage device 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage device 42 may further include memory located remotely from the processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
And, when the one or more programs included in the above-mentioned smart device are executed by the one or more processors 41, the programs perform the following operations:
after entry into a mask drawing interface is triggered, monitoring the target characters selected by the cursor in the current edit box, where the current edit box is the on-screen presentation form of the text control provided by the first embodiment of the invention; determining the area information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on its area information.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. The computer program may be a text control which, when executed by a processor, implements the editing and presentation of characters; the program may also be an application program of the method for implementing mask rendering provided in the second embodiment, which, when executed by a processor, implements that method, including: after entry into a mask drawing interface is triggered, monitoring the target characters selected by the cursor in the current edit box, where the current edit box is the on-screen presentation form of the text control provided by the first embodiment of the invention; determining the area information of each target character relative to the current edit box; and performing mask layer rendering on each target character based on its area information.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for realizing overlay rendering is characterized by comprising the following steps:
after triggering and entering a mask layer drawing interface, monitoring a target character selected by a cursor in a current edit box, wherein the current edit box is a presentation form of a preset text control on a screen;
determining the area information of each target character relative to the current edit box;
performing covering layer rendering on each target character based on the region information of each target character;
the preset text control is presented in the form of a text edit box and comprises: a text input component, a text processing component and a text rendering component;
the text input component, the text processing component and the text rendering component are in a layer-by-layer containment relation;
the text input component is used as a container and is used for packaging the text processing component, and the text rendering component is packaged in the text processing component;
and packaging a function rendering layer corresponding to various required function rendering items in the text rendering component.
2. The method of claim 1,
the text input component is used as an interactive interface for character editing and used for receiving input information and operation instructions generated by external triggering;
the text processing component is used for editing and forming characters to be presented according to the input information received by the text input component and analyzing and determining the form to be presented of the received operation instruction;
the text rendering component is used for rendering the characters to be presented based on given rendering attributes and presenting them in real time, and is also used for responding to each operation instruction and rendering the corresponding form to be presented in real time.
3. The method of claim 1, wherein the text rendering component comprises a set of rendering layers arranged in parallel for implementing different functional renderings;
the rendering layer set comprises a character covering layer rendering layer and at least one of the following: a character rendering layer, a cursor rendering layer and a selected effect rendering layer.
4. The method of claim 1, further comprising:
the text management component is used for carrying out attribute management on the edited characters to be presented so as to form rendering attributes including the display form and the display position of the characters to be presented;
and the rendering data component is used for packaging the rendered characters to be presented together with the current rendering data to form new rendering data, the new rendering data serving as the processing basis of the text processing component.
5. The method according to any one of claims 1 to 4, wherein the determining of the area information of each target text relative to the current edit box comprises:
acquiring relative coordinate information of each target character relative to the current edit box;
determining the area of each target character in the current edit box according to the length and width value occupied by each target character in the current edit box;
and respectively determining the relative coordinate information and the area of each region as the region information of the corresponding target character relative to the current edit box.
6. The method of claim 5, wherein performing a masked rendering on each of the target words based on the region information of each of the target words comprises:
determining an actual rendering region corresponding to each target character according to the relative coordinate information and the region area of each target character;
and carrying out color covering on each actual rendering area based on the selected masking layer rendering color, and displaying on the current screen in real time.
7. The method of claim 1, further comprising:
and after entry into the manuscript demonstration mode is triggered, when the cursor is monitored to be positioned in a position area on the screen that has mask layer rendering, removing the mask layer rendering from that position area.
8. An apparatus for implementing overlay rendering, comprising:
the information monitoring module is used for monitoring the target character selected by the cursor in the current edit box after triggering to enter the masking layer drawing interface;
the information determining module is used for determining the area information of each target character relative to a current edit box, wherein the current edit box is a presentation form of a preset text control on a screen;
the mask layer rendering module is used for performing mask layer rendering on each target character based on the region information of each target character;
the preset text control is presented in the form of a text edit box and comprises: a text input component, a text processing component and a text rendering component;
the text input component, the text processing component and the text rendering component are in a layer-by-layer containment relation;
the text input component is used as a container and is used for packaging the text processing component, and the text rendering component is packaged in the text processing component;
and packaging a function rendering layer corresponding to various required function rendering items in the text rendering component.
9. A smart device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executable by the one or more processors to cause the one or more processors to implement an implementation of a masking rendering as recited in any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for implementing a skin rendering according to any one of claims 1-7.
CN201810053788.0A | 2018-01-19 | 2018-01-19 | Method and device for realizing covering layer rendering, intelligent equipment and storage medium | Active | CN108279964B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810053788.0A (CN108279964B) | 2018-01-19 | 2018-01-19 | Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810053788.0A (CN108279964B) | 2018-01-19 | 2018-01-19 | Method and device for realizing covering layer rendering, intelligent equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN108279964A (en) | 2018-07-13
CN108279964B (en) | 2021-09-10

Family

ID=62804184

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810053788.0A (Active, CN108279964B) | Method and device for realizing covering layer rendering, intelligent equipment and storage medium | 2018-01-19 | 2018-01-19

Country Status (1)

Country | Link
CN | CN108279964B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108958876B (en)* | 2018-07-23 | 2022-02-01 | 郑州云海信息技术有限公司 | Display method and device of browser page
CN109145272B (en)* | 2018-07-27 | 2022-09-16 | 广州视源电子科技股份有限公司 | Text rendering and layout method, apparatus, device and storage medium
CN109375972B (en)* | 2018-09-17 | 2022-03-08 | 广州视源电子科技股份有限公司 | Method, apparatus, computer device and storage medium for multi-element layout
CN109657220A (en)* | 2018-12-11 | 2019-04-19 | 万兴科技股份有限公司 | The online editing method, apparatus and electronic equipment of PDF document
CN109976632A (en)* | 2019-03-15 | 2019-07-05 | 广州视源电子科技股份有限公司 | Text animation control method and device, storage medium and processor
CN111723316B (en)* | 2019-03-22 | 2024-06-04 | 阿里巴巴集团控股有限公司 | Character string rendering method and device, terminal equipment and computer storage medium
US11733669B2 (en)* | 2019-09-27 | 2023-08-22 | Rockwell Automation Technologies, Inc. | Task based configuration presentation context
CN112862945B (en)* | 2019-11-27 | 2024-07-16 | 北京沃东天骏信息技术有限公司 | Record generation method and device
CN111241643B (en)* | 2020-02-11 | 2023-05-09 | 广东三维家信息科技有限公司 | Processing method and device of polygonal cabinet body and electronic equipment
CN111782311B (en)* | 2020-05-11 | 2023-01-10 | 完美世界(北京)软件科技发展有限公司 | Rendering method, device, equipment and readable medium
CN111857491B (en)* | 2020-08-06 | 2022-01-11 | 泰山信息科技有限公司 | Method, equipment and storage medium for implementing filter for selecting content of word processing software
CN113535046B (en)* | 2021-07-23 | 2024-06-18 | 腾讯云计算(北京)有限责任公司 | Text component editing method, device, equipment and readable medium
CN113705156B (en)* | 2021-08-30 | 2024-12-03 | 上海哔哩哔哩科技有限公司 | Character processing method and device
CN113805753B (en)* | 2021-09-24 | 2025-05-30 | 维沃移动通信有限公司 | Text editing method, device and electronic equipment
CN114386369B (en)* | 2022-01-19 | 2025-08-12 | 网易(杭州)网络有限公司 | Text processing method, device, electronic equipment and medium
CN114579117A (en)* | 2022-03-15 | 2022-06-03 | 平安国际智慧城市科技股份有限公司 | Visual configuration method and device, electronic equipment and storage medium
CN116302257B (en)* | 2023-02-28 | 2024-04-30 | 南京索图科技有限公司 | Method for realizing text editing box supporting drop-down selection of multiple groups of words

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP1285337A2 (en)* | 1999-11-02 | 2003-02-26 | Canal+ Technologies | Displaying graphical objects
CN104111787A (en)* | 2013-04-18 | 2014-10-22 | 三星电子(中国)研发中心 | Method and device for realizing text editing on touch screen interface

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7895531B2 (en)* | 2004-08-16 | 2011-02-22 | Microsoft Corporation | Floating command object
CN102819325A (en)* | 2012-07-21 | 2012-12-12 | 上海量明科技发展有限公司 | Input method and system for obtaining a plurality of character presenting effects
CN104142911B (en)* | 2013-05-08 | 2017-11-03 | 腾讯科技(深圳)有限公司 | A kind of text information input method and device
CN104731787A (en)* | 2013-12-18 | 2015-06-24 | 中兴通讯股份有限公司 | Method, device and terminal capable of realizing page layout
CN104050155A (en)* | 2014-07-01 | 2014-09-17 | 西安诺瓦电子科技有限公司 | Text editing device and method
CN104636321B (en)* | 2015-02-28 | 2018-01-16 | 广东欧珀移动通信有限公司 | Text display method and device
CN106911833A (en)* | 2015-12-21 | 2017-06-30 | 北京奇虎科技有限公司 | A kind of data processing method and device
CN105760153A (en)* | 2016-01-27 | 2016-07-13 | 努比亚技术有限公司 | Text extracting device and method

Also Published As

Publication number | Publication date
CN108279964A (en) | 2018-07-13

Similar Documents

Publication | Publication Date | Title
CN108279964B (en) | Method and device for realizing covering layer rendering, intelligent equipment and storage medium
CN109933322B (en) | Page editing method and device and computer readable storage medium
US11029836B2 (en) | Cross-platform interactivity architecture
US20140331179A1 (en) | Automated Presentation of Visualized Data
US11086498B2 (en) | Server-side chart layout for interactive web application charts
CN110221759A (en) | Element dragging method and device, storage medium and interactive intelligent panel
CN113655999B (en) | Page control rendering method, device, equipment and storage medium
CN114168238B (en) | Method, system and computer-readable storage medium implemented by a computing device
CN104915186B (en) | A kind of method and apparatus making the page
CN111936966A (en) | Design system for creating graphic content
CN112163432A (en) | Translation method, translation device and electronic equipment
WO2020168786A1 (en) | Touch operation response method and apparatus, storage medium and terminal
WO2022242379A1 (en) | Stroke-based rendering method and device, storage medium and terminal
US10304225B2 (en) | Chart-type agnostic scene graph for defining a chart
CN108389244B (en) | An implementation method for rendering flash rich text according to specified character rules
US9037958B2 (en) | Dynamic creation of user interface hot spots
US7924284B2 (en) | Rendering highlighting strokes
US9928220B2 (en) | Temporary highlighting of selected fields
WO2025044582A1 (en) | Page management method and apparatus for virtual game, electronic device, and storage medium
CN113536755A (en) | Method, apparatus, electronic device, storage medium and product for generating posters
US11048405B2 (en) | Information processing device and non-transitory computer readable medium
US20190244405A1 (en) | Information processing device and non-transitory computer readable medium storing information processing program
CN117194829A (en) | Method, equipment and medium for generating rich text web page
CN104516860A (en) | Methods and systems for selecting text within a displayed document
CN115730096A (en) | Graphic element self-defining method, device, storage medium and computer equipment

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
