CN114254610B - Slide processing method, device and electronic equipment - Google Patents

Slide processing method, device and electronic equipment

Info

Publication number
CN114254610B
CN114254610B (application CN202111556014.8A)
Authority
CN
China
Prior art keywords
page
slide
color
template
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111556014.8A
Other languages
Chinese (zh)
Other versions
CN114254610A (en)
Inventor
雷淑玲
辛煜辉
王茜
陈昭蓉
胡娟
程锦郁
程阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Guangzhou Kingsoft Mobile Technology Co Ltd
Wuhan Kingsoft Office Software Co Ltd
Original Assignee
Beijing Kingsoft Office Software Inc
Zhuhai Kingsoft Office Software Co Ltd
Guangzhou Kingsoft Mobile Technology Co Ltd
Wuhan Kingsoft Office Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Office Software Inc, Zhuhai Kingsoft Office Software Co Ltd, Guangzhou Kingsoft Mobile Technology Co Ltd, Wuhan Kingsoft Office Software Co Ltd
Priority to CN202111556014.8A
Publication of CN114254610A
Application granted
Publication of CN114254610B
Active legal status (current)
Anticipated expiration

Abstract


This application relates to a slide processing method, device, electronic device, and computer-readable storage medium, and relates to the field of automated office work. The method comprises: obtaining slide information of a target slide, wherein the slide information includes at least one object; performing object recognition processing on the at least one object to obtain a recognition result; determining corresponding configuration information based on the recognition result, and determining at least one template that matches the configuration information; and adapting the slide information to the at least one template to obtain at least one slide preview image that corresponds to the at least one template. This application can automatically beautify slides in presentations, improving user office efficiency.

Description

Slide processing method, device and electronic equipment
Technical Field
The embodiments of the present application relate to the field of office automation, and in particular, to a slide processing method, a slide processing apparatus, an electronic device, and a computer-readable storage medium.
Background
A presentation includes a plurality of pages for presenting static content (e.g., text, pictures) and dynamic content (e.g., video, audio), and each page is a slide.
With the wide use of presentations, a producer of a presentation not only needs to enter the necessary content in each slide, but also needs to beautify the slides. For example, the theme color of a slide is modified so that the slide better fits the presented content, and the layout of text and pictures in the slide is adjusted so that the content is clearer, more vivid, and more intuitive, which makes the slide more attractive to viewers and more pleasant to watch.
At present, beautifying a slide requires the producer both to possess slide beautification knowledge, such as an understanding of color matching and of matching styles to content, and to be proficient in the various editing functions of the presentation software. In practice, an ordinary user of a presentation file usually meets neither requirement, so beautifying a slide typically costs the user a great deal of time: elements are repeatedly dragged around the slide, their sizes and layout are adjusted, and the selected elements still need to be edited. The whole process involves complicated operations, takes a long time, and is inefficient.
Disclosure of Invention
An object of an embodiment of the present application is to provide a slide processing method, apparatus, electronic device, and computer readable storage medium, so as to solve the problem in the prior art that when a user beautifies a slide, the office efficiency of the user is low due to complicated operation and long time consumption in the whole process.
According to a first aspect of the present application, there is provided a slide processing method, including obtaining slide information of a target slide, wherein the slide information includes at least one object, performing object recognition processing on the at least one object to obtain a recognition result, determining corresponding configuration information according to the recognition result, determining at least one template matching the configuration information, and adapting the slide information to the at least one template respectively to obtain at least one slide preview corresponding to the at least one template one by one.
Optionally, performing object recognition processing on the at least one object to obtain the recognition result comprises at least one of: recognizing the type of the at least one object and taking the type as the recognition result; recognizing the position of the at least one object and taking the position as the recognition result; and recognizing the color of the at least one object and taking the color as the recognition result.
Optionally, the object comprises a structural object or a material object, and identifying the type of the at least one object comprises: in the case that the object is a structural object, identifying the structural type of the structural object, wherein the structural type comprises any one of a title, a subtitle, and a body; and in the case that the object is a material object, identifying the material type of the material object, wherein the material type comprises any one of text, a picture, a video control, an audio control, and a table.
Optionally, after identifying the material type of the material object, identifying at least one content element contained in the material object under the condition that the material type of the identified material object is the target material type, wherein the target material type comprises any one of a text, a picture and a table, and determining the content of the material object according to the at least one content element.
Optionally, determining at least one template matched with the configuration information comprises calculating the matching degree of each preset template in the template database and the configuration information, and taking the preset template corresponding to the matching degree as the template matched with the configuration information under the condition that the matching degree is larger than a matching degree threshold value.
Optionally, after determining at least one template matched with the configuration information in the case that the at least one template comprises a plurality of templates, the method further comprises the steps of obtaining the matching degree of each template in the plurality of templates and the configuration information, sorting the plurality of templates according to the matching degree of the templates and the configuration information to obtain a first sorting result, and displaying a plurality of slide previews corresponding to the plurality of templates one by one in a preset window interface according to the first sorting result.
Optionally, before displaying the plurality of slide previews corresponding to the plurality of templates one by one in the preset window interface according to the first sorting result, the method further comprises obtaining user behavior data of each of the plurality of templates, counting the data value corresponding to the user behavior data of each template, and sorting the plurality of templates in descending order of the data values corresponding to their user behavior data to obtain a second sorting result. Displaying the plurality of slide previews according to the first sorting result then comprises calculating a third sorting result from the first sorting result and the second sorting result, and displaying the plurality of slide previews corresponding to the plurality of templates one by one in the preset window interface according to the third sorting result.
Optionally, after determining at least one template matched with the configuration information in the case that the at least one template comprises a plurality of templates, the method further comprises the steps of obtaining user behavior data of each template in the plurality of templates, counting data values corresponding to the user behavior data of each template, sorting the plurality of templates according to the sequence from high to low of the data values corresponding to the user behavior data of the plurality of templates to obtain a fourth sorting result, and displaying a plurality of slide preview pictures corresponding to the plurality of templates one by one in a preset window interface according to the fourth sorting result.
Optionally, after determining the at least one template matched with the configuration information, the method further comprises determining beautification processing information corresponding to the slide information according to the slide information and a target template in the at least one template. Adapting the slide information to the at least one template to obtain the at least one slide preview corresponding to the at least one template one by one then comprises beautifying the slide information according to the beautification processing information and adapting the beautified slide information to a second target template to obtain the corresponding slide preview.
According to a second aspect of the present application, there is also provided a slide processing apparatus, including an acquisition module configured to acquire slide information of a target slide, where the slide information includes at least one object, a recognition module configured to perform object recognition processing on the at least one object to obtain a recognition result, a determination module configured to determine corresponding configuration information according to the recognition result and determine at least one template matching the configuration information, and a processing module configured to adapt the slide information to the at least one template respectively to obtain at least one slide preview corresponding to the at least one template one by one.
Optionally, the identification module comprises at least one of: a first identification sub-module for identifying the type of the at least one object and taking the type as the identification result; a second identification sub-module for identifying the position of the at least one object and taking the position as the identification result; and a third identification sub-module for identifying the color of the at least one object and taking the color as the identification result.
Optionally, the object comprises a structural object or a material object, and the first identification submodule is used for identifying the structural type of the structural object in the case that the object is the structural object, wherein the structural type comprises any one of a title, a subtitle and a text, and identifying the material type of the material object in the case that the object is the material object, and the material type comprises any one of a text, a picture, a video control, an audio control and a table.
Optionally, the first identification sub-module is further configured to identify at least one content element included in the material object after identifying the material type of the material object, in the case that the material type of the identified material object is the target material type, where the target material type includes any one of text, picture and table, and determine the content of the material object according to the at least one content element.
Optionally, the determining module comprises a first determining submodule for determining corresponding configuration information according to the identification result and a second determining submodule for determining at least one template matched with the configuration information, wherein the second determining submodule is used for calculating the matching degree of each preset template in the template database and the configuration information, and taking the preset template corresponding to the matching degree as the template matched with the configuration information when the matching degree is larger than a matching degree threshold value.
Optionally, when the at least one template comprises a plurality of templates, the device further comprises a matching degree acquisition module for acquiring the matching degree of each template in the plurality of templates and the configuration information after the determining module determines the at least one template matched with the configuration information, a first ordering module for ordering the plurality of templates according to the matching degree of the templates and the configuration information to obtain a first ordering result, and a display module for displaying a plurality of slide preview pictures corresponding to the plurality of templates one by one in a preset window interface according to the first ordering result.
Optionally, the device further comprises a data acquisition module, a statistics module and a calculation module, wherein the data acquisition module is used for acquiring user behavior data of each template in the templates before the first display module displays the multiple slide previews corresponding to the templates one by one in a preset window interface according to the first sorting result, the statistics module is used for counting data values corresponding to the user behavior data of each template, sorting the templates according to the sequence from high to low of the data values corresponding to the user behavior data of the templates to obtain a second sorting result, the calculation module is used for calculating a third sorting result according to the first sorting result and the second sorting result, and the display module is used for displaying the multiple slide previews corresponding to the templates one by one in the preset window interface according to the third sorting result.
Optionally, when at least one of the templates includes a plurality of templates, the device further includes a data acquisition module, configured to acquire user behavior data of each of the templates after the determining module determines at least one of the templates that matches the configuration information, count data values corresponding to the user behavior data of each of the templates, order the templates according to an order of from high to low data values corresponding to the user behavior data of the templates, and obtain a fourth ordering result, and a display module, configured to display a plurality of slide previews corresponding to the templates one by one in a preset window interface according to the fourth ordering result.
Optionally, the device further comprises a beautification module, configured to determine, after the determination module determines the at least one template matched with the configuration information, beautification processing information corresponding to the slide information according to the slide information and a target template in the at least one template; and the processing module is further configured to beautify the slide information according to the beautification processing information and adapt the beautified slide information to a second target template to obtain a corresponding slide preview.
According to a third aspect of the application there is also provided an electronic device comprising a memory for storing a computer program and a processor for executing the computer program to carry out the method according to the first aspect of the application.
According to a fourth aspect of the present application there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to the first aspect of the present application.
According to the embodiments of the present application, slide information of a target slide can first be acquired, wherein the slide information includes at least one object; object recognition processing is performed on the at least one object to obtain a recognition result; corresponding configuration information is determined according to the recognition result, and at least one template matching the configuration information is determined; and the slide information is respectively adapted to the at least one template to obtain at least one slide preview corresponding to the at least one template one by one. Therefore, the application can automatically beautify a single-page slide, which effectively solves the problem in the prior art that beautifying a slide involves complicated operations and takes a long time, lowering the user's office efficiency, effectively compensates for the fact that ordinary users lack slide beautification knowledge and are not proficient in the various editing functions of the presentation software, and improves the user's office efficiency.
Other features of embodiments of the present application and its advantages will become apparent from the following detailed description of exemplary embodiments of the application, which refers to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the embodiments of the application.
FIG. 1 is a method flow chart of a slide processing method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of at least one slide preview in a slide processing method according to an embodiment of the present application;
FIG. 3 is a method flow diagram of another slide processing method provided by an embodiment of the present application;
FIG. 4 is a method flow diagram of yet another slide processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a slide processing method according to an embodiment of the present application;
FIG. 6 is a functional block diagram of a slide processing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware architecture of an electronic device according to some embodiments of the application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
The execution body of the embodiment of the present application may be an application located in the local terminal, or may be a functional unit such as a plug-in unit or a software development kit (Software Development Kit, SDK) disposed in the application located in the local terminal, which is not particularly limited in this embodiment.
It will be appreciated that the application may be a native application (native app) installed on the terminal (i.e., the client), or may be a web application (web app) running in a browser on the terminal, which is not particularly limited in this embodiment.
In addition, the terminals according to the embodiments of the present application may include, but are not limited to, mobile phones, personal digital assistants (PDAs), wireless handheld devices, tablet computers, personal computers (PCs), and the like.
Various embodiments and examples according to the present application are described below with reference to the accompanying drawings.
< Method example >
Fig. 1 is a method flowchart of a slide processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps S110 to S140.
It should be noted that, the steps S110 to S140 may be executed at the server side or the client side. The following description will be mainly made on the example of execution at the server side.
Step S110, slide information of a target slide is acquired, wherein the slide information comprises at least one object.
The target slide is a slide which is selected by a user from at least one slide of the presentation and needs to be beautified. For example, when a beautifying instruction of a slide currently displayed in the display interface is received, the slide is taken as a target slide.
For example, the beautifying instruction is triggered after it is detected that the user has performed a corresponding selection operation on a target control in the display interface, where the target control is a beautifying control preset in the display interface for starting the slide beautification function. To obtain the slide information of the target slide, the server may send a request to the client and receive, in response to the request, the slide information of the target slide returned by the client, or it may directly receive the slide information of the target slide from the client.
The slide information includes at least one object in the target slide. For example, any of the at least one object may be a structural object or a material object. The structure object may be, for example, an object related to a structural component in a slide such as a title object, a subtitle object, a text object, or the like. The material objects may include, for example, text objects, picture objects, video control objects, audio control objects, form objects, and the like, related to the material. Any material object can be accommodated in the structure object.
Step S120, object recognition processing is performed on the at least one object to obtain a recognition result.
By way of example, step S120 may be at least one process of identifying a type of at least one object, taking the type of at least one object as a result of the identification, identifying a location of at least one object, taking the location of at least one object as a result of the identification, identifying a color of at least one object, and taking the color of at least one object as a result of the identification.
By way of example, when identifying the type of the at least one object, the object may be either a structural object or a material object. In the case of a structural object, the structural type of the structural object may be identified, and the structural type may include any of a title, a subtitle, and a body. In the case of a material object, the material type may be identified, and the material type may include any of text, a picture, a video control, an audio control, and a table. When identifying the structural type or the material type, the at least one object is input into a preset object type recognition model, and the type of each object in the output of the object type recognition model is obtained as the object type of that object.
The object type recognition model may be obtained by performing machine learning training on a plurality of pre-collected first samples, where each first sample comprises a first sample object and the manually annotated type of that first sample object. Taking the first sample object of each first sample as input and the manually annotated type in each first sample as output, the plurality of first samples are trained with a preset machine learning algorithm to obtain the object type recognition model. In this case, inputting an object into the object type recognition model yields the type of that object as output. The manner of performing machine learning training on the plurality of first samples may be set by those skilled in the art according to the actual situation, which is not limited by the embodiments of the present application.
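As a hedged illustration of how such an object type recognition model might be trained and queried, the following sketch assumes each slide object has already been encoded as a numeric feature vector and uses a scikit-learn classifier; the feature encoding, library choice, and labels are assumptions for illustration, not details given in the patent.

```python
# Illustrative sketch only: the patent does not fix a model, features, or library.
# Assumes each slide object is encoded as a numeric feature vector beforehand.
from sklearn.ensemble import RandomForestClassifier

# "First samples": feature vectors of sample objects plus manually annotated types.
train_features = [
    [0.90, 0.10, 1, 0],   # hypothetical features of a title object
    [0.20, 0.80, 0, 1],   # hypothetical features of a picture object
    [0.15, 0.75, 0, 1],   # hypothetical features of another picture object
]
train_labels = ["title", "picture", "picture"]

# Train the object type recognition model on the pre-collected first samples.
object_type_model = RandomForestClassifier(n_estimators=100, random_state=0)
object_type_model.fit(train_features, train_labels)

def identify_object_type(object_features):
    """Input an object's feature vector; the model outputs its object type."""
    return object_type_model.predict([object_features])[0]
```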
In some embodiments, in the case where the material type of the material object is identified as the target material type, at least one content element contained in the material object may also be identified. The target material type comprises any one of texts, pictures and tables. The content of the material object is then determined from the at least one content element.
Taking the target material type as an example of a picture, when the material type of the material object is identified as the picture, further identifying at least one content element contained in the picture, and then determining the content of the picture according to the at least one content element.
The content elements in a picture are the elements contained in the picture, such as people, trees, houses, rivers, and appliances. The content of a picture may be, for example, people, scenery, animals, food, decorations, and the like.
In some examples, an element type of at least one content element in the picture may be identified, the element type being, for example, a person, an animal, a tree, a house, a river, or an appliance, etc., and then the number of corresponding content elements under each element type is counted, and the element type corresponding to the content element with the largest number is taken as the content of the picture. For example, the at least one content element in the picture may include only a plurality of persons, or the at least one content element may include 5 persons and 1 animal, and the content of the picture may be determined to be a person.
In other examples, an element type of at least one content element in the picture may be identified, image element information including the element type of the at least one content element is searched in a preset mapping table, and a content type corresponding to the image element information is determined as a content of the picture. The mapping table contains a preset corresponding relation between image element information and content, and the image element information contains element types of one or more content elements. For example, at least one content element in the picture comprises a tree and a river, and the content corresponding to the image element information comprising the tree and the river is determined to be landscape according to the mapping table, and the content of the picture is determined to be landscape.
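The two picture-content strategies above can be summarized in a short sketch; the element types, the sample mapping table, and the function names are hypothetical examples, not values taken from the patent.

```python
# Illustrative sketch of the two strategies described above; element types and
# mapping-table entries are hypothetical examples.
from collections import Counter

def picture_content_by_majority(element_types):
    """Strategy 1: the element type with the most content elements is the content."""
    # e.g. ["person"] * 5 + ["animal"]  ->  "person"
    most_common_type, _count = Counter(element_types).most_common(1)[0]
    return most_common_type

# Strategy 2: preset mapping table from image element information to content.
CONTENT_MAPPING = {
    frozenset({"tree", "river"}): "landscape",
    frozenset({"person"}): "person",
}

def picture_content_by_mapping(element_types):
    """Look up the content corresponding to the picture's image element information."""
    return CONTENT_MAPPING.get(frozenset(element_types))  # None if no entry matches
```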
Taking a target material type as a text as an example, text can be subjected to text recognition, and the content of the text is determined according to a recognition result.
Taking the target material type as a table as an example, the characters in the table can be identified, and the content of the table is determined according to the identification result.
For example, when identifying the position of at least one object, for any object, the coordinates of the object in the target slide may be identified, and the coordinates of the object in the target slide may be taken as the position of the object.
For example, when identifying the color of at least one object, for any object, an RGB value (a value corresponding to each of three color channels of red, green, and blue in the RGB color mode) of each object may be identified, and the RGB value of the object is taken as the color of the object.
It will of course be appreciated that the ways of identifying the type, position, and color of the at least one object listed above are merely exemplary; the application does not limit how the type, position, and color of the at least one object are identified, so long as they can be identified.
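In other words, the recognition result for each object can be viewed as a small record of type, position, and color; a hypothetical sketch of such a record (field names and value ranges assumed, not specified by the patent):

```python
# Hypothetical container for the recognition result of one slide object.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObjectRecognitionResult:
    object_type: str                               # e.g. "title", "picture", "table"
    position: Tuple[float, float]                  # coordinates in the target slide
    color: Optional[Tuple[int, int, int]] = None   # RGB value of the object, if identified

# Example: a centered title recognized as dark green.
title_result = ObjectRecognitionResult("title", (480.0, 120.0), (0, 100, 0))
```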
Step S130, corresponding configuration information is determined according to the recognition result, and at least one template matched with the configuration information is determined.
The configuration information includes at least one of a page type, a page color, a page element characteristic, and a page style. There are various ways of determining the corresponding configuration information according to the recognition result.
As can be seen from the description in step S120, the recognition result may be, for example, a type of the at least one object, a position of the at least one object, and/or a color of the at least one object. When determining the corresponding configuration information according to the recognition result, for example, the page type may be determined according to the type of the at least one object and the position of the at least one object, the page color may be determined according to the color of the at least one object, and the page element characteristic may be determined according to the type of the at least one object. The page style is determined based on the type of the at least one object, the location of the at least one object, and the color of the at least one object.
In some examples, determining the page type according to the type and the position of the at least one object may be done by first determining the page structure of the target slide from the types and positions identified in step S120 (e.g., determining which object types are present in the target slide together with their numbers and positions, and taking these as the page structure), and then determining the page type according to the page structure.
The page types include any of cover page, directory page, transition page, text page (also called content page), and end page. In general, the same page types all have similar page structures, and thus, in some examples, the correspondence between the page structures and the page types may be preset, and then the page types may be determined according to the determined page structures and the correspondence.
For example, a cover page generally has a large title, and that title is centered. In that case, a page structure having a centered large title may be mapped to the cover page in the above correspondence, so that when the determined page structure includes a centered large title, the page type of the target slide may be determined to be the cover page based on the page structure and the correspondence.
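A minimal sketch of this correspondence-based lookup, assuming the page structure has been summarized as a tuple of simple structural features; the feature names and table entries are assumptions.

```python
# Illustrative sketch: page structures summarized as feature tuples and mapped to
# page types through a preset correspondence; the keys here are assumptions.
PAGE_TYPE_CORRESPONDENCE = {
    ("large_title", "centered"): "cover",
    ("title", "body_text"): "content",
    ("title", "thanks_text"): "end",
}

def page_type_from_structure(page_structure):
    """Return the page type for a determined page structure, or None if unknown."""
    return PAGE_TYPE_CORRESPONDENCE.get(tuple(page_structure))

# A target slide whose structure has a centered large title is treated as a cover page.
assert page_type_from_structure(["large_title", "centered"]) == "cover"
```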
In other examples, when determining the page type, the determined page structure may be input into a preset page type recognition model, and an output result of the page type recognition model may be obtained as the page type of the target slide.
The page type recognition model may be obtained after machine learning training is performed on a plurality of second samples acquired in advance. Each second sample comprises a page structure sample and a page type sample corresponding to the page structure sample, the page structure sample of each second sample is taken as input, the page type sample of each second sample is taken as output, and a page type recognition model is obtained after training a plurality of second samples by adopting a preset machine learning algorithm. In this case, a page structure is input to the page type recognition model, and the output result of the page type recognition model is the page type corresponding to the page structure. The manner of performing the machine learning training on the plurality of second samples may be set by those skilled in the art according to the actual situation, which is not limited by the embodiment of the present application.
In some embodiments, the page number of the target slide may also be acquired before step S120 is performed. Typically, each slide in a presentation has a page number indicating its position among all of the slides contained in the presentation. Some page types (e.g., the cover page and the end page) are usually arranged at fixed positions: for a presentation with n pages, where n is a natural number and n > 1, the cover page is the first page (i.e., page 1) and the end page is the last page (i.e., page n). In this case, whether the page type of the target slide is the cover page or the end page can be determined according to its page number.
For example, the page number of the default cover page is 1, and in the case where the page number of the target slide is 1, the page type of the target slide is determined to be the cover page.
To improve the accuracy of page type identification, in some embodiments, page types may be determined from page numbers and page structures.
For example, a cover page typically has a large, centered title, and in some cases a transition page also has a large, centered title, so when the page structure of the target slide is determined to have a centered large title, the page type of the target slide may be either the cover page or the transition page. In this case, the page number of the target slide may be acquired: the page type is determined to be the cover page if the page number is 1, and the transition page if the page number is m (where the presentation has n pages in total, n and m are natural numbers, and n > m > 1).

In some examples, when determining the page color from the colors of the at least one object, the primary color of the target slide may be determined from the colors of all objects in the target slide, and the primary color of the target slide is taken as the page color of the target slide. When determining the primary color, the area covered by each color may be accumulated according to the colors of the at least one object identified in step S120, and the color corresponding to the largest accumulated color area among all colors is taken as the page color.
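A minimal sketch of the area-based page color choice, assuming each recognized object contributes an RGB color and the area it covers; the input format is an assumption.

```python
# Illustrative sketch: accumulate the area covered by each identified color and take
# the color with the largest total area as the page color. Input format is assumed.
from collections import defaultdict

def determine_page_color(colored_objects):
    """colored_objects: iterable of (rgb_color, area) pairs for the recognized objects."""
    area_by_color = defaultdict(float)
    for rgb_color, area in colored_objects:
        area_by_color[rgb_color] += area
    # The color with the largest accumulated color area becomes the page color.
    return max(area_by_color, key=area_by_color.get)

page_color = determine_page_color([((0, 100, 0), 1200.0), ((255, 255, 255), 900.0)])
# -> (0, 100, 0): dark green covers the largest area in this toy example.
```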
After the color corresponding to the largest color area is taken as the page color, the page color may be fine-tuned according to statistics on users' actual habits in selecting theme colors, and the fine-tuned color is used as the page color, so that the page color conforms to the habits of most users.
In other examples, after the color corresponding to the largest color area is taken as the page color, the page color may turn out to be very similar to the colors of some objects in the target slide. If such a page color were applied to the target slide, those objects might become hard to see; for example, the page color is dark green and some objects in the target slide are also dark green or very close to it. In this case, the determined page color may be fine-tuned according to the colors of those objects so that the similarity between the page color and the colors of those objects is less than or equal to a preset similarity threshold, ensuring the visibility of every object in the target slide. The similarity threshold may be set by those skilled in the art according to the actual situation, which is not limited in the embodiments of the present application.
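The patent does not define the similarity measure; the following sketch assumes a plain Euclidean distance in RGB space, normalized to a 0..1 similarity, purely to illustrate the threshold check that triggers fine-tuning.

```python
# Illustrative sketch: Euclidean RGB distance mapped to a 0..1 similarity is an
# assumption; only the threshold check that triggers fine-tuning follows the text.
import math

MAX_RGB_DISTANCE = math.sqrt(3 * 255 ** 2)

def color_similarity(color_a, color_b):
    return 1.0 - math.dist(color_a, color_b) / MAX_RGB_DISTANCE  # 1.0 = identical

def page_color_needs_fine_tuning(page_color, object_colors, similarity_threshold=0.9):
    """True if any object color is too similar to the page color to stay visible."""
    return any(color_similarity(page_color, c) > similarity_threshold
               for c in object_colors)
```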
The page element characteristics include, for example, the content of the material object corresponding to the determined target material type. For example, if the material object includes text, the page element characteristics include the content of the text; if the material object includes a picture, the page element characteristics include the content of the picture; and so on. When the page element characteristics are determined according to the type of the at least one object, the content of a material object can be acquired as the page element characteristics once the type of the object is determined to be a material object.

When determining the page style according to the type, position, and color of the at least one object, the page style may be determined according to the obtained page color, page element characteristics, and the like. The ways of determining the page color and the page element characteristics are described in the above embodiments and are not repeated here. When determining the page style according to the page color and the page element characteristics, for example, a color style mapping table may be preset, which contains mapping relationships between a plurality of preset page colors and preset page styles; a page type style mapping table may be preset, which contains mapping relationships between a plurality of preset page types and preset page styles; and a page element style mapping table may be preset, which contains mapping relationships between a plurality of preset page element characteristics and preset page styles. A first predetermined page style is determined based on the page color, a second predetermined page style is determined based on the page type, and a third predetermined page style is determined based on the page element characteristics. The page style is then determined based on the first, second, and third predetermined page styles. For example, the predetermined page style that occurs most often among the three may be used as the page style. In the case where the first, second, and third predetermined page styles are all different, priorities may be set in advance for them, and the predetermined page style with the highest priority is taken as the page style. The priorities may be set by default or set by the user according to actual needs.

When determining at least one template matched with the configuration information, the matching degree between each preset template and the configuration information is calculated, and the preset templates whose matching degree is greater than a matching degree threshold are taken as the at least one template matched with the configuration information.
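Before turning to template matching, a minimal sketch of the page style combination just described: a majority vote over the three mapping tables, with a preset priority as the tie-breaker when all three candidates differ. The table contents, style names, and priority order are assumptions.

```python
# Illustrative sketch: table contents, style names, and the priority order are
# hypothetical; only the majority-vote-then-priority logic follows the description.
from collections import Counter

COLOR_STYLE_TABLE = {(0, 100, 0): "fresh", (20, 20, 60): "business"}
PAGE_TYPE_STYLE_TABLE = {"cover": "business", "content": "simple"}
PAGE_ELEMENT_STYLE_TABLE = {"landscape": "fresh", "table": "business"}

PRIORITY = ("color", "type", "element")  # preset fallback priority (assumed order)

def determine_page_style(page_color, page_type, page_element):
    candidates = {
        "color": COLOR_STYLE_TABLE.get(page_color),              # first predetermined page style
        "type": PAGE_TYPE_STYLE_TABLE.get(page_type),            # second predetermined page style
        "element": PAGE_ELEMENT_STYLE_TABLE.get(page_element),   # third predetermined page style
    }
    counts = Counter(style for style in candidates.values() if style is not None)
    if counts:
        style, occurrences = counts.most_common(1)[0]
        if occurrences > 1:          # one style occurs most often -> use it
            return style
    for source in PRIORITY:          # all candidates differ -> highest priority wins
        if candidates[source] is not None:
            return candidates[source]
    return None
```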
In specific implementation, a template database may be pre-established, where a plurality of preset templates are set in the template database. The matching degree of each preset template in the template database and the configuration information can be calculated, whether the matching degree is larger than a matching degree threshold value is judged for the matching degree of any preset template and the configuration information, if yes, namely, the matching degree is larger than the matching degree threshold value, the preset template corresponding to the matching degree is used as the template matched with the configuration information. If the judgment result is negative, that is, the matching degree is smaller than or equal to the matching degree threshold value, the preset template corresponding to the matching degree is not used as the template matched with the configuration information.
Calculating the matching degree between each preset template in the template database and the configuration information may be done, for example, as follows. For each piece of information in the configuration information, a preset matching degree between that piece of information and the preset template is calculated: if the configuration information only comprises a page type, the preset matching degree between the page type and the preset template is calculated; if the configuration information comprises a page type and a page color, the preset matching degree between the page type and the preset template and the preset matching degree between the page color and the preset template are both calculated, yielding a preset matching degree corresponding to the page type and one corresponding to the page color. The matching degree between the preset template and the configuration information is then calculated from the preset matching degree of each piece of information and the preset weight of that piece of information in the configuration information. For example, the product of the preset matching degree of each piece of information and its preset weight is obtained as the weighted matching degree of that piece of information, the weighted matching degrees of all pieces of information in the configuration information are summed, and the sum is taken as the matching degree between the preset template and the configuration information.
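A minimal sketch of the weighted-sum matching degree and the threshold filter; the per-item matching function, the weights, and the threshold value are assumptions.

```python
# Illustrative sketch: the per-item matching function, weights, and threshold are
# assumed; only the weighted-sum combination and threshold filter follow the text.
CONFIG_WEIGHTS = {"page_type": 0.4, "page_color": 0.3,
                  "page_element": 0.2, "page_style": 0.1}

def template_matching_degree(config_info, template, item_match, weights=CONFIG_WEIGHTS):
    """config_info: dict of configuration items determined for the target slide.
    item_match(name, value, template) -> preset matching degree in [0, 1]."""
    total = 0.0
    for item_name, item_value in config_info.items():
        preset_degree = item_match(item_name, item_value, template)
        total += preset_degree * weights.get(item_name, 0.0)  # weighted matching degree
    return total  # matching degree between the preset template and the configuration

def matched_templates(config_info, template_database, item_match, threshold=0.6):
    """Keep only the preset templates whose matching degree exceeds the threshold."""
    return [template for template in template_database
            if template_matching_degree(config_info, template, item_match) > threshold]
```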
Step S140, the slide information is respectively adapted to the determined at least one template to obtain at least one slide preview corresponding to the at least one template one by one.

The slide information may be respectively adapted to the determined at least one template to obtain at least one slide preview, and the at least one slide preview is then sent to the client, which displays it in the client interface (as shown in part a of FIG. 2) for the user to preview.
It can be seen that the embodiment of the application can firstly acquire slide information of a target slide, wherein the slide information comprises at least one object, perform object identification processing on the at least one object to obtain an identification result, determine corresponding configuration information according to the identification result and determine at least one template matched with the configuration information, and respectively adapt the slide information to the at least one template to obtain at least one slide preview image corresponding to the at least one template one by one.
Therefore, the application can automatically beautify the single-page slide, thereby effectively solving the problem of lower office efficiency of the user caused by complicated operation and longer time consumption in the whole process of beautifying the slide in the prior art, effectively making up the defects that the common user does not have slide beautifying knowledge and cannot be proficient in using various editing functions in the presentation file when beautifying the slide, and improving the office efficiency of the user.
In some embodiments, the at least one template matching the configuration information includes a plurality of templates, in which case, after performing step S130, as shown in fig. 3, the embodiments of the present application may further perform steps S310 to S330 as follows.
Step S310, the matching degree of each template in the templates and the configuration information is obtained.
Each template has a matching degree with the configuration information, and the matching degree of each template and the configuration information is used as the matching degree corresponding to the template, so that a plurality of matching degrees corresponding to a plurality of templates one by one can be obtained. The manner of obtaining the matching degree between each template and the configuration information can be set by those skilled in the art according to actual situations, which is not limited in the embodiment of the present application.
Step S320, sorting the templates according to the matching degree of the templates and the configuration information to obtain a first sorting result.
The plurality of matching degrees may be ranked in order from high to low, and the plurality of templates may be ranked according to the ranking result of the plurality of matching degrees. The position of each template in the sequence of the templates is the same as the sequence position of the matching degree corresponding to the template in the sequence of the matching degrees.
Step S330, a plurality of slide previews corresponding to the plurality of templates one by one are displayed in a preset window interface according to the first sorting result.
And sequentially displaying a plurality of slide preview images corresponding to the templates one by one in a preset window interface according to the arrangement sequence from high to low in the first ordering result so as to preferentially display the slide preview images corresponding to the templates with higher matching degree.
In some embodiments, the at least one template matching the configuration information includes a plurality of templates, in which case, as shown in fig. 4, the embodiment of the present application may further execute the following steps S410 to S430.
Step S410, user behavior data of each template in the plurality of templates is acquired.
The user behavior data may be, for example, data related to the behavior of the user on the template, such as the number of clicks, selections, previews, etc. of the user on the template.
Step S420, data values corresponding to the user behavior data of each template are counted, and the plurality of templates are sorted in descending order of the data values corresponding to their user behavior data to obtain a second sorting result.

The user behavior data of each template corresponds to a specific data value, such as the number of times the user has clicked, selected, or previewed the template. In this case, the data values of the user behavior data of the plurality of templates are sorted from high to low, and the plurality of templates are sorted according to the sorting result of these data values to obtain the second sorting result. The position of each template in the sorted templates is the same as the position of its data value in the sorted data values.
Step S430, a plurality of slide previews corresponding to the plurality of templates one by one are displayed in a preset window interface according to the second sorting result.
And according to the arrangement sequence from high to low in the second ordering result, sequentially displaying a plurality of slide preview pictures corresponding to the templates one by one in a preset window interface so as to preferentially display templates with higher values of the user behavior data.
In some embodiments, when steps S410-S420 are performed, step S330 may be carried out as follows: a third sorting result is calculated according to the first sorting result and the second sorting result, and the plurality of slide previews corresponding to the plurality of templates one by one are displayed in the preset window interface according to the third sorting result. Specifically, for any template, the matching degree of the template in the first sorting result and the matching degree of the template in the second sorting result may be summed, the sum is taken as the first target matching degree of the template, and the first target matching degrees of all templates are sorted from high to low to obtain the third sorting result. Alternatively, a first weight corresponding to the first sorting result and a second weight corresponding to the second sorting result may be preset; the product of a template's matching degree in the first sorting result and the first weight is taken as the first matching degree of the template, the product of the template's matching degree in the second sorting result and the second weight is taken as its second matching degree, the sum of the first matching degree and the second matching degree is taken as the second target matching degree of the template, and the second target matching degrees of all templates are sorted from high to low to obtain the third sorting result.
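A minimal sketch of this combined ranking, assuming each template carries its matching degree from the first sorting result and a normalized user-behavior score standing in for its value in the second sorting result; the weights and the normalization are assumptions.

```python
# Illustrative sketch: the 0.7/0.3 weights and the normalized behavior score are
# assumptions; only the weighted combination of the two rankings follows the text.
def third_sorting_result(templates, match_degree, behavior_score,
                         first_weight=0.7, second_weight=0.3):
    """templates: list of template ids.
    match_degree[t]: matching degree with the configuration info (first sorting result).
    behavior_score[t]: normalized user behavior data value (second sorting result)."""
    def second_target_matching_degree(t):
        return (match_degree[t] * first_weight        # first matching degree
                + behavior_score[t] * second_weight)  # second matching degree
    # Sort templates from high to low by the combined matching degree.
    return sorted(templates, key=second_target_matching_degree, reverse=True)
```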
In some embodiments, before executing step S110, the server may receive quantity information from the client, where the quantity information is the maximum number of slide previews that can be displayed in the client interface. If the number of slide previews obtained by the server is greater than this maximum, the server sends the top-ranked slide previews to the client according to the sorting result; after the client receives a preset instruction (for example, a refresh instruction), it sends a preset request to the server, and the server then sends further slide previews according to the preset request: if the number of remaining slide previews is still greater than the maximum number displayable in the client interface, the server again sends only the top-ranked remaining previews; otherwise, it sends all of the remaining slide previews to the client.
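A minimal sketch of this server-side paging of the ranked previews against the client's maximum displayable count; the transport between server and client is omitted and all names are assumptions.

```python
# Illustrative sketch: paging the ranked slide previews by the client's maximum
# displayable count; the request/response transport is omitted, names are assumed.
def preview_batches(sorted_previews, max_per_screen):
    """Yield successive batches of previews, highest-ranked batch first."""
    for start in range(0, len(sorted_previews), max_per_screen):
        yield sorted_previews[start:start + max_per_screen]

batches = preview_batches(["p1", "p2", "p3", "p4", "p5"], max_per_screen=2)
first_batch = next(batches)    # sent immediately: ["p1", "p2"]
after_refresh = next(batches)  # sent after a refresh instruction: ["p3", "p4"]
```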
In some embodiments, after executing step S130, the embodiment of the present application may further determine beautification processing information corresponding to the slide information according to the slide information and a target template in at least one template. The target template is any one of at least one template.
Since the target template is able to adapt to the at least one object in the target slide (for example, if the target slide has a title, the target template has a corresponding element adapted to that title), after the target template to which the slide information (i.e., the at least one object in the target slide) is to be adapted is determined, the adaptation result may be automatically beautified according to information such as the number and content of the at least one object, so that the adaptation result is more standard and attractive.
Illustratively, the target slide contains a plurality of objects of the same type, such as a plurality of pictures, controls, text boxes, or tables. In this case, after the plurality of objects are adapted into the target template, the adaptation result of the plurality of objects in the target template may be beautified according to the logical association among the objects of the same type, where the information used for this beautification is the beautification processing information. For example, a plurality of objects of the same type may be re-laid-out or stitched together.
For example, the target slide has a plurality of pictures, and the slide information indicates that the pictures are logically parallel, e.g., all of them are landscape pictures. In this case, the picture-arrangement information generated by performing picture-arrangement processing on the plurality of pictures is the beautification processing information. On this basis, the plurality of pictures can be arranged as a collage (a jigsaw in PPT), i.e., the positions of the pictures are laid out in a preset manner, so that their layout is more reasonable and attractive, which effectively improves the situation in which the pictures would otherwise be scattered randomly in the adaptation result after the slide information is adapted to the target template.
For another example, text content is identified according to the slide information, and when the text content is determined to be flow (process) content, a corresponding flowchart is generated from the text content, where the flow information used to generate the flowchart from the text content is the beautification processing information.
For another example, in the case that there are multiple objects in the target slide, there may be an associated relationship between the objects of different types, such as multiple titles in the target slide, multiple pictures corresponding to the multiple titles one to one, and text in the titles describes the content in the pictures corresponding to the titles. In this case, the corresponding beautification processing information may be generated so that each title and the corresponding picture are laid out together according to the beautification processing information, and a plurality of titles and a plurality of pictures are automatically corresponding.
In some embodiments, after performing step S140, the client may further receive a filtering instruction carrying filtering information, determine at least one target slide preview image matching the filtering information in the at least one slide preview image, and then display only the at least one target slide preview image in the client display interface. The above screening information may be set by the user according to actual conditions.
The following describes the embodiment of the present application by way of a specific example. As shown in fig. 5, taking an execution subject as a server (i.e., a server in fig. 5) as an example, the server receives slide information and quantity information of a target slide from a client, the quantity information being the maximum number of slide preview images that can be presented in a client interface (flow sequence corresponds to ① in fig. 5), and the server can sequentially perform functions of identifying, matching and applying according to the slide information of the target slide, and perform a function of downloading according to a result of performing the applying function and the received quantity information. Wherein, the identified function corresponds to the "identified" part in fig. 5, the matched function corresponds to the "matched" part in fig. 5, the applied function corresponds to the "applied" part in fig. 5, and the downloaded function corresponds to the "downloaded" part in fig. 5.
Specifically, for the "recognition" section in fig. 5, the functions thereof include a function of structure recognition (corresponding to "structure recognition" in fig. 5), a function of AI (artificial intelligence ) picture recognition (corresponding to "AI picture recognition" in fig. 5), a function of AI theme style recognition (corresponding to "AI theme style recognition" in fig. 5), and a function of AI theme color recognition (corresponding to "AI theme color recognition" in fig. 5).
Specifically, in fig. 5, structure recognition means recognizing the type and position of each object, and AI picture recognition means automatically recognizing material objects whose material type is picture, so as to identify the pictures in the target slide. The processes of structure recognition and AI picture recognition may refer to the corresponding description of recognizing the type and position of at least one object in step S120, which is not repeated here.
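A minimal sketch of what the structure-recognition output might look like, assuming raw slide elements arrive as tagged dicts with bounding boxes; the tag-to-type mapping below stands in for the structure recognition and AI picture recognition described above and is not the actual recognizer.

```python
from dataclasses import dataclass

@dataclass
class SlideObject:
    kind: str       # e.g. "title", "body_text", "picture", "table"
    bbox: tuple     # (x, y, w, h) on the page
    content: str = ""

def recognize_structure(raw_objects):
    """Map raw slide elements to (type, position) recognition results.

    `raw_objects` are dicts such as {"tag": "pic", "bbox": (...)}; the
    tag-to-type table is a simplified stand-in for the recognizers.
    """
    tag_to_kind = {"title": "title", "subtitle": "subtitle",
                   "body": "body_text", "pic": "picture", "tbl": "table"}
    return [SlideObject(kind=tag_to_kind.get(o["tag"], "unknown"),
                        bbox=o["bbox"],
                        content=o.get("text", ""))
            for o in raw_objects]

objs = recognize_structure([
    {"tag": "title", "bbox": (80, 40, 800, 90), "text": "Q3 Review"},
    {"tag": "pic", "bbox": (120, 200, 300, 220)},
])
print([(o.kind, o.bbox) for o in objs])
```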
Specifically, in fig. 5, AI theme style recognition means identifying the page style of the target slide, and AI theme color recognition means identifying the page color of the target slide. The processes of identifying the page style and page color may refer to the corresponding description of determining the page style and page color in step S130, which is not repeated here.
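The claims describe the theme-color step as counting the area covered by each object color, taking the largest as the page color, and fine-tuning it when it is too similar to an object color. The sketch below follows that idea, using RGB distance as the (inverse) similarity measure; the threshold value and the darkening nudge are illustrative assumptions.

```python
from collections import defaultdict

def dominant_page_color(objects, distance_threshold=30.0):
    """Pick the page color as the color covering the largest total area.

    Each object is (color_rgb, area). If the winning color is too close
    to another object color (distance below the threshold), nudge it
    slightly darker so the two stay distinguishable; both the threshold
    and the nudge rule are illustrative.
    """
    area_by_color = defaultdict(float)
    for color, area in objects:
        area_by_color[color] += area
    page_color = max(area_by_color, key=area_by_color.get)

    def distance(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

    for color, _ in objects:
        if color != page_color and distance(color, page_color) < distance_threshold:
            page_color = tuple(max(0, c - 40) for c in page_color)  # fine-tune
            break
    return page_color

objects = [((240, 240, 245), 5000.0), ((32, 64, 160), 1200.0), ((250, 250, 250), 300.0)]
print(dominant_page_color(objects))
```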
After the "identify" process in fig. 5 is performed, the "match" function in fig. 5 may be performed based on the obtained result (the flow sequence corresponds to ② in fig. 5). Specifically, for the "matching" section in fig. 5, the functions thereof include a function of structure matching (corresponding to the "structure matching" in fig. 5), a function of matching degree calculation (corresponding to the "matching degree calculation" in fig. 5), a function of beautification property (corresponding to the "beautification property" in fig. 5), and a function of template recommendation (corresponding to the "template recommendation" in fig. 5).
It should be noted that the "match" part in fig. 5 only shows matching based on the page structure (corresponding to the page type obtained from the page structure in step S130) and does not show matching based on other configuration information (such as the page color, page element characteristics, and/or page style mentioned in the above embodiments). In a specific implementation, matching against the other configuration information is performed similarly to matching against the page type, and reference may be made to the page-type case shown in fig. 5.
Specifically, in fig. 5, structure matching means taking the page type as the configuration information; matching degree calculation corresponds to calculating the matching degree between each preset template and the page type in step S130; the beautification characteristic corresponds to taking the preset templates whose matching degree is greater than the matching degree threshold as the at least one template matched with the configuration information (i.e., the templates to be recommended); and template recommendation means recommending those templates to be recommended. The implementation of the matching degree calculation, beautification characteristic, and template recommendation functions may refer to the corresponding descriptions in the above embodiments and is not repeated here.
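A minimal sketch of the matching-degree calculation and threshold filtering, assuming the configuration information and each preset template are plain dicts and that the matching degree is a weighted sum over equal fields; the weights, field names, and threshold are assumptions, not the patented scoring rule.

```python
def match_degree(config, template):
    """Score how well a preset template fits the configuration information.

    Equal values on a key contribute that key's weight to the score.
    """
    weights = {"page_type": 0.4, "page_style": 0.3, "page_color": 0.2, "page_elements": 0.1}
    return sum(w for key, w in weights.items()
               if config.get(key) is not None and config.get(key) == template.get(key))

def recommend_templates(config, template_db, threshold=0.5):
    """Return (template, degree) pairs above the matching-degree threshold, best first."""
    scored = [(t, match_degree(config, t)) for t in template_db]
    matched = [(t, d) for t, d in scored if d > threshold]
    return sorted(matched, key=lambda pair: pair[1], reverse=True)

config = {"page_type": "cover", "page_style": "business", "page_color": "blue"}
template_db = [
    {"name": "A", "page_type": "cover", "page_style": "business", "page_color": "red"},
    {"name": "B", "page_type": "content", "page_style": "business", "page_color": "blue"},
]
for template, degree in recommend_templates(config, template_db):
    print(template["name"], degree)
```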
After the "matching" process in fig. 5 is performed, the "applying" function in fig. 5 may be performed based on the obtained result (the flow sequence corresponds to ③ in fig. 5). Specifically, for the "apply it" section in fig. 5, the functions thereof include a function of downloading a user document (corresponding to the "download user document" in fig. 5), a function of selecting an apply template (corresponding to the "select apply template" in fig. 5), a function of applying characteristics (corresponding to the "apply characteristics" in fig. 5), and a function of generating templates and thumbnails (corresponding to the "generate templates and thumbnails" in fig. 5).
Specifically, in fig. 5, downloading the user document means obtaining the slide information from the client; selecting the apply template means selecting a target template from the templates to be recommended; and applying characteristics corresponds to determining, after the at least one template matching the configuration information has been determined (i.e., after step S130), the beautification processing information corresponding to the slide information according to the slide information and the target template among the at least one template, as in the above embodiments. Generating templates and thumbnails corresponds to the execution of step S140. The implementation of the user-document download, characteristic application, and template-and-thumbnail generation functions may refer to the corresponding descriptions in the above embodiments and is not repeated here.
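A rough sketch of the "apply" step under the assumption that each template exposes typed placeholders: the user's objects are moved into the matching placeholders and a lightweight preview record stands in for the rendered thumbnail. Real code would rasterize the adapted page; that part is omitted, and the placeholder matching by object type is an assumption.

```python
def apply_template(slide_objects, template):
    """Fill template placeholders with the user's slide objects by type.

    `template["placeholders"]` maps an object type to a target position;
    objects with no matching placeholder keep their original position.
    The resulting dict stands in for the adapted slide from which a
    thumbnail would then be rendered.
    """
    placeholders = dict(template.get("placeholders", {}))
    adapted = []
    for obj in slide_objects:
        target = placeholders.pop(obj["kind"], None)
        adapted.append({**obj, "bbox": target if target else obj["bbox"]})
    return {"template": template["name"], "objects": adapted}

def make_preview(adapted_slide, size=(320, 180)):
    """Produce a minimal preview record (real code would rasterize the page)."""
    return {"template": adapted_slide["template"], "thumb_size": size,
            "object_count": len(adapted_slide["objects"])}

slide_objects = [{"kind": "title", "bbox": (0, 0, 400, 80), "text": "Q3 Review"},
                 {"kind": "picture", "bbox": (50, 120, 300, 200)}]
template = {"name": "A", "placeholders": {"title": (80, 40, 800, 90),
                                          "picture": (120, 200, 640, 300)}}
print(make_preview(apply_template(slide_objects, template)))
```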
After the "apply" process in fig. 5 is performed, the "download" function in fig. 5 may be performed based on the obtained result (the flow sequence corresponds to ④ in fig. 5). Specifically, for the "download" section in fig. 5, the functions thereof include a function of thumbnail download (corresponding to "thumbnail download" in fig. 5), a function of document download (corresponding to "document download" in fig. 5).
Specifically, in fig. 5, thumbnail download means downloading the at least one slide preview image obtained in the "apply" step. Document download means downloading, according to the quantity information received from the client, the number of slide preview images corresponding to that quantity information from the at least one slide preview image, and finally sending the downloaded slide preview images to the client (flow sequence ⑤ in fig. 5).
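The "download" step then simply truncates the ranked preview list to the quantity the client interface can show before returning it (flow sequences ④ and ⑤); a one-function sketch follows, with the network transport omitted.

```python
def select_previews_for_client(previews, quantity_info):
    """Return at most `quantity_info` previews, keeping their ranking order."""
    return previews[:max(0, int(quantity_info))]

previews = [{"template": "A"}, {"template": "B"}, {"template": "C"}]
print(select_previews_for_client(previews, 2))   # client interface shows two
```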
It should be noted that the foregoing embodiments are described from the perspective of execution at the server; execution at the client is similar, and reference may be made to the server-side embodiments described above, which are not repeated here.
< Device example >
Fig. 6 is a functional block diagram of a slide processing apparatus according to some embodiments of the present application. As shown in fig. 6, the slide processing apparatus 60 may include:
an acquisition module 61, configured to acquire slide information of a target slide, where the slide information includes at least one object;
a recognition module 62, configured to perform object recognition processing on the at least one object to obtain a recognition result;
a determining module 63, configured to determine corresponding configuration information according to the recognition result and determine at least one template matched with the configuration information; and
a processing module 64, configured to adapt the slide information to the at least one template respectively, so as to obtain at least one slide preview image in one-to-one correspondence with the at least one template.
In the present application, the acquisition module first acquires slide information of a target slide, where the slide information includes at least one object; the recognition module then performs object recognition processing on the at least one object to obtain a recognition result; the determining module determines corresponding configuration information according to the recognition result and determines at least one template matched with the configuration information; and finally the processing module adapts the slide information into the at least one template respectively to obtain at least one slide preview image in one-to-one correspondence with the at least one template. The application can therefore automatically beautify a single-page slide, which effectively solves the problem in the prior art that beautifying a slide involves cumbersome operations and takes a long time, resulting in low user office efficiency; it also compensates for the fact that ordinary users lack slide beautification knowledge and cannot proficiently use the various editing functions of presentation software, thereby improving user office efficiency.
Optionally, the recognition module includes a first recognition sub-module configured to recognize the type of the at least one object and take it as a recognition result, a second recognition sub-module configured to recognize the position of the at least one object and take it as a recognition result, and a third recognition sub-module configured to recognize the color of the at least one object and take it as a recognition result.
Optionally, an object includes a structural object or a material object. The first recognition sub-module is configured to, when the object is a structural object, recognize the structure type of the structural object, where the structure type includes any one of a title, a subtitle, and body text; and, when the object is a material object, recognize the material type of the material object, where the material type includes any one of text, picture, video control, audio control, and table.
Optionally, the first recognition sub-module is further configured to, after recognizing the material type of a material object and when the recognized material type is a target material type, identify at least one content element included in the material object and determine the content of the material object according to the at least one content element, where the target material type includes any one of text, picture, and table.
Optionally, the determining module includes a first determining sub-module configured to determine the corresponding configuration information according to the recognition result, and a second determining sub-module configured to determine the at least one template matched with the configuration information, where the second determining sub-module is configured to calculate the matching degree between each preset template in a template database and the configuration information, and to take a preset template whose matching degree is greater than a matching degree threshold as a template matched with the configuration information.
Optionally, when the at least one template includes a plurality of templates, the apparatus further includes a matching degree acquisition module configured to acquire, after the determining module determines the at least one template matched with the configuration information, the matching degree between each of the plurality of templates and the configuration information; a first sorting module configured to sort the plurality of templates according to their matching degrees with the configuration information to obtain a first sorting result; and a display module configured to display, in a preset window interface according to the first sorting result, a plurality of slide preview images in one-to-one correspondence with the plurality of templates.
Optionally, the apparatus further includes a data acquisition module configured to acquire user behavior data of each of the plurality of templates before the display module displays, according to the first sorting result, the plurality of slide preview images in one-to-one correspondence with the plurality of templates in the preset window interface; a statistics module configured to count the data value corresponding to the user behavior data of each template and sort the plurality of templates in descending order of these data values to obtain a second sorting result; and a calculation module configured to calculate a third sorting result from the first sorting result and the second sorting result, the display module being configured to display, in the preset window interface according to the third sorting result, the plurality of slide preview images in one-to-one correspondence with the plurality of templates.
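The text does not fix how the first and second sorting results are combined into the third; the sketch below assumes a simple rank-sum combination with ties broken by the matching-degree ranking, which is one plausible way to compute the third sorting result.

```python
def combine_rankings(first_order, second_order):
    """Merge two rankings of the same templates into a third ranking.

    `first_order` and `second_order` are lists of template names, best
    first. The template with the smallest summed rank position wins;
    ties fall back to the matching-degree (first) ranking.
    """
    rank1 = {name: i for i, name in enumerate(first_order)}
    rank2 = {name: i for i, name in enumerate(second_order)}
    return sorted(first_order, key=lambda name: (rank1[name] + rank2[name], rank1[name]))

by_match_degree = ["A", "B", "C"]          # first sorting result
by_user_behavior = ["C", "A", "B"]         # second sorting result
print(combine_rankings(by_match_degree, by_user_behavior))  # third sorting result
```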
Optionally, when the at least one template includes a plurality of templates, the apparatus further includes a data acquisition module configured to, after the determining module determines the at least one template matched with the configuration information, acquire user behavior data of each of the plurality of templates, count the data value corresponding to the user behavior data of each template, and sort the plurality of templates in descending order of these data values to obtain a fourth sorting result; and a display module configured to display, in a preset window interface according to the fourth sorting result, a plurality of slide preview images in one-to-one correspondence with the plurality of templates.
Optionally, the apparatus further includes a beautification module configured to, after the determining module determines the at least one template matched with the configuration information, determine beautification processing information corresponding to the slide information according to the slide information and a target template among the at least one template; the processing module is configured to beautify the slide information according to the beautification processing information and adapt the beautified slide information to a second target template to obtain the corresponding slide preview image.
Fig. 7 is a schematic diagram of a hardware architecture of an electronic device according to one embodiment.
As shown in fig. 7, the electronic device 700 includes a processor 701 and a memory 702, where the memory 702 is configured to store an executable computer program, and the processor 701 is configured to execute, under the control of the computer program, the method of any of the method embodiments above.
The above modules of the slide processing apparatus 60 may be implemented by the processor 701 executing the computer program stored in the memory 702 in this embodiment, or may be implemented by other circuit structures, which is not limited here.
The electronic device 700 may be a mobile phone, a personal digital assistant (PDA), a wireless handheld device, a tablet computer, a personal computer (PC), or the like.
The present application may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized with state information of the computer readable program instructions so as to execute the computer readable program instructions, thereby implementing aspects of the present application.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the application is defined by the appended claims.

Claims (11)

Determining corresponding configuration information according to a recognition result and at least one template matched with the configuration information, wherein the configuration information comprises a page type, a page color, page element characteristics and a page style, determining corresponding configuration information according to the recognition result comprises determining the page type according to the type of at least one object and the position of at least one object, determining the page color according to the color of at least one object, determining the page element characteristics according to the type of at least one object, determining the page style according to the page color, the page type and the page element characteristics, wherein the method comprises the steps of presetting a color style mapping table, a type style mapping table and a page element style mapping table, determining a first preset page style according to the page color, determining a second preset page style according to the page type, determining a third preset page style according to the page element characteristics, and determining the page style according to the first preset page style, the second preset page style and the third preset page style;
The system comprises an acquisition module and a recognition module, wherein the acquisition module is used for acquiring slide information of a target slide, the target slide being a slide to be beautified selected from at least one slide of a presentation file, and the slide information comprising at least one object; the recognition module is used for carrying out object recognition processing on the at least one object to obtain a recognition result, and is further configured to count the color area corresponding to each color of the at least one object, take the color with the largest color area among all the counted color areas as the page color, and, when the similarity between the page color and the color of an object in the target slide is greater than a preset similarity threshold, fine-tune the determined page color according to the color of that object so that the similarity between the page color and the color of the object becomes smaller than or equal to the preset similarity threshold;
The system comprises a determining module, a determining module and a processing module, wherein the determining module is used for determining corresponding configuration information according to a recognition result and determining at least one template matched with the configuration information, the configuration information comprises a page type, a page color, page element characteristics and a page style, the determining of the corresponding configuration information according to the recognition result comprises determining the page type according to the type of at least one object and the position of at least one object, determining the page color according to the color of at least one object, determining the page element characteristics according to the type of at least one object, determining the page style according to the page color, the page type and the page element characteristics, wherein the determining of the page style comprises presetting a color style mapping table, a type style mapping table and a page element style mapping table, determining a first preset page style according to the page color, determining a second preset page style according to the page type, determining a third preset page style according to the page element characteristics, and determining the page style according to the first preset page style, the second preset page style and the third preset page style;
The method comprises the steps of determining beautification processing information corresponding to the slide information according to the slide information and any one target template among the at least one template; generating, when the slide information indicates that there are a plurality of pictures and the association relationship among the plurality of pictures is a parallel relationship, the beautification processing information by carrying out picture-collage (jigsaw) processing on the plurality of pictures; and generating, when the target slide has a plurality of titles and a plurality of pictures in one-to-one correspondence with the plurality of titles, where the text of each title describes the content of the picture corresponding to it, the beautification processing information by carrying out the collage processing on the plurality of pictures;
CN202111556014.8A | Priority date: 2021-12-17 | Filing date: 2021-12-17 | Slide processing method, device and electronic equipment | Active | CN114254610B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111556014.8A (CN114254610B) | 2021-12-17 | 2021-12-17 | Slide processing method, device and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111556014.8A (CN114254610B) | 2021-12-17 | 2021-12-17 | Slide processing method, device and electronic equipment

Publications (2)

Publication Number | Publication Date
CN114254610A (en) | 2022-03-29
CN114254610B (en) | 2025-09-23

Family

ID=80795726

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111556014.8A (Active, CN114254610B) | Slide processing method, device and electronic equipment | 2021-12-17 | 2021-12-17

Country Status (1)

Country | Link
CN (1) | CN114254610B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106934336A (en)* | 2015-12-31 | 2017-07-07 | Zhuhai Kingsoft Office Software Co., Ltd. | Method and device for slide recognition
CN108073680A (en)* | 2016-11-10 | 2018-05-25 | Google LLC | Generating presentation slides with distilled content
CN109213982A (en)* | 2017-06-29 | 2019-01-15 | 易享信息技术有限公司 | Maintenance of a color theme for a presentation
CN113094552A (en)* | 2021-03-19 | 2021-07-09 | Beijing Dajia Internet Information Technology Co., Ltd. | Video template searching method and device, server and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106557289A (en)* | 2015-09-29 | 2017-04-05 | Zhuhai Kingsoft Office Software Co., Ltd. | Slide display method, system and device
CN111930976B (en)* | 2020-07-16 | 2024-05-28 | Ping An Technology (Shenzhen) Co., Ltd. | Presentation generation method, device, equipment and storage medium
CN111881307B (en)* | 2020-07-28 | 2024-04-05 | Ping An Technology (Shenzhen) Co., Ltd. | Presentation generation method and device, computer equipment and storage medium


Also Published As

Publication number | Publication date
CN114254610A (en) | 2022-03-29

Legal Events

Code | Title/Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
