
Server parameter configuration method, electronic device, storage medium and program product

Info

Publication number
CN120386764B
Authority
CN
China
Prior art keywords
word block
configuration
word
sample
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510874229.6A
Other languages
Chinese (zh)
Other versions
CN120386764A (en)
Inventor
张晨
李锋
张玉峰
王帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IEIT Systems Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd
Priority to CN202510874229.6A
Publication of CN120386764A
Application granted
Publication of CN120386764B
Legal status: Active (current)
Anticipated expiration


Abstract


The present invention discloses a server parameter configuration method, an electronic device, a storage medium and a program product, and relates to the field of server technology. The method includes constructing a common key-value from the standard configuration parameter subclasses of the server information to be configured; determining the correspondence between each source configuration item of a source server and the common keys by identifying word blocks with the same semantics, and mapping the configuration content of each source configuration item to the value of the corresponding common key on the premise that semantically identical configuration items share the same configuration content; and finally, in the same manner, mapping the filled common key-value onto the to-be-processed configuration items of the target server whose parameters need to be configured. The present invention solves the problem that related technologies cannot configure server parameters efficiently and accurately, and realizes automatic and accurate replication of server configurations across manufacturers and models.

Description

Server parameter configuration method, electronic device, storage medium and program product
Technical Field
The present invention relates to the field of server technologies, and in particular, to a server parameter configuration method, an electronic device, a computer readable storage medium, and a computer program product.
Background
A data center contains computing equipment represented by servers, and during operation and maintenance all servers need to be set to the same configuration items. With the server parameter configuration methods in the related art, because the configurations and models of different servers differ significantly, efficient and accurate parameter configuration cannot be achieved.
Disclosure of Invention
The invention provides a server parameter configuration method, an electronic device, a computer readable storage medium and a computer program product, which realize automatic and accurate copying of server configurations across manufacturer models.
In order to solve the above technical problems, the invention provides the following technical scheme:
In one aspect, the present invention provides a method for configuring parameters of a server, including:
Redefining the information to be configured of the data center's servers into a plurality of standard configuration parameter subclasses, and constructing a common key-value according to each standard configuration parameter subclass, wherein each standard configuration parameter subclass corresponds to at least one configuration item.
Acquiring each source configuration item of a source server whose parameters are already configured, determining the correspondence between each source configuration item and the common keys by identifying word blocks with the same semantics in the source configuration items and the common keys of the common key-value, and mapping the configuration content of each source configuration item to the value of the corresponding common key on the premise that semantically identical configuration items share the same configuration content.
Determining the correspondence between each to-be-processed configuration item of the target server and the common keys by identifying word blocks with the same semantics, and generating corresponding configuration content for each to-be-processed configuration item based on the mapped common key-value, so as to complete the parameter configuration of the target server.
The invention also provides an electronic device comprising a memory and a processor for implementing the steps of any one of the server parameter configuration methods described above when executing a computer program stored in the memory.
The invention also provides a computer readable storage medium storing a computer program which when executed by a processor implements the steps of any of the server parameter configuration methods described above.
The invention finally provides a computer program product comprising computer programs/instructions which when executed by a processor implement the steps of any of the server parameter configuration methods described above.
The technical scheme provided by the invention has the following advantages. By identifying semantic information between the already-configured configuration items of the source server and the standard configuration parameters defined by the common key, the problem of synonymy between the source server and the standard configuration parameters is solved, and semantic identification between the common key and the configuration items of the target server shields the differences in manufacturer parameters. After the double-layer mapping, the mapping relation from source configuration items to target configuration items can be determined, which resolves the problem that the semantic gap between the source server and the target server is too large; converting through the common key effectively improves the accuracy and efficiency of mapping source configuration items to target configuration items. The current configuration information of the source server is thereby converted into all configuration items and copied into the configuration parameters required by the target server, realizing automatic and accurate replication of server configurations across manufacturer models. Manual intervention in the server parameter configuration flow is reduced to a minimum, the efficiency and accuracy of configuring servers are greatly improved, configuration failures or deviations caused by manual operation errors are effectively avoided, and the period from server preparation to going online is effectively shortened.
In addition, the invention also provides corresponding electronic equipment, computer readable storage medium and computer program product for realizing the server parameter configuration method, so that the method has more practicability, and the electronic equipment, the computer readable storage medium and the computer program product have corresponding advantages.
Drawings
For a clearer description of the present invention or of the technical solutions related thereto, the following brief description will be given of the drawings used in the description of the embodiments or of the related art, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained from these drawings without the inventive effort of a person skilled in the art.
FIG. 1 is a schematic diagram of a hardware framework to which the server parameter configuration method of the present invention is applicable;
Fig. 2 is a schematic flow chart of a server parameter configuration method provided by the present invention;
FIG. 3 is a schematic diagram of a correspondence relationship between a standard configuration parameter subclass and a configuration item thereof in an exemplary application scenario provided by the present invention;
FIG. 4 is a schematic diagram of two-layer mapping in an exemplary application scenario provided by the present invention;
FIG. 5 is a flowchart illustrating another method for configuring server parameters according to the present invention;
FIG. 6 is a block diagram of an exemplary embodiment of a server parameter configuration apparatus according to the present invention;
Fig. 7 is a block diagram of an exemplary embodiment of an electronic device according to the present invention.
Detailed Description
In order to make the technical scheme of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and the detailed description. The terms "first," "second," and the like in the description and in the above-described figures are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations of the two, are intended to cover a non-exclusive inclusion. The term "exemplary" means "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
With the rapid development of artificial intelligence, big data and cloud technology, data centers with high-availability guarantees that can efficiently store and manage large-scale data have emerged to meet exponentially growing data-processing demands. A data center provides an operating environment for centrally placed electronic information devices and performs centralized storage, computation and exchange of data; it is the core infrastructure underlying cloud computing and comprises computing (i.e., data-processing) equipment represented by servers, together with supporting facilities that ensure normal operation, such as power supply and distribution systems and refrigeration systems. In data center operation and maintenance, all servers need to be set to the same configuration items. Because servers of different models from different manufacturers coexist in a data center, their configurations differ in ways such as non-uniform parameter names, inconsistent structure and ambiguous semantics. In the configuration process of data center servers, the related art therefore still requires experienced technical staff to intervene, for example by manually configuring certain parameters on a server management page, so that the configuration efficiency of the data center is low, the error rate is high and the scalability is poor.
In view of this, in order to realize automatic and accurate replication of server configurations across manufacturer models and to solve the problems that manual intervention places high demands on operator experience, is time-consuming and risks inaccurate configuration, the invention proposes a double-layer mapping approach: standard semantic labels are matched between the configuration items of the configured server and the common key, and the common key then determines the mapping relation between the configuration items of the source server and the target server and maps onto the configuration items of the target server. This solves the problem that the semantic gap between the source server and the target server is too large, improves the accuracy of mapping source configuration items to target configuration items, and realizes automatic and accurate copying of server configurations across manufacturer models. The specific application environment or hardware architecture on which the execution of the server parameter configuration method depends is described herein. In the following, with reference to fig. 1, some possible application scenarios of the technical solution of the present invention are described by way of example, where a data center includes a plurality of servers that must share an identical configuration, and the configuration process for each server may include the following:
One server in the data center is randomly selected as the source server 101, and the other servers serve as target servers 102. The information to be configured for the data center's servers is redefined into a plurality of standard configuration parameter subclasses, each corresponding to at least one configuration item, a common key-value is constructed from these subclasses, and the source server's parameters are configured. The parameter configuration process for any target server is as follows: each source configuration item of the source server 101, whose parameters are already configured, is matched against the common keys of the common key-value by identifying word blocks with the same semantics, which determines the correspondence between each source configuration item and the common keys; on the premise that semantically identical configuration items share the same configuration content, the configuration content of each source configuration item is mapped to the value of the corresponding common key; the correspondence between each to-be-processed configuration item of the target server 102 and the common keys is then determined by the same semantic word-block identification, and corresponding configuration content is generated for each to-be-processed configuration item based on the mapped common key-value, completing the parameter configuration of the target server 102.
It should be noted that the above application scenario is only shown for the convenience of understanding the idea and principle of the present invention, and the embodiment of the present invention is not limited in any way. Rather, embodiments of the invention may be applied to any scenario where applicable. Having described aspects of the invention, various non-limiting embodiments of the invention are described in detail below with reference to the drawings and detailed description. Referring to fig. 2 first, fig. 2 is a flow chart of a server parameter configuration method provided in this embodiment, and this embodiment may include the following:
S201, redefining the information to be configured of the data center's servers into a plurality of standard configuration parameter subclasses, and constructing a common key-value according to each standard configuration parameter subclass.
In this step, the information to be configured is, for the current data center's business scenario, the full set of configuration parameters that need to be configured on the servers. For convenience of subsequent mapping, all of these configurations may be redefined and generalized into a number of configuration subclasses, defined here as standard configuration parameter subclasses. For example, the configuration of each server may be generalized into subclasses such as NTP (Network Time Protocol) configuration, service configuration, alarm configuration, user configuration, mailbox configuration, domain name system configuration, fan configuration, lightweight directory access protocol configuration and log configuration, where NTP is a server configuration function that synchronizes the server's current time by configuring one or more time servers. Each standard configuration parameter subclass corresponds to at least one configuration item, and the configuration items of each subclass are obtained as shown in fig. 3. Each standard configuration parameter subclass and its configuration items are converted into a key-value, defined here as the common key-value; for example, all configuration items of all standard configuration parameter subclasses can be taken as a whole and converted into keys, while the values may be null or take default values. The standard configuration parameters and configuration items can be described in any language, for example supporting unified semantic mapping of configuration items expressed in multiple languages, such as the Chinese and English forms of "Network Time Server"; cross-language configuration replication accuracy is achieved through Unicode encoding and word-block alignment.
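As a concrete illustration of this step, the sketch below shows one possible in-memory form of such a common key-value; the key names are taken from the NTP example used later in this description, the log item is an assumption, and the layout is not prescribed by the method itself.

```python
# Hypothetical sketch of the common key-value built from the standard
# configuration parameter subclasses: every configuration item of every subclass
# becomes a key, and the values start as None (null) or a default, to be filled
# in later by the mapping from the source server.
common_key_value = {
    # NTP standard configuration parameter subclass (names from the example below)
    "SERVER_NTP_IP1": None,
    "SERVER_NTP_IP2": None,
    "MODE": None,
    "NTPENABLE": None,
    # other subclasses (service, alarm, user, mailbox, DNS, fan, LDAP, log ...)
    "LOG_RETENTION_DAYS": 30,   # illustrative default; the value check uses 0-365
}
```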
S202, acquiring each source configuration item of the source server whose parameters are already configured, determining the correspondence between each source configuration item and the common keys by identifying word blocks with the same semantics in the source configuration items and the common keys, and mapping the configuration content of each source configuration item to the value of the corresponding common key on the premise that semantically identical configuration items share the same configuration content.
In this step, configuration information of the source server is obtained, including at least each configuration item and its corresponding configuration content. For convenience of distinction, the configuration items of the source server are defined as source configuration items; they may include NTP configuration, service configuration, alarm configuration, log configuration, mailbox configuration, domain name system configuration, fan configuration, lightweight directory access protocol configuration, user configuration and so on. The obtained configuration information may likewise be converted into a key-value, with the source configuration item as the key and its configuration content as the value. The source configuration items and configuration content can be described in any language, for example the Chinese and English forms of "Network Time Server", and cross-language configuration replication accuracy is achieved through Unicode encoding and word-block alignment.
It can be understood that, because server manufacturers differ and server models differ, the parameter names of server configuration items also differ. To mask the difference between the source server and the target server, the correspondence between each source configuration item of the source server and the configuration items of each standard configuration parameter subclass is established first, that is, the cases in which a source configuration item and a configuration item of a standard configuration parameter subclass are in essence the same configuration item but use different configuration names are identified. For accurate recognition, the common keys and the source configuration items are recognized at the level of word blocks, and any related technique capable of recognizing word-block semantics can be used without affecting the realization of the invention. Word blocks with the same semantics in this step are two word blocks that in essence express the same meaning, which includes descriptions of the same object in different languages as well as descriptions that are closely similar. After the correspondence between each source configuration item and the configuration items of the standard configuration parameter subclasses is determined, the configuration content of a source configuration item that has such a correspondence, i.e., the value of its key-value pair, is used as the value of the corresponding configuration item of the common key, which completes the mapping of the source server's source configuration items onto the common key.
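A minimal sketch of this first mapping layer is given below; `find_same_semantic_key` stands in for the word-block semantic matching described above and is an assumed helper, not part of the claimed method or any real library.

```python
# First mapping layer: source configuration items -> common key-value.
# Semantically identical configuration items share the same content, so the
# content of a matched source item becomes the value of the matched common key.
def map_source_to_common(source_kv: dict, common_kv: dict, find_same_semantic_key) -> dict:
    mapped = dict(common_kv)
    for source_item, content in source_kv.items():
        common_key = find_same_semantic_key(source_item, list(common_kv))
        if common_key is not None:
            mapped[common_key] = content
    return mapped

# e.g. map_source_to_common({"BMC_TIMER_IP1": "192.168.1.1"}, common_key_value, matcher)
```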
S203, determining the correspondence between each to-be-processed configuration item of the target server and the common keys by identifying word blocks with the same semantics, and generating corresponding configuration content for each to-be-processed configuration item based on the mapped common key-value, so as to complete the parameter configuration of the target server.
The target server and the source server are both servers in the data center, and the target server is a server whose parameters need to be configured according to the source server. Once step S201 has extracted the keys covering the configuration of all server types and used them as the common keys, and step S202 has mapped the configuration items of the source server onto the common keys, the configuration items that the target server needs to configure can be collected in a manner similar to step S202; for convenience of description they are defined as to-be-processed configuration items. The correspondence between the standard configuration parameter subclasses and each to-be-processed configuration item is determined by the same identification method as above, and the common keys are mapped onto the to-be-processed configuration items of the target server, as shown in fig. 4; the twice-mapped configuration items serve as the parameter content configured for the target server's to-be-processed configuration items. After the double-layer mapping, the mapping relation between each configuration item of the source server and the target server is determined, the current configuration of the source server is converted into all configuration items, and the configuration parameters required by the target server are copied. For example, if the value of SERVER_NTP_IP1 of the source server is 192.168.1.1 and the configuration-item mapping result is NTP_SERVER1, i.e., SERVER_NTP_IP1 of the source server corresponds to NTP_SERVER1 of the target server, then NTP_SERVER1 among the target server's to-be-processed configuration items can be configured as 192.168.1.1. After all configuration items are mapped, the configuration item parameters of the target server are generated, and the to-be-processed configuration items of the target server are configured accordingly by calling Redfish (a standard management protocol) interfaces.
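The sketch below illustrates applying one twice-mapped NTP item to the target server over Redfish; the resource path and payload follow the common Redfish ManagerNetworkProtocol pattern, but the exact path, manager identifier and attribute names vary by vendor and model, so treat them as assumptions to adapt.

```python
import requests

# Apply a mapped NTP configuration to the target server's BMC over Redfish.
# The path /redfish/v1/Managers/1/NetworkProtocol and the NTP payload shape are
# typical but vendor-dependent; verify=False is only for lab use.
def apply_ntp_config(bmc_ip: str, session: requests.Session, ntp_server1: str) -> None:
    url = f"https://{bmc_ip}/redfish/v1/Managers/1/NetworkProtocol"
    payload = {"NTP": {"ProtocolEnabled": True, "NTPServers": [ntp_server1]}}
    resp = session.patch(url, json=payload, verify=False, timeout=30)
    resp.raise_for_status()

# e.g. NTP_SERVER1 of the target maps back to the source value 192.168.1.1:
# apply_ntp_config("10.0.0.20", authenticated_session, "192.168.1.1")
```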
In the technical scheme provided by this embodiment, identifying semantic information between the already-configured configuration items of the source server and the standard configuration parameters defined by the common key solves the problem of synonymy between the source server and the standard configuration parameters, and semantic identification between the common key and the configuration items of the target server shields the differences in manufacturer parameters. After the double-layer mapping, the mapping relation from source configuration items to target configuration items can be determined, which resolves the problem that the semantic gap between the source server and the target server is too large; converting through the common key effectively improves the accuracy and efficiency of mapping source configuration items to target configuration items. The current configuration information of the source server is converted into all configuration items and copied into the configuration parameters required by the target server, realizing automatic and accurate replication of server configurations across manufacturer models, reducing manual intervention in the data center's server parameter configuration flow to a minimum, greatly improving the efficiency and accuracy of configuring servers, effectively avoiding configuration failures or deviations caused by manual operation errors, and effectively shortening the period from server preparation to going online.
The above embodiment does not limit in any way how the configuration information of each server of the data center is obtained; this embodiment further provides an exemplary implementation, which may include the following:
A data collection component can be preset on each server of the data center and invoked to collect the information to be configured for the data center's servers, each source configuration item of the source server and the to-be-processed configuration items of the target server. The data collection component encapsulates at least the Redfish API (Application Programming Interface) and scripts. The user manages the server via the Redfish API, which is designed in the standardized RESTful (Representational State Transfer) architectural style, to perform operations such as acquiring sensor data, configuring parameters and performing power control. Scripts may be written in any scripting language and, combined with Redfish client libraries, obtain information about the baseboard management controller.
As can be seen from the above, the present embodiment can simplify the resource management of the physical server through Redfish, easily manage each resource of the physical server, improve the monitoring efficiency of the data center server, and obtain the server configuration information efficiently and accurately.
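A hedged sketch of such a data collection helper is shown below: it reads a Redfish resource from a baseboard management controller using plain HTTP basic authentication. The service root /redfish/v1/ is standard, but which resources are walked from it, and how sessions are authenticated in practice, are assumptions.

```python
import requests

# Collect a configuration resource from a BMC through the Redfish RESTful API.
# Basic auth and certificate verification settings here are illustrative only.
def collect_bmc_resource(bmc_ip: str, username: str, password: str,
                         path: str = "/redfish/v1/") -> dict:
    url = f"https://{bmc_ip}{path}"
    resp = requests.get(url, auth=(username, password), verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

# e.g. collect_bmc_resource("10.0.0.10", "admin", "password", "/redfish/v1/Managers/1")
```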
The above embodiments do not limit how the correspondence between each source configuration item and the common keys is determined; this embodiment further provides an exemplary implementation, which may include the following:
As shown in FIG. 4, the word blocks of each source configuration item and of the common key are extracted respectively to generate the corresponding source word block sequences and the common word block sequence; the semantic quantization value of each word block in each source word block sequence and in the common word block sequence is calculated based on word block positions and word block initial values; the degree of similarity between the semantic quantization value of each word block of a source word block sequence and that of each word block of the common word block sequence is calculated; and two word blocks that satisfy a preset similarity condition are taken to be word blocks with the same semantics.
Each source configuration item and each word block of the common key are extracted by a word segmentation method, the extracted word blocks form a sequence in the order of extraction, and identification marks are added before and after the sequence to facilitate automatic detection. That is, word segmentation is performed on each source configuration item to obtain a group of source configuration word blocks corresponding to that source configuration item, and a sequence start identifier and a sequence end identifier are added to each group of source configuration word blocks to obtain each source word block sequence; word segmentation is performed on the common key to obtain a group of common word blocks, and a sequence start identifier and a sequence end identifier are added to obtain the common word block sequence. Word segmentation is likewise performed on each to-be-processed configuration item to obtain a group of to-be-processed word blocks, and a sequence start identifier and a sequence end identifier are added to obtain the to-be-processed word block sequence. By way of example, corresponding semantic sequences can be generated automatically for source configuration items, common keys and to-be-processed configuration items by a language model such as BERT (Bidirectional Encoder Representations from Transformers), a bidirectional encoder representation based on the Transformer network model. Taking BMC_NTP_Servers1 as an example of a source configuration item name, the BERT model segments it into the word blocks BMC, _NTP, _Servers and 1, which are combined into a word block sequence $\{t_1, t_2, \ldots, t_n\}$, where $n$ is the number of word blocks; adding the sequence start identifier [CLS] and the sequence end identifier [SEP] gives the final source word block sequence $w = \{[\mathrm{CLS}], t_1, t_2, \ldots, t_n, [\mathrm{SEP}]\}$.
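The following sketch reproduces this sequence construction with the public WordPiece tokenizer from the Hugging Face transformers library; because it uses the generic bert-base-uncased vocabulary rather than one tailored to server configuration names, the exact sub-word splits will differ from the BMC / _NTP / _Servers / 1 example above.

```python
from transformers import BertTokenizer

# Build a word block sequence: WordPiece segmentation plus the [CLS]/[SEP]
# start and end identifiers described above.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def to_word_block_sequence(config_item_name: str) -> list:
    blocks = tokenizer.tokenize(config_item_name)   # word segmentation into word blocks
    return ["[CLS]"] + blocks + ["[SEP]"]           # add sequence start/end identifiers

print(to_word_block_sequence("BMC_NTP_Servers1"))
```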
This embodiment also provides a process for calculating the semantic quantization value of each word block. Each source word block sequence is processed as follows; for convenience of description, the sequence being processed is defined as the current source word block sequence. The initial semantic quantization value of each word block in the current source word block sequence is determined from the initial value of the word block, which may be assigned by a preset initial-value generation method such as a random number or set in another way such as a default value, and from the position quantization value of the word block in the current source word block sequence; the position quantization value represents the position of the word block in the sequence and may be quantified by its position index. Based on preset word block initial-value optimization information, for example by multiplying by an adjustment coefficient, the initial semantic quantization values of the current source word block sequence are adjusted to obtain the semantic quantization value of each word block of the current source word block sequence. Likewise, the initial semantic quantization value of each word block in the common word block sequence is determined from its initial value and its position quantization value, and is adjusted according to the word block initial-value optimization information to obtain the semantic quantization value of each word block of the common word block sequence. Similarly, the initial semantic quantization values of the word blocks of the current to-be-processed word block sequence are determined from their initial values and position quantization values, and adjusted based on the word block initial-value optimization information to obtain the semantic quantization values of the word blocks of the current to-be-processed word block sequence.
To make the implementation of this embodiment clearer to those skilled in the art, NTP configuration is taken as an example: the NTP time server configuration item obtained from the source server is BMC_TIMER_IP1, i.e., the source configuration item is BMC_TIMER_IP1, and the common key includes SERVER_NTP_IP1, SERVER_NTP_IP2, MODE and NTPENABLE (whether the NTP configuration takes effect). First, the WordPiece (sub-word segmentation) algorithm is used to segment BMC_TIMER_IP1 and the common key SERVER_NTP_IP1, giving the corresponding sequences $w_1 = \{[\mathrm{CLS}], \mathrm{BMC}, \_\mathrm{TIMER}, \_\mathrm{IP}, 1, [\mathrm{SEP}]\}$ and $w_2 = \{[\mathrm{CLS}], \mathrm{SERVER}, \_\mathrm{NTP}, \_\mathrm{IP}, 1, [\mathrm{SEP}]\}$. The initial semantic quantization values of the word blocks in a sequence can be calculated according to the relation $e_i^{(0)} = t_i + p_i$, where $e_i^{(0)}$ is the initial semantic quantization value of the $i$-th word block in the sequence $w$, $t_i$ is the initial value of the word block, and $p_i$ is the position embedding, i.e., the position quantization value indicating where the word block is located in the sequence. For example, when a language model is used to calculate the semantic quantization values of word blocks, the word block initial-value optimization information is optimized according to the self-attention calculation result: for any word block in the sequence, the query vector, key vector and value vector of the current word block are calculated from its initial semantic quantization value and the weight parameter matrices, the attention score of the current word block is determined from the key vector and the query vector, the attention distribution information of the current word block is calculated from the attention score, and the semantic quantization value of the current word block is determined from its attention distribution information and the value vectors of the other word blocks belonging to the same configuration item.
When the semantic quantization value of each word block in the source configuration items and the common key has been calculated, the degree of similarity between them can be computed by any similarity relation. For example, each word block of BMC_TIMER_IP1 has its similarity computed against the semantic quantization values of each word block of SERVER_NTP_IP1, SERVER_NTP_IP2, MODE and NTPENABLE. The preset similarity condition is set in advance and can be adjusted flexibly according to the actual situation; it measures whether two word blocks are similar, for example a similarity greater than 95%, or a similarity threshold of 0.95 with the condition satisfied when the similarity is not lower than the threshold. For example, the degree of similarity may be calculated by $\mathrm{sim}(x, y) = \dfrac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^{2}}\,\sqrt{\sum_{i=1}^{n} y_i^{2}}}$, where $\mathrm{sim}(x, y)$ denotes the similarity value, $i$ is the $i$-th word block, $n$ is the total number of word blocks, $x$ denotes the semantic quantization values of the word blocks of the source word block sequence and $y$ the corresponding semantic quantization values of the word blocks of the common word block sequence; the closer the result is to 1, the more similar the two configuration items are.
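A short sketch of this similarity computation follows, using the cosine form reconstructed above together with the 0.95 threshold from the example; the vectors here would be the semantic quantization values produced by the language model.

```python
import math

# Cosine-style similarity between two semantic quantization value vectors; the
# closer the result is to 1, the more similar the two word blocks are.
def similarity(x, y) -> float:
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

SIMILARITY_THRESHOLD = 0.95   # preset similarity condition from the example above

def is_same_semantic(x, y) -> bool:
    return similarity(x, y) >= SIMILARITY_THRESHOLD
```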
Further, considering that there may be source configuration items that cannot be matched to the common key, and that the common key may fail to match a corresponding configuration item for some to-be-processed configuration item, on the basis of the above embodiments the invention may also determine the correspondence between source configuration items and the common key, and between the common key and to-be-processed configuration items, through candidate options and third-party confirmation when automatic matching fails. The implementation process is as follows:
When at least one source configuration item of the source server cannot be matched to a word block with the same semantics in the common key, several candidate word blocks that satisfy a preset similar-semantics condition with respect to that source configuration item are selected from the common key, the candidate word blocks and the source configuration item are filled into a first target position of a user configuration page, the user configuration page is displayed, and when a word block selection instruction is received, a correspondence is established between the source configuration item and the target word block selected from the candidates. Likewise, when at least one to-be-processed configuration item of the target server cannot be matched to a word block with the same semantics in the common key, several candidate word blocks that satisfy the preset similar-semantics condition with respect to that to-be-processed configuration item are selected from the common key, the candidate word blocks and the to-be-processed configuration item are filled into the first target position of the user configuration page, the page is displayed, and when a word block selection instruction is received, a correspondence is established between the to-be-processed configuration item and the selected target word block.
For example, the preset similarity condition is that the similarity exceeds the threshold of 0.95; when no word block with a similarity above 0.95 can be found, the source configuration item or to-be-processed configuration item is considered not to have been matched to the common key. The preset similar-semantics condition may be to take the top m word blocks with the highest similarity as candidate word blocks, with m set to 3 or 5; either choice may be used as the preset similar-semantics condition, and the invention does not limit this. For example, the similarity between the word block NTP of SERVER_NTP_IP1 and each word block in the common key is calculated; if the highest similarity does not exceed 0.95, NTP is considered not to have been matched to the common key, the similarities between NTP and the word blocks of the common key are sorted from high to low, and the three with the highest similarity values are taken as candidate word blocks. The user then performs manual calibration through the user configuration page, and the target word block selected from the candidates generates a word block selection instruction issued on the user configuration page. For instance, if the source configuration item is BMC_LDAP_Group and the target word block is LDAP_Group, the word block selection instruction may be expressed as "BMC_LDAP_Group is equivalent to LDAP_Group".
As can be seen from the above, the embodiment can support the introduction of a manual calibration feedback interface, support the manual calibration of unidentified configuration items by a user through a visual interface, and improve the efficiency and accuracy of parameter configuration.
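A sketch of this fallback path follows: when no common-key word block clears the 0.95 threshold, the top-m most similar word blocks are kept as candidates for manual calibration on the user configuration page. `similarity` is the helper from the previous sketch, and the data shapes are assumptions.

```python
# Select candidate word blocks for manual calibration. `common_blocks` maps each
# common-key word block name to its semantic quantization value vector.
def select_candidates(source_vec, common_blocks: dict, m: int = 3,
                      threshold: float = 0.95) -> list:
    scored = sorted(((similarity(source_vec, vec), name)
                     for name, vec in common_blocks.items()), reverse=True)
    if scored and scored[0][0] >= threshold:
        return [scored[0][1]]                 # automatic match, no calibration needed
    return [name for _, name in scored[:m]]   # top-m candidates shown to the user
```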
Further, in order to improve accuracy of server parameter configuration, based on the above embodiment, the present invention further provides a multiple verification process, which may include the following:
As shown in FIG. 5, the current configuration parameter content corresponding to each to-be-processed configuration item of the target server is obtained and compared with the corresponding content of a parameter configuration standard template. When there is target configuration parameter content that fails the comparison and needs to be confirmed, that content and the corresponding target to-be-processed configuration item are filled into a second target position of the user configuration page and the page is displayed; when a configuration information adjustment instruction is received, the target configuration parameter content of the target server is updated with the new configuration parameter content.
When the parameter configuration of the target server is completed, all configuration items of the target server can be obtained through the Redfish interface and compared with the configured parameter configuration standard template, which defines the standard parameter format and the allowed numeric range for each configuration item; value verification is completed by checking the parameter format, such as regular-expression matching of IP addresses, and the numeric range, such as log retention days of 0 to 365. Based on the parameter configuration standard template it is judged whether the setting succeeded; if it failed, a failure prompt and the corresponding content are shown to the user in a visual manner, and the user manually adjusts the configuration item, for example reconfiguring it manually if the mapping of a to-be-processed configuration item to a source configuration item was wrong. Furthermore, a configuration item that fails verification can start an exponential backoff retry mechanism: after the first failure it waits 5 seconds and retries; if it still fails, it waits 10 seconds for the second retry, with at most 3 retries, and automatically switches to a standby mapping path, for example switching from "ntp_servers" → "time_servers" to "time_servers" → "ntp_sources", which effectively improves the recovery rate of single-point configuration failures.
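The sketch below combines the value check (format and range) with the exponential backoff retry just described; the regular expression, range bounds and the apply/verify callables are illustrative assumptions, and only the 5 s / 10 s / 3-attempt schedule comes from the text.

```python
import re
import time

IP_PATTERN = re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")

def check_ip(value: str) -> bool:
    # format check by regular expression plus a numeric range check per octet
    return bool(IP_PATTERN.match(value)) and all(0 <= int(p) <= 255 for p in value.split("."))

def check_log_retention(days: int) -> bool:
    return 0 <= days <= 365        # allowed range from the standard template example

def configure_with_retry(apply_fn, verify_fn, max_retries: int = 3) -> bool:
    delay = 5
    for _ in range(max_retries):
        apply_fn()
        if verify_fn():
            return True
        time.sleep(delay)          # wait 5 s after the first failure, then 10 s, ...
        delay *= 2
    return False                   # caller may switch to a standby mapping path
```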
Further, as shown in fig. 5, whether the server configuration succeeded can also be verified from the service logic. This may include: when the current configuration parameter content corresponding to each to-be-processed configuration item successfully matches the parameter configuration standard template, determining the service function of each to-be-processed configuration item in the server and verifying that function. That is, a first function parameter value of the target server is obtained under the current configuration parameter content set for a first to-be-processed configuration item; the source function parameter value with which the source server performs the same function as the first to-be-processed configuration item at the current time is obtained; the second function parameter value with which the target server performs the same function as the first to-be-processed configuration item at the current time is obtained; and the first to-be-processed configuration item is considered successfully configured when the first function parameter value, the second function parameter value and the source function parameter value satisfy a preset verification condition.
In this embodiment, when the parameter configuration of the target server is completed, service functionality verification may be performed regardless of whether value verification of the configuration parameters against the standard template has been carried out. During function verification, distinctions can be made according to the function of the configuration item, such as verifying whether the NTP configuration successfully performs its service function of synchronizing time as configured, or whether the alarm configuration performs its service function of alarming as configured. Taking NTP_SERVER1 as an example, NTP_SERVER1 serves as the IP (Internet Protocol address) of the time synchronization server: the time set on IP 192.168.1.1 is acquired, then the current time of the source server is acquired, and finally the current time of the target server is acquired, and the time differences among the three are compared. If the preset verification condition is satisfied, for example an NTP time synchronization error of less than 5 seconds, i.e., the differences among the first function parameter value, the second function parameter value and the source function parameter value do not exceed 5 seconds (or do not exceed 10 seconds), the configuration item is configured successfully.
As can be seen from the above, the present embodiment verifies the parameter format and the range by the value verification, and verifies the configuration validity from the service logic by the function verification, so that the configuration error interception rate of 99.2% can be achieved, and the parameter configuration accuracy of the target server is effectively improved.
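A hedged sketch of the NTP functional check described above: the three time readings are compared and the configuration is accepted when their differences stay within the preset bound. The three callables are assumed to return datetime objects obtained elsewhere (for example over Redfish or NTP); they are not a real API.

```python
from datetime import datetime, timedelta

MAX_SKEW_SECONDS = 5   # preset verification condition: NTP synchronization error < 5 s

def verify_ntp_function(get_reference_time, get_source_time, get_target_time) -> bool:
    ref, src, tgt = get_reference_time(), get_source_time(), get_target_time()
    return (abs((src - ref).total_seconds()) <= MAX_SKEW_SECONDS and
            abs((tgt - ref).total_seconds()) <= MAX_SKEW_SECONDS and
            abs((tgt - src).total_seconds()) <= MAX_SKEW_SECONDS)

# toy check with fabricated timestamps (2 s apart -> passes the 5 s bound)
now = datetime(2025, 1, 1, 12, 0, 0)
print(verify_ntp_function(lambda: now, lambda: now + timedelta(seconds=2), lambda: now))
```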
Based on the above embodiment, the present invention further provides another implementation process of the dual-layer mapping relationship, which may include the following contents:
A semantic quantization model is trained; for convenience of description, word blocks used in the model training process are defined as sample word blocks, and the semantic quantization values corresponding to sample word blocks are defined as semantic quantization sample values. The model training process comprises: obtaining a plurality of seed word blocks in advance and generating an initial numerical vector for each seed word block; obtaining a text sample set and performing word segmentation on each text sample of the set; generating the initial semantic quantization sample value of each sample word block of a text sample according to the position information of the sample word block in that text sample and the initial numerical vectors; adjusting the corresponding initial semantic quantization sample values according to the self-attention calculation results of the initial semantic quantization sample values of the sample word blocks, so as to obtain semantic quantization sample values used to update the initial numerical vectors of the seed word blocks; and iteratively training the semantic quantization model by continually drawing the semantic quantization sample values of sample word blocks with the same semantics closer together, until a preset end condition for model iteration is met, such as model convergence, reaching the total number of iterations, or the model accuracy exceeding a preset accuracy threshold, thereby obtaining the trained semantic quantization model. Each word block of each source configuration item, of the common key and of the to-be-processed configuration items is then input into the semantic quantization model trained as above, the model processes the input accordingly and outputs the corresponding semantic quantization values, and once the semantic quantization value of each word block is obtained, word blocks with the same semantics are determined by comparing these values.
The seed word blocks are word blocks commonly used across different server configuration scenarios. Each can be randomly assigned a 768-dimensional random-number vector as its initial numerical vector; for example, NTP corresponds to an initial numerical vector such as [0, 1, -0.3, 0.7, ...], i.e., 768 random numbers in the range -1 to 1, and the corresponding 768-dimensional vector is then adjusted through large-scale text training. The text sample set may contain billions of sentences, through which the 768-dimensional vectors of the seed word blocks are optimized. For example, sentence 1 is "An NTP server is a general term for setting the server time and time zone" and sentence 2 is "The TIMER server function includes setting the server time and time zone"; when NTP server and TIMER server appear as the two subjects, their semantic similarity is high during semantic analysis, so the 768-dimensional vectors of NTP and TIMER, both in the seed word block library, need to be drawn gradually closer. For example, a seed word block library may be preset and constructed using domain ontology modeling techniques; it contains at least 1200 configuration items and covers the data required for at least 23 types of server configuration scenarios, such as NTP, log and alarm. Further, the system can record a seed scenario library for each user site, record the accuracy of each seed scenario library at that site, and extract the seed scenario libraries with high accuracy into the seed word block library, so that a new user can use the high-accuracy seed word block library as an initialized library without repeated training.
For each text word block sequence, the initial semantic quantization sample value of each sample word block of the current text word block sequence is determined according to the segment identification quantization value of the text sample to which the current text word block sequence belongs and the position quantization value of each sample word block in the current text word block sequence. For example, the word block sequence of sentence 1 is $w_1 = \{[\mathrm{CLS}], t_1^{(1)}, \ldots, t_n^{(1)}, [\mathrm{SEP}]\}$, where $n$ is the number of word blocks, $[\mathrm{CLS}]$ is the sequence identifier of sentence 1 and $t_{\mathrm{NTP}}^{(1)}$ is the word block NTP of sentence 1 whose vector is to be optimized; the word block sequence of sentence 2 is $w_2 = \{[\mathrm{CLS}], t_1^{(2)}, \ldots, t_n^{(2)}, [\mathrm{SEP}]\}$, where $n$ is the number of word blocks, $[\mathrm{CLS}]$ is the sequence identifier of sentence 2 and $t_{\mathrm{TIMER}}^{(2)}$ is the word block TIMER of sentence 2 whose vector is to be optimized. The initial value of the vector of the $j$-th word block of the $w$-th sequence can be determined according to $e_j^{(w)} = t_j + s_w + p_j$, where $t_j$ is the initial vector value of the seed word block; $s_w$ is the segment identification quantization value, i.e., the segment embedding, a random vector with which each sentence randomly initializes a 768-dimensional vector indicating which sentence a word block belongs to, used to represent the sentence; and $p_j$ is the position embedding, i.e., the position quantization value indicating the position of the word block in the sentence after word segmentation, for example NTP is at position 1 of sentence 1 and its position quantization value is the 768-dimensional vector for position 1.
The invention further provides an exemplary network model structure for the semantic quantization model, comprising an input layer, a word block processing model layer, an attention calculation layer, a multi-layer feed-forward network layer and an output layer. The seed word blocks and the text sample set are fed in through the input layer, together with labels indicating whether the text samples of the set contain word blocks with the same semantics. The word block processing model layer generates initial numerical vectors for the seed word blocks, performs word segmentation on the text samples of the set, and generates the initial semantic quantization sample value of each sample word block of a text sample according to the position information of the sample word block in that text sample and the initial numerical vectors. The attention calculation layer performs self-attention calculation on the sample word blocks and adjusts the corresponding initial semantic quantization sample values according to the self-attention results, so as to obtain the semantic quantization sample values used to update the initial numerical vectors of the seed word blocks. The feed-forward network layer then draws the semantic quantization sample values of sample word blocks with the same semantics closer together until the preset update end condition is met.
The word block processing model layer may use a BERT model, and the attention calculation layer may use a multi-layer Transformer network structure. The BERT model segments a configuration item name (such as BMC_NTP_Servers1) to generate initial semantic quantization sample values, which are then input into the multi-layer Transformer structure; the Transformer layers adjust the initial semantic quantization sample values to generate multi-dimensional vectors containing semantic and sequence information, with the same dimensionality as the initial vector values, e.g., 768-dimensional vectors. That is, a configuration item is first encoded by BERT as $H^{(0)} = \mathrm{BERT}(w) = \{C, h_1, h_2, \ldots, h_n\}$, where $h_i$ is the BERT output for the $i$-th word block obtained by word segmentation. In the BERT model the output of the last layer contains rich semantic information about the entire configuration item column name, and the vector $C$ corresponding to the [CLS] mark serves as the semantic vector of the whole column name for subsequent tasks such as classification and similarity calculation. The first encoding result is then fed as input into the second encoding, with the formula $H^{(l)} = \mathrm{Transformer}(H^{(l-1)})$, where $l$ denotes the layer number, finally yielding the semantic vector, i.e., the semantic quantization sample value, of each configuration item. Through layer-by-layer encoding, the output $H^{(l)}$ of each layer is an intermediate feature representation of the input sequence encoded by that layer. As the number of layers of the multi-layer Transformer structure increases, $H^{(l)}$ contains increasingly rich and abstract semantic information: at lower layers $H^{(l)}$ captures more local features and shallow semantics of the word blocks, while at higher layers it captures the global semantics and complex dependencies of the whole sequence. The output $H^{(l)}$ of each layer is fed into the next layer to produce $H^{(l+1)}$, and the final result is obtained through layer-by-layer transfer and processing.
The secondary coding process of the attention calculating layer comprises the steps of calculating query vectors, key vectors and value vectors of current sample word blocks based on initial semantic quantization sample values and weight parameter matrixes of the current sample word blocks for each sample word block, determining attention scores of the current sample word blocks according to the key vectors and the query vectors, calculating attention distribution information of the current sample word blocks according to the attention scores, and determining semantic quantization sample values of the current sample word blocks according to the attention distribution information of the current sample word blocks and the value vectors of other sample word blocks belonging to the same text sample.
For example, let the initial semantic quantization sample values of the sample word blocks be $E$, and let the weight parameter matrices, which are learnable parameters, be $W^{Q}$, $W^{K}$ and $W^{V}$; conventional initial values are given in the BERT model, and the model training process continually learns and adjusts these parameters. The query, key and value vectors are $Q$, $K$ and $V$, and they can be calculated separately according to the relations $Q = E W^{Q}$, $K = E W^{K}$ and $V = E W^{V}$. The attention score can be calculated as $AS = \dfrac{Q K^{T}}{\sqrt{d_k}}$, where $d_k$ is the dimension of the query and key vectors; the initial learnable matrices are 64-dimensional, so the initial value of $\sqrt{d_k}$ may be set to 8. The attention distribution $AD$ may be calculated using a softmax function, i.e., $AD = \mathrm{softmax}(AS)$, and finally the output vector after secondary processing, i.e., the semantic quantization sample values, can be obtained by $O^{(w)} = AD \cdot V$, where $V$ contains the value vectors calculated for the other word blocks in the sentence. At this point the semantic quantization sample values of word blocks with the same semantics, such as that of the NTP configuration item in sentence 1 and that of the TIMER configuration item in sentence 2 in the above example, begin to trend towards each other. Finally, the vector after secondary processing is adjusted by the feed-forward network: the output vector $O^{(w)}$ is input into the feed-forward network layer, whose initial depth may be 12 layers and may be adjusted dynamically according to the sample data and training conditions. The feed-forward network layer processes the input through the relation $FN = \mathrm{ReLU}(O^{(w)} W_1 + b_1)\, W_2 + b_2$, where $W_1$ and $W_2$ are learnable weight matrices, $b_1$ and $b_2$ are bias vectors to be learned during model training, ReLU (rectified linear unit) is the activation function of the feed-forward network layer, and $FN$ is its output; the multi-layer feed-forward network repeats this process until the two vectors are continuously drawn closer.
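The numerical sketch below walks through the secondary encoding reconstructed above, single-head self-attention followed by one feed-forward step, using NumPy; the 768-dimensional model width and 64-dimensional Q/K/V follow the description, while the random weights, the 256-dimensional hidden size and the sequence length of 6 are stand-ins for the learnable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, d_ff, n_blocks = 768, 64, 256, 6

E = rng.standard_normal((n_blocks, d_model))              # initial semantic quantization sample values
W_Q, W_K, W_V = (rng.standard_normal((d_model, d_k)) * 0.02 for _ in range(3))
W_1, b_1 = rng.standard_normal((d_k, d_ff)) * 0.02, np.zeros(d_ff)
W_2, b_2 = rng.standard_normal((d_ff, d_k)) * 0.02, np.zeros(d_k)

Q, K, V = E @ W_Q, E @ W_K, E @ W_V                       # query, key and value vectors
AS = Q @ K.T / np.sqrt(d_k)                               # attention scores (sqrt(d_k) = 8)
AD = np.exp(AS) / np.exp(AS).sum(axis=-1, keepdims=True)  # softmax -> attention distribution
O = AD @ V                                                # output after secondary processing

FN = np.maximum(0.0, O @ W_1 + b_1) @ W_2 + b_2           # FN = ReLU(O·W1 + b1)·W2 + b2
print(O.shape, FN.shape)                                  # (6, 64) (6, 64)
```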
Furthermore, in order to improve the performance of the semantic quantization model, the semantic quantization model may be further adjusted during the use process, which may include the following contents:
Specifically, the mapping relationships between the successfully configured configuration items of the target server and the source server are taken as new training samples. When a seed update instruction is triggered, if a new training sample contains a new seed word block that does not exist in the seed word block library, the new seed word block is added to the seed word block library and its semantic quantization sample value is taken as its initial numerical vector; and when the semantic quantization sample value of a word block in a new training sample differs from the initial numerical vector of the same seed word block in the seed word block library, the initial numerical vector of the corresponding seed word block is updated with that semantic quantization sample value.
In this embodiment, after all configurations of the source server have been copied to the target server, each word block of the source configuration items, the public keys and the configuration items to be processed is compared with the seed word blocks to determine whether there is a new seed word block that is not in the seed word block library used in model training; if so, the seed word block library is updated. Further, the case where the initial numerical vector of the same seed word block has changed can also be updated. For the correspondence between source configuration items and public keys and the correspondence between public keys and configuration items to be processed established in the mapping process, a semantic sentence can be added for each pair of matched configuration items, for example "SERVER_NTP_IP1 is equivalent to BMC_TIMER_IP1"; the generated sentences are put into the text sample set again, and training with the new samples increases the accuracy of the model. To ensure the accuracy of the data samples, the corresponding semantic sentences may be generated for the configuration items establishing the mapping relationship only after the value verification and functional verification of the target server have been completed. Further, in order to improve the performance of the semantic quantization model, the manual calibration results in the above embodiment, or the content that had to be manually adjusted, may be converted into semantic sentences such as "SERVER_NTP_IP1 is equivalent to BMC_TIMER_IP1" and "BMC_LDAP_Group is equivalent to LDAP_Group"; after the semantic sentences are parsed by a natural language model, the seed word block library or the text sample set is updated and the semantic quantization model is retrained, for example retraining is triggered after the performance of the semantic quantization model decreases, or after the number of new seeds or new text samples exceeds a certain threshold. Through such active learning the accuracy of the model increases with service time, and the closed loop of "manual intervention-model learning-automatic optimization" improves the efficiency and accuracy of server parameter configuration.
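A minimal sketch of the seed word block library update described above, under an assumed data structure (a plain dictionary mapping word blocks to vectors); the tolerance parameter and the toy 4-dimensional vectors are illustrative only, with 768 dimensions used in practice.

    import numpy as np

    def update_seed_library(seed_library, new_sample_vectors, tol=1e-6):
        """seed_library / new_sample_vectors: dict mapping word block -> numpy vector."""
        for word_block, vector in new_sample_vectors.items():
            if word_block not in seed_library:
                # New seed word block: its semantic quantization sample value becomes
                # the initial numerical vector.
                seed_library[word_block] = vector
            elif not np.allclose(seed_library[word_block], vector, atol=tol):
                # Same seed word block, changed vector: refresh the initial numerical vector.
                seed_library[word_block] = vector
        return seed_library

    # Usage with vectors produced for a successful mapping such as SERVER_NTP_IP1 / BMC_TIMER_IP1.
    seeds = {"ntp": np.array([1.0, 0.0, 0.0, 0.0])}
    new_vectors = {"ntp": np.array([0.9, 0.1, 0.0, 0.0]),
                   "timer": np.array([0.88, 0.12, 0.0, 0.0])}
    seeds = update_seed_library(seeds, new_vectors)
    print(sorted(seeds))   # ['ntp', 'timer']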
Finally, the present invention also provides another implementation manner of server configuration. In this embodiment, an end-to-end intelligent semantic entity may be pre-trained, which at least includes a data collection tool, a seed word block library, a semantic quantization model, a semantic recognition module and a verification module. The semantic recognition module encapsulates a computer program for executing similarity calculation and selecting the same semantic word block according to the similarity calculation result, and the verification module encapsulates a computer program for executing value verification and functional verification. Through the end-to-end intelligent semantic entity, the configuration process of each configuration item to be processed of the target server may be completed directly:
A11, inputting basic authentication information such as the user name and password of the field servers, collecting the configuration items of all servers through the data collection tool, generating the standard configuration parameter subclasses and the source configuration information, and converting the obtained configuration information into key-value form.
A12, performing word segmentation on each source configuration item of the source server one by one, calculating the initial semantic quantization value of each word block in each source word block sequence based on the word block position and the word block initial value, and performing self-attention calculation on all word segmentations of the configuration item to obtain the semantic quantization value of each word block.
A13, word segmentation and attention calculation are carried out on the public keys in the same way, and the encoder encodes the public keys to generate semantic quantization values of each public key.
A14, cycling through the semantic quantization values of all configuration items of the source server, calculating the similarity between each semantic quantization value and the semantic quantization values of the public keys, and finding, for each source configuration item, the public key whose similarity exceeds 0.95 as the word block with the same semantics, that is, the source configuration item is successfully mapped to that public key; source configuration items that fail to map are marked and sent to the user in a visual manner for manual calibration.
And A15, after the mapping of the source server configuration items is completed, collecting the configuration items to be processed of the target server, performing word segmentation, attention calculation and encoder encoding on all configuration items of the target server in the same way to generate their semantic quantization values, cycling through these semantic quantization values and calculating the similarity between each of them and the semantic vectors of the public keys, and finding, for each configuration item to be processed, the public key whose similarity exceeds 0.95 as the word block with the same semantics, that is, the configuration item to be processed is successfully mapped to that public key; configuration items to be processed that fail to map are marked and sent to the user in a visual manner for manual calibration.
A16, after the double-layer mapping is completed, the configuration content of each source server configuration item can be transferred through the public keys to the corresponding configuration item of the target server: all configuration content of the source server configuration items is mapped one by one to the configuration content required by the target server configuration items, the format is converted into an object, and the Redfish interface is called to configure the corresponding configuration items to be processed of the target server (a condensed sketch of this mapping-and-apply loop is given after step A18).
And A17, after the configuration is completed, performing value verification and functional verification on the configuration results respectively, and feeding back any failed configurations to the user for manual adjustment.
A18, recording the whole process of configuration item adjustment and feeding back the final correct mapping results to the intelligent semantic entity, which regenerates the seed word block library accordingly.
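The following condensed sketch illustrates the mapping-and-apply loop of steps A12 to A16 under stated assumptions: embed_fn stands for the semantic quantization model described earlier, the 0.95 threshold follows the text, and the Redfish resource path "/redfish/v1/Managers/1/NetworkProtocol" and its payload are examples only, since actual paths and attribute names vary across vendors and configuration items.

    import numpy as np
    import requests

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def map_to_common_keys(items, common_keys, embed_fn, threshold=0.95):
        """items: dict of configuration item name -> configuration content.
        Returns (mapped common key -> content, names that failed to map)."""
        key_vectors = {k: embed_fn(k) for k in common_keys}
        mapped, failed = {}, []
        for name, content in items.items():
            vec = embed_fn(name)
            best_key, best_sim = max(((k, cosine(vec, v)) for k, v in key_vectors.items()),
                                     key=lambda kv: kv[1])
            if best_sim > threshold:
                mapped[best_key] = content        # successfully mapped through the public key
            else:
                failed.append(name)               # marked and shown to the user for manual calibration
        return mapped, failed

    def apply_to_target(session: requests.Session, bmc_url: str, payload: dict) -> None:
        # Illustrative Redfish PATCH; the resource path and attribute names differ per vendor.
        resp = session.patch(f"{bmc_url}/redfish/v1/Managers/1/NetworkProtocol",
                             json=payload, verify=False)
        resp.raise_for_status()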
As can be seen from the above, in this embodiment server configuration is driven by the constructed intelligent semantic entity: source server configuration items are automatically collected through the Redfish protocol, and BERT semantic vector generation is combined with double-layer mapping, which shortens the replication time of a single server configuration from more than 30 minutes of manual operation to the minute level and improves efficiency by more than 90%. The 768-dimensional semantic vectors of the BERT model capture the deep meaning of configuration items and solve the problem of synonyms and near-synonyms, reducing the configuration error rate from the 15% of the traditional scheme to below 5%. Value verification and functional verification ensure that the configuration takes effect and meets service expectations, avoiding server function abnormalities caused by configuration errors, so that automated, intelligent, efficient and accurate configuration of cross-vendor servers is realized.
It should be noted that, in the present invention, there is no strict execution order among the above steps; as long as the order accords with the logical sequence, the steps may be executed simultaneously or in a certain preset order. Fig. 2 and fig. 5 are only schematic and do not mean that only such an execution order is possible.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware, although in many cases the former is the preferred embodiment.
The invention also provides a corresponding apparatus for the server parameter configuration method, so that the method has higher practicability. The apparatus is described below from the perspective of functional modules and from the perspective of hardware, respectively. The server parameter configuration apparatus described in the following is configured to implement the server parameter configuration method of the present invention; in this embodiment, the server parameter configuration apparatus may include or be divided into one or more program modules, where the one or more program modules are stored in a storage medium and executed by one or more processors to implement the server parameter configuration method according to the first embodiment of the present invention. A program module in this embodiment refers to a series of computer program instruction segments capable of performing a specific function, and is more suitable than the program itself for describing the execution process of the server parameter configuration apparatus in the storage medium. The following description specifically describes the functions of each program module of this embodiment; the server parameter configuration apparatus described below and the server parameter configuration method described above may be referred to correspondingly.
Based on the angles of the functional modules, referring to fig. 6, fig. 6 is a block diagram of a server parameter configuration apparatus provided in this embodiment under a specific implementation manner, where the apparatus may include:
The public standard information construction module 601 is configured to redefine information to be configured of a server of the data center into a plurality of standard configuration parameter subclasses, and construct a public key-value according to each standard configuration parameter subclass, where each standard configuration parameter subclass corresponds to at least one configuration item.
The first-level mapping module 602 is configured to obtain each source configuration item of the source server with configured parameters, determine a corresponding relationship between each source configuration item and a common key by identifying the same semantic word block of the common key of each source configuration item and the common key-value, and map the configuration content of each source configuration item to a value corresponding to the common key based on the condition that the configuration content of the configuration item with the same semantic is the same.
The second-level mapping module 603 is configured to determine a corresponding relationship between each configuration item to be processed of the target server and the common key by identifying the same semantic word blocks of each configuration item to be processed and the common key, and generate corresponding configuration content for each configuration item to be processed based on the mapped common key-value, so as to complete the parameter configuration of the target server.
In some implementations of the present embodiment, the first-level mapping module 602 may be further configured to extract each word block of each source configuration item and the common key, and generate a corresponding source word block sequence and a corresponding common word block sequence, calculate, based on the word block position and the word block initial value, a semantic quantization value of each word block in each source word block sequence and the common word block sequence, respectively, calculate a similarity between the semantic quantization value of each word block in each source word block sequence and the semantic quantization value of each word block in the common word block sequence, and use, as the same semantic word block, two word blocks that satisfy a preset similarity condition.
As an exemplary implementation manner of the foregoing embodiment, the first-level mapping module 602 may be further configured to perform word segmentation on each source configuration item to obtain a set of source configuration word blocks corresponding to each source configuration item, add a sequence start identifier and a sequence end identifier to each set of source configuration word blocks to obtain each source word block sequence, perform word segmentation on a public key to obtain a set of public word blocks, and add a sequence start identifier and a sequence end identifier to the public word blocks to obtain a public word block sequence.
As another exemplary implementation manner of the foregoing embodiment, the first-level mapping module 602 may be further configured to determine, for each source word block sequence, an initial semantic quantization value of each word block in the current source word block sequence according to an initial value of each word block in the current source word block sequence and a position quantization value of each word block in the current source word block sequence, adjust, based on word block initial value optimization information, the initial semantic quantization value of each word block in the current source word block sequence to obtain a semantic quantization value of each word block in the current source word block sequence, determine, according to an initial value of each word block in the common word block sequence and a corresponding position quantization value thereof, an initial semantic quantization value of each word block in the common word block sequence, and adjust, according to word block initial value optimization information, the initial semantic quantization value of each word block in the common word block sequence to obtain a semantic quantization value of each word block in the common word block sequence.
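As an illustrative sketch only: the initial semantic quantization value described here can be formed by summing a word block's initial numerical vector with its position quantization value (and, for text samples in training, a segment identifier quantization value), in the style of BERT's embedding layer. The table sizes, random initial values and word block ids below are assumptions, not taken from the text.

    import numpy as np

    rng = np.random.default_rng(1)
    d_model, max_len, vocab = 768, 32, 1000

    word_block_initial = rng.normal(size=(vocab, d_model)) * 0.02   # initial numerical vectors
    position_quant = rng.normal(size=(max_len, d_model)) * 0.02     # position quantization values
    segment_quant = rng.normal(size=(2, d_model)) * 0.02            # segment identifier quantization values

    def initial_semantic_quantization(word_block_ids, segment_id=0):
        """Sum of word block initial value, position quantization value and segment value."""
        pos = np.arange(len(word_block_ids))
        return (word_block_initial[word_block_ids]
                + position_quant[pos]
                + segment_quant[segment_id])

    ids = np.array([3, 17, 42, 5])   # e.g. [CLS], bmc, ntp, [SEP] (illustrative ids)
    print(initial_semantic_quantization(ids).shape)   # (4, 768)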
In other implementations of this embodiment, the second-level mapping module 603 may be further configured to, when at least one target configuration item to be processed in the target server cannot be matched with a word block having the same semantic in the common key, select a plurality of candidate word blocks from the common key that satisfy a preset similar semantic condition with the target configuration item to be processed, fill each candidate word block and the target configuration item to be processed into a first target location of the user configuration page, and display the user configuration page, and when a word block selection instruction is received, establish a correspondence between the target word block selected from each candidate word block and the target configuration item to be processed.
In other implementations of this embodiment, the apparatus further includes a verification module, configured to obtain the current configuration parameter content corresponding to each configuration item to be processed of the target server, compare the current configuration parameter content corresponding to each configuration item to be processed with the corresponding content of a parameter configuration standard template, when there is a target configuration parameter content to be confirmed for which the comparison fails, fill the target configuration parameter content and the corresponding target configuration item to be processed into a second target location of the user configuration page and display the user configuration page, and when a configuration information adjustment instruction is received, update the target configuration parameter content of the target server with the new configuration parameter content.
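A minimal sketch, under an assumed dictionary representation, of the value check described here: each configuration item's current content is compared with the parameter configuration standard template, and failed items are collected for display on the user configuration page. The keys and values are illustrative.

    def value_check(current_config: dict, standard_template: dict) -> dict:
        """Return the items whose current content does not match the template content."""
        to_confirm = {}
        for item, expected in standard_template.items():
            actual = current_config.get(item)
            if actual != expected:
                to_confirm[item] = {"expected": expected, "actual": actual}
        return to_confirm

    # Usage (illustrative configuration items and contents).
    template = {"BMC_NTP_SERVERS1": "10.0.0.1", "BMC_LDAP_Group": "admins"}
    current = {"BMC_NTP_SERVERS1": "10.0.0.1", "BMC_LDAP_Group": "users"}
    print(value_check(current, template))   # only BMC_LDAP_Group is reported for confirmation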
As an exemplary implementation manner of the above embodiment, the verification module may be further configured to, when the current configuration parameter content corresponding to each configuration item to be processed is successfully matched with the parameter configuration standard template, determine the service function of each configuration item to be processed in the server and verify the function of each configuration item to be processed: obtaining a first function parameter value of the target server at the time when the current configuration parameter content corresponding to a first configuration item to be processed is set, obtaining a source function parameter value of the source server at the current time when the source server performs the same function as the first configuration item to be processed, obtaining a second function parameter value of the target server at the current time when the target server performs the same function as the first configuration item to be processed, and determining that the first configuration item to be processed is successfully configured when the first function parameter value, the second function parameter value and the source function parameter value satisfy a preset verification condition.
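For illustration, a minimal sketch of such a functional check, assuming the verification condition is that the three function parameter values (for example NTP clock offsets in seconds) agree within a tolerance; the tolerance and sample values are assumptions and are not taken from the text.

    def functional_check(first_value, second_value, source_value, tolerance=1.0):
        """first_value: target server value recorded when the configuration content was set;
        second_value: target server value at the current time;
        source_value: source server value at the current time for the same function.
        The item is considered successfully configured when all three agree within tolerance."""
        values = (first_value, second_value, source_value)
        return max(values) - min(values) <= tolerance

    # Usage with illustrative NTP clock offsets.
    print(functional_check(0.12, 0.30, 0.25))   # True -> the configuration item passes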
The first-level mapping module 602 may be further configured to input each word block of each source configuration item and of the common key into a trained semantic quantization model, obtain the semantic quantization value of each word block from the output of the semantic quantization model, and determine the same semantic word blocks by comparing the semantic quantization values of the word blocks. The training process of the semantic quantization model includes obtaining a plurality of seed word blocks in advance and generating an initial numerical vector for each seed word block, obtaining a text sample set in which the text samples carry labels indicating whether they contain the same semantic word blocks, performing word segmentation processing on each text sample of the text sample set, generating an initial semantic quantization sample value for each sample word block of a text sample according to the position information of the sample word block in the corresponding text sample and the initial numerical vectors, adjusting the corresponding initial semantic quantization sample values according to the self-attention calculation results of the sample word blocks to obtain semantic quantization sample values, updating the initial numerical vectors of the seed word blocks with the semantic quantization sample values of the corresponding sample word blocks, and performing iterative training until the semantic quantization sample values of word blocks with the same semantics continuously approach each other and a preset iteration end condition is satisfied, at which point the trained semantic quantization model is obtained.
As an exemplary implementation manner of the embodiment, the semantic quantization model includes an input layer, a word block processing model layer, an attention calculation layer, a multi-layer feedforward network layer and an output layer. The seed word blocks and the text sample set are input through the input layer, and the text samples of the text sample set carry labels indicating whether they contain the same semantic word blocks. The word block processing model layer generates initial numerical vectors for the seed word blocks, performs word segmentation processing on the text samples of the text sample set, and generates an initial semantic quantization sample value for each sample word block of a text sample according to the position information of the sample word block in the corresponding text sample and the initial numerical vectors. The attention calculation layer is used to perform self-attention calculation on the sample word blocks and to adjust the corresponding initial semantic quantization sample values according to the self-attention calculation results, so as to obtain the semantic quantization sample values of the sample word blocks, and the multi-layer feedforward network layer makes the semantic quantization sample values of sample word blocks with the same semantics approach each other until a preset iteration update end condition is met.
As another exemplary implementation manner of the foregoing embodiment, the first level mapping module 602 may be further configured to perform word segmentation processing on each text sample and add a sequence start identifier and a sequence end identifier to each group of text sample word blocks to obtain each text word block sequence, and to determine, for each text word block sequence, the initial semantic quantization sample value of each sample word block of the current text word block sequence according to the segment identifier quantization value of the text sample to which the current text word block sequence belongs, the position quantization value of each sample word block in the current text word block sequence, and the current initial numerical vector of the seed word block corresponding to each sample word block of the current text word block sequence.
As another exemplary implementation of the foregoing embodiment, the first-level mapping module 602 may be further configured to, for each sample word block, calculate the query vector, key vector and value vector of the current sample word block based on the initial semantic quantization sample value and the weight parameter matrix of the current sample word block, determine the attention score of the current sample word block according to the key vector and the query vector, calculate the attention distribution information of the current sample word block according to the attention score, and determine the semantic quantization sample value of the current sample word block according to the attention distribution information of the current sample word block and the value vectors of the other sample word blocks belonging to the same text sample.
As another exemplary implementation manner of the foregoing embodiment, the foregoing first level mapping module 602 may be further configured to use a mapping relationship between configuration items of a target server and a source server that are successfully configured as a new training sample, when a seed update instruction is triggered, add the new seed word block to the seed word block library when the new training sample has a new seed word block that does not exist in the seed word block library, and use a semantic quantization sample value of the new seed word block as an initial numerical vector, and update an initial numerical vector of a corresponding word block by using the semantic quantization sample value of a word block in the new training sample when the semantic quantization sample value of the word block in the new training sample is different from the initial numerical vector of the same seed word block in the seed word block library.
The server parameter configuration device mentioned above is described from the perspective of functional modules, and further, the invention also provides an electronic device, which is described from the perspective of hardware. Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device comprises a memory 701 and a processor 702, the memory 701 having stored therein a computer program, the processor 702 being arranged to run the computer program to perform the steps of any of the server parameter configuration method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the server parameter configuration method embodiments described above when run.
In an exemplary embodiment, the computer readable storage medium may include, but is not limited to, a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the server parameter configuration method embodiments described above.
Embodiments of the present application also provide another computer program product comprising a non-volatile computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the server parameter configuration method embodiments described above.
The above describes a server parameter configuration method, an electronic device, a computer readable storage medium and a computer program product provided by the present invention in detail. In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. The various example units and algorithm steps described in the various disclosed embodiments, whether implemented in electronic hardware or in computer software, depend on the particular application and design constraints of the solution, and one skilled in the art may use different methods for each particular application to implement the described functions without departing from the scope of the invention. The present invention is capable of numerous modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are intended to be within the scope of the present invention.

Claims (14)

The method comprises the steps of respectively extracting each source configuration item and each word block of a public key, generating a corresponding source word block sequence and a corresponding public word block sequence, respectively calculating semantic quantization values of each word block in each source word block sequence and each public word block sequence based on word block positions and word block initial values, respectively calculating the semantic quantization values of each word block in each source word block sequence and each word block in the public word block sequence, respectively calculating the similarity degree between the semantic quantization values of each word block in each source word block sequence and the semantic quantization values of each word block in the public word block sequence, and taking two word blocks meeting preset similarity conditions as the same semantic word blocks.
The method comprises the steps of verifying functions of each to-be-processed configuration item, obtaining a first function parameter value of a target server when the content of a current configuration parameter corresponding to the first to-be-processed configuration item is set, obtaining a source function parameter value when the source server and the first to-be-processed configuration item play the same function at the current moment, obtaining a second function parameter value when the target server and the first to-be-processed configuration item play the same function at the current moment, and successfully configuring the first to-be-processed configuration item when the first function parameter value, the second function parameter value and the source function parameter value meet preset verification conditions.
CN202510874229.6A | 2025-06-27 | 2025-06-27 | Server parameter configuration method, electronic device, storage medium and program product | Active | CN120386764B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202510874229.6A | CN120386764B (en) | 2025-06-27 | 2025-06-27 | Server parameter configuration method, electronic device, storage medium and program product

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202510874229.6A | CN120386764B (en) | 2025-06-27 | 2025-06-27 | Server parameter configuration method, electronic device, storage medium and program product

Publications (2)

Publication Number | Publication Date
CN120386764A (en) | 2025-07-29
CN120386764B | 2025-09-19

Family

ID=96487452

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202510874229.6A | Active | CN120386764B (en) | 2025-06-27 | 2025-06-27 | Server parameter configuration method, electronic device, storage medium and program product

Country Status (1)

Country | Link
CN (1) | CN120386764B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111488182A (en)* | 2020-04-13 | 2020-08-04 | 北京字节跳动网络技术有限公司 | System configuration method, device, equipment and storage medium
CN112882974A (en)* | 2021-02-09 | 2021-06-01 | 深圳市云网万店科技有限公司 | JSON data conversion method and device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP3467642B1 (en)* | 2017-10-04 | 2023-10-04 | ServiceNow, Inc. | Guided configuration item class creation in a remote network management platform
CN112667248B (en)* | 2020-12-08 | 2024-09-24 | 深圳前海微众银行股份有限公司 | Method, device, equipment and storage medium for generating server deployment parameters
CN113420113B (en)* | 2021-06-21 | 2022-09-16 | 平安科技(深圳)有限公司 | Semantic recall model training and recall question and answer method, device, equipment and medium
CN118210573A (en)* | 2024-03-28 | 2024-06-18 | 苏州元脑智能科技有限公司 | Configuration method and device of basic input/output system, electronic equipment and medium
CN119477193A (en)* | 2024-10-24 | 2025-02-18 | 中国建设银行股份有限公司 | Method, device, electronic device and storage medium for generating event template
CN120066968A (en)* | 2025-02-27 | 2025-05-30 | 北京五八信息技术有限公司 | Page test method for application program, computing device, storage medium and program product
CN119883773B (en)* | 2025-03-26 | 2025-07-04 | 浪潮计算机科技有限公司 | A BIOS operation and maintenance method, system and storage medium


Also Published As

Publication number | Publication date
CN120386764A (en) | 2025-07-29

Similar Documents

Publication | Publication Date | Title
US12149413B2 (en)Cybersecurity incident response and security operation system employing playbook generation through custom machine learning
JP7143456B2 (en) Medical Fact Verification Method and Verification Device, Electronic Device, Computer Readable Storage Medium, and Computer Program
CN113094200A (en)Application program fault prediction method and device
CN114064487B (en)Code defect detection method
CN112016553A (en) Optical character recognition (OCR) system, automatic OCR correction system, method
US20240330446A1 (en)Finding semantically related security information
CN114281931A (en)Text matching method, device, equipment, medium and computer program product
CN114187486A (en)Model training method and related equipment
CN119359252A (en) A method, device and storage medium for implementing a business intelligent flow engine
CN120014327A (en) Knowledge base construction method and storage medium based on automatic extraction, classification and association of multi-modal entities
CN119577127A (en) Dynamic document generation method and system based on large language model LLM
CN120386764B (en)Server parameter configuration method, electronic device, storage medium and program product
CN118364813B (en) Knowledge enhancement method, system, device and medium based on machine reading comprehension
US20250139173A1 (en)Ai quiz builder
CN119357378A (en) A method, device, equipment and storage medium for entity relationship extraction based on domain adaptation transfer learning
CN116595995B (en)Determination method of action decision, electronic equipment and computer readable storage medium
CN114357950B (en) Data rewriting method, device, storage medium and computer equipment
CN113986245B (en) Target code generation method, device, equipment and medium based on HALO platform
CN114663650A (en)Image description generation method and device, electronic equipment and readable storage medium
CN119990335B (en) A method and system for handling customer consultation issues based on machine learning
US20250085952A1 (en)Systems and methods for facilitating provisioning of software solutions
CN120276909B (en)Method for detecting node abnormality of super computing system
CN112380860B (en)Sentence vector processing method, sentence matching device, sentence vector processing equipment and sentence matching medium
US20230418971A1 (en)Context-based pattern matching for sensitive data detection
US20250190712A1 (en)Methods and systems for transferring knowledge from large language models to small language models using out-of-distribution feedbacks

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
