CN117591244A - Model construction method and device based on machine learning platform and electronic equipment - Google Patents

Model construction method and device based on machine learning platform and electronic equipment

Info

Publication number
CN117591244A
CN117591244A, CN202311628007.3A, CN202311628007A
Authority
CN
China
Prior art keywords
target
node
parameters
operator
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311628007.3A
Other languages
Chinese (zh)
Other versions
CN117591244B (en)
Inventor
李金辉
李明
李宜婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zetyun Tech Co ltd
Original Assignee
Beijing Zetyun Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zetyun Tech Co ltd
Priority to CN202311628007.3A
Publication of CN117591244A
Application granted
Publication of CN117591244B
Status: Active
Anticipated expiration

Abstract

The application provides a model construction method and device based on a machine learning platform, and an electronic device, and relates to the technical field of machine learning. The method includes: starting a target container and creating a plurality of nodes in a page of the target container, where each node includes an initial algorithm with preset parameters; acquiring input data of a target node and training the initial algorithm of the target node based on the input data to obtain a training result, where the target node is any node among the plurality of nodes; adjusting the preset parameters based on the training result to obtain target parameters corresponding to the target node; and generating a target model based on the target parameters respectively corresponding to the plurality of nodes. With this method and device, adjusting parameters does not require running all the algorithms in the entire container, so parameter adjustment efficiency can be effectively improved.

Description

Translated from Chinese
Model construction method, device and electronic equipment based on a machine learning platform

Technical field

This application relates to the field of machine learning technology, and in particular to a model construction method, device and electronic equipment based on a machine learning platform.

Background

In machine learning, in order to adjust the parameters of the algorithms corresponding to a model, the model's code must be loaded into a container, and all of the model's algorithms are run through that container. In the related art, to adjust the parameters of an algorithm, the code in the entire container has to be run to obtain a training result, and the parameters are then adjusted based on that result. Because the entire code must be run repeatedly for every adjustment, parameter tuning takes a long time, so the parameter adjustment efficiency of the algorithm is low.

It can be seen that the related art suffers from low efficiency in adjusting algorithm parameters.

Summary of the invention

Embodiments of the present application provide a model construction method, device, electronic device and readable storage medium based on a machine learning platform, to solve the problem in the related art that algorithm parameters are adjusted inefficiently.

To solve the above problem, the present application is implemented as follows:

In a first aspect, embodiments of the present application provide a model construction method based on a machine learning platform, including:

starting a target container and creating a plurality of nodes in a page of the target container, where each node includes an initial algorithm with preset parameters;

obtaining input data of a target node, and training the initial algorithm of the target node based on the input data to obtain a training result, where the target node is any one of the plurality of nodes;

adjusting the preset parameters based on the training result to obtain target parameters corresponding to the target node;

generating a target model based on the target parameters respectively corresponding to the plurality of nodes.

In a second aspect, embodiments of the present application further provide a model construction device based on a machine learning platform, including:

a creation module, configured to start a target container and create a plurality of nodes in a page of the target container, where each node includes an initial algorithm with preset parameters;

a first training module, configured to obtain input data of a target node, and train the initial algorithm of the target node based on the input data to obtain a training result, where the target node is any one of the plurality of nodes;

an adjustment module, configured to adjust the preset parameters based on the training result to obtain target parameters corresponding to the target node;

a generation module, configured to generate a target model based on the target parameters respectively corresponding to the plurality of nodes.

In a third aspect, embodiments of the present application further provide an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the model construction method based on a machine learning platform described in the first aspect.

In a fourth aspect, embodiments of the present application further provide a readable storage medium for storing a program that, when executed by a processor, implements the steps of the model construction method based on a machine learning platform described in the first aspect.

In the embodiments of the present application, a target container is started and a plurality of nodes are created in a page of the target container, where each node includes an initial algorithm with preset parameters; input data of a target node is obtained, and the initial algorithm of the target node is trained based on the input data to obtain a training result, the target node being any one of the plurality of nodes; the preset parameters are adjusted based on the training result to obtain target parameters corresponding to the target node; and a target model is generated based on the target parameters respectively corresponding to the plurality of nodes. In this way, adjusting the preset parameters of an initial algorithm does not require running the initial algorithms of all nodes in the entire target container; only the algorithm of the node containing the preset parameters to be adjusted needs to be run to adjust those parameters and obtain the adjusted target parameters, which improves parameter adjustment efficiency. At the same time, after the target parameters corresponding to the plurality of nodes have been obtained, the target model is generated based on these target parameters, completing the training of the model and improving the efficiency of model training.

Brief description of the drawings

In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

Figure 1 is a schematic diagram of the interface of the machine learning platform provided by an embodiment of the present application;

Figure 2 is a flow chart of a model construction method based on a machine learning platform provided by an embodiment of the present application;

Figure 3 is a schematic diagram of multiple nodes in a target container provided by an embodiment of the present application;

Figure 4 is a model training flow chart provided by an embodiment of the present application;

Figure 5 is a schematic diagram of the training of node X and node Y provided by an embodiment of the present application;

Figure 6 is a schematic diagram of the connections of multiple nodes provided by an embodiment of the present application;

Figure 7 is a structural diagram of a model construction device based on a machine learning platform provided by an embodiment of the present application;

Figure 8 is a structural diagram of an electronic device provided by an embodiment of the present application.

Detailed description of the embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.

Modeling is one of the key technologies widely used in machine learning. In the related art, modeling requires writing algorithm code for the scenario being modeled and running that code in a container; the model is optimized through repeated parameter adjustment and repeated runs. In the related art, every run of the algorithm code requires starting the container, running all of the algorithm code in the container, and then saving the run results to disk. When the run results need to be analyzed, they must first be read from disk into memory and then processed in memory. Therefore, in the related art, adjusting the parameters of an algorithm requires starting the container and running the algorithm many times and repeatedly reading run results from disk into memory, which occupies considerable resources; running the entire algorithm each time also consumes a lot of time, so the efficiency of parameter adjustment is low.

To improve the efficiency of algorithm parameter adjustment, embodiments of the present application provide a machine learning platform. Figure 1 shows the interface of the machine learning platform. Multiple containers can be started through the platform, and the target container is any one of these containers. Multiple nodes can be created in the target container; each of the nodes is used to add an initial algorithm, the initial algorithm includes preset parameters, and the multiple nodes include the target node.

After the target container is started, the target node is used to obtain input data and to test the initial algorithm of the target node based on the input data to obtain a training result.

The above target container is a container that includes multiple nodes, each of which can run independently. By adding an initial algorithm to each node, running a particular node runs only the initial algorithm of that node, without running all the algorithms in the container, which reduces the time needed to adjust parameters.

The above nodes are virtual nodes configured in the target container; each node corresponds to at least one operator in the target container, and each operator can run independently. When model training is required, an initial algorithm is added to each node and is trained through the operators corresponding to the node, so that different algorithms can be trained in sequence.

Please refer to Figure 2, which is a flow chart of a model construction method based on a machine learning platform provided by an embodiment of the present application. As shown in Figure 2, the method includes the following steps:

Step S1: Start a target container, and create multiple nodes in a page of the target container, where each node includes an initial algorithm with preset parameters.

The above target container is the container shown in Figure 1. Before model training, the target container needs to be started, and model training is performed through the nodes in the target container. The multiple nodes are nodes created on the page of the target container; the number of nodes depends on the model to be trained, the model corresponds to one or more initial algorithms to be trained, and each initial algorithm corresponds to an independent node.
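As a rough illustration of the structure described in step S1, the following Python sketch represents a target container and its nodes, where each node carries an initial algorithm together with its preset parameters. The class names, the `preset_params` field and the `algorithm` callable are assumptions introduced for this example; they mirror the description above rather than an actual platform API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Node:
    """One node on the target container's page: an initial algorithm plus its preset parameters."""
    name: str
    algorithm: Callable[[Any, Dict[str, float]], Any]   # initial algorithm of this node
    preset_params: Dict[str, float] = field(default_factory=dict)

@dataclass
class Container:
    """Target container holding independently runnable nodes."""
    name: str
    nodes: List[Node] = field(default_factory=list)
    started: bool = False

    def start(self) -> None:
        self.started = True

    def create_node(self, name, algorithm, preset_params) -> Node:
        node = Node(name, algorithm, dict(preset_params))
        self.nodes.append(node)
        return node

# One node per initial algorithm to be trained (step S1).
container = Container("target-container")
container.start()
container.create_node("node-1", lambda x, p: [v * p["scale"] for v in x], {"scale": 1.0})
container.create_node("node-2", lambda x, p: [v + p["bias"] for v in x], {"bias": 0.0})
```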

Each initial algorithm includes preset parameters, i.e., the parameters of that algorithm; the preset parameters of the initial algorithm are adjusted with training data, thereby training the initial algorithm.

Step S2: Obtain input data of a target node, and train the initial algorithm of the target node based on the input data to obtain a training result, where the target node is any one of the multiple nodes.

The above input data is the input data of the target node; it is used to train the initial algorithm of the target node and to adjust the preset parameters of the initial algorithm until the preset parameters meet the set requirements. The training result is the result obtained by testing the input data with the initial algorithm of the target node; after obtaining the training result, the initial algorithm of the target node outputs it so that the preset parameters can be adjusted based on the training result.

Step S3: Adjust the preset parameters based on the training result to obtain target parameters corresponding to the target node.

The above target parameters are the adjusted parameters. Adjusting the preset parameters based on the training result means adjusting them until target parameters that satisfy a training cutoff condition are obtained. For example, the preset parameters may be adjusted based on the training result until the training result output by the target node falls within a set value range, at which point the values of the preset parameters in the target container are the target parameters; or, an adjustment count may be set, and when the number of times the preset parameters have been adjusted based on the training result reaches that count, the values of the preset parameters in the target container are the target parameters.
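A minimal sketch of steps S2 and S3 for a single node follows, re-using the `Node` structure from the earlier sketch and assuming the two example cutoff criteria just mentioned (a target value range or a fixed number of adjustments). The update rule and the function name are illustrative assumptions, not the platform's actual tuning logic.

```python
def train_node(node, input_data, target_range=(0.9, 1.1), max_adjustments=100):
    """Run only this node's initial algorithm and adjust its preset parameters (steps S2-S3)."""
    params = dict(node.preset_params)
    for _ in range(max_adjustments):                      # cutoff: adjustment count reached
        result = node.algorithm(input_data, params)       # training result of this node only
        mean = sum(result) / len(result)
        if target_range[0] <= mean <= target_range[1]:    # cutoff: result within the set range
            break
        # Illustrative adjustment: nudge every parameter toward the target range.
        step = 0.1 if mean < target_range[0] else -0.1
        params = {k: v + step for k, v in params.items()}
    node.preset_params = params                           # these are now the target parameters
    return result, params
```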

Step S4: Generate a target model based on the target parameters respectively corresponding to the multiple nodes.

The above multiple nodes are each provided with preset parameters. By adjusting the preset parameters of the initial algorithm of each node, the target parameters corresponding to each node are obtained, thereby training the algorithm of each node. Adjusted algorithms are then generated based on the target parameters corresponding to each node, and the adjusted algorithms of the multiple nodes are assembled to generate the trained target model.

In the embodiments of the present application, a target container is started and multiple nodes are created in a page of the target container, where each node includes an initial algorithm with preset parameters; input data of a target node is obtained, and the initial algorithm of the target node is trained based on the input data to obtain a training result, the target node being any one of the multiple nodes; the preset parameters are adjusted based on the training result to obtain target parameters corresponding to the target node; and a target model is generated based on the target parameters respectively corresponding to the multiple nodes. In this way, adjusting the preset parameters of an initial algorithm does not require running the initial algorithms of all nodes in the entire target container; only the algorithm of the node containing the preset parameters to be adjusted needs to be run to adjust those parameters and obtain the adjusted target parameters, which improves parameter adjustment efficiency. At the same time, after the target parameters corresponding to the multiple nodes have been obtained, the target model is generated based on these target parameters, completing the training of the model and improving the efficiency of model training.

In one embodiment, before obtaining the input data of the target node, the method further includes:

configuring at least one operator for each of the multiple nodes, and determining the initial algorithm based on the at least one operator;

establishing connection relationships among the multiple nodes, where the connection relationships characterize the upper-level node and/or lower-level node corresponding to each of the multiple nodes, the input data of each node is the output data of its upper-level node, and the output data of each node is the input data of its lower-level node.

The above operators are configured for each node and are used to perform computations on the node's input data. By configuring at least one operator for each node, and establishing the execution order of the operators, the input/output relationships between the operators and/or the parameters in the operators, the initial algorithm of each node is determined. The preset parameters include the parameters in the operators, and adjusting the preset parameters based on the training result specifically means adjusting the parameters of the at least one operator configured for the node.

The above connection relationship is the connection relationship between every two nodes; when two nodes are connected, the output data of one node is the input data of the other. By determining the connection relationships among multiple nodes, a connection network in which the nodes are interrelated pairwise is formed. This connection network is a virtual network composed of the multiple nodes, and from it the upper-level node and lower-level node of each node can be determined.

For example, as shown in Figure 3, the target container includes four nodes, and node 1, node 2, node 3 and node 4 are connected in sequence. When the algorithm of node 2 needs to be trained, the upper-level node of node 2 is determined to be node 1; in this case, the output data of node 1 is obtained as the input data of node 2, and the initial algorithm of node 2 is trained based on the output data of node 1.

Further, when the target node has no upper-level node, that is, when the target node is the first node in the connection network, the input data of the target node is the input data of the target container; in this case, the initial algorithm of the target node is trained based on the input data of the target container.
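To illustrate the connection network, the sketch below keeps a simple upstream map and resolves each node's input data either from its upper-level node's output or, for the first node, from the container's own input. The dictionary names and the helper function are assumptions made for the example.

```python
# Connection network: node name -> name of its upper-level (upstream) node.
upstream = {"node-2": "node-1", "node-3": "node-2", "node-4": "node-3"}

# Output data produced so far, keyed by node name (kept in the container's memory).
node_outputs = {}

def input_data_for(node_name, container_input):
    """Input of a node is its upper-level node's output; the first node uses the container input."""
    parent = upstream.get(node_name)
    if parent is None:                       # node has no upper-level node (first node)
        return container_input
    return node_outputs[parent]              # output of the upper-level node
```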

In the embodiments of the present application, at least one operator is configured for each of the multiple nodes, the initial algorithm is determined based on the at least one operator, and connection relationships among the multiple nodes are established to form a connection network. The connection network makes it possible to determine the upper-level node and lower-level node of each node, so that the initial algorithm of the target node can be trained using the output data of the upper-level node corresponding to the target node.

In one embodiment, obtaining the input data of the target node and training the initial algorithm of the target node based on the input data to obtain the training result includes:

when the target node has an upper-level node, obtaining the output data of the upper-level node of the target node as the input data of the target node;

processing the input data based on the at least one operator of the target node to obtain the training result; where the initial algorithm includes the at least one operator, each operator is configured with operator initial parameters, and the preset parameters include the operator initial parameters of the at least one operator.

Adjusting the preset parameters based on the training result to obtain the target parameters corresponding to the target node includes:

adjusting the operator initial parameters of each of the at least one operator of the target node based on the training result until a training cutoff condition is met, to obtain operator result parameters corresponding to the at least one operator;

determining the target parameters of the target node according to the operator result parameters corresponding to the at least one operator.

It should be noted that when the target node has an upper-level node, the output data of the target node's upper-level node is used as the input data of the target node; when the target node has no upper-level node, that is, when the target node is the first node in the target container, the input data of the target node is the input data of the target container.

The above initial algorithm is configured with at least one operator, and the operators include initial parameters. Adjusting the preset parameters of the initial algorithm specifically means adjusting, based on the training result, the operator initial parameters of each of the at least one operator of the target node until the training cutoff condition is met, so as to obtain the adjusted initial parameters corresponding to each operator, i.e., the operator result parameters corresponding to each operator. The target parameters are then determined from the operator result parameters corresponding to each operator in the target node; the target parameters are the operator training results combined according to the execution order of the operators and/or the input/output relationships between the operators.

The training cutoff condition may be one of the following:

(1) the loss value of the initial algorithm containing the operator training results of each operator is calculated, and the loss value is less than or equal to a set loss threshold;

(2) the number of times the initial parameters of each operator have been adjusted based on the training result equals a set number;

(3) the adjusted initial algorithm containing the operator training results of each operator computes on the input data, and the obtained training result falls within a set range.

When the above training cutoff condition is met, the preset parameters obtained from the last adjustment are the target parameters.
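The three cutoff conditions above can be expressed as a single predicate. The sketch below assumes that a loss value, an adjustment counter or a result statistic is supplied depending on which criterion has been configured; the patent lists the conditions as alternatives, so only the configured one is checked here, and the function name and signature are assumptions.

```python
def cutoff_met(loss=None, loss_threshold=None,
               adjustments=None, max_adjustments=None,
               result_mean=None, result_range=None):
    """Return True if the configured training cutoff condition is satisfied.

    (1) loss <= threshold, (2) adjustment count reached, or (3) result within the set range.
    Only the criterion that has been configured is checked.
    """
    if loss_threshold is not None and loss is not None:
        return loss <= loss_threshold
    if max_adjustments is not None and adjustments is not None:
        return adjustments >= max_adjustments
    if result_range is not None and result_mean is not None:
        return result_range[0] <= result_mean <= result_range[1]
    return False
```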

The above loss value is used to characterize whether the adjusted initial parameters of the operators in the current initial algorithm meet the training expectations. When the loss value is less than or equal to the set loss threshold, the adjusted initial parameters of the operators in the current initial algorithm meet the training expectations, and in this case these adjusted initial parameters are the operator result parameters; when the loss value is greater than the set loss threshold, the adjusted initial parameters of the operators need to be adjusted further.

In addition to using the loss value, whether the training expectations are met can also be determined from the number of times the operators' adjusted initial parameters have been adjusted: when the number of adjustments made based on the training result equals the set number, the training is considered to meet expectations, and the parameters from the last adjustment are taken as the operator result parameters.

Further, whether the training expectations are met can also be determined by whether the training result falls within the set range: when the training result output by the initial algorithm containing the operators' adjusted initial parameters is within the set range, the adjusted initial parameters of the operators are taken as the operator result parameters.

In the embodiments of the present application, when the target node has an upper-level node, the output data of the upper-level node of the target node is obtained as the input data of the target node; the input data is processed based on the at least one operator of the target node to obtain the training result, where the initial algorithm includes the at least one operator, each operator is configured with operator initial parameters, and the preset parameters include the operator initial parameters of the at least one operator; the operator initial parameters of each of the at least one operator of the target node are adjusted based on the training result until the training cutoff condition is met, to obtain the operator result parameters corresponding to the at least one operator; and the target parameters of the target node are determined according to the operator result parameters corresponding to the at least one operator. In this way, the target parameters of the target node are determined from the training result, so that a single node in the target container can be trained independently.

In one embodiment, after adjusting the preset parameters based on the training result to obtain the target parameters corresponding to the target node, the method further includes:

performing computations on the input data based on the at least one operator in the target node to obtain the output data of the target node, where the at least one operator has corresponding operator result parameters;

storing the output data of the target node and/or the target parameters of the target node in the memory corresponding to the target container.

In the related art, the output data of the target container needs to be written to disk first, and then read from disk into memory in order to process it. As a result, during training of the target container, data must be written to and read from disk frequently, which occupies considerable hardware resources and prevents the output data from being obtained quickly, so efficiency is low.

In the embodiments of the present application, the training results are stored in the memory corresponding to the target container; specifically, the output data of the target node and/or the target parameters of the target node are stored in the memory corresponding to the target container. When the output data and/or target parameters of the target node are needed, they are obtained directly from memory without reading the training results from disk, which avoids disk input/output, reduces hardware resource usage, and improves data access efficiency.

Storing the output data of the target node and/or the target parameters of the target node in the memory corresponding to the target container can specifically be achieved by setting the target container to a memory mode, in which the output data and/or target parameters of every node in the target container are stored directly in memory.
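A sketch of what a "memory mode" result store might look like follows: each node's output data and target parameters are kept in an in-process dictionary instead of being written to disk. The `ResultStore` class and its method names are invented for illustration.

```python
class ResultStore:
    """In-memory store for per-node results while the target container is in memory mode."""

    def __init__(self):
        self._outputs = {}        # node name -> output data
        self._target_params = {}  # node name -> target parameters

    def save(self, node_name, output_data=None, target_params=None):
        if output_data is not None:
            self._outputs[node_name] = output_data
        if target_params is not None:
            self._target_params[node_name] = target_params

    def output_of(self, node_name):
        return self._outputs[node_name]      # read directly from memory, no disk round trip

    def params_of(self, node_name):
        return self._target_params[node_name]

store = ResultStore()
```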

In one embodiment, the method further includes:

training the initial algorithms of the multiple nodes in sequence according to the connection relationships among the multiple nodes, to obtain the target parameters respectively corresponding to the multiple nodes; where, when the current node is not the first of the multiple nodes, the output data of its upper-level node is obtained from the memory corresponding to the target container as the input data of the current node;

closing the target container.

It should be noted that after the initial parameters of the at least one operator configured for the target node have been trained and the operator result parameters obtained, other nodes in the target container may still need to be trained; before training them, the input data of each node needs to be obtained through the connection relationships among the multiple nodes. When the output data of the target node and/or the target parameters of the target node are stored in the memory corresponding to the target container, the input data of the node to be trained can be obtained directly from memory, enabling fast training of the initial parameters of the at least one operator configured for that node.

The target container includes multiple nodes. During training, the initial parameters of the at least one operator configured for each node can be trained in sequence according to the connection relationships, and the target container is closed only after all nodes have been trained. This avoids frequently starting and closing the target container and thus reduces the resources of the machine learning platform consumed by frequent container start-up and shut-down.

Further, after all nodes have been trained and before the target container is closed, the data generated during training and stored in memory needs to be returned to the machine learning platform so that the target model can be built later. The generated data includes the output data of each node after its training is completed, and the target parameters of each node after its training is completed.
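Putting the previous sketches together, the following hypothetical driver trains the nodes one after another in connection order, feeds each node the in-memory output of its upper-level node, returns the accumulated results to the platform, and only then closes the container. `train_node`, `input_data_for`, `node_outputs` and `ResultStore` refer to the earlier sketches and are assumptions, not platform functions.

```python
def train_all(container, ordered_nodes, container_input, store):
    """Train every node in connection order, then close the target container once."""
    for node in ordered_nodes:
        data = input_data_for(node.name, container_input)   # from memory for non-first nodes
        output, target_params = train_node(node, data)
        node_outputs[node.name] = output                     # make it available downstream
        store.save(node.name, output_data=output, target_params=target_params)
    results = {n.name: store.params_of(n.name) for n in ordered_nodes}
    container.started = False                                # close the container after all nodes
    return results                                           # returned to the platform for modeling
```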

In one embodiment, after storing the output data of the target node and/or the target parameters of the target node in the memory corresponding to the target container, the method further includes:

in response to an operation instruction to view the output data of the target node and/or the target parameters of the target node, obtaining the output data of the target node and/or the target parameters of the target node from the memory corresponding to the target container;

displaying the output data of the target node and/or the target parameters of the target node.

The device that displays the output data of the target node and/or the target parameters of the target node may be the device running the target container, or a device communicatively connected to the device running the target container. By displaying the output data and/or target parameters of the target node, the preset parameters can then be adjusted.

The above operation instruction for viewing is an instruction issued by the device that displays the output data and/or target parameters of the target node. In the embodiments of the present application, when the viewing instruction is received, the output data and/or target parameters of the target node are obtained from the memory corresponding to the target container and displayed, so that the output data and/or target parameters of the target node can be obtained quickly and the preset parameters of the initial algorithm can be adjusted based on the target data.

When the viewing instruction is received and the target node has not yet finished training, the displayed data is the data from the training process (that is, the output data of the target node during training and/or the initial parameters being adjusted in the target node's operators); when the viewing instruction is received and the target node has finished training, the displayed data is the output data of the trained target node and/or the target parameters of the target node.
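A minimal sketch of handling a view instruction: it simply reads whatever the memory store currently holds for the node, so during training it shows in-progress values and after training it shows the final ones. The function name and the print-based "display" are assumptions made for the example.

```python
def handle_view_instruction(store, node_name, show_output=True, show_params=True):
    """Fetch the requested data for a node from the container's memory and display it."""
    if show_output:
        print(f"{node_name} output:", store.output_of(node_name))
    if show_params:
        print(f"{node_name} parameters:", store.params_of(node_name))
```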

In one embodiment, the multiple nodes include a data reading node, a data processing node, a model training node and a model evaluation node, and the at least one operator configured for each node is used to perform the function of that node. For example, each operator configured in the data reading node is an algorithm or function that reads data, and each operator configured in the model training node is an algorithm or function that performs model training. In some embodiments, the operators within each node are connected: the input data of the first operator in the current node is the output data of the previous node, and the input data of each other operator in the current node is the output data of the operator preceding it. The operators in each node process the node's input data according to these connections to implement the function of the node and finally produce the node's output data.

It should be noted that different nodes in the target container perform different tasks. For example, the data reading node only reads the node's input data from memory; the data processing node processes the read data, for example by normalization or filtering; the model training node trains and adjusts the operators containing the operator initial parameters to obtain the operator result parameters; and the model evaluation node evaluates the operators after one pass of model training (that is, after the initial algorithm in the operators has been adjusted once) to determine whether the current training result meets the training cutoff condition. The different nodes together complete the training of the preset parameters of the initial algorithm within a node.

The at least one operator configured in each of these nodes is used to perform the function of that node; for example, the operators in the data processing node normalize or filter the input data, the operators in the model training node perform computations on the data, and the operators in the model evaluation node compute the loss value or the number of training iterations.
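As an illustration of how the four node types could map to operator chains, the sketch below attaches type-specific operators to each node type. The concrete operators (normalization, a weight-scaling "training" step, a squared-error loss) are toy stand-ins chosen only to show the division of labour, not the platform's real operators.

```python
def read_operator(data, params):
    return list(data)                                       # data reading node: just load the input

def normalize_operator(data, params):
    scale = max(abs(v) for v in data) or 1.0
    return [v / scale for v in data]                        # data processing node: normalization

def train_operator(data, params):
    return [v * params.get("weight", 1.0) for v in data]    # model training node: apply parameters

def evaluate_operator(data, params):
    target = params.get("target", 1.0)
    return sum((v - target) ** 2 for v in data) / len(data) # model evaluation node: loss value

node_operators = {
    "read-data": [read_operator],
    "data-processing": [normalize_operator],
    "model-training": [train_operator],
    "model-evaluation": [evaluate_operator],
}
```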

In one embodiment, generating the target model based on the target parameters respectively corresponding to the multiple nodes includes:

connecting the multiple nodes based on their connection relationships to generate a modeling workflow, where the multiple nodes respectively correspond to the target parameters;

running the modeling workflow to generate the target model.

It should be noted that after the preset parameters of each node's initial algorithm have been trained in the target container, the output data of each node and/or the target parameters of each node are obtained from the memory corresponding to the target container; at this point every initial algorithm has finished training, yielding trained algorithms. The machine learning platform connects the multiple nodes based on their connection relationships to generate a modeling workflow whose nodes include the trained algorithms, and by running the modeling workflow obtains the target model whose training is complete. Because this target model is trained through the target container of the present application, training efficiency is effectively improved.
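Generating the target model can be pictured as chaining the trained nodes into a workflow and running it once. In the sketch below the "target model" is simply a function that applies each trained node's algorithm in connection order; this is an assumption about what "assembling" the nodes means, not the platform's actual artifact format.

```python
def build_workflow(ordered_nodes, trained_params):
    """Connect the trained nodes into a modeling workflow; running it yields the target model."""
    def target_model(container_input):
        data = container_input
        for node in ordered_nodes:
            data = node.algorithm(data, trained_params[node.name])  # trained, not preset, params
        return data
    return target_model

# Hypothetical usage, continuing the earlier sketches:
# model = build_workflow(container.nodes, results)
# predictions = model(new_input_data)
```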

Further, an embodiment of the present application also provides the model training schematic shown in Figure 4. As shown in Figure 4, the model training process includes:

1) Start the target container. Before starting the target container, set the central processing unit (CPU) and memory that the target container needs to occupy;

2) After starting the target container, create multiple nodes in the target container, including node X and node Y, and add operators to each node in turn; the operators in each node jointly correspond to one initial algorithm of that node, and the operators can be assembled by drag-and-drop on the page of the target container;

3) Establish the connection relationships between the nodes, and splice the nodes in the target container based on the connection relationships to form the multi-node workflow shown in Figure 5; in the workflow, node X is the upper-level node of node Y, and the output data of node X is the input data of node Y;

4) Train the initial algorithm in node X (that is, train the initial algorithm corresponding to operator 1, operator 2 and operator 3 in node X), and adjust the preset parameters of the initial algorithm based on the training results until they meet the training expectations;

5) Store the output data of node X in memory, so that it can be obtained from memory for model training or previewing;

6) Obtain the output data of node X from memory, and train the initial algorithm in node Y based on the output data of node X (that is, train the initial algorithm corresponding to operator 4, operator 5 and operator 6 in node Y);

7) After the preset parameters of the initial algorithms in all nodes of the target container have been adjusted, assemble all the nodes to form the structure shown in Figure 6, run all the nodes, and generate the trained model.

In this way, by training the initial algorithm in each node in turn, parameter adjustment does not require running all operators in the entire container every time; only the operators of the target node corresponding to the parameters to be adjusted need to be run to adjust the operator parameters of the model, thereby improving parameter adjustment efficiency.
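To tie the Figure 4 flow together, here is a hypothetical end-to-end run with two nodes standing in for node X and node Y. Every name and value is illustrative, and it simply strings together the sketches introduced earlier in this description.

```python
# 1) start the container and 2) create node X and node Y with their initial algorithms
container = Container("demo-container")
container.start()
node_x = container.create_node("node-X", lambda x, p: [v * p["scale"] for v in x], {"scale": 2.0})
node_y = container.create_node("node-Y", lambda x, p: [v + p["bias"] for v in x], {"bias": 0.5})

# 3) connection relationship: node X feeds node Y
upstream["node-Y"] = "node-X"

# 4)-6) train node X, keep its output in memory, then train node Y from that output
store = ResultStore()
results = train_all(container, [node_x, node_y], container_input=[0.2, 0.4, 0.6], store=store)

# 7) assemble the trained nodes into a workflow and generate the model
model = build_workflow([node_x, node_y], results)
print(model([0.2, 0.4, 0.6]))
```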

Please refer to Figure 7, which is a structural diagram of a model construction device based on a machine learning platform provided by an embodiment of the present application. As shown in Figure 7, the model construction device 700 based on a machine learning platform includes:

a creation module 701, configured to start a target container and create multiple nodes in a page of the target container, where each node includes an initial algorithm with preset parameters;

a first training module 702, configured to obtain input data of a target node and train the initial algorithm of the target node based on the input data to obtain a training result, where the target node is any one of the multiple nodes;

an adjustment module 703, configured to adjust the preset parameters based on the training result to obtain target parameters corresponding to the target node;

a generation module 704, configured to generate a target model based on the target parameters respectively corresponding to the multiple nodes.

In one embodiment, before the first training module 702, the model construction device 700 based on a machine learning platform includes:

a configuration module, configured to configure at least one operator for each of the multiple nodes and determine the initial algorithm based on the at least one operator;

an establishment module, configured to establish connection relationships among the multiple nodes, where the connection relationships characterize the upper-level node and/or lower-level node corresponding to each of the multiple nodes, the input data of each node is the output data of its upper-level node, and the output data of each node is the input data of its lower-level node.

In one embodiment, the first training module 702 includes:

an acquisition unit, configured to, when the target node has an upper-level node, obtain the output data of the upper-level node of the target node as the input data of the target node;

a training unit, configured to process the input data based on the at least one operator of the target node to obtain the training result; where the initial algorithm includes the at least one operator, each operator is configured with operator initial parameters, and the preset parameters include the operator initial parameters of the at least one operator.

The adjustment module 703 includes:

a first adjustment unit, configured to adjust the operator initial parameters of each of the at least one operator of the target node based on the training result until the training cutoff condition is met, to obtain the operator result parameters corresponding to the at least one operator;

a second adjustment unit, configured to determine the target parameters of the target node according to the operator result parameters corresponding to the at least one operator.

In one embodiment, after the adjustment module 703, the model construction device 700 based on a machine learning platform further includes:

a calculation module, configured to perform computations on the input data based on the at least one operator in the target node to obtain the output data of the target node, where the at least one operator has corresponding operator result parameters;

a storage module, configured to store the output data of the target node and/or the target parameters of the target node in the memory corresponding to the target container.

In one embodiment, the model construction device 700 based on a machine learning platform further includes:

a second training module, configured to train the initial algorithms of the multiple nodes in sequence according to the connection relationships among the multiple nodes, to obtain the target parameters respectively corresponding to the multiple nodes; where, when the current node is not the first of the multiple nodes, the output data of its upper-level node is obtained from the memory corresponding to the target container as the input data of the current node;

a shutdown module, configured to close the target container.

In one embodiment, after the storage module, the model construction device 700 based on a machine learning platform further includes:

a response module, configured to, in response to an operation instruction to view the output data of the target node and/or the target parameters of the target node, obtain the output data of the target node and/or the target parameters of the target node from the memory corresponding to the target container;

a display module, configured to display the output data of the target node and/or the target parameters of the target node.

In one embodiment, each of the nodes is a data reading node, a data processing node, a model training node or a model evaluation node, and the at least one operator configured for each node is an operator that performs data reading, data processing, model training or model evaluation.

In one embodiment, the generation module 704 includes:

a first generation unit, configured to connect the multiple nodes based on their connection relationships to generate a modeling workflow, where the multiple nodes respectively correspond to the target parameters;

a second generation unit, configured to run the modeling workflow to generate the target model.

The model construction device based on a machine learning platform provided by the embodiments of the present application can implement all the processes of the embodiments of the model construction method based on a machine learning platform described above; the technical features correspond one to one and the same technical effects can be achieved, so to avoid repetition they are not described again here.

It should be noted that the model construction device based on a machine learning platform in the embodiments of the present application may be a device, or may be a component, integrated circuit or chip in an electronic device.

An embodiment of the present application further provides an electronic device. Referring to Figure 8, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application, the electronic device includes a memory 801, a processor 802, and a program or instructions stored on the memory 801; when the program or instructions are executed by the processor 802, any of the steps of the method embodiment corresponding to Figure 2 can be implemented and the same beneficial effects achieved, which will not be described again here.

The processor 802 may be a CPU, an ASIC, an FPGA or a GPU.

Those of ordinary skill in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by hardware under the control of program instructions, and the program can be stored in a readable medium.

An embodiment of the present application further provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, any of the steps of the method embodiment corresponding to Figure 2 can be implemented and the same technical effects achieved, which will not be repeated here. The storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.

The terms "first", "second", and so on in the embodiments of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device. In addition, "and/or" in this application indicates at least one of the connected objects; for example, "A and/or B and/or C" covers the seven cases of A alone, B alone, C alone, both A and B, both B and C, both A and C, and A, B, and C together.

It should be noted that, in this document, the terms "comprise", "include", or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements but also includes other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the statement "comprises a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.

Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of this application that is essential or that contributes to the related art can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a second terminal device, or the like) to execute the methods of the embodiments of this application.

The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art can devise many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (10)

CN202311628007.3A2023-11-302023-11-30Model construction method and device based on machine learning platform and electronic equipmentActiveCN117591244B (en)

Priority Applications (1)

Application Number: CN202311628007.3A (published as CN117591244B); Priority Date: 2023-11-30; Filing Date: 2023-11-30; Title: Model construction method and device based on machine learning platform and electronic equipment

Publications (2)

Publication Number: CN117591244A; Publication Date: 2024-02-23
Publication Number: CN117591244B; Publication Date: 2024-12-27

Family

ID=89921678

Family Applications (1)

Application Number: CN202311628007.3A; Title: Model construction method and device based on machine learning platform and electronic equipment; Priority Date: 2023-11-30; Filing Date: 2023-11-30; Status: Active; Granted publication: CN117591244B (en)

Country Status (1)

Country: CN; Link: CN117591244B (en)


Also Published As

Publication Number: CN117591244B (en); Publication Date: 2024-12-27


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
