CN118394592B - A Paas platform based on cloud computing - Google Patents

A Paas platform based on cloud computing

Info

Publication number
CN118394592B
Authority
CN
China
Prior art keywords
module
resource
preset
service request
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410452051.1A
Other languages
Chinese (zh)
Other versions
CN118394592A (en)
Inventor
黄强
朱湘军
彭永坚
汪壮雄
李利苹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Video Star Intelligent Co ltd
Original Assignee
Guangzhou Video Star Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Video Star Intelligent Co ltd
Priority to CN202410452051.1A
Publication of CN118394592A
Application granted
Publication of CN118394592B
Legal status: Active (Current)
Anticipated expiration


Abstract

The application is applicable to the technical field of cloud computing and provides a PaaS platform based on cloud computing. The platform comprises a real-time monitoring module, a load prediction module, a resource management operation module, a first acquisition module and a resource allocation module. The real-time monitoring module is used for monitoring resource monitoring data of the PaaS platform in real time; the load prediction module is used for inputting the resource monitoring data into a preset load prediction model to obtain a predicted load quantity; the resource management operation module is used for executing a resource management operation corresponding to the predicted load quantity; the first acquisition module is used for acquiring a user service request; and the resource allocation module is used for dynamically allocating resource nodes according to the resource monitoring data and the user service request based on a preset load balancing strategy. By realizing dynamic resource increase and decrease and automatic load balancing, the application can allocate resources in real time according to the load condition, thereby improving resource utilization, avoiding resource waste and improving overall resource management efficiency.

Description

Paas platform based on cloud computing
Technical Field
The application belongs to the technical field of cloud computing, and particularly relates to a Paas platform based on cloud computing.
Background
At present, cloud computing is developing rapidly, and more and more enterprises and individuals choose to migrate their services to cloud platforms. Because of the huge scale of cloud platforms, resource allocation and management are often complex, and the prior art cannot realize efficient resource monitoring and management, so the technical problems of low resource utilization and resource waste exist.
Disclosure of Invention
The embodiment of the application provides a Paas platform based on cloud computing, which can solve the problems of low resource utilization rate and resource waste in the prior art.
In a first aspect, an embodiment of the present application provides a Paas platform based on cloud computing, including:
The real-time monitoring module is used for monitoring the resource monitoring data of the PaaS platform in real time;
The load prediction module is used for inputting the resource monitoring data into a preset load prediction model to obtain the predicted load quantity;
The resource management operation module is used for executing resource management operation corresponding to the predicted load quantity based on the predicted load quantity;
the first acquisition module is used for acquiring a user service request;
and the resource allocation module is used for dynamically allocating resource nodes according to the resource monitoring data and the user service request based on a preset load balancing strategy.
In a possible implementation manner of the first aspect, the Paas platform based on cloud computing further includes:
The second acquisition module is used for acquiring historical training data;
the preprocessing module is used for preprocessing the historical training data;
The extraction module is used for extracting feature training data from the preprocessed historical data;
and the training module is used for training the preset load prediction model according to the characteristic training data based on the preset loss function to obtain a trained preset load prediction model.
In a possible implementation manner of the first aspect, the preset loss function is:
$L = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$
where $L$ represents the preset loss function, $y_i$ represents the true value, $\hat{y}_i$ represents the predicted value of the preset load prediction model, and $n$ represents the number of samples.
In a possible implementation manner of the first aspect, the preset load prediction model includes a preset input layer, a preset long short-term memory network layer, a preset multi-layer perceptron layer and a preset output layer;
The load prediction module comprises:
the vectorization processing sub-module is used for vectorizing the resource monitoring data according to a preset input layer to obtain a feature vector;
the time correlation extraction sub-module is used for extracting time correlation of the feature vector according to the preset long short-term memory network layer to obtain a hidden state;
The nonlinear transformation sub-module is used for carrying out nonlinear transformation on the hidden state according to the preset multi-layer perceptron layer to obtain a high-dimensional vector;
And the classification processing sub-module is used for carrying out classification processing on the high-dimensional vector according to a preset output layer to obtain the predicted load quantity.
In a possible implementation manner of the first aspect, the classification processing sub-module includes:
The load prediction unit is used for obtaining the predicted load quantity according to the following formula:
$x = \phi(d_i)$;
$h_t = \mathrm{LSTM}(x_t, h_{t-1})$;
$z = \sigma(W h_t + b)$;
$s_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}$, $\hat{y} = \arg\max_{k \in \{1,\dots,K\}} s_k$;
Wherein, $d_i$ represents the $i$-th item of resource monitoring data, $\phi(\cdot)$ represents the vectorization function, $x$ represents the feature vector, $x_t$ represents the feature vector at the current time point $t$, $h_{t-1}$ represents the hidden state at the previous time point $t-1$, $\mathrm{LSTM}(\cdot)$ represents the long short-term memory operation corresponding to the preset long short-term memory network layer, $h_t$ represents the hidden state at the current time point $t$, $W$ represents the weight matrix, $b$ represents the offset vector, $\sigma(\cdot)$ represents the activation operation corresponding to the preset multi-layer perceptron layer, $z$ represents the high-dimensional vector, $s_k$ represents the score that the high-dimensional vector belongs to the $k$-th category, $K$ represents the total number of categories, and $\hat{y}$ represents the predicted load quantity.
In a possible implementation manner of the first aspect, the resource management operation module includes:
The first resource management operation sub-module is used for executing first resource management operation when the predicted load quantity is larger than a preset load quantity threshold value;
Or alternatively
And the second resource management operation sub-module is used for executing a second resource management operation when the predicted load quantity is smaller than a preset load quantity threshold value.
In a possible implementation manner of the first aspect, the user service request is a first user service request;
the resource allocation module comprises:
The first calling sub-module is used for calling a first load balancing strategy corresponding to the first user service request;
And the first resource allocation submodule is used for dynamically allocating resource nodes according to the resource monitoring data based on the first load balancing strategy.
In a possible implementation manner of the first aspect, the user service request is a second user service request;
the resource allocation module comprises:
The second calling sub-module is used for calling a second load balancing strategy corresponding to the second user service request;
And the second resource allocation submodule dynamically allocates resource nodes according to the resource monitoring data based on the second load balancing strategy.
In a possible implementation manner of the first aspect, the user service request is a third user service request;
the resource allocation module comprises:
A third calling sub-module for calling a third load balancing strategy corresponding to the third service request;
And the third resource allocation sub-module is used for dynamically allocating resource nodes according to the resource monitoring data based on the third load balancing strategy.
In a possible implementation manner of the first aspect, the user service request is a fourth user service request;
the resource allocation module comprises:
a fourth calling sub-module for calling a fourth load balancing strategy corresponding to the fourth user service request;
And the fourth resource allocation sub-module is used for dynamically allocating resource nodes according to the resource monitoring data based on the fourth load balancing strategy.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
The Paas platform based on cloud computing comprises a real-time monitoring module, a load prediction module, a resource management operation module, a first acquisition module and a resource allocation module, wherein the real-time monitoring module is used for monitoring resource monitoring data of the Paas platform in real time, the load prediction module is used for inputting the resource monitoring data into a preset load prediction model to obtain predicted load quantity, the resource management operation module is used for executing resource management operation corresponding to the predicted load quantity based on the predicted load quantity, the first acquisition module is used for acquiring a user service request, and the resource allocation module is used for dynamically allocating resource nodes according to the resource monitoring data and the user service request based on a preset load balancing strategy. Therefore, the PaaS platform can allocate the resources according to the load condition in real time by realizing dynamic resource increase and decrease and automatic load balancing, so that the resource utilization rate is improved, the resource waste is avoided, and the overall resource management efficiency is improved.
In addition, by monitoring the resource monitoring data of the PaaS platform in real time, the embodiment of the application can accurately know the current state and resource demand of the PaaS platform; by analyzing historical data and the resource monitoring data in combination with the preset load prediction model, it can predict future loads and output the predicted load quantity, providing an accurate basis for resource allocation. When the predicted load increases, the PaaS platform can automatically add resources, such as virtual machine instances and storage capacity, to meet user demands, ensuring that the platform can dynamically expand resources, improving system performance and stability, and avoiding resource waste. By acquiring the user service request and automatically distributing load according to the resource monitoring data and the characteristics of the service request, load balance among all resource nodes is ensured; and by selecting a suitable load balancing strategy, such as round robin, least connections or IP hashing, optimal allocation of resources can be achieved, improving system performance and user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a Paas platform based on cloud computing according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "once", "in response to a determination" or "in response to detection", depending on the context. Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The following describes the technical scheme of the embodiment of the application.
Referring to fig. 1, which is a schematic structural diagram of a PaaS platform based on cloud computing according to an embodiment of the present application, the platform may include a real-time monitoring module 11, a load prediction module 12, a resource management operation module 13, a first obtaining module 14, and a resource allocation module 15.
The real-time monitoring module 11 is configured to monitor the resource monitoring data of the PaaS platform in real time.
The resource monitoring data comprises CPU usage, memory usage, network bandwidth and the like, and is used for determining the current state and resource requirements of the PaaS platform.
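As a non-limiting illustration of how such a monitoring module could collect these metrics, the following Python sketch samples CPU, memory and network-bandwidth usage with the third-party psutil library; the function name and return format are assumptions for demonstration and are not specified by the application.

```python
# Hedged sketch: one way to sample host-level resource monitoring data.
# psutil is an assumed choice; the application does not name a collection library.
import time
import psutil

def sample_resource_metrics(interval_s: float = 1.0) -> dict:
    """Return one snapshot of CPU, memory and network-bandwidth usage."""
    net_before = psutil.net_io_counters()
    cpu_percent = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
    net_after = psutil.net_io_counters()
    bytes_per_s = ((net_after.bytes_sent + net_after.bytes_recv)
                   - (net_before.bytes_sent + net_before.bytes_recv)) / interval_s
    return {
        "timestamp": time.time(),
        "cpu_percent": cpu_percent,
        "memory_percent": psutil.virtual_memory().percent,
        "network_bytes_per_s": bytes_per_s,
    }
```

In practice, a monitoring module of this kind would push such snapshots to the load prediction module at a fixed sampling interval.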
The load prediction module 12 is configured to input the resource monitoring data into a preset load prediction model, so as to obtain a predicted load quantity.
The preset load prediction model may be a load prediction model obtained by training historical training data on the basis of a deep learning model (such as a convolutional neural network model, etc.), or may be a trained load prediction model obtained directly.
In one possible implementation manner, the Paas platform based on cloud computing further includes:
The second acquisition module is used for acquiring historical training data;
the preprocessing module is used for preprocessing the historical training data;
The extraction module is used for extracting feature training data from the preprocessed historical data;
and the training module is used for training the preset load prediction model according to the characteristic training data based on the preset loss function to obtain a trained preset load prediction model.
Illustratively, the preset loss function is:
$L = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$
where $L$ represents the preset loss function, $y_i$ represents the true value, $\hat{y}_i$ represents the predicted value of the preset load prediction model, and $n$ represents the number of samples.
Illustratively, historical training data refers to user historical request information and corresponding historical resource monitoring data.
The preprocessing module is specifically used for performing operations such as data cleaning, resampling, data smoothing and the like on the historical training data.
The feature training data refers to features extracted from the historical data that reflect the load variation trend and can be used to train the prediction model, such as time stamps, load variation trends, historical load quantities, resource utilization rates and the like;
the extraction module is specifically used for extracting feature training data in the historical data through statistical analysis, time sequence analysis or data mining and the like.
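For illustration only, the preprocessing and feature extraction steps described above could look like the following pandas sketch; the column names ("timestamp", "load") and the chosen window sizes are assumptions, not requirements of the application.

```python
# Hedged sketch of preprocessing (cleaning, resampling, smoothing) and
# feature extraction from historical training data. Column names are assumed,
# and "timestamp" is assumed to be a datetime column.
import pandas as pd

def build_feature_training_data(history: pd.DataFrame) -> pd.DataFrame:
    df = history.set_index("timestamp").sort_index()
    df = df.dropna()                                   # data cleaning
    df = df.resample("5min").mean()                    # resampling to a fixed step
    df["load_smooth"] = df["load"].rolling(3, min_periods=1).mean()  # data smoothing
    # Feature extraction: time stamp, load variation trend, historical load
    df["hour"] = df.index.hour
    df["load_trend"] = df["load_smooth"].diff()
    df["load_lag_1"] = df["load"].shift(1)
    return df.dropna()
```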
The training module is specifically used for:
Inputting the feature training data into the preset load prediction model and obtaining the model output through forward propagation;
Calculating the loss function by comparing the model's predicted value with the true label to obtain a loss value; computing the gradients of the preset load prediction model parameters through the back-propagation algorithm; updating the model parameters according to the computed gradients using an optimization algorithm; and repeating the above steps until a preset stop condition is reached.
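A minimal PyTorch sketch of this training loop is given below; the optimizer, learning rate, epoch budget and the use of mean squared error as the preset loss function are assumptions made for the example only.

```python
# Hedged sketch of the training procedure: forward pass, loss computation,
# back-propagation, parameter update, repeated until a stop condition is met.
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    criterion = nn.MSELoss()                       # assumed form of the preset loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                        # preset stop condition: fixed epoch budget
        for features, labels in loader:
            preds = model(features)                # forward propagation
            loss = criterion(preds, labels)        # compare predicted value with true label
            optimizer.zero_grad()
            loss.backward()                        # gradients via back-propagation
            optimizer.step()                       # update model parameters with the optimizer
    return model
```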
In one possible implementation manner, the preset load prediction model includes a preset input layer, a preset long short-term memory network layer, a preset multi-layer perceptron layer and a preset output layer;
The load prediction module comprises:
the vectorization processing sub-module is used for vectorizing the resource monitoring data according to a preset input layer to obtain a feature vector;
the time correlation extraction sub-module is used for extracting time correlation of the feature vector according to the preset long short-term memory network layer to obtain a hidden state;
The nonlinear transformation sub-module is used for carrying out nonlinear transformation on the hidden state according to the preset multi-layer perceptron layer to obtain a high-dimensional vector;
And the classification processing sub-module is used for carrying out classification processing on the high-dimensional vector according to a preset output layer to obtain the predicted load quantity.
The classification processing sub-module comprises:
The load prediction unit is used for obtaining the predicted load quantity according to the following formula:
$x = \phi(d_i)$;
$h_t = \mathrm{LSTM}(x_t, h_{t-1})$;
$z = \sigma(W h_t + b)$;
$s_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}$, $\hat{y} = \arg\max_{k \in \{1,\dots,K\}} s_k$;
Wherein, $d_i$ represents the $i$-th item of resource monitoring data, $\phi(\cdot)$ represents the vectorization function, $x$ represents the feature vector, $x_t$ represents the feature vector at the current time point $t$, $h_{t-1}$ represents the hidden state at the previous time point $t-1$, $\mathrm{LSTM}(\cdot)$ represents the long short-term memory operation corresponding to the preset long short-term memory network layer, $h_t$ represents the hidden state at the current time point $t$, $W$ represents the weight matrix, $b$ represents the offset vector, $\sigma(\cdot)$ represents the activation operation corresponding to the preset multi-layer perceptron layer, $z$ represents the high-dimensional vector, $s_k$ represents the score that the high-dimensional vector belongs to the $k$-th category, $K$ represents the total number of categories, and $\hat{y}$ represents the predicted load quantity.
According to the embodiment of the application, the resource monitoring data is converted into a feature vector through the preset vectorization function, so that different types of monitoring data are effectively converted into a unified feature representation, which makes subsequent processing and analysis more convenient. The preset long short-term memory network layer, such as an LSTM, extracts time correlation from the feature vector to obtain a hidden state; this captures both the long-term dependence and the short-term fluctuation in the time-series data, so the load prediction model can better model and predict the temporal changes of the data while retaining a memory of historical data. The preset multi-layer perceptron layer then applies a nonlinear transformation to the hidden state to obtain a high-dimensional vector, which increases the fitting and expression capacity of the model, allows it to better learn the nonlinear relations in the data, and lets the perceptron layers abstract and transform the hidden state layer by layer to extract higher-level feature representations. Finally, the preset output layer performs classification processing on the high-dimensional vector to obtain the predicted load quantity, with the classifier mapping the high-dimensional vector to the different load-quantity categories.
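The layer structure described above can be sketched as follows in PyTorch; the feature dimension, hidden size, number of load categories and the class name LoadPredictor are illustrative assumptions and are not taken from the application.

```python
# Hedged sketch of the preset load prediction model: feature vectors from the
# input layer pass through an LSTM layer, a multi-layer perceptron and a
# softmax classification output that yields the predicted load category.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64, n_classes: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # temporal correlation
        self.mlp = nn.Sequential(                                   # nonlinear transformation
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) vectorized resource monitoring data
        _, (h_t, _) = self.lstm(x)              # h_t: hidden state of the last time step
        scores = self.mlp(h_t[-1])              # per-category scores s_k
        return torch.softmax(scores, dim=-1)    # argmax over these gives the predicted load category
```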
And the resource management operation module 13 is used for executing resource management operation corresponding to the predicted load quantity based on the predicted load quantity.
Illustratively, the resource management operation module includes:
and the first resource management operation sub-module is used for executing the first resource management operation when the predicted load quantity is larger than a preset load quantity threshold value.
Specifically, the first resource management operation is to increase computing resources (such as virtual machines, containers, etc.) or storage resources (such as expanding disk capacity) when the predicted load quantity is greater than the preset load quantity threshold, so as to meet the higher load demand.
Illustratively, the resource management operation module includes:
and the second resource management operation sub-module is used for executing a second resource management operation when the predicted load quantity is smaller than a preset load quantity threshold value.
In particular, the second resource management operation is to free up computing or storage resources that are no longer needed to save costs and resource usage.
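A compact sketch of this threshold rule is shown below; the scaler interface (add_nodes / remove_nodes) is a hypothetical abstraction over whatever provisioning API the platform actually uses.

```python
# Hedged sketch of the resource management operations: scale out when the
# predicted load quantity exceeds the preset threshold, scale in when it
# falls below it. The scaler object and its methods are hypothetical.
def manage_resources(predicted_load: float, threshold: float, scaler, step: int = 1) -> None:
    if predicted_load > threshold:
        scaler.add_nodes(step)      # first resource management operation: add compute/storage
    elif predicted_load < threshold:
        scaler.remove_nodes(step)   # second resource management operation: release idle resources
    # when the prediction equals the threshold, the current allocation is kept
```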
A first obtaining module 14, configured to obtain a user service request.
The user service request is a first user service request, a second user service request, a third user service request or a fourth user service request. For example, the first user service request may be a development service requirement, the second user service request may be a test service requirement, the third user service request may be a deployment service requirement, and the fourth user service request may be a management service requirement.
And the resource allocation module 15 is configured to dynamically allocate resource nodes according to the resource monitoring data and the user service request based on a preset load balancing policy.
In a specific application, the user service request is a first user service request;
the resource allocation module comprises:
The first calling sub-module is used for calling a first load balancing strategy corresponding to the first user service request;
And the first resource allocation submodule is used for dynamically allocating resource nodes according to the resource monitoring data based on the first load balancing strategy.
Wherein the first load balancing strategy refers to determining, according to a preset load balancing strategy (such as round robin, least connections, IP hashing, etc.), which resource node the development service request is allocated to for processing, taking into account factors such as the load condition, network bandwidth and response time of the current resource nodes, so as to ensure that a developer's request can be responded to quickly and serviced with high quality.
In a specific application, the user service request is a second user service request;
the resource allocation module comprises:
The second calling sub-module is used for calling a second load balancing strategy corresponding to the second user service request;
And the second resource allocation submodule dynamically allocates resource nodes according to the resource monitoring data based on the second load balancing strategy.
The second load balancing strategy refers to performing different resource allocation and simulation according to the test requirements (such as simulating high concurrency or a large data volume) and using a corresponding load balancing strategy (such as round robin, least connections, IP hashing, etc.) to evenly distribute test service requests to different resource nodes for processing.
In a specific application, the user service request is a third user service request;
the resource allocation module comprises:
A third calling sub-module for calling a third load balancing strategy corresponding to the third service request;
And the third resource allocation sub-module is used for dynamically allocating resource nodes according to the resource monitoring data based on the third load balancing strategy.
The third load balancing strategy refers to preferentially considering indexes of system stability and throughput according to the characteristics of the management service request, invoking a corresponding load balancing strategy (such as round robin, least connections, IP hashing, etc.), and evenly distributing management service requests to different resource nodes for processing.
In a specific application, the user service request is a fourth user service request;
the resource allocation module comprises:
a fourth calling sub-module for calling a fourth load balancing strategy corresponding to the fourth user service request;
And the fourth resource allocation sub-module is used for dynamically allocating resource nodes according to the resource monitoring data based on the fourth load balancing strategy.
The fourth load balancing strategy refers to preferentially considering the resource utilization rate and the priority of deployment tasks according to the characteristics of the deployment service request, invoking a corresponding load balancing strategy (such as round robin, least connections, IP hashing, etc.), and evenly distributing deployment service requests to different resource nodes for processing.
It can be understood that the embodiment of the application dynamically allocates resource nodes according to the resource monitoring data and the user service request based on the preset load balancing strategy, and selects a different load balancing strategy for each type of user service request by calling the corresponding sub-module, so as to provide efficient, fast and high-quality service.
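As a rough illustration of such per-request-type strategy selection, the following sketch maps request types to round robin, least connections and IP hashing; the mapping itself, the node representation and the request fields are assumptions, not the application's prescribed pairing.

```python
# Hedged sketch of a load balancer that picks a strategy per user service
# request type and then selects a resource node. Field names are assumed.
import itertools
import zlib

class LoadBalancer:
    def __init__(self, nodes):
        self.nodes = nodes                                   # e.g. [{"name": "n1", "connections": 3}, ...]
        self._rr = itertools.cycle(range(len(nodes)))
        self.strategies = {                                  # assumed request-type to strategy mapping
            "development": self.round_robin,
            "test": self.least_connections,
            "deployment": self.ip_hash,
            "management": self.least_connections,
        }

    def round_robin(self, request):
        return self.nodes[next(self._rr)]

    def least_connections(self, request):
        return min(self.nodes, key=lambda n: n["connections"])

    def ip_hash(self, request):
        idx = zlib.crc32(request["client_ip"].encode()) % len(self.nodes)
        return self.nodes[idx]

    def dispatch(self, request):
        # request: {"type": "development" | "test" | "deployment" | "management", "client_ip": "..."}
        return self.strategies[request["type"]](request)
```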
The Paas platform based on cloud computing comprises a real-time monitoring module, a load prediction module, a resource management operation module, a first acquisition module and a resource allocation module, wherein the real-time monitoring module is used for monitoring resource monitoring data of the Paas platform in real time, the load prediction module is used for inputting the resource monitoring data into a preset load prediction model to obtain predicted load quantity, the resource management operation module is used for executing resource management operation corresponding to the predicted load quantity based on the predicted load quantity, the first acquisition module is used for acquiring a user service request, and the resource allocation module is used for dynamically allocating resource nodes according to the resource monitoring data and the user service request based on a preset load balancing strategy.
Therefore, the PaaS platform can allocate the resources according to the load condition in real time by realizing dynamic resource increase and decrease and automatic load balancing, so that the resource utilization rate is improved, the resource waste is avoided, and the overall resource management efficiency is improved.
In addition, by monitoring the resource monitoring data of the PaaS platform in real time, the embodiment of the application can accurately know the current state and resource demand of the PaaS platform; by analyzing historical data and the resource monitoring data in combination with the preset load prediction model, it can predict future loads and output the predicted load quantity, providing an accurate basis for resource allocation. When the predicted load increases, the PaaS platform can automatically add resources, such as virtual machine instances and storage capacity, to meet user demands, ensuring that the platform can dynamically expand resources, improving system performance and stability, and avoiding resource waste. By acquiring the user service request and automatically distributing load according to the resource monitoring data and the characteristics of the service request, load balance among all resource nodes is ensured; and by selecting a suitable load balancing strategy, such as round robin, least connections or IP hashing, optimal allocation of resources can be achieved, improving system performance and user experience.
Fig. 2 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 2, the server 2 of this embodiment includes at least one processor 20, a memory 21, and a computer program 22 stored in the memory 21 and executable on the at least one processor 20; when executing the computer program 22, the processor 20 implements the method steps of the cloud computing based PaaS platform described above.
The server 2 may be a computing device such as a cloud server. The server may include, but is not limited to, a processor 20, a memory 21. It will be appreciated by those skilled in the art that fig. 2 is merely an example of the server 2 and is not meant to be limiting as the server 2, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 20 may be a central processing unit (CPU); the processor 20 may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 21 may in some embodiments be an internal storage unit of the server 2, such as a hard disk or a memory of the server 2. The memory 21 may also be an external storage device of the server 2 in other embodiments, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash memory card provided on the server 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the server 2. The memory 21 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory 21 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides a computer readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the method steps of the cloud computing based PaaS platform described above can be realized.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include at least any entity or device capable of carrying the computer program code to the server, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, computer readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing embodiments are merely illustrative of the technical solutions of the present application, and not restrictive, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (8)

Wherein, $d_i$ represents the $i$-th item of resource monitoring data, $\phi(\cdot)$ represents the vectorization function, $x$ represents the feature vector, $x_t$ represents the feature vector at the current time point $t$, $h_{t-1}$ represents the hidden state at the previous time point $t-1$, $\mathrm{LSTM}(\cdot)$ represents the long short-term memory operation corresponding to the preset long short-term memory network layer, $h_t$ represents the hidden state at the current time point $t$, $W$ represents the weight matrix, $b$ represents the offset vector, $\sigma(\cdot)$ represents the activation operation corresponding to the preset multi-layer perceptron layer, $z$ represents the high-dimensional vector, $s_k$ represents the score that the high-dimensional vector belongs to the $k$-th category, $K$ represents the total number of categories, and $\hat{y}$ represents the predicted load quantity.
Priority Applications (1)

Application Number: CN202410452051.1A
Priority Date: 2024-04-16
Filing Date: 2024-04-16
Title: A Paas platform based on cloud computing

Publications (2)

CN118394592A (en): published 2024-07-26
CN118394592B (en): published 2025-02-11




