Disclosure of Invention
Aiming at the technical defects, the invention aims to provide a method for testing the hardware performance of a server.
The method comprises the following steps of obtaining the type of software used by each user in each type of server in a history period of a target production enterprise, so as to perform primary test screening on the software used by each user in each type of server, and further screening to obtain the primary test software used by each type of server.
And step two, final test screening, namely, according to the primary test software of each type corresponding to each type of server, performing final screening on the primary test software of each type corresponding to each type of server, and further screening to obtain the final test software of each type corresponding to each type of server.
Inputting various types of final test software corresponding to various types of servers into various types of server hardware performance test platforms, so as to perform performance test on various hardware in various types of servers in the current period of a target production enterprise, setting a plurality of acquisition time points in the test process, and analyzing monomer performance evaluation values corresponding to the single operation of various types of final test software and combined performance evaluation values corresponding to the combined operation of various types of final test software of various types of hardware in various types of servers at various acquisition time points.
And fourthly, judging the hardware performance, namely analyzing the performance evaluation value corresponding to each hardware in each type of server according to the monomer performance evaluation value corresponding to the single operation of each hardware in each type of final test software and the combined performance evaluation value corresponding to the combined operation of each type of final test software in each type of final test software at each acquisition time point, and further judging whether the performance of each hardware in each type of server is qualified.
A1, collecting the use software corresponding to each user in each type of server in a history period from a server management system of a target production enterprise, classifying the collected use software corresponding to each user in each type of server to obtain the use software corresponding to each user in each type of server, and further counting the use frequency of the use software corresponding to each type of server.
A2, comparing the use frequency of each type of use software corresponding to each type of server with the use frequency corresponding to the set primary test use software, if the use frequency of a certain type of use software corresponding to a certain type of server is larger than or equal to the use frequency corresponding to the set primary test use software, indicating that the type of use software corresponding to the certain type of server has test representativeness, and if the use frequency of a certain type of use software corresponding to the certain type of server is smaller than the use frequency corresponding to the set primary test use software, indicating that the type of use software corresponding to the certain type of server does not have test representativeness, and marking each type of use software corresponding to the certain type of server with test representativeness as each type of primary test use software.
The method comprises the steps of B1, obtaining evaluation values of the performance of the influencing hardware corresponding to the primary test software in each type of server, and arranging the evaluation values of the performance of the influencing hardware corresponding to the primary test software in each type of server in a sequence from large to small.
And B2, marking ten types of primary test use software with highest influence on the hardware performance evaluation value in the servers of various types as various types of final test use software.
The method comprises obtaining CPU peak value usage rate, memory occupation peak value and software collapse times corresponding to the primary test software in each type of server, and recording the CPU peak value usage rate, memory occupation peak value and software collapse times corresponding to the primary test software in each type of server as、AndQ represents the number corresponding to each type of server, q is a positive integer, w represents the number corresponding to each type of primary test software, w is a positive integer, and the number is substituted into a calculation formula:
Obtaining the evaluation value of the influence hardware performance corresponding to the primary test software in each type of serverWherein, the method comprises the steps of, wherein,、、Respectively setting standard CPU peak value usage rate and standard memory occupation peak value corresponding to the usage software, the number of times of standard software breakdown,、、Respectively set weight factors corresponding to the CPU peak value using rate, the memory occupied peak value and the software crash times,、、Respectively set regulating factors corresponding to the CPU peak value using rate, the memory occupied peak value and the software breakdown times of the software,Representing natural constants.
Preferably, the method comprises the steps of inputting the corresponding final test software of each type of server into the hardware performance test platform of each type of server for performance test, wherein the specific test process comprises the following steps of C1, installing each hardware in each type of server in the current period of a target production enterprise into each corresponding test platform, and installing the corresponding final test software of each type on the test platform of each type of server.
And C2, performing monomer operation test on each hardware in each type of server, namely, only running one type of final test software on the test server by a test platform running each type of server at each time, and further collecting monomer performance data corresponding to monomer operation of each hardware in each type of server in each type of final test software by a server hardware performance monitoring tool at each collection time point, wherein the monomer performance data comprises monomer response time, monomer power consumption and monomer heat dissipation efficiency.
And C3, after the single operation test of each hardware in each type of server is completed, carrying out the combined operation test on each hardware in each type of server, and carrying out the combined operation test of each type of final test software according to the occurrence of each type of final test software, namely, only operating one type of final test software on the test server by a test platform for operating each type of server each time, and further collecting corresponding combined performance data of each hardware in each type of server in the combined operation of each type of final test software by a server hardware performance monitoring tool at each collection time point, wherein the combined performance data comprises combined response duration, combined power consumption and combined heat dissipation efficiency.
The specific analysis process is that monomer response time, monomer power consumption and monomer heat dissipation efficiency corresponding to the running of each hardware in each type of final test software monomer in each type of server at each collection time point are obtained, monomer response time, monomer power consumption and monomer heat dissipation efficiency corresponding to the running of each hardware in each type of final test software monomer in each collection time point are input into a monomer performance evaluation analysis model, and monomer performance evaluation values corresponding to the running of each hardware in each type of final test software monomer in each collection time point are output.
The method comprises the steps of acquiring a combined response time, combined power consumption and combined heat radiation efficiency corresponding to the combined operation of all hardware in all types of final test software in all types of servers at all collection time points, inputting the combined response time, the combined power consumption and the combined heat radiation efficiency corresponding to the combined operation of all hardware in all types of final test software in all types of servers at all collection time points into a combined performance evaluation analysis model, and outputting the combined performance evaluation value corresponding to the combined operation of all hardware in all types of final test software in all types of servers at all collection time points.
Preferably, the specific analysis process includes inputting a single performance evaluation value corresponding to single operation of each hardware in each type of server at each collection time point in each type of final test software and a combined performance evaluation value corresponding to combined operation of each type of final test software into a performance evaluation value evaluation model, and outputting a performance evaluation value result corresponding to each hardware in each type of server;
And the performance evaluation value results comprise values of 1 and-1, when the performance evaluation value result corresponding to a certain hardware in a certain type of server is 1, the performance of the hardware in the certain type of server is qualified, otherwise, when the performance evaluation value result corresponding to a certain hardware in a certain type of server is-1, the performance of the hardware in the certain type of server is unqualified, and if the performance of the certain hardware in the certain type of server is unqualified, early warning feedback is carried out.
Preferably, the expression of the performance evaluation value evaluation model is:
In which, in the process,Representing the performance evaluation value result corresponding to the r-th hardware in the q-th type server,Representing the performance evaluation value corresponding to the r-th hardware in the q-th type server,Q represents the number corresponding to each type of server, q is a positive integer, r represents the number corresponding to each hardware, and r is a positive integer.
The single performance evaluation value corresponding to the single operation of each hardware in each type of server at each collection time point in each type of final test software and the combined performance evaluation value corresponding to the combined operation of each type of final test software are recorded asAndSubstituting into a calculation formula:
obtaining performance evaluation values corresponding to the hardware in each type of serverWherein i represents the corresponding number of each acquisition time point, i is a positive integer, f represents the corresponding number of each type of final test software monomer operation, f is a positive integer, h represents the corresponding number of each type of final test software combination operation, h is a positive integer,、The set standard monomer performance evaluation values of the hardware running in the final test using software monomer and the standard combination performance evaluation values of the final test using software combination running correspond to each other,、The weight factors corresponding to the monomer performance evaluation values of the set hardware when the final test is run by using the software monomer and the weight factors corresponding to the combined performance evaluation values when the final test is run by using the software combination are respectively set.
Preferably, if the performance of a certain hardware in a certain type of server is not qualified, early warning feedback is performed, and the specific early warning process is that when the performance evaluation result of the certain hardware in the certain type of server is detected to be unqualified, the system immediately starts an early warning mechanism, and at the moment, early warning information is sent to related personnel responsible for operation and maintenance work of the server, wherein the early warning information comprises the type of the server and the name of the hardware.
The method and the device have the beneficial effects that 1, in the embodiment of the invention, the initial test software is determined firstly by collecting, classifying and screening the user software in each type of server in the history period of the target production enterprise, and then the final test software is further screened out. The method ensures that the test software is closely related to the actual use condition of the server, avoids the test of irrelevant software, and can more accurately reflect the hardware performance of the server in a real service scene. For example, in an enterprise, if a certain database server mainly runs a database management software of a specific version, the database management software can be screened out for targeted test by the method, instead of performing generalized test on all possible software, so that the fit between performance test and actual use condition is improved. This enables a comprehensive assessment of the hardware's performance in complex business scenarios, as in an actual server operating environment, multiple software types tend to run simultaneously and interact with each other. For example, on a server running Web server software, database software and middleware at the same time, the comprehensive evaluation can accurately detect the performance bottleneck of hardware when processing the cooperative work of multiple software, and the accuracy of hardware performance evaluation is improved.
2. In the embodiment of the invention, when the final test software is screened, the key indexes such as CPU peak value utilization rate, memory occupation peak value, software collapse times and the like corresponding to the software are obtained, and the key indexes are substituted into a calculation formula to obtain the evaluation value affecting the hardware performance. The quantification mode carries out clear numerical representation on the influence degree of the fuzzy software on hardware, so that the testing process is more scientific. For example, through setting the weight factors and the adjustment factors, according to the importance degree of enterprises on different factors, if the requirement on the stability of a server is high, the weight of the number of software crashes can be increased, the influence of each software on the hardware performance can be accurately measured, a reliable basis is provided for subsequent testing and evaluation, and the collected performance data of the hardware under different running conditions, such as monomer response time, monomer power consumption, monomer heat dissipation efficiency, combined response time, combined power consumption, combined heat dissipation efficiency and the like, are converted into quantifiable evaluation values by using a monomer performance evaluation analysis model and a performance evaluation value evaluation model. By comparing the hardware performance with the set threshold value, whether the hardware performance is qualified or not can be accurately judged. The model-based method makes the assessment process standardized and objectified, reduces subjectivity and error of artificial judgment, and provides scientific basis for performance judgment of server hardware.
3. According to the embodiment of the invention, the actual performance condition of the server hardware under different service loads can be known through accurate hardware performance test. Enterprises can reasonably configure server resources according to the information, and excessive configuration or insufficient configuration of hardware resources are avoided. For example, if the hardware of a server has a larger margin under the current service load, the service of some other servers can be considered to be migrated to improve the utilization rate of hardware resources, otherwise, if the performance of the hardware is close to the limit, the hardware can be planned to be upgraded in advance to ensure the smooth operation of the service, and meanwhile, the complete flow from software acquisition, screening to hardware performance test, evaluation and early warning is definitely specified, and each step has a detailed operation method and basis. This allows for repeatability of the testing process, with similar results being obtained by different testers in the same way. For example, newly-entered operation and maintenance personnel can accurately test the hardware performance of the server according to the method, personal experience is not needed, and stability and reliability of test quality are ensured.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention is shown in fig. 1, and the method comprises the steps of firstly, obtaining type use software, namely obtaining use software corresponding to each user in each type of server in a historical period of a target production enterprise, so as to perform primary test screening on the use software corresponding to each user in each type of server, and further screening to obtain primary test use software corresponding to each type of server.
In a specific embodiment, the first test screening is performed on each piece of usage software corresponding to each user in each type of server, and the specific screening process includes that A1, each piece of usage software corresponding to each user in each type of server in a history period is collected from a server management system of a target production enterprise, each piece of usage software corresponding to each user in each collected type of server is classified, each piece of usage software corresponding to each user in each type of server is obtained, and then the usage frequency of each piece of usage software corresponding to each type of server is counted.
It should be noted that the types of software used include testing operating system, testing driver version and application program, extracting software data used by various types of server users from server management system of target production enterprise, including information such as operating system, driver version and application program, cleaning the collected data, removing repeated, incomplete or irrelevant information, ensuring accuracy and consistency of data, and classifying the data into several categories according to the type of software, namely, identifying and classifying different operating systems such as Windows, linux, unix. And the drivers are used for identifying and classifying various drivers, such as network drivers, display card drivers and the like, and recording version information of the drivers. Application program, identifying and classifying various application programs such as office software, databases, development tools and the like.
A2, comparing the use frequency of each type of use software corresponding to each type of server with the use frequency corresponding to the set primary test use software, if the use frequency of a certain type of use software corresponding to a certain type of server is larger than or equal to the use frequency corresponding to the set primary test use software, indicating that the type of use software corresponding to the certain type of server has test representativeness, and if the use frequency of a certain type of use software corresponding to the certain type of server is smaller than the use frequency corresponding to the set primary test use software, indicating that the type of use software corresponding to the certain type of server does not have test representativeness, and marking each type of use software corresponding to the certain type of server with test representativeness as each type of primary test use software.
And step two, final test screening, namely, according to the primary test software of each type corresponding to each type of server, performing final screening on the primary test software of each type corresponding to each type of server, and further screening to obtain the final test software of each type corresponding to each type of server.
In a specific embodiment, the final screening is performed on the primary test software of each type corresponding to each type of server, where the specific screening process includes B1, obtaining evaluation values of the performance of the influencing hardware corresponding to the primary test software of each type in each type of server, and arranging the evaluation values of the performance of the influencing hardware corresponding to the primary test software of each type in each type of server in order from big to small.
And B2, marking ten types of primary test use software with highest influence on the hardware performance evaluation value in the servers of various types as various types of final test use software.
In another specific embodiment, the method comprises the steps of obtaining CPU peak value usage, memory occupation peak value and software crash times corresponding to each type of primary test usage software in each type of server, and recording the CPU peak value usage, memory occupation peak value and software crash times corresponding to each type of primary test usage software in each type of server as respectively、AndQ represents the number corresponding to each type of server, q is a positive integer, w represents the number corresponding to each type of primary test software, w is a positive integer, and the number is substituted into a calculation formula:
Obtaining the evaluation value of the influence hardware performance corresponding to the primary test software in each type of serverWherein, the method comprises the steps of, wherein,、、Respectively setting standard CPU peak value usage rate and standard memory occupation peak value corresponding to the usage software, the number of times of standard software breakdown,、、Respectively set weight factors corresponding to the CPU peak value using rate, the memory occupied peak value and the software crash times,、、Respectively set regulating factors corresponding to the CPU peak value using rate, the memory occupied peak value and the software breakdown times of the software,Representing natural constants.
It should be noted that the number of the substrates,、、Are all larger than 0 and smaller than 1,、、Are all greater than 0 and less than 1.
It should also be noted that the summary of the experimental data and the large amount of research data is used. And setting the peak value use rate, the peak value occupied by the standard memory and the number of times of standard software crash corresponding to the use software according to the professional institutions and research institutions, and simultaneously, carrying out discussion and confirmation with the industry organizations or the professional institutions according to the professional knowledge and the research basis of the field expert. The expert sets the weight factor corresponding to the peak value using rate of the CPU using software, the weight factor corresponding to the peak value occupied by the memory, the weight factor corresponding to the number of software crashes according to own experience and knowledge, and sets the adjusting factor corresponding to the peak value using rate of the CPU using software, the adjusting factor corresponding to the peak value occupied by the memory and the adjusting factor corresponding to the number of software crashes.
According to the embodiment of the invention, the initial test software is determined firstly by collecting, classifying and screening the user software in each type of server in the history period of the target production enterprise, and then the final test software is further screened out. The method ensures that the test software is closely related to the actual use condition of the server, avoids the test of irrelevant software, and can more accurately reflect the hardware performance of the server in a real service scene. For example, in an enterprise, if a certain database server mainly runs a database management software of a specific version, the database management software can be screened out for targeted test by the method, instead of performing generalized test on all possible software, so that the fit between performance test and actual use condition is improved. This enables a comprehensive assessment of the hardware's performance in complex business scenarios, as in an actual server operating environment, multiple software types tend to run simultaneously and interact with each other. For example, on a server running Web server software, database software and middleware at the same time, the comprehensive evaluation can accurately detect the performance bottleneck of hardware when processing the cooperative work of multiple software, and the accuracy of hardware performance evaluation is improved.
Inputting various types of final test software corresponding to various types of servers into various types of server hardware performance test platforms, so as to perform performance test on various hardware in various types of servers in the current period of a target production enterprise, setting a plurality of acquisition time points in the test process, and analyzing monomer performance evaluation values corresponding to the single operation of various types of final test software and combined performance evaluation values corresponding to the combined operation of various types of final test software of various types of hardware in various types of servers at various acquisition time points.
It should be noted that, the running of the final test software alone refers to a running mode of only one type of final test software on the test server at a time in the performance test process of the server hardware. The running mode is mainly used for isolating the influence of different software on hardware and independently evaluating the performance of each software on the hardware of the server.
The final test software is a test operation mode of simultaneously operating a plurality of software according to a combination mode of various types of final test software in an actual service scene in the process of testing the hardware performance of the server. The running mode is used for simulating the working state of the server in the real service environment and evaluating the comprehensive performance of hardware when coping with the cooperative work of various software.
In a specific embodiment, the performance test is performed by inputting the final test software of each type corresponding to each type of server into the hardware performance test platform of each type of server, and the specific test process is that C1, each hardware in each type of server in the current period of the target production enterprise is installed into each corresponding test platform, and the corresponding final test software of each type is installed on the test platform of each type of server.
And C2, performing monomer operation test on each hardware in each type of server, namely, only running one type of final test software on the test server by a test platform running each type of server at each time, and further collecting monomer performance data corresponding to monomer operation of each hardware in each type of server in each type of final test software by a server hardware performance monitoring tool at each collection time point, wherein the monomer performance data comprises monomer response time, monomer power consumption and monomer heat dissipation efficiency.
And C3, after the single operation test of each hardware in each type of server is completed, carrying out the combined operation test on each hardware in each type of server, and carrying out the combined operation test of each type of final test software according to the occurrence of each type of final test software, namely, only operating one type of final test software on the test server by a test platform for operating each type of server each time, and further collecting corresponding combined performance data of each hardware in each type of server in the combined operation of each type of final test software by a server hardware performance monitoring tool at each collection time point, wherein the combined performance data comprises combined response duration, combined power consumption and combined heat dissipation efficiency.
It should be noted that, appropriate monitoring points are set at the software code and hardware interface level, and for software, a timestamp mark is inserted at the code position where the task request is sent, and a timestamp mark is also inserted at the receiving code position where the hardware completes the task return result. The time stamps can be realized through a log recording function of software or a special performance monitoring tool, so that single response time and combined response time are obtained, hardware power consumption data are obtained through a power meter connected to a server power line or a power consumption monitoring chip of a server main board, so that single power consumption and combined power consumption are obtained, the starting time of all software running simultaneously to send out task requests and the ending time of all tasks finishing to return results are recorded by using the monitoring tool, and the difference value of the two is the combined response time. Considering task association, more accurate data is obtained by simulating different load combinations and measuring for multiple times, and the total power consumption is directly measured through a power monitoring function of a server power supply distribution unit or an external power meter. The estimated power consumption values of all the hardware components can be read by a software tool and added to obtain the total power consumption. And analyzing the power consumption change conditions under different software combinations and loads, integrating the data of each hardware temperature sensor, and calculating the average temperature rise value of the hardware. The total energy consumption of the heat dissipation system is determined, and heat transfer and environmental factors inside the case are considered. According to the law of conservation of energy, the ratio of heat taken away by the heat dissipation system to the total heat generated by hardware is calculated, and then the single heat dissipation efficiency and the combined heat dissipation efficiency are obtained.
In another specific embodiment, the method for analyzing the monomer performance evaluation value corresponding to the monomer operation of each hardware in each type of final test software in each type of server at each collection time point comprises the steps of obtaining monomer response time, monomer power consumption and monomer heat dissipation efficiency corresponding to the monomer operation of each hardware in each type of final test software in each collection time point in each type of server, inputting the monomer response time, the monomer power consumption and the monomer heat dissipation efficiency corresponding to the monomer operation of each hardware in each type of final test software in each collection time point in a monomer performance evaluation analysis model, and outputting the monomer performance evaluation value corresponding to the monomer operation of each hardware in each type of server in each collection time point in each type of final test software.
The analysis process of the monomer performance evaluation value corresponding to the monomer operation of each hardware in each type of server at each collection time point in each type of final test software comprises the steps of normalizing the monomer response time length, the monomer power consumption and the monomer heat dissipation efficiency corresponding to the monomer operation of each hardware in each type of final test software at each collection time point in each type of final test software, and respectively recording the monomer response time length, the monomer power consumption and the monomer heat dissipation efficiency corresponding to the monomer operation of each hardware in each type of server at each collection time point after the treatment as、AndSubstituting into an analysis formula:
obtaining monomer performance evaluation values corresponding to the running of each hardware in each type of servers at each acquisition time point in each type of final test software monomer,、、The method comprises the steps of respectively setting a weight coefficient corresponding to the response time length of a monomer running software monomer for final testing, a weight coefficient corresponding to the power consumption of the monomer and a weight coefficient corresponding to the heat dissipation efficiency of the monomer, wherein i represents a number corresponding to each acquisition time point, i is a positive integer, f represents a number corresponding to each type of the monomer running software monomer for final testing, f is a positive integer, q represents a number corresponding to each type of server, q is a positive integer, r represents a number corresponding to each hardware, and r is a positive integer.
It should be noted that the number of the substrates,、、Are all greater than 0 and less than 1.
It should also be noted that the discussion and confirmation is based on the expertise and study of the field specialist and is also discussed and confirmed with the industry organization or professional institution. And setting a weight coefficient corresponding to the response time of the monomer running by the final test software monomer, a weight coefficient corresponding to the power consumption of the monomer and a weight coefficient corresponding to the heat dissipation efficiency of the monomer by an expert according to own experience and knowledge.
In a specific embodiment, the method for analyzing the combined performance evaluation value corresponding to the combined operation of the hardware in the various types of servers at each collection time point comprises the steps of obtaining the combined response time, the combined power consumption and the combined heat dissipation efficiency corresponding to the combined operation of the hardware in the various types of servers at each collection time point, inputting the combined response time, the combined power consumption and the combined heat dissipation efficiency corresponding to the combined operation of the hardware in the various types of servers at each collection time point into a combined performance evaluation analysis model, and outputting the combined performance evaluation value corresponding to the combined operation of the hardware in the various types of servers at each collection time point.
The analysis process of the combined performance evaluation value corresponding to the operation of each hardware in each type of server at each collection time point in each type of final test software monomer comprises the steps of normalizing the combined response time length, the combined power consumption and the combined heat dissipation efficiency corresponding to the operation of each hardware in each type of final test software monomer at each collection time point in each type of server, and respectively recording the combined response time length, the combined power consumption and the combined heat dissipation efficiency corresponding to the operation of each hardware in each type of server at each collection time point in each type of final test software monomer after the normalization、AndSubstituting into an analysis formula:
Obtaining the corresponding combined performance evaluation value of each hardware in each type of server at each collection time point in each type of final test software monomer operation,、、The method comprises the steps of respectively setting a weight coefficient corresponding to a set final test software monomer operation combination response time length, a weight coefficient corresponding to combination power consumption and a weight coefficient corresponding to combination heat dissipation efficiency, wherein i represents a number corresponding to each acquisition time point, i is a positive integer, h represents a number corresponding to each type of final test software combination operation, h is a positive integer, q represents a number corresponding to each type of server, q is a positive integer, r represents a number corresponding to each hardware, and r is a positive integer.
It should be noted that the number of the substrates,、、Are all greater than 0 and less than 1.
It should also be noted that the discussion and confirmation is based on the expertise and study of the field specialist and is also discussed and confirmed with the industry organization or professional institution. And setting a weight coefficient corresponding to the final test use software monomer operation combination response time, a weight coefficient corresponding to the combination power consumption and a weight coefficient corresponding to the combination heat dissipation efficiency by an expert according to own experience and knowledge.
And fourthly, judging the hardware performance, namely analyzing the performance evaluation value corresponding to each hardware in each type of server according to the monomer performance evaluation value corresponding to the single operation of each hardware in each type of final test software and the combined performance evaluation value corresponding to the combined operation of each type of final test software in each type of final test software at each acquisition time point, and further judging whether the performance of each hardware in each type of server is qualified.
In a specific embodiment, the specific analysis process is that the single performance evaluation value corresponding to the single operation of each hardware in each type server at each collection time point in each type final test software and the combined performance evaluation value corresponding to the combined operation of each type final test software are input into a performance evaluation value evaluation model, and the performance evaluation value result corresponding to each hardware in each type server is output.
And the performance evaluation value results comprise values of 1 and-1, when the performance evaluation value result corresponding to a certain hardware in a certain type of server is 1, the performance of the hardware in the certain type of server is qualified, otherwise, when the performance evaluation value result corresponding to a certain hardware in a certain type of server is-1, the performance of the hardware in the certain type of server is unqualified, and if the performance of the certain hardware in the certain type of server is unqualified, early warning feedback is carried out.
In a specific embodiment, the expression of the performance evaluation value evaluation model is:
In which, in the process,Representing the performance evaluation value result corresponding to the r-th hardware in the q-th type server,Representing the performance evaluation value corresponding to the r-th hardware in the q-th type server,Q represents the number corresponding to each type of server, q is a positive integer, r represents the number corresponding to each hardware, and r is a positive integer.
The single performance evaluation value corresponding to the single operation of each hardware in each type of server at each collection time point in each type of final test software and the combined performance evaluation value corresponding to the combined operation of each type of final test software are recorded asAndSubstituting into a calculation formula:
obtaining performance evaluation values corresponding to the hardware in each type of serverWherein i represents the corresponding number of each acquisition time point, i is a positive integer, f represents the corresponding number of each type of final test software monomer operation, f is a positive integer, h represents the corresponding number of each type of final test software combination operation, h is a positive integer,、The set standard monomer performance evaluation values of the hardware running in the final test using software monomer and the standard combination performance evaluation values of the final test using software combination running correspond to each other,、The weight factors corresponding to the monomer performance evaluation values of the set hardware when the final test is run by using the software monomer and the weight factors corresponding to the combined performance evaluation values when the final test is run by using the software combination are respectively set.
It should be noted that the number of the substrates,、Are all greater than 0 and less than 1.
It should also be noted that the summary of the experimental data and the large amount of research data is used. And setting a standard monomer performance evaluation value corresponding to the hardware running in the final test using software monomer and a standard combination performance evaluation value corresponding to the final test using software combination running according to a professional institution and a research institution, and simultaneously, carrying out discussion and confirmation with an industry organization or the professional institution according to the basis of professional knowledge and research of a field expert. And setting weight factors corresponding to the monomer performance evaluation values of the hardware in the final test using software monomer operation and weight factors corresponding to the combined performance evaluation values in the final test using software combination operation by experts according to own experience and knowledge.
In the embodiment of the invention, when the final test software is screened, the key indexes such as CPU peak value utilization rate, memory occupation peak value, software collapse times and the like corresponding to the software are obtained, and the key indexes are substituted into a calculation formula to obtain the evaluation value affecting the hardware performance. The quantification mode carries out clear numerical representation on the influence degree of the fuzzy software on hardware, so that the testing process is more scientific. For example, through setting the weight factors and the adjustment factors, according to the importance degree of enterprises on different factors, if the requirement on the stability of a server is high, the weight of the number of software crashes can be increased, the influence of each software on the hardware performance can be accurately measured, a reliable basis is provided for subsequent testing and evaluation, and the collected performance data of the hardware under different running conditions, such as monomer response time, monomer power consumption, monomer heat dissipation efficiency, combined response time, combined power consumption, combined heat dissipation efficiency and the like, are converted into quantifiable evaluation values by using a monomer performance evaluation analysis model and a performance evaluation value evaluation model. By comparing the hardware performance with the set threshold value, whether the hardware performance is qualified or not can be accurately judged. The model-based method makes the assessment process standardized and objectified, reduces subjectivity and error of artificial judgment, and provides scientific basis for performance judgment of server hardware.
In a specific embodiment, if the performance of a certain hardware in a certain type of server is not qualified, early warning feedback is performed, and the specific early warning process is that when the performance evaluation result of a certain hardware in a certain type of server is monitored to be displayed as being not qualified, the system immediately starts an early warning mechanism, and at the moment, early warning information is sent to related personnel responsible for operation and maintenance work of the server, wherein the early warning information comprises the server type and the hardware name.
According to the embodiment of the invention, the actual performance condition of the server hardware under different service loads can be known through accurate hardware performance test. Enterprises can reasonably configure server resources according to the information, and excessive configuration or insufficient configuration of hardware resources are avoided. For example, if the hardware of a server has a larger margin under the current service load, the service of some other servers can be considered to be migrated to improve the utilization rate of hardware resources, otherwise, if the performance of the hardware is close to the limit, the hardware can be planned to be upgraded in advance to ensure the smooth operation of the service, and meanwhile, the complete flow from software acquisition, screening to hardware performance test, evaluation and early warning is definitely specified, and each step has a detailed operation method and basis. This allows for repeatability of the testing process, with similar results being obtained by different testers in the same way. For example, newly-entered operation and maintenance personnel can accurately test the hardware performance of the server according to the method, personal experience is not needed, and stability and reliability of test quality are ensured.
The foregoing is merely illustrative and explanatory of the principles of the invention, as various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted by those skilled in the art, without departing from the principles of the invention or beyond the scope of the invention as defined in the description.