CN119201652B - A method for testing server hardware performance - Google Patents

A method for testing server hardware performance

Info

Publication number
CN119201652B
CN119201652B (application CN202411742039.0A)
Authority
CN
China
Prior art keywords
type
server
software
hardware
performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411742039.0A
Other languages
Chinese (zh)
Other versions
CN119201652A (en)
Inventor
马益飞
金理行
张米娜
王晓峰
汪明峰
蔡嘉炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xiaoniu Cloud Information Technology Co ltd
Original Assignee
Hangzhou Xiaoniu Cloud Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xiaoniu Cloud Information Technology Co ltd
Priority to CN202411742039.0A
Publication of CN119201652A
Application granted
Publication of CN119201652B
Status: Active
Anticipated expiration

Abstract

The invention discloses a method for testing the hardware performance of a server, relating to the technical field of hardware performance testing. When screening the final test software, key indicators corresponding to each piece of software, such as CPU peak usage rate, peak memory occupation, and software crash count, are obtained and substituted into a calculation formula to obtain an evaluation value of the software's influence on hardware performance. This quantitative approach gives a clear numerical representation of each software's otherwise fuzzy degree of influence on hardware, making the testing process more scientific. The method screens out the most influential software for targeted testing instead of testing all possible software indiscriminately, thereby improving the fit between the performance test and actual usage conditions.

Description

Method for testing hardware performance of server
Technical Field
The invention relates to the technical field of hardware performance test, in particular to a method for testing the hardware performance of a server.
Background
As the degree of digitization of manufacturing enterprises continues to increase, their business operations depend more and more on the services provided by servers. An enterprise typically operates multiple types of servers, which run a large number of different types of software to support different departments and business processes. In daily operation, server hardware performance directly affects business continuity, efficiency, and user experience, so a method for testing server hardware performance is needed.
Prior-art patent application CN114595132A discloses an automatic server hardware performance detection method and system based on reference test indexes, comprising the following steps: first, a reference test index library is read; the target server's hardware configuration is obtained remotely and automatically; an index test pre-evaluation value is generated; a server performance index detection script is created and executed remotely; the performance index detection results are structured; and the results are combined to generate a server performance detection report. That method automatically tests server performance under a model of reference test indexes, acquires and analyzes index data, automatically generates the detection script, collects server information through the application server's BMC Web API interface, and automatically generates a test report, effectively meeting the requirements of reference-index testing of server hardware performance and realizing rapid automation of the test process.
The above scheme has the following technical problems. 1. The prior art lacks a screening method: the test performs performance testing on all software installed on the server. This makes the test scope too broad, covering many pieces of software that have little impact on server hardware performance or are rarely used in actual business scenarios. For example, small tool software that a user occasionally installs for personal use is also brought into the test scope, generating a large amount of irrelevant data, increasing test time and workload, and reducing test efficiency. Meanwhile, core software frequently used in the enterprise's actual production environment may receive no focus, so the test results cannot accurately reflect server hardware performance in real business scenarios. For example, if the enterprise's core business is based on a specific database management software and related middleware, but the test ignores that software's influence on hardware and instead tests various non-critical software, the resulting hardware performance evaluation deviates considerably from performance during actual business operation.
2. The prior art may only consider hardware performance when a single piece of software runs, neglecting the complex situation in which a server runs multiple kinds of software in combination during actual operation. For example, in an enterprise server environment, Web server software and database software typically work simultaneously; data interaction and resource sharing between them affect hardware performance. If only the hardware performance under independent operation of the Web server software or the database software is tested, hardware bottlenecks that may occur under combined operation, such as memory contention and CPU scheduling conflicts, cannot be discovered.
3. Without a method for quantifying software's influence on hardware performance, the prior art makes it difficult to determine the role of each piece of software in degrading server performance. For example, when a server exhibits a performance problem, it cannot be determined whether the CPU is overloaded by a particular high-load computing software or whether a disk I/O bottleneck is caused by frequently read and written storage software, leaving the direction for optimizing server performance or resolving the problem unclear. Furthermore, there is no software screening mechanism based on quantitative indexes, so software selection for testing may be relatively blind, and the software with the greatest influence on hardware performance cannot be accurately selected for focused testing. As a result, testing resources may be wasted on software with little impact on hardware performance while critical software that may actually cause performance problems is ignored.
Disclosure of Invention
In view of the above technical defects, the invention aims to provide a method for testing the hardware performance of a server.
The method comprises the following steps. Step one: obtain the software used by each user on each type of server of the target production enterprise during a historical period, perform a primary test screening on this software, and thereby obtain the primary test software corresponding to each type of server.
Step two, final test screening: according to the primary test software of each type corresponding to each type of server, perform a final screening on that software, and thereby obtain the final test software of each type corresponding to each type of server.
Step three, hardware performance testing: input the final test software corresponding to each type of server into the hardware performance test platform for that server type, so as to perform performance tests on the hardware in each type of server during the current period of the target production enterprise. Several collection time points are set during the test process, and for each piece of hardware in each type of server, at each collection time point, the single-run performance evaluation value corresponding to each final test software running alone and the combined performance evaluation value corresponding to the final test software running in combination are analyzed.
Step four, hardware performance judgment: according to each hardware's single-run performance evaluation values under standalone operation of each final test software and its combined performance evaluation values under combined operation of the final test software at each collection time point, analyze the overall performance evaluation value corresponding to each piece of hardware in each type of server, and thereby judge whether the performance of each piece of hardware is qualified.
A1: collect the software used by each user on each type of server during the historical period from the server management system of the target production enterprise, classify the collected software to obtain each type of usage software corresponding to each type of server, and then count the usage frequency of each type of usage software on each type of server.
A2: compare the usage frequency of each type of usage software on each type of server with the set usage-frequency threshold for primary test software. If the usage frequency of a certain type of usage software on a certain type of server is greater than or equal to the threshold, that software type is considered representative for testing; if it is smaller than the threshold, it is not. Each type of usage software on each type of server that is representative for testing is marked as a type of primary test software.
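The A1 and A2 steps above can be sketched as a simple frequency screen. This is a minimal illustration, not the patent's implementation; the usage records and the frequency threshold are hypothetical.

```python
from collections import Counter

# Hypothetical usage records collected from the server management system:
# (server_type, software_name) pairs over the history period.
usage_records = [
    ("db-server", "MySQL"), ("db-server", "MySQL"), ("db-server", "MySQL"),
    ("db-server", "NotepadTool"),
    ("web-server", "Nginx"), ("web-server", "Nginx"),
]

FREQ_THRESHOLD = 2  # assumed "set usage frequency for primary test software"

def primary_screen(records, threshold):
    """Count usage frequency per (server type, software) and keep only the
    software whose frequency meets the threshold (step A2)."""
    freq = Counter(records)
    return {
        server: [sw for (s, sw), n in freq.items() if s == server and n >= threshold]
        for server in {s for s, _ in records}
    }

primary = primary_screen(usage_records, FREQ_THRESHOLD)
# "MySQL" (3 uses) and "Nginx" (2 uses) pass; "NotepadTool" (1 use) is dropped.
```

Rarely used tool software thus never enters the test scope, which is the screening problem the background section raises.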
The final screening proceeds as follows. B1: obtain the evaluation value of the influence on hardware performance corresponding to each type of primary test software in each type of server, and arrange these evaluation values in descending order.
B2: mark the ten types of primary test software with the highest hardware-performance influence evaluation values in each type of server as the final test software for that server type.
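The B1 and B2 steps reduce to a sort-and-truncate. A minimal sketch, with made-up evaluation values:

```python
def final_screen(eval_values, top_n=10):
    """B1-B2: sort primary test software by its hardware-influence
    evaluation value (descending) and keep the top ten as final test
    software."""
    ranked = sorted(eval_values.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Fifteen hypothetical primary-test software with ascending evaluation values.
finals = final_screen({f"sw{i}": i * 0.1 for i in range(15)})
# sw14 (highest value) ranks first; sw0..sw4 are screened out.
```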
The evaluation value is obtained as follows: the CPU peak usage rate, peak memory occupation and software crash count corresponding to the w-th type of primary test software in the q-th type of server are obtained and recorded as Cqw, Mqw and Bqw respectively, where q is the number of each server type and w is the number of each primary test software type, both positive integers. These values are substituted into the calculation formula:
Pqw = a1*e^(b1*(Cqw - C')/C') + a2*e^(b2*(Mqw - M')/M') + a3*e^(b3*(Bqw - B')/B')
to obtain the evaluation value Pqw of the influence on hardware performance corresponding to each type of primary test software in each type of server, where C', M' and B' are the set standard CPU peak usage rate, standard peak memory occupation and standard software crash count of the usage software, a1, a2 and a3 are the set weight factors corresponding to the CPU peak usage rate, the peak memory occupation and the software crash count, b1, b2 and b3 are the set adjustment factors corresponding to the same three indicators, and e is the natural constant.
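A hedged numeric sketch of this kind of evaluation formula follows. The function names, the standard values and the weight/adjustment factors are all assumptions for illustration; the patent's formula images are not reproduced in this text, so only the described structure (a weighted combination of exponential deviations from standard values) is mirrored here.

```python
import math

def influence_eval(cpu_peak, mem_peak, crashes,
                   cpu_std=0.6, mem_std=0.7, crash_std=1.0,
                   weights=(0.4, 0.4, 0.2), adjust=(0.5, 0.5, 0.5)):
    """Weighted sum of exponential deviations of each indicator (CPU peak
    usage, peak memory occupation, crash count) from its standard value,
    following the structure described in the text. All defaults are
    illustrative assumptions."""
    devs = ((cpu_peak - cpu_std) / cpu_std,
            (mem_peak - mem_std) / mem_std,
            (crashes - crash_std) / crash_std)
    return sum(a * math.exp(b * d) for a, b, d in zip(weights, adjust, devs))

# Software exceeding every standard scores higher than software meeting them.
high = influence_eval(0.9, 0.9, 3)
low = influence_eval(0.5, 0.6, 0)
```

Raising the crash-count weight, as the "beneficial effects" section suggests for stability-critical servers, directly raises the score of crash-prone software.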
Preferably, the final test software corresponding to each type of server is input into the hardware performance test platform of that server type for performance testing. The specific test process is as follows. C1: install each piece of hardware in each type of server during the current period of the target production enterprise into its corresponding test platform, and install the corresponding final test software on the test platform of each type of server.
C2: perform a single-run test on each piece of hardware in each type of server; that is, the test platform of each server type runs only one type of final test software on the test server at a time. A server hardware performance monitoring tool then collects, at each collection time point, the single-run performance data of each piece of hardware under standalone operation of each final test software. The single-run performance data comprise single-run response duration, single-run power consumption, and single-run heat dissipation efficiency.
C3: after the single-run tests of each piece of hardware are completed, perform combined-run tests; that is, the test platform of each server type runs several types of final test software on the test server simultaneously, according to the combinations in which the final test software actually occur together. The server hardware performance monitoring tool then collects, at each collection time point, the combined performance data of each piece of hardware under combined operation of the final test software. The combined performance data comprise combined response duration, combined power consumption, and combined heat dissipation efficiency.
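The C2/C3 collection loop can be sketched as polling a monitoring tool once per collection time point. The metric reader below is a stand-in; the field names and values are assumptions, not the patent's monitoring interface.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    time_point: int
    mode: str            # "single" or "combined" run
    response_ms: float   # response duration
    power_w: float       # power consumption
    cooling_eff: float   # heat dissipation efficiency

def collect(read_metrics, time_points, mode):
    """Poll the (hypothetical) monitoring tool once per collection time
    point and tag each sample with the run mode."""
    return [Sample(t, mode, *read_metrics(t)) for t in time_points]

# Stand-in for the real server hardware performance monitoring tool.
fake_tool = lambda t: (120.0 + t, 85.0, 0.9)
samples = collect(fake_tool, range(3), "single")
```

The same `collect` call with `mode="combined"` and the multi-software workload running covers C3.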
The single-run performance evaluation values are analyzed as follows: the single-run response duration, power consumption and heat dissipation efficiency of each piece of hardware under standalone operation of each final test software at each collection time point are obtained and input into a single-run performance evaluation analysis model, which outputs the single-run performance evaluation value of each piece of hardware under standalone operation of each final test software at each collection time point.
Likewise, the combined response duration, combined power consumption and combined heat dissipation efficiency of each piece of hardware under combined operation of the final test software at each collection time point are obtained and input into a combined performance evaluation analysis model, which outputs the combined performance evaluation value of each piece of hardware under combined operation of the final test software at each collection time point.
Preferably, the specific analysis process comprises: inputting, for each collection time point, the single-run performance evaluation values of each piece of hardware under standalone operation of each final test software and the combined performance evaluation values under combined operation of the final test software into a performance evaluation value evaluation model, and outputting the performance evaluation result corresponding to each piece of hardware in each type of server.
The performance evaluation result takes the value 1 or -1. When the result for a piece of hardware in a certain type of server is 1, the performance of that hardware is qualified; conversely, when the result is -1, the performance of that hardware is unqualified, and early-warning feedback is triggered.
Preferably, the expression of the performance evaluation value evaluation model is:
Yqr = 1 if Xqr >= X', otherwise Yqr = -1,
where Yqr is the performance evaluation result corresponding to the r-th piece of hardware in the q-th type of server, Xqr is the performance evaluation value corresponding to the r-th piece of hardware in the q-th type of server, X' is the set qualification threshold of the performance evaluation value, q is the number of each server type, r is the number of each piece of hardware, and both are positive integers.
The performance evaluation value is obtained as follows: the single-run performance evaluation value of each piece of hardware under standalone operation of the f-th final test software at the i-th collection time point and the combined performance evaluation value under the h-th combined run of the final test software are recorded as Dqr(i,f) and Zqr(i,h) respectively, and substituted into the calculation formula:
Xqr = u1*(1/(n*F))*ΣiΣf(Dqr(i,f)/D') + u2*(1/(n*H))*ΣiΣh(Zqr(i,h)/Z')
to obtain the performance evaluation value Xqr corresponding to each piece of hardware in each type of server, where i is the number of each collection time point, f is the number of each standalone run of the final test software, h is the number of each combined run, all positive integers, n, F and H are the total numbers of collection time points, standalone runs and combined runs respectively, D' and Z' are the set standard single-run performance evaluation value under standalone operation and the set standard combined performance evaluation value under combined operation, and u1 and u2 are the set weight factors corresponding to the single-run and combined performance evaluation values respectively.
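The aggregation and thresholding described above can be sketched as follows. The standard values, weights and threshold are assumptions, and the formula is a reconstruction of the described structure (normalised averages of single-run and combined-run values, weighted and compared to a threshold), not the patent's exact expression.

```python
def performance_eval(single_vals, combined_vals,
                     d_std=1.0, z_std=1.0, u1=0.5, u2=0.5):
    """Weighted average of single-run and combined-run evaluation values,
    each normalised by its assumed standard value."""
    d = sum(single_vals) / (len(single_vals) * d_std)
    z = sum(combined_vals) / (len(combined_vals) * z_std)
    return u1 * d + u2 * z

def judge(x, threshold=1.0):
    """Map the performance evaluation value to the 1 / -1 qualified flag."""
    return 1 if x >= threshold else -1

x = performance_eval([1.2, 1.1], [1.0, 0.9])
flag = judge(x)
```

Hardware whose combined-run values sag below the standard (e.g. under memory contention) is pulled toward the -1 verdict even if its single-run values look healthy, which is the point of testing both modes.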
Preferably, if the performance of a piece of hardware in a certain type of server is unqualified, early-warning feedback is performed. The specific process is: when the performance evaluation result of a piece of hardware is detected to be unqualified, the system immediately starts the early-warning mechanism and sends early-warning information to the personnel responsible for server operation and maintenance; the early-warning information includes the server type and the hardware name.
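A minimal sketch of that early-warning hook, assuming the judgment step produced a 1 / -1 flag per (server type, hardware) pair; the `notify` callback stands in for whatever alerting channel the enterprise actually uses.

```python
def early_warning(results, notify):
    """Send an alert for every piece of hardware whose result is -1
    (unqualified), carrying the server type and hardware name as the
    text requires."""
    alerts = []
    for (server_type, hardware), flag in results.items():
        if flag == -1:
            msg = f"[WARN] {server_type}: hardware '{hardware}' failed the performance test"
            notify(msg)
            alerts.append(msg)
    return alerts

sent = []
early_warning({("db-server", "CPU"): 1, ("db-server", "disk"): -1}, sent.append)
```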
The beneficial effects are as follows. 1. In the embodiment of the invention, the primary test software is first determined by collecting, classifying and screening the software used on each type of server during the historical period of the target production enterprise, and the final test software is then screened out from it. This ensures that the test software is closely related to the server's actual usage, avoids testing irrelevant software, and allows the hardware performance of the server in real business scenarios to be reflected more accurately. For example, if a database server in an enterprise mainly runs a specific version of database management software, the method screens out that software for targeted testing instead of testing all possible software indiscriminately, improving the fit between the performance test and actual usage. It also enables a comprehensive assessment of hardware performance in complex business scenarios, since in an actual server operating environment multiple types of software often run simultaneously and interact with each other. For example, on a server running Web server software, database software and middleware at the same time, the comprehensive evaluation can accurately detect hardware performance bottlenecks when handling the cooperative work of multiple software, improving the accuracy of the hardware performance evaluation.
2. In the embodiment of the invention, when screening the final test software, the key indicators corresponding to each piece of software, such as CPU peak usage rate, peak memory occupation and software crash count, are obtained and substituted into a calculation formula to obtain the evaluation value of its influence on hardware performance. This quantitative approach gives a clear numerical representation of each software's otherwise fuzzy degree of influence on hardware, making the test process more scientific. For example, by setting the weight factors and adjustment factors according to the importance the enterprise attaches to different factors (if the requirement on server stability is high, the weight of the software crash count can be increased), the influence of each software on hardware performance can be measured accurately, providing a reliable basis for subsequent testing and evaluation. In addition, the collected performance data of the hardware under different running conditions, such as single-run response duration, single-run power consumption, single-run heat dissipation efficiency, combined response duration, combined power consumption and combined heat dissipation efficiency, are converted into quantifiable evaluation values using the single-run performance evaluation analysis model and the performance evaluation value evaluation model. By comparing the hardware performance evaluation value with the set threshold, whether the hardware performance is qualified can be judged accurately. This model-based method standardizes and objectifies the evaluation process, reduces the subjectivity and error of manual judgment, and provides a scientific basis for judging the performance of server hardware.
3. According to the embodiment of the invention, the actual performance of the server hardware under different service loads can be understood through accurate hardware performance testing. Enterprises can configure server resources reasonably based on this information and avoid over- or under-provisioning of hardware. For example, if a server's hardware has a large margin under the current service load, services from other servers can be migrated to it to improve hardware resource utilization; conversely, if the hardware performance is close to its limit, a hardware upgrade can be planned in advance to ensure smooth business operation. At the same time, the complete flow from software collection and screening to hardware performance testing, evaluation and early warning is clearly specified, and each step has a detailed operating method and basis. This makes the test process repeatable, so that different testers obtain similar results in the same way. For example, newly hired operation and maintenance personnel can accurately test server hardware performance by following the method without relying on personal experience, ensuring the stability and reliability of test quality.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the steps of the method of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The embodiment of the invention is shown in FIG. 1. The method comprises: step one, type usage software acquisition, namely obtaining the software used by each user on each type of server of the target production enterprise during a historical period, performing primary test screening on it, and thereby obtaining the primary test software corresponding to each type of server.
In a specific embodiment, the primary test screening of the software used by each user on each type of server proceeds as follows. A1: collect the software used by each user on each type of server during the historical period from the server management system of the target production enterprise, classify the collected software to obtain each type of usage software corresponding to each type of server, and then count the usage frequency of each type of usage software on each type of server.
It should be noted that the types of software used include operating systems, driver versions and application programs. The software usage data of each type of server user is extracted from the server management system of the target production enterprise, including operating system, driver version and application program information. The collected data is cleaned to remove duplicate, incomplete or irrelevant records, ensuring accuracy and consistency, and is then classified into several categories by software type. Operating systems: different operating systems such as Windows, Linux and Unix are identified and classified. Drivers: various drivers, such as network drivers and graphics card drivers, are identified and classified, and their version information is recorded. Application programs: various application programs, such as office software, databases and development tools, are identified and classified.
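The three-way classification described above can be sketched as a keyword lookup. The keyword lists are illustrative assumptions; a real deployment would classify from the management system's structured inventory fields rather than from name matching.

```python
# Hypothetical keyword-based classifier for the three software categories
# named in the text: operating systems, drivers, application programs.
CATEGORIES = {
    "operating_system": ["windows", "linux", "unix"],
    "driver": ["driver"],
    "application": ["office", "database", "mysql", "ide"],
}

def classify(name):
    """Return the first category whose keyword appears in the software
    name, or "other" if none matches."""
    lowered = name.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in lowered for k in keywords):
            return category
    return "other"

os_cat = classify("Linux 5.15")        # "operating_system"
drv_cat = classify("NIC driver v2.3")  # "driver"
app_cat = classify("MySQL 8.0")        # "application"
```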
A2: compare the usage frequency of each type of usage software on each type of server with the set usage-frequency threshold for primary test software. If the usage frequency of a certain type of usage software on a certain type of server is greater than or equal to the threshold, that software type is considered representative for testing; if it is smaller than the threshold, it is not. Each type of usage software on each type of server that is representative for testing is marked as a type of primary test software.
Step two, final test screening: according to the primary test software of each type corresponding to each type of server, perform a final screening on that software, and thereby obtain the final test software of each type corresponding to each type of server.
In a specific embodiment, the final screening of the primary test software of each type corresponding to each type of server proceeds as follows. B1: obtain the evaluation value of the influence on hardware performance corresponding to each type of primary test software in each type of server, and arrange these evaluation values in descending order.
B2: mark the ten types of primary test software with the highest hardware-performance influence evaluation values in each type of server as the final test software for that server type.
In another specific embodiment, the evaluation value is obtained as follows: the CPU peak usage rate, peak memory occupation and software crash count corresponding to the w-th type of primary test software in the q-th type of server are obtained and recorded as Cqw, Mqw and Bqw respectively, where q is the number of each server type and w is the number of each primary test software type, both positive integers. These values are substituted into the calculation formula:
Pqw = a1*e^(b1*(Cqw - C')/C') + a2*e^(b2*(Mqw - M')/M') + a3*e^(b3*(Bqw - B')/B')
to obtain the evaluation value Pqw of the influence on hardware performance corresponding to each type of primary test software in each type of server, where C', M' and B' are the set standard CPU peak usage rate, standard peak memory occupation and standard software crash count of the usage software, a1, a2 and a3 are the set weight factors corresponding to the CPU peak usage rate, the peak memory occupation and the software crash count, b1, b2 and b3 are the set adjustment factors corresponding to the same three indicators, and e is the natural constant.
It should be noted that the weight factors are all greater than 0 and less than 1, and the adjustment factors are all greater than 0 and less than 1.
It should also be noted that the standard values are set by summarizing experimental data and a large amount of research data. The standard CPU peak usage rate, standard peak memory occupation and standard software crash count corresponding to the usage software are set with reference to professional and research institutions, and are discussed and confirmed with industry organizations or professional bodies on the basis of the professional knowledge and research of domain experts. The experts set, according to their experience and knowledge, the weight factors corresponding to the CPU peak usage rate, the peak memory occupation and the software crash count, as well as the corresponding adjustment factors.
According to the embodiment of the invention, the primary test software is first determined by collecting, classifying, and screening the software used in each type of server of the target production enterprise during the history period, and the final test software is then further screened out. This ensures that the test software is closely related to the actual use of the server, avoids testing irrelevant software, and reflects the hardware performance of the server in real business scenarios more accurately. For example, if a database server in an enterprise mainly runs a specific version of database management software, this method screens out that database management software for targeted testing rather than performing generalized tests on all possible software, improving the fit between the performance test and the actual use condition. It also enables a comprehensive assessment of hardware performance in complex business scenarios, since in an actual server operating environment multiple software types often run simultaneously and interact with each other. For example, on a server running web server software, database software, and middleware at the same time, the comprehensive evaluation can accurately detect the performance bottleneck of the hardware when handling the cooperative work of multiple software, improving the accuracy of the hardware performance evaluation.
Thirdly, performance testing: inputting each type of final test software corresponding to each type of server into the hardware performance test platform of that type of server, thereby performing performance tests on each piece of hardware in each type of server of the target production enterprise in the current period; setting a plurality of collection time points during the test; and analyzing, at each collection time point, the standalone performance evaluation value corresponding to each piece of hardware when each type of final test software runs alone, and the combined performance evaluation value corresponding to each piece of hardware when the types of final test software run in combination.
It should be noted that running the final test software alone refers to an operating mode in which, during the server hardware performance test, only one type of final test software runs on the test server at a time. This mode is mainly used to isolate the influence of different software on the hardware and to independently evaluate the performance of the server hardware under each piece of software.
Running the final test software in combination refers to a test operating mode in which, during the server hardware performance test, multiple pieces of software run simultaneously according to the combinations in which the types of final test software appear in the actual business scenario. This mode is used to simulate the working state of the server in the real business environment and to evaluate the comprehensive performance of the hardware when coping with the cooperative work of multiple software.
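The two operating modes described above can be sketched as a small test driver. Everything below (the function names, the choice of pairwise combinations as the combined mode, and the `run_fn` placeholder for the platform call) is an illustrative assumption rather than the platform's actual interface:

```python
from itertools import combinations

def run_tests(software_list, run_fn):
    """Sketch of the two test modes: first each piece of final test
    software alone (isolating its effect on the hardware), then software
    together in combinations (simulating the real business environment).
    `run_fn` stands in for the platform call that executes a workload
    set on the test server and returns its performance data."""
    single = {s: run_fn([s]) for s in software_list}           # one at a time
    combined = {combo: run_fn(list(combo))                     # e.g. pairwise mixes
                for combo in combinations(software_list, 2)}
    return single, combined
```

In practice the combined mode would follow the combinations actually observed in the business scenario rather than all pairs; pairwise mixes are used here only to keep the sketch self-contained.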
In a specific embodiment, the performance test is performed by inputting each type of final test software corresponding to each type of server into the hardware performance test platform of that type of server. The specific test process is as follows: C1, installing each piece of hardware in each type of server of the target production enterprise in the current period into the corresponding test platform, and installing the corresponding types of final test software on the test platform of each type of server.
C2, performing a standalone operation test on each piece of hardware in each type of server: the test platform of each type of server runs only one type of final test software on the test server at a time, and a server hardware performance monitoring tool collects, at each collection time point, the standalone performance data corresponding to each piece of hardware when each type of final test software runs alone, the standalone performance data including the standalone response time, the standalone power consumption, and the standalone heat-dissipation efficiency.
C3, after the standalone operation test of each piece of hardware in each type of server is completed, performing a combined operation test on each piece of hardware in each type of server: the test platform of each type of server runs multiple types of final test software simultaneously on the test server, according to the combinations in which the types of final test software appear together, and the server hardware performance monitoring tool collects, at each collection time point, the combined performance data corresponding to each piece of hardware when the types of final test software run in combination, the combined performance data including the combined response time, the combined power consumption, and the combined heat-dissipation efficiency.
It should be noted that appropriate monitoring points are set at the software code and hardware interface level. For the software, a timestamp mark is inserted at the code position where a task request is sent, and another at the code position where the result returned by the hardware is received; these timestamps can be recorded through the software's logging function or a dedicated performance monitoring tool, yielding the standalone response time and the combined response time. Hardware power consumption data are obtained through a power meter connected to the server power line or through the power consumption monitoring chip on the server mainboard, yielding the standalone power consumption and the combined power consumption. For combined runs, the monitoring tool records the start time at which all simultaneously running software issues task requests and the end time at which all tasks return results; the difference between the two is the combined response time. Considering task association, more accurate data are obtained by simulating different load combinations and measuring multiple times. The total power consumption is measured directly through the power monitoring function of the server power distribution unit or an external power meter; alternatively, the estimated power consumption values of the hardware components can be read by a software tool and summed to obtain the total power consumption, and the power consumption variation under different software combinations and loads is analyzed. The data from the hardware temperature sensors are integrated to calculate the average temperature rise of the hardware, the total energy consumption of the cooling system is determined, and heat transfer inside the chassis and environmental factors are taken into account.
According to the law of conservation of energy, the ratio of the heat carried away by the cooling system to the total heat generated by the hardware is calculated, yielding the standalone heat-dissipation efficiency and the combined heat-dissipation efficiency.
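The conservation-of-energy reading above reduces to a simple ratio. A minimal sketch (the function names and units are illustrative assumptions; the text does not name a concrete computation routine):

```python
def heat_dissipation_efficiency(heat_removed_j, total_heat_j):
    """Ratio of the heat carried away by the cooling system to the total
    heat generated by the hardware, per the conservation-of-energy
    reading in the text.  Both quantities are in joules."""
    if total_heat_j <= 0:
        raise ValueError("total generated heat must be positive")
    return heat_removed_j / total_heat_j

def total_heat_from_power(avg_power_w, duration_s):
    """Estimate of total generated heat over the measurement window,
    treating consumed electrical power as dissipated heat."""
    return avg_power_w * duration_s
```

For example, a cooling system that removes 80 J while the hardware generates 100 J has an efficiency of 0.8; the same ratio applies whether the run is standalone or combined.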
In another specific embodiment, the method for analyzing the standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone comprises: obtaining the standalone response time, the standalone power consumption, and the standalone heat-dissipation efficiency corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone; inputting these into a standalone performance evaluation analysis model; and outputting the standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone.
The analysis process of the standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point comprises: normalizing the standalone response time, the standalone power consumption, and the standalone heat-dissipation efficiency corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone, and substituting the normalized values into an analysis formula:
obtaining the standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone, wherein the weight coefficient corresponding to the standalone response time, the weight coefficient corresponding to the standalone power consumption, and the weight coefficient corresponding to the standalone heat-dissipation efficiency are set values; i denotes the number of each collection time point (i being a positive integer); f denotes the number of each type of final test software running alone (f being a positive integer); q denotes the number of each type of server (q being a positive integer); and r denotes the number of each piece of hardware (r being a positive integer).
It should be noted that the weight coefficients are all greater than 0 and less than 1.
It should also be noted that the weight coefficient corresponding to the standalone response time, the weight coefficient corresponding to the standalone power consumption, and the weight coefficient corresponding to the standalone heat-dissipation efficiency are set by experts according to their own experience and knowledge, on the basis of the expertise and research of experts in the field, and are discussed and confirmed with industry organizations or professional institutions.
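The standalone analysis formula is likewise an image in the original and is not reproduced here. A hedged sketch, assuming min-max normalization and the convention that lower response time, lower power consumption, and higher heat-dissipation efficiency all raise the score (the ranges, default weights, and sign convention below are assumptions, not the patented formula):

```python
def normalize(value, lo, hi):
    """Min-max normalization onto [0, 1]."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def single_run_score(resp_s, power_w, heat_eff,
                     resp_range, power_range, eff_range,
                     weights=(0.35, 0.35, 0.30)):
    """Hypothetical standalone performance evaluation value: response
    time and power consumption count against the score (hence 1 minus
    the normalized value), heat-dissipation efficiency counts for it.
    The weight coefficients lie in (0, 1) as the text states."""
    a1, a2, a3 = weights
    return (a1 * (1 - normalize(resp_s, *resp_range))
            + a2 * (1 - normalize(power_w, *power_range))
            + a3 * normalize(heat_eff, *eff_range))
```

Under this reading the best possible observation (fastest response, lowest power, highest efficiency) scores the sum of the weights, and the worst scores 0; the combined performance evaluation value of the following paragraphs would be computed the same way from the combined-run data.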
In a specific embodiment, the method for analyzing the combined performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when the types of final test software run in combination comprises: obtaining the combined response time, the combined power consumption, and the combined heat-dissipation efficiency corresponding to each piece of hardware in each type of server at each collection time point when the types of final test software run in combination; inputting these into a combined performance evaluation analysis model; and outputting the combined performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when the types of final test software run in combination.
The analysis process of the combined performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point comprises: normalizing the combined response time, the combined power consumption, and the combined heat-dissipation efficiency corresponding to each piece of hardware in each type of server at each collection time point when the types of final test software run in combination, and substituting the normalized values into an analysis formula:
obtaining the combined performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when the types of final test software run in combination, wherein the weight coefficient corresponding to the combined response time, the weight coefficient corresponding to the combined power consumption, and the weight coefficient corresponding to the combined heat-dissipation efficiency are set values; i denotes the number of each collection time point (i being a positive integer); h denotes the number of each combination of final test software (h being a positive integer); q denotes the number of each type of server (q being a positive integer); and r denotes the number of each piece of hardware (r being a positive integer).
It should be noted that the weight coefficients are all greater than 0 and less than 1.
It should also be noted that the weight coefficient corresponding to the combined response time, the weight coefficient corresponding to the combined power consumption, and the weight coefficient corresponding to the combined heat-dissipation efficiency are set by experts according to their own experience and knowledge, on the basis of the expertise and research of experts in the field, and are discussed and confirmed with industry organizations or professional institutions.
Fourthly, hardware performance judgment: analyzing the performance evaluation value corresponding to each piece of hardware in each type of server according to the standalone performance evaluation value corresponding to each piece of hardware at each collection time point when each type of final test software runs alone and the combined performance evaluation value corresponding to each piece of hardware when the types of final test software run in combination, and thereby judging whether the performance of each piece of hardware in each type of server is qualified.
In a specific embodiment, the specific analysis process is as follows: the standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone and the combined performance evaluation value corresponding to each piece of hardware when the types of final test software run in combination are input into a performance evaluation value judgment model, and the performance evaluation result corresponding to each piece of hardware in each type of server is output.
The performance evaluation results take the values 1 and -1. When the performance evaluation result corresponding to a piece of hardware in a type of server is 1, the performance of that hardware is qualified; conversely, when the result is -1, the performance of that hardware is unqualified, and early-warning feedback is issued.
In a specific embodiment, the expression of the performance evaluation value judgment model is:
wherein the performance evaluation result corresponding to the r-th piece of hardware in the q-th type of server is determined from the performance evaluation value corresponding to that hardware; q denotes the number of each type of server (q being a positive integer), and r denotes the number of each piece of hardware (r being a positive integer).
The standalone performance evaluation value corresponding to each piece of hardware in each type of server at each collection time point when each type of final test software runs alone and the combined performance evaluation value corresponding to each piece of hardware when the types of final test software run in combination are substituted into a calculation formula:
obtaining the performance evaluation value corresponding to each piece of hardware in each type of server, wherein i denotes the number of each collection time point (i being a positive integer); f denotes the number of each type of final test software running alone (f being a positive integer); h denotes the number of each combination of final test software (h being a positive integer); the standard standalone performance evaluation value of the hardware when the final test software runs alone and the standard combined performance evaluation value when the final test software runs in combination are set values; and the weight factor corresponding to the standalone performance evaluation value and the weight factor corresponding to the combined performance evaluation value are set values.
It should be noted that the weight factors are all greater than 0 and less than 1.
It should also be noted that the standard standalone performance evaluation value of the hardware when the final test software runs alone and the standard combined performance evaluation value when the final test software runs in combination are set by professional and research institutions on the basis of summarized experimental data and a large body of research data, and are discussed and confirmed with industry organizations or professional institutions on the basis of the expertise and research of experts in the field. The weight factor corresponding to the standalone performance evaluation value and the weight factor corresponding to the combined performance evaluation value are set by experts according to their own experience and knowledge.
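The aggregation formula and the judgment model are images in the original. One hedged reading, assuming the performance evaluation value is a weighted combination of the mean standalone and combined evaluation values relative to their standard values, followed by the 1 / -1 threshold test described in the text (the names, defaults, and exact aggregation below are assumptions):

```python
def overall_performance(single_evals, combined_evals,
                        std_single, std_combined,
                        w_single=0.5, w_combined=0.5):
    """Hypothetical aggregation: average the standalone and combined
    performance evaluation values over all collection time points and
    software, weight each mean relative to its standard value, and sum.
    The weight factors lie in (0, 1) as the text states."""
    mean_s = sum(single_evals) / len(single_evals)
    mean_c = sum(combined_evals) / len(combined_evals)
    return w_single * mean_s / std_single + w_combined * mean_c / std_combined

def qualify(perf_value, threshold=1.0):
    """Threshold model described in the text: 1 = qualified, -1 = not."""
    return 1 if perf_value >= threshold else -1
```

Under this reading, hardware that meets its standards in both run modes scores exactly the sum of the weight factors and is judged qualified; falling below standard in either mode pulls the value under the threshold and triggers the -1 result.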
In the embodiment of the invention, when the final test software is screened, key indicators such as the peak CPU usage rate, the peak memory occupancy, and the software crash count corresponding to each piece of software are obtained and substituted into a calculation formula to obtain the evaluation value of the influence on hardware performance. This quantification gives a clear numerical representation of the otherwise fuzzy degree to which software affects hardware, making the test process more scientific. For example, through the setting of the weight factors and adjustment factors, the weights can follow the importance an enterprise attaches to different factors: if the requirement on server stability is high, the weight of the software crash count can be increased, so that the influence of each piece of software on hardware performance is accurately measured and a reliable basis is provided for subsequent testing and evaluation. The collected performance data of the hardware under different operating conditions, such as the standalone response time, standalone power consumption, standalone heat-dissipation efficiency, combined response time, combined power consumption, and combined heat-dissipation efficiency, are converted into quantifiable evaluation values by the standalone and combined performance evaluation analysis models and the performance evaluation value judgment model. By comparing the performance evaluation value with the set threshold, whether the hardware performance is qualified can be accurately judged. This model-based method standardizes and objectifies the evaluation process, reduces the subjectivity and error of human judgment, and provides a scientific basis for judging the performance of server hardware.
In a specific embodiment, if the performance of a piece of hardware in a type of server is unqualified, early-warning feedback is issued. The specific early-warning process is as follows: when the performance evaluation result of a piece of hardware in a type of server is found to be unqualified, the system immediately starts the early-warning mechanism and sends early-warning information to the personnel responsible for server operation and maintenance, the early-warning information including the server type and the hardware name.
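The early-warning step above can be sketched as follows; the payload fields and the `notify` callback are illustrative assumptions, with only the server type and hardware name taken from the text:

```python
def build_warning(server_type, hardware_name):
    """Early-warning payload sent to operation and maintenance staff
    when a hardware item is judged unqualified (result == -1)."""
    return {"server_type": server_type,
            "hardware": hardware_name,
            "message": f"Hardware '{hardware_name}' on server type "
                       f"'{server_type}' failed the performance test."}

def check_and_warn(results, notify):
    """results maps (server_type, hardware_name) to 1 or -1; `notify`
    stands in for the actual alert channel (mail, SMS, dashboard)."""
    for (stype, hw), verdict in results.items():
        if verdict == -1:
            notify(build_warning(stype, hw))
```

Only unqualified items trigger a notification; qualified hardware (result 1) produces no alert, matching the judgment model above.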
According to the embodiment of the invention, accurate hardware performance testing reveals the actual performance of the server hardware under different business loads. Enterprises can reasonably configure server resources according to this information and avoid over-provisioning or under-provisioning of hardware resources. For example, if the hardware of a server has a large margin under the current business load, migrating the business of some other servers to it can be considered to improve the utilization of hardware resources; conversely, if the hardware performance is close to its limit, a hardware upgrade can be planned in advance to ensure the smooth operation of the business. At the same time, the complete flow from software acquisition and screening through hardware performance testing, evaluation, and early warning is clearly specified, with a detailed operating method and basis for each step. This makes the test process repeatable, with different testers obtaining similar results in the same way. For example, newly hired operation and maintenance personnel can accurately test the hardware performance of the server according to this method without relying on personal experience, ensuring the stability and reliability of the test quality.
The foregoing is merely illustrative and explanatory of the principles of the invention; various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted, by those skilled in the art without departing from the principles of the invention or exceeding the scope of the invention as defined in the description.

Claims (8)

A2, comparing the use frequency of each type of software used in each type of server with the set use frequency corresponding to primary test software: if the use frequency of a type of software in a type of server is greater than or equal to the set use frequency, that type of software in that type of server is representative for testing; if the use frequency is less than the set use frequency, it is not representative for testing; and recording each type of software in each type of server that is representative for testing as a type of primary test software.
CN202411742039.0A, filed 2024-11-29: A method for testing server hardware performance (Active; granted as CN119201652B).

Publications (2)

CN119201652A, published 2024-12-27; CN119201652B, granted 2025-03-18.




