Disclosure of Invention
The present application mainly aims to provide a method, an apparatus, a device, and a medium for generating a function execution duty ratio result based on an embedded system, so as to solve the technical problem of how to acquire the function execution duty ratio efficiently and in real time.
In order to achieve the above object, the present application provides a method for generating a function execution duty ratio result based on an embedded system, the method comprising:
acquiring an original file of an embedded system and receiving a sampling file transmitted by a lower computer;
performing data cleaning and parsing on the original file to obtain a basic database;
generating a sampling database according to the sampling file;
comparing the identification information in the sampling database with the basic information in the basic database to obtain an execution duty ratio, wherein the execution duty ratio is the proportion of the sampling times of the objective function in the total sampling times;
combining the execution duty ratio with the basic information to generate an execution duty ratio result, wherein the execution duty ratio result comprises a function name, sampling times and an operation duty ratio;
wherein after the step of comparing the identification information in the sampling database with the basic information in the basic database to obtain the execution duty ratio, the method further comprises the following steps:
receiving a new sampling file output by a lower computer;
inputting the new sampling file into a preset neural network to obtain an updated sampling database;
and comparing the identification information in the sampling database with the basic information in the basic database based on the updated sampling database to obtain an execution duty ratio.
In one embodiment, the step of performing data cleaning and parsing on the original file to obtain a base database includes:
obtaining a function address mapping table from the original file;
extracting information from the function address mapping table to obtain function basic information, wherein the function basic information comprises a function name, a starting address and an ending address;
constructing a structured data table based on the function basic information;
performing data cleaning processing on the structured data table to generate a first database;
and storing the first database as a base function library.
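By way of illustration only, the parsing and cleaning steps above may be sketched in Python as follows. The MAP-line format, function names and addresses used here are simplified assumptions, since real GCC MAP files vary by toolchain and linker script; this is a minimal sketch, not part of the claimed method.

```python
import re

# Hypothetical, simplified MAP-file lines; real GCC MAP output differs by toolchain.
SAMPLE_MAP_LINES = [
    " .text.main         0x08000100    0x40  build/main.o",
    " .text.uart_send    0x08000140    0x20  build/uart.o",
    " .text.main         0x08000100    0x40  build/main.o",   # duplicate, to be cleaned
]

LINE_RE = re.compile(r"\.text\.(\w+)\s+0x([0-9a-fA-F]+)\s+0x([0-9a-fA-F]+)")

def parse_map_lines(lines):
    """Extract (function name, starting address, ending address) records."""
    records = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            name = m.group(1)
            start, size = int(m.group(2), 16), int(m.group(3), 16)
            records.append({"name": name, "start": start, "end": start + size})
    return records

def clean_records(records):
    """Data cleaning: drop duplicates and zero-length entries, sort by start address."""
    seen, cleaned = set(), []
    for r in records:
        key = (r["name"], r["start"])
        if key in seen or r["end"] <= r["start"]:
            continue
        seen.add(key)
        cleaned.append(r)
    return sorted(cleaned, key=lambda r: r["start"])

# The cleaned, structured table plays the role of the first database here.
first_database = clean_records(parse_map_lines(SAMPLE_MAP_LINES))
```

In practice the cleaned table would then be persisted as the base function library rather than kept in memory.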
In an embodiment, the step of generating a sample database from the sample file comprises:
determining an original pointer sampling sequence based on the sampling file;
performing address alignment processing on the original pointer sampling sequence to obtain an address interval corresponding to the original pointer sampling sequence;
performing statistics based on the address interval to obtain a corresponding occurrence frequency and a frequency distribution table;
establishing a second database according to the occurrence frequency and the frequency distribution table, wherein the second database comprises an address value, occurrence times and a time stamp;
and performing rejection processing on the second database to obtain a sampling database.
In an embodiment, the step of performing a rejection process on the second database to obtain a sampled database includes:
acquiring an operation time stamp and a basic database range;
removing abnormal data with address values exceeding the range of the basic database in the second database to obtain an updated second database;
obtaining updated address values and the occurrence times of each address value in a preset sampling period according to the updated second database;
and generating a sampling database based on the updated address values, the occurrence times of each address value in a preset sampling period and the corresponding running time stamp.
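A minimal Python sketch of the rejection processing described above follows; the base-database address range, the sample rows, the sampling period and the field names are illustrative assumptions only.

```python
from collections import Counter

# Assumed base-database address range and second-database rows (address, timestamp).
BASE_START, BASE_END = 0x08000000, 0x08010000
second_db = [
    (0x08000104, 0.001),
    (0x08000150, 0.002),
    (0x20000000, 0.003),   # RAM address outside the base range: abnormal, rejected
    (0x08000104, 0.004),
]

def reject_and_count(rows, start, end, period=(0.0, 1.0)):
    """Drop address values outside the base-database range, then count the
    occurrences of each remaining address value within the sampling period."""
    kept = [(a, t) for a, t in rows if start <= a < end and period[0] <= t < period[1]]
    counts = Counter(a for a, _ in kept)
    # sampling database row: address value, occurrence count, latest timestamp seen
    return [
        {"address": a, "count": c, "timestamp": max(t for x, t in kept if x == a)}
        for a, c in sorted(counts.items())
    ]

sampling_db = reject_and_count(second_db, BASE_START, BASE_END)
```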
In one embodiment, the step of comparing the identification information in the sampling database with the basic information in the basic database to obtain an execution duty ratio includes:
extracting pointer information of a plurality of sampling moments from the sampling database, and acquiring total sampling times;
acquiring a plurality of pieces of corresponding function identification information according to the plurality of pieces of pointer information;
searching corresponding basic information in the basic database according to the plurality of function identification information to obtain the sampling times of the objective function;
and determining the execution duty ratio of the objective function based on the sampling times of the objective function and the total sampling times.
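The comparison in the steps above may be sketched as follows. The base-database contents and sampled PC values are assumed for illustration, and a binary search over the sorted starting addresses is only one possible lookup strategy.

```python
import bisect

# Assumed base database: (name, starting address, ending address), sorted by start.
base_db = [
    ("main",      0x08000100, 0x08000140),
    ("uart_send", 0x08000140, 0x08000160),
]
starts = [f[1] for f in base_db]

def function_for(pc):
    """Map a sampled PC pointer to the function whose address range contains it."""
    i = bisect.bisect_right(starts, pc) - 1
    if i >= 0 and base_db[i][1] <= pc < base_db[i][2]:
        return base_db[i][0]
    return None  # pointer outside any known function

def execution_duty_ratio(samples, target):
    """Execution duty ratio = samples attributed to the target / total samples."""
    total = len(samples)
    hits = sum(1 for pc in samples if function_for(pc) == target)
    return hits / total if total else 0.0

samples = [0x08000104, 0x08000110, 0x08000150, 0x08000104]
ratio = execution_duty_ratio(samples, "main")   # 3 of the 4 samples fall in main
```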
In an embodiment, the method is applied to a lower computer, and the method includes:
acquiring program counter data of the embedded system during operation through a timing interrupt mechanism;
converting the program counter data to obtain pointer information;
storing the pointer information into an acquisition information table;
and when the acquired information table meets the preset capacity, outputting the acquired information table as a sampling file and sending the sampling file to an upper computer so as to execute the method.
In an embodiment, the step of acquiring the program counter data of the embedded system during operation through the timing interrupt mechanism comprises the following steps:
setting a timer through the timing interrupt mechanism;
configuring the triggering period of the timer as a preset sampling interval;
sampling the program counter according to the preset sampling interval and recording program counter data;
writing the program counter data into a circular buffer to obtain the data volume of the circular buffer;
detecting the data volume of the circular buffer and comparing it with a preset capacity threshold to obtain a detection result;
and outputting the program counter data when the detection result indicates that the data volume of the circular buffer exceeds the preset capacity threshold.
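As an illustrative sketch of this lower-computer logic: on a real microcontroller this would typically be a timer interrupt service routine written in C, but the buffer-and-flush behavior can be modeled on a host as below. The capacity threshold and PC values are assumptions.

```python
from collections import deque

CAPACITY_THRESHOLD = 4  # assumed preset capacity threshold

class PcSampler:
    """Host-side model of the lower computer's timed PC sampling."""

    def __init__(self, threshold=CAPACITY_THRESHOLD):
        self.buffer = deque()   # stands in for the circular buffer of PC samples
        self.threshold = threshold
        self.flushed = []       # "sampling files" output to the upper computer

    def on_timer_interrupt(self, pc_value):
        """Called once per preset sampling interval: record the PC, then flush
        the buffer as a sampling file when the capacity threshold is reached."""
        self.buffer.append(pc_value)
        if len(self.buffer) >= self.threshold:
            self.flushed.append(list(self.buffer))
            self.buffer.clear()

sampler = PcSampler()
for pc in [0x100, 0x104, 0x108, 0x10C, 0x110]:
    sampler.on_timer_interrupt(pc)
```

A low-priority thread would then transmit each flushed list to the upper computer, as described in the embodiment.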
In addition, in order to achieve the above object, the present application further provides a device for generating a function execution duty ratio result based on an embedded system, the device comprising:
the acquisition module is used for acquiring an original file of the embedded system and receiving a sampling file transmitted by the lower computer;
the processing module is used for performing data cleaning and parsing on the original file to obtain a basic database;
the processing module is also used for generating a sampling database according to the sampling file;
the comparison module is used for comparing the identification information in the sampling database with the basic information in the basic database to obtain an execution duty ratio, wherein the execution duty ratio is the proportion of the sampling times of the objective function in the total sampling times;
the result module is used for combining the execution duty ratio with the basic information to generate an execution duty ratio result, wherein the execution duty ratio result comprises a function name, sampling times and an operation duty ratio;
the device further comprises a data updating module, wherein the data updating module is used for receiving a new sampling file output by the lower computer, inputting the new sampling file into a preset neural network to obtain an updated sampling database, and comparing the identification information in the sampling database with the basic information in the basic database based on the updated sampling database to obtain an execution duty ratio.
In addition, in order to achieve the above object, the present application further provides a computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for generating a function execution duty ratio result based on an embedded system as described above.
Furthermore, in order to achieve the above object, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method for generating a function execution duty ratio result based on an embedded system as described above.
According to the application, the original file and the sampling file are obtained from the embedded system, and the basic database and the sampling database are generated through data cleaning and parsing. The identification information in the sampling database is compared with the basic information in the basic database to obtain the execution duty ratio of each function, and a result report containing the function name, the sampling times and the operation duty ratio is generated. A new sampling file is further received, the sampling database is optimized and updated through the preset neural network, and the comparison is performed again to dynamically update the execution duty ratio. The MAP file and the sampled PC pointers are parsed to form the basic database and the sampling database, the execution duty ratio of each function is obtained by comparing the two, the efficiency and accuracy of data processing are improved by utilizing the neural network, the operation efficiency of the embedded system is evaluated based on the location of high-load functions, and the performance optimization efficiency of the embedded system is improved.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The method comprises the main steps of obtaining an original file of an embedded system and receiving a sampling file transmitted by a lower computer, performing data cleaning and parsing on the original file to obtain a basic database, generating a sampling database according to the sampling file, comparing identification information in the sampling database with basic information in the basic database to obtain an execution duty ratio, wherein the execution duty ratio is the proportion of the sampling times of an objective function in the total sampling times, and combining the execution duty ratio with the basic information to generate an execution duty ratio result, wherein the execution duty ratio result comprises a function name, sampling times and an operation duty ratio. After the step of comparing the identification information in the sampling database with the basic information in the basic database to obtain the execution duty ratio, the method comprises the steps of receiving a new sampling file output by the lower computer, inputting the new sampling file into a preset neural network to obtain an updated sampling database, and comparing the identification information in the sampling database with the basic information in the basic database based on the updated sampling database to obtain the execution duty ratio.
Based on this, the embodiment of the application provides a method for generating a function execution duty ratio result based on an embedded system, which is applied to an upper computer, wherein the upper computer is usually a personal computer or a server, referring to fig. 1, fig. 1 is a flow diagram of a first embodiment of the method for generating a function execution duty ratio result based on an embedded system according to the application.
In this embodiment, the method for generating the function execution duty ratio result based on the embedded system includes steps S10 to S50:
Step S10, acquiring an original file of the embedded system and receiving a sampling file transmitted by a lower computer.
It should be noted that, in order to trace the execution duty ratio of embedded system functions, two key files need to be acquired first, namely the original file and the sampling file. In this embodiment, the original file refers to the MAP file generated by the GCC compiler, which contains information such as all functions in the system and their corresponding address ranges. Obtaining the MAP file is relatively straightforward: GCC can be made to output the file simply by adding the appropriate parameter to the compile options. The sampling file, on the other hand, is obtained by collecting the current state of the system at runtime, in particular the position information of the Program Counter (PC) pointer. This step relies on a timer that generates an interrupt at set intervals to record the location of the instruction currently being executed. The process is completed by the lower computer, and the specific implementation strategy comprises adding an interface for storing the PC pointer, starting a timer for timed sampling, aggregating the acquired information into a list, and outputting the information through a low-priority thread in a timely manner to form a log.
Step S20, performing data cleaning and parsing on the original file to obtain a basic database.
It should be noted that it is relatively straightforward to obtain the MAP file: it is only necessary to add the appropriate parameter at compile time to have GCC output the file. However, the MAP file contains a large amount of information, such as the symbol table, the memory map, and the address range of each function, all mixed together, and thus a data cleaning and parsing step is required.
Further, step S20 also includes obtaining a function address mapping table from the original file; specifically, the function address mapping table is obtained from the original file generated during the compilation of the embedded system. Typically, this can be achieved through a compiler option, ensuring that the output MAP file contains all necessary information. The file details the name of each function in the system and its corresponding memory address range.

Information is then extracted from the function address mapping table to obtain the function basic information, which comprises the function name, starting address and ending address. The work at this stage mainly includes parsing the MAP file and identifying and extracting core information such as the name, starting address and ending address of each function. For example, this task may be done with a script or automated with dedicated tools to ensure the consistency and accuracy of the data extraction.

A structured data table is then constructed based on the function basic information, data cleaning processing is performed on the structured data table to generate a first database, and the first database is stored as a base function library. In particular, constructing a structured data table not only helps to improve the efficiency of data management, but also facilitates subsequent data querying and analysis. The design of the structured data table should follow the principles of ease of extension and maintenance, and typically includes fields such as a function ID, function name, starting address, ending address, and any other metadata that may be useful. After the structured data table is built, it is subjected to data cleaning processing.
Data cleaning aims at removing redundant information, correcting erroneous data, and ensuring consistency of data formats. Common cleaning operations include removing duplicate records, correcting address range errors, normalizing data formats, and the like. The cleaned data is converted into a first database, and the generated first database is stored as a base function library. This base function library will become the basis for subsequent analysis work, for comparison and matching with data collected at runtime. To ensure data security and access efficiency, a suitable database management system (such as SQLite or MySQL) may be selected to store the data, and a reasonable indexing strategy may be designed to speed up queries.
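For instance, the base function library could be stored in SQLite as sketched below. The table and column names are illustrative assumptions, and the index on the starting address is one possible strategy for speeding up the range queries used during comparison.

```python
import sqlite3

# Minimal sketch: persist the cleaned function table as the base function
# library (SQL1) in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE base_functions (
        func_id    INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        start_addr INTEGER NOT NULL,
        end_addr   INTEGER NOT NULL
    )
""")
# Index on the starting address to speed up address-range lookups.
conn.execute("CREATE INDEX idx_start ON base_functions(start_addr)")
conn.executemany(
    "INSERT INTO base_functions(name, start_addr, end_addr) VALUES (?, ?, ?)",
    [("main", 0x08000100, 0x08000140), ("uart_send", 0x08000140, 0x08000160)],
)
conn.commit()

# Range query: which function contains a given sampled PC pointer value?
pc = 0x08000150
row = conn.execute(
    "SELECT name FROM base_functions WHERE start_addr <= ? AND ? < end_addr",
    (pc, pc),
).fetchone()
```

A file-backed database would be used in practice so the library persists across analysis runs.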
In addition, in the process of data parsing, it is also necessary to consider the handling of special cases, such as how to correctly distinguish between multiple homonymous functions when they exist but are located in different address spaces, or how to reasonably handle such exceptions when functions with discontinuous addresses are encountered. Effective resolution of these problems is critical to improving the accuracy of the final analysis results. The goal of the whole process is to create a reliable base database as a basis for comparing the sampled file data, thereby accurately counting the actual execution frequency of each function and its proportion during the operation of the whole system.
Step S30, a sampling database is generated according to the sampling file.
It should be noted that, the sample file typically includes a list of PC (program counter) pointer values collected periodically during the running of the system, and these initial data need to undergo a series of processes to be used for subsequent analysis.
Further, step S30 further comprises determining pointer information at runtime based on the sampling file, and performing redundancy deletion on the pointer information to obtain processed pointer information. Specifically, determining the pointer information at runtime mainly means extracting all PC pointer values from the sampling file. Since these data are derived directly from the real-time operating state of the system, they accurately reflect the execution of the system at each moment. However, the original sampled data may contain duplicate or erroneous information, which requires further processing. Meanwhile, in order to improve the accuracy of data analysis, a redundancy deletion operation needs to be carried out on the extracted pointer information. This includes identifying and removing duplicate PC pointer records, and filtering out data points that are clearly anomalous (e.g., values outside the effective address range). By applying a suitable algorithm, the data set can be cleaned effectively, ensuring that each retained piece of pointer information is meaningful. The processed pointer information is then matched against preset function address ranges and converted into corresponding function identification information, the function identification information is formatted according to a preset database structure to obtain formatted function identification information, and the formatted function identification information is stored to obtain a sampling database. Specifically, the processed pointer information is analyzed and matched against the preset function address ranges so as to be converted into the corresponding function identification information. The key here is that there is an accurate base database SQL1 in which all functions and their corresponding address intervals are stored.
For each cleaned PC pointer value, it is determined whether it falls within the address range of a known function, and the pointer is associated with the corresponding function identifier. Once the function identifier corresponding to each PC pointer is determined, it needs to be formatted according to the preset database structure. This means that not only the identifier of each function is recorded, but also related metadata such as the time stamp of the sample and the thread ID to which it belongs, for subsequent analysis. The purpose is to build a database model that is well structured and easy to query. The final step is to store the formatted function identification information into the sampling database SQL2. This typically involves a batch insertion operation to efficiently import large amounts of data into the database.
The finally formed sampling database SQL2 is a clear, ordered and easy-to-query structured data set, and can be directly matched and compared with the function address range information in the basic database SQL1, so that the actual execution times of each function and the proportion of each function in the whole system operation period are calculated.
Step S40, comparing the identification information in the sampling database with the basic information in the basic database to obtain the execution duty ratio.
The base database SQL1 contains detailed information such as all functions and their corresponding address ranges parsed from the MAP file generated by GCC compilation. This information serves as a reference for identifying and matching the data collected at runtime. After the cleaned and converted sampling database SQL2 is obtained, the next task is to compare each PC pointer value recorded therein (i.e. the processed pointer information) with the function address ranges in the base database. Specifically, for each record in SQL2, the PC pointer value it contains is used to query whether it falls within the address range of any function in SQL1. If a match is found, this indicates that the PC pointer corresponds to a single execution instance of that particular function.
After this comparison operation is performed, the number of times each function was sampled can be counted. The execution duty ratio is the ratio of the sampling times of the objective function to the total sampling times, so the total sampling times needs to be known in order to calculate it. Based on this, the execution duty ratio of each function can be calculated by the following formula:

execution duty ratio = (sampling times of the objective function / total sampling times) × 100%
For example, in one hypothetical scenario, if 10000 samples are taken in total and function A is sampled 500 times, then the execution duty ratio of function A is 5%. This analysis method clearly shows the relative importance of each function during overall system operation.

In addition, the function execution in different time periods can be further analyzed by segmenting the sampled data according to the time stamps. For example, one day of operation data is divided into one segment per hour, and the above matching and duty ratio calculation is repeated within each segment. The advantage is that it can be observed whether the activity frequency of certain functions increases significantly in certain periods, which may indicate that these periods are highly loaded or that certain tasks reach a performance peak.

Classification statistics can also be performed based on different operation modes: embedded systems typically run at different privilege levels, such as thread mode and privileged mode. Therefore, the execution of functions can also be classified and counted according to the operation mode. In practice this means that the current operation mode needs to be recorded when the PC pointer is acquired, and the sampled data is classified accordingly in the subsequent data processing stages. For example, frequently called functions in user mode may point to problems in the application-layer logic, while frequent calls in kernel mode may indicate a bottleneck in the underlying drivers or operating system services. In this way, optimization can be more targeted, such as adjusting task priorities or improving the efficiency of kernel-mode code.
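The time-segmented analysis described above can be sketched as follows; the one-hour segment length and the sample data are assumptions for illustration, and the samples are assumed to already be resolved to function names against the base database.

```python
from collections import Counter, defaultdict

SEGMENT_SECONDS = 3600  # assumed segment length: one hour

def segmented_duty_ratios(samples):
    """Group (timestamp, function name) samples by time segment, then compute
    each function's share of the samples within that segment."""
    by_segment = defaultdict(Counter)
    for ts, func in samples:
        by_segment[int(ts // SEGMENT_SECONDS)][func] += 1
    return {
        seg: {f: n / sum(c.values()) for f, n in c.items()}
        for seg, c in by_segment.items()
    }

samples = [(10, "main"), (20, "main"), (30, "uart_send"),   # segment 0 (first hour)
           (3700, "uart_send"), (3800, "uart_send")]        # segment 1 (second hour)
ratios = segmented_duty_ratios(samples)
```

A shift of a function's ratio between segments is exactly the kind of load change the paragraph above describes.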
Step S50, combining the execution duty ratio with the basic information to generate an execution duty ratio result.
Specifically, the execution duty ratio result is formed by summarizing the execution duty ratio data obtained by comparing the identification information in the sampling database with the basic information in the basic database. The execution duty cycle result typically contains the name of each function, its total number of samples over the entire system run period, and the corresponding run duty cycle.
Further, an optimization result is generated according to the execution duty ratio result, and the running condition of the objective function can be analyzed in depth. For example, if certain critical functions are found to be frequently invoked during system operation (high sampling times and/or a high operation duty ratio), these functions may have become performance bottlenecks. For such functions, optimization suggestions can be made from several perspectives. For algorithm optimization, it can be checked whether the algorithm logic inside these functions can be improved, for example whether there are unnecessary loops or repeated computations, or whether part of it can be replaced with a more efficient algorithm. For memory access optimization, the memory usage patterns of these functions can be analyzed to determine whether a high cache miss rate exists; memory access latency can be reduced by adjusting data structures or adopting algorithms with better locality. Parallelization can accelerate processing by introducing multiple threads or utilizing hardware features (e.g., SIMD instruction sets). Finally, code refactoring can consider whether there is redundant code or an overly complex control flow; simplifying the code structure not only improves readability but may also improve performance. In addition, the optimization strategy can be further refined by combining the trends of the execution duty ratio across different time periods or operation modes. For example, functions that exhibit higher load during certain periods deserve particular attention; especially in real-time systems, key functions that must guarantee response times should be optimized first.
Furthermore, after the execution duty ratio result is obtained, the performance of the embedded system continues to be monitored. By receiving a new sampling file output by the lower computer and inputting the new sampling file into the preset neural network, an updated sampling database is obtained, and step S40 is executed based on the updated sampling database. Specifically, after the lower computer completes a new round of data acquisition, it outputs a new sampling file containing the latest PC pointer values. This file contains information on the operating state of the system during the most recent period, which is critical for capturing the dynamic behavior of the system. Once a new sampling file is received, it is input into the preset neural network. The preset neural network may be a model based on Long Short-Term Memory (LSTM) or a Convolutional Neural Network (CNN). For example, an LSTM network can be used to process time-series data, because sampling files in embedded systems typically have time dependencies; LSTM is able to capture pattern changes over long time spans, which is particularly useful for identifying function call frequencies and performance bottlenecks. A CNN is good at feature extraction and can be used to detect specific patterns or anomalies in the PC pointer values. The neural network is adopted for its powerful pattern recognition capability and adaptive learning characteristics: traditional data analysis methods have difficulty coping with complex, dynamically changing data sets, while a neural network can automatically learn and identify potential patterns in the data through training. In addition, the operating environment of an embedded system is complex and changeable; the neural network can better adapt to these changes and provide a more accurate data analysis result, so that the accuracy and efficiency of data processing are significantly improved.
After the update is completed, the next step is to compare the updated sampling database with the information in the base database. The core here is to match each PC pointer value with its corresponding function address range, thereby determining the number of executions and the duty ratio of each function in the new time period. Since the previous base database SQL1 and the old version of the sampling database SQL2 serve as references, this step mainly focuses on identifying the newly added execution instances and their impact. In this way, not only can the trend of the system over time be tracked, such as the increase or decrease of the execution frequency of certain functions, but new performance bottlenecks can also be found in time. For example, if a previously less active function suddenly shows a higher execution duty ratio, this may be due to a recent code change or a task load transfer caused by a change in the external environment. Based on these findings, the optimization strategy can be quickly adjusted, such as re-examining the implementation details of the relevant functions or adjusting the resource allocation scheme.
This embodiment provides a method for generating a function execution duty ratio result based on an embedded system, in which the original file and the sampling file are obtained from the embedded system and the basic database and the sampling database are generated through data cleaning and parsing. The identification information in the sampling database is compared with the basic information in the basic database to obtain the execution duty ratio of each function, and a result report containing the function name, the sampling times and the operation duty ratio is generated. A new sampling file is further received, the sampling database is optimized and updated through the preset neural network, and the comparison is performed again to dynamically update the execution duty ratio. The MAP file and the sampled PC pointers are parsed to form the basic database and the sampling database, the execution duty ratio of each function is obtained by comparing the two, the efficiency and accuracy of data processing are improved by utilizing the neural network, the operation efficiency of the embedded system is evaluated based on the location of high-load functions, and the performance optimization efficiency of the embedded system is improved.
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the description above, and will not be repeated. On this basis, please refer to fig. 2, the method for generating the function execution duty ratio result based on the embedded system further includes steps S201 to S205:
Step S201, determining an original pointer sampling sequence based on the sampling file.
It should be noted that, first, all PC pointer values are extracted from the sampling file. These values are typically accompanied by a time stamp or other metadata (such as a thread ID) to provide context information. An automation tool may be written in a scripting language (e.g., Python or Perl) to parse the sampling file and extract the required pointer information from it. The next step is to clean the extracted raw pointer values to remove any noise or invalid data that may affect the accuracy of the analysis, for example filtering out pointer values outside the effective address range, removing duplicate records, and correcting erroneous time stamps. This step ensures the quality of the data for subsequent analysis. The cleaned pointer values are then organized into an ordered sequence, i.e. the original pointer sampling sequence. This sequence is arranged in time order, with each element representing the code location being executed by the system at a certain moment. Additional metadata, such as the acquisition time and the thread to which the sample belongs, may also be attached to each pointer value to ease subsequent analysis.
Step S202, address alignment processing is carried out on the original pointer sampling sequence, and an address interval corresponding to the original pointer sampling sequence is obtained.
It should be noted that address alignment processing is performed for each PC pointer value in the original pointer sampling sequence. Specifically, it is checked whether each pointer value falls within the address interval of some function. This can be automated by writing scripts or using SQL queries. For example, for each pointer value, the basic database may be queried, all function address intervals containing that pointer value found, and the matching result recorded.
Once the address interval to which each pointer value belongs is determined, a mapping table may be generated that converts the original pointer sampling sequence into a corresponding sequence of address intervals. The mapping table not only contains each pointer value and its corresponding function address interval, but can also carry additional information, such as the function name and thread ID, to facilitate subsequent analysis. To ensure the accuracy of the address alignment, the results also need to be verified: for example, check whether there are unrecognized pointer values (which may point to operating system code or other non-user-defined functions) and handle them appropriately. In addition, the speed and efficiency of address alignment processing can be improved markedly by optimizing the query algorithm and index structure.
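The interval lookup above can be sketched with a binary search over the function start addresses; the function table is a hypothetical excerpt of the basic database, not real data:

```python
import bisect

# Hypothetical basic-database excerpt: (start_address, end_address, name),
# sorted by start address.
FUNCS = [
    (0x08000100, 0x080001FF, "uart_isr"),
    (0x08000200, 0x080002FF, "main_loop"),
]
STARTS = [f[0] for f in FUNCS]

def align(pc):
    """Return (function_name, (start, end)) for the interval containing pc,
    or None for unrecognized pointers (e.g., OS or library code)."""
    i = bisect.bisect_right(STARTS, pc) - 1
    if i >= 0:
        start, end, name = FUNCS[i]
        if start <= pc <= end:
            return name, (start, end)
    return None
```

Because the intervals are sorted and non-overlapping, each lookup is O(log n), which is one way to realize the query optimization mentioned above.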
Step S203, counting is carried out based on the address interval, and a corresponding occurrence frequency and frequency distribution table is obtained.
It should be noted that statistics are computed over the address intervals to obtain the corresponding occurrence frequencies and a frequency distribution table, which identify the frequently called functions in the system and thereby locate potential performance bottlenecks.
Specifically, according to the mapping table generated by the address alignment processing, the occurrence frequency of each function address interval in the sampling file can be counted, that is, how many times the function was executed over the whole monitoring period. By traversing all pointer values and recording the function address interval each belongs to, a frequency statistics table can be constructed in which each entry represents a function and its execution count. For further analysis, the ratio of each function's execution count to the total sampling count can be calculated to form a frequency distribution table, which helps in understanding the occupation of system resources and the relative importance of functions. The frequency distribution table not only displays the frequently called functions but also reveals low-frequency but potentially critical call patterns, providing a basis for optimization work. For example, if a function's execution frequency is abnormally high, code optimization or algorithm improvement may need to be considered, while a function that is low-frequency but long-running may also be an important optimization target.
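A minimal frequency-statistics sketch over the aligned function names (the names are illustrative):

```python
from collections import Counter

def frequency_table(aligned_names):
    """Count each function's samples and its share of the total, forming the
    occurrence-frequency and frequency-distribution tables in one mapping."""
    counts = Counter(aligned_names)
    total = sum(counts.values())
    return {name: (n, n / total) for name, n in counts.items()}
```

Each entry pairs a function with its execution count and its ratio of the total sampling count, matching the two tables described above.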
Step S204, a second database is built according to the occurrence frequency and the frequency distribution table.
It should be noted that the second database includes an address value, a number of occurrences, and a time stamp. This process may be automated by scripting or using database management tools when building the second database. For each record, adding a timestamp field may provide additional time dimension information in addition to storing address values and number of occurrences, helping the developer to learn the trend of a particular function call pattern over time. For example, certain functions may be called only frequently for a certain period of time, which may be due to system load changes or other external factors. In this way, not only the execution frequency of each function can be accurately recorded, but also the dynamic behavior characteristics thereof can be captured.
In addition, to improve query efficiency and data management capabilities, the second database should be designed with a reasonable index structure. For example, an index may be created from address values, number of occurrences, or time stamps, making the query operation more efficient.
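One way to realize the second database with the indexed columns suggested above is an SQLite table; the schema and column names are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE second_db (
    address_value INTEGER,   -- sampled PC address
    occurrences   INTEGER,   -- number of occurrences
    ts            INTEGER    -- acquisition timestamp
)""")
# Indexes on address value and timestamp make lookups and
# time-range queries more efficient, as suggested above.
conn.execute("CREATE INDEX idx_addr ON second_db(address_value)")
conn.execute("CREATE INDEX idx_ts ON second_db(ts)")
conn.executemany("INSERT INTO second_db VALUES (?, ?, ?)",
                 [(0x08000110, 3, 0), (0x08000210, 1, 5)])
```

With the timestamp column stored per record, trend queries over specific periods become simple range scans on `idx_ts`.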
Step S205, performing rejection processing on the second database to obtain a sampling database.
It should be noted that, performing the rejection process on the second database to generate the reduced sampling database is a key step for ensuring the data quality and improving the analysis efficiency. This process aims to remove unnecessary redundant information and anomaly data, thereby making the data set ultimately used for performance analysis more accurate and useful.
Further, step S205 also includes obtaining running timestamps and the base database range. Specifically, the running timestamps are the specific time records made while the system is monitored; they provide the time dimension of the data so that a developer can track system behavior over a particular period. By recording the time point of each sample, the trend of the system load can be analyzed, and peak periods or periods in which abnormal events occur can be identified. The base database range, meanwhile, is the set of address intervals of all functions extracted from the MAP file or other original file produced by compilation. This range includes the start and end addresses of each function and thus defines the locations of all monitorable functions in the system. The base database not only provides a reference standard for subsequent data processing but also helps identify which PC pointer values belong to legitimate function calls, thereby excluding invalid or anomalous data points.

Abnormal data whose address values fall outside the base database range are then rejected from the second database to obtain an updated second database. Specifically, each address value in the second database is compared with the function address intervals in the base database. The base database contains the start and end addresses of all known functions, and those intervals define the legal function call ranges. By writing a script or using a database query language (e.g., SQL), it is possible to automatically check whether each PC pointer value falls within the address range of any function. If an address value lies outside the base database range, it is treated as anomalous data and removed from the second database.
For example, all records in the second database may be traversed in a loop, performing a lookup on each address value to determine whether it belongs to a legal function address interval. Non-conforming records may be deleted directly or marked as invalid. To ensure the accuracy of the processing, a logging function can be added to record the removed data and the reasons in detail, facilitating later review and verification. After this step, the updated second database contains only valid address values within the address ranges of known functions. This not only improves the purity of the data set but also provides a reliable basis for subsequent statistical analysis. Updated address values and the number of occurrences of each address value within a preset sampling period are then obtained from the updated second database, and the sampling database is generated from the updated address values, their occurrence counts within the preset sampling period, and the corresponding running timestamps. Specifically, for each valid address value in the updated second database, the number of times it was sampled over the whole monitoring period is counted. This can be achieved by traversing all records and grouping them by address value; for example, an SQL query such as "SELECT address_value, COUNT(*) AS occurrence FROM updated_db GROUP BY address_value;" counts the occurrences of each address value. Here, "updated_db" is the updated second database, "address_value" is a specific address pointed to by the PC pointer, and "occurrence" is the number of times that address value appears. Next, based on these updated address values and their occurrence counts, the final sampling database is generated in combination with the timestamp information of each record.
Each entry should contain not only the address value and number of occurrences, but also the time stamps of the first and last occurrence, as well as other metadata (e.g., thread ID) that may be helpful for analysis. The purpose of this is to provide a structured, easy to query dataset that supports deeper performance analysis. For example, a high load function within a specific time period can be identified by the timestamp information, or a trend of some function call patterns over time can be found.
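The rejection and aggregation described above can be sketched in SQLite as follows; the table layout, address ranges, and sample data are illustrative assumptions:

```python
import sqlite3

# Assumed base-database function address ranges (start, end)
BASE_RANGES = [(0x08000100, 0x080001FF), (0x08000200, 0x080002FF)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE updated_db (address_value INTEGER, ts INTEGER)")
conn.executemany("INSERT INTO updated_db VALUES (?, ?)", [
    (0x08000110, 1), (0x08000110, 4), (0x08000210, 2),
    (0x0900FFFF, 3),   # anomalous: outside every base-database range
])

def in_base_range(addr):
    return any(lo <= addr <= hi for lo, hi in BASE_RANGES)

# Reject records whose address value exceeds the base-database range
conn.create_function("in_range", 1, lambda a: int(in_base_range(a)))
conn.execute("DELETE FROM updated_db WHERE in_range(address_value) = 0")

# Sampling database: occurrence count plus first/last timestamps per address
sampling_db = conn.execute("""
    SELECT address_value, COUNT(*) AS occurrence, MIN(ts), MAX(ts)
    FROM updated_db GROUP BY address_value ORDER BY address_value
""").fetchall()
```

The `MIN(ts)`/`MAX(ts)` columns supply the first- and last-occurrence timestamps that each sampling-database entry should carry.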
In this embodiment, the original pointer sampling sequence is extracted from the sampling file and mapped, through address alignment processing, to the corresponding function address intervals, and the occurrence frequency and distribution of each address interval are counted. A second database containing address values, occurrence counts, and timestamps is established from this data; rejection processing then removes abnormal values, and the final sampling database is generated. This improves the accuracy of the data and the efficiency of analysis, effectively identifies high-load functions and potential bottlenecks in the system, supports dynamic optimization adjustments, enhances system performance and stability, and provides a reliable data basis for continuous monitoring and improvement.
In the third embodiment of the present application, the same or similar content as the first embodiment of the present application can be referred to the description above, and the description is omitted. On this basis, please refer to fig. 3, the step S40 of the method for generating the function execution duty ratio result based on the embedded system further includes steps S301 to S304:
step S301, extracting pointer information of a plurality of sampling moments from the sampling database, and obtaining the total sampling times.
It should be noted that, by querying the sampling database, all PC pointer value records within a specific time interval can be selected, ensuring that the critical period of system operation is covered. Once the desired time period is determined, the next step is to aggregate the data: collect all pointer information at each sampling moment as well as the total number of samples over the selected period. The operation can be automated with scripts or SQL queries; for example, a simple query is "SELECT COUNT(*) FROM sampled_data WHERE timestamp BETWEEN 'start_time' AND 'end_time';", where "sampled_data" is the table storing the sample data and "timestamp" is the field recording the acquisition time.
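The total-sample-count query can be run against an in-memory SQLite table as below; the table and column names follow the example in the text, while the data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sampled_data (pc INTEGER, timestamp INTEGER)")
conn.executemany("INSERT INTO sampled_data VALUES (?, ?)",
                 [(0x08000110, 1), (0x08000114, 2),
                  (0x08000110, 3), (0x08000210, 12)])

start_time, end_time = 0, 10     # the selected monitoring period
total, = conn.execute(
    "SELECT COUNT(*) FROM sampled_data WHERE timestamp BETWEEN ? AND ?",
    (start_time, end_time)).fetchone()
# Only the samples whose timestamps fall inside the period are counted
```

Using bound parameters (`?`) rather than string-built literals keeps the query safe and reusable for different time windows.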
Step S302, a plurality of corresponding function identification information is obtained according to the pointer information.
In order to convert these pointer values into meaningful function identification information, they must be matched against the function address ranges in the base database (SQL1). This is typically automated with scripts or a database query language (e.g., SQL). For example, a loop may traverse all sampled PC pointer values and perform a lookup on each one to determine the function it belongs to, by checking whether the PC pointer value falls between a function's start and end addresses. If a match is found, that function's identifier is recorded; if not, the pointer points to an unrecognized portion, such as operating system kernel code or another non-user-defined function.
Step S303, searching corresponding basic information in the basic database according to the plurality of function identification information to obtain the sampling times of the objective function.
It should be noted that, based on the function identification information acquired in the previous step, related data may be automatically retrieved from the base database (SQL1) by writing a query script or using SQL statements. Each piece of function identification information corresponds to a specific function name and its address range.
Specifically, a database query may be executed for each piece of function identification information, such as "SELECT function_name, start_address, end_address FROM function_base WHERE function_id = 'target_function_id';", where "function_base" is the table storing all function basic information and "function_id" is the field uniquely identifying each function. In this way, the detailed basic information of the objective function can be obtained, including but not limited to the function name, start address, and end address. Next, this basic information is matched against the pointer information in the sampling database, and the sampling times of the objective function are counted. This involves traversing each record in the sampling database (SQL2) and checking whether its PC pointer value falls within the function's address range. If the match succeeds, the function's sample count is incremented by one. Finally, by summing all matching results, the exact number of samples of the objective function is obtained.
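The matching and counting step can be sketched as follows; the function tuple mirrors the name/start/end fields retrieved from the base database, and all values are illustrative:

```python
def count_target_samples(pc_values, func):
    """func is (function_name, start_address, end_address) as retrieved from
    the base database; returns the objective function's sample count."""
    _name, start, end = func
    return sum(1 for pc in pc_values if start <= pc <= end)
```

Each PC pointer value that falls within the function's address range increments the count by one, as described above.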
Step S304, determining the execution duty ratio of the objective function based on the sampling times of the objective function and the total sampling times.
It should be noted that, after the execution duty ratio of the objective function is obtained, a deeper analysis may be performed. High execution duty cycle functions can be a potential performance bottleneck, particularly when these functions contain complex logic or are frequently called. In this way, it is possible to intuitively see not only which functions are most often executed, but also where optimization may be required.
In addition, comparing the execution duty cycle to the expected performance index may help verify whether the actual performance of the system meets the expectations and guide the subsequent optimization work. For example, if the function execution duty cycle on some critical paths is far higher than expected, then it may be considered to be algorithmically optimized, parallelized, or code reconstructed to improve efficiency.
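Computing the execution duty ratio of step S304 and assembling it into a result entry might look like this; the field names are assumptions for illustration:

```python
def duty_ratio_report(function_name, target_count, total_count):
    """Combine the objective function's sample count with the total sample
    count into an execution duty ratio result entry."""
    ratio = target_count / total_count if total_count else 0.0  # guard 0 total
    return {"function_name": function_name,
            "sample_count": target_count,
            "duty_ratio": ratio}
```

Entries like this can then be sorted by `duty_ratio` to surface the high-load functions first and compare them against expected performance indicators.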
According to the embodiment, the pointer information is extracted from the sampling database, the total sampling times are counted, the basic database is queried according to the pointer matching function identification to obtain the sampling times of the objective function, the execution duty ratio is calculated, the performance monitoring and analysis are realized, the high-load function is accurately positioned, and the data support is provided for optimizing the performance of the embedded system.
Based on this, the embodiment of the application provides a method for generating a function execution duty ratio result based on an embedded system, applied to a lower computer. The lower computer generally refers to the embedded hardware device on which the method runs directly, such as a microcontroller (MCU) or a single-chip microcomputer. Referring to fig. 4, fig. 4 is a flow chart of a fourth embodiment of the method for generating a function execution duty ratio result based on an embedded system.
In this embodiment, the method for generating the function execution duty ratio result based on the embedded system includes steps S401 to S404:
step S401, collecting program counter data of the embedded system in operation through a timing interrupt mechanism.
It should be noted that, first, a timer is set up through the timer interrupt mechanism, with its trigger period configured to the preset sampling interval. A hardware timer needs to be configured during the system initialization phase and set to trigger an interrupt at regular intervals, for example every 1 millisecond. The program counter is then sampled at the preset sampling interval and its value recorded: each time the timer triggers an interrupt, an interrupt service routine (ISR) reads the current program counter value and records it. These PC pointer values directly reflect the code location the system is executing at that moment. To minimize the impact on system performance, the ISR should be as compact as possible, responsible only for acquiring and recording the PC pointer value. Finally, the program counter data is written into the circular buffer, the buffer's data amount is obtained and compared against a preset capacity threshold, and when the detection result shows that the data amount of the circular buffer exceeds the preset capacity threshold, the program counter data is output. Specifically, each time the timer interrupt fires, the acquired PC pointer value is quickly written into the circular buffer, which preserves the continuity and real-time nature of the data. As new data is added, the system dynamically computes the buffer's current data amount, and the buffer's status is monitored by comparing that amount with the preset capacity threshold.
If the detection result shows that the data volume of the circular buffer exceeds the preset capacity threshold value, the buffer is about to be fully loaded or is already fully loaded, and immediate measures are needed to avoid data loss. When the detection result shows that the data volume exceeds the limit, the system starts a data output flow to package all unprocessed data in the circulating buffer into a sampling file and sends the sampling file to the upper computer for subsequent analysis. To ensure consistency and integrity of data, accessing the buffer in a multi-threaded environment may require the use of a mutex lock or other synchronization mechanism to avoid race conditions.
In addition, in view of the data overflow problem that may result from long-term operation, a mechanism is also needed to handle the full load situation. For example, when the buffer reaches its upper limit of capacity, the earliest record may be overwritten or acquisition may be stopped and a warning notification may be issued. Meanwhile, in order to avoid excessively influencing the system performance, the acquisition operation should be as simple and efficient as possible, and the workload executed in the interrupt context should be reduced.
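A host-side sketch of the circular buffer with a capacity threshold follows; a real lower computer would implement this in C inside the ISR, and the sizes here are illustrative assumptions:

```python
from collections import deque

class SampleBuffer:
    """Circular buffer for PC samples; when the data amount reaches the
    preset threshold, the unprocessed data is packaged for output."""
    def __init__(self, capacity=8, threshold=6):
        self.buf = deque(maxlen=capacity)  # oldest record overwritten if full
        self.threshold = threshold
        self.flushed = []                  # batches handed to the output path

    def record(self, pc):
        self.buf.append(pc)
        if len(self.buf) >= self.threshold:
            self.flushed.append(list(self.buf))  # package unprocessed data
            self.buf.clear()
```

The `maxlen` bound gives the overwrite-oldest behavior on overflow; swapping the flush for a semaphore release would match the alternative full-load handling described above.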
Step S402, converting the program counter data to obtain pointer information.
It should be noted that, each time the timer interrupt is triggered, the obtained original PC value needs to undergo a series of processes to be converted into meaningful pointer information.
First, the obtained PC value is usually a memory address, which directly indicates the location of the currently executed instruction. However, for ease of understanding and analysis, these addresses need to be mapped to specific functions or code segments. The first step in the conversion process is to correct the PC value to ensure that it points to the correct instruction location. For example, some architectures may require offset adjustments to the PC value to match the actual instruction address. Next, each PC value may be mapped to a corresponding function name and address interval by looking up a base database (e.g., a function address interval parsed by a MAP file). This step involves not only simple address comparisons, but also the handling of edge situations, such as instructions crossing function boundaries, etc. In addition, additional metadata, such as acquisition time stamps, thread IDs, etc., may be added to each pointer information during the conversion process to provide richer context information. The pointer information thus generated contains not only the original PC value but also the name of the function to which it belongs, the address interval and the associated time and thread information.
Step S403, storing the pointer information in the acquisition information table.
It should be noted that the PC pointer value read in the timer Interrupt Service Routine (ISR) is immediately added to a specially designed acquisition information table. The acquisition information table is typically a circular buffer or linked list structure that can effectively manage memory usage while ensuring data continuity. Every time a new PC pointer value is collected, the new PC pointer value is inserted into the table according to the first-in first-out principle, so that the latest data is ensured not to be lost. To prevent data coverage and overflow problems, the acquisition information table needs to have a certain self-protection mechanism. For example, when the table approaches its maximum capacity, the log output thread may be notified by the release semaphore to process existing data, or simply stop entry of new data and record an overflow event for subsequent analysis. In addition, considering the limitation of system resources, the complexity of the operations performed in the ISR should be reduced as much as possible, only the necessary data insertion operations are performed, and the more time-consuming data processing tasks are handed to the background low-priority thread to complete. Meanwhile, each record contains not only the PC pointer value, but also possibly auxiliary information such as a timestamp, a thread ID, etc. to provide a richer context.
Step S404, outputting the acquisition information table as a sampling file to be sent to the upper computer when the acquisition information table reaches the preset capacity.
It should be noted that, in order to manage the capacity of the collected information table, a threshold is generally set to define when to output data. For example, if the acquisition information table is designed to store 1000 records, the system will trigger the data output mechanism when the newly added data causes the number of records to reach this threshold. At this time, all data in the collection information table is packed into one sample file. The packaging process may include formatting the data, adding necessary metadata (e.g., time stamps, device identification, etc.) to ensure that the host computer can accurately parse the information. The sample file is then transmitted to the host computer. This may be accomplished in a variety of ways depending on the particular communication protocol and hardware configuration. For example, the file may be sent out via a serial port, USB interface, or network connection (e.g., TCP/IP). In some application scenarios, it is also possible to use wireless communication technology (e.g. Wi-Fi, bluetooth) to transmit data, which is especially applicable in mobile or remote monitoring scenarios. In order not to affect the real-time performance of the system, a low priority thread or a specific background task is usually selected to perform file transfer work in actual operation.
Further, in order to realize real-time monitoring and analysis of the duty ratio of the function execution, the lower computer needs to continuously update the sampling files and transmit the latest sampling files to the upper computer in real time.
When the acquisition information table reaches the preset time, its data is updated again, packaged as a new sampling file, and transmitted to the upper computer. Specifically, after a certain time interval (for example, 10 s), the lower computer triggers a data packing operation that converts the current acquisition information into a new sampling file. In this process, metadata such as timestamps and thread IDs are appended in addition to the PC pointer values to provide richer context. In this way, each newly generated sampling file reflects the operating state of the system over the most recent period. Next, the lower computer sends the newly generated sampling file to the upper computer over a pre-configured communication mechanism, such as a serial port, USB, or a network interface (e.g., the TCP/IP protocol). To ensure stable and efficient transmission, a streaming mode may be adopted, in which the sampling file is transmitted while it is being generated rather than after the whole file is complete. In addition, a retransmission mechanism can be set up to cope with possible data loss, ensuring that all important performance data safely reaches the upper computer. This process keeps the performance data current and accurate, providing a solid foundation for continuous system optimization.
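Packaging the acquisition table into a sample file with metadata might be sketched as follows; the JSON layout, device ID, and field names are assumptions for illustration, not a prescribed format:

```python
import json

def package_sample_file(records, device_id="mcu-01", created=0):
    """records: iterable of (timestamp, pc) pairs from the acquisition
    information table; returns a serialized sample-file payload."""
    payload = {
        "device": device_id,   # device identification metadata
        "created": created,    # packaging timestamp
        "samples": [{"ts": ts, "pc": pc} for ts, pc in records],
    }
    return json.dumps(payload)
```

Because the metadata (device ID, packaging time) travels inside the payload, the upper computer can parse each sampling file without out-of-band context.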
According to the embodiment, the timer is set to collect pointer information, the pointer information is stored in the collection information table, when the collection information table meets the preset capacity, the collection information table is output and is used as a sampling file to be sent to the upper computer, data collection and transmission are achieved, real-time performance monitoring is supported, and system optimization efficiency and accuracy are improved.
The application also provides a device for generating the function execution duty ratio result based on the embedded system, please refer to fig. 5, the device comprises:
the acquiring module 10 is configured to acquire an original file of the embedded system and receive a sampling file transmitted by the lower computer.
The processing module 20 is used for cleaning and analyzing the data of the original file to obtain a basic database;
The processing module 20 is further configured to generate a sampling database according to the sampling file.
And the comparison module 30 is used for comparing the identification information in the sampling database with the basic information in the basic database to obtain an execution duty ratio, wherein the execution duty ratio is the proportion of the sampling times of the objective function in the total sampling times.
The result module 40 is configured to combine the execution duty ratio with the basic information to generate an execution duty ratio result, where the execution duty ratio result includes a function name, a sampling number, and an operation duty ratio.
The data updating module 50 is used for receiving a new sampling file output by the lower computer, inputting a preset neural network to the new sampling file to obtain an updated sampling database, and comparing the identification information in the sampling database with the basic information in the basic database based on the updated sampling database to obtain the execution duty ratio.
The function execution duty ratio result generating device based on the embedded system provided by the application adopts the function execution duty ratio result generating method based on the embedded system in the embodiment, so that the technical problem of how to efficiently acquire the function execution duty ratio in real time can be solved. Compared with the prior art, the function execution duty ratio result generating device based on the embedded system has the same beneficial effects as the function execution duty ratio result generating method based on the embedded system provided by the embodiment, and other technical features in the function execution duty ratio result generating device based on the embedded system are the same as the features disclosed by the method of the embodiment, and are not repeated herein.
In an embodiment, the processing module 20 is further configured to obtain a function address mapping table for the original file, extract information from the function address mapping table to obtain function basic information, where the function basic information includes a function name, a start address and an end address, construct a structured data table based on the function basic information, perform data cleaning processing on the structured data table to generate a first database, and store the first database as a basic function library.
In an embodiment, the processing module 20 is further configured to determine an original pointer sampling sequence based on the sampling file, perform address alignment processing on the original pointer sampling sequence to obtain an address interval corresponding to the original pointer sampling sequence, perform statistics based on the address interval to obtain a corresponding frequency of occurrence and frequency distribution table, establish a second database according to the frequency of occurrence and the frequency distribution table, where the second database includes an address value, a frequency of occurrence and a timestamp, and perform rejection processing on the second database to obtain a sampling database.
In an embodiment, the processing module 20 is further configured to obtain an operation time stamp and a basic database range, reject abnormal data in the second database, the address value of which exceeds the basic database range, obtain an updated second database, obtain an updated address value and the occurrence number of each address value in a preset sampling period according to the updated second database, and generate a sampling database based on the updated address value, the occurrence number of each address value in the preset sampling period, and the corresponding operation time stamp.
In an embodiment, the comparison module 30 is further configured to extract pointer information of a plurality of sampling moments from the sampling database, obtain a total sampling frequency, obtain a plurality of corresponding function identification information according to the plurality of pointer information, find corresponding basic information in the basic database according to the plurality of function identification information, obtain a sampling frequency of the objective function, and determine an execution duty ratio of the objective function based on the sampling frequency of the objective function and the total sampling frequency.
In an embodiment, the result module 40 is further configured to collect the program counter data during operation of the embedded system through a timer interrupt mechanism, convert the program counter data to obtain pointer information, store the pointer information in the collection information table, and output the collection information table as a sampling file to be sent to the upper computer when the collection information table meets a preset capacity.
In an embodiment, the result module 40 is further configured to set a timer through a timer interrupt mechanism, configure a trigger period of the timer to be a preset sampling interval, record program counter data by sampling the program counter at the preset sampling interval, write the program counter data into the circular buffer to obtain the data amount of the circular buffer, detect the data amount of the circular buffer and compare it with a preset capacity threshold to obtain a detection result, and output the program counter data when the detection result is that the data amount of the circular buffer exceeds the preset capacity threshold.
The application provides a function execution duty ratio result generating device based on an embedded system, which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the function execution duty ratio result generating method based on the embedded system in the first embodiment.
Referring now to FIG. 6, a schematic diagram of a function execution duty ratio result generation device based on an embedded system, suitable for implementing embodiments of the present application, is shown. The device in the embodiments of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Media Players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The device shown in FIG. 6 is merely an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present application.
As shown in FIG. 6, the function execution duty ratio result generation device based on the embedded system may include a processing device 1001 (e.g., a central processing unit, a graphics processor, etc.), which can perform various appropriate actions and processes according to a program stored in a ROM (Read-Only Memory) 1002 or a program loaded from a storage device 1003 into a RAM (Random Access Memory) 1004. The RAM 1004 also stores various programs and data necessary for the operation of the device. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to one another by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output device 1008 such as a liquid crystal display (LCD), speaker, or vibrator; a storage device 1003 such as a magnetic tape or hard disk; and a communication device 1009. The communication device 1009 may allow the device to communicate with other devices, wirelessly or by wire, to exchange data. While the device is shown with various components, it should be understood that not all illustrated components are required to be implemented or provided; more or fewer components may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods illustrated by the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1009, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-described functions defined in the method of the embodiments of the present application are performed.
The function execution duty ratio result generation device based on the embedded system provided by the present application adopts the method of the first embodiment, and can therefore solve the technical problem of how to efficiently acquire the function execution duty ratio in real time. Compared with the prior art, the device has the same beneficial effects as the method provided by the foregoing embodiment, and its other technical features are the same as those disclosed in the method of the previous embodiment, which are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The present application provides a computer-readable medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the function execution duty cycle result generation method based on the embedded system in the above-described embodiment.
The computer-readable medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto; it may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of a computer-readable medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, the computer-readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or apparatus. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber-optic cable, RF (Radio Frequency), or any suitable combination of the foregoing.
The above-mentioned computer-readable medium may be contained in the function execution duty ratio result generation device based on the embedded system, or may exist alone without being incorporated in the function execution duty ratio result generation device based on the embedded system.
The computer-readable medium carries one or more programs which, when executed by the function execution duty ratio result generation device based on the embedded system, cause the device to perform the operations of the present application. Computer program code for performing these operations may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the C language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware, and the name of a module does not, in some cases, constitute a limitation of the module itself.
The readable medium provided by the present application is a computer-readable medium storing computer-readable program instructions (i.e., a computer program) for performing the method for generating a function execution duty ratio result based on an embedded system, and can therefore solve the technical problem of how to efficiently acquire the function execution duty ratio in real time. Compared with the prior art, its beneficial effects are the same as those of the method provided by the foregoing embodiment and are not described in detail here.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method for generating a function execution duty ratio result based on an embedded system as described above.
The computer program product provided by the application can solve the technical problem of how to efficiently acquire the function execution duty ratio in real time. Compared with the prior art, the beneficial effects of the computer program product provided by the application are the same as the beneficial effects of the function execution duty ratio result generation method based on the embedded system provided by the embodiment, and are not repeated here.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.