Technical Field
The present application relates to the field of computer software technology, and in particular to an interface call log analysis method, apparatus, device, medium, and product.
Background Art
In recent years, the widespread adoption of microservice architecture has steadily increased the complexity of software systems. Against this background, ensuring efficient collaboration between microservices and effective troubleshooting has become particularly important. However, existing distributed tracing and log analysis tools often suffer from defects such as complex deployment and configuration processes, limited real-time analysis capabilities, and insufficient warning and notification functions. For example, although a system such as Zipkin (a distributed tracing system) provides collection and display of tracing data, it requires deploying a separate tracing server and configuring the corresponding tracing dependencies and code in each service, which can be complicated for users who are new to such systems.
Summary of the Invention
The main purpose of the present application is to provide an interface call log analysis method, apparatus, device, medium, and product, aiming to solve the technical problem of the high complexity of deploying and configuring log analysis tools.
To achieve the above purpose, the present application proposes an interface call log analysis method, applied to a microservice on which an interface call log analysis toolkit is installed. The interface call log analysis method includes:
intercepting, by a filter in the interface call log analysis toolkit, a service request directed at an interface of the microservice, and parsing service information of the intercepted service request, wherein the service information is information extracted from the service request that describes the request itself and its context, including request time, requester identity, request parameters, response time, responder identity, response content, and other related information, the other related information including user authentication information, client IP address, and request source;
converting the service information into log data based on a preset log format;
obtaining demand information input from the outside, configuring a collection strategy for the log data according to the demand information, and collecting target log data from the log data according to the configured collection strategy, wherein the collection strategy includes collection by time interval, by log size, by request path, and by user identity, and the collection strategy is generated from collection strategy parameters entered by the user through a visual interface;
storing the target log data in a preset database, determining a data set composed of multiple pieces of target log data in the database, and performing anomaly detection on the data set, wherein the database includes at least one of a relational database and a NoSQL database; the step of storing the target log data in the preset database includes: if the number of service requests within a preset time period is less than a preset threshold, saving the target log data in table form to the relational database; if the number of service requests within the preset time period is greater than or equal to the preset threshold, saving the target log data to the NoSQL database; the step of performing anomaly detection on the data set includes dividing the data set into different data groups by interface and taking each piece of target log data in each data group as a data object, wherein each interface corresponds to one data group; after the step of taking each piece of target log data in each data group as a data object, the method includes obtaining the requester identity of the data object and performing user behavior analysis based on the requester identity, wherein the user behavior analysis includes determining a user group based on the requester identity that appears most frequently in each data group, analyzing the user group in each data group using a machine learning or data mining algorithm, and identifying the behavior pattern of the user group based on the analysis result; if a data object satisfies a preset abnormal condition, marking the data object as an abnormal log and determining that the anomaly detection result indicates an anomaly; before the step of storing the target log data in the preset database, the method includes: when the number of log entries surges within a short period of time, temporarily storing the log data in a virtual space, and if the size of the log data in the virtual space exceeds a preset storage size, matching the log data in the virtual space against a preset collection strategy so as to store the log data in the virtual space in the preset database according to the preset collection strategy;
if the anomaly detection result indicates an anomaly, outputting preset warning information.
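As a minimal illustrative sketch (not part of the claimed implementation; the function and constant names `route_storage` and `REQUEST_THRESHOLD` are hypothetical), the volume-based storage-routing step described above could be expressed as:

```python
# Hypothetical sketch of the storage-routing step: save target log data to a
# relational database when request volume in the period is low, and to a
# NoSQL store otherwise. The threshold value is an illustrative assumption.

REQUEST_THRESHOLD = 1000  # preset number of service requests per time period


def route_storage(request_count_in_period: int) -> str:
    """Choose the storage backend for the current collection period."""
    if request_count_in_period < REQUEST_THRESHOLD:
        return "relational"  # low volume: store as table rows
    return "nosql"           # high volume: store as documents
```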
In one embodiment, the step of collecting target log data from the log data according to the configured collection strategy includes:
if the collection strategy is collection by time interval, determining a scheduled task corresponding to the collection strategy and collecting target log data from the log data according to the scheduled task, wherein the scheduled task includes collecting log data at a preset time interval;
if the collection strategy is collection by log size, detecting whether the stored data volume of the log data is greater than a preset storage volume threshold, and if so, collecting the log data as target log data;
if the collection strategy is collection by log level, collecting log data that satisfies a preset log level as target log data;
if the collection strategy is collection by request path, collecting log data that satisfies a preset request path as target log data;
if the collection strategy is collection by user identity, collecting log data that satisfies a preset user identity as target log data.
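The strategy-matching branches above can be sketched as a small dispatch function. This is an illustrative assumption, not the claimed implementation; the field names (`level`, `path`, `user_id`) and the strategy dictionary layout are hypothetical:

```python
import fnmatch


def matches(entry: dict, strategy: dict) -> bool:
    """Return True if a log entry should be collected under the strategy."""
    kind = strategy["type"]
    if kind == "log_level":
        return entry["level"] in strategy["levels"]
    if kind == "request_path":
        # shell-style wildcard match on the request path
        return fnmatch.fnmatch(entry["path"], strategy["path_pattern"])
    if kind == "user_identity":
        return entry["user_id"] in strategy["user_ids"]
    return False


logs = [
    {"level": "ERROR", "path": "/api/orders", "user_id": "u1"},
    {"level": "INFO",  "path": "/admin/jobs", "user_id": "u2"},
]
# collect only ERROR-level entries as target log data
target = [e for e in logs if matches(e, {"type": "log_level", "levels": {"ERROR"}})]
```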
In one embodiment, after the step of taking each piece of target log data in each data group as a data object, the method includes:
for each data group, if the number of data objects within a preset time range is greater than a preset service request count threshold, determining that the data objects within the preset time range satisfy a preset abnormal condition; and/or,
determining the data objects in the data group that correspond to service requests in a failed state, and if the number of such data objects is greater than a preset failure count threshold, determining that those data objects satisfy a preset abnormal condition; and/or,
determining the return parameter corresponding to each data object in the data group, and if a target return parameter containing a preset specific character exists among the return parameters, determining that the data object corresponding to the target return parameter satisfies a preset abnormal condition, wherein the specific character is a pre-configured character, and a target return parameter containing the specific character satisfies the preset abnormal condition.
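The three "and/or" abnormal conditions above can be sketched for one data group (one interface) as follows. This is a hypothetical illustration; the thresholds, the `status`/`response` field names, and the function name `detect_anomalies` are assumptions, not part of the specification:

```python
def detect_anomalies(group, max_requests=100, max_failures=5, marker="Exception"):
    """Apply the three preset abnormal conditions to one data group.

    Returns the data objects flagged as abnormal logs (an object may be
    flagged by more than one condition).
    """
    abnormal = []
    # Condition 1: too many requests within the preset time range
    if len(group) > max_requests:
        abnormal.extend(group)
    # Condition 2: too many service requests in a failed state
    failed = [o for o in group if o.get("status") == "failed"]
    if len(failed) > max_failures:
        abnormal.extend(failed)
    # Condition 3: return parameter contains the preset specific character(s)
    abnormal.extend(o for o in group if marker in o.get("response", ""))
    return abnormal
```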
In addition, to achieve the above purpose, the present application further proposes an interface call log analysis apparatus, arranged in a microservice on which an interface call log analysis toolkit is installed. The interface call log analysis apparatus includes:
a log acquisition module, which intercepts, by a filter in the interface call log analysis toolkit, a service request directed at an interface of the microservice, and parses service information of the intercepted service request, wherein the service information is information extracted from the service request that describes the request itself and its context, including request time, requester identity, request parameters, response time, responder identity, response content, and other related information, the other related information including user authentication information, client IP address, and request source;
a log format module, which converts the service information into log data based on a preset log format;
a log collection module, which obtains demand information input from the outside, configures a collection strategy for the log data according to the demand information, and collects target log data from the log data according to the configured collection strategy, wherein the collection strategy includes collection by time interval, by log size, by request path, and by user identity, and the collection strategy is generated from collection strategy parameters entered by the user through a visual interface; before the step of storing the target log data in the preset database, when the number of log entries surges within a short period of time, the log data is temporarily stored in a virtual space, and if the size of the log data in the virtual space exceeds a preset storage size, the log data in the virtual space is matched against a preset collection strategy so as to be stored in the preset database according to the preset collection strategy;
a log analysis module, which stores the target log data in a preset database, determines a data set composed of multiple pieces of target log data in the database, and performs anomaly detection on the data set, wherein the database includes at least one of a relational database and a NoSQL database; the step of storing the target log data in the preset database includes: if the number of service requests within a preset time period is less than a preset threshold, saving the target log data in table form to the relational database; if the number of service requests within the preset time period is greater than or equal to the preset threshold, saving the target log data to the NoSQL database; the step of performing anomaly detection on the data set includes dividing the data set into different data groups by interface and taking each piece of target log data in each data group as a data object, wherein each interface corresponds to one data group; after the step of taking each piece of target log data in each data group as a data object, the method includes obtaining the requester identity of the data object and performing user behavior analysis based on the requester identity, wherein the user behavior analysis includes determining a user group based on the requester identity that appears most frequently in each data group, analyzing the user group in each data group using a machine learning or data mining algorithm, and identifying the behavior pattern of the user group based on the analysis result; if a data object satisfies a preset abnormal condition, the data object is marked as an abnormal log and the anomaly detection result is determined to indicate an anomaly; before the target log data is stored in the preset database, when the number of log entries surges within a short period of time, the log data is temporarily stored in a virtual space, and if the size of the log data in the virtual space exceeds a preset storage size, the log data in the virtual space is matched against a preset collection strategy so as to be stored in the preset database according to the preset collection strategy;
a log warning module, which outputs preset warning information if the anomaly detection result indicates an anomaly.
In addition, to achieve the above purpose, the present application further proposes an interface call log analysis device, the device including a memory, a processor, and a computer program stored in the memory and executable on the processor, the computer program being configured to implement the steps of the interface call log analysis method described above.
In addition, to achieve the above purpose, the present application further proposes a storage medium, the storage medium being a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the interface call log analysis method described above are implemented.
In addition, to achieve the above purpose, the present application further provides a computer program product including a computer program, and when the computer program is executed by a processor, the steps of the interface call log analysis method described above are implemented.
One or more technical solutions proposed in the present application have at least the following technical effects:
By integrating the interface call log analysis method into an interface call log analysis toolkit, the present application allows each microservice to automatically intercept and deeply analyze interface service requests simply by installing the toolkit, with no additional configuration or cumbersome operations. Specifically, a microservice with the toolkit installed can intercept, through a filter, all service requests directed at its interfaces and parse the service information in those requests, achieving comprehensive capture and detailed parsing of the request content. The microservice then converts the service information into log data according to a preset log format. Demand information input from the outside is obtained, a collection strategy is configured for the log data according to the demand information, and target log data is collected from the log data according to the configured strategy, wherein the collection strategy includes at least one of collection by time interval, by log size, by log level, by request path, and by user identity. A collection strategy configured from demand information meets diverse log management needs, allowing the system to efficiently filter out valuable log data for specific business scenarios or requirements. Storing the target log data in a preset database, including at least one of a relational database and a NoSQL database, adapts to the needs of different business scenarios. Furthermore, the system supports retrieving and aggregating target log data from the database to form a comprehensive data set. By analyzing these data sets in depth, the system can identify potential performance bottlenecks and abnormal behavior, and send preset warning information to users accordingly. This process not only helps developers quickly locate problems and optimize service performance, but also greatly improves the security and stability of the system. In summary, through an integrated interface call log analysis toolkit, the present application greatly reduces the complexity of deploying and configuring log analysis tools and realizes automated collection, formatting, storage, and analysis of log data.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the present application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flow chart of Embodiment 1 of the interface call log analysis method of the present application;
FIG. 2 is a flow chart of Embodiment 2 of the interface call log analysis method of the present application;
FIG. 3 is a schematic diagram of an application scenario of the interface call log analysis method of the present application;
FIG. 4 is a schematic diagram of the module structure of the interface call log analysis apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the interface call log analysis method according to an embodiment of the present application.
The realization of the purpose, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are only used to explain the technical solutions of the present application and are not intended to limit the present application.
For a better understanding of the technical solutions of the present application, a detailed description is given below in conjunction with the accompanying drawings and specific implementations.
Current log analysis tools have several limitations. They usually require deploying a separate tracing server and configuring the corresponding tracing dependencies and code in each service, so the deployment and configuration process can be relatively complicated for users who are new to such tools. In addition, they mainly provide collection and display of tracing data but are relatively weak in real-time analysis and monitoring: real-time tracing data can be viewed, but advanced real-time analysis functions are lacking. Warning and notification functions are also limited, relying mainly on user-defined rules and scripts; lacking a built-in, out-of-the-box warning and notification system, they may require users to perform additional development and configuration. Meanwhile, some log analysis tools use only a single database for log storage, which reduces their adaptability and prevents them from meeting the needs of different business scenarios.
The interface call log analysis method provided in this embodiment simplifies deployment: each microservice only needs to install the interface call log analysis toolkit that integrates the method, with no other redundant operations and no need to deploy a separate tracing service as with Zipkin (an open-source distributed real-time data tracing system). The method offers high configuration flexibility, supporting configuration files to set log collection parameters such as whether log collection is enabled, the log format, and the log delivery method. This embodiment supports a variety of relational and NoSQL (a general term for non-relational) databases, adapting to the needs of different business scenarios so that users can choose a suitable database for log storage according to their actual situation. This embodiment provides rich collection strategies, including collection by time interval, by log size, and by log level; users can, according to business needs, collect logs once a minute or collect when the log size exceeds a specific threshold, improving the efficiency of log collection. In addition, this embodiment provides a variety of warning strategies, including real-time monitoring and warning of online problems: for example, logs containing specific keywords or of a specific level can be set as warning targets, and warning notifications are sent when the conditions are met so that online problems can be handled in time. Certain user-specific behaviors can also be collected, and developers can configure them according to their own business dimensions; for example, after a certain interface has been requested a certain number of times by a certain user within a certain period, a notification is sent to the enterprise's intelligence platform to facilitate subsequent business analysis. This embodiment mainly targets link tracing of interfaces, enabling intelligent analysis based on interface logs; the log warning and collection strategy architecture was designed with a wide range of business scenarios in mind and essentially covers the scenarios needed in development; and the intelligent analysis module is particularly practical for consumer-facing systems.
It should be noted that the executing subject of this embodiment may be a computing service device with data processing, network communication, and program execution capabilities, such as a tablet computer, a personal computer, or a mobile phone, or an electronic device or terminal system capable of realizing the above functions. This embodiment and the following embodiments are described below using a microservice system with the interface call log analysis toolkit installed as an example.
On this basis, this embodiment provides an interface call log analysis method. Referring to FIG. 1, FIG. 1 is a flow chart of Embodiment 1 of the interface call log analysis method of the present application.
This embodiment is applied to a microservice on which an interface call log analysis toolkit is installed.
The interface call log analysis toolkit is a software package that integrates the interface call log analysis method. Deployed in a microservice environment, it automatically intercepts, parses, collects, stores, and analyzes interface call log data; a microservice only needs to install the toolkit to record and analyze its interface call chain logs, with no other operations required.
The interface call log analysis method includes steps S10 to S50:
Step S10: intercepting, by a filter in the interface call log analysis toolkit, a service request directed at an interface of the microservice, and parsing service information of the intercepted service request;
It should be noted that an interface is the bridge for interaction between different components of a software system and defines the manner and rules of communication between them. In a microservice architecture, a service request is a request initiated by one microservice to another microservice or to an external system, and usually contains information such as the request method, path, and parameters. Service information is information extracted from the service request that describes the request itself and its context, including request information and response information: the request information may include the request time, requester identity, and request parameters, and the response information may include the response time, responder identity, and response content. The filter in the interface call log analysis toolkit can intercept requests and responses before a service request reaches the target resource or before the response is returned from it, and process them accordingly.
The interface call log analysis toolkit has a built-in service request filter that automatically intercepts all service requests and responses, without manually configuring tracing code or deploying a separate tracing server. When a microservice system with the toolkit installed receives an external or internal service request, the system first intercepts the request and extracts key service information from it, such as the request time, request method, request path, and request parameters.
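The interception step can be sketched as a wrapper around an interface handler. This is a minimal illustrative sketch only; the names `log_filter` and the request dictionary layout are hypothetical assumptions, not the toolkit's actual API:

```python
import time


def log_filter(handler, sink):
    """Wrap an interface handler so every service request is intercepted and
    its service information is captured before and after the real call."""
    def wrapped(request):
        # extract key service information from the intercepted request
        info = {
            "request_time": time.time(),
            "path": request["path"],
            "params": request.get("params", {}),
            "client_ip": request.get("client_ip"),
        }
        response = handler(request)      # forward to the target resource
        info["response_time"] = time.time()
        info["response"] = response
        sink.append(info)                # hand off for log formatting (S20)
        return response
    return wrapped
```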
In addition, depending on specific business needs and the preset configuration, other related information may also be extracted, such as the user authentication information in the request headers, the client IP address, and the request source, to enable more comprehensive log analysis and security auditing.
Step S20: converting the service information into log data based on a preset log format;
It should be noted that the format of the service information is not necessarily uniform, so the raw service information needs to be organized into a fixed format to facilitate subsequent storage, querying, and analysis.
The parsed service information is organized and converted according to a preset format (such as JSON, a widely used lightweight data interchange format) into log entries that are easy to store and query, and the converted log entries serve as the log data.
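As a hedged sketch of this conversion step (the field names and the function name `to_log_entry` are illustrative assumptions), normalizing service information into one JSON log line could look like:

```python
import json


def to_log_entry(service_info: dict) -> str:
    """Normalize parsed service information into a single JSON log line."""
    entry = {
        "request_time": service_info.get("request_time"),
        "requester": service_info.get("requester"),
        "params": service_info.get("params", {}),
        "response_time": service_info.get("response_time"),
        "response": service_info.get("response"),
    }
    # sort_keys gives every entry the same stable field order
    return json.dumps(entry, ensure_ascii=False, sort_keys=True)
```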
Step S30: obtaining demand information input from the outside, configuring a collection strategy for the log data according to the demand information, and collecting target log data from the log data according to the configured collection strategy, wherein the collection strategy includes at least one of collection by time interval, by log size, by log level, by request path, and by user identity;
It should be noted that a collection strategy is a rule for collecting log data based on specific conditions (such as time interval, log size, log level, request path, or user identity).
从用户或系统配置中获取日志收集的需求信息,根据需求信息配置相应的收集策略,收集策略的具体应用可以包括按时间间隔收集,如每日、每小时等固定时间间隔收集日志;按日志大小收集,当日志文件达到一定大小时(如100MB)进行收集;按日志级别收集,仅收集特定级别(如ERROR、WARN)的日志;按请求路径收集,仅收集特定API路径的日志;按用户身份收集,根据请求者的用户身份(如用户ID、角色)筛选日志。根据配置的收集策略,从日志数据中筛选出满足收集策略的目标日志数据进行收集。Obtain the log collection requirement information from the user or system configuration, and configure the corresponding collection strategy based on the requirement information. The specific application of the collection strategy may include collecting at time intervals, such as daily, hourly, and other fixed time intervals; collecting by log size, collecting when the log file reaches a certain size (such as 100MB); collecting by log level, only collecting logs of a specific level (such as ERROR, WARN); collecting by request path, only collecting logs of a specific API path; collecting by user identity, filtering logs according to the user identity of the requester (such as user ID, role). According to the configured collection strategy, filter the target log data that meets the collection strategy from the log data for collection.
进一步地,用户可以通过可视化界面或者配置文件来输入收集策略,比如用户登录到系统后,通过可视化界面上的选项和输入框,填写或选择所需的收集策略参数。系统接收并解析这些输入,生成相应的收集策略;或者用户编辑一个配置文件,将所需的收集策略参数写入文件中。然后,用户将配置文件上传到系统或放置在系统指定的目录下。系统读取配置文件,解析其中的内容,生成收集策略。从而系统根据配置的收集策略,从日志数据中筛选出满足收集策略的目标日志数据进行收集。Furthermore, users can input collection strategies through a visual interface or configuration file. For example, after logging into the system, users can fill in or select the required collection strategy parameters through the options and input boxes on the visual interface. The system receives and parses these inputs and generates the corresponding collection strategy; or users edit a configuration file and write the required collection strategy parameters into the file. Then, the user uploads the configuration file to the system or places it in a directory specified by the system. The system reads the configuration file, parses the content therein, and generates a collection strategy. Thus, the system selects the target log data that meets the collection strategy from the log data according to the configured collection strategy for collection.
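The strategy-configuration step might look like the following Python sketch. The shape of the parsed demand information (the `levels`, `paths`, and `user_ids` keys) is an assumption for illustration, not a documented configuration format:

```python
def build_strategy(config):
    """Turn user-supplied demand info (e.g. parsed from a config file
    or a form on a visual interface) into a predicate over log entries."""
    checks = []
    if "levels" in config:
        allowed_levels = set(config["levels"])
        checks.append(lambda log: log.get("level") in allowed_levels)
    if "paths" in config:
        allowed_paths = set(config["paths"])
        checks.append(lambda log: log.get("path") in allowed_paths)
    if "user_ids" in config:
        allowed_users = set(config["user_ids"])
        checks.append(lambda log: log.get("userId") in allowed_users)
    # A log is collected only when it satisfies every configured check.
    return lambda log: all(check(log) for check in checks)

strategy = build_strategy({"levels": ["ERROR", "WARN"], "paths": ["/orders"]})
logs = [
    {"level": "ERROR", "path": "/orders"},
    {"level": "INFO", "path": "/orders"},
    {"level": "ERROR", "path": "/users"},
]
collected = [log for log in logs if strategy(log)]
```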
步骤S40,将目标日志数据存储至预设的数据库,并确定数据库中多个目标日志数据组成的数据集,对数据集进行异常检测,其中,数据库包括关系型数据库和NoSQL数据库中的至少一项;Step S40, storing the target log data in a preset database, determining a data set consisting of a plurality of target log data in the database, and performing anomaly detection on the data set, wherein the database includes at least one of a relational database and a NoSQL database;
需要说明的是,数据库是用于存储和管理数据的软件系统,包括关系型数据库(如MySQL)和NoSQL数据库。关系型数据库是一种使用表格来存储和组织数据的数据库系统,数据以行和列的形式存储,支持复杂的查询和事务处理。NoSQL(Not Only SQL,不仅仅是SQL)数据库是一类非关系型数据库的统称,它们不遵循传统关系型数据库的表格结构、数据关系(如外键约束)、以及SQL查询语言等规则。NoSQL数据库的设计初衷是为了解决大规模数据集合的多样化和高并发访问问题,这些特点使得它们在处理大数据、高可用性、分布式系统等方面表现出色。异常检测是指对存储的数据集进行分析,以识别出不符合正常模式或预期的数据。It should be noted that a database is a software system used to store and manage data, including relational databases (such as MySQL) and NoSQL databases. A relational database is a database system that uses tables to store and organize data. Data is stored in the form of rows and columns and supports complex queries and transaction processing. NoSQL (Not Only SQL) databases are a general term for a class of non-relational databases that do not follow the table structure, data relationships (such as foreign key constraints), and SQL query language rules of traditional relational databases. The original intention of the design of NoSQL databases was to solve the problems of diversity and high concurrent access to large-scale data sets. These characteristics make them perform well in processing big data, high availability, and distributed systems. Anomaly detection refers to the analysis of stored data sets to identify data that does not conform to normal patterns or expectations.
将收集到的目标日志数据存储至预设的数据库中,在数据库中,将多个目标日志数据组合成一个数据集。接着对数据集进行分析,如果数据集规模较小,且分析任务相对简单,可以直接在数据库中利用数据库的查询语言和内置的分析函数来分析。如果数据集规模庞大,或者需要进行复杂的数据分析和可视化,则可以将数据提取出来,在外部环境中(如使用Python、R等编程语言及其数据分析库)进行分析。分析是否存在异常数据,如响应时间异常长、请求失败率高等。The collected target log data is stored in the preset database. In the database, multiple target log data are combined into a data set. Then the data set is analyzed. If the data set is small and the analysis task is relatively simple, it can be analyzed directly in the database using the database query language and built-in analysis functions. If the data set is large or complex data analysis and visualization are required, the data can be extracted and analyzed in an external environment (such as using programming languages such as Python and R and their data analysis libraries). Analyze whether there is abnormal data, such as abnormally long response time, high request failure rate, etc.
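A simplified anomaly check over a retrieved data set could look like this. The record fields and the thresholds (200ms average response time, 5% failure rate) are illustrative assumptions, not values given by the source:

```python
def detect_anomalies(dataset, max_avg_ms=200, max_fail_rate=0.05):
    """Check a data set of log records for abnormally long response
    times and a high request failure rate."""
    times = [r["response_ms"] for r in dataset]
    avg = sum(times) / len(times)
    failures = sum(1 for r in dataset if r["status"] >= 500)
    fail_rate = failures / len(dataset)
    anomalies = []
    if avg > max_avg_ms:
        anomalies.append(f"average response time {avg:.0f}ms exceeds {max_avg_ms}ms")
    if fail_rate > max_fail_rate:
        anomalies.append(f"failure rate {fail_rate:.0%} exceeds {max_fail_rate:.0%}")
    return anomalies

dataset = [{"response_ms": 300, "status": 200},
           {"response_ms": 320, "status": 500}]
result = detect_anomalies(dataset)
```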
步骤S50,若异常检测结果为存在异常,则输出预设的预警信息。Step S50: If the abnormality detection result is that an abnormality exists, a preset warning message is output.
需要说明的是,预警信息是指当系统检测到异常情况时,向相关人员发送的通知或警报,以便及时采取措施解决问题。It should be noted that early warning information refers to the notification or alarm sent to relevant personnel when the system detects an abnormal situation so that timely measures can be taken to solve the problem.
根据异常检测的结果,判断是否存在异常,比如:性能异常(响应时间异常长或者处理速度下降),错误率异常(请求失败率高或者系统内部错误增多),资源使用异常(内存使用率过高或者磁盘空间不足)。若存在异常,则根据预设的预警模板生成预警信息,预警模板可以包含以下要素:异常类型,发生时间,影响范围,严重程度,建议措施。接着通过邮件、短信、即时通讯等方式将预警信息发送给相关人员。若不存在异常,可以继续进行监控,并根据配置的策略(如定期报告、日志归档等)进行后续处理。在某些情况下,系统可能会记录当前状态作为正常基线,用于未来的异常检测对比。此外,对于长时间无异常的稳定状态,系统可能会触发一些优化或清理任务,如压缩旧日志、释放未使用的资源等,以保持系统的最佳性能。According to the results of anomaly detection, determine whether there are anomalies, such as: performance anomalies (abnormally long response time or decreased processing speed), error rate anomalies (high request failure rate or increased internal system errors), resource usage anomalies (too high memory usage or insufficient disk space). If there are anomalies, an early warning message is generated according to the preset early warning template. The early warning template can contain the following elements: anomaly type, occurrence time, scope of impact, severity, and recommended measures. Then send the early warning information to relevant personnel through email, SMS, instant messaging, etc. If there are no anomalies, you can continue to monitor and perform subsequent processing according to the configured policies (such as regular reporting, log archiving, etc.). In some cases, the system may record the current state as a normal baseline for future anomaly detection comparison. In addition, for a long period of stable state without anomalies, the system may trigger some optimization or cleanup tasks, such as compressing old logs, releasing unused resources, etc., to maintain the best performance of the system.
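Generating a warning message from a preset template can be sketched as follows; the template wording and the field names are hypothetical, and actual delivery (email, SMS, instant messaging) would be handled separately:

```python
TEMPLATE = ("Attention! {kind} anomaly on {scope} at {time}, "
            "severity: {severity}. Suggested action: {action}")

def build_alert(kind, scope, time, severity, action):
    # Fill the preset early-warning template with the elements named
    # in the text: anomaly type, time, scope, severity, and measures.
    return TEMPLATE.format(kind=kind, scope=scope, time=time,
                           severity=severity, action=action)

msg = build_alert("performance", "/orders", "2023-04-01 12:00",
                  "high", "check and optimize the order service")
```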
示例性地,假设有一个电商微服务系统,其中包含一个处理订单的微服务,为了监控订单处理接口的性能和稳定性,我们部署了接口调用日志分析工具包。当订单处理接口接收到一个创建订单的请求时,接口调用日志分析工具包首先拦截这个请求,从请求中提取出关键信息,如请求时间(2023-04-01 12:00:00)、请求方法(POST)、请求路径(/orders)、请求参数({userId: 123, productId: 456, quantity: 2})。将这些信息格式化为JSON格式的日志条目:{"time": "2023-04-01 12:00:00", "method": "POST", "path": "/orders", "params": {"userId": 123, "productId": 456, "quantity":2}}。根据预设的收集策略(如每天收集一次),将格式化后的日志数据保存到数据库中。从数据库中检索出最近一周的订单处理接口日志数据,组成数据集。对数据集进行统计分析,发现某个时间段内接口响应时间明显增长,平均响应时间从100ms增加到300ms。根据分析结果,向系统管理员发送预警信息:“注意!订单处理接口在XX时间段内响应时间显著增加,可能影响用户体验,请检查并优化。”For example, suppose there is an e-commerce microservice system that includes a microservice for processing orders. To monitor the performance and stability of the order processing interface, we deploy the interface call log analysis toolkit. When the order processing interface receives a request to create an order, the toolkit first intercepts the request and extracts key information from it, such as the request time (2023-04-01 12:00:00), request method (POST), request path (/orders), and request parameters ({userId: 123, productId: 456, quantity: 2}). This information is formatted into a JSON log entry: {"time": "2023-04-01 12:00:00", "method": "POST", "path": "/orders", "params": {"userId": 123, "productId": 456, "quantity": 2}}. According to the preset collection strategy (such as collecting once a day), the formatted log data is saved to the database. The order processing interface log data for the past week is retrieved from the database to form a data set. Statistical analysis of the data set finds that the interface response time increased significantly during a certain period, with the average response time rising from 100ms to 300ms. Based on the analysis result, an early warning message is sent to the system administrator: "Attention! The response time of the order processing interface increased significantly during the XX period, which may affect the user experience. Please check and optimize it."
本实施例通过接口调用日志分析工具包集成了接口调用日志分析方法,使得各微服务只需简单安装该工具包,无需额外配置或繁琐操作,即可自动实现对接口服务请求的拦截与深度分析。具体而言,安装了该工具包的微服务能够通过过滤器拦截所有指向微服务的接口的服务请求,并解析这些服务请求中的服务信息,实现了对服务请求内容的全面捕获与细致解析。紧接着,微服务会根据预设的日志格式将服务信息转换为日志数据。获取外界输入的需求信息,根据需求信息对日志数据进行收集策略配置,根据配置的收集策略收集日志数据中的目标日志数据,其中,收集策略包括按时间间隔收集,按日志大小收集,按日志级别收集,按请求路径收集和按用户身份收集中的至少一项。根据需求信息配置的收集策略满足了多样化的日志管理需求,使得系统能够针对特定业务场景或需求,高效地筛选出有价值的日志数据。将目标日志数据存储至预设的数据库,包括关系型数据库和NoSQL数据库中的至少一项,能够适应不同业务场景的需求。更进一步,系统支持从数据库中检索并汇总目标日志数据,形成全面的数据集。通过对这些数据集进行深入分析,系统能够识别出潜在的性能瓶颈、异常行为,并据此向用户发送预设的预警信息。这一过程不仅帮助开发人员快速定位问题、优化服务性能,还极大地提升了系统的安全性和稳定性。综上,本实施例通过集成化的接口调用日志分析工具包,极大地减少了日志分析工具部署和配置复杂性,实现了日志数据的自动化收集、格式化、存储与分析。This embodiment integrates the interface call log analysis method through the interface call log analysis toolkit, so that each microservice only needs to install the toolkit, without additional configuration or cumbersome operations, to automatically intercept and deeply analyze interface service requests. Specifically, a microservice with the toolkit installed can intercept, through the filter, all service requests directed at the microservice's interfaces, and parse the service information in these service requests, achieving comprehensive capture and detailed parsing of the service request content. Next, the microservice converts the service information into log data according to the preset log format. Demand information input from the outside is obtained, a collection strategy is configured for the log data according to the demand information, and target log data in the log data is collected according to the configured collection strategy, wherein the collection strategy includes at least one of collecting by time interval, collecting by log size, collecting by log level, collecting by request path, and collecting by user identity. The collection strategy configured according to the demand information meets diverse log management requirements, enabling the system to efficiently filter out valuable log data for specific business scenarios or needs. The target log data is stored in a preset database, including at least one of a relational database and a NoSQL database, which can adapt to the needs of different business scenarios. Furthermore, the system supports retrieving and aggregating target log data from the database to form a comprehensive data set. By conducting in-depth analysis of these data sets, the system can identify potential performance bottlenecks and abnormal behaviors, and send preset warning information to users accordingly. This process not only helps developers quickly locate problems and optimize service performance, but also greatly improves the security and stability of the system. In summary, this embodiment greatly reduces the complexity of deploying and configuring log analysis tools through an integrated interface call log analysis toolkit, and realizes the automated collection, formatting, storage and analysis of log data.
在一种可行的实施方式中,步骤S40中将目标日志数据存储至预设的数据库的步骤可以包括步骤T10~T20:In a feasible implementation manner, the step of storing the target log data in a preset database in step S40 may include steps T10 to T20:
步骤T10,若在预设时间周期内服务请求的数量小于预设阈值,则将目标日志数据保存至关系型数据库;Step T10, if the number of service requests within a preset time period is less than a preset threshold, the target log data is saved to a relational database;
需要说明的是,在本实施例中,可以支持各种流行的关系型数据库和NoSQL数据库保存日志。预设阈值是一个预先定义好的数值界限,用于判断某个条件是否成立。在本实施例中,预设阈值用于决定服务请求的数量是否达到了一定的水平,从而决定日志数据的存储位置。It should be noted that, in this embodiment, various popular relational databases and NoSQL databases can be supported for saving logs. The preset threshold is a predefined numerical limit used to determine whether a certain condition is met. In this embodiment, the preset threshold is used to determine whether the number of service requests has reached a certain level, thereby determining the storage location of the log data.
根据微服务以往单位时间接收的服务请求量,人为判断请求量是否足够小。若足够小,比如一分钟只接收了10个请求,那么后续的日志保存只需要使用关系型数据库即可,成本更低。Whether the request volume is small enough is judged manually from the number of service requests the microservice has historically received per unit time. If it is small enough, for example only 10 requests are received in one minute, then a relational database suffices for subsequent log storage, at a lower cost.
步骤T20,若在预设时间周期内服务请求的数量大于或等于预设阈值,则将目标日志数据保存至NoSQL数据库。Step T20: If the number of service requests within a preset time period is greater than or equal to a preset threshold, the target log data is saved to a NoSQL database.
根据微服务以往单位时间接收的服务请求量,人为判断请求量是否足够大。若足够大,比如一分钟接收了10000个请求,那么后续的日志保存需要使用NoSQL数据库,以应对高并发和海量数据的存储需求。Whether the request volume is large enough is judged manually from the number of service requests the microservice has historically received per unit time. If it is large enough, for example 10,000 requests are received in one minute, then a NoSQL database is needed for subsequent log storage to cope with high concurrency and massive data storage requirements.
进一步地,服务请求的数量是否小于预设阈值这一判断可以通过编写代码逻辑来实现,代码会监听或统计服务请求的数量,并将其与预设的阈值进行比较,从而做出决策。首先,系统需要统计在一段时间内接收到的服务请求的数量。这个数量可以通过计数器、消息队列的积压量或其他机制来获取。将统计得到的服务请求数量与预设的阈值进行比较,以确定日志数据的存储位置。如果服务请求数量小于预设阈值:系统认为当前的请求量较小,可能不需要NoSQL数据库提供的高并发和可扩展性优势。因此,将日志数据保存至关系型数据库,以便利用关系型数据库在事务处理、复杂查询等方面的优势。如果服务请求数量大于或等于预设阈值:系统认为当前的请求量较大,可能需要NoSQL数据库来应对高并发和海量数据的存储需求。因此,将日志数据保存至NoSQL数据库,以便更好地处理大规模数据和高并发访问。Furthermore, the judgment of whether the number of service requests is less than the preset threshold can be implemented in code: the code monitors or counts the number of service requests and compares it with the preset threshold to make a decision. First, the system counts the number of service requests received within a period of time; this number can be obtained through a counter, the backlog of a message queue, or other mechanisms. The counted number of service requests is then compared with the preset threshold to determine the storage location of the log data. If the number of service requests is less than the preset threshold, the system considers the current request volume small and likely not in need of the high concurrency and scalability advantages of a NoSQL database; the log data is therefore saved to a relational database to take advantage of its strengths in transaction processing and complex queries. If the number of service requests is greater than or equal to the preset threshold, the system considers the current request volume large and likely in need of a NoSQL database to cope with high concurrency and massive data; the log data is therefore saved to a NoSQL database to better handle large-scale data and highly concurrent access.
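Steps T10–T20 reduce to a simple threshold comparison. In this Python sketch the threshold value and the returned store labels are placeholders; a real implementation would call the corresponding database client instead of returning a string:

```python
REQUEST_THRESHOLD = 1000  # requests per period; an illustrative value

def choose_store(request_count, threshold=REQUEST_THRESHOLD):
    """Route log data per steps T10-T20: a relational store for low
    traffic, a NoSQL store for high traffic."""
    if request_count < threshold:
        return "relational"   # step T10: below threshold
    return "nosql"            # step T20: at or above threshold
```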
本实施例通过选择数据存储方案,显著提升了日志处理的灵活性和效率。根据服务请求量,决策日志数据的存储位置。当请求量较小时,利用关系型数据库的低成本和高一致性优势,有效降低了存储成本;而当请求量较大时,则使用NoSQL数据库,以应对高并发和海量数据的挑战,确保了日志数据的完整性和系统的稳定性。这种基于请求量自主选择的数据存储策略,不仅提高了资源利用效率,还增强了系统的可扩展性和响应能力,为微服务架构下的日志管理带来了显著的有益效果。This embodiment significantly improves the flexibility and efficiency of log processing by selecting a data storage solution. The storage location of log data is determined based on the service request volume. When the request volume is small, the low cost and high consistency of the relational database are utilized to effectively reduce the storage cost; when the request volume is large, a NoSQL database is used to cope with the challenges of high concurrency and massive data, ensuring the integrity of the log data and the stability of the system. This data storage strategy based on autonomous selection of request volume not only improves resource utilization efficiency, but also enhances the scalability and responsiveness of the system, bringing significant beneficial effects to log management under the microservice architecture.
基于本申请实施例一,在本申请实施例二中,与上述实施例一相同或相似的内容,可以参考上文介绍,后续不再赘述。在此基础上,参照图2,图2为本申请接口调用日志分析方法实施例二的流程示意图,步骤S30中根据配置的收集策略收集日志数据中的目标日志数据的步骤还包括步骤A10~A50:Based on the first embodiment of the present application, in the second embodiment of the present application, the same or similar contents as the first embodiment can refer to the above description, and will not be repeated later. On this basis, refer to Figure 2, which is a flow chart of the second embodiment of the interface call log analysis method of the present application. The step of collecting the target log data in the log data according to the configured collection strategy in step S30 also includes steps A10~A50:
步骤A10,若收集策略为按时间间隔收集,则确定与收集策略对应的定时任务,根据定时任务收集日志数据中的目标日志数据,其中,定时任务包括基于预设时间间隔进行日志数据收集;Step A10, if the collection strategy is to collect at a time interval, determine a scheduled task corresponding to the collection strategy, and collect target log data in the log data according to the scheduled task, wherein the scheduled task includes collecting log data based on a preset time interval;
需要说明的是,按时间间隔收集是一种收集策略,指根据预设的时间间隔(如每小时、每天等)来收集日志数据。定时任务是在系统或程序中预设的、按照特定时间或条件自动执行的任务。在本实施例中,定时任务用于基于时间间隔收集日志数据。目标日志数据是根据收集策略被选中并准备进行收集的具体日志数据。预设时间间隔是在按时间间隔收集策略中,提前设定的用于确定收集日志数据频率的时间段。It should be noted that collection by time interval is a collection strategy, which refers to collecting log data according to preset time intervals (such as every hour, every day, etc.). Scheduled tasks are tasks preset in the system or program and automatically executed according to specific times or conditions. In this embodiment, scheduled tasks are used to collect log data based on time intervals. The target log data is the specific log data that is selected and prepared for collection according to the collection strategy. The preset time interval is a time period set in advance in the collection strategy by time interval to determine the frequency of collecting log data.
确定收集策略为按时间间隔收集,查找与该策略对应的定时任务,根据定时任务中的预设时间间隔,自动执行日志数据的收集操作。Determine the collection strategy as collection at time intervals, find the scheduled task corresponding to the strategy, and automatically perform the log data collection operation according to the preset time interval in the scheduled task.
步骤A20,若收集策略为按日志大小收集,则检测日志数据的存储数据量是否大于预设存储数据量阈值,若大于预设存储数据量阈值,则将日志数据作为目标日志数据进行收集;Step A20, if the collection strategy is to collect by log size, then detect whether the storage data volume of the log data is greater than a preset storage data volume threshold, and if it is greater than the preset storage data volume threshold, collect the log data as target log data;
需要说明的是,按日志大小收集是一种收集策略,指当日志数据的存储数据量达到或超过某个预设的阈值时,触发日志数据的收集。预设存储数据量阈值是在按日志大小收集策略中,用于判断是否需要收集日志数据的存储数据量界限。It should be noted that collection by log size is a collection strategy, which means that when the storage volume of log data reaches or exceeds a preset threshold, the collection of log data is triggered. The preset storage volume threshold is the storage volume limit used to determine whether to collect log data in the collection strategy by log size.
确定收集策略为按日志大小收集,定期检测日志数据的存储数据量,如果存储数据量大于预设的存储数据量阈值,则触发日志数据的收集操作。Determine the collection strategy as collection by log size, regularly detect the storage data volume of log data, and trigger the log data collection operation if the storage data volume is greater than the preset storage data volume threshold.
步骤A30,若收集策略为按日志级别收集,则将满足预设日志级别的日志数据作为目标日志数据进行收集;Step A30: if the collection strategy is to collect by log level, log data that meets the preset log level is collected as target log data;
需要说明的是,按日志级别收集是一种收集策略,指只收集具有特定日志级别的日志数据。日志级别通常用于表示日志信息的重要性和紧急性。预设级别指在按日志级别收集策略中,指定的需要收集的日志级别(如INFO、WARNING、ERROR等)。It should be noted that collection by log level is a collection strategy that only collects log data with a specific log level. Log level is usually used to indicate the importance and urgency of log information. The preset level refers to the log level (such as INFO, WARNING, ERROR, etc.) specified in the collection by log level strategy.
确定收集策略为按日志级别收集,遍历日志数据,检查每条日志的级别,收集所有满足预设级别的日志数据。Determine the collection strategy as collection by log level, traverse the log data, check the level of each log, and collect all log data that meets the preset level.
步骤A40,若收集策略为按请求路径收集,则将满足预设请求路径的日志数据作为目标日志数据进行收集;Step A40: if the collection strategy is to collect by request path, the log data satisfying the preset request path is collected as the target log data;
需要说明的是,按请求路径收集是一种收集策略,指只收集与特定请求路径相关的日志数据。这有助于分析特定功能或模块的日志。预设路径是在按请求路径收集策略中,指定的需要收集日志的请求路径。It should be noted that collection by request path is a collection strategy that only collects log data related to a specific request path. This helps analyze logs of specific functions or modules. The preset path is the request path for which logs need to be collected in the collection by request path strategy.
对于每一条日志,都需要检查其请求路径字段。将日志的请求路径与所有预设请求路径进行比对,看是否存在匹配项。如果日志的请求路径与某个预设请求路径匹配,则将该日志数据收集起来,以便后续处理。For each log, its request path field needs to be checked. The request path of the log is compared with all preset request paths to see if there is a match. If the request path of the log matches a preset request path, the log data is collected for subsequent processing.
步骤A50,若收集策略为按用户身份收集,则将满足预设用户身份的日志数据作为目标日志数据进行收集。Step A50: If the collection strategy is to collect by user identity, the log data satisfying the preset user identity is collected as the target log data.
需要说明的是,按用户身份收集是一种收集策略,指只收集特定用户身份的日志数据。这有助于追踪和分析特定用户的操作行为。预设身份是在按用户身份收集策略中,指定的需要收集日志的用户身份。It should be noted that collection by user identity is a collection strategy that only collects log data of a specific user identity. This helps track and analyze the operation behavior of a specific user. The preset identity is the user identity for which logs need to be collected in the collection strategy by user identity.
确定收集策略为按用户身份收集,检查日志数据中的用户身份标识,收集所有具有预设身份标识的日志数据。Determine the collection strategy as collection by user identity, check the user identity in the log data, and collect all log data with the preset identity.
进一步地,预设收集策略可以同时存在多种,比如预设收集策略为按请求路径收集(预设请求路径:/users/info)和按日志级别收集(预设级别:ERROR),那么一条请求路径为/users/info且日志级别为ERROR的日志将被收集。此外,当短时间内日志条数过多,也可以开辟一个虚拟空间暂时性存储,当虚拟空间中的日志大小超过预设存储大小,再把日志拿出来与预设收集策略匹配并保存至数据库,也可以定时从虚拟空间中拿取日志进行操作。Furthermore, multiple preset collection strategies can exist at the same time. For example, if the preset collection strategies are collection by request path (preset request path: /users/info) and collection by log level (preset level: ERROR), then a log whose request path is /users/info and whose log level is ERROR will be collected. In addition, when too many logs arrive in a short period of time, a virtual space can be opened for temporary storage; when the log size in the virtual space exceeds the preset storage size, the logs are taken out, matched against the preset collection strategies, and saved to the database. Logs can also be fetched from the virtual space at regular intervals for processing.
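Combining several strategies with the temporary "virtual space" described above might be sketched as follows. The buffer size and the hard-coded strategy parameters (/users/info, ERROR) mirror the example in the text but are otherwise illustrative:

```python
def matches_all(log, path=None, level=None):
    """A log is collected only when it satisfies every configured
    strategy, e.g. request path /users/info AND level ERROR."""
    if path is not None and log.get("path") != path:
        return False
    if level is not None and log.get("level") != level:
        return False
    return True

class LogBuffer:
    """Temporary 'virtual space' for bursts: logs accumulate here and
    are flushed against the strategies once the buffer reaches max_size."""
    def __init__(self, max_size=3):
        self.max_size = max_size
        self.pending = []
        self.stored = []  # stands in for the database

    def add(self, log):
        self.pending.append(log)
        if len(self.pending) >= self.max_size:
            self.flush()

    def flush(self):
        # Match buffered logs against the preset strategies, then clear.
        self.stored += [log for log in self.pending
                        if matches_all(log, path="/users/info", level="ERROR")]
        self.pending = []
```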
本实施例通过上述多样化的日志收集策略,能够显著提升日志管理的效率和准确性。通过灵活配置,系统能够自动根据时间间隔、日志大小、日志级别、请求路径或用户身份等条件精准筛选并收集目标日志数据。这不仅有助于快速定位问题、分析系统性能,还能有效减少非必要日志的存储,降低存储成本。同时,当日志量激增时,利用虚拟空间进行临时存储,再配合预设收集策略进行分批处理,确保了日志处理的连续性和高效性,为系统的稳定运行提供了强有力的支持。This embodiment can significantly improve the efficiency and accuracy of log management through the above-mentioned diversified log collection strategies. Through flexible configuration, the system can automatically and accurately screen and collect target log data according to conditions such as time interval, log size, log level, request path or user identity. This not only helps to quickly locate problems and analyze system performance, but also effectively reduces the storage of unnecessary logs and reduces storage costs. At the same time, when the amount of logs surges, virtual space is used for temporary storage, and then batch processing is carried out in conjunction with the preset collection strategy, which ensures the continuity and efficiency of log processing and provides strong support for the stable operation of the system.
基于本申请实施例一或实施例二,在本申请实施例三中,与上述实施例一或实施例二相同或相似的内容,可以参考上文介绍,后续不再赘述。步骤S40中将日志数据保存至数据库中的步骤还包括步骤B10~B20:Based on the first or second embodiment of the present application, in the third embodiment of the present application, the same or similar contents as those in the first or second embodiment can be referred to the above description, and will not be described in detail later. The step of saving the log data to the database in step S40 also includes steps B10 to B20:
步骤B10,将数据集按照接口分成不同的数据组,并将每个数据组中的每条目标日志数据作为数据对象;其中,每个接口对应一个数据组;Step B10, dividing the data set into different data groups according to the interface, and taking each target log data in each data group as a data object; wherein each interface corresponds to one data group;
需要说明的是,数据集是一组相关数据的集合,这些数据可以是结构化或非结构化的,用于分析、处理或存储。数据组是根据接口将数据集划分成的子集。数据对象指的是数据组中的每条目标日志数据,它们被单独处理或分析。It should be noted that a dataset is a collection of related data, which can be structured or unstructured, for analysis, processing or storage. A data group is a subset of a dataset divided according to an interface. A data object refers to each target log data in a data group, which is processed or analyzed separately.
首先,根据接口的不同,将数据集分成多个不同的数据组。每个接口对应一个唯一的数据组,确保数据的逻辑清晰和易于管理。在每个数据组中,将每条目标日志数据作为数据对象进行提取。这些数据对象将作为后续分析或处理的基本单位。First, the data set is divided into multiple different data groups according to different interfaces. Each interface corresponds to a unique data group to ensure that the data logic is clear and easy to manage. In each data group, each target log data is extracted as a data object. These data objects will serve as the basic unit for subsequent analysis or processing.
步骤B20,若数据对象满足预设异常条件,则将数据对象标记为异常日志,并确定异常检测结果为存在异常。Step B20: If the data object meets the preset abnormal condition, the data object is marked as an abnormal log, and the abnormality detection result is determined to be abnormal.
需要说明的是,预设异常条件是提前定义好的、用于判断数据对象是否异常的一组规则或条件。当数据对象的属性或值与这些条件匹配时,即可认为该数据对象异常。异常日志是被标记为异常的数据对象,它们可能包含错误、异常行为或不符合预期的数据。异常检测结果是对数据集进行异常检测后得出的结论,指示数据集中是否存在异常数据。It should be noted that the preset abnormal conditions are a set of rules or conditions defined in advance to determine whether a data object is abnormal. When the attributes or values of a data object match these conditions, the data object is considered abnormal. Exception logs are data objects marked as abnormal, which may contain errors, abnormal behaviors, or data that does not meet expectations. The anomaly detection result is the conclusion drawn after anomaly detection on the data set, indicating whether there is abnormal data in the data set.
遍历每个数据组中的数据对象,应用预设的异常条件进行判断。如果数据对象的某个属性或值满足异常条件,则将该数据对象标记为异常日志。在完成所有数据对象的异常检测后,根据标记为异常日志的数据对象的数量或其他相关因素,确定异常检测结果。如果存在被标记为异常的数据对象,则异常检测结果为存在异常;否则,结果为不存在异常。Traverse the data objects in each data group and apply the preset abnormal conditions for judgment. If a certain attribute or value of the data object meets the abnormal condition, the data object is marked as an abnormal log. After completing the abnormality detection of all data objects, determine the abnormality detection result based on the number of data objects marked as abnormal logs or other relevant factors. If there is a data object marked as abnormal, the abnormality detection result is that there is an abnormality; otherwise, the result is that there is no abnormality.
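Steps B10–B20 (grouping by interface, then marking anomalous data objects) can be illustrated as follows, with the record fields and the anomaly predicate chosen only for this sketch:

```python
def group_by_interface(dataset):
    # Step B10: one data group per interface (keyed by request path).
    groups = {}
    for record in dataset:
        groups.setdefault(record["path"], []).append(record)
    return groups

def mark_anomalies(groups, is_abnormal):
    # Step B20: tag each data object meeting the preset condition and
    # report whether any anomaly exists in the data set.
    found = False
    for records in groups.values():
        for record in records:
            if is_abnormal(record):
                record["abnormal"] = True
                found = True
    return found

dataset = [{"path": "/orders", "response_ms": 900},
           {"path": "/users", "response_ms": 80}]
groups = group_by_interface(dataset)
has_anomaly = mark_anomalies(groups, lambda r: r["response_ms"] > 500)
```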
通过本实施例,系统能够高效地将大规模数据集按接口细分为多个数据组,实现了数据的精细化管理和分析。每个数据组内的数据对象作为独立单元,便于后续深入分析和处理。同时,预设异常条件的引入,使得系统能够自动识别和标记异常日志,快速定位潜在问题,提高了异常检测的准确性和效率。这一流程不仅优化了日志处理流程,还增强了系统的稳定性和可靠性,为用户提供了有力的支持。Through this embodiment, the system can efficiently subdivide large-scale data sets into multiple data groups according to interfaces, realizing refined management and analysis of data. The data objects in each data group are independent units, which facilitates subsequent in-depth analysis and processing. At the same time, the introduction of preset abnormal conditions enables the system to automatically identify and mark abnormal logs, quickly locate potential problems, and improve the accuracy and efficiency of abnormal detection. This process not only optimizes the log processing process, but also enhances the stability and reliability of the system, providing strong support for users.
基于本申请实施例一至实施例三中任意一项,在本申请实施例四中,与上述实施例一至实施例三中任意一项相同或相似的内容,可以参考上文介绍,后续不再赘述。步骤B10中将每个数据组中的每条目标日志数据作为数据对象的步骤之后还包括步骤C10~C30:Based on any one of the first to third embodiments of the present application, in the fourth embodiment of the present application, the same or similar content as any one of the first to third embodiments can refer to the above introduction, and will not be repeated later. After the step of taking each target log data in each data group as a data object in step B10, steps C10 to C30 are also included:
步骤C10,针对每个数据组,若存在预设时间范围内的数据对象的数量大于预设服务请求数量阈值,则确定预设时间范围内的数据对象满足预设异常条件;Step C10: for each data group, if the number of data objects within the preset time range is greater than the preset service request number threshold, it is determined that the data objects within the preset time range meet the preset abnormal condition;
需要说明的是,预设时间范围是提前定义好的一段时间区间,用于对数据进行分析或筛选。在这个时间范围内的数据对象将被特别关注或处理。预设服务请求数量阈值是一个预先设定的数值,用于判断在特定时间范围内服务请求的数量是否异常。如果实际数量超过这个阈值,则认为存在异常情况。It should be noted that the preset time range is a predefined time interval used for data analysis or screening. Data objects within this time range will be given special attention or processing. The preset service request quantity threshold is a pre-set value used to determine whether the number of service requests within a specific time range is abnormal. If the actual number exceeds this threshold, it is considered to be abnormal.
对于每个数据组,首先确定一个预设的时间范围,统计该时间范围内数据对象的数量。将统计结果与预设服务请求数量阈值进行比较,如果数量大于阈值,则判定该时间范围内的数据对象满足预设异常条件。For each data group, a preset time range is first determined, and the number of data objects within the time range is counted. The statistical result is compared with the preset service request quantity threshold. If the number is greater than the threshold, it is determined that the data objects within the time range meet the preset abnormal condition.
步骤C20,确定数据组中与处于失败状态的服务请求对应的数据对象,若与处于失败状态的服务请求对应的数据对象的对象数量大于预设失败数量阈值,则确定与处于失败状态的服务请求对应的数据对象满足预设异常条件;Step C20, determining the data objects corresponding to the service request in the failed state in the data group, if the number of the data objects corresponding to the service request in the failed state is greater than a preset failure number threshold, determining that the data objects corresponding to the service request in the failed state meet the preset abnormal condition;
需要说明的是,失败状态的服务请求是指未能成功完成的服务请求,通常由于系统错误、网络问题或资源不足等原因导致。预设失败数量阈值是一个预先设定的数值,用于判断在数据组中处于失败状态的服务请求数量是否异常。如果实际数量超过这个阈值,则认为存在异常情况。It should be noted that a failed service request refers to a service request that has not been successfully completed, usually due to system errors, network problems, or insufficient resources. The preset failure number threshold is a pre-set value used to determine whether the number of service requests in a failed state in a data group is abnormal. If the actual number exceeds this threshold, it is considered to be abnormal.
遍历数据组中的每条数据对象,识别与失败状态的服务请求对应的数据对象。统计这些数据对象的数量,将统计结果与预设失败数量阈值进行比较。如果数量大于阈值,则判定与失败状态的服务请求对应的数据对象满足预设异常条件。Traverse each data object in the data group and identify the data object corresponding to the service request in the failed state. Count the number of these data objects and compare the statistical result with the preset failure number threshold. If the number is greater than the threshold, it is determined that the data object corresponding to the service request in the failed state meets the preset abnormal condition.
步骤C30,确定数据组中与每条数据对象对应的返回参数,若在各返回参数中存在包含预设的特定字符的目标返回参数,则确定目标返回参数对应的数据对象满足预设异常条件。Step C30, determining the return parameter corresponding to each data object in the data group, if there is a target return parameter containing preset specific characters in each return parameter, determining that the data object corresponding to the target return parameter meets the preset abnormal condition.
需要说明的是,返回参数是服务请求完成后,由服务提供者返回给请求者的数据或信息。这些参数通常包含有关服务执行结果或状态的信息。预设的特定字符是在数据处理中用于识别或筛选具有特定含义或属性的数据的字符。如果返回参数中包含这些字符,则可能表示数据对象满足某种异常条件。目标返回参数指返回参数中包含预设的特定字符的那些参数,这些参数对应的数据对象可能满足预设的异常条件。It should be noted that the return parameter is the data or information returned by the service provider to the requester after the service request is completed. These parameters usually contain information about the service execution result or status. Preset specific characters are characters used in data processing to identify or filter data with specific meanings or attributes. If the return parameters contain these characters, this may indicate that the data object meets a certain abnormal condition. Target return parameters are those return parameters that contain the preset specific characters; the data objects corresponding to these parameters may meet the preset abnormal conditions.

遍历数据组中的每条数据对象,获取其对应的返回参数。检查每个返回参数是否包含预设的特定字符,如果发现包含特定字符的返回参数(即目标返回参数),则判定该返回参数对应的数据对象满足预设异常条件。Traverse each data object in the data group and obtain its corresponding return parameter. Check whether each return parameter contains the preset specific characters. If a return parameter containing the specific characters is found (i.e., the target return parameter), it is determined that the data object corresponding to the return parameter meets the preset exception condition.
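Step C30's scan for preset specific characters in return parameters can be sketched as below. The `return_param` field name and the example tokens are assumptions made for illustration.

```python
def flag_abnormal_by_return_param(data_objects, special_tokens):
    """For each data object, check whether its return parameter contains any
    preset specific character/token; matching objects (target return
    parameters) are marked as meeting the preset abnormal condition."""
    abnormal = []
    for obj in data_objects:
        ret = obj.get("return_param", "")
        if any(tok in ret for tok in special_tokens):
            obj["abnormal"] = True  # mark the data object as an abnormal log
            abnormal.append(obj)
    return abnormal
```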
当然,异常情况并不止上述情况,用户也可以通过配置增加异常情况检测。同时,系统还可以进行性能分析,比如根据接口调用时间、响应状态码等指标,评估接口性能。Of course, the abnormal situations are not limited to the above ones, and users can also add abnormal situation detection through configuration. At the same time, the system can also perform performance analysis, such as evaluating interface performance based on indicators such as interface call time and response status code.
本实施例通过多维度地检测数据组中的异常情况,显著提升了问题发现和响应的速度。预设时间范围内的数据对象数量检测、失败状态服务请求数量的监控以及特定返回参数的筛查,共同构建了一个全面且灵活的异常检测框架。这不仅帮助用户快速定位潜在问题,还能及时分析服务性能和接口稳定性,为系统优化提供了有力支持。此外,用户可自定义异常检测规则,进一步增强了系统的灵活性和适应性,确保系统稳定运行。This embodiment significantly improves the speed of problem discovery and response by detecting anomalies in data groups in multiple dimensions. The detection of the number of data objects within a preset time range, the monitoring of the number of failed service requests, and the screening of specific return parameters together build a comprehensive and flexible anomaly detection framework. This not only helps users quickly locate potential problems, but also timely analyzes service performance and interface stability, providing strong support for system optimization. In addition, users can customize anomaly detection rules, which further enhances the flexibility and adaptability of the system and ensures stable operation of the system.
在一种可行的实施方式中,步骤B10中将每个数据组中的每条目标日志数据作为数据对象的步骤之后还包括步骤D10~D20:In a feasible implementation manner, after the step of taking each target log data in each data group as a data object in step B10, steps D10 to D20 are also included:
步骤D10,获取数据对象的请求者身份;Step D10, obtaining the identity of the requester of the data object;
需要说明的是,请求者身份指发起数据请求或执行某项操作的实体的身份标识。在日志或服务请求记录中,请求者身份通常用于标识是哪个用户或系统组件发起了请求。It should be noted that the requester identity refers to the identity of the entity that initiates a data request or performs an operation. In logs or service request records, the requester identity is usually used to identify which user or system component initiated the request.
从数据集中提取每个数据对象(如日志记录、服务请求记录)的请求者身份信息。确保请求者身份信息的准确性和完整性,以便后续分析使用。Extract the requester identity information of each data object (such as log records, service request records) from the data set. Ensure the accuracy and completeness of the requester identity information for subsequent analysis.
步骤D20,根据请求者身份进行用户行为分析,其中用户行为分析包括根据每个数据组中出现次数最多的请求者身份判断用户群体。Step D20, performing user behavior analysis based on the requester identity, wherein the user behavior analysis includes determining the user group based on the requester identity that appears most frequently in each data group.
需要说明的是,用户分析是一种数据分析方法,旨在通过收集和处理用户数据来深入了解用户行为、偏好、需求等信息,以便为产品优化、市场策略制定等提供依据。用户群体是具有共同特征或行为模式的用户集合。在用户分析中,用户群体通常根据用户的某些属性(如年龄、性别、地域、兴趣等)或行为(如购买习惯、使用频率等)进行划分。在本实施例中,主要通过分析接口使用频率进行划分。It should be noted that user analysis is a data analysis method that aims to gain an in-depth understanding of user behavior, preferences, needs and other information by collecting and processing user data, so as to provide a basis for product optimization, market strategy formulation, etc. A user group is a collection of users with common characteristics or behavior patterns. In user analysis, user groups are usually divided according to certain attributes of users (such as age, gender, region, interests, etc.) or behaviors (such as purchasing habits, frequency of use, etc.). In this embodiment, the division is mainly based on analyzing the frequency of interface usage.
对提取的请求者身份信息进行分类和整理,以便进行统计分析。统计每个数据组中每个请求者身份的出现次数,以了解不同用户或用户群体在该数据组中的活跃程度。根据每个数据组中出现次数最多的请求者身份,判断该数据组的主要用户群体。这有助于识别出哪些用户或用户群体是该数据组的主要贡献者或使用者。The extracted requester identity information is classified and organized for statistical analysis. The number of occurrences of each requester identity in each data group is counted to understand the level of activity of different users or user groups in the data group. Based on the requester identity with the most occurrences in each data group, the main user group of the data group is determined. This helps to identify which users or user groups are the main contributors or users of the data group.
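The frequency count in step D20 amounts to finding the most common requester identity per data group, which can be sketched with a standard counter. The `requester` field name is an assumption for illustration.

```python
from collections import Counter

def dominant_requester(data_group):
    """Count occurrences of each requester identity in a data group and
    return the most frequent one, used to judge the main user group served
    by the interface this data group corresponds to."""
    counts = Counter(obj["requester"] for obj in data_group)
    identity, _ = counts.most_common(1)[0]
    return identity
```

In practice the returned identity would feed into the downstream user-group analysis (e.g., machine-learning or data-mining steps mentioned later).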
进一步地,可以利用针对接口的链路追踪的优势,通过接口日志做一些智能化的分析,比如通过机器学习或数据挖掘算法,可以从海量日志中提取有价值的信息,为业务优化提供支持。Furthermore, we can take advantage of the link tracking of interfaces and perform some intelligent analysis through interface logs. For example, through machine learning or data mining algorithms, we can extract valuable information from massive logs to provide support for business optimization.
本实施例通过精准地获取并分析数据对象的请求者身份,进而识别出主要用户群体及其行为模式。这不仅为产品优化、市场策略制定提供了有力依据,还增强了用户分析的深度和广度。结合链路追踪和智能化分析技术,系统能够深入挖掘接口日志中的有价值信息,为业务决策提供更为精准的数据支持,进一步提升了业务效率和用户满意度。This embodiment accurately obtains and analyzes the identity of the requester of the data object, and then identifies the main user groups and their behavior patterns. This not only provides a strong basis for product optimization and market strategy formulation, but also enhances the depth and breadth of user analysis. Combined with link tracking and intelligent analysis technology, the system can deeply mine valuable information in the interface log, provide more accurate data support for business decision-making, and further improve business efficiency and user satisfaction.
需要说明的是,在技术可实现,逻辑清晰的情形下,以上各实施例可以两两组合或者多个进行自由组合。It should be noted that, under the condition that the technology is feasible and the logic is clear, the above embodiments can be combined in pairs or in multiple combinations.
示例性地,参照图3,图3为本申请接口调用日志分析方法的流程示意图,假设有一家大型电商平台,其微服务架构包含了订单服务、库存服务、支付服务等多个核心服务。这些服务之间的接口调用链路复杂,为了确保系统的稳定性和用户体验,该平台决定采用本申请进行接口调用链路日志的记录和分析。平台首先初始化系统,即在各个微服务中引入本发明提供的工具包。工具包捕获到每个服务请求后,提取关键信息(如请求路径、请求参数、响应状态码、响应内容等),并进行格式化,然后根据平台业务规模和数据量,选择了Elasticsearch(一个高扩展的分布式全文检索引擎)作为日志存储数据库,以保证高效的日志检索和分析能力。保存后系统可以对日志进行可视化展示,用户或者开发人员可以直观看到存储的日志数据。也可以通过接口查询方式,可以直观地看到各服务的调用链路、调用时间和响应状态。为了进一步优化用户体验和提升业务效率,平台还引入了本申请的智能分析。通过收集并分析用户访问日志,识别出了高频率购买的商品和用户群体,为后续的个性化推荐和促销活动提供了有力支持,更为重要的,智能分析可以通过分析日志数据,识别异常情况,例如接口调用失败、响应时间过长等,如果有了异常情况被智能分析识别到了,智能分析会通知预警监控发送优化建议给用户。若无异常情况则结束此次日志分析。Exemplarily, referring to FIG. 3, FIG. 3 is a flow chart of the interface call log analysis method of the present application. Assuming that there is a large e-commerce platform, its microservice architecture includes multiple core services such as order service, inventory service, and payment service. The interface call link between these services is complex. In order to ensure the stability of the system and user experience, the platform decides to use the present application to record and analyze the interface call link log. The platform first initializes the system, that is, introduces the toolkit provided by the present invention into each microservice. After the toolkit captures each service request, it extracts key information (such as request path, request parameters, response status code, response content, etc.) and formats it. Then, according to the platform business scale and data volume, Elasticsearch (a highly scalable distributed full-text retrieval engine) is selected as the log storage database to ensure efficient log retrieval and analysis capabilities. After saving, the system can visualize the log, and users or developers can intuitively see the stored log data. It is also possible to intuitively see the call link, call time and response status of each service through the interface query method. 
In order to further optimize the user experience and improve business efficiency, the platform also introduces the intelligent analysis of the present application. By collecting and analyzing user access logs, we can identify frequently purchased products and user groups, providing strong support for subsequent personalized recommendations and promotional activities. More importantly, intelligent analysis can identify abnormal situations by analyzing log data, such as interface call failures and long response times. If an abnormal situation is identified by intelligent analysis, it will notify the early warning monitoring to send optimization suggestions to the user. If there is no abnormal situation, the log analysis ends.
需要说明的是,上述示例仅用于理解本申请,并不构成对本申请接口调用日志分析方法的限定,基于此技术构思进行更多形式的简单变换,均在本申请的保护范围内。It should be noted that the above examples are only used to understand the present application and do not constitute a limitation on the interface call log analysis method of the present application. More simple transformations based on this technical concept are all within the scope of protection of the present application.
本申请还提供一种接口调用日志分析装置,请参照图4,设置于安装了接口调用日志分析工具包的微服务,接口调用日志分析装置包括:The present application also provides an interface call log analysis device, as shown in FIG. 4 , which is set in a microservice in which an interface call log analysis toolkit is installed. The interface call log analysis device includes:
日志采集模块10,基于接口调用日志分析工具包中的过滤器拦截请求所述微服务的接口的服务请求,解析拦截的所述服务请求的服务信息,其中,所述服务信息是从服务请求中提取出的,用于描述请求本身及其上下文的信息,包括请求时间、请求者身份、请求参数、响应时间、响应者身份、响应内容以及其他相关信息,所述其他相关信息包括用户认证信息、客户端IP地址和请求来源;The log collection module 10 intercepts the service request of the interface of the microservice based on the filter in the interface call log analysis toolkit, and parses the service information of the intercepted service request, wherein the service information is extracted from the service request and is used to describe the request itself and its context information, including request time, requester identity, request parameters, response time, responder identity, response content and other related information, and the other related information includes user authentication information, client IP address and request source;
日志格式模块20,基于预设的日志格式将所述服务信息转换为日志数据;A log format module 20, which converts the service information into log data based on a preset log format;
日志收集模块30,获取外界输入的需求信息,根据所述需求信息对所述日志数据进行收集策略配置,根据配置的收集策略收集所述日志数据中的目标日志数据,其中,所述收集策略包括按时间间隔收集,按日志大小收集,按请求路径收集和按用户身份收集,所述收集策略通过可视化界面获取用户输入的收集策略参数,并生成相应的收集策略,所述将所述目标日志数据存储至预设的数据库的步骤之前包括当日志条数短时间过多,则将所述日志条数的日志数据暂时性存储至虚拟空间,若虚拟空间中日志数据的日志大小超过预设存储大小,则将所述虚拟空间中的日志数据与预设收集策略匹配,以根据预设收集策略将虚拟空间中的日志数据存储至预设的数据库;The log collection module 30 obtains demand information input from the outside, configures a collection strategy for the log data according to the demand information, and collects target log data in the log data according to the configured collection strategy, wherein the collection strategy includes collecting by time interval, collecting by log size, collecting by request path, and collecting by user identity. The collection strategy obtains the collection strategy parameters input by the user through a visual interface and generates a corresponding collection strategy. The step of storing the target log data in a preset database includes temporarily storing the log data of the log number in a virtual space when the number of log items is too large in a short period of time. If the log size of the log data in the virtual space exceeds the preset storage size, the log data in the virtual space is matched with the preset collection strategy to store the log data in the virtual space in the preset database according to the preset collection strategy.
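The burst-handling behavior of the log collection module (temporarily holding logs in a "virtual space" and flushing once a size limit is exceeded) can be sketched as follows. The buffer limit, the list-based stores, and the flush trigger are all illustrative assumptions; the original text only specifies the buffer-then-match-then-persist sequence.

```python
def buffer_or_store(log_entry, buffer, buffer_limit, store):
    """When log entries arrive too fast, hold them in a temporary buffer
    (the 'virtual space'); once the buffered amount exceeds the preset
    storage size, flush the buffered entries to the database according to
    the preset collection strategy (flush step simplified here)."""
    buffer.append(log_entry)
    if len(buffer) > buffer_limit:
        # In the full design, buffered entries are first matched against the
        # preset collection strategy before being persisted.
        store.extend(buffer)
        buffer.clear()
```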
日志分析模块40,将所述目标日志数据存储至预设的数据库,并确定所述数据库中多个目标日志数据组成的数据集,对所述数据集进行异常检测,其中,所述数据库包括关系型数据库和Nosql数据库中的至少一项,所述将所述目标日志数据存储至预设的数据库的步骤包括若在预设时间周期内所述服务请求的数量小于预设阈值,则将所述目标日志数据以表格的形式保存至关系型数据库;若在预设时间周期内所述服务请求的数量大于或等于预设阈值,则将所述目标日志数据保存至Nosql数据库;所述对所述数据集进行异常检测的步骤包括将所述数据集按照接口分成不同的数据组,并将每个数据组中的每条目标日志数据作为数据对象,其中,每个接口对应一个数据组;所述将每个数据组中的每条目标日志数据作为数据对象的步骤之后包括获取所述数据对象的请求者身份,根据所述请求者身份进行用户行为分析,其中,所述用户行为分析包括根据每个数据组中出现次数最多的请求者身份判断用户群体,使用机器学习或数据挖掘算法对每个数据组中所述用户群体进行分析,根据分析结果识别出所述用户群体的行为模式;若所述数据对象满足预设异常条件,则将所述数据对象标记为异常日志,并确定异常检测结果为存在异常;所述将所述目标日志数据存储至预设的数据库之前包括当日志条数短时间过多,则将所述日志条数的日志数据暂时性存储至虚拟空间,若虚拟空间中日志数据的日志大小超过预设存储大小,则将所述虚拟空间中的日志数据与预设收集策略匹配,以根据预设收集策略将虚拟空间中的日志数据存储至预设的数据库;The log analysis module 40 stores the target log data in a preset database, determines a data set composed of multiple target log data in the database, and performs anomaly detection on the data set, wherein the database includes at least one of a relational database and a Nosql database, and the step of storing the target log data in the preset database includes: if the number of service requests within a preset time period is less than a preset threshold, the target log data is saved in the form of a table to the relational database; if the number of service requests within the preset time period is greater than or equal to the preset threshold, the target log data is saved to the Nosql database; the step of performing anomaly detection on the data set includes dividing the data set into different data groups according to the interface, and taking each target log data in each data group as a data object, wherein each interface corresponds to a data group; the step of taking each target log data in each data group as The step of obtaining the data object includes obtaining the identity of the requester of the data object, and performing user behavior analysis based on the identity of the requester, wherein the user behavior analysis includes determining the user group based on the identity of the requester that appears most frequently 
in each data group, analyzing the user group in each data group using a machine learning or data mining algorithm, and identifying the behavior pattern of the user group based on the analysis result; if the data object meets the preset abnormal condition, the data object is marked as an abnormal log, and the abnormal detection result is determined to be an abnormality; before storing the target log data in the preset database, when the number of log entries is too large in a short period of time, the log data of the number of log entries is temporarily stored in the virtual space, and if the log size of the log data in the virtual space exceeds the preset storage size, the log data in the virtual space is matched with the preset collection strategy to store the log data in the virtual space in the preset database according to the preset collection strategy;
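The storage-routing rule of the log analysis module (relational database below the request-volume threshold, NoSQL at or above it) reduces to a simple comparison. This sketch uses placeholder store names; the actual databases are whatever relational/NoSQL backends the deployment configures.

```python
def choose_log_store(request_count, threshold):
    """Route target log data by load within the preset time period: below
    the preset threshold, save as table rows in the relational database;
    at or above it, save to the NoSQL database."""
    return "relational" if request_count < threshold else "nosql"
```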
日志预警模块50,若异常检测结果为存在异常,则输出预设的预警信息。The log warning module 50 outputs a preset warning message if the abnormality detection result is that an abnormality exists.
本申请提供的接口调用日志分析装置,采用上述实施例中的接口调用日志分析方法,能够解决日志分析工具部署和配置复杂性高的技术问题。与现有技术相比,本申请提供的接口调用日志分析装置的有益效果与上述实施例提供的接口调用日志分析方法的有益效果相同,且接口调用日志分析装置中的其他技术特征与上述实施例方法公开的特征相同,在此不做赘述。The interface call log analysis device provided by the present application adopts the interface call log analysis method in the above embodiment, which can solve the technical problem of high complexity in the deployment and configuration of log analysis tools. Compared with the prior art, the beneficial effects of the interface call log analysis device provided by the present application are the same as the beneficial effects of the interface call log analysis method provided by the above embodiment, and other technical features in the interface call log analysis device are the same as the features disclosed in the above embodiment method, which will not be repeated here.
本申请提供一种接口调用日志分析设备,接口调用日志分析设备包括:至少一个处理器;以及,与至少一个处理器通信连接的存储器;其中,存储器存储有可被至少一个处理器执行的指令,指令被至少一个处理器执行,以使至少一个处理器能够执行上述实施例一中的接口调用日志分析方法。The present application provides an interface call log analysis device, which includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the interface call log analysis method in the above-mentioned embodiment one.
下面参考图5,其示出了适于用来实现本申请实施例的接口调用日志分析设备的结构示意图。本申请实施例中的接口调用日志分析设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(Personal Digital Assistant:个人数字助理)、PAD(平板电脑)、PMP(Portable Media Player:便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图5示出的接口调用日志分析设备仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。Reference is made to Figure 5 below, which shows a schematic diagram of the structure of an interface call log analysis device suitable for implementing an embodiment of the present application. The interface call log analysis device in the embodiment of the present application may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Media Players), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc. The interface call log analysis device shown in Figure 5 is merely an example and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
如图5所示,接口调用日志分析设备可以包括处理装置1001(例如中央处理器、图形处理器等),其可以根据存储在只读存储器(ROM:Read Only Memory)1002中的程序或者从存储装置1003加载到随机访问存储器(RAM:Random Access Memory)1004中的程序而执行各种适当的动作和处理。在RAM1004中,还存储有接口调用日志分析设备操作所需的各种程序和数据。处理装置1001、ROM1002以及RAM1004通过总线1005彼此相连。输入/输出(I/O)接口1006也连接至总线。通常,以下系统可以连接至I/O接口1006:包括例如触摸屏、触摸板、键盘、鼠标、图像传感器、麦克风、加速度计、陀螺仪等的输入装置1007;包括例如液晶显示器(LCD:Liquid Crystal Display)、扬声器、振动器等的输出装置1008;包括例如磁带、硬盘等的存储装置1003;以及通信装置1009。通信装置1009可以允许接口调用日志分析设备与其他设备进行无线或有线通信以交换数据。虽然图中示出了具有各种系统的接口调用日志分析设备,但是应理解的是,并不要求实施或具备所有示出的系统。可以替代地实施或具备更多或更少的系统。As shown in FIG5 , the interface call log analysis device may include a processing device 1001 (e.g., a central processing unit, a graphics processing unit, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM: Read Only Memory) 1002 or a program loaded from a storage device 1003 to a random access memory (RAM: Random Access Memory) 1004. In RAM 1004, various programs and data required for the operation of the interface call log analysis device are also stored. The processing device 1001, ROM 1002, and RAM 1004 are connected to each other via a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. Generally, the following systems may be connected to the I/O interface 1006: an input device 1007 including, for example, a touch screen, a touchpad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, etc.; an output device 1008 including, for example, a liquid crystal display (LCD: Liquid Crystal Display), a speaker, a vibrator, etc.; a storage device 1003 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1009. The communication device 1009 can allow the interface call log analysis device to communicate wirelessly or wired with other devices to exchange data. 
Although the interface call log analysis device with various systems is shown in the figure, it should be understood that it is not required to implement or have all the systems shown. More or fewer systems can be implemented or provided instead.
特别地,根据本申请公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本申请公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置从网络上被下载和安装,或者从存储装置1003被安装,或者从ROM1002被安装。在该计算机程序被处理装置1001执行时,执行本申请公开实施例的方法中限定的上述功能。In particular, according to the embodiments disclosed in the present application, the process described above with reference to the flowchart can be implemented as a computer software program. For example, the embodiments disclosed in the present application include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program includes a program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through a communication device, or installed from a storage device 1003, or installed from a ROM 1002. When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the method of the embodiment disclosed in the present application are executed.
本申请提供的接口调用日志分析设备,采用上述实施例中的接口调用日志分析方法,能解决日志分析工具部署和配置复杂性高的技术问题。与现有技术相比,本申请提供的接口调用日志分析设备的有益效果与上述实施例提供的接口调用日志分析方法的有益效果相同,且该接口调用日志分析设备中的其他技术特征与上一实施例方法公开的特征相同,在此不做赘述。The interface call log analysis device provided by the present application adopts the interface call log analysis method in the above embodiment, which can solve the technical problem of high complexity in the deployment and configuration of log analysis tools. Compared with the prior art, the beneficial effects of the interface call log analysis device provided by the present application are the same as the beneficial effects of the interface call log analysis method provided by the above embodiment, and the other technical features in the interface call log analysis device are the same as the features disclosed in the method of the previous embodiment, which will not be repeated here.
应当理解,本申请公开的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式的描述中,具体特征、结构、材料或者特点可以在任何的一个或多个实施例或示例中以合适的方式结合。It should be understood that the various parts disclosed in this application can be implemented by hardware, software, firmware or a combination thereof. In the description of the above embodiments, specific features, structures, materials or characteristics can be combined in any one or more embodiments or examples in a suitable manner.
以上,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any technician familiar with the technical field can easily think of changes or substitutions within the technical scope disclosed in the present application, which should be included in the protection scope of the present application. Therefore, the protection scope of the present application should be based on the protection scope of the claims.
本申请提供一种存储介质,存储介质为计算机可读存储介质,具有存储在其上的计算机可读程序指令(即计算机程序),计算机可读程序指令用于执行上述实施例中的接口调用日志分析方法。The present application provides a storage medium, which is a computer-readable storage medium having computer-readable program instructions (ie, computer programs) stored thereon, and the computer-readable program instructions are used to execute the interface call log analysis method in the above-mentioned embodiment.
本申请提供的计算机可读存储介质例如可以是U盘,但不限于电、磁、光、电磁、红外线、或半导体的系统或器件,或者任意以上的组合。计算机可读存储介质的更具体地例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM:Random Access Memory)、只读存储器(ROM:Read Only Memory)、可擦式可编程只读存储器(EPROM:Erasable Programmable Read Only Memory或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM:CD-Read Only Memory)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本实施例中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统或者器件使用或者与其结合使用。计算机可读存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(RadioFrequency:射频)等等,或者上述的任意合适的组合。The computer-readable storage medium provided in the present application may be, for example, a USB flash drive, but is not limited to electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems or devices, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM: Random Access Memory), a read-only memory (ROM: Read Only Memory), an erasable programmable read-only memory (EPROM: Erasable Programmable Read Only Memory or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM: CD-Read Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this embodiment, the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system or device. The program code contained on the computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, RF (Radio Frequency: Radio Frequency), etc., or any suitable combination of the above.
上述计算机可读存储介质可以是接口调用日志分析设备中所包含的;也可以是单独存在,而未装配入接口调用日志分析设备中。The computer-readable storage medium may be included in the interface call log analysis device; or may exist independently without being assembled into the interface call log analysis device.
上述计算机可读存储介质承载有一个或者多个程序,当上述一个或者多个程序被接口调用日志分析设备执行时,使得接口调用日志分析设备:The computer-readable storage medium carries one or more programs. When the one or more programs are executed by the interface call log analysis device, the interface call log analysis device is caused to:
基于接口调用日志分析工具包中的过滤器拦截请求所述微服务的接口的服务请求,解析拦截的所述服务请求的服务信息,其中,所述服务信息是从服务请求中提取出的,用于描述请求本身及其上下文的信息,包括请求时间、请求者身份、请求参数、响应时间、响应者身份、响应内容以及其他相关信息,所述其他相关信息包括用户认证信息、客户端IP地址和请求来源;Based on the filter in the interface call log analysis toolkit, the service request of the interface requesting the microservice is intercepted, and the service information of the intercepted service request is parsed, wherein the service information is extracted from the service request and is used to describe the request itself and its context information, including request time, requester identity, request parameters, response time, responder identity, response content and other related information, and the other related information includes user authentication information, client IP address and request source;
基于预设的日志格式将所述服务信息转换为日志数据;Converting the service information into log data based on a preset log format;
获取外界输入的需求信息,根据所述需求信息对所述日志数据进行收集策略配置,根据配置的收集策略收集所述日志数据中的目标日志数据,其中,所述收集策略包括按时间间隔收集,按日志大小收集,按请求路径收集和按用户身份收集,所述收集策略通过可视化界面获取用户输入的收集策略参数,并生成相应的收集策略;Obtaining demand information input from the outside, configuring a collection strategy for the log data according to the demand information, and collecting target log data in the log data according to the configured collection strategy, wherein the collection strategy includes collecting by time interval, collecting by log size, collecting by request path, and collecting by user identity, and the collection strategy obtains the collection strategy parameters input by the user through a visual interface and generates a corresponding collection strategy;
将所述目标日志数据存储至预设的数据库,并确定所述数据库中多个目标日志数据组成的数据集,对所述数据集进行异常检测,其中,所述数据库包括关系型数据库和Nosql数据库中的至少一项,所述将所述目标日志数据存储至预设的数据库的步骤包括若在预设时间周期内所述服务请求的数量小于预设阈值,则将所述目标日志数据以表格的形式保存至关系型数据库;若在预设时间周期内所述服务请求的数量大于或等于预设阈值,则将所述目标日志数据保存至Nosql数据库;所述对所述数据集进行异常检测的步骤包括将所述数据集按照接口分成不同的数据组,并将每个数据组中的每条目标日志数据作为数据对象,其中,每个接口对应一个数据组;所述将每个数据组中的每条目标日志数据作为数据对象的步骤之后包括获取所述数据对象的请求者身份,根据所述请求者身份进行用户行为分析,其中,所述用户行为分析包括根据每个数据组中出现次数最多的请求者身份判断用户群体,使用机器学习或数据挖掘算法对每个数据组中所述用户群体进行分析,根据分析结果识别出所述用户群体的行为模式;若所述数据对象满足预设异常条件,则将所述数据对象标记为异常日志,并确定异常检测结果为存在异常;所述将所述目标日志数据存储至预设的数据库的步骤之前包括当日志条数短时间过多,则将所述日志条数的日志数据暂时性存储至虚拟空间,若虚拟空间中日志数据的日志大小超过预设存储大小,则将所述虚拟空间中的日志数据与预设收集策略匹配,以根据预设收集策略将虚拟空间中的日志数据存储至预设的数据库;The target log data is stored in a preset database, and a data set composed of multiple target log data in the database is determined, and anomaly detection is performed on the data set, wherein the database includes at least one of a relational database and a Nosql database, and the step of storing the target log data in the preset database includes: if the number of service requests within a preset time period is less than a preset threshold, the target log data is saved in the form of a table to the relational database; if the number of service requests within the preset time period is greater than or equal to the preset threshold, the target log data is saved to the Nosql database; the step of performing anomaly detection on the data set includes dividing the data set into different data groups according to interfaces, and taking each target log data in each data group as a data object, wherein each interface corresponds to a data group; the step of taking each target log data in each data group as a data object The step includes obtaining the identity of the requester of the data object, performing user behavior analysis based on the identity of the requester, wherein the user behavior analysis includes determining the user group based on the identity of the requester that appears most frequently in each data group, analyzing the user 
group in each data group using a machine learning or data mining algorithm, and identifying the behavior pattern of the user group based on the analysis result; if the data object meets a preset abnormal condition, marking the data object as an abnormal log, and determining that the abnormal detection result is an abnormality; before the step of storing the target log data in a preset database, when the number of log entries is too large in a short period of time, temporarily storing the log data of the number of log entries in a virtual space, and if the log size of the log data in the virtual space exceeds the preset storage size, matching the log data in the virtual space with a preset collection strategy, so as to store the log data in the virtual space in a preset database according to the preset collection strategy;
若异常检测结果为存在异常,则输出预设的预警信息。If the abnormality detection result shows that an abnormality exists, the preset warning information is output.
可以以一种或多种程序设计语言或其组合来编写用于执行本申请的操作的计算机程序代码,上述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN:Local Area Network)或广域网(WAN:Wide Area Network)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as "C" or similar programming languages. The program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
附图中的流程图和框图,图示了按照本申请各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flow chart and block diagram in the accompanying drawings illustrate the possible architecture, function and operation of the system, method and computer program product according to various embodiments of the present application. In this regard, each square box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the square box can also occur in a sequence different from that marked in the accompanying drawings. For example, two square boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each square box in the block diagram and/or flow chart, and the combination of the square boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation on the unit itself.
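As an illustration of such a software-implemented module, the following is a minimal sketch, not the actual implementation of the application, of an interceptor that wraps an interface handler and records the service information described earlier (request time, request parameters, response time, response content). All class and method names here are hypothetical, and a real deployment would use the servlet filter mechanism rather than a plain function wrapper.

```java
import java.time.Instant;
import java.util.Map;
import java.util.function.Function;

public class LogFilter {
    // Hypothetical record of the service information extracted for one call.
    public record CallLog(Instant requestTime, Map<String, String> params,
                          Instant responseTime, String response) {}

    // Intercept a call to an interface handler: note the request time,
    // delegate to the real handler, note the response time, and return
    // the collected service information alongside the response.
    public static CallLog intercept(Map<String, String> params,
                                    Function<Map<String, String>, String> handler) {
        Instant start = Instant.now();           // request time
        String response = handler.apply(params); // invoke the actual interface
        Instant end = Instant.now();             // response time
        return new CallLog(start, params, end, response);
    }

    public static void main(String[] args) {
        CallLog log = intercept(Map.of("user", "alice"),
                                p -> "hello " + p.get("user"));
        System.out.println(log.response());
    }
}
```

Because the interception is a pure wrapper around the handler, the module can be swapped out or renamed without affecting the interface it observes, which is consistent with the statement above that a module's name does not limit the unit itself.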
The readable storage medium provided by the present application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the above interface call log analysis method, and it can solve the technical problem of the high complexity of deploying and configuring log analysis tools. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the present application are the same as those of the interface call log analysis method provided by the above embodiments, and are not repeated here.
The present application further provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the interface call log analysis method described above.
The computer program product provided by the present application can solve the technical problem of the high complexity of deploying and configuring log analysis tools. Compared with the prior art, its beneficial effects are the same as those of the interface call log analysis method provided by the above embodiments, and are not repeated here.
The above are only some of the embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural transformation made using the contents of the specification and drawings of the present application under its technical concept, or any direct or indirect application in other related technical fields, falls within the patent protection scope of the present application.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411390817.4A | 2024-10-08 | 2024-10-08 | Interface call log analysis method, device, equipment, medium and product |
| Publication Number | Publication Date |
|---|---|
| CN118897784A | 2024-11-05 |
| CN118897784B (granted) | 2025-05-30 |
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |