Technical Field
Embodiments of the present invention relate to the field of real-time computing, and in particular to a data processing method, device, server and readable storage medium.
Background
Among current open-source components there is no mature streaming storage tool; most storage tools, such as HBase and MongoDB, do not support streaming storage. To meet this need, a file message queue capable of streaming storage has been introduced as a message engine. Compared with open-source message engines such as Kafka, however, the amount of data behind a single file message is far larger than that behind a single Kafka message. If file messages were sent to downstream consumers at random, as Kafka does, the load on the consumers would be unbalanced, their operating efficiency would be low, and the overall data processing speed would be dragged down.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a data processing method, device, server and readable storage medium that improve the operating efficiency of the processing units and speed up data processing.
To solve the above technical problem, an embodiment of the present invention provides a data processing method comprising the following steps:
when the number of file messages stored in a memory is less than a preset storage threshold, obtaining a plurality of to-be-processed file messages from a file message queue according to a user requirement and storing them in the memory, wherein each to-be-processed file message contains the location, in a storage component, of the corresponding to-be-processed data;
sending the to-be-processed file messages in the memory to a processing unit according to the consumption capacity of the processing unit;
obtaining, by the processing unit, the to-be-processed data corresponding to the to-be-processed file messages from the storage component and processing it.
An embodiment of the present invention further provides a data processing device, comprising:
a file acquisition module, configured to obtain a plurality of to-be-processed file messages from a file message queue according to a user requirement and store them in a memory when the number of file messages stored in the memory is less than a preset storage threshold, wherein each to-be-processed file message contains the location, in a storage component, of the corresponding to-be-processed data;
a file sending module, configured to send the to-be-processed file messages in the memory to a processing unit according to the consumption capacity of the processing unit;
a data processing module, configured to obtain, through the processing unit, the to-be-processed data corresponding to the to-be-processed file messages from the storage component and process it.
An embodiment of the present invention further provides a server, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform any of the data processing methods described above.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the data processing methods described above.
The data processing method provided by the embodiments of the present invention works with a file message queue used for streaming storage and provides a way to allocate its file messages: the number of file messages already in memory is checked before new file messages are fetched, so that excessive file messages do not pile up in memory waiting to be consumed. When file messages are sent, they are sent according to the consumption capacity of the processing units rather than at random, so the processing units handle the file messages and their corresponding data more efficiently.
In addition, in the data processing method provided by the embodiments of the present invention, the method further comprises: when the number of file messages stored in the memory is greater than the preset storage threshold, suspending the acquisition of to-be-processed file messages from the file message queue and sending the file messages stored in the memory to the corresponding processing units. When too many file messages are stored in memory, the file messages already in memory are processed first, avoiding redundant file messages in memory.
In addition, in the data processing method provided by the embodiments of the present invention, sending the to-be-processed file messages in the memory to the processing unit according to the consumption capacity of the processing unit comprises: determining, according to the consumption capacity of the processing unit, the number of file messages a tuple sent to the processing unit can contain, wherein the number of tuples is the same as the number of processing units; and taking that number of to-be-processed file messages from the memory to form the tuple and sending it to the processing unit. Setting the number of file messages sent according to the consumption capacity of the processing units allows the processing units to receive and process file messages at maximum efficiency.
In addition, in the data processing method provided by the embodiments of the present invention, sending the to-be-processed file messages in the memory to the processing unit according to the consumption capacity of the processing unit comprises: obtaining from the memory the to-be-processed file messages to be handled by a first batch, wherein the number of to-be-processed file messages in the first batch is determined according to the number of processing units; and sending the to-be-processed file messages of the first batch to the processing units. Batching the to-be-processed file messages allows their processing to be planned reasonably and controlled effectively.
In addition, in the data processing method provided by the embodiments of the present invention, after sending the to-be-processed file messages of the first batch to the processing units, the method further comprises: determining whether the number of batches exceeds a preset maximum number of batches; and if the number of batches is less than or equal to the maximum number of batches, obtaining from the memory the to-be-processed file messages to be handled by a second batch and sending them to the processing units. Limiting the number of batches bounds the data processing time and makes it easier to optimize the data processing strategy.
In addition, in the data processing method provided by the embodiments of the present invention, obtaining, by the processing unit, the to-be-processed data corresponding to the to-be-processed file messages from the storage component and processing it comprises: filtering the to-be-processed data corresponding to the to-be-processed file messages according to the user requirement; and processing the filtered to-be-processed data. Filtering the to-be-processed data according to the user requirement makes the acquired data more targeted, reduces the amount of data to be processed and improves processing efficiency.
Brief Description of the Drawings
One or more embodiments are illustrated by way of example with reference to the corresponding figures, and these illustrations do not limit the embodiments. Elements with the same reference numerals in the figures denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a system architecture diagram of the data processing device to which the data processing method provided by the first embodiment of the present invention is applied;
FIG. 2 is a flowchart of the data processing method provided by the first embodiment of the present invention;
FIG. 3 is a flowchart of the data processing method provided by the second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the data processing device provided by the fourth embodiment of the present invention;
FIG. 5 is a schematic structural diagram of the server provided by the fifth embodiment of the present invention.
Detailed Description of the Embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that many technical details are given in the embodiments to help the reader better understand the present application, and that the technical solutions claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments.
The following embodiments are divided for convenience of description and do not limit the specific implementation of the present invention; the embodiments may be combined with and refer to each other as long as they do not contradict one another.
The first embodiment of the present invention relates to a data processing method applied to a real-time computing system that stores data streams through a preset storage component. FIG. 1 shows the system architecture of the data processing device to which the method is applied. Specifically, a data stream enters through the data access layer and is transferred to the preset storage component for storage. The preset storage component generates file messages (mesfiles) in the time order in which the data stream is stored, thereby forming a file message queue. The real-time computing system obtains the file messages that match the user requirement from the file message queue, obtains the corresponding data from the preset storage component according to those file messages, computes on the data according to the business requirement to produce business statistics, and then stores the generated business statistics through the data landing layer.
The file message queue is generated as follows: a plurality of file messages are generated according to the order in which the data stream is stored in the storage component, and the file messages are arranged in that storage order to form the file message queue.
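As a rough illustration of the structures just described, the sketch below models one mesfile entry and the queue built from the storage order. The class and field names (FileMessage, dataLocation, offset) are assumptions for illustration only, not details taken from this description.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only: the data model is an assumption, not the patent's actual format.
public class FileMessageQueueBuilder {

    /** One mesfile entry: it points at a block of stored stream data rather than carrying the data. */
    public static final class FileMessage {
        final long offset;          // position of this message in the queue
        final String dataLocation;  // where the corresponding data sits in the storage component

        FileMessage(long offset, String dataLocation) {
            this.offset = offset;
            this.dataLocation = dataLocation;
        }
    }

    /** Arrange messages in storage (time) order to form the file message queue. */
    public static Queue<FileMessage> build(Iterable<String> storedBlockLocations) {
        Queue<FileMessage> queue = new ArrayDeque<>();
        long offset = 0;
        for (String location : storedBlockLocations) {
            queue.add(new FileMessage(offset++, location)); // one mesfile per stored block, in time order
        }
        return queue;
    }
}
```

Each entry only records where its data sits in the storage component; the data itself stays in the store until a processing unit fetches it.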
The specific steps of the data processing method of this embodiment are shown in FIG. 2:
Step 201: when the number of file messages stored in the memory is less than a preset storage threshold, obtain a plurality of to-be-processed file messages from the file message queue according to the user requirement and store them in the memory, wherein each to-be-processed file message contains the location, in the storage component, of the corresponding to-be-processed data.
In this embodiment, the real-time computing system generally has an interaction unit for acquiring and sending data; taking the Apache Storm distributed real-time computing system as an example, the interaction unit is the Spout. Before the real-time computing system fetches file messages from the file message queue, it must determine whether file messages can currently be fetched. The fetch condition at least includes that the number of file messages stored in memory is less than the preset memory storage threshold. The file messages stored in memory are those left over from the previous round of consumption, i.e., not yet consumed by the processing units. File messages may remain in memory because a failure occurred during data processing, because the number of batches reached the maximum while the to-be-processed data had not all been processed, or because data processing was paused. If the number of file messages stored in memory were allowed to exceed the preset storage threshold, file messages would pile up waiting to be consumed. To avoid this, the interaction unit fetches to-be-processed file messages from the file message queue only when the number of file messages stored in memory is less than the preset storage threshold.
Optionally, when the number of file messages stored in memory is greater than or equal to the preset storage threshold, the interaction unit suspends fetching to-be-processed file messages from the file message queue and instead gives priority to the file messages remaining in memory, i.e., it sends the file messages stored in memory to the corresponding processing units until the number of file messages stored in memory falls below the preset storage threshold.
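The gating step of step 201 together with this optional drain branch can be pictured as below. This is a minimal sketch; every name in it (memoryBuffer, PRESET_THRESHOLD, fetchFromQueue, drainToProcessingUnits) is an illustrative placeholder rather than part of the claimed method.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of the interaction unit's gating logic; all names are illustrative assumptions.
public class MessageGate {

    /** Stand-in for one mesfile entry; only the storage location matters for this sketch. */
    static final class FileMessage {
        final String dataLocation;
        FileMessage(String dataLocation) { this.dataLocation = dataLocation; }
    }

    private static final int PRESET_THRESHOLD = 100;                     // preset memory storage threshold
    private final Deque<FileMessage> memoryBuffer = new ArrayDeque<>();  // leftover, unconsumed messages

    void onPollCycle() {
        if (memoryBuffer.size() < PRESET_THRESHOLD) {
            // Room left: pull more pending file messages that match the user requirement.
            List<FileMessage> fetched = fetchFromQueue();
            memoryBuffer.addAll(fetched);
        } else {
            // At or over the threshold: stop fetching and drain what is already in memory
            // to its processing units until the count drops below the threshold again.
            drainToProcessingUnits(memoryBuffer);
        }
    }

    private List<FileMessage> fetchFromQueue() { /* read from the file message queue */ return List.of(); }
    private void drainToProcessingUnits(Deque<FileMessage> buffered) { /* send to the matching units */ }
}
```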
Further, before the interaction unit fetches file messages from the file message queue, the consumption capacity of the processing units may also be assessed. The consumption capacity includes the amount of data a processing unit can process; the number of file messages that can be sent is determined from that amount, which in turn determines how many file messages should be fetched from the file message queue.
Step 202: send the to-be-processed file messages in the memory to the processing units according to the consumption capacity of the processing units.
The consumption capacity of a processing unit includes the amount of data it can process. There is more than one processing unit, and different processing units perform different processing functions. The to-be-processed file messages are sent to the processing units according to the consumption capacity of the processing units that will process the to-be-processed data.
Optionally, the interaction unit determines, according to the consumption capacity of a processing unit, the number of file messages a tuple sent to that processing unit can contain, wherein the number of tuples is at least one; it then takes that number of to-be-processed file messages from memory to form the tuple and sends it to the processing unit.
Specifically, the to-be-processed file messages that the interaction unit takes from memory are sent to the processing units as tuples, and the number of to-be-processed file messages in a tuple is determined by the consumption capacity of the processing unit. Unlike the previous random-sending approach, this matches the number of to-be-processed file messages sent to the consumption capacity of the processing units, so that every processing unit stays busy while processing data, with little idle time and short completion intervals, making the processing units more efficient.
Optionally, the interaction unit sends the to-be-processed file messages in batches: it obtains from memory the to-be-processed file messages required by a first batch, wherein the number of to-be-processed file messages required by the first batch is determined according to the number of processing units, and sends the to-be-processed file messages of the first batch to the processing units.
The number of tuples sent in the first batch equals the number of processing units. If there are 10 processing units and each tuple contains 2 file messages, the first batch sends 10 tuples, i.e., 20 file messages. Because the number of tuples sent in the first batch equals the number of processing units, every processing unit can work at the same time, which keeps the processing units efficient.
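A minimal sketch of this first-batch sizing follows, assuming the consumption capacity has already been reduced to a per-tuple message count; the method and parameter names are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: one tuple per processing unit, each tuple sized to that unit's consumption capacity.
public class BatchSizer {

    static List<List<String>> packFirstBatch(Deque<String> memoryBuffer,
                                             int processingUnitCount,
                                             int messagesPerTuple) {
        List<List<String>> tuples = new ArrayList<>();
        for (int u = 0; u < processingUnitCount && !memoryBuffer.isEmpty(); u++) {
            List<String> tuple = new ArrayList<>();
            while (tuple.size() < messagesPerTuple && !memoryBuffer.isEmpty()) {
                tuple.add(memoryBuffer.poll());   // take pending file messages out of memory
            }
            tuples.add(tuple);                    // one tuple per processing unit
        }
        return tuples;
    }

    public static void main(String[] args) {
        Deque<String> buffer = new ArrayDeque<>();
        for (int i = 0; i < 40; i++) buffer.add("mesfile-" + i);
        // 10 processing units, 2 file messages per tuple -> 10 tuples, 20 messages in the first batch.
        List<List<String>> firstBatch = packFirstBatch(buffer, 10, 2);
        System.out.println(firstBatch.size() + " tuples, " + (40 - buffer.size()) + " messages sent");
    }
}
```

Running the main method with 10 processing units and 2 messages per tuple reproduces the example above: 10 tuples carrying 20 file messages leave the memory buffer.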
Optionally, a maximum number of batches is preset. After the first batch, the interaction unit determines whether the number of batches exceeds the maximum; if the number of batches is less than or equal to the maximum, it obtains from memory the to-be-processed file messages required by a second batch and sends them to the processing units. The trigger for fetching the to-be-processed file messages of the second batch can be preset as needed, for example triggering the second batch once the number of idle processing units exceeds a certain value, or sending the second batch on a timer.
The maximum number of batches is set to bound the data processing time and make it easier to optimize the data processing strategy. For example, if the to-be-processed file messages have still not all been sent to the processing units after the maximum number of batches has been reached, this may indicate that too few processing units are configured or that the functions assigned to them are unreasonable. For such problems, the following optimization strategy is proposed: increase the number of processing units or change the functions assigned to them, so that data is processed faster and more efficiently.
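A small guard like the following captures the batch-count check; the trigger that calls it (idle-unit count, timer, or otherwise) and all names here are assumptions.

```java
// Minimal sketch of the batch-count guard; maxBatches and the trigger policy are assumptions.
public class BatchController {
    private final int maxBatches;   // preset maximum number of batches
    private int batchesSent = 0;

    BatchController(int maxBatches) { this.maxBatches = maxBatches; }

    /** Called by whatever trigger is configured (idle-unit count, timer, ...). */
    boolean trySendNextBatch(Runnable sendBatch) {
        if (batchesSent > maxBatches) {
            // The cap was exceeded with messages still pending: a hint that there are too few
            // processing units or that their functions are split unreasonably.
            return false;
        }
        sendBatch.run();   // batch count <= maximum: fetch and send the next batch
        batchesSent++;
        return true;
    }
}
```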
Optionally, the maximum offset (offset value), in the file message queue, of the to-be-processed file messages required by the first batch is recorded; when the second batch is processed, fetching starts after the to-be-processed file message corresponding to that maximum offset.
Specifically, the maximum offset corresponds to the last file message already sent in this batch of data (the first batch); at the next send (the second batch), sending to the processing units for consumption starts from the file message after the maximum offset. Each batch of data is given a number (BatchID). Once ACK callbacks have been received from all processing units, this batch of data is known to have been fully processed, and its BatchID and maximum offset are recorded in ZooKeeper.
Batching and recording each batch's maximum offset mean that, if a device failure interrupts processing, the earlier processing can be resumed from the recorded maximum offset without resending; likewise, user requirements may be continuous, for example the user may want to continue the previous request, so file messages can continue to be fetched from the maximum offset onward, which keeps data processing continuous and better matches the user requirement.
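One way such a commit could look is sketched below using Apache Curator as the ZooKeeper client; the znode layout, the BatchID:offset encoding and the use of Curator itself are assumptions, not details taken from this description.

```java
import java.nio.charset.StandardCharsets;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Hedged sketch: record a batch's ID and maximum offset in ZooKeeper once every processing
// unit has ACKed, and read it back to decide where the next batch should start.
public class BatchOffsetRecorder {
    private final CuratorFramework zk;

    BatchOffsetRecorder(String connectString) {
        zk = CuratorFrameworkFactory.newClient(connectString, new ExponentialBackoffRetry(1000, 3));
        zk.start();
    }

    /** Call only after ACK callbacks from all processing units for this batch have arrived. */
    void commitBatch(String topic, long batchId, long maxOffset) throws Exception {
        String path = "/mesfile-consumer/" + topic + "/committed";          // assumed znode layout
        byte[] payload = (batchId + ":" + maxOffset).getBytes(StandardCharsets.UTF_8);
        if (zk.checkExists().forPath(path) == null) {
            zk.create().creatingParentsIfNeeded().forPath(path, payload);
        } else {
            zk.setData().forPath(path, payload);                            // latest committed batch wins
        }
    }

    /** On restart, or for the next batch, resume after the recorded maximum offset. */
    long nextStartOffset(String topic) throws Exception {
        String path = "/mesfile-consumer/" + topic + "/committed";
        byte[] payload = zk.getData().forPath(path);
        long maxOffset = Long.parseLong(new String(payload, StandardCharsets.UTF_8).split(":")[1]);
        return maxOffset + 1;
    }
}
```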
Step 203: obtain, by the processing unit, the to-be-processed data corresponding to the to-be-processed file messages from the storage component and process it.
A file message is generated by the preset storage component from the incoming data: when storing the data stream, the storage component generates a file message for the stored data, and the file message contains the address, in the preset storage component, of the data corresponding to that file message. After receiving a to-be-processed file message, the processing unit obtains the to-be-processed data according to the address recorded in the message.
Optionally, the to-be-processed data corresponding to the to-be-processed file messages is filtered according to the user requirement, and the filtered to-be-processed data is processed. For example, a filter condition can be set according to the user requirement, such as reading only the data of a certain channel or consuming only the data of a certain website.
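A filter of this kind might look like the sketch below; the record shape (channel, site, payload) is an assumption used only for illustration.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative filter only; the shape of a data record is an assumption.
public class UserRequirementFilter {

    record DataRecord(String channel, String site, String payload) {}

    /** Keep only the records the user asked for, e.g. a single channel, before heavier processing. */
    static List<DataRecord> filterByChannel(List<DataRecord> pending, String wantedChannel) {
        return pending.stream()
                .filter(r -> wantedChannel.equals(r.channel()))
                .collect(Collectors.toList());
    }
}
```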
The results produced by the processing units are stored in databases such as ES or HBase for the user to query.
Compared with the related art, the embodiments of the present invention provide a file message queue for streaming storage and a way to allocate its file messages: the number of file messages in memory is checked before file messages are fetched, so that excessive file messages do not pile up in memory waiting to be consumed. When file messages are sent, they are sent according to the consumption capacity of the processing units rather than at random, so the processing units handle the file messages and their corresponding data more efficiently. In addition, when too many file messages are stored in memory, the file messages in memory are processed first, avoiding redundant file messages in memory; the number of file messages sent is set according to the consumption capacity of the processing units, so that the processing units can receive and process file messages at maximum efficiency; batching the to-be-processed file messages allows their processing to be planned reasonably and controlled effectively; limiting the number of batches bounds the data processing time and makes it easier to optimize the data processing strategy; and filtering the to-be-processed data according to the user requirement makes the acquired data more targeted, reduces the amount of data to be processed and improves processing efficiency.
The second embodiment of the present invention relates to a method for consuming a message queue. The flow is shown in FIG. 3.
S301: obtain the user's business requirement;
S302: determine whether the number of file messages in memory is greater than the preset memory storage threshold;
if the number of file messages in memory is greater than or equal to the preset memory storage threshold, suspend fetching file messages and perform S303;
S303: send the remaining file messages stored in memory to the corresponding processing units;
if the number of file messages in memory is less than the preset memory storage threshold, perform S304;
S304: obtain the to-be-processed file messages from the file message queue according to the user's business requirement and store them in memory;
S305: determine whether the maximum number of batches has been exceeded;
if the number of batches is less than or equal to the maximum number of batches, perform S306;
if the maximum number of batches has been exceeded, end the flow;
S306: send the to-be-processed file messages in memory to the processing units according to the consumption capacity of the processing units;
S307: obtain the data corresponding to the file messages from the preset storage component and perform business processing to fulfil the user's business requirement.
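Read end to end, the flow of FIG. 3 can be condensed into a driver loop like the one below. Every method it calls is a hypothetical placeholder standing in for the corresponding step, so this is an orientation sketch rather than an implementation.

```java
// Compact driver mirroring S301–S307; all called methods and constants are hypothetical placeholders.
public class ConsumptionFlow {

    void run() {
        Object requirement = getUserRequirement();               // S301
        int batches = 0;
        while (true) {
            if (memoryCount() >= PRESET_THRESHOLD) {             // S302
                drainMemory();                                   // S303: push leftovers first
                continue;
            }
            fetchToMemory(requirement);                          // S304
            if (batches > MAX_BATCHES) {                         // S305
                break;                                           // batch cap exceeded: end the flow
            }
            sendByCapacity();                                    // S306: size sends to unit capacity
            fetchAndProcess();                                   // S307: units pull data and compute
            batches++;
        }
    }

    // --- placeholders for the steps sketched elsewhere in this description ---
    private static final int PRESET_THRESHOLD = 100;
    private static final int MAX_BATCHES = 10;
    private Object getUserRequirement() { return new Object(); }
    private int memoryCount() { return 0; }
    private void drainMemory() {}
    private void fetchToMemory(Object requirement) {}
    private void sendByCapacity() {}
    private void fetchAndProcess() {}
}
```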
Compared with the related art, the embodiments of the present invention provide a file message queue for streaming storage and a way to allocate its file messages: the number of file messages in memory is checked before file messages are fetched, so that excessive file messages do not pile up in memory waiting to be consumed. When file messages are sent, they are sent according to the consumption capacity of the processing units rather than at random, so the processing units handle the file messages and their corresponding data more efficiently. In addition, when too many file messages are stored in memory, the file messages in memory are processed first, avoiding redundant file messages in memory; the number of file messages sent is set according to the consumption capacity of the processing units, so that the processing units can receive and process file messages at maximum efficiency; batching the to-be-processed file messages allows their processing to be planned reasonably and controlled effectively; limiting the number of batches bounds the data processing time and makes it easier to optimize the data processing strategy; and filtering the to-be-processed data according to the user requirement makes the acquired data more targeted, reduces the amount of data to be processed and improves processing efficiency.
Since the first embodiment corresponds to this embodiment, this embodiment can be implemented in cooperation with the first embodiment. The relevant technical details mentioned in the first embodiment remain valid in this embodiment, and the technical effects achievable in the first embodiment can likewise be achieved here; to reduce repetition, they are not repeated. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the first embodiment.
The division of the steps of the above methods is only for clarity of description. In implementation, steps may be merged into one step, or a step may be split into several steps; as long as the same logical relationship is preserved, such variants fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or flow without changing its core design also falls within the protection scope of this patent.
The third embodiment of the present invention relates to a data processing method. In this embodiment, the Apache Storm distributed real-time computing system is taken as an example: Apache Storm receives input data from a raw data source (such as an Apache Kafka queue or a Kestrel queue), passes it to a series of processing units for processing, and outputs the processed data. The components of Apache Storm include: the tuple (Tuple), the basic unit of data transfer in Apache Storm; processing units (Bolts), used to filter, aggregate, join and otherwise process data, where Apache Storm usually has multiple Bolts with different processing functions; and sources (Spouts), used to read data from data sources. However, real-time computing systems such as Apache Storm have no facility for storing data streams, so a storage component for storing data streams (i.e., for streaming storage) was developed. While storing a continuous data stream, this storage component keeps generating file messages (mesfiles) in time order, forming a file message queue. The data behind such a file message is much larger than the data behind the messages of other queues; for example, one message in an Apache Kafka queue may correspond to 1 MB of data, whereas one file message in the file message queue corresponds to 256 MB, so messages cannot be sent to the Bolts at random for processing the way an Apache Kafka queue is consumed.
Therefore, the present application provides a data processing method, specifically:
When the number of file messages in memory is less than the preset memory storage threshold, the Spout obtains to-be-processed file messages from the file message queue and stores them in memory.
When the number of file messages in memory is greater than or equal to the preset memory storage threshold, the Spout suspends fetching to-be-processed file messages from the file message queue and instead sends the file messages left over from the previous round of processing, which are stored in memory, to the corresponding processing units until the number of file messages in memory is less than the preset memory storage threshold.
The Spout obtains multiple to-be-processed file messages from memory according to the processing capacity of the Bolts (which includes at least the maximum amount of data one Bolt can process and the number of Bolts) and sends the Bolts multiple Tuples, each containing a certain number of file messages, where the number of file messages in one Tuple depends on the maximum amount of data one Bolt can process and the number of Tuples depends on the number of Bolts.
After receiving a Tuple containing to-be-processed file messages, a Bolt obtains the to-be-processed data from the storage component according to the location information, carried in those file messages, of the corresponding to-be-processed data, and processes it.
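Such a Bolt might be sketched against the Storm 2.x API as follows; the tuple field name "locations", the StorageComponentClient and the processing body are assumptions standing in for the storage component and business logic described above.

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

// Hedged sketch: a Bolt that pulls the pending data by its storage location and ACKs the tuple.
public class MesfileBolt extends BaseRichBolt {
    private OutputCollector collector;
    private transient StorageComponentClient storage;   // hypothetical client for the streaming store

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.storage = StorageComponentClient.connect(topoConf);   // assumption: how the client is obtained
    }

    @Override
    @SuppressWarnings("unchecked")
    public void execute(Tuple tuple) {
        // Each tuple carries the storage locations of the data behind its file messages.
        for (String location : (Iterable<String>) tuple.getValueByField("locations")) {
            byte[] pending = storage.read(location);   // fetch the pending data, not the message body
            process(pending);                          // business computation / aggregation
        }
        collector.ack(tuple);                          // the ACK lets the Spout commit the batch offset
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { /* terminal bolt in this sketch */ }

    private void process(byte[] data) { /* filter and aggregate per the user requirement */ }

    /** Hypothetical storage-component client; only here to keep the sketch self-contained. */
    interface StorageComponentClient {
        static StorageComponentClient connect(Map<String, Object> conf) { return location -> new byte[0]; }
        byte[] read(String location);
    }
}
```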
The processed data is saved in a database for the user to retrieve.
This embodiment corresponds to the above embodiments and can cooperate with the other embodiments. The relevant technical details mentioned in the other embodiments remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the other embodiments.
The fourth embodiment of the present invention relates to a data processing device, as shown in FIG. 4, comprising:
a file acquisition module 401, configured to obtain a plurality of to-be-processed file messages from the file message queue according to the user requirement and store them in the memory when the number of file messages stored in the memory is less than the preset storage threshold, wherein each to-be-processed file message contains the location, in the storage component, of the corresponding to-be-processed data;
a file sending module 402, configured to send the to-be-processed file messages in the memory to the processing unit according to the consumption capacity of the processing unit;
a data processing module 403, configured to obtain, through the processing unit, the to-be-processed data corresponding to the to-be-processed file messages from the storage component and process it.
Compared with the related art, the embodiments of the present invention provide a file message queue for streaming storage and a way to allocate its file messages: the number of file messages in memory is checked before file messages are fetched, so that excessive file messages do not pile up in memory waiting to be consumed. When file messages are sent, they are sent according to the consumption capacity of the processing units rather than at random, so the processing units handle the file messages and their corresponding data more efficiently.
It is not hard to see that this embodiment is the device embodiment corresponding to the first embodiment and can be implemented in cooperation with it. The relevant technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the first embodiment.
It is worth mentioning that the modules involved in this embodiment are logical modules. In practical applications, a logical unit may be one physical unit, part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present invention, units that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, which does not mean that no other units exist in this embodiment.
The fifth embodiment of the present invention relates to a server, as shown in FIG. 5, comprising:
at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the data processing method of the first embodiment or the message queue consumption method of the second embodiment.
The memory and the processor are connected by a bus, which may include any number of interconnected buses and bridges linking together various circuits of the one or more processors and the memory. The bus may also link together various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor is transmitted over a wireless medium through an antenna, and the antenna also receives data and passes it to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions, while the memory may be used to store data used by the processor when performing operations.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be accomplished by a program instructing the relevant hardware. The program is stored in a storage medium and includes a number of instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Those of ordinary skill in the art will appreciate that the above embodiments are specific examples for implementing the present invention, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present invention.