Detailed Description
In order that the above-recited objects, features and advantages of the present application may become more readily apparent, a more particular description of the application is rendered below by reference to the appended drawings and the following detailed description.
For a better understanding of the present application, the following description illustrates, for those skilled in the art, the concepts related to the present application:
Remote direct memory access (RDMA) is a technique capable of transferring data directly from the memory of one computer to the memory of another computer over a network without involving either operating system. By transferring data directly between application memories, RDMA avoids the data duplication between application memory and the operating system cache, requires no processor overhead or context switching, frees memory bandwidth and processor cycles, and can effectively improve application system performance.
RDMA WRITE is a data writing mode in which one computer writes data from its local memory directly into the local memory of another computer.
A remote direct data access device is a device applying RDMA technology, and can take the form of a network card.
A memory slice is a memory area used for storing local content of a larger file, or for storing a smaller file in its entirety; each memory slice also has a corresponding address by which the server and the user equipment can locate it.
The page cache (PAGE CACHE) is a memory block managed by the kernels of the server and the user equipment for caching the logical contents of files and optimizing how data is read, thereby accelerating access to images and data on the disk.
File storage parameters: in order to ensure continuity between file slices (local contents of a file) when a large file is sliced for transmission, each file slice can be defined by file storage parameters. Specifically, the file storage parameters comprise a file offset parameter and a length parameter: the file offset parameter represents the starting position of the file slice, and the length parameter represents the content length of the file slice.
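For illustration only (the struct and field names below are hypothetical, not part of the claimed method), a file slice defined by these two parameters could be represented in C as:

```c
#include <sys/types.h>

/* Hypothetical sketch of the file storage parameters described above:
 * a file slice is fully defined by where it starts in the file (offset)
 * and how many bytes it covers (length). */
struct file_storage_params {
    off_t  offset;  /* starting position of the file slice within the file */
    size_t length;  /* content length of the file slice in bytes */
};
```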
Memory registration: a process in the server or in the user equipment can register, with the operating system, a memory slice it intends to use; the registered memory slice is then bound to that process.
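A minimal sketch of such a registration using the Linux ibverbs API (assuming a protection domain pd has already been allocated via ibv_alloc_pd; the helper name and page-size alignment are illustrative assumptions):

```c
#include <infiniband/verbs.h>
#include <stdlib.h>

/* Register one memory slice with the RDMA device so that it can later be
 * used as the source or target of RDMA operations. The returned ibv_mr
 * carries the lkey/rkey needed to reference the slice in work requests. */
static struct ibv_mr *register_memory_slice(struct ibv_pd *pd, size_t capacity)
{
    void *buf = NULL;
    /* Page alignment is customary for RDMA-registered buffers. */
    if (posix_memalign(&buf, 4096, capacity) != 0)
        return NULL;
    return ibv_reg_mr(pd, buf, capacity,
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
}
```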
A direct-mode read-write interface (Direct I/O) provides a read-write mode that bypasses the file write buffer and read buffer: when a process performs read or write operations on a system file, calling the direct-mode read-write interface passes through the system's file write buffer and read buffer so as to access the storage directly.
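A minimal sketch of such a direct read on Linux (an illustrative assumption: O_DIRECT requires the buffer address, offset and length to be suitably aligned, typically to the logical block size of the device):

```c
#define _GNU_SOURCE         /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

/* Read `length` bytes at `offset` from `path` into `aligned_buf`,
 * bypassing the kernel PAGE CACHE entirely. */
static ssize_t direct_read(const char *path, void *aligned_buf,
                           size_t length, off_t offset)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, aligned_buf, length, offset);
    close(fd);
    return n;
}
```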
The virtual file system (VFS) can read and write different file systems on different physical media in a standard read-write manner; that is, it provides a unified operation interface and application programming interface for various file systems, so that system calls such as read and write can be made without concern for the underlying storage medium or file system type.
The fourth extended file system (EXT4) is a file system of the operating system kernel; it is the underlying file system provided by the kernel that interfaces directly with the hard disk input/output interface.
The data transmission method in the embodiment of the application can be applied in particular to an RDMA-based data transmission scenario between user equipment and a server. In a related technical process, when the server receives a file acquisition request from the user equipment, the server locally registers a memory slice matched to the file size, reads the file data from the server's local storage into the PAGE CACHE of the kernel layer, then an application process running in user mode in the server extracts the file data from the PAGE CACHE and stores it into the registered memory slice, and finally the server transmits the file data in that memory slice directly, through an RDMA network card, to a memory slice of the user equipment, completing the file transfer.
With this implementation, a file can be transmitted directly over the network from the memory area of one server to the memory area of the user equipment, which reduces processor pressure and system overhead in both the server and the user equipment and saves the data copying between system caches. However, this implementation still has two problems. 1. The server must perform a memory registration each time it responds to a file acquisition request, but memory registration inherently incurs high system overhead and is time-consuming, which reduces response speed; moreover, each registration occupies a new memory area, causing memory waste. 2. The server must read the file data from local storage into the PAGE CACHE of the kernel layer and then store the file data from the PAGE CACHE into the registered memory slice; this essentially moves the file data from one part of memory to another, a process that actually incurs considerable memory usage cost.
To solve problem 1 above, in the embodiment of the present application, before the transmission flow starts, a plurality of memory slices of the same capacity may be registered in the respective memories of the server and the user equipment, so that memory pools are formed in advance on both sides. When the server responds to a read request of the user equipment, no memory registration operation is required; instead, based on memory-pool management, already-registered memory slices are selected for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, and based on memory-pool management the server and the user equipment can use memory on demand and release memory for reuse, greatly improving memory utilization efficiency.
In another implementation, the capacities of the memory slices registered by the server and the user equipment may differ. In that case, the server and the user equipment may record the capacities of their registered memory slices and, when the slices are actually used, select the memory slices to be used for transmitting a file from the registered memory slices according to the recorded capacities and the size of the file to be transmitted.
To solve problem 2, the embodiment of the application can omit the PAGE CACHE data relay: the server can bypass the PAGE CACHE and store file data from local storage directly into the registered memory slice, thereby reducing the number of in-memory data copies during file transmission, further reducing memory usage cost and improving transmission efficiency.
Referring to fig. 1, a system architecture diagram of a data transmission process according to an embodiment of the present application is shown, including a server and user equipment. The embodiment of the application describes the data transmission of the server and the user equipment in terms of a user layer, a kernel layer and a hardware layer. The user layer is constructed based on the user mode and the kernel layer based on the kernel mode. The user layer is the activity space of upper-layer applications, whose execution depends on resources provided by the kernel layer; the kernel layer controls the hardware resources of the computer and provides an environment for upper-layer applications to run, and an application running in the user layer can access computer resources of the kernel layer through system calls. The physical computer resources of the server and the user equipment constitute the hardware layer, such as the hard disk, the network card, and the like.
In the embodiment of the present application, referring to fig. 1, before file transmission starts, the user equipment may perform memory registration, registering m first memory slices for a first process to form a memory pool, where the memory pool registered for the first process includes first memory slice 1, first memory slice 2, first memory slice 3, ..., first memory slice m. Preferably, first memory slices 1 through m have the same capacity, and each first memory slice can be used by the first process. Likewise, the server may also perform memory registration in advance, registering n second memory slices for a second process to form a memory pool, where the memory pool registered for the second process includes second memory slice 1, second memory slice 2, second memory slice 3, ..., second memory slice n. The second memory slices may be used by the second process. Here m and n are not limited in size and may be the same or different. In addition, before file transfer starts, the server also transmits information about the files it stores locally (file identifier, occupied space, and so on) to the user equipment so that the user equipment is aware of them.
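The following sketch illustrates one possible shape of such a pre-registered pool of equally sized slices (hypothetical names; error cleanup is omitted for brevity):

```c
#include <infiniband/verbs.h>
#include <stdbool.h>
#include <stdlib.h>

/* A pool of equally sized memory slices, registered once before any
 * transfer so that no per-request memory registration is needed. */
struct slice_pool {
    struct ibv_mr **slices;   /* registered slices, one ibv_mr each */
    bool           *in_use;   /* usage state per slice */
    size_t          count;    /* m on the user equipment, n on the server */
    size_t          capacity; /* common capacity of every slice */
};

static struct slice_pool *pool_create(struct ibv_pd *pd,
                                      size_t count, size_t capacity)
{
    struct slice_pool *p = calloc(1, sizeof(*p));
    if (!p)
        return NULL;
    p->slices = calloc(count, sizeof(*p->slices));
    p->in_use = calloc(count, sizeof(*p->in_use));
    if (!p->slices || !p->in_use)
        return NULL;              /* error cleanup omitted for brevity */
    p->count = count;
    p->capacity = capacity;
    for (size_t i = 0; i < count; i++) {
        void *buf = NULL;
        if (posix_memalign(&buf, 4096, capacity) != 0)
            return NULL;          /* error cleanup omitted for brevity */
        p->slices[i] = ibv_reg_mr(pd, buf, capacity,
                                  IBV_ACCESS_LOCAL_WRITE |
                                  IBV_ACCESS_REMOTE_WRITE);
        if (!p->slices[i])
            return NULL;          /* error cleanup omitted for brevity */
    }
    return p;
}
```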
When the user equipment needs to acquire, for the first process, a target file stored by the server, the user equipment can respond to a first read request issued by the first process for the target file, obtain the occupied-space size and the identifier of the target file, and determine the number of target first memory slices required to store the target file; that is, the sum of the capacities of the target first memory slices must be greater than or equal to the occupied-space size of the target file. The user equipment may then generate, for each target first memory slice, a second read request according to the address information of that target first memory slice, the identifier of the target file, and the file storage parameters corresponding to that target first memory slice, so that each second read request requests one file slice (local content) of the target file; specifically, the file storage parameters in a second read request define the starting position and content length of the file slice. When there are multiple target first memory slices, the user equipment obtains a request list of sequentially arranged second read requests and may send that request list to the server. It should be noted that, in one implementation, the user equipment may send the second read requests in the request list to the server one after another, and in another implementation, the user equipment may send the second read requests in the request list to the server simultaneously.
After the server obtains the request list of second read requests, it can process them starting from the first second read request. Referring to fig. 1, for each second read request the server determines, through the identifier of the target file carried in the request, the storage position of the target file on the server's local hard disk, calls the direct I/O interface of the kernel layer to bypass the PAGE CACHE and, through EXT4, directly access the hard disk input/output interface, slices the target file on the hard disk according to the file storage parameters in the request to obtain the corresponding file slice, and copies the file slice into a target second memory slice (for example, second memory slice 3). Finally, the remote direct data access device (RDMA network card) of the server can use the RDMA WRITE operation of RDMA technology to transmit the file slice in the target second memory slice over the network to the remote direct data access device of the user equipment, which writes the file slice into the target first memory slice (for example, first memory slice 3) according to the address information of the target first memory slice contained in the second read request, until the server has transmitted all file slices of the target file into the corresponding target first memory slices of the user equipment, completing the acquisition of the target file. The first process of the user equipment can then transfer the target file from the target first memory slices to the local hard disk of the user equipment for persistent storage, after which the memory pool of the user equipment can release the space of the target first memory slices, achieving memory release and reuse and greatly improving memory utilization efficiency. In addition, after the server finishes sending the target file, the memory pool of the server can release the space of the target second memory slices, likewise achieving memory release and reuse.
For the whole process, before the transmission flow starts, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use, without performing a memory registration operation on each response, thereby saving the system overhead and time of per-response memory registration; and based on memory-pool management, the server and the user equipment improve memory utilization efficiency.
It should be noted that, based on the data transmission method provided by the embodiment of the present application, several specific scenarios can be implemented as follows:
In one implementation, referring to fig. 2, a schematic diagram of a file downloading scenario provided by the embodiment of the present application is shown, including a user device and a server. A plurality of first memory slices may be registered in advance for a first process in the user equipment to form a memory pool, and a plurality of second memory slices may be registered in advance for a second process in the server to form a memory pool. Specifically, the user equipment may obtain the occupied-space size and identifier of the file to be downloaded from the server, generate a request list of second read requests in the manner described in the foregoing embodiment, and send the request list to the server. The server may process the request list in the manner described in the foregoing embodiment, transferring each file slice of the file to be downloaded in turn from the server's storage into the registered second memory slices and sending each file slice directly from a second memory slice to a first memory slice of the user equipment by RDMA, so that data transmission proceeds until all data of the file has been transferred. The user equipment thereby obtains the file downloaded from the server, and the downloaded file can be called by the first process of the user equipment.
In another implementation, referring to fig. 3, a schematic view of a scenario of a distributed data synchronization system provided by an embodiment of the present application is shown. The distributed data synchronization system includes an uploading node and data synchronization nodes: the uploading node can receive and store files uploaded by user equipment, and the data synchronization nodes are configured to acquire files from the uploading node to implement multi-node data backup synchronization. The uploading node can register a plurality of second memory slices in advance to form a memory pool, and each data synchronization node can register a plurality of first memory slices in advance to form a memory pool. The uploading node obtains the target file to be synchronously backed up and sends the occupied-space size of the target file to the data synchronization node; the data synchronization node can then generate a backup request list in the manner of generating the request list of second read requests described in the foregoing embodiment and send it to the uploading node, so that the uploading node can process the backup request list in the manner of processing the request list of second read requests described in the foregoing embodiment, transferring each file slice of the target file to be backed up in turn from the storage of the uploading node into the registered second memory slices and sending each file slice directly from a second memory slice to a first memory slice of the data synchronization node by RDMA. In this way all data of the file can be transmitted, so that a file uploaded by user equipment is synchronously backed up at the data synchronization nodes of the distributed data synchronization system. It should be noted that, in the embodiment of the present application, the uploading node may transmit the target file to be backed up to every data synchronization node, or the uploading node may transmit the target file to one data synchronization node, which then transmits it to the next data synchronization node, and so on, until all data synchronization nodes have obtained the target file.
In another implementation, referring to fig. 4, a schematic view of a scenario of a distributed computing system provided by an embodiment of the present application is shown. The distributed computing system includes slave computing nodes and a master computing node, where the master computing node is configured to obtain a computing task, split the computing task into a plurality of subtasks, and send the subtasks to the slave computing nodes for collaborative computation. A slave computing node can register a plurality of second memory slices in advance to form a memory pool, and the master computing node can register a plurality of first memory slices in advance to form a memory pool. A slave computing node obtains a subtask, performs the computation, and sends the identifier and occupied-space size of the computation result to the master computing node; the master computing node can then generate a synchronization request list in the manner of generating the request list of second read requests described in the foregoing embodiment and send it to the slave computing node, so that the slave computing node can process the synchronization request list in the manner of processing the request list of second read requests described in the foregoing embodiment, transferring each file slice of the computation result to be synchronized in turn from the memory of the slave computing node into the registered second memory slices and sending each file slice directly from a second memory slice to a first memory slice of the master computing node by RDMA. This achieves the splitting and collaborative computation of a larger computing task in the distributed computing system and the aggregation of the computation results at the master computing node.
It should be noted that, in the embodiment of the present application, the processes of obtaining the occupied-space size and identifier of the target file, the address information of the memory slices, the file storage parameters and other information, signals or data used are all carried out in compliance with the data protection regulations and policies of the relevant country or region and with the authorization granted by the owner of the corresponding device.
Before the transmission flow starts, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use, so that the server does not need to perform a memory registration operation after responding to a read request of the user equipment; instead, based on memory-pool management, it selects registered memory slices for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, and through memory-pool management the server and the user equipment use memory on demand and release memory for reuse, greatly improving memory utilization efficiency.
Referring to fig. 5, a flowchart of the steps of a data transmission method provided by an embodiment of the present application is shown. The method is applied to user equipment whose local memory is provided with a plurality of first memory slices, and includes:
Step 101, in response to a first read request, obtain the occupied-space size of a target file, and determine, according to the occupied-space size of the target file and the capacity of the first memory slices, the number of target first memory slices required to store the target file.
The target first memory slices are used for storing local contents of the target file.
In the embodiment of the present application, the first read request may be a request initiated by a first process in the user layer of the user equipment, such as a file download request or a file read request. By responding to the first read request, the user equipment can obtain the occupied-space size of the target file, which may be information provided by the server storing the target file, or information agreed in advance between the server and the user equipment and stored locally on the user equipment.
Further, since the local memory of the user equipment is provided with the plurality of first memory slices forming a memory pool (preferably of equal capacity), the user equipment can perform memory-pool management: when responding to the first read request, it determines the number of target first memory slices required to store the target file according to the occupied-space size of the target file and the capacity of a first memory slice, thereby using memory on demand and reducing the probability that the memory reserved for storing the file later turns out to be excessive or insufficient.
For example, if a target file has a size of 4M and a first memory slice has a capacity of 1M, the number of target first memory slices is 4.
Step 102, generate a second read request corresponding to each target first memory slice according to the address information of the target first memory slice and the file storage parameters corresponding to the target first memory slice, and send the second read request to the server.
The file storage parameters are used for characterizing the local content to be stored in the target first memory slice.
In this step, the user equipment may generate the second read request corresponding to each target first memory slice according to the address information of the target first memory slice, the identifier of the target file, and the file storage parameters corresponding to the target first memory slice, where the identifier of the target file is used by the server to look up the target file. That is, in the embodiment of the present application, one second read request is generated for each target first memory slice, so that one second read request requests one file slice (local content) of the target file, and the number of second read requests is the same as the number of target first memory slices; for example, when there are 4 target first memory slices, 4 second read requests may be generated in sequence to form the request list. In addition, the address information of a memory slice uniquely defines the location of that slice within the server or the user equipment; for example, the address information of a memory slice may be 0x01ab00a0.
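For illustration, one plausible, purely hypothetical layout of such a second read request, carrying everything the server needs: which file to read, which slice of it, and where in the user equipment's memory to write it:

```c
#include <stdint.h>

/* Hypothetical layout of a second read request. In a real RDMA setup the
 * remote address would be paired with the rkey of the registered target
 * first memory slice so that the server's RDMA WRITE can address it. */
struct second_read_request {
    uint64_t file_id;      /* identifier of the target file */
    uint64_t remote_addr;  /* address of the target first memory slice */
    uint32_t rkey;         /* remote key of that registered slice */
    uint64_t offset;       /* file storage parameter: slice start */
    uint64_t length;       /* file storage parameter: slice length */
};
```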
Of course, in the embodiment of the application, the server can also look up the target file through other features of the target file; for example, a hash value of the target file can be calculated and provided to the server, and the server searches for the file according to the hash value, and so on.
Step 103, receive, through the target first memory slices, the local content of the target file sent by the server.
In the embodiment of the application, the server uses RDMA to transmit the file slices in its local target second memory slices directly over the network to the target first memory slices local to the user equipment; that is, RDMA reduces the involvement of each end's processor and the data-copying pressure on each end's caches during data transmission, greatly reducing system overhead and improving system performance.
In summary, before the transmission flow starts, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use, so that the server does not need to perform a memory registration operation after responding to a read request of the user equipment; instead, based on memory-pool management, registered memory slices are selected for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, realizes memory-pool-based memory management by the server and the user equipment, greatly improves memory utilization efficiency, greatly reduces the involvement of the kernel processor and the data-copying pressure on the caches, and has a positive effect on reducing system overhead.
Referring to fig. 6, a flowchart illustrating the steps of a data transmission method according to an embodiment of the present application is shown. The method is applied to a server provided with a remote direct data access device, where the local memory of the server is provided with a plurality of second memory slices, and the method includes:
Step 201, obtain a second read request sent by the user equipment.
The second read request may include address information of a target first memory slice and the file storage parameters corresponding to the target first memory slice, where the target first memory slice is used for storing local content of a target file, and the file storage parameters are used for characterizing the local content to be stored in the target first memory slice.
The second read request may further include an identifier of the target file, or other information about the target file, such as a hash value of the target file, which is unique to the target file; the storage location of the target file in the server can then be looked up according to that information.
In the embodiment of the present application, the user equipment may send the server a request list formed by sequentially arranged second read requests, and the server may split the request list to obtain each second read request in turn for processing.
Step 202, determine the storage position of the target file in the server, extract, according to the file storage parameters, the local content of the target file from the storage position, and copy it into a target second memory slice, where the target second memory slice is any second memory slice.
In this step, preferably, the capacity of the second memory slices is the same as the capacity of the first memory slices registered locally by the user equipment. For each second read request, the server determines, through the identifier of the target file in the second read request, the storage position of the target file on the server's local hard disk, calls the direct I/O interface of the kernel layer to slice the target file on the hard disk according to the file storage parameters in the second read request, obtains the corresponding file slice, and copies the file slice into one target second memory slice.
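Combining the direct-read sketch above with a registered slice, the server-side handling of one second read request might look like the following (an illustrative composition under the same alignment assumptions, not the claimed implementation):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <infiniband/verbs.h>

/* Serve one second read request: copy the requested file slice from disk
 * straight into a registered target second memory slice, bypassing the
 * PAGE CACHE. `mr` is one of the pre-registered second memory slices. */
static ssize_t load_slice(const char *path, struct ibv_mr *mr,
                          off_t offset, size_t length)
{
    if (length > mr->length)   /* slice must fit the registered region */
        return -1;
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, mr->addr, length, offset);
    close(fd);
    return n;
}
```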
In the embodiment of the application, when the capacities of the second memory slices are all the same, the computation required for the server to select a second memory slice is simple. Of course, if at least some of the second memory slices differ in capacity, the server may select, according to the file storage parameters and the capacities of the second memory slices, the minimum number of target second memory slices that can satisfy the storage parameters, and then read the corresponding local content into those target second memory slices.
Step 203, send, through the remote direct data access device and according to the address information of the target first memory slice, the local content stored in the target second memory slice to the target first memory slice of the user equipment for storage.
Finally, the remote direct data access device (RDMA network card) of the server can use the RDMA WRITE operation of RDMA technology to transmit the file slice in the target second memory slice directly over the network to the remote direct data access device of the user equipment, which writes the file slice into the target first memory slice according to the address information of the target first memory slice contained in the second read request, until the server has transmitted all file slices of the target file to the corresponding target first memory slices of the user equipment, completing the acquisition of the target file by the user equipment.
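A minimal sketch of this RDMA WRITE step with ibverbs (assuming a connected queue pair qp, a registered source slice mr, and the remote address and rkey taken from the second read request; all names are illustrative):

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post one RDMA WRITE that pushes `length` bytes of a file slice from the
 * local target second memory slice straight into the user equipment's
 * target first memory slice, with no remote CPU involvement. */
static int rdma_write_slice(struct ibv_qp *qp, struct ibv_mr *mr,
                            uint64_t remote_addr, uint32_t rkey,
                            uint32_t length)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)mr->addr,
        .length = length,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;
    return ibv_post_send(qp, &wr, &bad_wr);
}
```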
Of course, if an interruption occurs during transmission, a breakpoint-resume mechanism may be adopted: when transmission resumes, data transmission continues from the point of interruption. The present application is not limited in this respect.
With the method and the device, the server does not need to perform a memory registration operation after responding to a read request of the user equipment; instead, based on memory-pool management, it selects registered memory slices for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, and through memory-pool management the server and the user equipment use memory on demand, greatly improving memory utilization efficiency.
Referring to fig. 7, a flowchart of the specific steps of a data transmission method provided by an embodiment of the present application is shown. The method is applied to user equipment whose local memory is provided with a plurality of first memory slices, and includes:
Step 301, in response to a first read request, obtain the occupied-space size of the target file, and determine the required number of first memory slices according to the occupied-space size of the target file and the capacity of the first memory slices.
In this step, preferably, the capacities of the first memory slices are the same. With the occupied-space size of the target file and the capacity of a first memory slice known, the ratio of the occupied-space size to the slice capacity can be taken as the required number of first memory slices; if the ratio has digits after the decimal point, it is rounded up to obtain the slice count. For example, if the ratio is 3.1, rounding up gives a slice count of 4. Because the capacities of the first memory slices are the same, the number of slices can be calculated directly in subsequent processing without considering the capacity of each slice individually, which is computationally simple and efficient.
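The rounding-up described here is ordinary ceiling division; in C it can be computed without floating point (illustrative helper):

```c
#include <stddef.h>

/* Number of equally sized first memory slices needed to hold file_size
 * bytes: the ratio rounded up, e.g. a ratio of 3.1 yields 4 slices. */
static size_t slices_needed(size_t file_size, size_t slice_capacity)
{
    return (file_size + slice_capacity - 1) / slice_capacity;
}
```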
Of course, if at least some of the first memory slices differ in capacity, then when determining the target first memory slices, the minimum number of target first memory slices capable of storing the target file can be selected according to the occupied-space size of the target file and the capacity of each first memory slice, and the subsequent file transfer is performed based on those target first memory slices. Compared with the case where all first memory slices have the same capacity, this process involves more complex computation.
In another implementation, the capacities of the first memory slices may differ. In that case, the user equipment may record the capacities of the registered first memory slices and, when the first memory slices are actually used, select the target first memory slices to be used for transmitting the file from the registered first memory slices according to the recorded capacities and the size of the file to be transmitted.
Step 302, select, according to the determined slice count, that number of target first memory slices from the first memory slices.
Step 303, generate a second read request corresponding to each target first memory slice according to the address information of the target first memory slice and the file storage parameters corresponding to the target first memory slice.
In the embodiment of the present application, after the first read request is received, the occupied-space size of the target file can be obtained from the request, and the capacity of the pre-registered first memory slices is known, so the number of first memory slices needed to store the target file can be calculated from the occupied-space size, and that number of target first memory slices is then selected from the registered first memory slices.
Optionally, step 302 includes: obtaining the usage states of the first memory slices, and determining, according to the occupied-space size of the target file and the capacity of the first memory slices, the number of target first memory slices required to store the target file and the target first memory slices whose usage state is idle.
The user equipment can manage the memory pool formed by the first memory slices through the first process, so that the first process can obtain the usage state of every first memory slice in real time, such as occupied or idle. Based on an analysis of these usage states, when the user equipment selects target first memory slices for the current data transmission, the selected target first memory slices are those whose usage state is idle, thereby using memory on demand, avoiding conflicts between the current memory use and other memory operations, and improving the stability of memory use. The user equipment can then generate the second read request corresponding to each target first memory slice according to the address information of the target first memory slice and the corresponding file storage parameters, and the second read requests can subsequently be built into a list.
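A sketch of that idle-slice selection (a hypothetical first-fit scan over per-slice usage flags; in a multi-threaded first process the flags would additionally need synchronization):

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal view of the pool state needed for selection. */
struct pool_state {
    bool  *in_use;   /* usage state per first memory slice */
    size_t count;    /* number of registered slices */
};

/* First-fit selection of `needed` idle slices; their indices are written
 * to `picked`. Returns how many were actually found, so the caller can
 * detect pool exhaustion (fewer idle slices available than needed). */
static size_t pick_idle_slices(struct pool_state *p, size_t needed,
                               size_t *picked)
{
    size_t found = 0;
    for (size_t i = 0; i < p->count && found < needed; i++) {
        if (!p->in_use[i]) {
            p->in_use[i] = true;   /* mark occupied to avoid conflicts */
            picked[found++] = i;
        }
    }
    return found;
}
```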
Optionally, step 303 includes:
Substep 3031, obtain, according to the occupied-space size of the target file and the capacity of the target first memory slices, the file storage parameters corresponding to each target first memory slice;
Substep 3032, generate the second read request corresponding to the target first memory slice according to the address information of the target first memory slice and the file storage parameters corresponding to the target first memory slice.
In the embodiment of the application, since the first memory slices are registered in advance, each first memory slice has an upper storage limit and may not be able to store the complete target file. To let the server send data more conveniently, to ensure that the data sent from each second memory slice to the user equipment does not overlap, and to make it convenient for the user equipment to read and process the data from each target first memory slice in order, the target file can be cut into multiple slices, each stored in one target first memory slice. Which part of the target file each target first memory slice stores, that is, the storage parameters for storing in that target first memory slice, can be determined according to the occupied-space size of the target file and the capacity of the target first memory slice.
A second read request corresponding to each target first memory slice is then generated according to the address information of the target first memory slice and the corresponding file storage parameters.
Optionally, the file storage parameters include one or both of a file offset parameter corresponding to the local content of the target file and a length parameter corresponding to the local content of the target file; the file offset parameter characterizes the starting position of the local content, and the length parameter characterizes the content length of the local content.
In the embodiment of the present application, one second read request is used to obtain one file slice (local content) of the target file, so the second read request may indicate the starting position of the file slice by a file offset parameter (offset) and the content length of the file slice by a length parameter (length).
For example, for a file with a size of 5a, where the capacity of a first memory slice is a, the server may cut the file into 5 file slices of size a (with a converted into a length of 1048576 bytes); to obtain the file, the request list generated by the user equipment may contain 5 second read requests in order, whose file offset parameters (offset) and length parameters (length) are respectively:
second read request 1: file offset parameter (offset) = 0, length parameter (length) = 1048576;
second read request 2: file offset parameter (offset) = 1048576, length parameter (length) = 1048576;
second read request 3: file offset parameter (offset) = 2097152, length parameter (length) = 1048576;
second read request 4: file offset parameter (offset) = 3145728, length parameter (length) = 1048576;
second read request 5: file offset parameter (offset) = 4194304, length parameter (length) = 1048576.
The server can read the file content of the corresponding length according to these storage parameters, store it into the target second memory slices, and then send it to the corresponding target first memory slices.
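A small self-contained program that reproduces the request list above (5 consecutive slices of 1048576 bytes each; all names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t slice_len  = 1048576; /* capacity a, in bytes */
    const uint64_t num_slices = 5;       /* file size 5a -> 5 requests */

    for (uint64_t i = 0; i < num_slices; i++) {
        /* Consecutive, non-overlapping slices: offset grows by length. */
        printf("second read request %llu: offset=%llu length=%llu\n",
               (unsigned long long)(i + 1),
               (unsigned long long)(i * slice_len),
               (unsigned long long)slice_len);
    }
    return 0;
}
```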
Step 304, after the second read requests are built into a request list in their generation order, send the second read requests in the request list to the server.
Step 305, receive, through the target first memory slices, the local content of the target file sent by the server.
In the embodiment of the present application, since the file slices of the target file are sequential, the second read requests corresponding to the respective file slices have a definite order. To record this order relationship, a request list of the second read requests can be constructed according to their generation order, with the second read requests in the list arranged sequentially, and the user equipment can send the request list of second read requests to the server.
Optionally, step 304 may be implemented by sending the second read requests in the request list to the server sequentially, or by sending at least some of the second read requests in the request list to the server simultaneously.
In the embodiment of the application, the user equipment can choose, according to actual requirements, to send the second read requests in the request list to the server one by one, or to send all of them, or a consecutive subset, simultaneously, providing flexibility in request sending. After receiving second read requests in a batch, the server can read the local content corresponding to each second read request in one pass and then return the local content to the client sequentially or in batches, distributing each piece of local data into the target first memory slice corresponding to its second read request.
Optionally, the method may further comprise:
Step 306, in response to a memory registration request for a first process running on the user equipment, set, in the local memory of the user equipment, a plurality of first memory slices associated with the first process, where the first memory slices are used by the first process.
In the embodiment of the present application, referring to fig. 1, the first process is a process running in the user layer of the user equipment and used for file acquisition, and its normal operation is inseparable from the use of storage space. Before the transmission flow starts, the embodiment of the present application may register, in one go, a plurality of first memory slices of the same capacity in the memory of the user equipment for use by the first process, so that a memory pool managed by the first process is formed in the user equipment in advance. The user equipment therefore does not need to perform a memory registration operation while acquiring the target file; instead, based on memory-pool management, it selects registered memory slices for use. In addition, memory-pool management allows memory to be used on demand and released for reuse, greatly improving memory utilization efficiency.
After step 305, the method may further include:
Step 307, in response to a processing instruction of the first process, process the target file stored in the target first memory slices.
In the embodiment of the application, after the user equipment finishes storing the target file, the target file in the target first memory slices is available to be called by the first process; for example, the first process can run the target file, play the target file, and so on. The first process can also transfer the target file from the target first memory slices to the local hard disk of the user equipment for persistent storage, after which the memory pool of the user equipment can release the space of the target first memory slices, achieving memory release and reuse and greatly improving memory utilization efficiency.
In summary, before the transmission flow starts, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use, so that the server does not need to perform a memory registration operation after responding to a read request of the user equipment; instead, based on memory-pool management, registered memory slices are selected for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, realizes memory-pool-based memory management by the server and the user equipment, greatly improves memory utilization efficiency, greatly reduces the involvement of the kernel processor and the data-copying pressure on the caches, and has a positive effect on reducing system overhead.
Referring to fig. 8, a flowchart of the specific steps of a data transmission method according to an embodiment of the present application is shown. The method is applied to a server whose local memory is provided with a plurality of second memory slices, and includes:
Step 401, obtain a second read request sent by the user equipment.
The second read request includes address information of a target first memory slice and the file storage parameters corresponding to the target first memory slice, where the target first memory slice is used for storing local content of a target file, and the file storage parameters are used for characterizing the local content to be stored in the target first memory slice.
This step may refer to step 201 described above and is not repeated here.
Step 402, determine the storage location of the target file in the server, and send a third read request to the virtual file system of the server through a second process.
The third read request includes the file storage parameters, so that, in response to the third read request, the virtual file system calls the hard disk input/output interface to extract the local content of the target file from the storage location and copy it into the target second memory slice.
In the embodiment of the present application, referring to fig. 1 and in conjunction with the explanation of the file offset parameter (offset) and the length parameter (length) in the foregoing embodiment, the server may extract the file offset parameter (offset) and the length parameter (length) from the second read request and generate, according to them, a third read request to send to the virtual file system (VFS), where the VFS is the file system of the server's kernel layer that interfaces with the user layer and can process requests issued by the user layer.
According to the third read request, the VFS can directly call the hard disk interface to extract the local content of the target file from the storage position and copy it into the target second memory slice. This process omits the data relay through the PAGE CACHE on the EXT4 path, thereby reducing the number of in-memory data copies during file transmission, further reducing memory usage cost and improving transmission efficiency.
Optionally, step 402 may be implemented specifically as: the second process calls the direct-mode read-write interface to send the third read request to the virtual file system of the server.
In the embodiment of the application, the server can send the third read request to its virtual file system by having the second process call the direct I/O interface; the direct I/O interface bypasses the file write buffer and read buffer when accessing storage, reducing the number of in-memory data copies during file transmission.
Step 403, send, according to the address information of the target first memory slice, the local content stored in the target second memory slice to the target first memory slice of the user equipment for storage.
This step may refer to step 203 described above and is not repeated here.
Optionally, the method may further comprise:
Step 404, in response to a memory registration request for a second process running on the server, set, in the local memory of the server, a plurality of second memory slices associated with the second process, where the second memory slices are used by the second process.
In the embodiment of the present application, referring to fig. 1, the second process is a process running in the user layer of the server and used for sending files, and its normal operation is inseparable from the use of storage space. Before the transmission flow starts, the embodiment of the present application can register, in one go, a plurality of second memory slices of the same capacity in the memory of the server for use by the second process, so that a memory pool managed by the second process is formed in the server in advance. The server therefore does not need to perform a memory registration operation while sending the target file; instead, based on memory-pool management, it selects registered memory slices for use. In addition, memory-pool management allows memory to be used on demand and released for reuse, greatly improving memory utilization efficiency.
Optionally, the remote direct data access device is a remote direct data access network card (RDMA network card).
In the embodiment of the application, the RDMA network card omits the participation of the kernel processor: all transmitted data goes directly from the process to the RDMA network card and is forwarded by it to the RDMA network card of the other end. The RDMA network card can thus reduce processor load, increase network throughput and reduce network latency.
Optionally, in another embodiment of the present application, in the foregoing step 202, extracting, according to the file storage parameters, the local content of the target file from the storage location and copying it into the target second memory slice includes:
Substep 2021, determine, according to the file storage parameters, the local position of the local content within the storage location;
Substep 2022, extract, according to the local position, the local content and copy it into the target second memory slice.
In the embodiment of the application, since the user equipment sends the second read request to the server, after locating where the target file is stored on the disk, the server can further locate, based on the file storage parameters, the local position within the storage location of the local content corresponding to those parameters.
Optionally, in another embodiment of the present application, the file storage parameters include at least one of a file offset parameter corresponding to the local content of the target file and a length parameter corresponding to the local content of the target file, the file offset parameter characterizing the starting position of the local content and the length parameter characterizing the content length of the local content; substep 2021 includes:
Substep 20211, determine the local position of the local content within the storage location according to the file offset parameter and/or the length parameter.
As described above, in the embodiment of the present application the file storage parameters include the aforementioned file offset parameter (offset) and/or length parameter (length), where the length parameter corresponds to the storage capacity of the target first memory slice.
In the case where only the offset is received: if the capacities of the memory slices are all the same, the length is fixed and already known to the server. After the user equipment sends the offset to the server through the second read request, the server determines, based on the offset, the known length, and the storage address of the target storage position on the disk, the starting read position on the disk and the length to be read from that position.
In the case where both the offset and the length are received: after the user equipment sends them to the server through the second read request, the server likewise determines, based on the offset and the length and the storage address of the target storage position on the disk, the starting read position on the disk and the length to be read from that position.
In the case where only the length is received: after the user equipment sends the length to the server through the second read request, the server defaults the offset of the first target slice to 0 and then obtains the offset of each subsequent target slice by adding the previous slice's offset and its length, thereby obtaining the offset and length of every target slice, and determines accordingly, from the storage address of the target storage position on the disk, the starting read position and the length to be read from it.
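In code, this accumulation is straightforward (illustrative helper):

```c
#include <stddef.h>
#include <sys/types.h>

/* Derive per-slice offsets when only the common slice length is known:
 * the first offset defaults to 0, and each next offset is the previous
 * offset plus the previous slice's length. */
static void derive_offsets(off_t *offsets, size_t num_slices, size_t length)
{
    offsets[0] = 0;
    for (size_t i = 1; i < num_slices; i++)
        offsets[i] = offsets[i - 1] + (off_t)length;
}
```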
The local content of the target file is then read from the disk, starting at the determined read position and for the determined length, and stored into the target second memory slice; the server then sends the local content in the target second memory slice to the target first memory slice corresponding to the request.
In summary, before the transmission flow starts, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use, so that the server does not need to perform a memory registration operation after responding to a read request of the user equipment; instead, based on memory-pool management, registered memory slices are selected for use, the file to be transmitted is cut into slices aligned with the memory slices and stored correspondingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through RDMA technology. This saves the system overhead and time of memory registration on each response, realizes memory-pool-based memory management by the server and the user equipment, greatly improves memory utilization efficiency, greatly reduces the involvement of the kernel processor and the data-copying pressure on the caches, and has a positive effect on reducing system overhead.
Referring to fig. 9, a block diagram of a distributed data synchronization system according to an embodiment of the present application is shown, including:
The system comprises at least one uploading node and at least one data synchronization node, where a plurality of first memory slices are provided in the local memory of the data synchronization node and a plurality of second memory slices are provided in the local memory of the uploading node;
The uploading node is configured to obtain a target file to be synchronously backed up, send the occupied-space size of the target file to the data synchronization node, obtain a backup request sent by the data synchronization node, extract the local content of the target file according to the backup request, and copy the local content into a target second memory slice, where the target second memory slice is any second memory slice;
The data synchronization node is used for determining the number of target first memory fragments required for storing the target file according to the occupied space size of the target file and the capacity of the first memory fragments, and generating a backup request corresponding to the target first memory fragments and sending the backup request to the uploading node according to the address information of the target first memory fragments and file storage parameters corresponding to the target first memory fragments, wherein the file storage parameters are used for representing the characteristics of local contents to be stored in the target first memory fragments.
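The request planning described for the data synchronization node can be sketched as follows; this is an assumption-laden illustration, and plan_backup_requests, slice_addresses, and the dictionary fields are hypothetical names rather than the embodiment's interface.

```python
import math

def plan_backup_requests(file_size, slice_capacity, slice_addresses):
    count = math.ceil(file_size / slice_capacity)  # target first memory slices needed
    if count > len(slice_addresses):
        raise MemoryError("not enough registered first memory slices")
    requests = []
    for i in range(count):
        offset = i * slice_capacity
        length = min(slice_capacity, file_size - offset)  # last slice may be short
        requests.append({
            "target_address": slice_addresses[i],  # where the uploading node writes
            "file_offset": offset,                 # start position of this local content
            "length": length,                      # content length of this local content
        })
    return requests

# e.g. a 10 MB file with 4 MB slices yields offsets 0, 4 MB, 8 MB and
# lengths 4 MB, 4 MB, 2 MB across three backup requests.
```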
In particular, for the description of the distributed data synchronization system, reference may be made to the embodiment of fig. 3, and details are not repeated herein.
In summary, before starting the transmission flow, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the uploading node and the data synchronization node for later use. The uploading node therefore does not need to perform a memory registration operation when responding to a backup request of the data synchronization node; instead, it selects already-registered memory slices through memory management based on a memory pool. The file to be transmitted is divided into slices aligned with the memory slices and stored accordingly, and the uploading node can then directly send the slice data in its memory slices to the registered memory slices in the data synchronization node through the RDMA technology. This saves the system overhead and time consumed by memory registration in each response, enables the uploading node and the data synchronization node to use memory on demand and to release and reuse it through memory management based on the memory pool, and greatly improves memory utilization efficiency.
Referring to fig. 10, a block diagram of a distributed computing system according to an embodiment of the present application is shown, including:
the system comprises at least one master computing node and at least one slave computing node, wherein a plurality of first memory slices are disposed in the local memory of the master computing node, and a plurality of second memory slices are disposed in the local memory of the slave computing node;
The master computing node is used for acquiring a computing task, splitting the computing task into a plurality of subtasks, and sending the subtasks to the slave computing node; determining the number of target first memory slices required for storing a computation result according to the occupied space size of the computation result of a subtask and the capacity of the first memory slices; and generating, according to the address information of the target first memory slices and the file storage parameters corresponding to the target first memory slices, synchronization requests corresponding to the target first memory slices and sending them to the slave computing node, wherein the file storage parameters are used for characterizing the local content to be stored in the target first memory slices.
The slave computing node is used for acquiring a subtask and performing computation; sending the identification and occupied space size of the computation result to the master computing node; acquiring a synchronization request sent by the master computing node, extracting local content of the computation result according to the synchronization request, and copying the local content to a target second memory slice, wherein the target second memory slice is any second memory slice; and sending the local content stored in the target second memory slice to the target first memory slice of the master computing node for storage according to the address information of the target first memory slice.
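A minimal sketch of the master-side bookkeeping follows, under the assumption that each slave reports (slave_id, result_id, occupied_space) after computing; split_task and plan_sync_requests are illustrative names only.

```python
import math

def split_task(items, n_slaves):
    """Toy splitter: divide a list-shaped computing task into contiguous subtasks."""
    step = max(1, math.ceil(len(items) / n_slaves))
    return [items[i:i + step] for i in range(0, len(items), step)]

def plan_sync_requests(reported_results, slice_capacity):
    """reported_results: (slave_id, result_id, occupied_space) tuples sent back
    by the slaves; returns how many target first memory slices each result needs."""
    return [
        (slave_id, result_id, math.ceil(size / slice_capacity))
        for slave_id, result_id, size in reported_results
    ]
```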
In particular, for the description of the distributed computing system, reference may be made to the embodiment of fig. 4, and details are not repeated herein.
In summary, before starting the transmission flow, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the slave computing node and the master computing node for later use. The slave computing node therefore does not need to perform a memory registration operation when responding to a synchronization request of the master computing node; instead, it selects already-registered memory slices through memory management based on a memory pool. The file to be transmitted is divided into slices aligned with the memory slices and stored accordingly, and the slave computing node can then directly send the slice data in its memory slices to the registered memory slices in the master computing node through the RDMA technology. This saves the system overhead and time consumed by memory registration in each response, enables memory to be used on demand and released and reused through memory management based on the memory pool, greatly improves memory utilization efficiency, greatly reduces the participation of the kernel processor and the data-copying pressure on the cache, and has a positive effect on reducing system overhead.
Referring to fig. 11, a block diagram of a data transmission device according to an embodiment of the present application is shown. The device is applied to user equipment, wherein a plurality of first memory slices are disposed in the local memory of the user equipment, and the data transmission device includes:
The first setting module 501 is configured to obtain the occupied space size of a target file in response to a first read request, and determine, according to the occupied space size of the target file and the capacity of the first memory slices, the number of target first memory slices required for storing the target file;
The generating module 502 is configured to generate, according to the address information of the target first memory slices and the file storage parameters corresponding to the target first memory slices, second read requests corresponding to the target first memory slices and send them to the server, wherein the file storage parameters are used for characterizing the local content to be stored in the target first memory slices;
And the storage module 503 is configured to receive, through the target first memory slices, the local content of the target file sent by the server.
Optionally, the first setting module 501 includes:
the first determining submodule is used for determining the number of first memory slices required according to the occupied space size of the target file and the capacity of the first memory slices;
and the second determining submodule is used for selecting, according to that number, the corresponding number of target first memory slices from the first memory slices.
Optionally, the generating module 502 includes:
The storage parameter determining submodule is used for acquiring the file storage parameters corresponding to the target first memory slice according to the occupied space size of the target file and the capacity of the target first memory slice;
and the request generation submodule is used for generating the second read request corresponding to the target first memory slice according to the address information of the target first memory slice and the file storage parameters corresponding to the target first memory slice.
Optionally, the generating module 502 includes:
and the list sending submodule is used for constructing the second read requests into a request list according to their generation sequence, and then sending the second read requests in the request list to the server.
Optionally, the list sending submodule includes:
the first sending unit is used for sequentially sending the second read requests in the request list to the server;
or, the second sending unit is configured to simultaneously send at least part of the second read requests in the request list to the server, as sketched below.
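The two sending strategies can be sketched as follows; send is a hypothetical stand-in for whatever posts a single second read request over the transport.

```python
from concurrent.futures import ThreadPoolExecutor

def send_sequential(request_list, send):
    # First sending unit: one request at a time, in generation order.
    for req in request_list:
        send(req)

def send_concurrent(request_list, send, batch=8):
    # Second sending unit: at least part of the requests go out together.
    with ThreadPoolExecutor(max_workers=batch) as pool:
        list(pool.map(send, request_list))
```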
Optionally, the apparatus further comprises:
The first registration module is used for setting, in response to a memory registration request of a first process running on the user equipment, a plurality of first memory slices associated with the first process in the local memory of the user equipment, wherein the first memory slices are used by the first process.
Optionally, the apparatus further comprises:
And the processing module is used for processing, in response to a processing instruction of the first process, the target file stored in the target first memory slices.
Optionally, the first setting module 501 includes:
The state submodule is used for acquiring the use states of the first memory slices;
and the selecting submodule is used for determining, according to the occupied space size of the target file and the capacity of the first memory slices, the target first memory slices from the first memory slices whose use state is idle, and determining the number of target first memory slices required for storing the target file.
Optionally, the file storage parameters include one or more of a file offset parameter corresponding to the local content of the target file and a length parameter corresponding to the local content of the target file, the file offset parameter being used to characterize a start position of the local content, the length parameter being used to characterize a content length of the local content.
In summary, before starting the transmission flow, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use. The server therefore does not need to perform a memory registration operation when responding to a read request of the user equipment; instead, it selects already-registered memory slices through memory management based on a memory pool. The file to be transmitted is divided into slices aligned with the memory slices and stored accordingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through the RDMA technology. This saves the system overhead and time consumed by memory registration in each response, enables the server and the user equipment to manage memory based on the memory pool, greatly improves memory utilization efficiency, greatly reduces the participation of the kernel processor and the data-copying pressure on the cache, and has a positive effect on reducing system overhead.
Referring to fig. 12, a block diagram of another data transmission device according to an embodiment of the present application is shown. The device is applied to a server, wherein a plurality of second memory slices are disposed in the local memory of the server, and the data transmission device includes:
The receiving module 601 is configured to obtain a second read request sent by the user equipment, where the second read request includes the address information of a target first memory slice and the file storage parameters corresponding to the target first memory slice;
The extracting module 602 is configured to determine the storage location of the target file local to the server, extract the local content of the target file from the storage location according to the file storage parameters, and copy the local content to a target second memory slice, where the target second memory slice is any second memory slice;
And the sending module 603 is configured to send, according to the address information of the target first memory slice, the local content stored in the target second memory slice to the target first memory slice of the user equipment for storage.
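A hedged sketch of the server-side path through modules 601 to 603 follows: parse the second read request, copy the addressed local content into an idle second memory slice, and push it to the user equipment's target first memory slice. rdma_write is a hypothetical stand-in for an RDMA WRITE posted through the real verbs interface, and the request fields are assumed names.

```python
def handle_second_read_request(request, storage_path, second_slice, rdma_write):
    """second_slice: a pre-registered bytearray-like second memory slice."""
    offset, length = request["file_offset"], request["length"]
    with open(storage_path, "rb") as f:   # storage location of the target file
        f.seek(offset)                    # local position from the storage parameters
        data = f.read(length)
    second_slice[:len(data)] = data       # copy into the second memory slice
    # One-sided write straight into the user equipment's target first memory slice:
    rdma_write(dest_addr=request["target_address"], data=second_slice[:len(data)])
```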
Optionally, the extraction module 602 includes:
The local position extraction submodule is configured to determine the local position of the local content within the storage location according to the file storage parameters;
and the copy submodule is used for extracting the local content according to the local position and copying it to the target second memory slice.
Optionally, the file storage parameters comprise at least one of a file offset parameter corresponding to the local content of the target file and a length parameter corresponding to the local content of the target file, wherein the file offset parameter is used for characterizing the start position of the local content and the length parameter is used for characterizing the content length of the local content; the local position extraction submodule is further used for determining the local position of the local content within the storage location according to the file offset parameter and/or the length parameter.
Optionally, the apparatus further comprises:
the second registration module is used for setting, in response to a memory registration request of a second process running on the server, a plurality of second memory slices associated with the second process in the local memory of the server, wherein the second memory slices are used by the second process.
Optionally, the file storage parameters comprise a file offset parameter and a length parameter corresponding to the local content of the target file, wherein the file offset parameter is used for representing the starting position of the local content, and the length parameter is used for representing the content length of the local content;
The extraction module 602 includes:
And the direct reading submodule is used for sending, through the second process, a third read request to the virtual file system of the server, where the third read request includes the file storage parameters, so that the virtual file system, in response to the third read request, calls the hard disk input/output interface to extract the local content of the target file from the storage location and copies the local content to the target second memory slice.
Optionally, the direct-reading sub-module includes:
And the direct reading unit is used for calling the direct mode read-write interface through the second process and sending the third read request to the virtual file system of the server.
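For illustration, a minimal Linux-only sketch of a direct-mode read is given below; O_DIRECT bypasses the kernel page cache, and it is assumed here that the offset, the length, and the destination buffer are aligned to a 4096-byte block, as direct I/O typically requires. Error handling is elided.

```python
import mmap
import os

BLOCK = 4096  # assumed alignment required by the block device

def direct_read(path, offset, length):
    aligned = -(-length // BLOCK) * BLOCK        # round length up to a block
    buf = mmap.mmap(-1, aligned)                 # anonymous mmap is page-aligned
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        os.preadv(fd, [buf], offset)             # read at offset, bypassing page cache
    finally:
        os.close(fd)
    return buf[:length]
```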
Optionally, the remote direct data access device is a remote direct data access network card.
Before starting the transmission flow, the embodiment of the application registers a plurality of memory slices of the same capacity in the respective memories of the server and the user equipment for later use. The server therefore does not need to perform a memory registration operation when responding to a read request of the user equipment; instead, it selects already-registered memory slices through memory management based on a memory pool. The file to be transmitted is divided into slices aligned with the memory slices and stored accordingly, and the server can then directly send the slice data in its memory slices to the registered memory slices in the user equipment through the RDMA technology. This saves the system overhead and time consumed by memory registration in each response, enables the server and the user equipment to manage memory based on the memory pool, using memory on demand and releasing and reusing it, and greatly improves memory utilization efficiency.
The embodiment of the application also provides a non-volatile readable storage medium in which one or more modules (programs) are stored; when the one or more modules are applied to a device, they may cause the device to execute the instructions of each method step in the embodiment of the application.
Embodiments of the application provide one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause an electronic device to perform a method as described in one or more of the above embodiments. In the embodiment of the application, the electronic equipment comprises various types of equipment such as terminal equipment, servers (clusters) and the like.
Embodiments of the present application may be implemented as an apparatus in a desired configuration using any suitable hardware, firmware, software, or any combination thereof, and the apparatus may include electronic devices such as terminal devices and servers (clusters). Fig. 13 schematically illustrates an exemplary apparatus 1000 that may be used to implement various embodiments described in the embodiments of the present application.
For one embodiment, fig. 13 illustrates an example apparatus 1000 having one or more processors 1002, a control module (chipset) 1004 coupled to at least one of the processor(s) 1002, a memory 1006 coupled to the control module 1004, a non-volatile memory (NVM)/storage 1008 coupled to the control module 1004, one or more input/output devices 1010 coupled to the control module 1004, and a network interface 1012 coupled to the control module 1004.
The processor 1002 may include one or more single-core or multi-core processors, and the processor 1002 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1000 can be used as a terminal device, a server (cluster), or the like in the embodiments of the present application.
In some embodiments, the apparatus 1000 can include one or more computer-readable media (e.g., memory 1006 or NVM/storage 1008) having instructions 1014 and one or more processors 1002 in combination with the one or more computer-readable media configured to execute the instructions 1014 to implement the modules to perform the actions described in this disclosure.
For one embodiment, the control module 1004 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1002 and/or any suitable device or component in communication with the control module 1004.
The control module 1004 may include a memory controller module to provide an interface to the memory 1006. The memory controller modules may be hardware modules, software modules, and/or firmware modules.
Memory 1006 may be used to load and store data and/or instructions 1014 for device 1000, for example. For one embodiment, the memory 1006 may include any suitable volatile memory, such as a suitable DRAM. In some embodiments, the memory 1006 may comprise double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1004 may include one or more input/output controllers to provide an interface to the NVM/storage 1008 and the input/output device(s) 1010.
For example, NVM/storage 1008 may be used to store data and/or instructions 1014. NVM/storage 1008 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1008 may include storage resources that are physically part of the device on which apparatus 1000 is installed, or may be accessible by the device without necessarily being part of the device. For example, NVM/storage 1008 may be accessed over a network via input/output device(s) 1010.
Input/output device(s) 1010 may provide an interface for the apparatus 1000 to communicate with any other suitable device; the input/output device(s) 1010 may include communication components, audio components, sensor components, and the like. The network interface 1012 may provide an interface for the device 1000 to communicate over one or more networks; the device 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers (e.g., memory controller modules) of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers of the control module 1004 to form a System in Package (SiP). For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die as logic of one or more controllers of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with logic of one or more controllers of the control module 1004 to form a system on chip (SoC).
In various embodiments, apparatus 1000 may be, but is not limited to being, a terminal device such as a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, device 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, the apparatus 1000 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and a speaker.
The detection device may adopt a main control chip as the processor or the control module; sensor data, position information, and the like may be stored in the memory or the NVM/storage device; a sensor group may serve as the input/output device; and the communication interface may include the network interface.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the application.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or terminal device comprising the element.
The foregoing describes in detail a data transmission method and device, a distributed data synchronization system, a distributed computing system, an electronic device, and a machine-readable medium provided by the present application. The principles and embodiments of the present application are described herein with reference to specific examples, which are used only to facilitate understanding of the method and core ideas of the present application. Meanwhile, for those skilled in the art, changes may be made to the specific embodiments and the application scope according to the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.