Technical Field
The invention belongs to the field of cloud storage and, more specifically, relates to a method for prefetching cached data in a mobile cloud storage environment.
Background
With the development of the mobile Internet, terminal applications have grown explosively and mobile terminals have gradually become a new application platform. Users place ever higher demands on the storage space of mobile terminals and on the online sharing of terminal resources, and driven by this demand, using cloud storage services through terminal devices has gradually become a trend. However, compared with the wired broadband Internet, the mobile Internet is characterized by high latency, low bandwidth and instability, so the experience of using cloud storage services over the mobile Internet falls far short of that over high-speed wired broadband. Caching technology can therefore be used to improve the performance of cloud storage services in the mobile Internet environment.
Data prefetching is one of the important factors affecting cache efficiency. When users access cloud storage services through mobile terminals, their personal data is stored in the cloud storage system. With a cache, there is a high probability that requested data can be served from the cache; without a cache, the data must be fetched from the cloud storage system over the mobile Internet, which increases the waiting delay when the user accesses the data. Cache prefetching predicts, based on the user's previous access records, the file data the user is likely to access next and fetches that data into the local cache in advance; this not only improves the cache hit rate but also reduces the user's waiting delay. Rather than fetching data only when a cache miss occurs, data prefetching anticipates the miss and brings the data into the cache ahead of time. The accuracy of predicting which data will be accessed in the future directly determines the effectiveness of prefetching and, in turn, the performance of the whole storage system. Existing prefetching algorithms fetch much data that is never accessed within a short time, which not only fails to achieve the purpose of prefetching but also wastes the user's network bandwidth.
Summary of the Invention
In view of the above defects or improvement needs of the prior art, the present invention provides a method for prefetching cached data in a mobile cloud storage environment. Its purpose is to solve the technical problem of existing prefetching algorithms, namely that much of the prefetched data is not accessed within a short time, which wastes the user's network bandwidth.
To achieve the above object, according to one aspect of the present invention, a method for prefetching cached data in a mobile cloud storage environment is provided, which is applied in a mobile terminal. The method includes the following steps:
(1) Receive a user request, request a file list from the server according to the user request, and display the file list to the user;
(2) After the user selects a file from the file list, judge whether the selected file exists in the cache of the mobile terminal; if it exists, go to step (3), otherwise go to step (4);
(3) Extract the selected file directly from the cache of the mobile terminal, then go to step (6);
(4) Send an HTTP request to the server, the HTTP request carrying URL information corresponding to the file selected by the user;
(5) Receive from the server the file corresponding to the URL information; this file is the file selected by the user;
(6) Judge whether the selected file has a historical access record; if not, go to step (7); if so, go to step (10);
(7) Set a counter n = 1;
(8) Prefetch the n successor files of the selected file;
(9) Judge whether the prefetched successor files are then accessed by the user; if so, increase the number n of prefetched successor files by 1 and repeat step (8) until all successor files have been prefetched or the user stops accessing files; otherwise, return to step (7);
(10) Calculate the probability of accessing the successor files of the selected file to determine whether and how many successor files to prefetch; when the product of the probabilities of accessing the successor files is greater than 1/2, the successor files are prefetched from the server to the local cache in advance.
Preferably, the HTTP request includes the address of the server, the unique identifier of the user, and the specific path of the selected file.
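As a sketch of how such a request URL might be assembled from the three components above, the scheme below is an assumption for illustration only; the method does not prescribe a particular URL layout, and the server name, user identifier and file path are made up:

```python
# Hypothetical URL construction for step (4): combine the server address,
# the user's unique identifier and the selected file's path. The layout
# https://<server>/<user_id>/<encoded path> is an assumption, not part of
# the claimed method.
from urllib.parse import quote, urlunsplit


def build_file_url(server, user_id, file_path):
    # Percent-encode the file path but keep "/" separators intact.
    path = "/{}/{}".format(user_id, quote(file_path.lstrip("/")))
    return urlunsplit(("https", server, path, "", ""))


url = build_file_url("cloud.example.com", "user-42", "photos/2015/img001.jpg")
print(url)  # https://cloud.example.com/user-42/photos/2015/img001.jpg
```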
Preferably, step (10) includes the following substeps:
(10-1) Set the prefetch queue length m to 0;
(10-2) Calculate the probability p1 = P(AB) = FAB / FA that the user accesses the successor file B after accessing the selected file A, where FAB is the total number of times in the historical access records that B, the successor of A, was accessed after A, and FA is the total number of times A was accessed in the historical access records;
(10-3) Judge whether p1 > 1/2; if so, there is a probability of more than 50% that A's successor file B will be accessed after A, so add B to the prefetch queue and increase the prefetch queue length m by 1, then go to step (10-4); otherwise return to step (10-1);
(10-4) Calculate the probability p2 of accessing file C, the successor of B, after accessing B, using the same method as step (10-2);
(10-5) Judge whether p1·p2 > 1/2; if so, there is a probability of more than 50% that A's successor files B and C will be accessed in list order after A, so add C to the prefetch queue as well and increase the prefetch queue length m by 1, then go to step (10-6); otherwise go to step (10-7);
(10-6) Repeat step (10-5) until p1·p2·...·p(m-1) > 1/2 and p1·p2·...·p(m-1)·pm <= 1/2; at this point the prefetch queue length is m-1;
(10-7) Prefetch the files in the prefetch queue in order;
(10-8) Judge whether all prefetched files are subsequently accessed by the user in order; if so, the prefetch was correct and the process ends; otherwise the prefetch was wrong, and the process returns to step (10-1).
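Substeps (10-1) to (10-6) amount to walking along the list-order successors of the current file and extending the prefetch queue while the running product of conditional probabilities stays above 1/2. A minimal sketch, with purely hypothetical history counts:

```python
# Sketch of substeps (10-1)-(10-6): extend the prefetch queue with
# list-order successors while the product of conditional probabilities
# P(X -> Y) = F_XY / F_X stays above 1/2. The counts below are illustrative.

def build_prefetch_queue(current, successors, pair_counts, file_counts):
    """current     -- the file the user just accessed (file A)
    successors  -- its list-order successors (B, C, D, ...)
    pair_counts -- {(X, Y): times Y was accessed immediately after X}
    file_counts -- {X: total accesses of X in the history}"""
    queue, product, prev = [], 1.0, current
    for nxt in successors:
        p = pair_counts.get((prev, nxt), 0) / file_counts.get(prev, 1)
        if product * p <= 0.5:   # (10-6): adding nxt would drop the product to <= 1/2
            break
        product *= p             # (10-3)/(10-5): product still > 1/2
        queue.append(nxt)        # so nxt joins the prefetch queue
        prev = nxt
    return queue


file_counts = {"A": 10, "B": 8}
pair_counts = {("A", "B"): 8, ("B", "C"): 6}
# P(AB) = 0.8 > 1/2, P(AB)*P(BC) = 0.8 * 0.75 = 0.6 > 1/2, then D's
# probability is 0, so the queue stops at [B, C].
print(build_prefetch_queue("A", ["B", "C", "D"], pair_counts, file_counts))
```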
According to another aspect of the present invention, a system for prefetching cached data in a mobile cloud storage environment is provided, which is arranged in a mobile terminal. The system includes:
a first module for receiving a user request, requesting a file list from the server according to the user request, and displaying the file list to the user;
a second module for judging, after the user selects a file from the file list, whether the selected file exists in the cache of the mobile terminal, entering the third module if it exists and otherwise entering the fourth module;
a third module for extracting the selected file directly from the cache of the mobile terminal and then entering the sixth module;
a fourth module for sending an HTTP request to the server, the HTTP request carrying URL information corresponding to the file selected by the user;
a fifth module for receiving from the server the file corresponding to the URL information, this file being the file selected by the user;
a sixth module for judging whether the selected file has a historical access record, transferring to the seventh module if not and to the tenth module if so;
a seventh module for setting a counter n = 1;
an eighth module for prefetching the n successor files of the selected file;
a ninth module for judging whether the prefetched successor files are then accessed by the user, and if so increasing the number n of prefetched successor files by 1 and repeating the eighth module until all successor files have been prefetched or the user stops accessing files, otherwise returning to the seventh module;
a tenth module for calculating the probability of accessing the successor files of the selected file to determine whether and how many successor files to prefetch, wherein when the product of the probabilities of accessing the successor files is greater than 1/2, the successor files are prefetched from the server to the local cache in advance.
In general, compared with the prior art, the above technical solutions conceived by the present invention can achieve the following beneficial effects:
(1) High prefetching precision, with a guarantee that the prefetched data will be accessed by the user in the short term: the list-based data prefetching strategy defined in steps (7)-(9) and (10) effectively exploits the temporal and spatial locality of the user's accesses to the file list, ensuring a high hit rate and high prefetching precision;
(2) Low access latency: because the invention actively predicts the data the user is likely to access in the future and fetches it into the local cache, the response time of user requests is effectively reduced and the access latency is shortened.
Brief Description of the Drawings
FIG. 1 is a flowchart of the method for prefetching cached data in a mobile cloud storage environment according to the present invention.
FIG. 2 is a schematic diagram of the file menu list displayed by a cloud storage client on a mobile device.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. In addition, the technical features involved in the embodiments of the present invention described below can be combined with each other as long as they do not conflict.
For mobile devices, the small screen imposes certain limitations on data presentation. The most common presentation is a menu list, in which each item represents one of the user's files. When the user selects an item in the menu list, the operation requests browsing of that file, and while browsing, the file content fills the entire screen of the phone or tablet. To switch files, the user must exit the current browsing state, return to the previous file menu list, and select the next file to browse. For this reason, current smart mobile devices provide a swipe function: the user can switch between logically adjacent files by swiping the screen up and down or left and right. Moreover, when storing personal file data, users usually classify their files so they can be queried and browsed more conveniently and quickly, which makes switching between files by swiping even more useful. Aiming at these characteristics peculiar to browsing file data on mobile devices, the present invention provides an effective cache data prefetching strategy.
The technical terms of the present invention are first explained below:
Successor file: the next file after a given file in the logical order of the list.
Prefetch length: the number of files fetched from the server in advance. For example, a prefetch length of 1 means that while file A is being fetched, its successor file B is also fetched from the server in advance; a prefetch length of 2 means that while file A is being fetched, its successor B and B's successor C are prefetched.
The present invention proposes a method for prefetching cached data in a mobile cloud storage environment, which fetches data more efficiently on mobile devices with these characteristic file access patterns, improves cache efficiency, and thereby improves the user experience.
Based on the data access characteristics of mobile devices, we can assume that browsing files on a smart device usually follows a certain logical order; that is, the user is very likely to access files in the order they appear in the list. We can therefore prefetch files that are adjacent in list order, which we call list-based sequential prefetching.
In this prefetching strategy, the user's historical access records must be kept, and from these records the probability that each file's list-order successor is accessed after the file itself is calculated; prefetching is then performed according to this probability.
Initially there are no historical access records, but based on the data access characteristics we assume that the user will access files sequentially. The prefetch length (number of prefetched files) is first set to 1, and the successor of the current file is prefetched into the local cache in list order. If the prefetched file is accessed next, the prefetch length is increased by 1 to 2, and the next two files are prefetched from the server in list order; otherwise prefetching continues with a length of 1. When prefetching with a length of 2, if the two prefetched files are indeed accessed in order, the prefetch length is increased by 1 again and the next three files are prefetched from the server into the local cache in list order; but if one of the prefetched files is not accessed in order, then after the out-of-order file is accessed, the prefetch length is reset to 1 and prefetching resumes. The above steps are repeated until the access session ends.
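The adaptive loop above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the file names and access sequence are made up, and the "accessed in order" check is simplified to testing whether each access is the immediate list successor of the previous one.

```python
# Sketch of the cold-start strategy (steps (7)-(9)): with no history, start
# with prefetch length n = 1, grow n by 1 whenever the user's next access
# follows the list order, and reset n to 1 on any out-of-order access.

def adaptive_prefetch(files, accesses):
    """files    -- the file list, in display order
    accesses -- the sequence of files the user actually opens
    Returns the prefetch length in effect before each access after the first."""
    lengths = []
    n = 1  # step (7): counter starts at 1
    for prev, cur in zip(accesses, accesses[1:]):
        lengths.append(n)
        idx = files.index(prev)
        nxt = files[idx + 1] if idx + 1 < len(files) else None
        if cur == nxt:   # step (9): sequential access -> widen the window
            n += 1
        else:            # out-of-order access -> reset to 1
            n = 1
    return lengths


files = ["f1", "f2", "f3", "f4", "f5", "f6"]
# The user reads f1..f3 in order, then jumps to f6, then back to f4.
print(adaptive_prefetch(files, ["f1", "f2", "f3", "f6", "f4"]))  # [1, 2, 3, 1]
```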
After the user has used the mobile client many times, many historical access records accumulate. When files with such records are accessed later, probabilities can be calculated to decide whether to prefetch their list-order successors. Let the currently accessed file be A and its list-order successor be B; let FA be the total number of accesses to file A in the historical records, FAB the total number of times B was accessed after A in the historical records, and P(AB) the probability that A's successor B is accessed after the current access to A. Then P(AB) = FAB / FA.
When P(AB) > 1/2, there is a probability of more than 50% that A's successor B will be accessed in list order after file A, so file B can be prefetched. As in the initial prefetching strategy, we set the maximum prefetch length to 2: once it is determined that B can be prefetched, we check whether C, B's list-order successor, can be prefetched together with B. If the currently accessed file is A, the probability of next accessing files B and C in order is P(AB)·P(BC). When P(AB)·P(BC) > 1/2, there is a probability of more than 50% that A's successors B and C will be accessed in list order after A, so B and C are prefetched together; otherwise only file B is prefetched. The above calculation is repeated until adding some file for prefetching makes the probability product less than or equal to 1/2, at which point the final prefetch queue is determined.
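To make the 1/2 threshold concrete, here is a small worked example; the history counts are purely illustrative and do not come from the invention:

```python
# Hypothetical history: file A was accessed 10 times, B followed A in 8 of
# them; B was accessed 8 times, C followed B in 6 of them.
F_A, F_AB = 10, 8
F_B, F_BC = 8, 6

P_AB = F_AB / F_A   # 0.8  -> greater than 1/2, so B is prefetched
P_BC = F_BC / F_B   # 0.75

print(P_AB > 0.5)            # B qualifies on its own
print(P_AB * P_BC > 0.5)     # 0.6 > 1/2, so C is prefetched together with B
```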
When setting the prefetch length in the prefetching strategy, the following aspects must also be considered:
(1) If the prefetch length is fixed at 1, i.e., each prefetch fetches only one file from the server, then the user must contact the server once for every file accessed, which means too many server requests and more network traffic consumed. On the other hand, if the user accesses files quickly, for example when simply browsing pictures, the user will switch quickly, and with only one file prefetched at a time the prefetched file may not yet be fully in the local cache when the user switches, so access delays can still occur.
(2) If the prefetch length is too large, then because it is not certain that the successors of the currently accessed file will actually be accessed next, the prefetched files may never be accessed, and the more files are prefetched, the more user traffic is wasted. On the other hand, because cache space is limited, prefetching more data means evicting data already in the cache, possibly replacing data the user accesses frequently with prefetched data that may be useless, which instead degrades system performance.
Taking the above into account, a suitable prefetch length can be found experimentally during implementation, so that while reducing the number of server requests and the access delay described in (1), as little invalid data as possible is prefetched, saving user traffic and improving the user experience.
The present invention describes how, when caching is used on a mobile terminal to improve access to cloud storage services, a list-order-based data prefetching strategy further improves cache efficiency.
As shown in FIG. 1, the method for prefetching cached data in a mobile cloud storage environment according to the present invention is applied in a mobile terminal and includes the following steps:
(1) Receive a user request, request a file list from the server according to the user request, and display the file list to the user, as shown in FIG. 2;
(2) After the user selects a file from the file list, judge whether the selected file exists in the cache of the mobile terminal; if it exists, go to step (3), otherwise go to step (4);
(3) Extract the selected file directly from the cache of the mobile terminal, then go to step (6);
(4) Send an HTTP request to the server, the HTTP request carrying URL information corresponding to the file selected by the user; specifically, the HTTP request includes the address of the server, the unique identifier of the user, and the specific path of the selected file;
(5) Receive from the server the file corresponding to the URL information; this file is the file selected by the user;
(6) Judge whether the selected file has a historical access record; if not, go to step (7); if so, go to step (10);
(7) Set a counter n = 1;
(8) Prefetch the n successor files of the selected file;
(9) Judge whether the prefetched successor files are then accessed by the user; if so, increase the number n of prefetched successor files by 1 and repeat step (8) until all successor files have been prefetched or the user stops accessing files; otherwise, return to step (7);
(10) Calculate the probability of accessing the successor files of the selected file to determine whether and how many successor files to prefetch; when the product of the probabilities of accessing the successor files is greater than 1/2, the successor files are prefetched from the server to the local cache in advance. This step specifically includes the following substeps:
(10-1) Set the prefetch queue length m to 0;
(10-2) Calculate the probability p1 = P(AB) = FAB / FA that the user accesses the successor file B after accessing the selected file A, where FAB is the total number of times in the historical access records that B, the successor of A, was accessed after A, and FA is the total number of times A was accessed in the historical access records;
(10-3) Judge whether p1 > 1/2; if so, there is a probability of more than 50% that A's successor file B will be accessed after A, so add B to the prefetch queue and increase the prefetch queue length m by 1, then go to step (10-4); otherwise return to step (10-1);
(10-4) Calculate the probability p2 of accessing file C, the successor of B, after accessing B, using the same method as step (10-2);
(10-5) Judge whether p1·p2 > 1/2; if so, there is a probability of more than 50% that A's successor files B and C will be accessed in list order after A, so add C to the prefetch queue as well and increase the prefetch queue length m by 1, then go to step (10-6); otherwise go to step (10-7);
(10-6) Repeat step (10-5) until p1·p2·...·p(m-1) > 1/2 and p1·p2·...·p(m-1)·pm <= 1/2; at this point the prefetch queue length is m-1;
(10-7) Prefetch the files in the prefetch queue in order;
(10-8) Judge whether all prefetched files are subsequently accessed by the user in order; if so, the prefetch was correct and the process ends; otherwise the prefetch was wrong, and the process returns to step (10-1).
Through the above method of the present invention, given the characteristics of users accessing cloud storage data through a client on a mobile terminal, this list-order-based data prefetching method can reduce access latency and effectively improve the efficiency of the whole cache system.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within its scope of protection.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510744409.9A (granted as CN106681990B) | 2015-11-05 | 2015-11-05 | A prefetching method for cached data in a mobile cloud storage environment |
| Publication Number | Publication Date |
|---|---|
| CN106681990A | 2017-05-17 |
| CN106681990B | 2019-10-25 |
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |