Specific embodiment
An embodiment of the present invention provides a multi-level cache data processing method and device, which store different types of data in a multi-level cache, improving the query speed and efficiency of data, reducing system response time, and improving the data processing performance of the system.
In order to enable those skilled in the art to better understand the technical solutions in the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
Referring to Fig. 1, which shows an exemplary application scenario of an embodiment of the present invention. The method and apparatus provided by embodiments of the present invention can be applied to the scenario shown in Fig. 1. The method can be applied in a data processing system 1000, where the data processing system 1000 can include a multi-level cache, such as a first-level cache, a second-level cache, and a third-level cache; caches of different levels can be of different types. The data processing system can also include a database and a data query interface. The multi-level cache data processing apparatus provided by embodiments of the present invention can be part of the data processing system 1000, or can exist as an independent device; no limitation is imposed here. It should be noted that the above application scenario is shown merely to facilitate understanding of the present invention, and embodiments of the present invention are not limited in this regard. On the contrary, embodiments of the present invention can be applied to any applicable scenario.
The multi-level cache data processing method shown in exemplary embodiments of the present invention is introduced below in conjunction with Figs. 2 to 4.
Referring to Fig. 2, which is a flowchart of the multi-level cache data processing method provided by one embodiment of the present invention. As shown in Fig. 2, the method can include:
S201: storing different types of data into caches of different levels, where the types of the caches of different levels are different.
S202: in response to a data query request, determining a query range and a query order according to a query strategy corresponding to the data query request, and querying the multi-level cache successively according to the query range and query order to obtain data corresponding to the query request.
During specific implementation, the multi-level cache can include two levels or more than two levels of caching, and the number of cache levels can be extended as needed; no limitation is imposed here. In the embodiments of the present invention, three levels of caching are taken as an example for illustration.
In some embodiments, storing different types of data into caches of different levels includes:
(1) storing predicted hotspot data into the first-level cache, where the predicted hotspot data is hotspot data predicted according to users' historical behavior data;
(2) storing real-time hotspot data into the second-level cache, where the real-time hotspot data is hotspot data calculated according to users' real-time behavior data;
(3) storing full data into the third-level cache.
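The three-way routing above can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the class name, the string data-type tags, and the dict-backed "caches" are hypothetical; in the described system the three levels are off-heap memory, on-heap memory, and a distributed remote cache.

```python
# Hypothetical sketch: route data of different types into caches of different levels.
class MultiLevelCache:
    def __init__(self):
        # level 1: predicted hotspot data, level 2: real-time hotspot data,
        # level 3: full data (different cache types in the real system)
        self.levels = {1: {}, 2: {}, 3: {}}

    def store(self, key, value, data_type):
        # map the data type to its cache level, then store the entry there
        level = {"predicted_hotspot": 1, "realtime_hotspot": 2, "full": 3}[data_type]
        self.levels[level][key] = value

cache = MultiLevelCache()
cache.store("shop:123", {"name": "hot shop"}, "predicted_hotspot")
cache.store("item:456", {"price": 9.9}, "full")
```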
It should be noted that, when the present invention is implemented, hotspot data can be predicted according to users' historical behavior data, and the predicted hotspot data can be stored into the first-level cache. Specifically, different weights are assigned to different categories of behavior according to the category of user behavior; a score of the merchandise item and/or merchandise item supplier corresponding to the user behavior is obtained according to the weights; a ranking of merchandise items and/or merchandise item suppliers is determined according to the scores; and hotspot data is determined according to the ranking of the merchandise items and/or merchandise item suppliers. For example, the predicted hotspot data can be hotspot merchandise items and/or hotspot merchandise item suppliers, such as sellers or shops. During specific implementation, hotspot data can be generated from users' historical behavior data. For example, the number of times a certain merchandise item is added to shopping carts can be determined from users' add-to-cart behavior, so as to obtain a score and/or ranking for that item. As another example, the add-to-cart behavior can be used to determine the total number of times the merchandise items of a certain seller or shop have been added to shopping carts, so as to obtain a score and/or ranking for that seller or shop. The score can be calculated from dozens or even hundreds of weighted dimensions, such as the item's time on shelf, page views in the last X days, user visit counts, order counts, and buyer counts; a corresponding score and ranking can be generated for each data item according to the relevant calculation formula. For example, suppose an item A has been visited Y times in the X days since being put on shelf and collected Z times, and its IPV in the last X days is U and its UV is V; a score for the item can be obtained according to the different weights corresponding to these different factors. Suppose the score of the item is S and its ranking is R; if the ranking is within the top 50%, the item can be determined as a hotspot item. The present invention does not limit the specific calculation method, as long as the score is obtained by considering the different factors and weights. Then, whether the item is a hotspot item, and whether the seller or shop is a hotspot merchandise item supplier, can be determined according to the score and/or ranking of the item; if so, it is determined as hotspot data. When determining hotspot data, the top-ranked merchandise items or merchandise item suppliers, such as those within the top 50%, can be taken as hotspot data. Of course, merchandise items or merchandise item suppliers whose scores exceed a set threshold can also be taken as hotspot data.
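The weighted-scoring and top-fraction cutoff described above can be sketched as follows. The behavior categories, the weight values, and the 50% ranking cutoff are illustrative assumptions, not the patent's actual formula.

```python
# Hypothetical weights per behavior category (the text only says weights differ
# by category; these values are made up for illustration).
BEHAVIOR_WEIGHTS = {"page_view": 1.0, "add_to_cart": 3.0, "order": 5.0}

def score_items(events):
    """events: list of (item_id, behavior_category) tuples -> item scores."""
    scores = {}
    for item_id, behavior in events:
        scores[item_id] = scores.get(item_id, 0.0) + BEHAVIOR_WEIGHTS[behavior]
    return scores

def predict_hotspots(events, top_fraction=0.5):
    """Rank items by weighted score and keep the top fraction as hotspots."""
    scores = score_items(events)
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]

events = [("A", "add_to_cart"), ("A", "order"), ("B", "page_view"),
          ("C", "page_view"), ("C", "add_to_cart"), ("D", "page_view")]
hot = predict_hotspots(events)  # top 50% by weighted score
```

The same scheme applies to merchandise item suppliers by keying events on the seller or shop instead of the item.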
In some embodiments, the method further includes: distributing the predicted hotspot data to each distributed computing node. For example, the predicted hotspot data can be generated by invoking a callback interface of the application, and a super node can be notified that the data is ready. The super node pulls the hotspot data through a calling interface, the application supplies the hotspot data to the super node, and the super node distributes the predicted hotspot data to each distributed computing node. In this way, each computing node stores the predicted hotspot data. For example, for a hotspot shop such as "UNIQLO", the method provided by the present invention can store it as hotspot data in the first-level cache. When a large number of query requests for the hotspot shop are received, since the present invention has stored it in the first-level cache in advance, large batches of query requests can be handled.
In some embodiments, the second-level cache is used to store real-time hotspot data. During specific implementation, the real-time hotspot data is determined according to the queries per second (QPS) of the data and its most recent usage time. Further, within a set time period, the data can be sorted according to its QPS and most recent usage time, and part of the data can be deleted according to the sorting result. For example, through a runtime dynamic algorithm, the data can be sorted within a unit time according to its QPS and most recent usage time, to ensure that continuously hot data remains stored in memory and to avoid cache breakdown to the database causing the application to crash. In addition, during data peak periods, the data expiration time can be set at the hour level; during off-peak periods, the data expiration time can be set at the second level.
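The sort-and-delete cycle for the second-level cache can be sketched as follows. The entry layout, the 50% keep fraction, and ranking first by QPS and then by recency are illustrative assumptions; the text only states that entries are sorted by both signals and that part of the data is deleted.

```python
# Hypothetical eviction pass: rank entries by (QPS, most recent usage time)
# and keep only the top fraction, so continuously hot data stays in memory.
def evict(entries, keep_fraction=0.5):
    """entries: dict key -> (qps, last_used_ts). Returns surviving keys."""
    ranked = sorted(entries,
                    key=lambda k: (entries[k][0], entries[k][1]),
                    reverse=True)  # hotter and more recently used first
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

entries = {"A": (120, 1000), "B": (5, 900), "C": (80, 990), "D": (5, 998)}
survivors = evict(entries)
```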
In some embodiments, the third-level cache is used to store the full data of the database.
When the present invention is implemented, in response to a data query request, a query range and a query order can be determined according to the query strategy corresponding to the data query request, and the multi-level cache is queried successively according to the query range and query order to obtain data corresponding to the query request. Querying the multi-level cache according to the query range and query order to obtain data corresponding to the query request includes: determining the levels of the caches to be queried according to the query range, querying each level of cache successively from high to low according to the levels, and obtaining the data corresponding to the query request.
As an illustration, suppose binary bits are used to represent caches of different levels (in other words, data sources); input parameter conditions can be assembled into binary bits to represent the query strategy for the caches. For example, the first-level cache is represented by binary 0001, the second-level cache by binary 0010, the third-level cache by binary 0100, and the fourth-level cache by binary 1000.
Suppose the query strategies include (exemplary illustration):
[1] 0011, indicating that only the first-level and second-level caches are queried;
[2] 1011, indicating that the third-level cache is not queried.
During specific implementation, the device provided by the present invention provides a query interface; the selected query strategy is carried by encapsulated parameters, and the strategy is recognized at query time. During a query, cross-level queries can be implemented: for example, with a strategy like query strategy [2], levels 1, 2, and 4 are queried and the level-3 query is skipped. Taking Fig. 3 as an example, suppose the query strategy is 1111 (four levels of caching, all queried); the query order is as shown in Fig. 3: the first level is queried first, and on a hit the query ends; on a miss, the second level is queried, and so on. It should be noted that Fig. 3 illustrates with single-item data as an example; in practice a query typically involves multiple data items, and at each level, if there is a partial hit, the failed requests also need to be separated out to continue querying the next-level cache.
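The bitmask decoding above can be sketched as follows, using the encoding given earlier (0b0001 = level 1, ..., 0b1000 = level 4). The helper function is hypothetical; it only shows how a strategy value selects and orders the levels to be queried.

```python
# Decode a query-strategy bitmask into the list of cache levels to query,
# highest-priority level first (level 1 is queried before level 2, etc.).
def levels_to_query(strategy, num_levels=4):
    return [lvl for lvl in range(1, num_levels + 1)
            if strategy & (1 << (lvl - 1))]

assert levels_to_query(0b0011) == [1, 2]        # strategy [1]: levels 1-2 only
assert levels_to_query(0b1011) == [1, 2, 4]     # strategy [2]: level 3 skipped
assert levels_to_query(0b1111) == [1, 2, 3, 4]  # all four levels, as in Fig. 3
```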
In some embodiments, the method further includes: setting a query scheduling strategy, where the query scheduling strategy is used to indicate whether each level of cache in the multi-level cache provides query service. In some embodiments, determining the query range and query order according to the query strategy corresponding to the data query request includes: determining the query range and query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
For example, the method and apparatus provided by the present invention can implement control and scheduling of the multi-level cache, such as controlling whether a cache of any level provides query service. For example, suppose the multi-level cache includes five levels of caching; it can be configured so that only levels 1, 2, 3, and 5 provide queries, while the fourth-level cache does not. In this case, even if the query strategy corresponding to the input parameters includes the fourth-level cache, the fourth-level cache will not be queried when querying data. Specifically, the configuration of the query scheduling strategy can be refined to the interface level, and can be implemented with expressions. For example, a "calling source" is passed in the input parameters; at each level of access, expression matching is performed, and the query proceeds when the match succeeds, as shown in Table 1.
Table 1. Example of query scheduling strategies
As shown in Table 1, for the service whose calling source is "findItemPromotion", if the cache query scheduling strategy is 1100, then only the caches of levels 1 and 2 are queried; even if the query strategy is 1110, the third-level cache will not be queried.
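The combination of a per-request query strategy with a per-source scheduling strategy can be sketched as a bitwise AND of the two masks. For consistency with the earlier encoding, the low bit is level 1 here; Table 1 appears to write its masks with level 1 as the leftmost digit, so the literal values differ from the table. The calling-source mapping below is a hypothetical stand-in.

```python
# Hypothetical scheduling table: calling source -> levels allowed to serve it
# (low bit = level 1). "findItemPromotion" may use levels 1 and 2 only.
SCHEDULING = {"findItemPromotion": 0b0011}

def effective_levels(source, query_strategy, num_levels=4):
    """Intersect the request's strategy with the source's scheduling mask."""
    allowed = SCHEDULING.get(source, (1 << num_levels) - 1)  # default: all
    mask = query_strategy & allowed
    return [lvl for lvl in range(1, num_levels + 1) if mask & (1 << (lvl - 1))]

# The request asks for levels 1-3 (0b0111), but scheduling limits
# findItemPromotion to levels 1-2, so level 3 is never queried:
lvls = effective_levels("findItemPromotion", 0b0111)
```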
In some embodiments, the method further includes: if data corresponding to the query request is found in a lower-level cache, storing the data into the cache one level above the lower-level cache.
Referring to Fig. 4, which is a flowchart of a multi-level cache data processing method provided by one embodiment of the present invention, applied to a data processing system. The data processing system includes a multi-level cache, and the multi-level cache includes a first-level cache, a second-level cache, and a third-level cache. As shown in Fig. 4, the method can include:
S401: storing predicted hotspot data into the first-level cache, storing real-time hotspot data into the second-level cache, and storing full data into the third-level cache; where the type of the first-level cache is off-heap memory, the type of the second-level cache is on-heap memory, and the type of the third-level cache is a distributed remote cache;
S402: in response to a data query request, determining a query range and a query order according to the query strategy corresponding to the data query request, and querying the multi-level cache successively according to the query range and query order to obtain data corresponding to the query request.
In some embodiments, the predicted hotspot data is predicted from users' historical behavior data. Predicting hotspot data from users' historical behavior data includes: assigning different weights to different categories of behavior according to the category of user behavior; obtaining, according to the weights, the score of the merchandise item and/or merchandise item supplier corresponding to the user behavior; determining the ranking of merchandise items and/or merchandise item suppliers according to the scores; and determining hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers. In some embodiments, the method further includes: distributing the predicted hotspot data to each distributed computing node.
In some embodiments, the real-time hotspot data is calculated from users' real-time behavior data. Calculating real-time hotspot data from users' real-time behavior data includes: determining the real-time hotspot data according to the queries per second of the data and its most recent usage time.
In some embodiments, the method further includes: within a set time period, sorting the data according to its queries per second and most recent usage time, and deleting part of the data according to the sorting result.
In some embodiments, the method further includes: setting a query scheduling strategy, where the query scheduling strategy is used to indicate whether each level of cache in the multi-level cache provides query service.
In some embodiments, determining the query range and query order according to the query strategy corresponding to the data query request includes: determining the query range and query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
In some embodiments, querying the multi-level cache according to the query range and query order to obtain data corresponding to the query request includes: determining the levels of the caches to be queried according to the query range, querying each level of cache successively from high to low according to the levels, and obtaining the data corresponding to the query request.
In some embodiments, the method further includes: if data corresponding to the query request is found in a lower-level cache, storing the data into the cache one level above the lower-level cache.
It should be noted that in the embodiment of the present invention, it will predict that obtained hot spot data is stored in advance in first order cachingIn.During the storage of real-time hot spot data is cached to the second level, third level buffer memory full dose data.Wherein, the first order is delayedThe type deposited is out-pile memory, and the type of the second level caching is memory in heap, and the type of the third level caching is distributionFormula remote cache.
During a query, if the query misses in the current-level cache, the next-level cache can be queried; if data is returned when querying the next-level cache, it can be placed into the upper-level cache. If the query keeps missing all the way to the database (the database is bound to return data), the data result returned by the database query can be placed into the second-level and third-level caches. The first-level cache stores extracted, predicted hotspot data; the predicted hotspot data provides query service only after being loaded, and is valid only within a certain time, after which the data becomes invalid. If there is new known hotspot data, the new data is preloaded again and kept valid for a certain time. As for the second-level cache, i.e., the real-time hotspot data, it should not be distributed in the way the first-level cache is. The second-level cached data on each machine depends on which data the requests received by that machine have queried; in general, the requests received by each machine are balanced. For example, if there are 10,000 requests all querying the promotion of item A, and there are 2,000 machines, then basically each machine will receive about 5 such requests. The data in the second-level cache is likewise time-limited, typically expiring within a few seconds, after which the data becomes invalid; on a query miss, the third level continues to be queried, and the result found there is placed back into the second-level cache. In order to keep hotspot data resident in memory, the eviction strategy can keep the real-time hotspots memory-resident, ensuring that sudden hotspots, i.e., real-time hotspots, reside in memory. For example, on the first query, when the second-level cache has no data, the query penetrates the second level and goes on to query the third level or even the database; once the data is obtained, it is placed in the second-level cache at once, so that all subsequent queries can obtain the data from the second-level cache, the placed data typically being valid for a few seconds.
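The query flow above can be sketched as follows: levels are queried in order; on a hit at a lower level, the value is promoted into the second-level cache with a short TTL; on a full miss, the lookup falls through to the database and backfills levels 2 and 3. The in-memory dicts, the 3-second TTL value, and the class name are illustrative assumptions.

```python
import time

class TieredLookup:
    """Minimal sketch of the three-level lookup with promotion and backfill."""
    def __init__(self, db):
        self.l1, self.l2, self.l3 = {}, {}, {}   # predicted / realtime / full
        self.l2_expiry = {}                      # key -> absolute expiry time
        self.db = db
        self.l2_ttl = 3.0                        # "a few seconds", per the text

    def get(self, key):
        if key in self.l1:                       # level 1: predicted hotspots
            return self.l1[key]
        if key in self.l2 and self.l2_expiry[key] > time.time():
            return self.l2[key]                  # level 2: real-time hotspots
        if key in self.l3:                       # level 3: full data
            self._put_l2(key, self.l3[key])      # promote to the upper level
            return self.l3[key]
        value = self.db[key]                     # database always returns data
        self._put_l2(key, value)                 # backfill levels 2 and 3
        self.l3[key] = value
        return value

    def _put_l2(self, key, value):
        self.l2[key] = value
        self.l2_expiry[key] = time.time() + self.l2_ttl

store = TieredLookup(db={"item:1": "promo"})
first = store.get("item:1")   # misses all caches, falls through to the database
second = store.get("item:1")  # now served from the second-level cache
```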
It should be noted that the first-level and second-level caches are both in-memory caches; simply put, they reside on the same machine as the application, without the overhead of the network and network-card input/output. But there are differences between them: the first-level cache is off-heap, and the second-level cache is on-heap. Because the known hotspot data can be relatively large, up to 2-3 GB, and the application system is written in Java, the benefit of using off-heap memory is that the risk brought by GC (garbage collection) can be reduced; the drawback is that its performance is worse than on-heap memory, due to the cost of serialization. The on-heap data is generally smaller, occupying less than 1 GB. It should be noted that the third-level cache is a remote cache; since it is designed as a distributed cache, its storage capacity can be expanded by adding machines, so the amount of data it can hold depends on the scale of the machines. However, compared with the local caches (the first-level and second-level caches), the speed of obtaining data from the third-level cache depends on the network card. A distributed cache is not entirely like a local cache: when reading and writing data, it performs locking on the data; if the access volume to a single piece of data is too large, many requests wait for the lock, and the lock wait time becomes long. If no data is returned for a long time, application requests will block, causing subsequent requests to block and wait for processing. If all hotspots are hit in the local caches, the third-level cache and the database will not be queried at all.
It should be noted that the present invention is not limited to three levels of caching; there can also be four levels, where the fourth-level cache can be a disaster-recovery cache, which can be understood as a backup of the database. When the present invention is implemented, caches of different levels can be added or deleted, achieving horizontal extension. Further, the present invention can classify and mix different types of caches, including complicated ones; during use, different caches can be accessed through different cache strategies. Further, the present invention can make any level of cache detachable, so that a fine-grained application can access only certain levels of cache, achieving mixed use of caches of different levels. In terms of traffic, read and write operations on caches at each level, integration of cache and data, database rate limiting, and the like can be performed.
It should be noted that, in some embodiments, the present invention also provides multi-level cache data storage, query, and scheduling methods, illustrated in more detail below.
In some embodiments, an embodiment of the present invention provides a multi-level cache data storage method, applied to a data processing system, where the data processing system includes a multi-level cache. The method includes: obtaining the type of data; and storing the data into the corresponding cache according to the type of the data; where the data types stored by caches of different levels are different, and the types of the caches of different levels are different.
In some embodiments, the multi-level cache includes a first-level cache, a second-level cache, and a third-level cache, and storing the data into the corresponding cache according to the type of the data includes: storing predicted hotspot data into the first-level cache, where the predicted hotspot data is predicted from users' historical behavior data; storing real-time hotspot data into the second-level cache, where the real-time hotspot data is calculated from users' real-time behavior data; and storing full data into the third-level cache.
In some embodiments, predicting hotspot data from users' historical behavior data includes: assigning different weights to different categories of behavior according to the category of user behavior; obtaining, according to the weights, the score of the merchandise item and/or merchandise item supplier corresponding to the user behavior; determining the ranking of merchandise items and/or merchandise item suppliers according to the scores; and determining hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
In some embodiments, the method further includes: distributing the predicted hotspot data to each distributed computing node.
In some embodiments, calculating real-time hotspot data from users' real-time behavior data includes: determining the real-time hotspot data according to the queries per second of the data and its most recent usage time.
In some embodiments, the method further includes: within a set time period, sorting the data according to its queries per second and most recent usage time, and deleting part of the data according to the sorting result.
In some embodiments, the type of the first-level cache is off-heap memory, the type of the second-level cache is on-heap memory, and the type of the third-level cache is a distributed remote cache.
In some embodiments, an embodiment of the present invention also provides a multi-level cache data query method, applied to a data processing system, where the data processing system includes a multi-level cache. The method includes: receiving a data query request; in response to the data query request, determining a query range and a query order according to the query strategy corresponding to the data query request; and querying the multi-level cache successively according to the query range and query order to obtain data corresponding to the query request.
Querying the multi-level cache according to the query range and query order to obtain data corresponding to the query request includes: determining the levels of the caches to be queried according to the query range, querying each level of cache successively from high to low according to the levels, and obtaining the data corresponding to the query request.
The method further includes: if data corresponding to the query request is found in a lower-level cache, storing the data into the cache one level above the lower-level cache.
In some embodiments, an embodiment of the present invention also provides a multi-level cache data scheduling method, applied to a data processing system, where the data processing system includes a multi-level cache. The method includes: setting a query scheduling strategy, where the query scheduling strategy is used to indicate whether each level of cache in the multi-level cache provides query service; receiving a data query request; and determining a query range and a query order according to the query strategy corresponding to the data query request and the set query scheduling strategy.
For specific implementations of the above multi-level cache data storage, query, and scheduling methods, reference may be made to the methods shown in Figs. 1 to 4.
Referring to Fig. 5, which is a schematic diagram of a multi-level cache data processing apparatus provided by one embodiment of the present invention.
A multi-level cache data processing apparatus 500 includes:
a storage unit 501, configured to store different types of data into caches of different levels; and
a query unit 502, configured to, in response to a data query request, determine a query range and a query order according to the query strategy corresponding to the data query request, and query the multi-level cache successively according to the query range and query order to obtain data corresponding to the query request.
In some embodiments, the storage unit specifically includes:
a first storage unit, configured to store predicted hotspot data into the first-level cache, where the predicted hotspot data is predicted from users' historical behavior data;
a second storage unit, configured to store real-time hotspot data into the second-level cache, where the real-time hotspot data is calculated from users' real-time behavior data; and
a third storage unit, configured to store full data into the third-level cache.
In some embodiments, the apparatus further includes:
a first determination unit, configured to assign different weights to different categories of behavior according to the category of user behavior, obtain, according to the weights, the score of the merchandise item and/or merchandise item supplier corresponding to the user behavior, determine the ranking of merchandise items and/or merchandise item suppliers according to the scores, and determine hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
In some embodiments, the apparatus further includes:
a distribution unit, configured to distribute the predicted hotspot data to each distributed computing node.
In some embodiments, the apparatus further includes:
a second determination unit, configured to determine real-time hotspot data according to the queries per second of the data and its most recent usage time.
In some embodiments, the apparatus further includes:
an updating unit, configured to, within a set time period, sort the data according to its queries per second and most recent usage time, and delete part of the data according to the sorting result.
In some embodiments, the type of the first-level cache is off-heap memory, the type of the second-level cache is on-heap memory, and the type of the third-level cache is a distributed remote cache.
In some embodiments, the apparatus further includes:
a scheduling unit, configured to set a query scheduling strategy, where the query scheduling strategy is used to indicate whether each level of cache in the multi-level cache provides query service.
In some embodiments, the query unit is specifically configured to: determine the query range and query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
In some embodiments, the query unit is specifically configured to: determine the levels of the caches to be queried according to the query range, query each level of cache successively from high to low according to the levels, and obtain the data corresponding to the query request.
In some embodiments, the storage unit is further configured to: if data corresponding to the query request is found in a lower-level cache, store the data into the cache one level above the lower-level cache.
Referring to Fig. 6, which is a schematic diagram of a multi-level cache data processing apparatus provided by another embodiment of the present invention.
A multi-level cache data processing apparatus 600 includes:
a storage unit, configured to store predicted hotspot data into the first-level cache, store real-time hotspot data into the second-level cache, and store full data into the third-level cache, where the type of the first-level cache is off-heap memory, the type of the second-level cache is on-heap memory, and the type of the third-level cache is a distributed remote cache; and
a query unit, configured to, in response to a data query request, determine a query range and a query order according to the query strategy corresponding to the data query request, and query the multi-level cache successively according to the query range and query order to obtain data corresponding to the query request.
In some embodiments, the device further includes:
a first determination unit, configured to assign different weights to different categories of behavior according to the category of the user behavior, obtain the scores of the merchandise items and/or merchandise item suppliers corresponding to the user behavior according to the weights, determine the ranking of the merchandise items and/or merchandise item suppliers according to their scores, and determine the hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
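The weighted scoring performed by the first determination unit might look like the following sketch. The behavior categories and weight values are hypothetical assumptions for illustration; the embodiment does not specify them.

```python
from collections import defaultdict

# Hypothetical per-category weights; the actual values are a design choice
# not specified in this embodiment.
BEHAVIOR_WEIGHTS = {"view": 1.0, "add_to_cart": 3.0, "purchase": 5.0}

def rank_merchandise(behavior_log):
    """behavior_log: iterable of (behavior_category, merchandise_item) pairs.
    Returns merchandise items sorted by descending weighted score."""
    scores = defaultdict(float)
    for category, item in behavior_log:
        scores[item] += BEHAVIOR_WEIGHTS.get(category, 0.0)
    return sorted(scores, key=scores.get, reverse=True)

def predicted_hotspots(behavior_log, top_n):
    # The top-ranked items are taken as the predicted hotspot data.
    return rank_merchandise(behavior_log)[:top_n]
```

The same scheme applies unchanged to merchandise item suppliers by logging (category, supplier) pairs instead.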
In some embodiments, the device further includes:
a distribution unit, configured to distribute the predicted hotspot data to each distributed computing node.
In some embodiments, the device further includes:
a second determination unit, configured to determine the real-time hotspot data according to the queries per second and the most recent usage time of the data.
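One plausible reading of the second determination unit is a detector that marks a key as a real-time hotspot when its query rate exceeds a threshold and it was used recently. The thresholds and names below are illustrative assumptions, not values from the embodiment.

```python
import time

class RealtimeHotspotDetector:
    """Sketch: hotspot = high queries-per-second AND recent last use."""

    def __init__(self, qps_threshold=100.0, max_idle_seconds=60.0,
                 window_seconds=1.0):
        self.qps_threshold = qps_threshold
        self.max_idle_seconds = max_idle_seconds
        self.window_seconds = window_seconds
        self.counts = {}     # key -> queries seen in the current window
        self.last_used = {}  # key -> timestamp of the most recent query

    def record_query(self, key, now=None):
        now = time.time() if now is None else now
        self.counts[key] = self.counts.get(key, 0) + 1
        self.last_used[key] = now

    def is_realtime_hotspot(self, key, now=None):
        now = time.time() if now is None else now
        qps = self.counts.get(key, 0) / self.window_seconds
        recent = (now - self.last_used.get(key, float("-inf"))) \
            <= self.max_idle_seconds
        return qps >= self.qps_threshold and recent
```

Keys flagged by this detector would be the ones stored into the second-level (in-heap) cache.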
In some embodiments, the device further includes:
an updating unit, configured to rank the data according to their queries per second and most recent usage time within a set time period, and delete part of the data according to the ranking result.
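The updating unit's periodic rank-and-delete step can be sketched as a single eviction pass; running it on a timer gives the periodic update described above. The function name and the keep-top-N policy are our own illustrative choices.

```python
def evict_by_rank(entries, keep_n):
    """entries: dict of key -> (queries_per_second, last_used_timestamp).

    Ranks keys by QPS, breaking ties by most recent use, deletes everything
    outside the top keep_n, and returns the set of evicted keys.
    """
    # Tuples compare lexicographically: QPS first, then recency.
    ranked = sorted(entries, key=lambda k: entries[k], reverse=True)
    evicted = set(ranked[keep_n:])
    for key in evicted:
        del entries[key]
    return evicted
```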
In some embodiments, the first-level cache is off-heap memory, the second-level cache is in-heap memory, and the third-level cache is a distributed remote cache.
In some embodiments, the device further includes:
a scheduling unit, configured to set a query scheduling strategy, where the query scheduling strategy indicates whether each level of cache in the multi-level cache provides query service.
In some embodiments, the query unit is specifically configured to: determine the query range and the query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
In some embodiments, the query unit is specifically configured to:
determine the levels of cache to query according to the query range, query each level of cache in turn from the highest level to the lowest, and obtain the data corresponding to the query request.
In some embodiments, the storage unit is further configured to:
if the data corresponding to the query request is found in a lower-level cache, store the data into the caches above that lower-level cache.
It should be noted that, in some embodiments, an embodiment of the present invention further provides a multi-level cache data storage device, the device including: an acquisition unit, configured to acquire the type of data; and a storage unit, configured to store the data into the corresponding cache according to the type of the data, where caches of different levels store different types of data, and caches of different levels are of different types.
Wherein, the storage unit specifically includes:
a first storage unit, configured to store predicted hotspot data into the first-level cache, where the predicted hotspot data is obtained by prediction from historical user behavior data;
a second storage unit, configured to store real-time hotspot data into the second-level cache, where the real-time hotspot data is computed from real-time user behavior data; and
a third storage unit, configured to store the full data set into the third-level cache.
Wherein, the device further includes:
a first determination unit, configured to assign different weights to different categories of behavior according to the category of the user behavior, obtain the scores of the merchandise items and/or merchandise item suppliers corresponding to the user behavior according to the weights, determine the ranking of the merchandise items and/or merchandise item suppliers according to their scores, and determine the hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
Wherein, the device further includes:
a distribution unit, configured to distribute the predicted hotspot data to each distributed computing node.
Wherein, the device further includes:
a second determination unit, configured to determine the real-time hotspot data according to the queries per second and the most recent usage time of the data.
Wherein, the device further includes:
an updating unit, configured to rank the data according to their queries per second and most recent usage time within a set time period, and delete part of the data according to the ranking result.
Wherein, the first-level cache is off-heap memory, the second-level cache is in-heap memory, and the third-level cache is a distributed remote cache.
In some embodiments, an embodiment of the present invention discloses a multi-level cache data query device, the device including: a receiving unit, configured to receive a data query request; and a query unit, configured to, in response to the data query request, determine a query range and a query order according to a query strategy corresponding to the data query request, query the multi-level cache in turn according to the query range and the query order, and obtain the data corresponding to the query request.
Wherein, the query unit is specifically configured to: determine the levels of cache to query according to the query range, query each level of cache in turn from the highest level to the lowest, and obtain the data corresponding to the query request.
In some embodiments, an embodiment of the present invention discloses a multi-level cache data scheduling device, the device including: a setting unit, configured to set a query scheduling strategy, where the query scheduling strategy indicates whether each level of cache in the multi-level cache provides query service; and a scheduling unit, configured to receive a data query request and determine a query range and a query order according to the query strategy corresponding to the data query request and the set query scheduling strategy.
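The interplay of the query scheduling strategy (a per-level on/off flag) with the query strategy's requested levels can be sketched as below; the class and method names are illustrative assumptions.

```python
class QueryScheduler:
    """Sketch of the scheduling unit: the final query range and order are
    the levels requested by the query strategy, filtered by the per-level
    service flags of the scheduling strategy, highest level first."""

    def __init__(self, num_levels=3):
        # True means that level currently provides query service.
        self.enabled = [True] * num_levels

    def set_strategy(self, level, provides_service):
        self.enabled[level] = provides_service

    def plan(self, requested_levels):
        # Intersect the requested levels with the scheduling strategy;
        # level 0 is the highest, so ascending order is high-to-low.
        return [lvl for lvl in sorted(requested_levels) if self.enabled[lvl]]
```

Disabling a level in this way lets an operator take, say, the in-heap tier out of service during a rollout without changing the query strategies themselves.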
Wherein, for the configuration of each unit or module of the device of the present invention, reference may be made to the methods shown in Fig. 2 to Fig. 4, which will not be repeated here.
Referring to Fig. 7, which is a block diagram of a device for multi-level cache data processing provided by an embodiment of the present invention. The device includes: at least one processor 701 (for example a CPU), a memory 702, and at least one communication bus 703 used to implement connection and communication between these components. The processor 701 is configured to execute executable modules, such as computer programs, stored in the memory 702. The memory 702 may include a high-speed random access memory (RAM) and may further include a non-volatile memory, for example at least one disk memory. One or more programs are stored in the memory and configured to be executed by the one or more processors 701; the one or more programs include instructions for performing the following operations: storing different types of data into caches of different levels, where the caches of different levels are of different types; and, in response to a data query request, determining a query range and a query order according to a query strategy corresponding to the data query request, querying the multi-level cache in turn according to the query range and the query order, and obtaining the data corresponding to the query request.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: storing predicted hotspot data into the first-level cache, where the predicted hotspot data is obtained by prediction from historical user behavior data; storing real-time hotspot data into the second-level cache, where the real-time hotspot data is computed from real-time user behavior data; and storing the full data set into the third-level cache.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: assigning different weights to different categories of behavior according to the category of the user behavior, obtaining the scores of the merchandise items and/or merchandise item suppliers corresponding to the user behavior according to the weights, determining the ranking of the merchandise items and/or merchandise item suppliers according to their scores, and determining the hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: distributing the predicted hotspot data to each distributed computing node.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: determining the real-time hotspot data according to the queries per second and the most recent usage time of the data.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: ranking the data according to their queries per second and most recent usage time within a set time period, and deleting part of the data according to the ranking result.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: setting a query scheduling strategy, where the query scheduling strategy indicates whether each level of cache in the multi-level cache provides query service.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: determining the query range and the query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: determining the levels of cache to query according to the query range, querying each level of cache in turn from the highest level to the lowest, and obtaining the data corresponding to the query request.
In some embodiments, the processor 701 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: if the data corresponding to the query request is found in a lower-level cache, storing the data into the caches above that lower-level cache.
Referring to Fig. 8, which is a block diagram of a device for multi-level cache data processing provided by an embodiment of the present invention. The device includes: at least one processor 801 (for example a CPU), a memory 802, and at least one communication bus 803 used to implement connection and communication between these components. The processor 801 is configured to execute executable modules, such as computer programs, stored in the memory 802. The memory 802 may include a high-speed random access memory (RAM) and may further include a non-volatile memory, for example at least one disk memory. One or more programs are stored in the memory and configured to be executed by the one or more processors 801; the one or more programs include instructions for performing the following operations: storing predicted hotspot data into the first-level cache; storing real-time hotspot data into the second-level cache; storing the full data set into the third-level cache, where the first-level cache is off-heap memory, the second-level cache is in-heap memory, and the third-level cache is a distributed remote cache; and, in response to a data query request, determining a query range and a query order according to a query strategy corresponding to the data query request, querying the multi-level cache in turn according to the query range and the query order, and obtaining the data corresponding to the query request.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: assigning different weights to different categories of behavior according to the category of the user behavior, obtaining the scores of the merchandise items and/or merchandise item suppliers corresponding to the user behavior according to the weights, determining the ranking of the merchandise items and/or merchandise item suppliers according to their scores, and determining the hotspot data according to the ranking of the merchandise items and/or merchandise item suppliers.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: distributing the predicted hotspot data to each distributed computing node.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: determining the real-time hotspot data according to the queries per second and the most recent usage time of the data.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: ranking the data according to their queries per second and most recent usage time within a set time period, and deleting part of the data according to the ranking result.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: setting a query scheduling strategy, where the query scheduling strategy indicates whether each level of cache in the multi-level cache provides query service.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: determining the query range and the query order according to the query strategy corresponding to the data query request and the query scheduling strategy.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include instructions for performing the following operations: determining the levels of cache to query according to the query range, querying each level of cache in turn from the highest level to the lowest, and obtaining the data corresponding to the query request.
In some embodiments, the processor 801 is specifically configured to execute the one or more programs, which include an instruction for performing the following operation: if the data corresponding to the query request is found in a lower-level cache, storing the data into the caches above that lower-level cache.
In some embodiments, an embodiment of the present invention further provides a device for multi-level cache data storage, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs include instructions for performing the following operations: acquiring the type of data; and storing the data into the corresponding cache according to the type of the data, where caches of different levels store different types of data, and caches of different levels are of different types.
In some embodiments, an embodiment of the present invention further provides a device for multi-level cache data query, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs include instructions for performing the following operations: receiving a data query request; and, in response to the data query request, determining a query range and a query order according to a query strategy corresponding to the data query request, querying the multi-level cache in turn according to the query range and the query order, and obtaining the data corresponding to the query request.
In some embodiments, an embodiment of the present invention further provides a device for multi-level cache data scheduling, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs include instructions for performing the following operations: setting a query scheduling strategy, where the query scheduling strategy indicates whether each level of cache in the multi-level cache provides query service; and receiving a data query request, and determining a query range and a query order according to the query strategy corresponding to the data query request and the set query scheduling strategy.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed herein. The present invention is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common knowledge and conventional technical means in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
The foregoing is merely the preferred embodiments of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
It should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, without necessarily requiring or implying any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element. The present invention may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present invention may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief, and for the relevant parts reference may be made to the description of the method embodiments. The device embodiments described above are merely exemplary: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative work. The above is merely the specific embodiments of the present invention; it should be pointed out that, for those skilled in the art, several improvements and modifications may also be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.