Commit 0a4954a

ftang1 authored and torvalds committed
percpu_counter: add percpu_counter_sync()
percpu_counter's accuracy is related to its batch size. For a
percpu_counter with a big batch, its deviation could be big, so when the
counter's batch is runtime changed to a smaller value for better accuracy,
there could also be a requirement to reduce the big deviation.

So add a percpu-counter sync function to be run on each CPU.

Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Andi Kleen <andi.kleen@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Tim Chen <tim.c.chen@intel.com>
Link: http://lkml.kernel.org/r/1594389708-60781-4-git-send-email-feng.tang@intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
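To put a rough number on the deviation the message refers to (illustrative figures, not from the patch): each CPU keeps up to about one batch worth of updates in its local counter before folding them into fbc->count, so a plain percpu_counter_read() can be off by roughly batch * num_online_cpus(). For example, a batch of 32 on a 64-CPU machine allows an error of up to about 32 * 64 = 2048. Running percpu_counter_sync() on every CPU folds those local deltas back into the shared count, so a smaller batch can then take effect from an accurate baseline.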
1 parent 4e2ee51 commit 0a4954a

File tree

2 files changed: +23, -0 lines changed

include/linux/percpu_counter.h

Lines changed: 4 additions & 0 deletions
@@ -44,6 +44,7 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
 			      s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+void percpu_counter_sync(struct percpu_counter *fbc);
 
 static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
 {
@@ -172,6 +173,9 @@ static inline bool percpu_counter_initialized(struct percpu_counter *fbc)
 	return true;
 }
 
+static inline void percpu_counter_sync(struct percpu_counter *fbc)
+{
+}
 #endif	/* CONFIG_SMP */
 
 static inline void percpu_counter_inc(struct percpu_counter *fbc)

lib/percpu_counter.c

Lines changed: 19 additions & 0 deletions
@@ -98,6 +98,25 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 }
 EXPORT_SYMBOL(percpu_counter_add_batch);
 
+/*
+ * For a percpu_counter with a big batch, the deviation of its count could
+ * be big, and there is a requirement to reduce the deviation, like when
+ * the counter's batch is runtime decreased to get better accuracy, which
+ * can be achieved by running this sync function on each CPU.
+ */
+void percpu_counter_sync(struct percpu_counter *fbc)
+{
+	unsigned long flags;
+	s64 count;
+
+	raw_spin_lock_irqsave(&fbc->lock, flags);
+	count = __this_cpu_read(*fbc->counters);
+	fbc->count += count;
+	__this_cpu_sub(*fbc->counters, count);
+	raw_spin_unlock_irqrestore(&fbc->lock, flags);
+}
+EXPORT_SYMBOL(percpu_counter_sync);
+
 /*
  * Add up all the per-cpu counts, return the result. This is a more accurate
  * but much slower version of percpu_counter_read_positive()
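Note that percpu_counter_sync() only folds the calling CPU's local delta into fbc->count, so a caller that wants the whole counter synced has to run it on every CPU. Below is a minimal usage sketch, not part of this commit: the counter my_counter and the helper names are hypothetical, and schedule_on_each_cpu() is used here as one possible way to run the sync on all CPUs.

/* Hypothetical usage sketch, not part of this commit: flush every CPU's
 * pending delta into the shared count before switching to a smaller batch.
 */
#include <linux/percpu_counter.h>
#include <linux/workqueue.h>

static struct percpu_counter my_counter;	/* assumed counter, for illustration */

static void my_counter_sync_work(struct work_struct *work)
{
	/* Runs once on each CPU; percpu_counter_sync() folds that CPU's
	 * local delta into my_counter.count under the counter's lock. */
	percpu_counter_sync(&my_counter);
}

static void my_counter_prepare_small_batch(void)
{
	/*
	 * schedule_on_each_cpu() runs the work on every online CPU and
	 * waits for it to finish, so afterwards the shared count reflects
	 * all per-CPU deltas and a smaller batch can be applied.
	 */
	schedule_on_each_cpu(my_counter_sync_work);
}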
