
Commit 16b0a7a

vingu-linaro authored and Peter Zijlstra committed
sched/fair: Ensure tasks spreading in LLC during LB
schbench shows latency increase for 95 percentile above since:
commit 0b0695f ("sched/fair: Rework load_balance()")

Align the behavior of the load balancer with the wake up path, which
tries to select an idle CPU which belongs to the LLC for a waking task.

calculate_imbalance() will use nr_running instead of the spare
capacity when CPUs share resources (ie cache) at the domain level. This
will ensure a better spread of tasks on idle CPUs.

Running schbench on a hikey (8 cores arm64) shows the problem:

tip/sched/core:
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
	50.0th: 33
	75.0th: 45
	90.0th: 51
	95.0th: 4152
	*99.0th: 14288
	99.5th: 14288
	99.9th: 14288
	min=0, max=14276

tip/sched/core + patch:
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
	50.0th: 34
	75.0th: 47
	90.0th: 52
	95.0th: 78
	*99.0th: 94
	99.5th: 94
	99.9th: 94
	min=0, max=94

Fixes: 0b0695f ("sched/fair: Rework load_balance()")
Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Rik van Riel <riel@surriel.com>
Link: https://lkml.kernel.org/r/20201102102457.28808-1-vincent.guittot@linaro.org
1 parent a73f863 · commit 16b0a7a

File tree

1 file changed: +2 -1 lines changed


kernel/sched/fair.c

Lines changed: 2 additions & 1 deletion
@@ -9031,7 +9031,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		 * emptying busiest.
 		 */
 		if (local->group_type == group_has_spare) {
-			if (busiest->group_type > group_fully_busy) {
+			if ((busiest->group_type > group_fully_busy) &&
+			    !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
 				/*
 				 * If busiest is overloaded, try to fill spare
 				 * capacity. This might end up creating spare capacity

0 commit comments

