How to calculate cache miss percentage? #9

faradawn started this conversation in General

Hi,

I ran the Cassandra trace on 1 core and rolled up the statistics:

[Screenshot: rolled-up per-prefetcher statistics (IPC, LLC accesses/misses), 2023-06-12]
  1. May I ask how to calculate the LLC_total_miss percentage?
  2. Why does "nopref", which I assume means "no prefetcher", have the lowest miss rate?
  3. In the first five lines, why does Pythia have the highest IPC (instructions per cycle), even though it has more cache misses than "nopref"?

Thanks in advance!



@rahulbera

Hi @faradawn,

  1. For the LLC miss ratio, you can compute Core_0_LLC_total_miss / Core_0_LLC_total_access (see the sketch below).
  2. There are four types of memory requests in ChampSim: load, RFO, prefetch, and writeback. Core_0_LLC_total_miss is the sum of the misses of all four types. From the screenshot, Pythia increases the total number of memory requests for cassandra_phase0_core0 by 37%, but if you check Core_0_LLC_load_miss, Pythia reduces it by almost 52%. In other words, the prefetch requests generated by Pythia drive the load-request misses down, while the overall miss rate (which also counts prefetch-request misses) goes up. Since a load miss is far more critical to overall performance, performance still improves in the end.
  3. The previous answer should also address this question.
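
As a rough sketch of that calculation, the counters can be read straight out of the .out file and divided. The stat names below match the screenshot, but the "name value" line format is an assumption; adjust the regex to your ChampSim/Pythia output.

```python
# Minimal sketch (not official Pythia tooling): compute LLC miss ratios
# from a ChampSim/Pythia .out file. Assumes each counter appears somewhere
# in the file as "<name> <integer>"; adjust the regex if your format differs.
import re
import sys

STATS = ["Core_0_LLC_total_access", "Core_0_LLC_total_miss", "Core_0_LLC_load_miss"]

def read_stats(path):
    text = open(path).read()
    stats = {}
    for name in STATS:
        m = re.search(rf"{name}\s+(\d+)", text)
        if m:
            stats[name] = int(m.group(1))
    return stats

if __name__ == "__main__":
    s = read_stats(sys.argv[1])  # e.g. cassandra_phase0_core0_nopref.out
    ratio = s["Core_0_LLC_total_miss"] / s["Core_0_LLC_total_access"]
    print(f"LLC total miss ratio: {ratio:.2%}")
    print(f"LLC load misses:      {s['Core_0_LLC_load_miss']}")
```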

Hope this helps.

@faradawn

Hi @rahulbera,

I found total_access in the raw experimental output:

/Pythia/experiments/experiments_1C/cassandra_phase0_core0_nopref.out

and calculated the miss rates for nopref and Pythia:

Nopref: 1056690 / 1784950 = 0.59
Pythia: 1448139 / 2522886 = 0.57
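
For reference, the same arithmetic as a quick Python check (counts copied from the two .out files above):

```python
# Quick check of the miss ratios quoted above (counts from the .out files).
counts = {
    "nopref": (1056690, 1784950),   # (LLC_total_miss, LLC_total_access)
    "pythia": (1448139, 2522886),
}
for config, (miss, access) in counts.items():
    print(f"{config}: {miss} / {access} = {miss / access:.2f}")
# nopref: 0.59, pythia: 0.57
```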

So Pythia issues more memory accesses (prefetches) in order to reduce load cache misses, which matter more for performance (instructions per cycle).

It makes sense now.

Thanks for the clear explanation!

Category: General
2 participants: @faradawn, @rahulbera

This discussion was converted from issue #7 on June 12, 2023 14:58.

