# NERSC/tokio-abcutils

Tools and methods to analyze TOKIO-ABC results.

*This repository was archived by the owner on September 25, 2023 and is now read-only.*


## Setting Up

### Step 1. Prepare the working environment

We assume that all of the TOKIO-ABC results are stored in a subdirectory of this repository called `results`:

```
$ mkdir results
$ cd results
$ for i in /global/project/projectdirs/m888/glock/tokio-abc-results/runs.{cori,edison}.2017-*; do ln -vs $i; done
```

In order for the concurrent jobs metrics to be populated, you must define the following environment variables:

```
export NERSC_JOBSDB_HOST="..."
export NERSC_JOBSDB_USER="..."
export NERSC_JOBSDB_PASSWORD="..."
export NERSC_JOBSDB_DB="..."
```

See a NERSC staff member for the correct values to gain access to the NERSC jobs database.
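
Before launching any of the summary jobs, it can be worth verifying that these variables are actually set. A minimal sketch in plain Python (illustrative, not part of this repository):

```python
import os

# Sanity check (illustrative, not part of this repository): confirm the
# jobs-database variables named above are set before running the summaries.
required = ("NERSC_JOBSDB_HOST", "NERSC_JOBSDB_USER",
            "NERSC_JOBSDB_PASSWORD", "NERSC_JOBSDB_DB")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit("missing environment variables: " + ", ".join(missing))
```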

### Step 2. Generate job summary files

Then we generate summary JSON files for each Darshan log. This takes a significant amount of time because it involves opening every Darshan log, then collecting metrics from across the system that correspond to that job. To do this in parallel, use the included `parallel_summarize_job.sh` script, e.g.,

```
$ ./parallel_summarize_job.sh edison 2>&1 | tee -a summarize_jobs-edison.log
mkdir: created directory '/global/project/projectdirs/m888/glock/tokio-year/summaries/edison'
Generating /global/project/projectdirs/m888/glock/tokio-year/summaries/edison/glock_ior_id3906633_2-14-63024-14939811182217632593_1.json
Generating /global/project/projectdirs/m888/glock/tokio-year/summaries/edison/glock_ior_id4048967_2-19-64883-9509271909828150823_1.json
Generating /global/project/projectdirs/m888/glock/tokio-year/summaries/edison/glock_ior_id4015752_2-18-63772-1376825852187540237_1.json
...
```

This script is just a parallel wrapper around `summarize_job.py` and is invoked on each Darshan log with options similar to the following:

```
$ ./pytokio/bin/summarize_job.py --jobhost=cori \
                                 --concurrentjobs \
                                 --topology=data/cori.xtdb2proc.gz \
                                 --ost \
                                 --json \
                                 results/runs.cori.2017-12/runs.cori.2017-12-31.5/glock_dbscan_*.darshan
```

This `summarize_job.py` script retrieves and indexes data from each connector, but it does not strive to synthesize cross-connector metrics such as coverage factors. That occurs in analysis that we will perform later on.
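
To get a sense of what a per-job summary contains, one can load a single file and list its populated fields. This is an illustrative sketch; the file name below is one example from the log output above, and the structure of the JSON is whatever `summarize_job.py` emitted:

```python
import json

# Illustrative only: peek at the fields populated in one per-job summary.
path = "summaries/edison/glock_ior_id3906633_2-14-63024-14939811182217632593_1.json"
with open(path) as f:
    summary = json.load(f)

for key in sorted(summary):
    print(key, "->", type(summary[key]).__name__)
```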

### Step 3. Collate job summaries

We then convert the collection of per-job summary JSON files into a normalized collection of records in CSV format.

```
$ ./normalize_job_summaries.py --output summaries/edison-summaries_%s.csv ./summaries/edison/*.json
```

The `normalize_job_summaries.py` script takes any number of JSON files generated by `summarize_job.py`, finds all of the fields that were populated, and creates a Pandas DataFrame from all of those records. Any record that is missing one or more keys simply has those fields left as NaN.
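
The effect is the same as building a pandas DataFrame from heterogeneous records, where any key absent from a record becomes NaN. A minimal sketch of the idea, assuming (hypothetically) that each summary is a flat key-value record:

```python
import glob
import json

import pandas as pd

# Sketch of the normalization idea (assumes flat key-value summaries):
# records with different key sets are combined into one table, and any
# field missing from a record becomes NaN.
records = []
for path in sorted(glob.glob("summaries/edison/*.json")):
    with open(path) as f:
        records.append(json.load(f))

df = pd.DataFrame.from_records(records)  # columns = union of all keys
print(df.isna().sum().sort_values(ascending=False).head())
```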

The `--output` argument allows you to specify a file name to which the normalized data should be written in CSV format. If the `--output` file name contains a `%s`, it is replaced by the date range represented in the normalized data.
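
The substitution behaves like ordinary Python `%`-formatting; the date-range string below is hypothetical, since the script derives the real one from the data:

```python
# Illustrative only: how a %s placeholder in --output gets filled in.
output_template = "summaries/edison-summaries_%s.csv"
date_range = "2017-02-14_2017-02-19"  # hypothetical date range
print(output_template % date_range)
# -> summaries/edison-summaries_2017-02-14_2017-02-19.csv
```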

