openxla/xprof

A profiling and performance analysis tool for machine learning
XProf offers a number of tools to analyze and visualize the performance of your model across multiple devices. Some of the tools include:
- Overview: A high-level overview of the performance of your model. This is an aggregated overview for your host and all devices. It includes:
  - Performance summary and breakdown of step times.
  - A graph of individual step times.
  - High level details of the run environment.
- Trace Viewer: Displays a timeline of the execution of your model that shows:
  - The duration of each op.
  - Which part of the system (host or device) executed an op.
  - The communication between devices.
- Memory Profile Viewer: Monitors the memory usage of your model.
- Graph Viewer: A visualization of the graph structure of HLOs of your model.
To learn more about the various XProf tools, check out the XProf documentation.
First time user? Come and check out this Colab Demo.
Requirements:

- xprof >= 2.20.0
- (optional) TensorBoard >= 2.20.0
Note: XProf requires access to the Internet to load the Google Chart library. Some charts and tables may be missing if you run XProf entirely offline on your local machine, behind a corporate firewall, or in a datacenter.
If you use Google Cloud to run your workloads, we recommend the xprofiler tool. It provides a streamlined profile collection and viewing experience using VMs running XProf.
To get the most recent release version of XProf, install it via pip:
```
$ pip install xprof
```

XProf can be launched as a standalone server or used as a plugin within TensorBoard. For large-scale use, it can be deployed in a distributed mode with separate aggregator and worker instances (more details later in the doc).
When launching XProf from the command line, you can use the following arguments:
- `logdir` (optional): The directory containing XProf profile data (files ending in `.xplane.pb`). This can be provided as a positional argument or with `-l` or `--logdir`. If provided, XProf will load and display profiles from this directory. If omitted, XProf will start without loading any profiles, and you can dynamically load profiles using `session_path` or `run_path` URL parameters, as described in the Log Directory Structure section.
- `-p <port>`, `--port <port>`: The port for the XProf web server. Defaults to `8791`.
- `-gp <grpc_port>`, `--grpc_port <grpc_port>`: The port for the gRPC server used for distributed processing. Defaults to `50051`. This must be different from `--port`.
- `-wsa <addresses>`, `--worker_service_address <addresses>`: A comma-separated list of worker addresses (e.g., `host1:50051,host2:50051`) for distributed processing. Defaults to `0.0.0.0:<grpc_port>`.
- `-hcpb`, `--hide_capture_profile_button`: If set, hides the 'Capture Profile' button in the UI.
If you have profile data in a directory (e.g., `profiler/demo`), you can view it by running:
```
$ xprof profiler/demo --port=6006
```

Or with the optional flag:
```
$ xprof --logdir=profiler/demo --port=6006
```

If you have TensorBoard installed, you can run:
```
$ tensorboard --logdir=profiler/demo
```

If you are behind a corporate firewall, you may need to include the `--bind_all` tensorboard flag.
Go to `localhost:6006/#profile` in your browser; you should now see the demo overview page. Congratulations! You're now ready to capture a profile.
When using XProf, profile data must be placed in a specific directory structure. XProf expects `.xplane.pb` files to be in the following path:
```
<log_dir>/plugins/profile/<session_name>/
```

- `<log_dir>`: This is the root directory that you supply to `tensorboard --logdir`.
- `plugins/profile/`: This is a required subdirectory.
- `<session_name>/`: Each subdirectory inside `plugins/profile/` represents a single profiling session. The name of this directory will appear in the TensorBoard UI dropdown to select the session.
Example:
If your log directory is structured like this:
```
/path/to/your/log_dir/
└── plugins/
    └── profile/
        ├── my_experiment_run_1/
        │   └── host0.xplane.pb
        └── benchmark_20251107/
            └── host1.xplane.pb
```

You would launch TensorBoard with:
```
tensorboard --logdir /path/to/your/log_dir/
```
The runs `my_experiment_run_1` and `benchmark_20251107` will be available in the "Sessions" tab of the UI.
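Assuming the layout above, the session discovery can be sketched in Python (the `list_sessions` helper is hypothetical, not part of XProf's API; it simply mirrors the documented directory convention):

```python
from pathlib import Path

def list_sessions(log_dir: str) -> list[str]:
    """Return session names: subdirectories of <log_dir>/plugins/profile/
    that contain at least one .xplane.pb file."""
    profile_root = Path(log_dir) / "plugins" / "profile"
    if not profile_root.is_dir():
        return []
    return sorted(
        d.name
        for d in profile_root.iterdir()
        if d.is_dir() and any(d.glob("*.xplane.pb"))
    )
```

For the example tree above, this would report `benchmark_20251107` and `my_experiment_run_1` as the available sessions.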
You can also dynamically load sessions from a GCS bucket or local filesystem by passing URL parameters when loading XProf in your browser. This method works whether or not you provided a `logdir` at startup and is useful for viewing profiles from various locations without restarting XProf.
For example, if you start XProf with no log directory:
```
xprof
```
You can load sessions using the following URL parameters.
Assume you have profile data stored on GCS or locally, structured like this:
```
gs://your-bucket/profile_runs/
├── my_experiment_run_1/
│   ├── host0.xplane.pb
│   └── host1.xplane.pb
└── benchmark_20251107/
    └── host0.xplane.pb
```

There are two URL parameters you can use:
- `session_path`: Use this to load a single session directly. The path should point to a directory containing `.xplane.pb` files for one session.
  - GCS Example: `http://localhost:8791/?session_path=gs://your-bucket/profile_runs/my_experiment_run_1`
  - Local Path Example: `http://localhost:8791/?session_path=/path/to/profile_runs/my_experiment_run_1`
  - Result: XProf will load the `my_experiment_run_1` session, and you will see its data in the UI.
- `run_path`: Use this to point to a directory that contains multiple session directories.
  - GCS Example: `http://localhost:8791/?run_path=gs://your-bucket/profile_runs/`
  - Local Path Example: `http://localhost:8791/?run_path=/path/to/profile_runs/`
  - Result: XProf will list all session directories found under `run_path` (i.e., `my_experiment_run_1` and `benchmark_20251107`) in the "Sessions" dropdown in the UI, allowing you to switch between them.
Loading Precedence
If multiple sources are provided, XProf uses the following order of precedenceto determine which profiles to load:
1. `session_path` URL parameter
2. `run_path` URL parameter
3. `logdir` command-line argument
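The precedence above amounts to a simple fallback chain. As an illustration (this is a hypothetical sketch of the documented behavior, not XProf's actual code):

```python
from typing import Optional

def resolve_profile_source(session_path: Optional[str],
                           run_path: Optional[str],
                           logdir: Optional[str]) -> Optional[tuple[str, str]]:
    """Pick which profile source to load, per XProf's documented precedence:
    session_path URL param > run_path URL param > logdir CLI argument."""
    if session_path:
        return ("session_path", session_path)
    if run_path:
        return ("run_path", run_path)
    if logdir:
        return ("logdir", logdir)
    # Nothing provided: XProf starts with no profiles loaded.
    return None
```

For example, if XProf was started with `--logdir=profiler/demo` but the page is opened with `?session_path=...`, the `session_path` wins.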
XProf supports distributed profile processing by using an aggregator thatdistributes work to multiple XProf workers. This is useful for processing largeprofiles or handling multiple users.
Note: Currently, distributed processing only benefits the following tools: `overview_page`, `framework_op_stats`, `input_pipeline`, and `pod_viewer`.
Note: The ports used in these examples (6006 for the aggregator HTTP server, 9999 for the worker HTTP server, and 50051 for the worker gRPC server) are suggestions and can be customized.
Worker Node
Each worker node should run XProf with a gRPC port exposed so it can receiveprocessing requests. You should also hide the capture button as workers are notmeant to be interacted with directly.
```
$ xprof --grpc_port=50051 --port=9999 --hide_capture_profile_button
```

Aggregator Node
The aggregator node runs XProf with the `--worker_service_address` flag pointing to all available workers. Users will interact with the aggregator node's UI.
```
$ xprof --worker_service_address=<worker1_ip>:50051,<worker2_ip>:50051 --port=6006 --logdir=profiler/demo
```

Replace `<worker1_ip>`, `<worker2_ip>` with the addresses of your worker machines. Requests sent to the aggregator on port 6006 will be distributed among the workers for processing.
For deploying a distributed XProf setup in a Kubernetes environment, see the Kubernetes Deployment Guide.
Every night, a nightly version of the package is released under the name `xprof-nightly`. This package contains the latest changes made by the XProf developers.
To install the nightly version of the profiler:
```
$ pip uninstall xprof tensorboard-plugin-profile
$ pip install xprof-nightly
```