

[Add to signaloid.io]

Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Microprocessors

This repository contains the code for the paper *Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Microprocessors*, presented at the *Machine Learning with New Compute Paradigms* workshop at NeurIPS 2024.

The key contribution of the paper is a simple algorithm that computes the Gaussian Process predictive posterior distribution with an uncertain input on an uncertainty-tracking microprocessor [1, 2]. We run our algorithm on a commercial implementation of the uncertainty-tracking microprocessor presented by Tsoutsouras et al. [1, 2], provided by Signaloid.

We compare our algorithm to Monte Carlo simulation. We vary the number of Monte Carlo iterations carried out and the representation size of the uncertainty-tracking microprocessor to measure the trade-off between run time and accuracy (measured by the Wasserstein distance [3] to the ground-truth output distribution).

The Pareto plot below shows the paper's key result, comparing the mean run time against the mean Wasserstein distance (±1 std. dev.). Algorithm 1 refers to our method running on the uncertainty-tracking microprocessor as implemented by Signaloid, and MC stands for Monte Carlo simulation implemented on a traditional computer. The final number in each legend entry is the representation size or the number of Monte Carlo iterations. See Section 5 of the paper for more details on the method. Our method is almost always on the Pareto frontier.

*(Figure: Pareto plot of mean run time against mean Wasserstein distance for Algorithm 1 and the Monte Carlo baselines.)*

This repository contains the code for the implementation that runs on the Signaloid platform in `src/main.c` and the implementation of the Monte Carlo experiments in `src/native.c`. See the section below for how to run each case.

For the best overall configuration of the Signaloid uncertainty-tracking microprocessor (representation size of 128), we find that the closest-in-accuracy Monte Carlo simulation (128000 Monte Carlo iterations) takes approximately 108.80x longer.

Run the code

This repository contains the implementation that makes use of the uncertainty-tracking microprocessor provided by Signaloid and the implementation of the Monte Carlo simulation that runs on traditional hardware.

Run on Signaloid

To run the implementation that makes use of the uncertainty-tracking microprocessor provided by Signaloid, please click the Add to Signaloid button at the top of this README.

This loads the repository into your Signaloid account. You can then run the code by clicking the Compile and Run button.

The output of this code should have the following format:

```
<output distribution> <run time in microseconds>
```

**Note:** For the most accurate results, make sure to use the Jupiter microarchitecture on Signaloid.

Larger experimental suites, such as repeated experiments across multiple representation sizes, can be run via the Signaloid Cloud Compute Engine API. Additional tools and instructions for doing so with this repository will be added in the future.

Run on Traditional Hardware

**Note:** Running the Monte Carlo implementation on traditional hardware is only supported on macOS and Linux.

To run the Monte Carlo implementation on traditional hardware, you will need to have GSL installed on your system.

After setting up GSL, simply clone this repository by running

```
git clone --recursive https://github.com/physical-computation/uncertain-gaussian-process-code
```

Then run

```
make run
```

This command runs a single Monte Carlo simulation with 10 iterations. The resulting data can be found in `data.out`, which should look like:

```
799          <- run time in microseconds
-0.761006    <- samples from Monte Carlo simulation (and below)
-0.840678
-1.034207
-1.085209
-0.258264
-0.923841
0.638482
0.300466
-0.945244
1.094424
```

You can change the number of Monte Carlo iterations by changing the `N_SAMPLES_SINGLE_RUN` make variable in the `Makefile`.

**Benchmarking mode:** You can also use the following command to run in benchmarking mode:

```
make run-bm
```

This requires an installation of Python 3.

**Note:** The `Makefile` assumes that the Python 3 executable is called `python`. If yours has a different name, please edit the `PYTHON` make variable in the `Makefile`.

For each of a specified set of Monte Carlo iteration counts, this mode carries out repeated runs with a preset interval between runs. The resulting data is indexed in the file `experiment-results/results.log` in the following format:

```
local-gp-<number of Monte Carlo iterations>-<experiment iteration> <unique id> <run time in seconds>
```

The Monte Carlo samples can be found in `data/<unique id>.out`.

You can make the following adjustments to `run-bm` by changing the appropriate make variables in the `Makefile`; their default values are shown below:

```
SLEEP_FOR=0             <- interval between runs
N=1                     <- number of repetitions of each Monte Carlo sample-count configuration
N_SAMPLES := 4 16 32    <- list of sample counts for the Monte Carlo simulations
```

Citing this work

Please cite this work using the following BibTeX entry:

```bibtex
@inproceedings{petangoda2024gaussian,
  title     = {Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Processor Architectures},
  author    = {Janith Petangoda and Chatura Samarakoon and Phillip Stanley-Marbell},
  booktitle = {NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms},
  year      = {2024},
  url       = {https://openreview.net/forum?id=zKt7uVOttG}
}
```

References

  1. V. Tsoutsouras, O. Kaparounakis, B. Bilgin, C. Samarakoon, J. Meech, J. Heck, and P. Stanley-Marbell, "The Laplace microarchitecture for tracking data uncertainty and its implementation in a RISC-V processor," in MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 1254–1269, 2021.

  2. V. Tsoutsouras, O. Kaparounakis, C. Samarakoon, B. Bilgin, J. Meech, J. Heck, and P. Stanley-Marbell, "The Laplace microarchitecture for tracking data uncertainty," IEEE Micro, vol. 42, no. 4, pp. 78–86, 2022.

  3. L. V. Kantorovich, "Mathematical methods of organizing and planning production," Management Science, vol. 6, no. 4, pp. 366–422, 1960.
