LockingVarInfo #1078
Conversation
* Implement InitContext
* Fix loading order of modules; move `prefix(::Model)` to model.jl
* Add tests for InitContext behaviour
* inline `rand(::Distributions.Uniform)`

  Note that, apart from being simpler code, Distributions.Uniform also doesn't allow the lower and upper bounds to be exactly equal (but we might like to keep that option open in DynamicPPL, e.g. if the user wants to initialise all values to the same value in linked space).
* Document
* Add a test to check that `init!!` doesn't change linking
* Fix `push!` for VarNamedVector

  This should have been changed in #940, but slipped through as the file wasn't listed as one of the changed files.
* Add some line breaks (Co-authored-by: Markus Hauru <markus@mhauru.org>)
* Add the option of no fallback for ParamsInit
* Improve docstrings
* typo
* `p.default` -> `p.fallback`
* Rename `{Prior,Uniform,Params}Init` -> `InitFrom{Prior,Uniform,Params}`

Co-authored-by: Markus Hauru <markus@mhauru.org>

* use `varname_leaves` from AbstractPPL instead
* add changelog entry
* fix import

…!`, `predict`, `returned`, and `initialize_values` (#984)

* Replace `evaluate_and_sample!!` -> `init!!`
* Use `ParamsInit` for `predict`; remove `setval_and_resample!` and friends
* Use `init!!` for initialisation
* Paper over the `Sampling->Init` context stack (pending removal of SamplingContext)
* Remove SamplingContext from JETExt to avoid triggering `Sampling->Init` pathway
* Remove `predict` on vector of VarInfo
* Fix some tests
* Remove duplicated test
* Simplify context testing
* Rename FooInit -> InitFromFoo
* Fix JETExt
* Fix JETExt properly
* Fix tests
* Improve comments
* Remove duplicated tests
* Docstring improvements (Co-authored-by: Markus Hauru <markus@mhauru.org>)
* Concretise `chain_sample_to_varname_dict` using chain value type
* Clarify testset name
* Re-add comment that shouldn't have vanished
* Fix stale Requires dep
* Fix default_varinfo/initialisation for odd models
* Add comment to src/sampler.jl (Co-authored-by: Markus Hauru <markus@mhauru.org>)

Co-authored-by: Markus Hauru <markus@mhauru.org>

…niform}`, `{tilde_,}assume` (#985)

* Remove `SamplingContext` for good
* Remove `tilde_assume` as well
* Split up tilde_observe!! for Distribution / Submodel
* Tidy up tilde-pipeline methods and docstrings
* Fix tests
* fix ambiguity
* Add changelog
* Update HISTORY.md (Co-authored-by: Markus Hauru <markus@mhauru.org>)

Co-authored-by: Markus Hauru <markus@mhauru.org>

* setleafcontext(model, ctx) and various other fixes
* fix a bug
* Add warning for `initial_parameters=...`

* Remove resume_from
* Format
* Fix test

* Enable NamedTuple/Dict initialisation
* Add more tests

* Fix `include_all` for predict
* Fix include_all for predict, some perf improvements
* Replace Metadata.flags with Metadata.trans
* Fix a bug
* Fix a typo
* Fix two bugs
* Rename trans to is_transformed
* Rename islinked to is_transformed, remove duplication
* Change pointwise_logdensities default key type to VarName
* Fix a doctest
DynamicPPL.jl documentation for PR #1078 is available at: …
penelopeysm commented Oct 20, 2025 • edited
This is very interesting. Here are my completely unprompted thoughts:
codecov bot commented Oct 20, 2025 • edited
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           breaking    #1078    +/-   ##
============================================
- Coverage     82.40%   78.06%    -4.34%
============================================
  Files            42       41        -1
  Lines          3791     3848       +57
============================================
- Hits           3124     3004      -120
- Misses          667      844      +177
```
penelopeysm commented Oct 20, 2025
hold on, why is CI failing........ 😱
mhauru commented Oct 20, 2025 • edited
A local experiment shows me that about 80% of the time in the 8-core case is due to the …

We should try locking at a higher level, like at …

I've never used …
yebai commented Oct 20, 2025
A quick thought: we could introduce locks at the model level. This would prevent users from using threading, except through an explicit opt-in mechanism such as a hypothetical …
penelopeysm commented Oct 20, 2025
Oh, yes, that's completely correct. I didn't think of that. I guess that means even the current implementation here is too high level, right? Because the logpdf calculation is actually done by the accumulator itself, way way inside `map_accumulator`.

And yeah, channels are pretty ugly to work with. The general structure of channel-based code looks like this:

```julia
Distributed.@sync begin
    Distributed.@async begin
        # read from the channel and do things with it one at a time
    end
    Distributed.@async begin
        # here do whatever you like that writes to the channel
    end
end
```

I think the whole thing would get lumped inside …

Because the Channel must contain concrete objects (not function calls), we'd probably need a struct that contains all the arguments to accumulate_obssume plus a boolean to indicate whether it's an assume or an observe.
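A purely hypothetical sketch of what such a message struct and the single-consumer loop could look like (every name below is invented for illustration and is not DynamicPPL API):

```julia
# Hypothetical message type: a concrete object that can be put on a Channel,
# carrying the arguments for one assume/observe accumulation plus a flag
# saying which of the two it is.
struct AccumulationMessage{V,R,D}
    is_assume::Bool  # true => assume (unobserved variable on the tilde), false => observe
    varname::V       # only meaningful for assume messages
    value::R
    dist::D
end

# Single consumer: drains the channel and applies the accumulations one at a
# time, so the accumulator state is only ever touched from one task.
function consume_accumulations(ch::Channel, apply!)
    for msg in ch
        apply!(msg)  # e.g. dispatch on msg.is_assume to the appropriate accumulator update
    end
end
```

The producer side would `put!` one message per tilde statement and `close` the channel once the model body has finished evaluating.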
mhauru commented Oct 20, 2025
Oh yes, silly me. Another option could be to not even try to support threaded …
penelopeysm commented Oct 20, 2025
I like the sound of AtomicAccumulator. In a way that is better, because we want things like addlogprob to also be thread-safe, not just tildes. I think thread-safe distributions are also a good idea, but at the moment I don't know how to generalise them. It would work very nicely for the iid case, though.
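For illustration, a minimal sketch of what an atomic log-probability accumulator could look like, assuming its only state is a single Float64; the names `AtomicLogProbAccumulator` and `add_logp!` are invented here, not DynamicPPL API:

```julia
using Base.Threads: Atomic, atomic_add!, @threads

# Hypothetical accumulator whose only state is one Float64, updated with an
# atomic add, so concurrent updates need no lock.
struct AtomicLogProbAccumulator
    logp::Atomic{Float64}
end

AtomicLogProbAccumulator() = AtomicLogProbAccumulator(Atomic{Float64}(0.0))

# Adding a log-probability term is a single atomic operation.
add_logp!(acc::AtomicLogProbAccumulator, lp::Real) = atomic_add!(acc.logp, Float64(lp))

# Usage: safe to call from many threads at once.
acc = AtomicLogProbAccumulator()
@threads for _ in 1:1000
    add_logp!(acc, log(0.5))
end
acc.logp[]  # ≈ 1000 * log(0.5)
```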
mhauru commented Oct 20, 2025
I think the problem with atomic accumulators is that if we take that road, all accumulators must be made thread-safe. I can easily wrap the …

A possible way out would be to have an abstract type …
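One purely hypothetical shape the abstract-type idea could take (the concrete names in the comment above are not visible in this capture, so everything below is made up): accumulators that are intrinsically safe to share across threads opt in via an abstract supertype, and everything else is wrapped in a lock before a threaded evaluation.

```julia
# Hypothetical opt-in protocol for accumulators that are already thread-safe.
abstract type ThreadSafeAccumulator end

# Fallback wrapper for accumulators that are not: serialise access with a lock.
struct LockedAccumulator{A}
    inner::A
    lock::ReentrantLock
end
LockedAccumulator(acc) = LockedAccumulator(acc, ReentrantLock())

# Pass through accumulators that declare themselves thread-safe; wrap the rest.
make_threadsafe(acc::ThreadSafeAccumulator) = acc
make_threadsafe(acc) = LockedAccumulator(acc)
```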
A sketch of what using locks to implement a thread-safe VarInfo might look like. The idea is summarised by this:

In other words, all operations that may modify the VarInfo modify `vi.inner` while holding `vi.lock`. Everything else just pipes through to `vi.inner[]`. (A self-contained sketch of this wrapper idea is included after the benchmark results below.)

I have not thought this through carefully, and this certainly doesn't provide proper thread-safety. However, it's good enough to run the following little benchmark:
Results on 1 thread:
Results on 8 threads (on a 10 core laptop):
In sum: This is bad. Either I've done something very suboptimally, or locks have large overheads.
cc @yebai, @penelopeysm
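As referenced in the description above, here is a self-contained illustration of the wrapper idea, with a Dict standing in for the wrapped VarInfo. The field names mirror the description (`inner`, `lock`), but the code is only a sketch of the approach, not the PR's implementation:

```julia
struct LockingVarInfo{V}
    inner::Base.RefValue{V}   # the wrapped "VarInfo" (a Dict in this toy example)
    lock::ReentrantLock       # guards every operation that may modify it
end

LockingVarInfo(vi) = LockingVarInfo(Ref(vi), ReentrantLock())

# Mutation: hold the lock while modifying (or replacing) the inner object.
function setvalue!(lvi::LockingVarInfo, vn, val)
    lock(lvi.lock) do
        lvi.inner[][vn] = val
    end
    return lvi
end

# Reads just pipe through to the inner object.
getvalue(lvi::LockingVarInfo, vn) = lvi.inner[][vn]

# Many tasks mutating concurrently without corrupting the underlying Dict:
lvi = LockingVarInfo(Dict{Symbol,Float64}())
Threads.@threads for i in 1:100
    setvalue!(lvi, Symbol(:x, i), float(i))
end
length(lvi.inner[])  # == 100
```

Every per-operation lock/unlock pair adds overhead, which would be consistent with the slowdown reported in the benchmark above.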