# Future Rx.NET Packaging #2038
Update 2025/08/01: please see #2211 for an update on progress, detail on prototypes, and discussion about the options now under consideration.

We think that it might be necessary to change how Rx.NET is packaged in NuGet in a future version. We would much prefer not to do this, but the alternative appears to be a choice between leaving a major problem unfixed and breaking backwards compatibility.

## The Major Problem We Want to Fix

One of our main goals for the next version of Rx is to fix the problem where taking a dependency on `System.Reactive` can drag WPF and Windows Forms dependencies into applications that don't use them. See AvaloniaUI/Avalonia#9549 for an example of this kind of problem.

This is an unfortunate consequence of a decision made in Rx v4. The "great unification" in that release meant that you needed only a single NuGet reference to `System.Reactive` to get all of Rx's functionality.

Where it all went wrong was when .NET Core 3.1 added WPF and Windows Forms support. The difficulty arose because not all .NET Core 3.1 systems have these frameworks. What was Rx to do? The solution it adopted was to look at your application's target framework moniker. If you had a Windows-specific TFM targeting a sufficiently recent version of Windows (e.g. `net6.0-windows10.0.19041`), you got a build of Rx that includes its WPF and Windows Forms features.

But this makes no sense for UI applications using other UI frameworks (e.g. Avalonia). Those have no need for Rx's WPF and Windows Forms features. It becomes particularly disastrous if you create a self-contained deployment, because this will now include all of the WPF and Windows Forms library components. This tends to make your application about 30MB bigger than it needs to be.

## What Needs to Happen

It needs to be possible for applications targeting Windows-specific TFMs to use Rx without ending up with a dependency on WPF and Windows Forms. Dependencies on Rx's WPF and/or Windows Forms functionality should be an opt-in feature.

## Implications for Packaging

There are essentially two options:

1. Remove the UI-framework-specific functionality from `System.Reactive`.
2. Deprecate `System.Reactive` and move Rx's core functionality into a new main package.
In either case, the WPF and Windows Forms features would move out into new NuGet packages. (With option 2, there is the option to have the deprecated `System.Reactive` package live on as a compatibility facade.)

## What We'd Like to Do but Can't

If we thought it was possible, we'd prefer option 1 above: to remove all UI-framework-specific features from `System.Reactive`. If we were able to rewrite history, we'd make it so that Rx v4 had worked this way, and that the WPF and Windows Forms (and UWP) features had always been out in separate components. But obviously we can't do that. And unfortunately, that's why we don't believe we can do this in Rx 7.0 without breaking backwards compatibility in a serious way.

To understand why, consider an application that uses Rx's WPF or Windows Forms functionality, perhaps indirectly through some other library in its dependency tree.
Now suppose that application upgrades to a version of Rx in which the UI-framework-specific types have been removed from `System.Reactive`. The application (or one of its dependencies) was compiled against the old assembly, but at runtime only the new one is present.
So what will happen if we have modified `System.Reactive` so that it no longer contains the WPF and Windows Forms types? Any code compiled against the old assembly will fail at runtime when it tries to use those types. You can't fix this by also adding a reference to a new UI-framework-specific package, because the runtime will still be looking for the types in the assembly they were originally compiled against. This is a bad situation to put developers in: nothing they can do in their own code fixes a problem that originates deep in their dependency tree.

You might think we can solve this by adding type forwarders in `System.Reactive` pointing at the new packages, but then `System.Reactive` would still have to depend on those packages (and therefore on WPF and Windows Forms), which defeats the purpose of the split.

## What We Think We Will Have to Do (Reluctantly)

We think the only way forward is to deprecate `System.Reactive` and move Rx's functionality into new packages. We hate the idea of doing this. Rx.NET has already been through several confusing iterations of its packaging solution. If we knew of a way to solve AvaloniaUI/Avalonia#9549 while retaining the current packaging, we would prefer it.

## UPDATE 2023/11/21: A Possible Alternative

Thanks to a question from @heronbpv, we did some further investigation and may have come up with an alternative. The full explanation is at #2038 (reply in thread) but in summary: it looks like the use of deprecation warnings on the UI-framework-specific types might let us steer code away from them without removing them immediately.

If this does work out, we might be able to move more slowly towards the state in which UI-framework-specific functionality is fully separated out. We would likely still create separate NuGet packages for each UI framework, but leave the existing types in place in `System.Reactive` for some transition period.

## Should We Fork Rx?

It has been suggested (e.g. see #2034 (comment) and the discussion following that comment) that there is an opportunity for a "clean break" here. There are at least two (incompatible) views on what that might mean.
As I understand it, a motivation for this is to enable more innovation. I don't currently have a clear idea of what changes we might make in this new scheme that we currently cannot, although one possible area would be to separate out the parts of Rx that are not well suited to trimming. (Some of Rx's code relies on techniques that don't play well with trimming.)

I currently think that if we were going to do this, we'd need to build up a shopping list of the big changes we think we'd want to make to take advantage of this "clean break", because once we've pulled that trigger (if we do), the opportunity to make significant changes will be in the past.

## What Do You Think?

We've opened this discussion because we expect people to have opinions on this, and we hope that people might be able to design solutions to this that haven't occurred to us—perhaps there is a better way that we've just not seen yet. Please let us know what you think!
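For readers unfamiliar with type forwarders (they come up repeatedly in this thread): a type forwarder is an assembly-level attribute that redirects the runtime's lookup of a type to a different assembly. A hypothetical sketch, assuming a future `Rx.Wpf` assembly took ownership of the WPF scheduler (the package and assembly names here are illustrative, not decided):

```csharp
// Hypothetical sketch: if DispatcherScheduler moved to a new Rx.Wpf
// assembly, the legacy System.Reactive assembly could keep old
// binaries working by forwarding the type at the assembly level.
// The cost, as discussed above, is that System.Reactive must then
// reference Rx.Wpf (and so WPF), which is exactly the dependency
// we are trying to avoid.
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(System.Reactive.Concurrency.DispatcherScheduler))]
```

This is why forwarders alone cannot deliver option 1: they keep old binaries working only by keeping the dependency edge alive.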
---

Replies: 17 comments, 49 replies

---
#2034 illustrates one possible implementation, and shows the specific choices it makes.

To be clear, we've not decided that we will do it this way. This is just a prototype to show how one solution to this problem would look, and to verify that it does indeed fix the 30MB installer problem that we're setting out to solve. (It does.) We don't much like the name used in the prototype, but naming is a detail we can revisit.
---
Same as my thoughts; I totally agree.
---
How widespread is the use of the UI-framework-specific parts of Rx? I presume that if `System.Reactive` simply dropped them, the impact would depend heavily on how many projects actually use them.

As for the clean start, there would be a lot of clashes due to the use of extension methods, right? For example, if old Rx and new Rx needed to interoperate, whose `Where` would apply?
---
I have an idea. How about turning the existing package into a hollow shell that simply forwards everything to the new packages?
---
Unless I'm missing something, I presume changing the signature would itself be a breaking change.
---
I don't know how we would obtain data that could answer that question. GitHub's code search reports about 1,900 files using it, but I'm not sure that's a very illuminating answer to your question.
I've seen enough examples of projects stuck on some version of a component (for various reasons, including the component having essentially fallen out of support with no realistic alternative available) to have come to the view that it is problematic to hope that dependencies will simply always be updated. (And even when they are, it's often not on the schedule you'd want.)

In conclusion, my answer to this question is: I think it's probable that the chances are high enough to matter. (Rx is pretty widely used, so even if only a small fraction of users were affected, that's a big impact. And I'm unconvinced it's going to be all that small a fraction.) Rx has historically maintained pretty high standards of backwards compatibility, so we don't really have past experience telling us what the impact of a breaking change would be.
That is a very good point. I had been thinking that because most of what Rx does is most naturally used as implementation detail—the number of types it defines that you would likely make part of your own class's public API is fairly small, and the two most important types, `IObservable<T>` and `IObserver<T>`, are defined in the .NET runtime libraries rather than in Rx itself—a split might be survivable.

But I hadn't thought about what happens when some random dependency deep in your tree brings in "the other Rx". That would be a major problem. So that rather suggests that a complete split would be a mistake, and that we'd want to unify on exactly one version of Rx.
---
The problem with that is that its public-facing API includes types defined by WPF. So even with a hollow shell, you still end up with a dependency on WPF.
---
@idg10 I can confirm this is a huge problem. I tried such a workaround when moving away from the old v2.2.5 binaries and found that there was no way to escape it, since any little package left unchanged would break things all over again. The only alternative was to use a different TFM, as I suggested below.
Completely agree. I think it would not be enough to just fork the package; you would need to fork the namespace itself, and that is exactly what I think would create massive confusion within the community.
We need to cut these out; the current architecture is completely untenable. It is very sad that it got to this state, but I don't think we need to give up on change. @akarnokd, see my proposal below; I would be curious to hear your feedback.
---
I think this is possibly more tolerable than you anticipate. This wouldn't break old versions of people's apps, only new versions when they upgrade. I think it's OK to introduce breaking changes, especially considering the benefits and the advantages over the alternative options. Perhaps it's the point of this discussion, but maybe a survey would help? Let your users tell you whether they think this approach is a problem; you might find that most are OK with it.

As an example, the last few versions of MediatR have introduced breaking changes. These have been potentially frustrating, but ultimately they are significant improvements, and I think users appreciate them.
---
And I know I'd rather see a consolidation of community contributions to the official repo to help drive the project forward, rather than seeing it fracture and that effort become diluted...
---
Sorry, you're right - what I meant to say is, the problem of updating dependency chains is not unique to this library, or to this change.
---
If I were in this scenario, yes, I wouldn't hold it against the maintainers.

Also, to be clear, I didn't mean to imply that these changes are not breaking by putting "breaking changes" in quotes. I'm honestly not sure why I did.
My understanding was that all the proposed solutions would break backwards compatibility in some way. Your illustration with regard to type forwarders seems to be what I was missing. I'm actually not familiar with type forwarders, so that was very helpful. This setup does need to come with some clear documentation for newcomers to the ecosystem, then, because it is introducing more complexity to developers and the dependency-resolution process.
So far, my favorite idea was (coincidentally, I promise) the one I suggested here, assuming it works. It also looks like you mentioned the same idea here. I like it because it follows a "normal" process of introducing breaking changes pretty closely: changes are presented directly to developers as they write code, while also providing lead time to address them.
With everything you've presented, I think this is a fair assessment.
---
@JakenVeina What I don't like about the current proposal is that there is no plan to ever phase out the bloated packages, and therefore no incentive for current library maintainers to stop developing the way they have so far. Will we continue to maintain type forwarders for UI schedulers indefinitely? Will we continue to maintain the bloated Windows TFMs indefinitely? @idg10 has mentioned elsewhere that because Rx uses the `System.*` namespace, it is effectively held to framework-level standards of compatibility.
The very fact that we have these long lists of binary breaking changes should be a big clue about how Microsoft is dealing with evolving their ecosystem. Even just the first item on that list raised some huge backlash from the community and is holding back and causing refactoring of entire projects across the .NET ecosystem, including core plotting libraries. And yet here we are with .NET 8, and the world hasn't ended. Microsoft have repeated the mantra time and again that they will break things for the sake of sanity, and that consumers are given 3 years to adapt between LTS releases.

Right now I am assuming there is no promise of stable binary compatibility across .NET TFMs, so I don't understand why, given this, we are assuming that Rx must provide a stronger guarantee than the platform itself.

There is a namespace called `System.Reactive`, and humans often care more about names than they care about numbers; I don't buy the argument of counting nodes in a "dependency tree" as a measure of complexity. My definition of "sanity" is not just that we have one package, it's that the root of the namespace and the name of the package people install line up.

I don't understand why we are all giving up when the tools are there to deal with this gracefully, with respect for both the existing community and new developers. As it stands, I cannot say I am looking forward to the day where Rx 101 starts with "first install the package that isn't called `System.Reactive`".
---
Were you aware that Rx has already been doing this for about 8 years? Do you know the names of the existing type forwarders that Rx has been maintaining for all that time? If not, can you honestly say they are a problem? And even if you do know what they are, do you believe their continued existence is causing a problem? I've not been aware of them being a problem, but perhaps there's something I don't know. Would it be nice to get rid of them? Sure. It's one less moving part. But as far as I know they're not really a big deal, and I have no way of knowing what the impact would be of dropping them.

Re: .NET and breaking changes. I've not been through every one of the changes in the lists you linked to, but I did systematically go through quite a lot of them to see if I was missing something. I have a few observations.

First of all, the .NET team evidently has their "breaking change" detector set to a much more sensitive level than the Rx project does, figuratively speaking. A great many of the things they label as "binary breaking changes" are things we probably wouldn't think of as breaking changes in Rx, because they won't break most people in most circumstances. (That makes these kinds of changes very, very different from something as radical as removing something from a public-facing API without first going through some fair-warning period of deprecation.) E.g., look at the various "binary breaking changes" in those lists that amount to small adjustments to individual members: the effect is that for the overwhelming majority of consumers, those changes went completely unnoticed.

Quite a few more of these changes look like various stages of multi-year moves to make things obsolete. (Quite a few relate to the security-driven decision to gradually stop people using CLR serialization.) As far as I can tell, most of the things labelled as binary breaking changes there don't come close to the destructive impact of the kinds of changes we're discussing here.
Another observation is that a lot of the changes seem to relate to the evolution of application frameworks (e.g., ASP.NET Core or Windows Forms). I think there's a major qualitative difference between evolving the design of a framework and introducing a breaking change to something like LINQ to Objects. Choosing a framework is a top-down decision not taken lightly; the use of a library feature is something other libraries will often impose on you. This makes them quite different things. And I'd ask you: which do you think Rx is more like, ASP.NET Core or LINQ to Objects? I think it's more like the latter. Rx is core library functionality, not a framework, and although there are some library types in that list of changes, the ones I looked at were all more like the gradual-obsolescence cases than like outright removal.
We aren't. We hold ourselves to a much lower standard. We have to, for the simple reason that we just don't have the resources to take the care and attention that the .NET team does when making changes of that kind.

I don't believe I ever said we had to exceed or even equal the standards set by the frameworks (standards which I think you have unfairly discounted in your argument). What I did say is that because we're in the `System.*` namespace, we should hold ourselves to a higher standard than the average library.
I did not say we're going to go to the same lengths as the .NET team, just that we should do better than average. As those links you've posted show, the .NET team will label some things as a "binary breaking change" if there's merely an observable change in behaviour. That's actually a much higher standard than the one I'm aspiring to in this discussion. By that standard, #1968 and #1882 are both binary breaking changes, but I'm OK with that. Those are on the "internal implementation details" side of the line.

I think it's really important to recognize that removing things from the public API of a widely used package is a different kind of change. It is, in my view, a mistake to consider all "binary breaking changes" as being essentially the same sort of thing. I therefore do not think the fact that .NET itself has made a lot of binary breaking changes gives Rx an excuse to do things that we know will definitely break things for our users.

In summary, I've tried to clarify that my view on the need for backwards compatibility is not quite as extreme as you had thought. I have always been aware that we're not on the same level as the .NET team, but I stand by my view that we need to be better than average.
---
@idg10, thanks for creating this discussion. I think we should really exhaust all possibilities here to make sure we get the best decision for the community. I wholeheartedly agree with your option 1: by default we should try to save the `System.Reactive` package and namespace.

For completeness, I would like to submit to discussion a hybrid alternative first suggested in #2034 (comment) that I think goes a little bit in the direction of what you would like to achieve ideally. Essentially, consider the possibility of Rx v7 being a multi-TFM package (same as today), but with the UI-framework-specific functionality present only in the older Windows-specific TFMs.

The reason this is somewhat nice is that TFMs are still one of the most used and well-understood ways of controlling the dependency graph on NuGet. If you say your app needs to run on .NET 7.0 then you simply can't pull .NET 8.0 dependencies by mistake. This means that your scenario of irremediably breaking something by accidentally upgrading the Rx.NET package won't happen unless you also, at the same time, update the TFM for your app.

For me this is actually a big deal, since I don't think people change TFMs lightly, especially on client-side apps. If they do, dependencies usually need to be properly audited anyway (exactly because dependency graphs can change) and I don't think people should be surprised to find things can break: after all, that is the point of versioning dependency trees by TFM.

You might say this is not strictly backwards compatible because of platform unification during the dependency resolution stage, but to me this is still better than the other two extreme options in terms of combining compatibility with a clean break. This way someone creating a new .NET 8.0 app today will finally have a sane starting point. At the same time, someone maintaining an older app can still feel confident upgrading the Rx package to the latest version, since it won't break their dependency tree as long as they keep their existing TFM.
This seems like an acceptable compromise, as it would give even people on existing TFMs a chance to start upgrading, and to more easily explore the possibility of bumping TFMs by testing and identifying any problems, while contacting legacy project maintainers or forking problematic dependencies throughout the coming year.
---
@idg10 that is great news and definitely sounds like it would do exactly what I was hoping.
I understand this concern, but this is actually a different question: "is .NET 8.0 the right TFM to introduce this change?". For that particular question, I am willing to concede that maybe you are right and now is not the right time; since we were not ready in time for the .NET 8.0 release, which has now passed, people can use it and they will build up legacy. But we can still use the exact same approach for .NET 9.0. In the meantime, you can still announce the intention to remove platform-specific TFMs from `System.Reactive` in a future release.

As the main maintainer you would ultimately make that call, but it looks to me now that this strategy is the strongest option, especially given everything you outlined above.
---
To summarise and check if I understand correctly, the logic if we wanted to do this today would be to release Rx v7 with the following TFMs:

- `net6.0`
- `net6.0-windows10.0.19041` (keeping the existing WPF and Windows Forms functionality)
- `net8.0`
- `net8.0-windows` (no WPF or Windows Forms functionality)

The last TFM would be literally just a replica of `net8.0`, but it is actually crucial if I understand unification correctly: you need it because applications targeting `net8.0-windows` might otherwise fall back to the `net6.0-windows` build, and thus retain the legacy functionality. I hope that in later releases (e.g. .NET 9.0) we would be able to drop the Windows TFMs entirely, since any fallbacks would get the same assembly, but I don't know the details. Either way, this seems workable unless I am missing something?
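The layout described above could be sketched in the package project file roughly as follows. This is a hypothetical sketch only; the exact Windows SDK version suffixes and property layout are illustrative, not a committed design:

```xml
<!-- Hypothetical sketch of the proposed Rx v7 multi-targeting layout.
     The net8.0-windows target deliberately ships the same API surface
     as plain net8.0, so Windows apps on the new TFM no longer pick up
     the WPF/Windows Forms functionality by falling back to the
     net6.0-windows build. -->
<PropertyGroup>
  <TargetFrameworks>net6.0;net6.0-windows10.0.19041;net8.0;net8.0-windows10.0.19041</TargetFrameworks>
</PropertyGroup>

<PropertyGroup Condition="'$(TargetFramework)' == 'net6.0-windows10.0.19041'">
  <!-- Only the legacy Windows TFM keeps the UI-framework features. -->
  <UseWPF>true</UseWPF>
  <UseWindowsForms>true</UseWindowsForms>
</PropertyGroup>
```

The key design point is that NuGet's asset selection always prefers the most specific compatible TFM, so shipping an "empty" Windows target is how you stop the legacy Windows build from winning.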
---
Is there any TFM in which this change wouldn't cause all these same problems? I think the right TFM in which to fix the underlying problem would have been .NET 6.0, because this has already been causing pain for some time, and has caused at least one major project to abandon Rx.NET. The idea of deferring it for another version or two is not one I much like, so I think there would need to be a pretty strong upside.

So let's think the scenario through to see what the upside of deferring this fix might be. Suppose today we mark the UI-framework-specific functionality as deprecated, with the stated intention of removing it in some future TFM. The upside, I believe, is that at some point in the future we get to ship a version of `System.Reactive` that no longer drags in WPF and Windows Forms anywhere.

That's undoubtedly a good upside. But the question then is: how long is long enough? Elsewhere you've raised issues dating back to Rx 2.2.5, a version that shipped about 9 years ago. If that version is still relevant to this discussion today, what's the earliest you think we could actually release a version of `System.Reactive` without this functionality?
---
@idg10 To answer this question I feel I need to understand and qualify a bit better the nature of a "type 2" change. Specifically, I need to understand whether we agree on the "target audience" for a "type 2" change. I only see a way out if we are willing to accept first that the target audience for "type 2" changes is maintainers of existing applications. If this is the case, then I submit that the right TFM in which to introduce this fix would be any TFM that is not released right now, since by definition there can't be maintainers of existing applications on TFMs that do not exist yet. If, on the other hand, the target audience for "type 2" changes is all current and future developers of both existing and yet-to-exist applications and TFMs, then I agree with you that there seems to be no hope for a clean break.

.NET 8.0 is now released, so let us use .NET 6.0 and .NET 7.0 to stand in for the "current" and "future" TFMs in a mock experiment. Let us say Rx v7 provides a Windows-specific build only for the older TFM. First, I think we can agree that as long as the legacy maintainer does not change his TFM, nothing will break. What I was still curious about is whether we could really free developers of new apps entirely from the "curse of schedulers" by dropping the platform-specific TFMs. Can we really say for sure that if a new developer targets the newer TFM, he will never pick up the legacy functionality?

To answer this question I made a mock project implementing this idea with .NET 6 and .NET 7 (just because I have them handy). Here are the relevant bits.

Mock System.Reactive v7 targets:

```xml
<TargetFrameworks>net6.0;net6.0-windows10.0.19041;net7.0</TargetFrameworks>
```

As a control, I introduced an API surface exclusive to each target:

```csharp
namespace System.Reactive
{
#if NET6_0
    public class DeprecatedApi { }
#endif

#if NET7_0_OR_GREATER
    public class ShinyNewApi { }
#endif
}
```

I started out with a "legacy app" targeting `net6.0-windows10.0.19041` and a "new app" targeting `net7.0`. As I hoped, the legacy app sees `DeprecatedApi`, while the new app sees only `ShinyNewApi`.

First of all, the main thing to notice is that the most specific compatible TFM (in this case `net6.0-windows10.0.19041` for the legacy app, and `net7.0` for the new app) is the one that gets resolved.
I think this is honestly the best scenario I can conceive of for a clean break that preserves the `System.Reactive` package and namespace.
As a rule of thumb I would use Microsoft's own end-of-support dates as deprecation targets, so if we introduce the break in .NET 9, the back-compatible TFMs would be around until .NET 8's end of support, which is currently stated as November 10, 2026, i.e. 3 years from now.
If anything, I think our story is an argument for why these kinds of problems are not the end of the road that they are often made out to be. Our application is still going strong and we have a roadmap for moving to modern .NET.
Our story shows this to be untrue, since we were effectively split off from the entire Rx community without even the possibility of using type forwarders (because of the arbitrary strong-name key change). In fact, if anything, the strong-name key change showed that Microsoft does not consider the use of the `System.*` namespace to imply any binding compatibility promise.

In the end, what I believe can save us is the fact that the library is open source and evolving, and not any kind of implicit contract on the immutability of API surface areas. For more examples see also #2038 (reply in thread).
---
Sadly, this isn't necessarily the case. Someone could start writing an application later this year (let's say August 2024) targeting .NET 6.0 (and yes, I know, but people do this, sometimes for unavoidable reasons—Azure Functions' .NET 8.0 support has some limitations right now, for example) and then take a dependency on some other component that uses Rx's WPF features.

Let's say we released Rx 7 in February 2024. This doesn't affect the application authors at the time, because their application won't exist for another six months. It also doesn't affect them when they start writing it in August, because that component is still on Rx 5 at that point. In November 2024, .NET 9 ships. Also, .NET 6.0 goes out of support, so they try to upgrade to .NET 9.

Now let's look at how a change made in Rx 7 in February 2024, some six months before they started work on their app in August 2024, is going to cause them to become victims of a type 2 problem. It's going to stop them from upgrading to .NET 9, or even to .NET 8. Suppose we did the thing we really wish we could just go ahead and do: remove all the UI-framework-specific stuff from `System.Reactive`.

The app authors in this example (who, don't forget, started work later, in August 2024) find themselves in a type 2 situation when they try to upgrade to .NET 9.0, in exactly the sense of my earlier definition.
Because they upgraded to .NET 9.0, they're now getting the build of `System.Reactive` without the UI-framework-specific types. But the component in their dependency tree that uses Rx's WPF features was compiled against a version that still had them, so it breaks.

They can't use either .NET 9.0 or .NET 8.0. .NET 7.0 will already be out of support at this point. Staying on .NET 6.0 would avoid these problems, but that also just went out of support, so they can't do that either. There is nothing they can do to resolve this problem.

So that demonstrates that it would be possible for us to do something to Rx in February 2024 which causes an application that didn't even exist until August 2024 to have a "type 2" problem in November 2024. So although that may be a slightly convoluted example, it's an existence proof that your assumption is not guaranteed to be correct. We therefore can't in fact say that victims (a more accurate description than "target audience", I think) of a "type 2 change" can only be maintainers of existing applications. We could do things today that create a time bomb, where application development started in the future is initially OK but runs into trouble later on because of something we already did before they got started.
Notice that in the scenario I just described, the "type 2" problem becomes apparent at the point where the application developer attempts to upgrade to a TFM that didn't exist at the start of the scenario. You might be inclined to dismiss this because it's weird and contrived. All I can say is that I've encountered weirder situations than this while helping people resolve problems in real applications. The problems are always lurking somewhere deep in the dependency tree.

This is the main reason I am very reluctant to remove things from the API of existing components. I've been on the receiving end of decisions to do that in the past, and it's a world of pain.
OK, so let's tweak the versions and timing of my example to align with yours.
We now have a "type 2" problem. When they upgrade to .NET 10.0, the UI-framework-specific functionality their dependency tree relies on is gone. It's June 2026, so they do have the option to switch back to .NET 8.0, so it's not strictly a "type 2" problem yet. But it will have become one in November, when .NET 8.0 support ends.

So in this scenario, a decision taken today, followed up by the removal of a public API in December 2024, causes a type 2 problem in November 2026.
If the legacy maintainers do not change the TFM, they will inevitably enter a state where they are running on an unsupported runtime. I would describe this as broken. So no, I do not agree that nothing will break. Their application will become a security liability in November 2026 if they do not change their TFM before then.

The ultra-short support lifecycles of .NET today make this a much more difficult problem. If .NET LTS versions had support lifetimes similar to Windows, then I'd totally accept your argument. They could leave their TFM unchanged, .NET 8.0 would remain in support for a decade, and that would be good enough. But unfortunately the "Long" in LTS is not very long.
I think this is the core of where we disagree. It's always deliberate; it's not always a choice. If you're on a runtime that's about to go out of support, it takes a deliberate action to upgrade to a newer runtime, but you have no choice except to take that deliberate action.

For all that I disagree with the details, I think the resolution may just be to work on longer timescales. So: mark the types as deprecated in the next version. Then, after, say, 3 years, we start using NuGet package deprecation, marking the last `System.Reactive` package versions that still contain this functionality as deprecated.

So by about 2030 we might finally have recovered from a decision that looked like a good idea in 2018.
---
## Separation of Concerns

From my point of view this would be best approached with multiple core and platform-specific packages; there could even be multiple core packages to split the core up into specific functional areas such as schedulers.

## Branch, Divide and Conquer

Perhaps the best way to start from a new base is to create a new Legacy branch within this repository and make that the ongoing patch and support base for the current state of affairs.

## My Reactive History

I have been using reactive coding since 2013 and have used it in various areas; ASP.NET, WPF and a Windows Service were the first implementations, used in conjunction with SignalR, working with David Fowler to ensure that SignalR delivered the features I needed within my company's projects at the time.

## The Future

Going forward I would like to see the interfaces (the design surface) extracted from the functionality to allow for an extendable and replaceable functionality framework. The ability to create functions that are functionally different but still operate within the reactive mechanism is important for flexibility. I myself have come across issues within the Reactive codebase where trying to add a reasonably small change to the built-in functionality led to having to fork the entire codebase, as the 'onion coding style' used just led to another layer of dependency, until eventually I realised that there was no way of making the alteration as an extension of the core without entirely rebuilding it.

## ReactiveUI

Within ReactiveUI we have core functionality exposed under a series of interfaces, but the framework loading mechanism is built around enabling replaceable functionality. This allows easier support for various UI frameworks and extensibility/alteration of functionality to meet the requirements of the end user. We have a core package, then a series of UI framework support packages to either alter or extend the core functionality.

Avalonia is an example of this flexibility, where they have used ReactiveUI as a base and then made extensions fitting their UI framework.

## Desired Core Packages

Following a similar package pattern to Bonsai, the core would be split into a set of focused packages.

## Specific Microsoft UI Implementations, If Relevant

This may be better handled by a UI framework such as ReactiveUI, but then no specific UI implementation should take place in the core packages.

NOTE: these are my suggestions based upon my experiences of using Rx over the years. Over the past few years there was little advancement in the .NET Rx world, and it's led people to think it's not the right way for them.
-
I may have misunderstood, but are you proposing what amounts to a fork? To use the "new Rx" you would take a dependency on

This means it's possible for a single application to end up with a reference to both "old Rx" (what we have today) and "new Rx". We were initially contemplating such a "clean start" approach, until @akarnokd pointed out the horrific flaw in this approach in his comment at #2038 (comment)

The critical part is this: if you end up with dependencies on both old Rx and new Rx, "whose

`myObservable.Where(x => x > 2)`

fails to compile because the C# compiler can see two extension methods for

This has led me to the view that whatever solution we choose, if you end up depending on both Rx 6 and Rx 7, the

Or were you envisaging that
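To make the "two extension methods" failure mode concrete, here is a minimal sketch. The namespaces and package names are purely illustrative stand-ins for "old Rx" and "new Rx", not real Rx.NET code:

```csharp
using System;

// Stand-ins for the Where extension method that "old Rx" and "new Rx"
// would each define for IObservable<T>. Names are hypothetical.
namespace OldRx
{
    public static class ObservableExtensions
    {
        public static IObservable<T> Where<T>(
            this IObservable<T> source, Func<T, bool> predicate) => source;
    }
}

namespace NewRx
{
    public static class ObservableExtensions
    {
        public static IObservable<T> Where<T>(
            this IObservable<T> source, Func<T, bool> predicate) => source;
    }
}

namespace App
{
    using OldRx;  // e.g. pulled in transitively by one library
    using NewRx;  // e.g. pulled in transitively by another

    public static class Demo
    {
        public static void Run(IObservable<int> myObservable)
        {
            // error CS0121: The call is ambiguous between
            // 'OldRx.ObservableExtensions.Where<T>(...)' and
            // 'NewRx.ObservableExtensions.Where<T>(...)'
            var filtered = myObservable.Where(x => x > 2);
        }
    }
}
```

Fully qualifying the call (e.g. `OldRx.ObservableExtensions.Where(myObservable, x => x > 2)`) would compile, but that abandons the fluent style Rx code is built on, which is why two packages both defining `IObservable<T>` extensions in parallel is not workable.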
-
Stop me if this has already been considered elsewhere...

A more traditional approach to breaking changes would be to mark certain APIs as

Considering the running example,

This is of course still a breaking change, but perhaps a more acceptable one. This also requires tolerating the core issue of packaging bloat for a while longer, instead of actually solving it, but the trade-off is to alleviate the pain of the breaking change for most consumers.

Does this make sense? Are there any other categories of changes in play here that wouldn't be compatible with this example?
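As a sketch of the kind of deprecation being suggested, something like the following could be applied to the UI-framework-specific surface area. The message, diagnostic ID, and member shown here are illustrative assumptions, not actual Rx.NET plans:

```csharp
using System;

// Hypothetical illustration only: marking a WPF-specific API as obsolete.
// The DiagnosticId property (available since .NET 5) gives the warning a
// stable ID, so consumers can suppress or escalate just this warning via
// <NoWarn> or <WarningsAsErrors> in their project file.
public static class DispatcherSchedulerExample
{
    [Obsolete(
        "WPF support is moving to a separate package in a future Rx release. " +
        "Reference that package instead of relying on the main package.",
        DiagnosticId = "RX0001")] // "RX0001" is an assumption, not a real Rx diagnostic
    public static void ObserveOnDispatcher()
    {
        // Existing behaviour unchanged; only the build-time warning is new.
    }
}
```

The appeal of this pattern is that existing code keeps compiling and running, while every consumer gets an actionable, per-API nudge toward the replacement package before the eventual removal.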
-
This does indeed make sense. The only reason we hadn't initially suggested this was that it means

However, now that we seem to have found a workaround, there's a lot less pressure to make changes quickly. As a result, this approach now looks like it's probably the most promising. It enables us to retain the current package structure. It provides people with a warning of the problems. And it provides a clear path for us to get to the ideal destination state in which UI-specific features are off in separate components.

For me there are still some questions around exactly how we do it. One option is to have

However, I'm not sure if that's worth the complexity. It really depends on how well the workaround works out for people in practice.
-
All of these cures seem infinitely worse than the original problem of "Wow, there's too many DLLs". Please don't make a mess of the Rx namespaces again. We have already been through this; Rx is already confusing enough. We *know* from the first time around that having "too many DLLs" caused immense user confusion and (for some reason) accusations of "bloat" because "there were too many references" - now we are proposing making it even more confusing, in order to solve a problem that is largely Annoying in nature.

Are we really worried about disk space here? In 2023? And that's worth *nuking* our entire ecosystem over?

I really, really, *really* want people to reconsider the pain this will cause users of Rx, especially those who are using it via intermediate libraries. I know pulling in WPF sucks when you don't need it, but any kind of proposal like this will be *so* much worse.
-
@anaisbetts I am also not a fan of DLL proliferation, in fact the opposite. However, I don't think the problem being discussed here is just a concern with disk space. For example, I have outlined in #1847 a collection of issues that are currently caused by the current approach of having a single "mono-package" for

The problem is that the mono-package approach can change the API surface of Rx depending on which platform you target, e.g. by assuming the consumer wants to use WinForms and UWP just because the runtime architecture of the project happens to be

Can you elaborate a little on why you think splitting just these platform-specific schedulers would be so much worse than the current situation? I actually would like to hear what exactly is the pain from the other side, since so far we have heard here mostly agreement that the current state of the package is untenable.
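To illustrate the TFM sensitivity being described (the package version and TFMs below are just examples): with the mono-package, the very same `PackageReference` yields a different Rx API surface depending only on the project's target framework:

```xml
<!-- Same package reference in either project... -->
<ItemGroup>
  <PackageReference Include="System.Reactive" Version="6.0.0" />
</ItemGroup>

<!-- ...but the TFM decides which build of Rx you get: -->
<PropertyGroup>
  <!-- net8.0: the portable build; no WPF/WinForms schedulers. -->
  <!-- net8.0-windows10.0.19041: the Windows-specific build; types such as
       DispatcherScheduler (WPF) and ControlScheduler (WinForms) appear,
       and a self-contained publish drags in the desktop framework
       assemblies, even if the app never uses them. -->
  <TargetFramework>net8.0-windows10.0.19041</TargetFramework>
</PropertyGroup>
```

Nothing in the project opted into WPF or WinForms here; the windows-specific TFM alone is what flips the switch, which is exactly why non-WPF/WinForms UI frameworks like Avalonia get caught out.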
-
Pulling in massive WPF dependencies when you don't need them isn't just "annoying" and doesn't just "suck"; it's a deal-breaker for most large-scale consumers. Avalonia determined it was less painful to remove

At the end of the day, final-product complexity *should* carry far more weight in decision-making than design-time complexity. Having components split across multiple DLLs is not a significant barrier to entry in .NET land, not since .NET Core. The debate here is which approach offers the minimum change in design-time complexity.
-
A lot of IoT devices don't have much disk space or memory. They are designed as low-cost, single-purpose devices, just responsible for sending a small amount of data at high data rates, and they need a tidy and efficient installation to run efficiently. Web servers, desktop PCs, and mobile devices generally don't have such an issue in this modern world.

I don't believe we are talking about changing namespaces or function names, but taking the current 'Reactive onion', as I have described it (layer upon layer of interlocked functionality), that Rx has become, and flattening it out a little into specific concerns, allowing for future flexibility and adaptability to cater for the needs of the various use cases Rx has.
-
I'd also say that I believe a core "growth & purpose" use case for Rx is IoT, and the new version of the Introduction to Rx book has more of a focus on IoT / event processing than the traditional WinForms / WPF / UI examples, as these are mainly catered for by the ReactiveUI stack. Rx running in WASM (https://youtu.be/KzQ_Whn6oBA?t=27298) shows what a possible future might look like.
-
I feel that there's also a long shadow of the bad old days of figuring out which of a zillion combinations of DLLs are needed for your scenario, which is not really what any version of this proposal is talking about.

The "decomposition" proposal is essentially a single base assembly that everyone could depend on (e.g. IoT/server-side apps as well as UI apps), with an opt-in choice of UI tiers to include if you happen to be building a UI application on one framework or another (with third parties also providing support for their UI frameworks).

And the "backwards compatibility" proposal is trying to achieve a situation where there is a legacy package that aggregates everything together, so existing consumers would have the "full fat" experience they have today, up and down the dependency tree.

That legacy is inevitably going to cause anxiety, and that's why this thread is so important: there are lots of competing perspectives that have to be balanced out. In that regard, I'd make special note of the legacy support for UWP consumers; we know there are a lot out there in the real world, even as UWP is being deprecated by MS. The approach to phasing out System.Reactive support is another important consideration.
-
Just as a matter of clarity, and since I didn't see this being discussed here: is there a possibility of improving the dependency resolution that ends up bringing in the WPF/WinForms DLLs, in such a way that it doesn't happen to non-WPF/WinForms applications?
-
I believe that you described pretty much the problem that is at the root of what is being discussed here. Unfortunately the answer to this question seems to be "no", at least without introducing breaking changes to the package.
-
This prompted me to review in depth the full chain of causality that leads to these DLLs being present. (I've gone and found the relevant source code in NuGet and the .NET SDK.) And as a result of this, I've made a discovery. It appears that in .NET SDK 8.0.100 at least (and possibly older SDKs; I've not yet tested this on older ones) a workaround is available that might enable us to move forward in a different way.

It turns out that if you have an application that is using Rx.NET 6.0.0, and is afflicted by the "complete copy of WPF and Windows Forms gets included" problem, you can work around it by adding this to your application's project file:

```xml
<PropertyGroup>
  <DisableTransitiveFrameworkReferences>true</DisableTransitiveFrameworkReferences>
</PropertyGroup>
```

This prevents the unwanted framework assemblies from being copied into your output. You still end up building against the

The only fly in the ointment here is that I can't see a way to include just the Windows Forms but not the WPF components. I tried this:

```xml
<PropertyGroup>
  <DisableTransitiveFrameworkReferences>true</DisableTransitiveFrameworkReferences>
  <UseWindowsForms>true</UseWindowsForms>
  <UseWPF>false</UseWPF>
</PropertyGroup>
```

but for some reason this ends up giving you all the WPF components anyway. So if you're building self-contained Windows Forms apps, this is suboptimal. But it does at least appear to offer a fix for the problem when using Avalonia.

If this proves to be an acceptable workaround, this then gives us a (multi-year) path for finally getting to the place where we want to be:
This would meet the preference that I (and @glopesdev) have for keeping

The absolutely critical enabling element here is the availability of a viable workaround. For as long as the offending types remain in

My least favourite feature of this is that *all* apps with a

For this reason, I think that even with this workaround we should consider a
-
The inclusion of WPF with WinForms may be due to some containers for WPF included in WinForms, but I don't believe that you get all the dependencies of WPF, just the ones required for the container.
-
`DisableTransitiveFrameworkReferences` was added to dotnet/sdk here: dotnet/sdk#3221
-
I want to add one other option we've thought about, and which might "work", but which we're reluctant to use because we believe it'd be completely unsupported.

It would be technically possible for us to build a NuGet package for

The benefit of this is that because, from a NuGet packaging perspective, there's no apparent framework reference to the desktop frameworks, you don't get all the WPF/WinForms DLLs incorporated in your self-contained deployment just because you happened to add a reference to Rx.NET; but if you are in fact using those frameworks, it all works like you'd want. (And you get reasonably informative errors. If you try to use the relevant Rx.NET types in an app where you've not explicitly opted into using the relevant frameworks, you get a compiler error telling you that you're trying to use a, say, Windows Forms type without having a suitable assembly reference. And if you try to rely on the feature at runtime, you get a

The massive downside is that this is a bit of a nasty hack and, as far as I know, completely unsupported. Our experience with the problems of trying to use UWP in an unsupported way (i.e., using it in conjunction with the modern .NET project system) is that hacking the build to make it do things that aren't supported causes grief. For that reason, I'd be quite reluctant to go down this path.
-
This is something that I had also thought about, with the potential use of source generators to provide a variation of output. I haven't tested this ability, but I can see the potential of it working.
-
I get the reluctance to break backwards compatibility, but sometimes you need to nuke an egg to make a pizza. WPF and other UI frameworks are "on top of" the .NET runtime and BCL; the same goes for ASP.NET and Entity Framework. I would like to see a split where there is a base that has zero dependencies outside the runtime; then for anything that hooks into anything else, you make a new NuGet package.

So it's something like this for "tie-in" NuGets:

*.AspNetCore.Extensions
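Under this kind of split (all package names below are purely illustrative, not proposed names), an application would opt in only to the integrations it actually uses:

```xml
<!-- Hypothetical package layout, for illustration only. -->
<ItemGroup>
  <!-- Everyone references the zero-dependency base: -->
  <PackageReference Include="Rx.Base" Version="7.0.0" />

  <!-- Only a WPF app would additionally add: -->
  <!-- <PackageReference Include="Rx.Wpf" Version="7.0.0" /> -->

  <!-- Only an ASP.NET Core app would additionally add: -->
  <!-- <PackageReference Include="Rx.AspNetCore.Extensions" Version="7.0.0" /> -->
</ItemGroup>
```

The design intent is that the base package never drags in any framework reference, so a self-contained publish of a non-UI app stays small; only the tie-in packages carry `FrameworkReference` items for WPF, WinForms, etc.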
-
A disruptive change is worth it if it solves the current problem and meets future plans; it also helps to ship new features and bug fixes faster.
-
The difficulty is that it can put existing projects into impossible situations. It's not merely that we need to provide documentation. That documentation would have to say "Sorry, you're out of luck." The reasons that a "clean break" can actually make things worse are described here:
-
From my point of view, as someone who works on building libraries at my current company, I really would like the team to consider that a major update is a major update and that breaking changes are acceptable. In my opinion it really isn't a great flow when we don't get things done properly just because of "we want to make it backwards compatible". However, I understand the reasons behind it. Just stating my opinion that I would be fine if we simply have to do steps "A, B, C" when updating.

Angular does it in such a way that they have a guide when updating to a major release, where you have a list of checkmarks. I think the approach is pretty good; I never had an issue so far approaching things this way.
-
Angular isn't a great model because that's a framework, and as such tends to be a top-down sort of a choice. In Rx you can get a situation where you use multiple libraries all depending on slightly different versions of Rx without you even knowing they were using it. You don't become an Angular application as a side effect of one of the libraries you're using taking a sneaky dependency on Angular! But that's exactly how you can end up as an Rx user.

And really, it's the dependency scenarios that make this tricky. If we only had to care about applications making explicit decisions to use Rx, this would be a whole lot simpler.

When a new version of Angular comes out, an application developer will decide if/when to take that change, and will typically accept that this is going to involve some planning and a coordinated effort to update the app. That's because it's an application framework. Rx really isn't. You don't write an Rx app.
-
As I've worked to write up the proposed approach and build a prototype implementation of it, I have made a slightly surprising discovery: #205 was the original fix for the plug-in conflict problem, but there was a regression in Rx 5.0! I've been building various test harnesses to verify that whatever packaging changes we ultimately make do not cause regressions for any of the various problems that earlier packaging changes were trying to fix, which is how I discovered that Rx 5.0 did in fact cause exactly that kind of regression. Here's the sequence that will illustrate the problem:

This came as something of a surprise because the plug-in conflict issue was apparently a big deal at the time. I was baffled when I realised this regression occurred over 3 years ago but apparently nobody noticed. I suspect this is because some things are different this time. There's a straightforward workaround if a maintainer of an old plug-in hits this issue when upgrading to Rx 5.0: don't upgrade! They can stay on Rx 4.4.1, which does not have this problem. Most of the new work in Rx 5.0 was in support of newer frameworks, so legacy .NET Framework plug-ins almost certainly don't need to upgrade. Also, it only occurs when a plug-in uses a version of .NET that is older than 4.7.2. That shipped about 6 years ago, so in practice nobody's going to be building new plug-ins that have this issue. (So if you *really* need a later version of Rx you just need to make sure your plug-in targets

The reason Rx 5.0 caused this regression is partly because in the Great Unification of Rx 4.0, the fix applied for Rx 3.1 (weird assembly version numbers) was abandoned. However, more by luck than judgement, Rx 4.0 happened not to be susceptible to this bug. It only recurred when Rx 5.0 changed the TFMs. Specifically:

Here's the critical difference: with Rx 4.4.1, if you were targeting ANY version of .NET Framework, there was no way you could possibly load the

But in Rx 5.0, that's not true. If you write a plug-in that targets

I think the only way to avoid this would be for

I think the answer to that might be to continue to offer
-
We now have two significant PRs. They are currently in draft because no final decisions have been made, but we want to put them out for public review now:

We expect everyone to hate the names of the new packages, so if you have any good ideas, please let us know. But bear in mind that some of the good names are already taken by the existing façade packages, and we have good reasons not to want to make those packages serve two unrelated purposes.

A set of packages built in the proposed way is available from the feed in the Azure DevOps project for Rx: https://dev.azure.com/dotnet/Rx.NET/_artifacts/feed/RxNet (which should be accessible at https://pkgs.dev.azure.com/dotnet/Rx.NET/_packaging/RxNet/nuget/v3/index.json for NuGet).

The packages all have version

We aren't planning to push even preview builds of these out to NuGet until we've got consensus on the approach.
-
Hmm. I can't repro those errors - if I use

That said, I think this issue of there being two versions of

So I think we might end up having to turn

I apologise for not replying to this earlier. Somehow I never saw this thread until you referred me to it in that other discussion.
-
@idg10 I updated my test project and still get the errors. Do you not see them on building
-
OK, I've finally found the time to look at this. I think there are a few things going on. This particular message is just my attempt to collect my understanding of what your example shows. It does not offer any solutions. This just feeds into the set of things we need to consider in the design of any solution (but it might well rule out the workaround proposal. Something your example reveals suggests that we might have no option but to do the full package split after all.)

This code in

```csharp
Observable.Interval(TimeSpan.FromSeconds(1)).ObserveOn(DispatcherQueueScheduler.Current).Subscribe(_ => {});
```

In particular, it's the middle line, the use of the extension method,

```csharp
IObservable<long> interval = Observable.Interval(TimeSpan.FromSeconds(1));
IScheduler scheduler = DispatcherQueueScheduler.Current;
IObservable<long> intervalOnDispatcher = Observable.ObserveOn(interval, scheduler);
intervalOnDispatcher.Subscribe(_ => {});
```

(Apologies to those who prefer

This actually compiles without error (although we still get warnings from NuGet).

And the reason for that is that we've told the compiler exactly which

```csharp
IObservable<long> intervalOnDispatcher = interval.ObserveOn(scheduler);
```

That's when we get these errors:

```
1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS0012: The type 'Control' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Windows.Forms, Version=6.0.2.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS7069: Reference to type 'Dispatcher' claims it is defined in 'WindowsBase', but it could not be found
1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS7069: Reference to type 'DispatcherObject' claims it is defined in 'WindowsBase', but it could not be found
```

The proximate cause here is that by using an extension method, we've forced the C# compiler to consider *every*

Since we forced it to look at all the

The problem described by the first error is simply that

The problem described by the next two errors is a bit more subtle. The

On my system with SDK 9.0.300 installed, I find that for your example project (which targets .NET 8.0) this reference assembly is at

This is a fairly small subset of all the things that older versions of

However, later versions of

So we've got this weird situation where v4.0.0.0 of

A problem with depending on a later, fully-featured version of

But this isn't really a tenable situation, because application code can find itself causing a compile-time dependency on WPFisms in an app that doesn't target WPF. The nature of extension methods means that source code that never meant to depend on WPF (and which would produce no runtime dependency on WPF, because it actually compiles down to one of the overloads that doesn't use WPF) can cause a *compile-time* dependency on WPF that will cause the build to fail, because the compiler is unable to understand the Rx.NET code in question.

There's one particularly important conclusion: If the

Not for the first time with Rx.NET's packaging problems, extension methods rule out certain strategies. (They are the reason the oft-suggested "clean break" approach doesn't work.)

At one point, Rx.NET was actually attempting to work around this problem, but as far as I can tell, this didn't actually work. Up to and including Rx 5.0.0, we had a

In practice, that didn't actually appear to work, because AvaloniaUI/Avalonia#9549 shows that they pick up the

(You suggest that maybe adding similar

Since a) this didn't work as it was meant to and b) by May 2024 there were no versions of .NET still in support that did *not* understand OS-specific TFMs, I removed that
-
@idg10 Is this option of doing a full package split already part of the existing PR, or documented briefly elsewhere? There has been so much written about this topic over the last couple of years that I for one have started to lose track of the current state of things. More than the ADR, I think having the *shortest* possible distillation of what the current proposal is shaping up to be would be invaluable.

I think most agree with everything you say, and also feel there is no option to go back to having a full split between Rx schedulers and the rest of the framework. I believe the only bit left is naming, which remains important to avoid having another round of regrets later on.
-
@idg10 Thanks for taking the time to look into these errors. I do not have much more to add other than my full support for a break to dump as much legacy cruft as possible. When compared to R3, Rx.NET appears to be stuck in a quagmire. My main issue there is the lack of DynamicData support (issue).
-
Anyone following this discussion might want to know that we are now planning to move forward. Please see #2177 for details.
-
@idg10 In the spirit of summarising these very long-winded discussions, it would be useful to include a very high-level summary of the executive decisions taken: specifically, the exact names of the packages should be made clearer for everyone to comment on, since we will all be living with the aftermath of this for the foreseeable future. For the record, I still don't like the names, and in general I find inertia to be a poor judge of quality. I will take some time this week to think of alternative ways forward.
-
See #2211 for the latest progress.
-
Ian has recorded a video update about the latest progress: https://www.youtube.com/watch?v=GSDspWHo0bo