[deps]: Update fusioncache monorepo to 2.6.0 #236
Merged
dereknance merged 1 commit into main (Mar 24, 2026)
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## main #236 +/- ##
=======================================
Coverage 67.65% 67.65%
=======================================
Files 46 46
Lines 1141 1141
Branches 100 100
=======================================
Hits 772 772
Misses 325 325
Partials 44 44
Contributor
Great job! No new security vulnerabilities introduced in this pull request.
dereknance approved these changes (Mar 24, 2026)




This PR contains the following updates:
2.4.0 → 2.6.0
2.4.0 → 2.6.0
2.5.0 → 2.6.0
2.4.0 → 2.6.0

Release Notes
ZiggyCreatures/FusionCache (ZiggyCreatures.FusionCache)
v2.6.0

🏷️ Configurable cleanup behavior for RemoveByTag()

Normally, when calling RemoveByTag("my-tag"), the entries with such a tag will be gradually expired on a subsequent access. Community member @charlesvigneault asked for the ability to instead properly remove them.
So I added a new option to allow configuring this behavior:
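As a minimal usage sketch (hedged: the new option itself isn't named in this excerpt, so it's not shown; the tag APIs below are FusionCache's v2 Tagging surface, assumed from the surrounding text):

```csharp
// Minimal sketch of tag-based removal in FusionCache v2 (Tagging).
// By default, entries tagged "my-tag" are gradually expired on a
// subsequent access; the new option described above lets you opt in
// to properly removing them instead (option name not shown here).
using ZiggyCreatures.Caching.Fusion;

var cache = new FusionCache(new FusionCacheOptions());

// Set an entry carrying a tag...
await cache.SetAsync("user:42", "Alice", tags: new[] { "my-tag" });

// ...then remove everything with that tag.
await cache.RemoveByTagAsync("my-tag");
```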
See here for the original issue.
RemoveByTag("*") in HybridCache adapter

After the initial release of HybridCache in 2025, the team added support for a special case: using RemoveByTag("*") to clear the entire cache. I didn't notice until recently, and thanks to community user @vrbyjimmy I fixed that. Or, to better say it, he did! He acted so quickly that a PR immediately landed with the implementation, so thanks Jakub for that!
What happens underneath is that a RemoveByTag("*") call on the adapter is detected and re-routed to a Clear() call on the underlying FusionCache instance: very simple and elegant, and I like that a lot.

See here for the original issue.
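From the consumer's side, the behavior described above might be exercised like this (a sketch assuming the standard Microsoft HybridCache abstraction; the re-routing itself happens inside the FusionCache adapter):

```csharp
// Sketch: clearing the whole cache through the HybridCache adapter.
// A RemoveByTagAsync("*") call is detected by the FusionCache adapter
// and re-routed to a single Clear() on the underlying FusionCache.
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Hybrid;

public class CacheResetService
{
    private readonly HybridCache _cache;

    public CacheResetService(HybridCache cache) => _cache = cache;

    // The special "*" tag becomes one Clear() call instead of
    // per-tag lazy expiration.
    public ValueTask ClearAllAsync() => _cache.RemoveByTagAsync("*");
}
```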
🔒 Better Distributed Locker + Eager Refresh
Community user @jgshowpad noticed that when using the new distributed stampede protection introduced in v2.5.0 with Eager Refresh some errors were being logged.
That was caused by the Redis-based distributed locker not correctly handling a timeout of zero (which, btw, is a pretty common way to check whether a lock has already been acquired by someone else, without having to wait).
This has now been fixed.
See here for the original issue.
⚡ Perf boost for GenerateOperationId()

Community user @Inok contributed a nice set of low-level perf optimizations for the GenerateOperationId() internal method, which may be called quite a lot when doing observability (logging, OTEL, etc.). That's a very nice and welcome contribution, thanks Pavel!
See here for the original issue.
🔒 Add missing ext method for DI registration of
Community member @mumin-khan noticed that, after releasing distributed stampede support in v2.5.0, I missed the related ext method for the DI registration.
So I added it, and it's now possible to register the component via DI during setup. Thanks, Mumin!
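A hypothetical sketch of what the DI registration might look like; the WithDistributedLocker method name is an assumption modeled on FusionCache's existing With* builder convention (WithSerializer, WithDistributedCache, WithBackplane), so check the docs for the real signature:

```csharp
// Hypothetical sketch -- the extension method name is an assumption,
// modeled on FusionCache's With* builder convention.
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

// Placeholder: however the IFusionCacheDistributedLocker implementation
// is constructed in the real package (not shown in this excerpt).
IFusionCacheDistributedLocker locker = /* ... */ null!;

services.AddFusionCache()
    .WithDistributedLocker(locker); // assumed method name
```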
See here for the original issue.
🐞 Fixed a couple of missing ConfigureAwait(false)

Community user @JerinMathewJose01 noticed that on the old .NET Framework 4.7.2 and 4.8, sometimes the factory may remain stuck without completing correctly. That was caused by a couple of missing ConfigureAwait(false) calls when awaiting the factory execution. This has now been fixed.
See here for the original issue.
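As background, an illustrative sketch (not FusionCache's actual code) of why a missing ConfigureAwait(false) can wedge on .NET Framework: awaiting without it captures the synchronization context, and a sync-over-async caller blocking that context can deadlock.

```csharp
// Illustrative sketch of the failure mode, not FusionCache's code.
using System;
using System.Threading.Tasks;

public static class FactoryRunner
{
    public static async Task<string> RunAsync(Func<Task<string>> factory)
    {
        // ConfigureAwait(false): don't resume on the captured context.
        // Without it, a caller blocking that context (e.g. calling
        // .Result on a UI or classic ASP.NET thread) never lets the
        // continuation run, and the factory appears "stuck".
        return await factory().ConfigureAwait(false);
    }
}
```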
📕 Docs
As always, I took some time to update the docs with the latest stuff and make them overall better.
v2.5.0

🛡️ Distributed Cache Stampede Protection
Since the very beginning FusionCache offered a solid Cache Stampede protection, as explained in the docs where it is clearly illustrated:
Such protection worked not just in the normal flow (miss -> factory -> return) but also with other more advanced features like:
With time the stampede protection got even better, and even extensible: this allowed 3rd party implementations of the core mechanism, called memory locker (IFusionCacheMemoryLocker). All of this without removing the normal "it just works" experience since, by default, a StandardMemoryLocker is used without needing any user setup or intervention. Cool.
But here's the thing: this protection had always been a local thing, meaning it did not span multiple nodes, in a distributed way: this meant that, if we were "unlucky", multiple factories could have run at the same time for the same cache key on different nodes.
Meaning, this:
But that was true until now: enter Distributed Cache Stampede Protection 🎉
Thanks to the introduction of the new IFusionCacheDistributedLocker (see the next point), it's now possible to coordinate factory execution across multiple nodes, so that only one factory runs at a time for the same cache key, even across different nodes. Meaning, this:
By providing an IFusionCacheDistributedLocker implementation during setup, FusionCache will take care of everything; we don't have to do anything else. Even better, if we want, we can re-use the same connection multiplexer for better performance and use of resources.
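A hedged sketch of what the setup might look like when sharing one ConnectionMultiplexer; the WithDistributedLocker method name and the locker's construction are assumptions (the release notes only confirm the IFusionCacheDistributedLocker abstraction), so treat this as an illustration rather than the real API surface:

```csharp
// Hedged illustration, not the verified API surface.
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

// One shared multiplexer for the L2, the backplane and the distributed
// locker saves connections and resources.
var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");

// Placeholder for the Redis-based IFusionCacheDistributedLocker;
// its real type/constructor is not shown in this excerpt.
IFusionCacheDistributedLocker locker = /* e.g. built on `redis` */ null!;

services.AddFusionCache()
    .WithDistributedLocker(locker); // assumed builder method name
```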
As always, the idea is that "it just works".
See here for the original issue.
🔒 Extensible Distributed Locking
As mentioned above, this is the new distributed component responsible for coordinating multiple factory executions on different nodes, all automatically.
As of now I'm providing 2 main implementations:
Of course the Redis one is the only real deal for now, meant for production use.
Other implementations will be possible in the future, simply by implementing the new IFusionCacheDistributedLocker abstraction, just like it was already possible with the IFusionCacheMemoryLocker abstraction. So, to recap:
I would say it's all pretty nice 🙂
See here for the original issue.
⚙️ New MemoryCacheDuration entry option

This is seemingly small, but really important.
In a multi-node scenario with an L1+L2 setup it's important to keep the cache, as a whole, coherent.
When using a Backplane there's no need to do anything: all is taken care of, and the cache as a whole is always coherent.
But what if we cannot or don't want to use a backplane, for... reasons?
Well, every change in the cache will leave the other L1s out-of-sync for the remaining time before their expiration, and this is not good.
This problem is known as Cache Coherence, and the backplane is what is used to SOLVE it.
But if we can't use a backplane, we should at least MITIGATE it: and we can do that by reducing the incoherency window.
And how?
Well, by simply specifying 2 different durations: one for the L1 and one for the L2.
Now, with FusionCache it has always been possible to specify a different Duration for the distributed cache, thanks to the DistributedCacheDuration option. The problem was that, in the scenario above (L1+L2 and no backplane), it would have been nice to be able to simply say "keep all the durations as already specified, and just refresh the data in the L1 from the L2 every few seconds".
But with only the DistributedCacheDuration option available, the way to achieve this was counterintuitive: instead of somehow overriding the L1 duration, we needed to lower the normal Duration to a few seconds and specify the intended logical duration as the DistributedCacheDuration. Not terrible, but not great.
But now, not anymore: enter MemoryCacheDuration.

We can of course go granular on a call-by-call basis, but there's something better: we can simply specify a value in the DefaultEntryOptions, and all the existing call sites will inherit this new value, which will automatically override the duration only for the L1. Done.

And, if we use Tagging, we can simply do the same thing for the TagsDefaultEntryOptions, and we're done. Something like this:
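A sketch of the described setup (the option names MemoryCacheDuration, DefaultEntryOptions and TagsDefaultEntryOptions come from the release notes; the surrounding builder code is assumed to follow the usual AddFusionCache/WithOptions pattern):

```csharp
// Sketch of an L1+L2 setup without a backplane, where the L1 duration
// is overridden to keep the incoherency window small.
using System;
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

services.AddFusionCache()
    .WithOptions(options =>
    {
        // Logical duration: how long an entry is valid overall.
        options.DefaultEntryOptions.Duration = TimeSpan.FromMinutes(30);

        // L1-only override: refresh from L2 every few seconds so that,
        // without a backplane, other nodes stay nearly in sync.
        options.DefaultEntryOptions.MemoryCacheDuration = TimeSpan.FromSeconds(10);

        // Same idea for tag-related entries when using Tagging.
        options.TagsDefaultEntryOptions.MemoryCacheDuration = TimeSpan.FromSeconds(10);
    });
```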
Oh, and the new Best Practices Advisor (see next point) can already give this advice when it detects such a scenario.
Nice 🙂
See here for the original issue.
🏅 Best Practices Advisor
Sometimes we may inadvertently fall into a scenario with:
With time FusionCache got more and more new components (like #575 ) and options (like #571 ) and this, along with the naturally dynamic nature of a flexible setup and configuration, may lead to inadvertently make the wrong decisions and fall into some gotchas.
FusionCache already had a couple of internal checks, like looking for a missing CacheKeyPrefix when using a shared L1 (which may lead to cache key collisions), and warns about them in the logs. Now this practice has been unified & expanded, and it has a name: Best Practices Advisor.
Long story short, FusionCache now checks for common pitfalls and can give warnings and suggestions, all automatically and based on the current runtime state: no need to scrape the docs to see if the current config may lead to surprises thanks to a bad incantation of options.
I'd like to highlight that I've been careful not to make it too smart for its own good: that's an easy-to-miss cliff that would lead to exaggerating the implemented heuristics and checks, with bad results.
The checks initially implemented are:
More checks will be added in the future, but for now these are already quite useful.
Oh, one final thing: if you are thinking "great, a new piece of AI crap that will waste resources" then... nah, it's just a bunch of ifs done automatically in the background during startup. And if you want, you can disable the Advisor by simply setting the new EnableBestPracticesAdvisor option to false (default is true).

See here for the original issue.
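A small sketch of opting out, assuming the usual options-based setup (the EnableBestPracticesAdvisor option name comes from the release notes; the builder pattern around it is assumed):

```csharp
// Sketch: disabling the Best Practices Advisor at setup time.
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

services.AddFusionCache()
    .WithOptions(options =>
    {
        // Turn off the Advisor's startup checks (default is true).
        options.EnableBestPracticesAdvisor = false;
    });
```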
⚙️ New IgnoreTimeoutsWhenDebugging option

Community user @tvardero asked if it was possible to automatically ignore all timeouts when debugging.
That was in fact an interesting feature request, and after some investigations I decided to proceed.
Now, when setting the new IgnoreTimeoutsWhenDebugging option to true, all timeouts will be ignored, but ONLY when there is a debugger attached (via Debugger.IsAttached). All in all this will help when debugging issues locally, without nasty timeouts hitting simply because we are inspecting a variable after a breakpoint hit, which is... the whole point of debugging, right?
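A sketch of enabling it (the option name comes from the release notes; the assumption here is that it lives on FusionCacheOptions and is set via the usual WithOptions pattern):

```csharp
// Sketch: ignoring timeouts only while a debugger is attached.
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

services.AddFusionCache()
    .WithOptions(options =>
    {
        // Takes effect only when Debugger.IsAttached is true, so
        // production behavior is unchanged.
        options.IgnoreTimeoutsWhenDebugging = true;
    });
```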
Thanks @tvardero for the input!
See here for the original issue.
See here for the feature design issue.
🕑 Small timestamps change
Thanks to community user @vit-svoboda I changed the logic that gets the timestamp for a new entry, generated from a factory.
Before, the timestamp reflected the moment the factory ended; now it reflects when it started.
No big change really, but it should help in a couple of edge cases with high concurrency.
See here for the original issue.
⚡ Minor performance tweaks
Nothing big really, as perf was already great: just some extra tuning in a couple of edge cases.
📕 Docs (not yet!)
I did not have time to update the docs related to all this new stuff, but I'll do it in the next few days, pinky promise.
For now, this massive release note should be good enough.
✅ Tests
As always, with new features come new tests to make sure that everything works as intended, now and in the future (regressions, am I right?).
Now we're up to 1534 total running tests, including params combinations & friends.
I can always do more, but still: not bad.
Configuration
📅 Schedule: Branch creation - "every 2nd week starting on the 2 week of the year before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
This PR was generated by Mend Renovate. View the repository job log.