
[deps]: Update fusioncache monorepo to 2.6.0#236

Merged
dereknance merged 1 commit into main from renovate/fusioncache-monorepo on Mar 24, 2026

Conversation


@renovate renovate bot commented Jan 19, 2026

This PR contains the following updates:

Package                                                     Change
ZiggyCreatures.FusionCache                                  2.4.0 -> 2.6.0
ZiggyCreatures.FusionCache.Backplane.StackExchangeRedis     2.4.0 -> 2.6.0
ZiggyCreatures.FusionCache.OpenTelemetry                    2.5.0 -> 2.6.0
ZiggyCreatures.FusionCache.Serialization.SystemTextJson     2.4.0 -> 2.6.0

Release Notes

ZiggyCreatures/FusionCache (ZiggyCreatures.FusionCache)

v2.6.0

🏷️ Configurable cleanup behavior for RemoveByTag()

Normally, when calling RemoveByTag("my-tag"), entries with that tag are gradually expired on a subsequent access.

Community member @​charlesvigneault asked for the ability to instead properly remove them.

So I added a new option to allow configuring this behavior:

services.AddFusionCache()
	.WithOptions(options =>
	{
		options.RemoveByTagBehavior = RemoveByTagBehavior.Remove;
	});

See here for the original issue.

Ⓜ️ Add support for RemoveByTag("*") in HybridCache adapter

After the initial release of HybridCache in 2025, the team added support for a special case: using RemoveByTag("*") to clear the entire cache.

I didn't notice until recently; thanks to community user @​vrbyjimmy it's now supported.
Or, to better say it, he did it: he acted so quickly that a PR immediately landed with the implementation, so thanks Jakub for that!

What happens underneath is that a RemoveByTag("*") call on the adapter is detected and re-routed to a Clear() call on the underlying FusionCache instance: very simple and elegant, and I like that a lot.
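From the consumer side this can be sketched like so (assuming the AsHybridCache() adapter setup from FusionCache v2; the hybridCache variable stands for an injected Microsoft HybridCache instance):

```csharp
// Register FusionCache and expose it as a Microsoft HybridCache
// (the adapter is what re-routes RemoveByTagAsync("*") to Clear()).
services.AddFusionCache()
	.AsHybridCache();

// Elsewhere, via the standard HybridCache abstraction:
// the special "*" tag clears the entire underlying FusionCache.
await hybridCache.RemoveByTagAsync("*");
```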

See here for the original issue.

🔒 Better Distributed Locker + Eager Refresh

Community user @​jgshowpad noticed that when using the new distributed stampede protection introduced in v2.5.0 with Eager Refresh some errors were being logged.

That was caused by the Redis-based distributed locker not correctly handling a timeout of zero (which, btw, is a common way to check whether a lock is already held by someone else, without having to wait).

This has now been fixed.
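To illustrate the zero-timeout semantics (using SemaphoreSlim as a stand-in here, not the actual FusionCache locker API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ZeroTimeoutDemo
{
	static async Task Main()
	{
		var gate = new SemaphoreSlim(1, 1);

		// First acquisition succeeds immediately.
		bool first = await gate.WaitAsync(TimeSpan.Zero);

		// A zero timeout means "try once, don't wait": since the lock is
		// already held, this returns false right away instead of blocking.
		bool second = await gate.WaitAsync(TimeSpan.Zero);

		Console.WriteLine($"{first} {second}"); // True False
		gate.Release();
	}
}
```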

See here for the original issue.

⚡ Perf boost for GenerateOperationId()

Community user @​Inok contributed a nice set of low-level perf optimizations for the GenerateOperationId() internal method, which may be called quite a lot when observability (logging, OTEL, etc.) is enabled.

That's a very nice and welcome contribution, thanks Pavel!

See here for the original issue.

🔒 Add missing ext method for DI registration of the Redis distributed locker

Community member @​mumin-khan noticed that, after releasing distributed stampede support in v2.5.0, I missed the related ext method for the DI registration.

So I added it, and it's now possible to do this:

builder.Services.AddFusionCacheRedisDistributedLocker(options =>
	{
		options.Configuration = ...;
	});

This has now been added, thanks Mumin!

See here for the original issue.

🐞 Fixed a couple of missing ConfigureAwait(false)

Community user @​JerinMathewJose01 noticed that on the old .NET Framework 4.7.2 and 4.8, sometimes the factory may remain stuck without completing correctly.

That was caused by a couple of missing ConfigureAwait(false) when awaiting the factory execution.

This has now been fixed.
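The general pattern can be sketched like this (illustrative only, not FusionCache's actual internals):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class FactoryRunner
{
	// Without ConfigureAwait(false), the continuation tries to resume on the
	// captured SynchronizationContext, which on .NET Framework can deadlock
	// when a caller synchronously blocks on the returned task.
	public static async Task<TValue> RunFactoryAsync<TValue>(
		Func<CancellationToken, Task<TValue>> factory,
		CancellationToken ct)
	{
		return await factory(ct).ConfigureAwait(false);
	}
}
```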

See here for the original issue.

📕 Docs

As always, I took some time to update the docs with the latest stuff and make them overall better.

v2.5.0

🛡️ Distributed Cache Stampede Protection

Since the very beginning FusionCache has offered solid Cache Stampede protection, as clearly explained and illustrated in the docs:

[Image: Cache Stampede Request Coalescing]

Such protection worked not just in the normal flow (miss -> factory -> return) but also with other more advanced features like:

  • Eager Refresh: hit (after the eager threshold) -> return + background factory
  • Factory Timeouts: miss -> factory + timeout -> return + background complete

With time the stampede protection got even better, and even extensible: this allowed 3rd party implementations of the core mechanism, called memory locker (IFusionCacheMemoryLocker).

All of this without removing the normal "it just works" experience since, by default, a StandardMemoryLocker is used without needing any user setup or intervention.

Cool.

But here's the thing: this protection had always been a local thing, meaning it did not span multiple nodes in a distributed way. So, if we were "unlucky", multiple factories could run at the same time for the same cache key on different nodes.

Meaning, this:

[Image: Distributed Cache Stampede, Before]

But that was true until now: enter Distributed Cache Stampede Protection 🎉

Thanks to the introduction of the new IFusionCacheDistributedLocker (see the next point) it's now possible to coordinate factory execution across multiple nodes, so that only one factory runs at a time for the same cache key, even across different nodes.

Meaning, this:

[Image: Distributed Cache Stampede, After]

By providing an IFusionCacheDistributedLocker implementation during setup, FusionCache takes care of everything: we don't have to do anything else.

The setup looks like this:

services.AddFusionCache()
	// SERIALIZER
	.WithSerializer(
		new FusionCacheSystemTextJsonSerializer()
	)
	// DISTRIBUTED CACHE
	.WithDistributedCache(
		new RedisCache(new RedisCacheOptions
		{
			Configuration = "localhost:6379",
		})
	)
	// BACKPLANE
	.WithBackplane(
		new RedisBackplane(new RedisBackplaneOptions
		{
			Configuration = "localhost:6379",
		})
	)
	// DISTRIBUTED LOCKER <-- HERE IT IS!
	.WithDistributedLocker(
		new RedisDistributedLocker(new RedisDistributedLockerOptions
		{
			Configuration = "localhost:6379",
		})
	);

Or, even better, if we want to re-use the same connection multiplexer for better performance and use of resources, we can do this:

var muxer = ConnectionMultiplexer.Connect("localhost:6379");

services.AddFusionCache()
	// SERIALIZER
	.WithSerializer(
		new FusionCacheSystemTextJsonSerializer()
	)
	// DISTRIBUTED CACHE
	.WithDistributedCache(
		new RedisCache(new RedisCacheOptions
		{
			ConnectionMultiplexerFactory = async () => muxer,
		})
	)
	// BACKPLANE
	.WithBackplane(
		new RedisBackplane(new RedisBackplaneOptions
		{
			ConnectionMultiplexerFactory = async () => muxer,
		})
	)
	// DISTRIBUTED LOCKER <-- HERE IT IS!
	.WithDistributedLocker(
		new RedisDistributedLocker(new RedisDistributedLockerOptions
		{
			ConnectionMultiplexerFactory = async () => muxer,
		})
	);

As always, the idea is that "it just works".

See here for the original issue.

🔒 Extensible Distributed Locking

As mentioned above, this is the new distributed component responsible for coordinating multiple factory executions on different nodes, all automatically.

As of now I'm providing 2 main implementations.

Of course the Redis one is the only real deal for now, meant for production use.

Other implementations will be possible in the future, by simply implementing the new IFusionCacheDistributedLocker abstraction, just like it was possible before with the IFusionCacheMemoryLocker abstraction.

So to recap:

  • install the package (e.g.: the Redis one)
  • add 1 line in the setup (just like for the distributed cache or the backplane)
  • done

I would say it's all pretty nice 🙂

See here for the original issue.

⚙️ New MemoryCacheDuration entry option

This is seemingly small, but really important.

In a multi-node scenario with an L1+L2 setup it's important to keep the cache, as a whole, coherent.

When using a Backplane there's no need to do anything: all is taken care of, and the cache as a whole is always coherent.

But what if we cannot or don't want to use a backplane, for... reasons?

Well, every change in the cache leaves the other L1s out-of-sync for the time remaining before their expiration, and this is not good.

This problem is known as Cache Coherence, and the backplane is what is used to SOLVE it.
But if we can't use a backplane, we should at least MITIGATE it: and we can do that by reducing the incoherency window.

And how?

Well, by simply specifying 2 different durations: one for the L1 and one for the L2.

Now, with FusionCache it has always been possible to specify a different Duration for the distributed cache, thanks to the DistributedCacheDuration option.

The problem was that, in the scenario above (L1+L2 and no backplane), it would have been nice to be able to simply say "keep all the durations as already specified, and just refresh the data in the L1 from L2 every few seconds".

But with only the DistributedCacheDuration option available, the way to achieve this was counterintuitive: instead of somehow overriding the L1 duration, we needed to lower the normal Duration to a few seconds and specify the intended logical duration as the DistributedCacheDuration.

Not terrible, but not great.

But now, not anymore: enter MemoryCacheDuration.

We can of course go granular on a call-by-call basis, but there's something better: we can simply specify a value in the DefaultEntryOptions, and all the existing call sites will inherit this new value which will automatically override the duration only for the L1.

Done.

And, if we use Tagging we can simply do the same thing for the TagsDefaultEntryOptions, and we're done.

Something like this:

services.AddFusionCache()
	.WithOptions(options =>
	{
		options.DefaultEntryOptions.MemoryCacheDuration = TimeSpan.FromSeconds(5);
		options.TagsDefaultEntryOptions.MemoryCacheDuration = TimeSpan.FromSeconds(5);
	});
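For the call-by-call route, a sketch (assuming MemoryCacheDuration is also available on FusionCacheEntryOptions, just like DistributedCacheDuration; cache is an injected IFusionCache and FetchProductAsync is a hypothetical data-access method):

```csharp
var product = await cache.GetOrSetAsync(
	"product:42",
	(ctx, ct) => FetchProductAsync(42, ct), // hypothetical factory
	options =>
	{
		options.Duration = TimeSpan.FromMinutes(10);           // logical duration (drives L2)
		options.MemoryCacheDuration = TimeSpan.FromSeconds(5); // shorter L1 window
	});
```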

Oh, and the new Best Practices Advisor (see next point) can already give this advice when it detects such a scenario.

Nice 🙂

[!IMPORTANT]
It's important to say that, if you can, you should always use the backplane, as that is THE way to solve cache coherence for good without out-of-sync windows or other issues.

See here for the original issue.

🏅 Best Practices Advisor

Sometimes we may inadvertently fall into a scenario with:

  • a particularly strange combination of options
  • a particularly strange combination of components (distributed cache, backplane, etc)
  • a particularly strange combination of... both

With time FusionCache got more and more new components (like #​575 ) and options (like #​571 ) and this, along with the naturally dynamic nature of a flexible setup and configuration, may lead us to inadvertently make the wrong decisions and fall into some gotchas.

FusionCache already had a couple of internal checks, like looking for a missing CacheKeyPrefix when using a shared L1 (which may lead to cache key collisions), and warns about them in the logs.

Now this practice has been unified & expanded, and it has a name: Best Practices Advisor.

Long story short, FusionCache now checks for common pitfalls and can give warnings and suggestions, all automatically and based on the current runtime state: no need to scrape the docs to see if the current config may lead to surprises thanks to a bad incantation of options.

I'd like to highlight that I've been careful not to make it too smart for its own good: that's an easy-to-miss cliff that would lead to exaggerated heuristics and checks, and in turn to bad results.

The checks initially implemented are:

  • missing cache key prefix: when using a named cache with an L2 or even just a shared L1, a missing cache key prefix may lead to cache key collisions
  • backplane + no distributed cache: when using a backplane without a distributed cache, it's important to check the default value for sending automatic distributed notifications, to avoid a useless continuous refresh cycle
  • distributed cache + no backplane + no memory cache duration: in this scenario it's probably better to use a lower memory cache duration to mitigate the cache coherence problem
  • distributed locker + no distributed cache: without a distributed cache it does not make much sense to use a distributed locker

More checks will be added in the future, but for now these are already quite useful.

Oh, one final thing: if you are thinking "great, a new piece of AI crap that will waste resources" then... nah, it's just a bunch of ifs done automatically in the background during startup. And if you want you can disable the Advisor by simply setting the new EnableBestPracticesAdvisor option to false (default is true).
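For reference, opting out looks like this (using the EnableBestPracticesAdvisor option named above):

```csharp
services.AddFusionCache()
	.WithOptions(options =>
	{
		// opt out of the startup checks (default is true)
		options.EnableBestPracticesAdvisor = false;
	});
```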

See here for the original issue.

⚙️ New IgnoreTimeoutsWhenDebugging option

Community user @​tvardero asked if it was possible to automatically ignore all timeouts when debugging.

That was in fact an interesting feature request, and after some investigations I decided to proceed.

Now, when setting the new IgnoreTimeoutsWhenDebugging option to true, all timeouts will be ignored, but ONLY when there is a debugger attached (via Debugger.IsAttached).
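Enabling it is a one-liner (sketch; assuming the option sits on FusionCacheOptions alongside the other global switches):

```csharp
services.AddFusionCache()
	.WithOptions(options =>
	{
		// timeouts are ignored ONLY while Debugger.IsAttached is true
		options.IgnoreTimeoutsWhenDebugging = true;
	});
```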

All in all this will help when debugging issues locally, without nasty timeouts hitting simply because we are inspecting a variable after a breakpoint hit, which is... the whole point of debugging, right?

Thanks @​tvardero for the input!

See here for the original issue.

See here for the feature design issue.

🕑 Small timestamps change

Thanks to community user @​vit-svoboda I changed the logic that gets the timestamp for a new entry, generated from a factory.
Before, the timestamp was about the moment the factory ended; now it's when it started.
No big change really, but it should help in a couple of edge cases with high concurrency.

See here for the original issue.

⚡ Minor performance tweaks

Nothing big really, as the perf was already great: just a bunch of extra tuning in a couple of edge cases.

📕 Docs (not yet!)

I did not have time to update the docs related to all this new stuff, but I'll do it in the next few days, pinky promise.

For now, this massive release note should be good enough.

✅ Tests

As always, with new features come new tests to make sure that everything works as intended, now and in the future (regressions, am I right?).
Now we're up to 1534 total running tests, including params combinations & friends.

I can always do more, but still: not bad.


Configuration

📅 Schedule: Branch creation - "every 2nd week starting on the 2 week of the year before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot requested review from a team as code owners January 19, 2026 03:48
@renovate renovate bot requested a review from dereknance January 19, 2026 03:48
@bitwarden-bot bitwarden-bot changed the title [deps]: Update fusioncache monorepo to 2.5.0 [PM-30974] [deps]: Update fusioncache monorepo to 2.5.0 Jan 19, 2026

codecov bot commented Jan 19, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 67.65%. Comparing base (12088ac) to head (a3392b6).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #236   +/-   ##
=======================================
  Coverage   67.65%   67.65%           
=======================================
  Files          46       46           
  Lines        1141     1141           
  Branches      100      100           
=======================================
  Hits          772      772           
  Misses        325      325           
  Partials       44       44           


@renovate renovate bot changed the title [PM-30974] [deps]: Update fusioncache monorepo to 2.5.0 [deps]: Update fusioncache monorepo to 2.5.0 Jan 19, 2026
@renovate renovate bot force-pushed the renovate/fusioncache-monorepo branch from e512e18 to 4d96b0c Compare February 2, 2026 14:50
@renovate renovate bot force-pushed the renovate/fusioncache-monorepo branch from 4d96b0c to 0d743ad Compare February 12, 2026 12:33
@renovate renovate bot force-pushed the renovate/fusioncache-monorepo branch from 0d743ad to a9fbb42 Compare February 24, 2026 16:40
@renovate renovate bot force-pushed the renovate/fusioncache-monorepo branch from a9fbb42 to 2d9a55c Compare March 13, 2026 12:34
@renovate renovate bot force-pushed the renovate/fusioncache-monorepo branch from 2d9a55c to a3392b6 Compare March 21, 2026 10:14
@renovate renovate bot changed the title [deps]: Update fusioncache monorepo to 2.5.0 [deps]: Update fusioncache monorepo to 2.6.0 Mar 21, 2026

@github-actions

Checkmarx One – Scan Summary & Details: 79f7afb6-dec2-4d72-bdae-af569568ef51

Great job! No new security vulnerabilities introduced in this pull request

@dereknance dereknance merged commit bd02e16 into main Mar 24, 2026
19 of 20 checks passed
@dereknance dereknance deleted the renovate/fusioncache-monorepo branch March 24, 2026 16:09