
WIP: vol src example #39

Open

proloyd wants to merge 13 commits into Eelbrain:master from proloyd:VolExample

Conversation

@proloyd (Collaborator) commented Oct 16, 2025

Adds a Volume Source example to Sphinx-gallery.

Addresses #20 (comment)

@christianbrodbeck need your opinion on the organization.

@codecov (bot) commented Oct 16, 2025

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

Thanks for integrating Codecov - We've got you covered ☂️

@christianbrodbeck (Member)

Looks like the doc build is still failing.

@christianbrodbeck (Member)

Is there a faster way to check the doc build than going into the build log and downloading the artifact?

@proloyd (Collaborator, Author) commented Oct 17, 2025

> Is there a faster way to check the doc build than going into the build log and downloading the artifact?

I usually test on my local computer and only then push the commits.

@proloyd (Collaborator, Author) commented Oct 17, 2025

BTW, the earlier build was failing with the following error:

[screenshot: doc build error, 2025-10-17]

This comes from rendering the GlassBrain plots (i.e., the remote doc build succeeds after commenting those lines out). I was under the impression that the GlassBrain plots use the matplotlib backend, and so do not need to access an X display. @christianbrodbeck any thoughts on this?

@christianbrodbeck (Member)

Just noticed the same thing in Eelbrain/Eelbrain#100. It could be because nilearn uses pyplot, and pyplot does some display initialization; I remember that's a reason why we made it a lazy import. Can look into it later.
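If the pyplot import is indeed what touches the display, a common workaround for headless doc builds is to force a non-interactive backend before anything imports pyplot. A minimal sketch (generic matplotlib usage, not necessarily the fix that ended up in Eelbrain):

```python
# Select a non-interactive backend before pyplot is imported anywhere,
# so rendering never tries to connect to an X display.
import matplotlib
matplotlib.use("Agg")

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.canvas.draw()  # rasterizes in memory; no display needed
```

In CI the same effect can be had without touching the code by setting the `MPLBACKEND=Agg` environment variable.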

@christianbrodbeck (Member)

Try again using the latest Eelbrain release (it's on conda-forge).

@proloyd (Collaborator, Author) commented Oct 18, 2025

@christianbrodbeck yes! It worked like a charm! Thanks!

BTW, what would be a better way of organizing the gallery? I can think of making a data loader to do the repetitive work at the beginning. Any other suggestions?
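The data-loader idea could be as simple as one helper that each gallery example calls at the top. A sketch with hypothetical file names and directory layout (nothing here reflects the actual dataset structure):

```python
import os

def load_oddball_paths(subject, data_dir="~/Data/oddball"):
    """Collect the repetitive per-example setup in one place: resolve the
    paths to a subject's raw recording and event file.

    All names here (directory layout, file suffixes) are hypothetical.
    """
    data_dir = os.path.expanduser(data_dir)
    subject_dir = os.path.join(data_dir, subject)
    return {
        "raw": os.path.join(subject_dir, f"{subject}_raw.fif"),
        "events": os.path.join(subject_dir, f"{subject}_events.csv"),
    }

paths = load_oddball_paths("sub-01", data_dir="/tmp/oddball")
```

Each example would then start with one call instead of repeating the path bookkeeping.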

@christianbrodbeck (Member) left a comment

> BTW, what would be a better way of organizing the gallery? I can think of making a dataloader to do the repetitive work at the beginning.

I think people will actually appreciate a few complete examples, giving all the steps from raw data. I see a few places where we could make this more streamlined, and I will make a PR with suggestions into this PR.

Separately, I think we'll want a second set of examples working on whole BIDS datasets, where all the preprocessing will be handled by the pipeline and we can focus right on the results. We'll have to think logistically about how we will handle those builds (since they'll take much longer). Maybe a separate website that we push to manually?

Quick check whether I got that correctly: events_bad_01.csv marks noisy segments in the data, but we're not actually using that information, right? (cf. #41)

The code line length currently exceeds the display area in some places, but I think we'll address that later with #19.

@proloyd (Collaborator, Author) commented Oct 22, 2025

@christianbrodbeck how does the example look now?

@christianbrodbeck (Member) left a comment

Just a few stylistic suggestions and questions, thanks @proloyd !

@christianbrodbeck (Member)

Actually, another issue: the brain is not fsaverage, so it is not aligned with the GlassBrain outline (Eelbrain/Eelbrain#105). I traced the brain outline in the image below (you can see it based on the dots for zero vectors).

[image: traced brain outline]

@proloyd (Collaborator, Author) commented Nov 4, 2025

@christianbrodbeck Thanks for the stylistic suggestions, I incorporated all of them.

Regarding the GlassBrain plots, I used source morphing for the time being. Let me know what you think about it.
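For readers unfamiliar with morphing: it maps a subject's source estimate onto a template brain (here fsaverage), so the sources line up with the template's GlassBrain outline. Numerically it is just a linear map from subject sources to template vertices. A toy numpy sketch (the matrix is made up; the real one comes from the registration, e.g. via morph_source_space):

```python
import numpy as np

# Toy morph matrix: 4 subject sources -> 3 fsaverage vertices.  Each
# template vertex is a weighted average of nearby subject sources
# (rows sum to 1, as in a real interpolation-based morph).
morph = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.2, 0.8],
])

subject_data = np.array([1.0, 2.0, 3.0, 4.0])  # activity at subject sources
fsaverage_data = morph @ subject_data          # same activity on the template
```

After this step, plotting `fsaverage_data` against the fsaverage GlassBrain outline is geometrically consistent.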

@proloyd (Collaborator, Author) commented Nov 4, 2025

@christianbrodbeck ready for your review.

@christianbrodbeck (Member)

A counterintuitive result: frequent tones have a stronger response than oddballs. Is that also the case in an evoked response analysis?

@proloyd (Collaborator, Author) commented Nov 26, 2025

> Counterintuitive result that frequent tones have a stronger response than oddballs. Is that also the case in an evoked response analysis?

The opposite is true, no?

[image]

@christianbrodbeck (Member)

Where did you get this from? Here is a screenshot from the CI build:

[screenshot: CI build output, 2025-11-26]

@proloyd (Collaborator, Author) commented Jan 31, 2026

@christianbrodbeck Are we waiting for morph_source_space to support volume source space morphing in the next Eelbrain release before merging this pull request?

@christianbrodbeck (Member)

We don't need to; we could update the CI to work with the alpha build of Eelbrain, which is on conda-forge.

I was waiting for clarification on which response is frequent and which is infrequent :)

@christianbrodbeck (Member)

@proloyd any thoughts on the discrepancy?

@gokuprasanna (Collaborator)

Results from running the example on my PC with Ubuntu 24.04:

[image: results on Ubuntu 24.04]

@yaylim (Collaborator) commented Feb 5, 2026

Outputs from Mac:

[image: results on macOS]

@proloyd (Collaborator, Author) commented Feb 9, 2026

@yaylim, @gokuprasanna Thanks!
I did the following:

  1. changed the filtering frequency limits from 1–8 Hz to 1–20 Hz
  2. added an EOG spatial filter

and that seems to produce the expected results.

[image: updated results]

@christianbrodbeck your thoughts?
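The two changes above can be sketched in isolation on synthetic data. Here scipy stands in for whatever filtering the example actually uses, and a simple regression-based EOG projection stands in for the spatial filter (only one of several possible forms):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(meg, eog, sfreq):
    """Sketch of the two changes: band-pass 1-20 Hz (was 1-8 Hz) and a
    regression-based EOG projection (a stand-in for the real pipeline)."""
    # Band-pass every channel between 1 and 20 Hz (zero-phase).
    sos = butter(4, [1.0, 20.0], btype="bandpass", fs=sfreq, output="sos")
    meg = sosfiltfilt(sos, meg, axis=-1)
    eog = sosfiltfilt(sos, eog, axis=-1)
    # Regress the EOG channel out of each MEG channel.
    beta = (meg @ eog) / (eog @ eog)  # per-channel regression weights
    return meg - np.outer(beta, eog)

rng = np.random.default_rng(0)
sfreq = 200.0
eog = rng.standard_normal(2000)                     # synthetic EOG channel
meg = rng.standard_normal((10, 2000)) + 0.5 * eog   # MEG contaminated by EOG
clean = preprocess(meg, eog, sfreq)
```

The wider pass-band keeps higher-frequency components of the evoked response, and the projection removes the ocular contribution channel by channel.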

@christianbrodbeck (Member)

Interesting, both changes are good calls. It still does not look like a classical mismatch response (e.g., https://doi.org/10.1016/j.clinph.2008.11.029). @yaylim any progress on the evoked response analysis?

@christianbrodbeck (Member)

If you scroll down, the Brainstorm tutorial has pictures of what we're expecting in sensor and source space. Still, I think it would be good to align our expectations for NCRF against a sensor space analysis using all the same preprocessing parameters. There's also an opportunity for source localizing the MMN and P300.

@yaylim (Collaborator) commented Feb 10, 2026

@christianbrodbeck I'm still checking the CTF analysis example on the MNE website; I'll check the Brainstorm tutorial to double-check the results. Should be done by tomorrow. Thanks for the info!

@proloyd (Collaborator, Author) commented Feb 11, 2026

@christianbrodbeck @yaylim I also added an ERF analysis as part of the example:

[image: ERF analysis results]

This matches the NCRF results.
