
Promote specialized attributes of included typed datasets (e.g. Units.spike_times_resolution)#797

Merged
ehennestad merged 76 commits into main from promote-specialized-attributes-of-typed-datasets-to-containing-type
Apr 7, 2026

Conversation


@ehennestad ehennestad commented Mar 29, 2026

Summary

This PR exposes attributes that are added at the inclusion site of a typed dataset as properties on the containing MATLAB class, using the dataset_attribute naming pattern.

Why

The NWB schema allows a typed dataset reused via neurodata_type_inc to gain additional attributes when it is included in another type. matnwb was not surfacing those specialized attributes consistently in the generated API; this showed up most clearly for Units.spike_times.resolution.
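
As an abridged sketch of what the core schema expresses (field names follow the NWB specification language; the doc strings and exact layout here are illustrative, not copied verbatim from the 2.9.0 sources):

```yaml
groups:
- neurodata_type_def: Units
  datasets:
  - name: spike_times
    neurodata_type_inc: VectorData
    doc: Spike times for each unit.
    attributes:
    # Attribute added at the inclusion site: it belongs to
    # Units.spike_times, not to the VectorData type itself.
    - name: resolution
      dtype: float64
      required: false
      doc: Smallest possible difference between two spike times.
```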

What changed

  • added support for promoting inclusion-site attributes from typed datasets onto the containing generated class
  • made the promotion rule schema-driven: promote only attributes added on the included dataset instance, not attributes that already belong to the included dataset type and are merely refined there
  • kept read/export behavior aligned with the promoted properties
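
The schema-driven promotion rule can be pictured as a filter over the included dataset's attribute specs. The function and field names below are illustrative sketches, not the actual generator code:

```matlab
function names = getPromotedAttributeNames(inclusionSpec, includedTypeSpec)
% Return names of attributes that were added at the inclusion site of a
% typed dataset, i.e. attributes present on the inclusion spec but not
% defined on the included dataset type itself (e.g. VectorData).
% Attributes that exist on the type and are merely refined at the
% inclusion site are excluded.
% (Hypothetical helper; field names are assumptions for illustration.)
    inclusionAttrs = {inclusionSpec.attributes.name};
    typeAttrs = {includedTypeSpec.attributes.name};
    isAddedHere = ~ismember(inclusionAttrs, typeAttrs);
    names = inclusionAttrs(isAddedHere);
end
```

For Units.spike_times this would keep resolution (added at the inclusion site) while leaving out attributes such as description that VectorData already defines.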

User impact

Users can access specialized included-dataset attributes directly from the containing type, e.g. spike_times_resolution, without leaking those attributes onto unrelated VectorData instances.
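
A minimal usage sketch (constructor arguments abbreviated and values illustrative; the property name follows the dataset_attribute pattern described above):

```matlab
% Assumes classes generated from core schema 2.9.0.
units = types.core.Units( ...
    'colnames', {'spike_times'}, ...
    'description', 'unit spike times');

% Promoted from the inclusion-site attribute Units.spike_times.resolution:
units.spike_times_resolution = 1e-5;
```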

Root cause

The generator handled untyped dataset attributes as dependent properties on the containing class, but typed reused datasets did not have an equivalent schema-driven path. A temporary hidden-property-based heuristic also turned out to be too tied to legacy VectorData patching.

Validation

  • regenerated core classes for schema 2.9.0
  • ran tests.system.UnitTimesIOTest.testRoundTrip
  • verified that Units exposes spike_times_resolution
  • verified that SimultaneousRecordingsTable does not expose recordings_table
  • checked support for typed unnamed included datasets; no additional generated API changes were triggered by the current core schema

ehennestad and others added 30 commits March 10, 2026 13:46
Extend the test helper to treat types.untyped.DataPipe like DataStub by calling load() on actualValue before comparing.
Replace the old SKIP_PYNWB_COMPATIBILITY_TEST_FOR_TUTORIALS env var with SKIP_PYNWB_TESTS across tests and nwbtest.m, and update +tests/nwbtest.default.env.
Update CI workflow matrix keys and usage in prepare_release.yml, run_tests.yml, and configurations/matlab_release_matrix_strategy.yml to use the new key (matrix.skip-pynwb-tests) and wire the MATLAB setenv accordingly.
Move conditional installation of pynwb into the workflows (install pynwb only when skip-pynwb-tests == '0') and remove the direct git+ dependency from +tests/requirements.txt (update comment to reflect conditional installs).
Also add TestTags = {'UsesPython'} to PyNWBIOTest.

These changes centralize the CI control of pynwb installation and standardize the skip variable name.
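
A minimal sketch of this workflow pattern (step names, MATLAB releases, and matrix values are illustrative, not copied from the actual workflow files):

```yaml
strategy:
  matrix:
    include:
      - matlab-version: R2024b
        skip-pynwb-tests: '0'
      - matlab-version: R2021a
        skip-pynwb-tests: '1'
steps:
  - name: Install pynwb (only when enabled for this matrix entry)
    if: matrix.skip-pynwb-tests == '0'
    run: pip install pynwb nwbinspector
  - name: Run MATLAB tests
    uses: matlab-actions/run-command@v2
    with:
      command: >-
        setenv('SKIP_PYNWB_TESTS', '${{ matrix.skip-pynwb-tests }}');
        nwbtest
```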
…nore-nwbinspector-subject-checks-in-tutorialtests
checking if this class attribute trips up docs build
Move TestTags to methods blocks. Check if that fixes issue with sphinx build of docs
…nore-nwbinspector-subject-checks-in-tutorialtests
Install pynwb and nwbinspector from PyPI stable releases in the main
test workflow so CI only fails due to matnwb regressions. Add a
separate weekly workflow that tests against the dev branches to catch
upstream incompatibilities early without blocking PRs.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Added more details in comments
Removed unused vars NWB_TEST_DEBUG, GITHUB_TOKEN
Added PYNWB_REPO_DIR
Replace custom venv/download/GitHub API infrastructure with a simple
approach: read tutorial files from PYNWB_REPO_DIR env var pointing to
a pre-cloned pynwb repo, and run them against the system Python. The
CI workflows are now responsible for cloning the repo and setting the
env var. TestTags = {'UsesPython'} added so the tag selector in the
dev workflow picks up these tests, and SKIP_PYNWB_TESTS correctly
excludes them on older MATLAB releases.
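
The CI side of this arrangement might look like the following (illustrative; actual step names and workflow wiring may differ):

```yaml
steps:
  - name: Clone pynwb for tutorial sources
    uses: actions/checkout@v4
    with:
      repository: NeurodataWithoutBorders/pynwb
      path: pynwb-repo   # must be inside the workspace directory
  - name: Run tutorial tests
    uses: matlab-actions/run-command@v2
    with:
      command: >-
        setenv('PYNWB_REPO_DIR', fullfile(pwd, 'pynwb-repo'));
        nwbtest
```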

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Covers prerequisites, running tests via nwbtest() and the MATLAB unit
testing framework, Python dependency setup, environment variable
configuration, and test authoring conventions.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
actions/checkout requires path to be within the workspace directory.
Use path: pynwb-repo (inside workspace) instead of ../pynwb-repo.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Suppress output
Suppress a warning caused by incorrect values in the file's version attribute in the source schemas for v2.2.0 and v2.6.0
Added section about setting up dynamically loaded filters
Move test tags to methods block, as testtags in classdef breaks the sphinx/docs build
The resolution property of the spike_times VectorData object was previously cleared when spike_times was added to the Units object
Improve test diagnostic message
data -> datasetValue
info -> datasetInfo
fullpath -> datasetPath
datadim -> dataDims
class_id -> classId
@ehennestad ehennestad marked this pull request as ready for review March 31, 2026 20:13

codecov bot commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 92.13483% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 95.49%. Comparing base (270e871) to head (3a33a13).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
+file/processClass.m 85.10% 7 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #797      +/-   ##
==========================================
- Coverage   95.53%   95.49%   -0.05%     
==========================================
  Files         192      192              
  Lines        7011     7097      +86     
==========================================
+ Hits         6698     6777      +79     
- Misses        313      320       +7     


…ributes-of-typed-datasets-to-containing-type
@ehennestad ehennestad changed the title [codex] Promote specialized attributes of included typed datasets [Zarr-support] Promote specialized attributes of included typed datasets Apr 6, 2026
@ehennestad ehennestad changed the title [Zarr-support] Promote specialized attributes of included typed datasets Promote specialized attributes of included typed datasets (e.g. Units.spike_times_resolution) Apr 6, 2026
@ehennestad ehennestad changed the base branch from refactor-io-parse-dataset to main April 7, 2026 13:44
@ehennestad
Collaborator Author

Unit tests failing due to unexpected behaviour in PyNWB:
NeurodataWithoutBorders/pynwb#2182

@ehennestad ehennestad enabled auto-merge April 7, 2026 19:44
@ehennestad ehennestad added this pull request to the merge queue Apr 7, 2026
Merged via the queue into main with commit 6148385 Apr 7, 2026
18 checks passed