Merged
Commits
76 commits
c1814e7
Update TutorialTest.m
ehennestad Mar 10, 2026
ee7579b
Support DataPipe in verifyContainerEqual
ehennestad Mar 10, 2026
d05bd36
Merge branch 'fix-failing-pynwbio-tests' into ignore-nwbinspector-sub…
ehennestad Mar 10, 2026
b2df6b9
Rename SKIP_PYNWB var and update CI/tests
ehennestad Mar 10, 2026
2a45d53
Merge branch 'fix-skip-pynwb-tests-for-older-matlab-releases' into ig…
ehennestad Mar 10, 2026
6a42eb6
debug readthedocs failed build
ehennestad Mar 11, 2026
9d78ddd
Update PyNWBIOTest.m
ehennestad Mar 11, 2026
0b5e80b
Merge branch 'fix-skip-pynwb-tests-for-older-matlab-releases' into ig…
ehennestad Mar 11, 2026
3ae2cac
Merge branch 'main' into ignore-nwbinspector-subject-checks-in-tutori…
ehennestad Mar 19, 2026
5693fa3
Add subject section to domain tutorials
ehennestad Mar 23, 2026
6099a81
Update generated html- and m-files
ehennestad Mar 23, 2026
5f9890f
Update electrode/imaging plane location to valid term in tutorials
ehennestad Mar 24, 2026
2edc6cd
Update inspectNwbFile.m
ehennestad Mar 24, 2026
6ed5095
Update inspectNwbFile.m
ehennestad Mar 24, 2026
ebea03a
Make subject ids/descriptions consistent
ehennestad Mar 24, 2026
c6afaeb
Decouple stable CI from pynwb/nwbinspector dev branches
ehennestad Mar 25, 2026
2b78969
Update requirements.txt
ehennestad Mar 25, 2026
c8228dc
Update nwbtest.default.env
ehennestad Mar 25, 2026
dece8d1
Updated test workflows to check out pynwb in the runner
ehennestad Mar 25, 2026
04939a5
Refactor PynwbTutorialTest to use pre-cloned pynwb repo
ehennestad Mar 25, 2026
68c4ad3
change checkout location for pynwb
ehennestad Mar 25, 2026
278ae4b
Add test suite README for new developers
ehennestad Mar 25, 2026
956b4f2
Fix pynwb repo checkout path in CI workflows
ehennestad Mar 25, 2026
f98de16
Update untypedSetTest.m
ehennestad Mar 25, 2026
a34a073
Update GenerationTest.m
ehennestad Mar 25, 2026
136270b
Update README.md
ehennestad Mar 25, 2026
4af3980
Update PynwbTutorialTest.m
ehennestad Mar 25, 2026
6dab0ed
Expand test README with dynamic filters and TestTags convention
ehennestad Mar 25, 2026
2c6ca8d
Merge branch 'main' into decouple-ci-testing-from-pynwb-dev
ehennestad Mar 26, 2026
48f3058
Smaller test improvements
ehennestad Mar 28, 2026
6f6e6c6
Hoist typed dataset attributes onto parent classes
ehennestad Mar 29, 2026
b73a472
Use schema-defined attrs for included dataset promotion
ehennestad Mar 29, 2026
807ab43
Add resolution to spike times in Units table
ehennestad Mar 29, 2026
95ae151
Updated ecephys tutorial generated files
ehennestad Mar 29, 2026
b5419e3
Restore file.fillConstructor
ehennestad Mar 30, 2026
9a353bd
Update Group.m
ehennestad Mar 30, 2026
9f03b0f
Update parseDataset.m
ehennestad Mar 30, 2026
ba0bd26
Standardize output arguments of io.parseDataset
ehennestad Mar 30, 2026
bca4192
Renamed attrargs -> datasetAttributes
ehennestad Mar 30, 2026
8d8b1a7
Renamed Type -> typeInfo
ehennestad Mar 30, 2026
1e1c684
Explicitly check if dataset is typed
ehennestad Mar 30, 2026
0d1e5fa
Rename props to datasetPropertyMap and shift to relevant location
ehennestad Mar 30, 2026
b73e921
Move attribute promotion to relevant location
ehennestad Mar 30, 2026
c02f41d
Collect h5 specific logic in one block and add short descriptions
ehennestad Mar 30, 2026
5ffe251
make elseif case more explicit for scalar dataspace type
ehennestad Mar 30, 2026
0bc57e3
Rename datasetName and remove confusing comment
ehennestad Mar 30, 2026
0220f6d
Added more robust file and dataset cleanup
ehennestad Mar 30, 2026
cb4b5c5
Update parseDataset.m
ehennestad Mar 30, 2026
f25afda
Add docstring
ehennestad Mar 30, 2026
6161a65
Clean up
ehennestad Mar 31, 2026
b62ba05
Update ParseDatasetTest.m
ehennestad Mar 31, 2026
b9bab48
make blacklist optional
ehennestad Mar 31, 2026
79d7b1d
Merge branch 'decouple-ci-testing-from-pynwb-dev' into refactor-io-pa…
ehennestad Mar 31, 2026
4cf4fcd
Update parseDataset.m
ehennestad Mar 31, 2026
afb7234
Move coverage skip flag to workflow env
ehennestad Mar 31, 2026
234fc28
Merge branch 'decouple-ci-testing-from-pynwb-dev' into refactor-io-pa…
ehennestad Mar 31, 2026
7032cb2
Update parseDataset.m
ehennestad Mar 31, 2026
5dff250
Merge branch 'refactor-io-parse-dataset' of https://github.com/Neurod…
ehennestad Mar 31, 2026
510d18e
Merge branch 'refactor-io-parse-dataset' into promote-specialized-att…
ehennestad Mar 31, 2026
4f0c1ea
Update parseDataset.m
ehennestad Mar 31, 2026
df40f88
restore unrelated changes
ehennestad Mar 31, 2026
ab5e3f5
Fix legacy compatibility
ehennestad Mar 31, 2026
0a99d16
Update TutorialTest.m
ehennestad Mar 29, 2026
9fc53dd
Merge branch 'decouple-ci-testing-from-pynwb-dev' into refactor-io-pa…
ehennestad Mar 31, 2026
e94a8a1
Renamed variables for consistency and clarity
ehennestad Mar 31, 2026
0ebdb72
Fixed docstring
ehennestad Mar 31, 2026
ecd8263
Update run_tests.yml
ehennestad Mar 31, 2026
a198575
Merge branch 'decouple-ci-testing-from-pynwb-dev' into refactor-io-pa…
ehennestad Mar 31, 2026
5fdb2e5
Update parseDataset.m
ehennestad Mar 31, 2026
bd167f2
Run Tests for PR t any branch
ehennestad Mar 31, 2026
cdebab3
Merge branch 'decouple-ci-testing-from-pynwb-dev' into refactor-io-pa…
ehennestad Mar 31, 2026
fd6a330
Merge branch 'refactor-io-parse-dataset' into promote-specialized-att…
bendichter Apr 1, 2026
df83711
Merge branch 'main' into refactor-io-parse-dataset
bendichter Apr 1, 2026
8ae7459
Merge branch 'refactor-io-parse-dataset' into promote-specialized-att…
ehennestad Apr 1, 2026
1876eeb
Update UnitTimesIOTest.m
ehennestad Apr 7, 2026
3a33a13
Merge branch 'main' into promote-specialized-attributes-of-typed-data…
ehennestad Apr 7, 2026
2 changes: 2 additions & 0 deletions +file/Attribute.m
@@ -8,6 +8,7 @@
dtype; %type of value
dependent; %set externally. If the attribute is actually dependent on an untyped dataset/group
dependent_fullname; %set externally. This is the full name, including names of potential parent groups separated by underscore. A value will only be present if it would differ from dependent.
promoted_to_container = false; % set externally when promoted from a typed dataset onto the containing class API
scalar; %if the value is scalar or an array
dimnames;
shape;
@@ -25,6 +26,7 @@
obj.dtype = '';
obj.dependent = '';
obj.dependent_fullname = '';
obj.promoted_to_container = false;
obj.scalar = true;
obj.shape = {};
obj.dimnames = {};
32 changes: 31 additions & 1 deletion +file/Group.m
@@ -161,7 +161,14 @@
PropertyMap = [PropertyMap; Sub_Attribute_Map];
end
PropertyMap(SubData.name) = SubData;
else
else % Typed dataset
includedAttributes = getIncludedTypedDatasetAttributes(obj, SubData);
if ~isempty(includedAttributes)
attrNames = {includedAttributes.name};
attrNames = strcat(SubData.name, '_', attrNames);
PropertyMap = [PropertyMap; ...
containers.Map(attrNames, num2cell(includedAttributes))];
end
if isempty(SubData.name)
PropertyMap(lower(SubData.type)) = SubData;
else
@@ -253,3 +260,26 @@
end
end
end

function includedAttributes = getIncludedTypedDatasetAttributes(GroupObj, datasetObj)
% getIncludedTypedDatasetAttributes - Return attributes declared on a named
% included typed dataset instance.
%
% This is used for reuse by inclusion (`neurodata_type_inc` without
% `neurodata_type_def`), where an existing typed dataset is embedded as a
% named component of another type. Promotion decisions are resolved later,
% once namespace context is available, so we can distinguish newly added
% attributes from modifications of attributes already defined on the
% included dataset type.

includedAttributes = file.Attribute.empty;
if isempty(GroupObj.type) || isempty(datasetObj.name) || isempty(datasetObj.attributes)
return;
end

for iAttr = 1:length(datasetObj.attributes)
attribute = datasetObj.attributes(iAttr);
attribute.dependent = datasetObj.name;
includedAttributes(end+1) = attribute; %#ok<AGROW>
end
end
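
To make the promotion flow concrete, here is a hedged sketch (all variable and attribute names are illustrative, not taken from the diff) of what the helper above returns for a group that includes a typed dataset:

```matlab
% Hypothetical example: a group that embeds a typed dataset named
% 'spike_times' whose type declares a 'resolution' attribute. The
% helper returns copies of those attributes with 'dependent' set to
% the dataset name, so Group.getProps can register them under
% prefixed keys such as 'spike_times_resolution'.
attrs = getIncludedTypedDatasetAttributes(GroupObj, SpikeTimesObj);
% {attrs.name}      -> e.g. {'resolution', ...}
% {attrs.dependent} -> e.g. {'spike_times', ...}
```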
3 changes: 2 additions & 1 deletion +file/fillClass.m
@@ -12,6 +12,7 @@
defaults = {};
dependent = {};
hidden = {}; % special hidden properties for hard-coded workarounds

%separate into readonly, required, and optional properties
for iGroup = 1:length(allProperties)
propertyName = allProperties{iGroup};
@@ -165,7 +166,7 @@
inherited);
setterFcns = file.fillSetters(setdiff(nonInherited, union(readonly, hiddenAndReadonly)), classprops);
validatorFcns = file.fillValidators(allProperties, classprops, namespace, namespace.getFullClassName(name), inherited);
exporterFcns = file.fillExport(nonInherited, class, superclassNames{1}, required);
exporterFcns = file.fillExport(nonInherited, class, superclassNames{1}, required, classprops);
methodBody = strjoin({constructorBody...
'%% SETTERS' setterFcns...
'%% VALIDATORS' validatorFcns...
19 changes: 18 additions & 1 deletion +file/fillExport.m
@@ -1,4 +1,4 @@
function festr = fillExport(propertyNames, RawClass, parentName, required)
function festr = fillExport(propertyNames, RawClass, parentName, required, classprops)
exportHeader = 'function refs = export(obj, fid, fullpath, refs)';
if isa(RawClass, 'file.Dataset')
propertyNames = propertyNames(~strcmp(propertyNames, 'data'));
@@ -22,6 +22,11 @@
propertyName = propertyNames{i};
pathProps = traverseRaw(propertyName, RawClass);
prop = pathProps{end};
if nargin >= 5 && isa(prop, 'file.Attribute') ...
&& isKey(classprops, propertyName) ...
&& isa(classprops(propertyName), 'file.Attribute')
prop = classprops(propertyName);
end
elideProps = pathProps(1:end-1);
elisions = cell(length(elideProps),1);
% Construct elisions
@@ -222,6 +227,7 @@

propertyChecks = {};
dependencyCheck = {};
preExportString = '';

if isa(prop, 'file.Attribute') && ~isempty(prop.dependent)
%if attribute is dependent, check before writing
@@ -254,6 +260,13 @@
warnIfMissingRequiredDependentAttributeStr = ...
sprintf('obj.throwErrorIfRequiredDependencyMissing(''%s'', ''%s'', fullpath)', name, depPropname);
end

if prop.promoted_to_container
preExportString = sprintf([ ...
'if isempty(obj.%1$s) && ~isempty(obj.%2$s) && isobject(obj.%2$s) && isprop(obj.%2$s, ''%3$s'') && ~isempty(obj.%2$s.%3$s)\n' ...
' obj.%1$s = obj.%2$s.%3$s;\n' ...
'end'], name, depPropname, prop.name);
end
end

if ~prop.required
Expand All @@ -273,6 +286,10 @@
end
end

if ~isempty(preExportString)
dataExportString = sprintf('%s\n%s', preExportString, dataExportString);
end

if ~isempty(dependencyCheck)
dataExportString = sprintf('%s\nif %s\n%s\nend', ...
dataExportString, ...
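
For reference, the `preExportString` template in this diff expands into the generated `export` method roughly as follows (the property names are illustrative: a promoted `spike_times_resolution` backed by the included `spike_times` dataset's `resolution` attribute):

```matlab
% Generated pre-export sync: if the promoted container property is
% unset but the included dataset carries a value, copy it up before
% the attribute is written to file.
if isempty(obj.spike_times_resolution) && ~isempty(obj.spike_times) ...
        && isobject(obj.spike_times) && isprop(obj.spike_times, 'resolution') ...
        && ~isempty(obj.spike_times.resolution)
    obj.spike_times_resolution = obj.spike_times.resolution;
end
```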
20 changes: 18 additions & 2 deletions +file/fillSetters.m
@@ -43,13 +43,29 @@
warnIfDependencyMissingString = sprintf(...
'obj.warnIfAttributeDependencyMissing(''%s'', ''%s'')', ...
propname, parentname);

syncPromotedDatasetAttributeString = '';
if prop.promoted_to_container
syncPromotedDatasetAttributeString = sprintf([ ...
'if ~isempty(obj.%1$s) && isobject(obj.%1$s) && isprop(obj.%1$s, ''%2$s'')\n' ...
' if ~isempty(obj.%3$s)\n' ...
' obj.%1$s.%2$s = obj.%3$s;\n' ...
' elseif ~isempty(obj.%1$s.%2$s)\n' ...
' obj.%3$s = obj.%1$s.%2$s;\n' ...
' end\n' ...
'end'], parentname, prop.name, propname);
end

postsetFunctionStr = strjoin({...
postsetLines = {...
sprintf('function postset_%s(obj)', propname), ...
file.addSpaces(conditionStr, 4), ...
file.addSpaces(warnIfDependencyMissingString, 8), ...
file.addSpaces('end', 4), ...
'end'}, newline);
'end'};
if ~isempty(syncPromotedDatasetAttributeString)
postsetLines = [postsetLines(1:end-1), {file.addSpaces(syncPromotedDatasetAttributeString, 4)}, postsetLines(end)];
end
postsetFunctionStr = strjoin(postsetLines, newline);
end
end
end
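
Likewise, `syncPromotedDatasetAttributeString` expands into the generated postset hook roughly like this (same illustrative names as above; the real names depend on the schema):

```matlab
% Generated postset sync: keep the promoted container property and
% the included dataset's attribute consistent in both directions.
if ~isempty(obj.spike_times) && isobject(obj.spike_times) ...
        && isprop(obj.spike_times, 'resolution')
    if ~isempty(obj.spike_times_resolution)
        obj.spike_times.resolution = obj.spike_times_resolution;
    elseif ~isempty(obj.spike_times.resolution)
        obj.spike_times_resolution = obj.spike_times.resolution;
    end
end
```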
78 changes: 77 additions & 1 deletion +file/processClass.m
@@ -32,6 +32,7 @@
class = patchVectorData(class);
end
props = class.getProps();
props = markPromotedAttributesForIncludedTypedDatasets(class, props, namespace);

% Apply patches for special cases of schema/specification errors
class = applySchemaVersionPatches(nodename, class, props, namespace);
@@ -53,6 +54,81 @@
end
end

function props = markPromotedAttributesForIncludedTypedDatasets(classObj, props, namespace)
if ~isa(classObj, 'file.Group') || isempty(classObj.datasets)
return;
end

for iDataset = 1:length(classObj.datasets)
datasetObj = classObj.datasets(iDataset);
if isempty(datasetObj.type) || isempty(datasetObj.name) || isempty(datasetObj.attributes)
continue;
end

datasetNamespace = namespace.getNamespace(datasetObj.type);
if isempty(datasetNamespace)
continue;
end

schemaAttributeNames = getSchemaDefinedAttributeNames(datasetObj.type, datasetNamespace);
for iAttr = 1:length(datasetObj.attributes)
attribute = datasetObj.attributes(iAttr);
propertyName = [datasetObj.name '_' attribute.name];
if ~isKey(props, propertyName)
continue;
end

if any(strcmp(attribute.name, schemaAttributeNames))
remove(props, propertyName);
else
promotedAttribute = props(propertyName);
promotedAttribute.promoted_to_container = true;
props(propertyName) = promotedAttribute;
end
end
end
end

function attributeNames = getSchemaDefinedAttributeNames(typeName, namespace)
persistent schemaAttributeNameCache

if isempty(schemaAttributeNameCache)
schemaAttributeNameCache = containers.Map('KeyType', 'char', 'ValueType', 'any');
end

cacheKey = strjoin({namespace.name, namespace.version, typeName}, '::');
if isKey(schemaAttributeNameCache, cacheKey)
attributeNames = schemaAttributeNameCache(cacheKey);
return;
end

typeSpec = namespace.getClass(typeName);
if isempty(typeSpec)
attributeNames = {};
return;
end

branch = [{typeSpec} namespace.getRootBranch(typeName)];
spec.internal.resolveInheritedFields(typeSpec, branch(2:end));
spec.internal.expandFieldsInheritedByInclusion(typeSpec);

switch typeSpec('class_type')
case 'groups'
classObj = file.Group(typeSpec);
case 'datasets'
classObj = file.Dataset(typeSpec);
otherwise
attributeNames = {};
return;
end

typeProps = classObj.getProps();
propNames = keys(typeProps);
isAttribute = cellfun(@(name) isa(typeProps(name), 'file.Attribute'), propNames);
attributeNames = propNames(isAttribute);
schemaAttributeNameCache(cacheKey) = attributeNames;
end

function class = patchVectorData(class)
%% Unit Attribute
% derived from schema 2.6.0
@@ -95,4 +171,4 @@
source('required') = false;

class.attributes(end+1) = file.Attribute(source);
end
end
34 changes: 26 additions & 8 deletions +tests/+system/UnitTimesIOTest.m
@@ -40,17 +40,35 @@ function addContainer(~, file)
, 'data', 1 ...
);

% set optional hidden vector data attributes
file.units.spike_times.resolution = 3;
Units = file.units;
[Units.waveform_mean.sampling_rate ...
, Units.waveform_sd.sampling_rate ...
, Units.waveforms.sampling_rate ...
] = deal(1);
% Set optional Units table dataset attributes via the promoted
% container API
file.units.spike_times_resolution = 3;
file.units.waveform_mean_sampling_rate = 1;
file.units.waveform_sd_sampling_rate = 1;

% Skip waveforms_sampling_rate because PyNWB does not export it.
% file.units.waveforms_sampling_rate = 1;
end

function c = getContainer(~, file)
c = file.units;
end
end
end

methods (Test)
function testLegacyNestedSpikeTimesResolutionIsPreserved(testCase)
spikeTimes = types.hdmf_common.VectorData( ...
'data', 11, ...
'description', 'the spike times for each unit in seconds');
spikeTimes.resolution = 1/20000;

units = types.core.Units( ...
'colnames', {'spike_times'}, ...
'description', 'data on spiking units', ...
'spike_times', spikeTimes);

testCase.verifyEqual(units.spike_times.resolution, 1/20000);
testCase.verifyEqual(units.spike_times_resolution, 1/20000);
end
end
end