Refactor ideas in order:
1. Add a module between the end of instance evaluation and the PanopticaResult that actually handles metric computation, so that the PanopticaResult object becomes a purely result-keeping object.
2. Refactor the result objects for each metric type (global, instance, region-wise, autc) so that they are easily expandable and still understandable by the user (something like result.global.dice vs. result.autc.dice).
3. Refactor the whole codebase in terms of folder/file structure: create a folder for each pipeline phase, sort the utils better, ...
4. Large refactor of the whole pipeline, switching to a metric-focused approach (i.e., loop over each metric the user wants, compute it, and save all intermediate steps so that other metrics can reuse them instead of recomputing everything anew).
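The grouped result objects from idea 2 could be sketched as nested dataclasses. This is only an illustrative mock-up, not the actual panoptica API; note that `global` is a reserved word in Python, so attribute-style access would need a variant such as `global_` (or a different group name).

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of grouped result objects; names are illustrative.
@dataclass
class MetricGroup:
    """Holds the metrics of one group (global, instance, autc, ...)."""
    dice: Optional[float] = None
    iou: Optional[float] = None

@dataclass
class PanopticaResult:
    # "global" is a Python keyword, hence the trailing underscore here.
    global_: MetricGroup = field(default_factory=MetricGroup)
    instance: MetricGroup = field(default_factory=MetricGroup)
    autc: MetricGroup = field(default_factory=MetricGroup)

result = PanopticaResult()
result.global_.dice = 0.91
result.autc.dice = 0.87
print(result.global_.dice, result.autc.dice)  # 0.91 0.87
```

Adding a new metric type would then only require one new `MetricGroup` field, keeping the user-facing access pattern uniform.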
Speed improvements?
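The metric-focused approach from idea 4 (and part of the speed question) can be sketched as a loop over the requested metrics with memoized intermediate steps. The class and cache-key names below are assumptions for illustration, not panoptica's implementation; dice and IoU are used because they share the intersection as an intermediate.

```python
import numpy as np

class MetricPipeline:
    """Minimal sketch: loop over requested metrics and memoize
    intermediate steps (e.g. the intersection) so that later metrics
    reuse them instead of computing everything anew."""

    def __init__(self, pred: np.ndarray, ref: np.ndarray):
        self.pred = pred.astype(bool)
        self.ref = ref.astype(bool)
        self._cache = {}  # intermediate-step name -> cached value

    def _get(self, name: str) -> float:
        # Compute each intermediate at most once.
        if name not in self._cache:
            if name == "intersection":
                self._cache[name] = float(np.logical_and(self.pred, self.ref).sum())
            elif name == "sum":
                self._cache[name] = float(self.pred.sum() + self.ref.sum())
        return self._cache[name]

    def compute(self, metrics):
        results = {}
        for metric in metrics:
            if metric == "dice":
                results[metric] = 2 * self._get("intersection") / self._get("sum")
            elif metric == "iou":
                inter = self._get("intersection")
                results[metric] = inter / (self._get("sum") - inter)
        return results

pipeline = MetricPipeline(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]))
print(pipeline.compute(["dice", "iou"]))  # {'dice': 0.5, 'iou': 0.333...}
```

With this design, asking for both dice and IoU computes the intersection only once, which is where most of the potential speed-up would come from.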