
Issue #33: add executable onboarding and smoke-test scripts for the core repo#34

Open
01stamat wants to merge 3 commits into main from 01stamat/simplify-example-usage

Conversation

@01stamat

Changes:

  • Added a make target that builds GORILLA (after setting up the required libraries), runs a specified example, and plots the results. See also run_example.sh. Not sure if this is best practice.
  • Slightly restructured the README to highlight the new make target
  • Fixed a wrong path to the GORILLA executable in plotting_tutorial.ipynb, along with a few typos in markdown cells.

Questions:

@qodo-code-review

Review Summary by Qodo

Add executable example runner and improve documentation

✨ Enhancement


Walkthroughs

Description
• Added make example target to simplify running examples
• Created run_example.sh script for automated example execution
• Fixed executable path in plotting tutorial notebook
• Restructured README to prioritize examples section
• Corrected typos and path references in documentation
Diagram
flowchart LR
  A["Makefile"] -->|"adds example target"| B["make example EXAMPLE=N"]
  C["run_example.sh"] -->|"executes example"| B
  B -->|"builds & runs"| D["GORILLA executable"]
  D -->|"generates output"| E["Plot results"]
  F["plotting_tutorial.ipynb"] -->|"fixes paths"| D
  G["README.md"] -->|"restructures sections"| H["Examples first"]


File Changes

1. Makefile ✨ Enhancement +11/-2

Add make example target for automation

• Added example target to .PHONY declaration
• Implemented example target that validates EXAMPLE parameter and calls run_example.sh
• Target depends on build to ensure executable is available

2. run_example.sh ✨ Enhancement +40/-0

New script for automated example execution

• New executable script to run GORILLA examples with automatic validation
• Validates example directory existence and executable availability
• Executes example and runs corresponding Python plotting script
• Provides user-friendly error messages and usage instructions

3. PYTHON/plotting_tutorial.ipynb 🐞 Bug fix +4/-4

Fix executable paths and typos

• Fixed executable path from test_gorilla-main.x to test_gorilla_main.x (hyphen to underscore)
• Corrected path from BUILD/SRC/test_gorilla_main.x to BUILD/test_gorilla_main.x
• Fixed typo: "completly" to "completely"

4. README.md 📝 Documentation +46/-43

Restructure README with examples first

• Moved Examples section before Usage and Tutorial sections for better discoverability
• Added instructions for using make example EXAMPLE=<number> command
• Clarified manual execution instructions for examples
• Fixed typo: "REPROCESSING" to "PREPROCESSING" in path references
• Reorganized content structure to highlight quick-start approach


@qodo-code-review

qodo-code-review bot commented Apr 13, 2026

Code Review by Qodo

🐞 Bugs (1)   📘 Rule violations (0)   📎 Requirement gaps (3)   🖥 UI issues (0)   🎨 UX Issues (0)
🐞 Bugs: ⚙ Maintainability (1)
📎 Requirement gaps: ☼ Reliability (2), ⚙ Maintainability (1)



Action required

1. run_example.sh runs wrong binary 📎
Description
The script checks for BUILD/test_gorilla_main.x but then executes ./test_gorilla_main.x inside
the example directory without creating/linking it, so the documented make example EXAMPLE=<n>
bootstrap is likely to fail from a clean checkout. This violates the requirement that the single
entrypoint builds and runs an example to completion.
Code

run_example.sh[R21-30]

+# Check if executable exists
+if [ ! -f "BUILD/test_gorilla_main.x" ]; then
+    echo "Error: Executable BUILD/SRC/test_gorilla_main.x not found. Please run 'make build' first."
+    exit 1
+fi
+
+echo "Running example $EXAMPLE_NUM from $EXAMPLE_DIR"
+cd "$EXAMPLE_DIR"
+./test_gorilla_main.x
+
Evidence
PR Compliance ID 1 requires a single command that builds and runs an example to completion; the new
entrypoint (make example) invokes run_example.sh, which validates a build artifact in BUILD/
but then runs a different, unchecked path (./test_gorilla_main.x) inside EXAMPLES/example_<n>.
This mismatch means the onboarding entrypoint is not self-contained and can fail even when the build
succeeded.

Provide a single documented bootstrap entrypoint that builds the executable and runs at least one example
Makefile[29-35]
run_example.sh[21-30]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`run_example.sh` checks for `BUILD/test_gorilla_main.x` but then runs `./test_gorilla_main.x` within the example directory without ensuring that file exists.

## Issue Context
The README documents `make example EXAMPLE=<n>` as an onboarding bootstrap command, and the Makefile calls `bash run_example.sh $(EXAMPLE)`. For a clean checkout, the script must run the built executable reliably.

## Fix Focus Areas
- run_example.sh[21-36]
- Makefile[29-35]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
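One way to close the gap described above, sketched under the assumption of the PR's file layout (BUILD/test_gorilla_main.x, EXAMPLES/example_&lt;n&gt;): resolve the executable to an absolute path once, then invoke that same path from inside the example directory, so the file that was checked is the file that runs. This is a hypothetical helper, not the PR's code; the function and variable names are illustrative.

```shell
# Hypothetical helper, not the PR's script: run the *checked* binary by
# absolute path instead of a relative ./test_gorilla_main.x lookup.
run_example() {
    local example_num="$1"
    local repo_root="${REPO_ROOT:-$PWD}"     # assumed repo layout
    local exe="$repo_root/BUILD/test_gorilla_main.x"
    local example_dir="$repo_root/EXAMPLES/example_$example_num"

    if [ ! -x "$exe" ]; then
        echo "Error: $exe not found. Please run 'make build' first." >&2
        return 1
    fi
    if [ ! -d "$example_dir" ]; then
        echo "Error: $example_dir does not exist." >&2
        return 1
    fi
    # Subshell keeps the caller's working directory intact.
    ( cd "$example_dir" && "$exe" )
}
```

Because a single `$exe` variable feeds both the existence check and the invocation, the validated artifact and the executed binary can no longer diverge.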


2. No expected-output validation 📎
Description
run_example.sh does not check for expected orbit/output artifacts after the example run (or after
plotting), so it can report success without verifying outputs exist. This violates the requirement
that onboarding/smoke-test entrypoints explicitly detect missing outputs and fail nonzero.
Code

run_example.sh[R27-40]

+echo "Running example $EXAMPLE_NUM from $EXAMPLE_DIR"
+cd "$EXAMPLE_DIR"
+./test_gorilla_main.x
+
+if [ $? -eq 0 ]; then
+    echo "Example $EXAMPLE_NUM completed successfully"
+else
+    echo "Error: Example $EXAMPLE_NUM failed"
+    exit 1
+fi
+    
+echo "Plotting results for example $EXAMPLE_NUM from $EXAMPLE_DIR"
+cd ../../PYTHON
+python3 plot_example_$EXAMPLE_NUM.py
Evidence
PR Compliance ID 2/3 require smoke-test entrypoints to be self-checking by validating expected
outputs; the script only checks the executable exit code and then runs a plotting script, with no
explicit -f checks for expected .dat results or generated artifacts. The plotting script itself
is interactive (plt.show()) and does not create a file artifact to validate, weakening
determinism/automation suitability.

Onboarding/smoke-test entrypoints are deterministic and fail loudly (nonzero exit) when outputs are missing
Add one or two repo-native smoke-test example runners that build if needed and run a representative orbit example
run_example.sh[27-40]
PYTHON/plot_example_1.py[20-56]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new onboarding runner does not validate that the example produced the expected output files/artifacts, and the plotting step is not producing a deterministic artifact that can be checked.

## Issue Context
Compliance requires deterministic, self-checking smoke-test behavior that fails nonzero when outputs are missing. The current runner only checks the executable exit code and then launches interactive plotting.

## Fix Focus Areas
- run_example.sh[27-40]
- PYTHON/plot_example_1.py[20-56]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
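A hedged sketch of the self-checking behavior the reviewer asks for: after the executable exits, assert that every expected artifact exists and is non-empty, returning nonzero otherwise. The artifact file names below are placeholders; the real expected outputs would come from the example's input configuration.

```shell
# Hypothetical post-run validation, not the PR's code.
# Usage: check_outputs <dir> <expected-file>...
check_outputs() {
    local dir="$1"; shift
    local missing=0
    local f
    for f in "$@"; do
        if [ ! -s "$dir/$f" ]; then   # -s: exists and is non-empty
            echo "Error: expected output $dir/$f is missing or empty" >&2
            missing=1
        fi
    done
    return "$missing"
}

# In the runner, after the executable succeeds (file names assumed):
# check_outputs "$EXAMPLE_DIR" orbit_output.dat || exit 1
```

Reporting every missing file before failing (rather than stopping at the first) makes a broken run easier to diagnose from CI logs.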


3. Runner not under scripts/ 📎
Description
The added smoke-test runner is placed at repo root (run_example.sh) rather than under EXAMPLES/
or scripts/ as required. This does not meet the checklist’s location requirement for repo-native
smoke-test runners.
Code

run_example.sh[R1-5]

+#!/bin/bash
+
+# Script to run a GORILLA example
+# Usage: ./run_example.sh <example_number>
+
Evidence
PR Compliance ID 3 specifies that smoke-test runners must be added under EXAMPLES/ or scripts/;
the PR adds run_example.sh at the repository root and the Makefile references it there. This fails
the required structure for smoke-test runners.

Add one or two repo-native smoke-test example runners that build if needed and run a representative orbit example
run_example.sh[1-5]
Makefile[29-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The smoke-test/onboarding runner script is not located under `EXAMPLES/` or `scripts/`.

## Issue Context
The compliance checklist explicitly requires one or two smoke-test runners to live under `EXAMPLES/` or `scripts/`.

## Fix Focus Areas
- run_example.sh[1-40]
- Makefile[29-35]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
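If the checklist requires the runner under scripts/, a one-off relocation could look like the sketch below, assuming the PR's file names and GNU sed (in a git checkout, `git mv` would preserve history):

```shell
# Hypothetical relocation helper; run once from the repo root.
relocate_runner() {
    mkdir -p scripts
    mv run_example.sh scripts/run_example.sh    # or: git mv ...
    # Point the Makefile's example target at the new location.
    sed -i 's|bash run_example\.sh|bash scripts/run_example.sh|' Makefile
}
```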



Remediation recommended

4. Misleading executable path message 🐞
Description
When the executable is missing, run_example.sh reports BUILD/SRC/test_gorilla_main.x even though
it actually checks for BUILD/test_gorilla_main.x. This can mislead users into looking in the wrong
location and contradicts the repo’s CMake output path.
Code

run_example.sh[R21-24]

+# Check if executable exists
+if [ ! -f "BUILD/test_gorilla_main.x" ]; then
+    echo "Error: Executable BUILD/SRC/test_gorilla_main.x not found. Please run 'make build' first."
+    exit 1
Evidence
The script checks for BUILD/test_gorilla_main.x but prints an error pointing to
BUILD/SRC/test_gorilla_main.x. The top-level CMake config defines test_gorilla_main.x at the
project root scope (defaulting runtime output to the build directory root, i.e.,
BUILD/test_gorilla_main.x unless explicitly overridden).

run_example.sh[21-25]
CMakeLists.txt[36-43]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`run_example.sh` checks for `BUILD/test_gorilla_main.x` but the error message incorrectly references `BUILD/SRC/test_gorilla_main.x`.

## Issue Context
This appears to be an outdated path assumption and will confuse users when the build artifact is missing.

## Fix Focus Areas
- run_example.sh[21-25]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
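A small sketch of the defensive fix: keep the path in one variable used by both the test and the message, so the two cannot drift apart again. The function and default path are assumptions mirroring the PR's script.

```shell
# Hypothetical check: a single $exe variable feeds both the -f test and
# the error text, eliminating the BUILD/ vs BUILD/SRC/ mismatch.
check_executable() {
    local exe="${1:-BUILD/test_gorilla_main.x}"
    if [ ! -f "$exe" ]; then
        echo "Error: Executable $exe not found. Please run 'make build' first." >&2
        return 1
    fi
}
```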



@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Ubuntu-coverage

Failed stage: Set up job [❌]

Failed test name: ""

Failure summary:

The workflow failed during action preparation because it references the deprecated
actions/upload-artifact@v3. GitHub automatically rejects runs that use v3 of the
artifact actions (per the deprecation notice), so the job stops before executing
any workflow steps.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

26:  Discussions: write
27:  Issues: write
28:  Metadata: read
29:  Models: read
30:  Packages: write
31:  Pages: write
32:  PullRequests: write
33:  RepositoryProjects: write
34:  SecurityEvents: write
35:  Statuses: write
36:  ##[endgroup]
37:  Secret source: Actions
38:  Prepare workflow directory
39:  Prepare all required actions
40:  Getting action download info
41:  ##[error]This request has been automatically failed because it uses a deprecated version of `actions/upload-artifact: v3`. Learn more: https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/
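The corresponding fix is to bump the workflow to actions/upload-artifact@v4, together with any matching actions/download-artifact usage, since the v3 and v4 artifact formats do not interoperate. A hedged helper to rewrite workflow files in place (GNU sed assumed; the workflow file paths are whatever lives under .github/workflows/):

```shell
# Hypothetical bulk update of deprecated artifact actions to v4.
update_artifact_actions() {
    sed -i -E 's,(actions/(upload|download)-artifact)@v3,\1@v4,g' "$@"
}

# Usage sketch: update_artifact_actions .github/workflows/*.yml
```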

