ci: gate example runs until all checks pass #1998

tech0priyanshu wants to merge 1 commit into hiero-ledger:main from
Conversation
**Codecov Report** ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main    #1998   +/-  ##
=======================================
  Coverage   93.66%   93.66%
=======================================
  Files         144      144
  Lines        9348     9348
=======================================
  Hits         8756     8756
  Misses        592      592
```
```yaml
statuses: read
pull-requests: read
```

```yaml
on:
```

You should be able to do e.g. `.github/workflows/bot-workflows.yml` / `.github/workflows/pr-check-test.yml`
**Walkthrough**

A gating mechanism is added to the example-checking workflow, introducing a new script that validates PR context (required status checks, changed files) before triggering example execution. The workflow trigger is updated from `push`/`pull_request` to `check_suite` completion so examples run only after upstream checks pass, combined with `workflow_dispatch` support for manual triggers.

Changes
**Sequence Diagram**

```mermaid
sequenceDiagram
    actor GH as GitHub
    participant Gate as Gate Job
    participant Script as Gate Script
    participant API as GitHub API
    participant RunEx as Run-Examples Job

    GH->>Gate: Trigger (check_suite completed)
    activate Gate
    Gate->>Script: Execute gating logic
    activate Script
    Script->>API: Fetch check runs & combined status
    API-->>Script: Check results
    Script->>API: Fetch PR changed files
    API-->>Script: File changes
    Script->>Script: Validate required checks passed
    Script->>Script: Evaluate runtime-relevant changes
    Script-->>Gate: Output (should_run, head_sha)
    deactivate Script
    Gate-->>GH: Set outputs
    deactivate Gate

    alt should_run == 'true'
        GH->>RunEx: Trigger with gate outputs
        activate RunEx
        RunEx->>RunEx: Checkout at head_sha
        RunEx->>RunEx: Setup & run examples
        RunEx-->>GH: Report results
        deactivate RunEx
    else should_run == 'false'
        GH->>GH: Skip Run-Examples Job
    end
```
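The `should_run` branch in the diagram corresponds to a job-level `if:` on the downstream job that reads the gate job's outputs. A minimal sketch of that wiring, assuming hypothetical job and step ids (`gate`, `run-examples`, `checks-gate`), not necessarily the PR's exact YAML:

```yaml
jobs:
  gate:
    runs-on: ubuntu-latest
    # Job outputs are forwarded from the gate step's core.setOutput calls.
    outputs:
      should_run: ${{ steps.checks-gate.outputs.should_run }}
      head_sha: ${{ steps.checks-gate.outputs.head_sha }}
    steps:
      - id: checks-gate
        run: echo "placeholder"  # real step runs the gating script

  run-examples:
    needs: gate
    # Skipped entirely unless the gate said to run.
    if: needs.gate.outputs.should_run == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ needs.gate.outputs.head_sha }}  # check out the gated commit
      # ... setup & run examples ...
```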
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes. Pre-merge checks: ✅ 5 passed.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/pr-check-examples.yml (1)
158-162: 🧹 Nitpick | 🔵 Trivial — Add version comment to `actions/checkout` action.

The `actions/checkout` action is pinned to SHA `8e8c483db84b4bee98b60c0593521ed34d9990e8` without a version comment. All other third-party actions in this workflow include version comments (e.g., `# v2.15.1`, `# v8.0.0`, `# v7.5.0`) for clarity and maintainability. Add a version comment to match this pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 0d402535-d5fe-4663-83ba-ca97c3173d7f
📒 Files selected for processing (3)
- `.github/scripts/pr-check-examples-gate.js`
- `.github/workflows/pr-check-examples.yml`
- `CHANGELOG.md`
```javascript
module.exports = async ({ github, context, core }) => {
  const isManualRun = context.eventName === "workflow_dispatch";
  const headSha = isManualRun ? context.sha : context.payload.check_suite.head_sha;
  core.setOutput("head_sha", headSha);

  if (isManualRun) {
    core.info("Manual dispatch: bypassing status gate.");
    core.setOutput("should_run", "true");
    return;
  }
```
Script file is not used by the workflow.
This script file exists but the workflow at .github/workflows/pr-check-examples.yml has the same logic duplicated inline in the actions/github-script step (lines 29-146). Either remove this file or update the workflow to use it via the script property pointing to this file.
💡 Option to use this script file in the workflow
In the workflow YAML, replace the inline script with:
```yaml
- name: Check required PR statuses
  id: checks-gate
  uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
  with:
    script: |
      const gate = require('./.github/scripts/pr-check-examples-gate.js');
      await gate({ github, context, core });
```

This requires checking out the repository first, so add a checkout step before the gate check.
As per coding guidelines: "Non-trivial logic belongs in dedicated scripts under .github/scripts/, keeping the workflow YAML focused on orchestration."
```javascript
module.exports = async ({ github, context, core }) => {
  const isManualRun = context.eventName === "workflow_dispatch";
  const headSha = isManualRun ? context.sha : context.payload.check_suite.head_sha;
  core.setOutput("head_sha", headSha);

  if (isManualRun) {
    core.info("Manual dispatch: bypassing status gate.");
    core.setOutput("should_run", "true");
    return;
  }

  if (context.payload.check_suite.conclusion !== "success") {
    core.info(
      `Triggering check suite concluded as ${context.payload.check_suite.conclusion}; skipping examples.`
    );
    core.setOutput("should_run", "false");
    return;
  }

  if (
    !context.payload.check_suite.pull_requests ||
    context.payload.check_suite.pull_requests.length === 0
  ) {
    core.info("No pull request is associated with this check suite event; skipping examples.");
    core.setOutput("should_run", "false");
    return;
  }

  const owner = context.repo.owner;
  const repo = context.repo.repo;

  const checkRuns = await github.paginate(github.rest.checks.listForRef, {
    owner,
    repo,
    ref: headSha,
    per_page: 100,
  });

  const combinedStatus = await github.rest.repos.getCombinedStatusForRef({
    owner,
    repo,
    ref: headSha,
  });

  const requiredChecks = [
    { label: "Codacy Static Code Analysis", pattern: /^Codacy Static Code Analysis$/i },
    { label: "Code Coverage / coverage (pull_request)", pattern: /^coverage( \(pull_request\))?$/i },
    { label: "DCO", pattern: /^DCO$/i },
    {
      label: "PR Check – Broken Markdown Links / pr-check-broken-links (pull_request)",
      pattern: /^pr-check-broken-links( \(pull_request\))?$/i,
    },
    {
      label: "PR Changelog Check",
      pattern: /^(PR Changelog Check|changelog-check)( \(pull_request\))?$/i,
    },
    { label: "StepSecurity Harden-Runner", pattern: /^StepSecurity Harden-Runner$/i },
    { label: "StepSecurity Required Checks", pattern: /^StepSecurity Required Checks$/i },
  ];

  const requiredStatuses = ["codecov/patch", "codecov/project"];
  const missingOrFailed = [];

  for (const required of requiredChecks) {
    const matchingRuns = checkRuns.filter((run) => required.pattern.test(run.name));
    if (matchingRuns.length === 0) {
      missingOrFailed.push(`${required.label} (missing)`);
      continue;
    }

    const hasSuccess = matchingRuns.some((run) => run.conclusion === "success");
    if (!hasSuccess) {
      const conclusions = [
        ...new Set(matchingRuns.map((run) => run.conclusion || "pending")),
      ].join(", ");
      missingOrFailed.push(`${required.label} (${conclusions})`);
    }
  }

  for (const contextName of requiredStatuses) {
    const status = combinedStatus.data.statuses.find((item) => item.context === contextName);
    if (!status) {
      missingOrFailed.push(`${contextName} (missing)`);
      continue;
    }

    if (status.state !== "success") {
      missingOrFailed.push(`${contextName} (${status.state})`);
    }
  }

  if (missingOrFailed.length > 0) {
    core.info("Skipping examples: required checks are not all successful yet.");
    core.info(missingOrFailed.join("\n"));
    core.setOutput("should_run", "false");
    return;
  }

  const prNumber = context.payload.check_suite.pull_requests[0].number;
  const changedFiles = await github.paginate(github.rest.pulls.listFiles, {
    owner,
    repo,
    pull_number: prNumber,
    per_page: 100,
  });

  const runRelevantPatterns = [
    /^src\/.*\.(py|pyi)$/i,
    /^examples\/.*\.py$/i,
    /^tests\/.*\.py$/i,
    /^tck\/.*\.py$/i,
    /^scripts\/.*\.py$/i,
    /^generate_proto\.py$/i,
    /^pyproject\.toml$/i,
    /^uv\.lock$/i,
  ];

  const shouldRunForChanges = changedFiles.some((file) =>
    runRelevantPatterns.some((pattern) => pattern.test(file.filename))
  );

  if (!shouldRunForChanges) {
    const changedFileNames = changedFiles.map((file) => file.filename);
    core.info("Skipping examples: no runtime-relevant Python files changed in this PR.");
    core.info(`Changed files: ${changedFileNames.join(", ")}`);
    core.setOutput("should_run", "false");
    return;
  }

  core.info("All required checks are successful. Running examples.");
  core.setOutput("should_run", "true");
};
```
🛠️ Refactor suggestion | 🟠 Major
Wrap API calls in try/catch for contextual error handling.
The script makes several GitHub API calls (github.paginate, github.rest.repos.getCombinedStatusForRef) without error handling. Per coding guidelines, async operations should be wrapped in try/catch with contextual errors to aid debugging when API calls fail.
🛡️ Proposed fix to add error handling

```diff
 module.exports = async ({ github, context, core }) => {
+  try {
     const isManualRun = context.eventName === "workflow_dispatch";
     const headSha = isManualRun ? context.sha : context.payload.check_suite.head_sha;
     core.setOutput("head_sha", headSha);
     // ... rest of the function ...
     core.info("All required checks are successful. Running examples.");
     core.setOutput("should_run", "true");
+  } catch (error) {
+    core.setFailed(`Gate script failed: ${error.message}`);
+  }
 };
```

As per coding guidelines: "Wrap async operations in try/catch with contextual errors."
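As a sanity check that this pattern reports failures through `core.setFailed` instead of an unhandled rejection, here is a runnable sketch. The `core` and `github` objects are hand-rolled mocks (assumptions standing in for what `actions/github-script` injects), and the error message is illustrative:

```javascript
// Mock of the `core` helper injected by actions/github-script (an assumption).
const core = {
  failed: null,
  outputs: {},
  setOutput(key, value) { this.outputs[key] = value; },
  setFailed(message) { this.failed = message; },
  info() {},
};

// Simplified gate: only the try/catch shape matters here.
async function gate({ github, core }) {
  try {
    // A throwing API call (e.g. a rate-limited github.paginate) is caught
    // and surfaced with context instead of crashing the step opaquely.
    await github.paginate();
    core.setOutput("should_run", "true");
  } catch (error) {
    core.setFailed(`Gate script failed: ${error.message}`);
  }
}

// Mock GitHub client whose paginate call always fails.
const failingGithub = {
  paginate: async () => { throw new Error("API rate limited"); },
};

gate({ github: failingGithub, core }).then(() => {
  console.log(core.failed); // "Gate script failed: API rate limited"
});
```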
```javascript
const requiredChecks = [
  { label: "Codacy Static Code Analysis", pattern: /^Codacy Static Code Analysis$/i },
  { label: "Code Coverage / coverage (pull_request)", pattern: /^coverage( \(pull_request\))?$/i },
  { label: "DCO", pattern: /^DCO$/i },
  {
    label: "PR Check – Broken Markdown Links / pr-check-broken-links (pull_request)",
    pattern: /^pr-check-broken-links( \(pull_request\))?$/i,
  },
  {
    label: "PR Changelog Check",
    pattern: /^(PR Changelog Check|changelog-check)( \(pull_request\))?$/i,
  },
  { label: "StepSecurity Harden-Runner", pattern: /^StepSecurity Harden-Runner$/i },
  { label: "StepSecurity Required Checks", pattern: /^StepSecurity Required Checks$/i },
];
```
🧹 Nitpick | 🔵 Trivial
Move configuration arrays to top-level constants.
requiredChecks and runRelevantPatterns are configuration values that would be easier to maintain as top-level constants outside the function body.
♻️ Proposed refactor

```diff
+const REQUIRED_CHECKS = [
+  { label: "Codacy Static Code Analysis", pattern: /^Codacy Static Code Analysis$/i },
+  { label: "Code Coverage / coverage (pull_request)", pattern: /^coverage( \(pull_request\))?$/i },
+  { label: "DCO", pattern: /^DCO$/i },
+  {
+    label: "PR Check – Broken Markdown Links / pr-check-broken-links (pull_request)",
+    pattern: /^pr-check-broken-links( \(pull_request\))?$/i,
+  },
+  {
+    label: "PR Changelog Check",
+    pattern: /^(PR Changelog Check|changelog-check)( \(pull_request\))?$/i,
+  },
+  { label: "StepSecurity Harden-Runner", pattern: /^StepSecurity Harden-Runner$/i },
+  { label: "StepSecurity Required Checks", pattern: /^StepSecurity Required Checks$/i },
+];
+
+const REQUIRED_STATUSES = ["codecov/patch", "codecov/project"];
+
+const RUN_RELEVANT_PATTERNS = [
+  /^src\/.*\.(py|pyi)$/i,
+  /^examples\/.*\.py$/i,
+  /^tests\/.*\.py$/i,
+  /^tck\/.*\.py$/i,
+  /^scripts\/.*\.py$/i,
+  /^generate_proto\.py$/i,
+  /^pyproject\.toml$/i,
+  /^uv\.lock$/i,
+];
+
 module.exports = async ({ github, context, core }) => {
```

As per coding guidelines: "Use top-level constants for configuration — avoid hardcoded values scattered through the script."
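Note the optional `( \(pull_request\))?` suffix in several of these patterns: it lets one entry match both the bare check name and the event-suffixed form that GitHub sometimes reports. A small runnable illustration (the sample check-run names are assumptions, not guaranteed to match real runs):

```javascript
// The coverage pattern from the script, exercised against plausible names.
const coveragePattern = /^coverage( \(pull_request\))?$/i;

console.log(coveragePattern.test("coverage"));                // true
console.log(coveragePattern.test("Coverage (pull_request)")); // true — /i makes it case-insensitive
console.log(coveragePattern.test("Code Coverage"));           // false — anchored at ^
```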
```diff
 on:
-  push:
-    branches:
-      - "**"
-  pull_request:
+  check_suite:
+    types: [completed]
+  workflow_dispatch:
```
🧹 Nitpick | 🔵 Trivial
Gate job may run excessively due to unfiltered check_suite trigger.
The check_suite: [completed] trigger fires for every check suite that completes on the repository. For a typical PR with 7+ required checks, this gate job will run 7+ times before the final run actually proceeds to examples. While the gate logic correctly prevents premature example execution, each gate run consumes CI minutes.
Consider filtering the trigger to reduce unnecessary runs, such as triggering only on the last expected check suite or using a different approach.
One alternative approach would be to use workflow_run triggered by specific workflows completing, though this has its own trade-offs. The current approach is functionally correct but may not fully achieve the CI minutes reduction goal from issue #1949.
```yaml
script: |
  const isManualRun = context.eventName === "workflow_dispatch";
  const headSha = isManualRun ? context.sha : context.payload.check_suite.head_sha;
  core.setOutput("head_sha", headSha);

  if (isManualRun) {
    core.info("Manual dispatch: bypassing status gate.");
    core.setOutput("should_run", "true");
    return;
  }

  if (context.payload.check_suite.conclusion !== "success") {
    core.info(`Triggering check suite concluded as ${context.payload.check_suite.conclusion}; skipping examples.`);
    core.setOutput("should_run", "false");
    return;
  }

  if (!context.payload.check_suite.pull_requests || context.payload.check_suite.pull_requests.length === 0) {
    core.info("No pull request is associated with this check suite event; skipping examples.");
    core.setOutput("should_run", "false");
    return;
  }

  const owner = context.repo.owner;
  const repo = context.repo.repo;

  const checkRuns = await github.paginate(github.rest.checks.listForRef, {
    owner,
    repo,
    ref: headSha,
    per_page: 100,
  });

  const combinedStatus = await github.rest.repos.getCombinedStatusForRef({
    owner,
    repo,
    ref: headSha,
  });

  const requiredChecks = [
    { label: "Codacy Static Code Analysis", pattern: /^Codacy Static Code Analysis$/i },
    { label: "Code Coverage / coverage (pull_request)", pattern: /^coverage( \(pull_request\))?$/i },
    { label: "DCO", pattern: /^DCO$/i },
    { label: "PR Check – Broken Markdown Links / pr-check-broken-links (pull_request)", pattern: /^pr-check-broken-links( \(pull_request\))?$/i },
    { label: "PR Changelog Check", pattern: /^(PR Changelog Check|changelog-check)( \(pull_request\))?$/i },
    { label: "StepSecurity Harden-Runner", pattern: /^StepSecurity Harden-Runner$/i },
    { label: "StepSecurity Required Checks", pattern: /^StepSecurity Required Checks$/i },
  ];

  const requiredStatuses = ["codecov/patch", "codecov/project"];

  const missingOrFailed = [];

  for (const required of requiredChecks) {
    const matchingRuns = checkRuns.filter((run) => required.pattern.test(run.name));
    if (matchingRuns.length === 0) {
      missingOrFailed.push(`${required.label} (missing)`);
      continue;
    }

    const hasSuccess = matchingRuns.some((run) => run.conclusion === "success");
    if (!hasSuccess) {
      const conclusions = [...new Set(matchingRuns.map((run) => run.conclusion || "pending"))].join(", ");
      missingOrFailed.push(`${required.label} (${conclusions})`);
    }
  }

  for (const contextName of requiredStatuses) {
    const status = combinedStatus.data.statuses.find((item) => item.context === contextName);
    if (!status) {
      missingOrFailed.push(`${contextName} (missing)`);
      continue;
    }
    if (status.state !== "success") {
      missingOrFailed.push(`${contextName} (${status.state})`);
    }
  }

  if (missingOrFailed.length > 0) {
    core.info("Skipping examples: required checks are not all successful yet.");
    core.info(missingOrFailed.join("\n"));
    core.setOutput("should_run", "false");
    return;
  }

  const prNumber = context.payload.check_suite.pull_requests[0].number;
  const changedFiles = await github.paginate(github.rest.pulls.listFiles, {
    owner,
    repo,
    pull_number: prNumber,
    per_page: 100,
  });

  const runRelevantPatterns = [
    /^src\/.*\.(py|pyi)$/i,
    /^examples\/.*\.py$/i,
    /^tests\/.*\.py$/i,
    /^tck\/.*\.py$/i,
    /^scripts\/.*\.py$/i,
    /^generate_proto\.py$/i,
    /^pyproject\.toml$/i,
    /^uv\.lock$/i,
  ];

  const shouldRunForChanges = changedFiles.some((file) =>
    runRelevantPatterns.some((pattern) => pattern.test(file.filename))
  );

  if (!shouldRunForChanges) {
    const changedFileNames = changedFiles.map((file) => file.filename);
    core.info("Skipping examples: no runtime-relevant Python files changed in this PR.");
    core.info(`Changed files: ${changedFileNames.join(", ")}`);
    core.setOutput("should_run", "false");
    return;
  }

  core.info("All required checks are successful. Running examples.");
  core.setOutput("should_run", "true");
```
Move inline script logic to the external script file.
This inline script (~115 lines) duplicates the logic in .github/scripts/pr-check-examples-gate.js. Per coding guidelines, non-trivial logic should be in dedicated scripts under .github/scripts/, keeping workflow YAML focused on orchestration.
To use the external script, add a checkout step before the gate and reference the script:
♻️ Proposed refactor to use external script

```diff
 gate:
   runs-on: ubuntu-latest
   outputs:
     should_run: ${{ steps.checks-gate.outputs.should_run }}
     head_sha: ${{ steps.checks-gate.outputs.head_sha }}
   steps:
     - name: Harden the runner (Audit all outbound calls)
       uses: step-security/harden-runner@58077d3c7e43986b6b15fba718e8ea69e387dfcc # v2.15.1
       with:
         egress-policy: audit
+    - name: Checkout repository
+      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+      with:
+        sparse-checkout: .github/scripts
+        sparse-checkout-cone-mode: false
+
     - name: Check required PR statuses
       id: checks-gate
       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
       with:
         script: |
-          const isManualRun = context.eventName === "workflow_dispatch";
-          // ... 115 lines of inline code ...
-          core.setOutput("should_run", "true");
+          const gate = require('./.github/scripts/pr-check-examples-gate.js');
+          await gate({ github, context, core });
```

As per coding guidelines: "Non-trivial logic belongs in dedicated scripts under `.github/scripts/`, keeping the workflow YAML focused on orchestration."
Pull request overview
This PR optimizes the CI “Run Examples” workflow by gating example execution so it only runs when prerequisite checks are successful and when PR changes are relevant to runtime Python behavior, aiming to reduce CI minutes (Fixes #1949).
Changes:

- Switch the `Run Examples` workflow trigger to `check_suite: completed` plus manual dispatch, and add a `gate` job that evaluates prerequisite check/status success.
- Add PR file-change filtering so examples are skipped when no runtime-relevant Python files changed.
- Add a new gate script file and a changelog entry documenting the CI change.
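The file-change filtering mentioned above boils down to an `Array.some` over path regexes. A small self-contained sketch using a subset of the gate script's patterns (the sample file lists are illustrative assumptions):

```javascript
// Subset of the gate script's runtime-relevant path patterns.
const runRelevantPatterns = [
  /^src\/.*\.(py|pyi)$/i,
  /^examples\/.*\.py$/i,
  /^tests\/.*\.py$/i,
  /^pyproject\.toml$/i,
];

// A docs-only PR should be skipped; a PR touching src/ should run examples.
const docsOnly = ["docs/intro.md", "CHANGELOG.md"];
const withSrc = [...docsOnly, "src/client.py"];

// True if any changed file matches any runtime-relevant pattern.
const shouldRun = (files) =>
  files.some((name) => runRelevantPatterns.some((p) => p.test(name)));

console.log(shouldRun(docsOnly)); // false
console.log(shouldRun(withSrc));  // true
```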
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `CHANGELOG.md` | Adds a changelog entry for the CI gating change. |
| `.github/workflows/pr-check-examples.yml` | Implements the gate job, check/status requirements, and SHA-based checkout before running examples. |
| `.github/scripts/pr-check-examples-gate.js` | Adds a reusable JS implementation of the gating logic (currently not wired into the workflow). |
Force-pushed from `fb6eee0` to `5581f6a`.
**exploreriii** left a comment:
Hi @tech0priyanshu,

The issue was to optimize the examples. Reducing the triggers is one aspect, but there are lots of other ways you can optimize it (so it shouldn't just be an example guard file).

Additionally, I notice you have manually constructed all the triggers for this, which could make it quite hard to maintain. Can you use more of GitHub's existing logic? I pointed you to some helpful files that may be able to help.

e.g. would something like this work?
```yaml
on:
  workflow_run:
    workflows:
      - "PR Check – Tests"
      - "Bot Workflows"
    types: [completed]
```
Signed-off-by: tech0priyanshu <priyanshuyadv101106@gmail.com>
Force-pushed from `5581f6a` to `4511e6b`.
**exploreriii** left a comment:
Study
https://github.com/hiero-ledger/hiero-sdk-python/blob/main/.github/workflows/pr-check-test.yml
and
https://github.com/hiero-ledger/hiero-sdk-python/blob/main/.github/workflows/bot-workflows.yml
to see if GitHub already supports these triggers for you, vastly reducing the lines of code.
Hello, this is the OfficeHourBot. This is a reminder that the Hiero Python SDK Office Hours are scheduled in approximately 4 hours (14:00 UTC). This session provides an opportunity to ask questions regarding this Pull Request. Details:

Disclaimer: This is an automated reminder. Please verify the schedule here for any changes. From,
|
Hi @tech0priyanshu, This pull request has had no commit activity for 10 days. Are you still working on it?
If you're no longer working on this, please comment Reach out on discord or join our office hours if you need assistance. From the Python SDK Team |
Description:
Related issue(s):
Fixes #1949
Notes for reviewer:
Checklist