test: stabilize local CI and expand GOROOT coverage#1721

Draft
cpunion wants to merge 48 commits into goplus:main from cpunion:wt/goroot-runner

Conversation

@cpunion (Collaborator) commented Mar 16, 2026

Summary

  • add a GOROOT-based compatibility runner with process timeout handling
  • expand runtime and compiler compatibility fixes found by GOROOT/test coverage
  • stabilize local CI and target/demo environment handling
  • add std internal mirror/overlay groundwork for reusing selected upstream packages

Validation

  • go test ./internal/build ./internal/packages -count=1
  • go test ./test/goroot -count=1
  • ./dev/local_ci.sh on darwin/arm64

cpunion added 30 commits March 16, 2026 09:45
@gemini-code-assist

Note

The number of changes in this pull request is too large for Gemini Code Assist to generate a summary.

@cpunion cpunion marked this pull request as draft March 16, 2026 03:38
// introduce dependencies on errors and internal/reflectlite packages that cause
// linking issues in c-shared and c-archive build modes.
SIGSEGV = c.Int(0xb)
SIGBUS = c.Int(0xa)

Bug: SIGBUS is 0x7 on Linux but 0xa on macOS/BSD. This constant 0xa is only correct for Darwin — on Linux, 0xa is SIGUSR1. This means:

  1. Actual bus errors on Linux won't be caught (signal 7 goes unhandled → process termination).
  2. SIGUSR1 on Linux will incorrectly trigger a nil-pointer-dereference panic.

This needs platform-conditional values or separate build-tagged files (similar to how SIGSEGV is noted as universal at 0xb).
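A minimal sketch of the platform split the comment asks for. The map is only for illustration — in the actual fix these values would live in build-tagged files (e.g. `sig_linux.go`, `sig_darwin.go`); the numbers follow Linux signal(7) and the BSD/macOS `sys/signal.h` values:

```go
package main

import (
	"fmt"
	"runtime"
)

// Hypothetical per-OS SIGBUS values. SIGSEGV is 0xb everywhere, but SIGBUS
// diverges: 0x7 on Linux (where 0xa is SIGUSR1), 0xa on macOS/BSD.
var sigbusByOS = map[string]int{
	"linux":   0x7,
	"darwin":  0xa,
	"freebsd": 0xa,
}

func main() {
	fmt.Printf("SIGBUS on %s would be %#x\n", runtime.GOOS, sigbusByOS[runtime.GOOS])
}
```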

ssa/type_cvt.go Outdated
@@ -72,7 +72,14 @@ func (p Program) Closure(sig *types.Signature) Type {
return p.rawType(closure)
}

func isOpaqueGoSSAType(typ types.Type) bool {
return fmt.Sprintf("%T", typ) == "*ssa.opaqueType"

Performance / Fragility: Using fmt.Sprintf("%T", typ) == "*ssa.opaqueType" for type detection is both expensive (allocates via reflection on every call) and fragile (breaks if the internal SSA package is renamed/restructured). This sits on the hot rawType / cvtType path, called for every type reference during compilation.

Consider using reflect.TypeOf(typ).String() with a cached expected value, or better yet, detect this via the type's behavior (e.g., checking if Underlying() returns itself or a sentinel).
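One way to avoid the per-call Sprintf, assuming the sentinel type is stable for the life of the process: cache its reflect.Type after the first string match, so subsequent checks are a single identity comparison. `opaqueType` below is a local stand-in for the unexported `*ssa.opaqueType`, and the lazy cache would need an atomic or sync.Once guard if called concurrently:

```go
package main

import (
	"fmt"
	"reflect"
)

// opaqueType stands in for x/tools' unexported *ssa.opaqueType.
type opaqueType struct{ name string }

// opaqueRT caches the sentinel's reflect.Type after the first match;
// reflect.Type values for the same type are identical, so == suffices.
var opaqueRT reflect.Type

func isOpaqueGoSSAType(typ interface{}) bool {
	rt := reflect.TypeOf(typ)
	if opaqueRT != nil {
		return rt == opaqueRT // fast path: pointer comparison, no allocation
	}
	if rt.String() == "*main.opaqueType" { // "*ssa.opaqueType" in the real code
		opaqueRT = rt
		return true
	}
	return false
}

func main() {
	fmt.Println(isOpaqueGoSSAType(&opaqueType{"closure"}))
	fmt.Println(isOpaqueGoSSAType("not opaque"))
}
```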

var th pthread.Thread
if rc := CreateThread(&th, nil, cleanupThread, nil); rc != 0 {
fatal("failed to create cleanup thread")
c.Exit(2)

Dead code: If fatal() terminates the process, c.Exit(2) is unreachable. If fatal() does not terminate, its name is misleading. Either way, this line should be clarified or removed. Same pattern appears in z_thread.go.

Comment on lines +100 to +105
if atomic.Load(&e.stop) != 1 {
if e.valueFn != nil {
e.valueFn(e.obj)
} else {
e.fn()
}

Robustness: cleanupThread calls user-provided cleanup functions (e.valueFn / e.fn) without any panic recovery. If a cleanup function panics, this thread dies permanently and no further cleanups/finalizers will ever run. The Go runtime wraps its finalizer goroutine in defer/recover for this reason. Consider adding similar protection here.
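A sketch of the suggested protection, assuming a loop shaped like the PR's cleanupThread (the `entry` type and `runOne` helper are hypothetical): each cleanup runs inside its own function so a deferred recover can swallow a panic without killing the thread.

```go
package main

import "fmt"

// entry is a hypothetical stand-in for the PR's queued cleanup record.
type entry struct{ fn func() }

// runOne executes a single user cleanup behind defer/recover, mirroring
// how the Go runtime shields its finalizer goroutine.
func runOne(e entry) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("cleanup panicked:", r) // log and keep processing
		}
	}()
	e.fn()
}

func main() {
	queue := []entry{
		{fn: func() { panic("bad cleanup") }},
		{fn: func() { fmt.Println("next cleanup still runs") }},
	}
	for _, e := range queue {
		runOne(e)
	}
}
```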

Comment on lines +136 to +137
if e.copySize == 0 {
e.obj = ptr

Potential use-after-free: When copySize == 0, e.obj = ptr stores the pointer to the object being finalized. After finalizer() returns, bdwgc may reclaim ptr. But e.obj is later dereferenced asynchronously in cleanupThread via e.valueFn(e.obj). If the GC collects the object between enqueue and dequeue, this is a use-after-free. The copySize > 0 path correctly snapshots the data — should the copySize == 0 path do the same, or document why the pointer remains valid?

clitedebug "github.com/goplus/llgo/runtime/internal/clite/debug"
psync "github.com/goplus/llgo/runtime/internal/clite/pthread/sync"
"github.com/goplus/llgo/runtime/internal/clite/sync/atomic"
_ "unsafe"

Nit: _ "unsafe" (blank import) is redundant since "unsafe" is already directly imported on line 6.

Comment on lines +40 to +45
func memProfileEnabled() bool {
if memProfileRatePtr == nil {
return false
}
rate := *memProfileRatePtr
return rate > 0 && rate <= 64*1024 && atomic.Load(&memProfileSnapshotting) == 0

Data race: memProfileRatePtr is a *int read here without synchronization — *memProfileRatePtr could race with SetMemProfileRatePtr writing the pointer from another goroutine. The pointer itself should be stored/loaded atomically (e.g., atomic.Pointer[int]), or at minimum the read of the rate value should use atomic.LoadInt32/atomic.LoadInt64 if the rate itself can change concurrently.
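A sketch of the atomic.Pointer approach, with names mirroring the review comment (they are assumptions about the PR's internals, and the `memProfileSnapshotting` check is omitted for brevity):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// memProfileRatePtr is published through atomic.Pointer (Go 1.19+), so
// readers never race with SetMemProfileRatePtr storing a new pointer.
var memProfileRatePtr atomic.Pointer[int64]

func SetMemProfileRatePtr(p *int64) { memProfileRatePtr.Store(p) }

func memProfileEnabled() bool {
	p := memProfileRatePtr.Load()
	if p == nil {
		return false
	}
	rate := atomic.LoadInt64(p) // the rate value itself may change concurrently
	return rate > 0 && rate <= 64*1024
}

func main() {
	fmt.Println(memProfileEnabled()) // no pointer published yet
	rate := int64(4096)
	SetMemProfileRatePtr(&rate)
	fmt.Println(memProfileEnabled())
}
```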

Comment on lines +106 to +113
for p := memProfileSites; p != nil; p = p.next {
if equalMemProfileStack(p.stack, p.nstk, candidate.stack, candidate.nstk) {
p.allocObjects++
p.allocBytes += int64(size)
memProfileMu.Unlock()
c.Free(unsafe.Pointer(candidate))
return
}

Performance: recordMemProfileAlloc does an O(N) linear scan of the entire profile site linked list under a global mutex on every sampled allocation. The Go runtime uses a hash table for this. As distinct allocation sites grow, this will become a bottleneck under high allocation rates. A hash map keyed by stack hash would give O(1) amortized lookup.
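A sketch of the hash-keyed lookup, assuming a `site` record shaped like the PR's profile entries (in real code a hash collision would still require an equality check on the stored stack before merging counts):

```go
package main

import (
	"fmt"
	"hash/maphash"
	"unsafe"
)

// site is a hypothetical stand-in for the PR's per-callsite profile record.
type site struct {
	stack        []uintptr
	allocObjects int64
	allocBytes   int64
}

var (
	seed  = maphash.MakeSeed()
	sites = map[uint64]*site{} // O(1) amortized, vs. the O(N) list scan
)

// stackHash hashes the raw PC slice bytes with hash/maphash.
func stackHash(stk []uintptr) uint64 {
	var h maphash.Hash
	h.SetSeed(seed)
	h.Write(unsafe.Slice((*byte)(unsafe.Pointer(&stk[0])),
		len(stk)*int(unsafe.Sizeof(uintptr(0)))))
	return h.Sum64()
}

func record(stk []uintptr, size int64) {
	key := stackHash(stk)
	s, ok := sites[key]
	if !ok {
		s = &site{stack: stk}
		sites[key] = s
	}
	s.allocObjects++
	s.allocBytes += size
}

func main() {
	stk := []uintptr{0x100, 0x200}
	record(stk, 64)
	record(stk, 64)
	s := sites[stackHash(stk)]
	fmt.Println(s.allocObjects, s.allocBytes)
}
```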


xgopilot bot commented Mar 16, 2026

Review Summary

Impressive scope — this PR adds a solid GOROOT compatibility runner, expands runtime coverage (finalizers, cleanup threads, memory profiling, panic/recover rework), and lays groundwork for std internal mirrors. The compiler and SSA changes are generally well-structured.

Key items to address:

  1. Bug: SIGBUS constant (0xa) is only correct on macOS; on Linux it should be 0x7. This will silently mishandle bus errors on Linux.
  2. Safety: cleanupThread lacks panic recovery — a single panicking cleanup permanently kills all future finalizer processing.
  3. Memory safety: The copySize == 0 finalizer path may store a pointer to an object the GC can reclaim before the cleanup thread processes it.
  4. Data race: memProfileRatePtr is read without atomic synchronization.
  5. Performance/Fragility: isOpaqueGoSSAType uses fmt.Sprintf("%T") string comparison on a hot path.

See inline comments for details.
