test: stabilize local CI and expand GOROOT coverage #1721

cpunion wants to merge 48 commits into goplus:main from
Conversation
Note: The number of changes in this pull request is too large for Gemini Code Assist to generate a summary.
runtime/internal/runtime/z_signal.go
Outdated
```go
// introduce dependencies on errors and internal/reflectlite packages that cause
// linking issues in c-shared and c-archive build modes.
SIGSEGV = c.Int(0xb)
SIGBUS  = c.Int(0xa)
```
Bug: SIGBUS is 0x7 on Linux but 0xa on macOS/BSD. This constant 0xa is only correct for Darwin — on Linux, 0xa is SIGUSR1. This means:
- Actual bus errors on Linux won't be caught (signal 7 goes unhandled → process termination).
- `SIGUSR1` on Linux will incorrectly trigger a nil-pointer-dereference panic.
This needs platform-conditional values or separate build-tagged files (similar to how SIGSEGV is noted as universal at 0xb).
ssa/type_cvt.go
Outdated
```go
@@ -72,7 +72,14 @@ func (p Program) Closure(sig *types.Signature) Type {
	return p.rawType(closure)
}

func isOpaqueGoSSAType(typ types.Type) bool {
	return fmt.Sprintf("%T", typ) == "*ssa.opaqueType"
```
Performance / Fragility: Using fmt.Sprintf("%T", typ) == "*ssa.opaqueType" for type detection is both expensive (allocates via reflection on every call) and fragile (breaks if the internal SSA package is renamed/restructured). This sits on the hot rawType → cvtType path, called for every type reference during compilation.
Consider using reflect.TypeOf(typ).String() with a cached expected value, or better yet, detect this via the type's behavior (e.g., checking if Underlying() returns itself or a sentinel).
runtime/internal/runtime/z_gc.go
Outdated
```go
var th pthread.Thread
if rc := CreateThread(&th, nil, cleanupThread, nil); rc != 0 {
	fatal("failed to create cleanup thread")
	c.Exit(2)
```
Dead code: If fatal() terminates the process, c.Exit(2) is unreachable. If fatal() does not terminate, its name is misleading. Either way, this line should be clarified or removed. Same pattern appears in z_thread.go.
runtime/internal/runtime/z_gc.go
Outdated
```go
if atomic.Load(&e.stop) != 1 {
	if e.valueFn != nil {
		e.valueFn(e.obj)
	} else {
		e.fn()
	}
```
Robustness: cleanupThread calls user-provided cleanup functions (e.valueFn / e.fn) without any panic recovery. If a cleanup function panics, this thread dies permanently and no further cleanups/finalizers will ever run. The Go runtime wraps its finalizer goroutine in defer/recover for this reason. Consider adding similar protection here.
runtime/internal/runtime/z_gc.go
Outdated
```go
if e.copySize == 0 {
	e.obj = ptr
```
Potential use-after-free: When copySize == 0, e.obj = ptr stores the pointer to the object being finalized. After finalizer() returns, bdwgc may reclaim ptr. But e.obj is later dereferenced asynchronously in cleanupThread via e.valueFn(e.obj). If the GC collects the object between enqueue and dequeue, this is a use-after-free. The copySize > 0 path correctly snapshots the data — should the copySize == 0 path do the same, or document why the pointer remains valid?
```go
clitedebug "github.com/goplus/llgo/runtime/internal/clite/debug"
psync "github.com/goplus/llgo/runtime/internal/clite/pthread/sync"
"github.com/goplus/llgo/runtime/internal/clite/sync/atomic"
_ "unsafe"
```
Nit: `_ "unsafe"` (blank import) is redundant since `"unsafe"` is already directly imported on line 6.
```go
func memProfileEnabled() bool {
	if memProfileRatePtr == nil {
		return false
	}
	rate := *memProfileRatePtr
	return rate > 0 && rate <= 64*1024 && atomic.Load(&memProfileSnapshotting) == 0
```
Data race: memProfileRatePtr is a *int read here without synchronization — *memProfileRatePtr could race with SetMemProfileRatePtr writing the pointer from another goroutine. The pointer itself should be stored/loaded atomically (e.g., atomic.Pointer[int]), or at minimum the read of the rate value should use atomic.LoadInt32/atomic.LoadInt64 if the rate itself can change concurrently.
```go
for p := memProfileSites; p != nil; p = p.next {
	if equalMemProfileStack(p.stack, p.nstk, candidate.stack, candidate.nstk) {
		p.allocObjects++
		p.allocBytes += int64(size)
		memProfileMu.Unlock()
		c.Free(unsafe.Pointer(candidate))
		return
	}
```
Performance: recordMemProfileAlloc does an O(N) linear scan of the entire profile site linked list under a global mutex on every sampled allocation. The Go runtime uses a hash table for this. As distinct allocation sites grow, this will become a bottleneck under high allocation rates. A hash map keyed by stack hash would give O(1) amortized lookup.
Review Summary

Impressive scope — this PR adds a solid GOROOT compatibility runner, expands runtime coverage (finalizers, cleanup threads, memory profiling, panic/recover rework), and lays groundwork for std internal mirrors. The compiler and SSA changes are generally well-structured. Key items to address: see inline comments for details.
Summary
Validation
- `go test ./internal/build ./internal/packages -count=1`
- `go test ./test/goroot -count=1`
- `./dev/local_ci.sh` on darwin/arm64