[Cpp API Compatibility] Align some other APIs#78837
youge325 wants to merge 5 commits into PaddlePaddle:develop from
Conversation
Your PR was submitted successfully. Thank you for your contribution to this open-source project!
Pull request overview
Aligns several PaddlePaddle C++ PyTorch-compat APIs with PyTorch-observable behavior to reduce incompatibilities in the compat layer, and updates the corresponding C++ tests.
Changes:
- Update compat implementations for `expand`, `chunk`, `index`, the sparse tensor constructors, `_values()`, and `IValue::to_repr()` to match (or more closely match) PyTorch behavior.
- Adjust C++ compat tests to assert the new throw/return semantics.
- Add a placeholder CUDA CMake macro header for downstream `__has_include` feature detection.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| test/cpp/compat/ATen_values_test.cc | Update CSR _values() expectation to throw. |
| test/cpp/compat/ATen_index_test.cc | Update empty index-list behavior to throw. |
| test/cpp/compat/ATen_expand_test.cc | Update multiple expand cases to expect rejection/throws. |
| test/cpp/compat/ATen_chunk_test.cc | Update chunk(chunks > dim_size) expectation to match PyTorch non-empty behavior. |
| paddle/phi/api/include/compat/c10/cuda/impl/cuda_cmake_macros.h | Add placeholder header for PyTorch-compat feature detection. |
| paddle/phi/api/include/compat/ATen/ops/sparse_csr_tensor.h | Change dtype-mismatch handling to throw. |
| paddle/phi/api/include/compat/ATen/ops/sparse_coo_tensor.h | Change dtype-mismatch handling to ignore/carry values dtype. |
| paddle/phi/api/include/compat/ATen/ops/index.h | Throw on empty index list. |
| paddle/phi/api/include/compat/ATen/ops/expand.h | Tighten expand behavior to reject unsupported expansions instead of fallback tiling/slicing. |
| paddle/phi/api/include/compat/ATen/ops/chunk.h | Cap chunks to dim_size when chunks > dim_size. |
| paddle/phi/api/include/compat/ATen/ops/_values.h | Throw for CSR _values() instead of returning CSR values tensor. |
| paddle/phi/api/include/compat/ATen/core/ivalue.h | Make IValue::to_repr() throw for Tensor values. |
```cpp
    PD_THROW("expand(): the expanded size of the tensor (",
             target_size_vec[0],
             ") must match the existing size (",
             reshape_vec[0],
             ") at non-singleton dimension 0.");
  } else if (input_rank == target_rank) {
```
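For reference, PyTorch's per-dimension size rule for `expand()` can be sketched as a standalone helper. This is an illustrative sketch only; `expanded_size` is not a name from the PR, and the real implementation operates on whole shape vectors rather than one dimension at a time:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Hedged sketch of PyTorch's expand() rule for one dimension: a target size
// is valid only when it is -1 (keep the existing size), equals the existing
// size, or the existing size is 1 (broadcast). Anything else throws, which
// matches the message format used in the PR's PD_THROW call.
int64_t expanded_size(int64_t existing, int64_t target, int64_t dim) {
  if (target == -1 || target == existing) return existing;
  if (existing == 1) return target;  // singleton dimensions may broadcast
  throw std::invalid_argument(
      "expand(): the expanded size of the tensor (" + std::to_string(target) +
      ") must match the existing size (" + std::to_string(existing) +
      ") at non-singleton dimension " + std::to_string(dim) + ".");
}
```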
This branch used to fall back to `tile_and_slice_to_target`, but now always throws. That makes the `tile_and_slice_to_target` lambda above dead code (a potential -Wunused-variable / -Werror build break) and also removes previously supported behavior. If the intent is to fully match PyTorch by rejecting these cases, please remove or guard the unused fallback helper (e.g., delete it or mark it `[[maybe_unused]]`) so the header compiles cleanly.
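A minimal sketch of the suggested guard, assuming the helper is kept for possible future use rather than deleted (the lambda body here is a stand-in, not the real `tile_and_slice_to_target`):

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch: keep an unused fallback helper but mark it so that
// -Wunused-variable under -Werror no longer breaks the build.
int expand_strict_path_demo() {
  // Stand-in for the real fallback; never called once the branch throws.
  [[maybe_unused]] auto tile_and_slice_to_target =
      [](const std::vector<int64_t>& target) {
        return static_cast<int>(target.size());
      };
  // The strict (throwing) path no longer calls the lambda; [[maybe_unused]]
  // silences the unused-variable warning without removing the helper.
  return 0;
}
```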
```diff
   if (chunks > dim_size) {
-    // First create non-empty chunks for existing elements
-    for (int64_t i = 0; i < dim_size; ++i) {
-      auto chunk_tensor =
-          paddle::experimental::slice(pd_tensor, {dim}, {i}, {i + 1}, {1}, {});
-      result.push_back(Tensor(chunk_tensor));
-    }
-    // Then add empty chunks
-    for (int64_t i = dim_size; i < chunks; ++i) {
-      // Create empty tensor with same shape except for the chunk dimension
-      std::vector<int64_t> empty_shape;
-      for (int64_t j = 0; j < pd_tensor.dims().size(); ++j) {
-        if (j == dim) {
-          empty_shape.push_back(0);
-        } else {
-          empty_shape.push_back(pd_tensor.dims()[j]);
-        }
-      }
-      auto empty_tensor = paddle::experimental::empty(
-          phi::IntArray(empty_shape), pd_tensor.dtype(), pd_tensor.place());
-      result.push_back(Tensor(empty_tensor));
-    }
-    return result;
+    chunks = dim_size;
   }

   int64_t chunk_size = (dim_size + chunks - 1) / chunks;
```
When `dim_size` is 0 (or when `chunks` becomes 0 after `chunks = dim_size`), the subsequent `chunk_size = (dim_size + chunks - 1) / chunks` will divide by zero. Please add validation for `chunks <= 0` (PyTorch throws) and handle `dim_size == 0` explicitly (e.g., return an empty vector) before computing `chunk_size`.
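The validation asked for here might look like the following standalone sketch. `chunk_sizes` is an illustrative name, not from the PR; it computes only the per-chunk lengths rather than slicing tensors, under the assumption that PyTorch caps the number of non-empty chunks at `dim_size`:

```cpp
#include <algorithm>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hedged sketch: validate inputs before the ceiling division so that
// chunk_size never divides by zero.
std::vector<int64_t> chunk_sizes(int64_t dim_size, int64_t chunks) {
  if (chunks <= 0) {
    throw std::invalid_argument("chunk expects chunks to be greater than 0");
  }
  if (dim_size == 0) return {};           // nothing to split; no chunks
  chunks = std::min(chunks, dim_size);    // cap at dim_size non-empty chunks
  int64_t chunk_size = (dim_size + chunks - 1) / chunks;  // safe: chunks >= 1
  std::vector<int64_t> sizes;
  for (int64_t remaining = dim_size; remaining > 0; remaining -= chunk_size) {
    sizes.push_back(std::min(chunk_size, remaining));
  }
  return sizes;
}
```

For example, splitting a dimension of size 5 into 2 chunks yields lengths 3 and 2, and asking for 5 chunks of a size-3 dimension yields three chunks of length 1, mirroring the non-empty-chunk behavior described above.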
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff            @@
##           develop   #78837   +/-  ##
==========================================
  Coverage         ?   100.00%
==========================================
  Files            ?         4
  Lines            ?         8
  Branches         ?         0
==========================================
  Hits             ?         8
  Misses           ?         0
  Partials         ?         0
```

☔ View full report in Codecov by Sentry.
Pull request overview
Copilot reviewed 13 out of 13 changed files in this pull request and generated no new comments.
Comments suppressed due to low confidence (1)
paddle/phi/api/include/compat/ATen/ops/chunk.h:41
- If `dim_size` is 0, the `chunks > dim_size` branch sets `chunks = 0`, and then `(dim_size + chunks - 1) / chunks` divides by zero. Please add an early return when `chunks == 0` (and consider validating `chunks > 0` up front to match PyTorch).
```cpp
  // PyTorch returns at most 'dim_size' non-empty chunks when chunks > dim_size
  if (chunks > dim_size) {
    chunks = dim_size;
  }
  int64_t chunk_size = (dim_size + chunks - 1) / chunks;
  int64_t remaining = dim_size;
  for (int64_t i = 0; i < chunks && remaining > 0; ++i) {
    int64_t current_chunk_size = std::min(chunk_size, remaining);
```
/re-run all-failed
PR Category
Execute Infrastructure
PR Types
Bug fixes
Description
Split from #78707; aligns the behavior of several APIs with PyTorch.
Does this change numerical precision?
No