Evaluation fails for YOLOX-X with "could not execute a primitive" #1664

@piotrgrubicki

Description

Discovered by @vibhubithar

1. Create a project, add several video files, and annotate the images.
2. Start YOLOX-X training.

Training fails at the last stage. The log reveals:
```
2026-01-13 19:40:07,508 [ERROR   ] [job.tasks.evaluate_and_infer.evaluate_and_infer:296]  [organization_id=04af51bd-7c4c-4240-a153-c0f0d8d5891e workspace_id=16bdc975-2a09-49f3-ab80-61ceecf4a492]: Error occurred during model evaluation
Traceback (most recent call last):
  File "/interactive_ai/workflows/workflow/job/tasks/evaluate_and_infer/evaluate_and_infer.py", line 279, in evaluate_and_infer
    is_model_accepted, train_inference_subset_id = evaluate(
  File "/interactive_ai/workflows/workflow/job/tasks/evaluate_and_infer/evaluate.py", line 143, in evaluate
    infer_and_evaluate(
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/tasks/infer_and_evaluate.py", line 97, in infer_and_evaluate
    batch_inference.run(use_async=True)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/services/batch_inference.py", line 140, in run
    self.infer_dataset(batch_dataset, use_async)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/services/batch_inference.py", line 203, in infer_dataset
    predicted_ann_scene, metadata = self.inferencer.predict(
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/services/inferencer.py", line 201, in predict
    result, metadata = self._predict_raw(numpy_image)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/services/inferencer.py", line 364, in _predict_raw
    return super()._predict_raw(image)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/jobs_common_extras/evaluation/services/inferencer.py", line 172, in _predict_raw
    raw_result = self.model.infer_sync(processed_image)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/model_api/models/model.py", line 463, in infer_sync
    return self.inference_adapter.infer_sync(dict_data)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/model_api/adapters/openvino_adapter.py", line 313, in infer_sync
    self.infer_request.infer(dict_data)
  File "/interactive_ai/workflows/workflow/.venv/lib/python3.10/site-packages/openvino/runtime/ie_api.py", line 132, in infer
    return OVDict(super().infer(_data_dispatch(
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
Exception from src/plugins/intel_cpu/src/graph.cpp:1365:
Node /neck/out_convs.0/conv/Conv/WithoutBiases of type Convolution
could not execute a primitive
```

I can share the exported project on request; it is quite large. However, I am not able to reproduce the problem after importing it.
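To help isolate whether this is specific to the evaluation pipeline or to the exported model itself, one could try a single synchronous CPU inference on the IR outside the workflow, mirroring the `infer_sync()` path in the traceback. This is a minimal sketch, not part of the codebase: `run_once` is a hypothetical helper, and `model.xml` stands in for the exported YOLOX-X OpenVINO model (assumed to have a static input shape).

```python
# Minimal repro sketch: compile the exported IR for CPU and run one
# synchronous inference on a zero-filled dummy input.
import numpy as np

try:
    import openvino as ov
except ImportError:  # OpenVINO may not be installed in every environment
    ov = None


def run_once(model_path: str = "model.xml"):
    """Run a single CPU inference on the given IR with dummy input.

    If the Convolution primitive failure is deterministic for this model,
    this call should raise the same "could not execute a primitive" error.
    """
    if ov is None:
        raise RuntimeError("openvino is not installed")
    core = ov.Core()
    compiled = core.compile_model(core.read_model(model_path), "CPU")
    inp = compiled.input(0)
    # Dummy tensor with the model's declared input shape (assumes it is static)
    dummy = np.zeros(inp.shape, dtype=np.float32)
    return compiled({inp.get_any_name(): dummy})
```

If this reproduces the error, the problem lies in the exported IR or the intel_cpu plugin rather than in the evaluation job itself; if it does not, the failure may depend on runtime state (memory pressure, threading) during the batch-inference run.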
