Conversation
Update README for using dynamic ONNX and then converting to TensorRT
Added a dynamic flag for using dynamic ONNX.
Also, for the conversion I used this Docker image: nvcr.io/nvidia/tensorrt:23.06-py3.
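For reference, a typical way to enter that container (the mount path below is an assumption, not taken from this thread) is:

```
# Start the TensorRT 23.06 container with the current directory mounted
docker run --gpus all -it --rm -v "$(pwd)":/workspace nvcr.io/nvidia/tensorrt:23.06-py3
```

Inside the image, trtexec is available under /usr/src/tensorrt/bin, so the conversion can be run there directly.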
Great job!
@stqwzr Hello!
Hi @Egorundel
@stqwzr It's a pity, because I tried to change the inference code so that everything works with an incoming batch of images. In theory, everything is ready for it: I just need to pass each image through the preprocessing and postprocessing functions, build a vector of images that way, and then feed it to the inference. Do you have any tips on what can be done?
@Egorundel You can try creating a flattened_batch_data buffer (float *) whose shape is (batch x channels x height x width). Kind of like the sketch below; by the way, it was generated by GPT, so there may be some issues, but the logic should be the same.
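A minimal sketch of that idea, assuming the images are already preprocessed into CV_32FC3 cv::Mats of the target size (the helper name flattenBatch and the OpenCV types are illustrative, not from the repo):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Flatten a batch of preprocessed HWC images into one contiguous NCHW float
// buffer of shape (batch, channels, height, width). Assumes channels == 3
// so that cv::Vec3f pixel access is valid.
std::vector<float> flattenBatch(const std::vector<cv::Mat>& images,
                                int channels, int height, int width) {
    std::vector<float> flat(images.size() * channels * height * width);
    for (size_t b = 0; b < images.size(); ++b) {
        for (int c = 0; c < channels; ++c) {
            for (int h = 0; h < height; ++h) {
                for (int w = 0; w < width; ++w) {
                    // HWC (OpenCV) -> CHW (TensorRT): take channel c of pixel (h, w)
                    size_t dst = ((b * channels + c) * height + h) * width + w;
                    flat[dst] = images[b].at<cv::Vec3f>(h, w)[c];
                }
            }
        }
    }
    return flat;
}
```

You would then copy flat.data() to the input device buffer (e.g., with cudaMemcpy), set the input binding to the actual batch size with something like IExecutionContext::setBindingDimensions or setInputShape, and run inference as usual.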
Small changes for creating a dynamic ONNX model and then converting it to TensorRT. The attached image shows what the input and output shapes will look like (ONNX visualized in Netron).
Tested with multiple batch sizes to ensure the model performs efficiently and correctly with dynamic input shapes. Also attached a screenshot of a successful conversion to TensorRT using trtexec only; an example invocation is sketched below.
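As a rough sketch of such a conversion (the file names, the input tensor name images, and the shape ranges are assumptions for illustration, not values from this PR):

```
# Build an engine that accepts a dynamic batch dimension
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:4x3x640x640 \
        --maxShapes=images:8x3x640x640
```

The min/opt/max triple declares the range of input shapes the engine must support; TensorRT optimizes for the opt shape, and at runtime any shape within the declared range can be used.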
Please review the changes and provide feedback if needed. Thank you!