This project provides an efficient framework for deploying PyTorch models on mobile devices and edge computing systems. The aim is to bring the capabilities of deep learning to the edge.
- Lightweight model deployment
- Support for various mobile platforms
- Efficient handling of memory and compute resources
- Clone the repository:

  ```shell
  git clone https://github.com/sttadic/pytorch-mobile-edge-inference
  cd pytorch-mobile-edge-inference
  ```

- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Import the necessary libraries and load your model in your application.
- Use the provided APIs to run inference on input data.
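The inference flow above can be sketched as follows. The project's own API surface is not shown here, so this is a minimal plain-PyTorch sketch; `TinyNet` is a hypothetical stand-in for your trained model, and in a real application you would load a previously exported file with `torch.jit.load` instead of scripting in-process.

```python
import torch

# Hypothetical stand-in model; replace with your own trained network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# In a deployed app you would typically do: model = torch.jit.load("model.pt")
model = torch.jit.script(TinyNet())
model.eval()  # disable dropout/batch-norm training behavior

# Run inference on a dummy input; the shape depends on your model.
with torch.no_grad():
    output = model(torch.rand(1, 4))
```

Wrapping the call in `torch.no_grad()` avoids building the autograd graph, which saves memory on constrained devices.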
- Train your model using PyTorch.
- Use `torch.jit.script` to export your model to TorchScript for performance optimization.
- Convert the scripted model for mobile compatibility following the guidelines provided in the documentation.
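The export steps above can be sketched with the standard PyTorch mobile tooling. This is a sketch under the assumption that the project follows the usual TorchScript-plus-lite-interpreter workflow; `TinyNet` and the output filename are hypothetical placeholders.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical placeholder model; substitute your trained network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()

# Step 1: export to TorchScript.
scripted = torch.jit.script(model)

# Step 2: apply mobile-specific graph optimizations (op fusion, constant folding).
mobile_module = optimize_for_mobile(scripted)

# Step 3: save a bundle for the PyTorch lite interpreter used on-device.
mobile_module._save_for_lite_interpreter("tiny_net.ptl")
```

`torch.jit.trace` is an alternative to `torch.jit.script` for models whose control flow does not depend on input values.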
This project is licensed under the MIT License. See the LICENSE file for details.