An advanced motion planning system for the Unitree G1 humanoid robot, simulated in MuJoCo.
| Feature | Description | Result |
|---|---|---|
| Walk + Reach | Whole-body coordination | 2 m walk + 4/4 reach |
| ZMP Preview Control | LIPM CoM trajectory (Kajita) | 69 cm trajectory |
| Footstep Planning | A* search with obstacles | 16 steps |
| MPC Balance | Predictive control | 49% less energy |
| RL Locomotion | Pre-trained policy | 2.01 m @ 0.4 m/s |
| Push Recovery | Perturbation resistance | 4/4 survived |
*Left: RL Locomotion (2.01 m walk) | Right: Manipulation, Push Recovery, Wave*
╔══════════════════════════════════════════════════════════╗
║                  HUMANOID SHOWCASE DEMO                  ║
║           Walk → Reach → Push Recovery → Wave            ║
╚══════════════════════════════════════════════════════════╝
[PHASE 1] Walking 2 meters...
✓ Walked 2.01m
[PHASE 2-4] Manipulation, Push Recovery, Wave...
✓ Reaching: 4/4 targets
✓ Push recovery: 2/2 survived
✓ Victory wave: Done!
# Clone the repository
git clone https://github.com/ansh1113/humanoid-motion-planning.git
cd humanoid-motion-planning
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install dependencies
pip install mujoco numpy scipy matplotlib torch
# Run the showcase demo
python src/showcase_demo.py

humanoid_motion_planning/
├── src/
│ ├── showcase_demo.py # Video-friendly continuous demo
│ ├── full_visualization.py # Step-by-step feature demo
│ ├── walk_and_reach.py # Whole-body coordination
│ ├── zmp_preview_control.py # LIPM preview control
│ ├── footstep_planner.py # A* footstep planning
│ ├── mpc_balance.py # MPC controller
│ └── locomotion/
│ └── g1_walker.py # RL-based walking
├── results/ # Output visualizations
├── media/ # Demo GIFs and videos
├── mujoco_menagerie/unitree_g1/ # Robot model
└── unitree_rl_gym/ # Pre-trained RL policy
Classic Kajita LIPM method for CoM trajectory generation:
LIPM: x'' = ω²(x - ZMP), ω = √(g/z_c) ≈ 3.6 rad/s
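As a sketch of these dynamics, the CoM can be forward-integrated against a ZMP reference. The CoM height `z_c = 0.75` m is an assumed value chosen so that ω ≈ 3.6 rad/s, and `simulate_lipm` is illustrative, not the project's API:

```python
import numpy as np

g, z_c = 9.81, 0.75                 # z_c is an assumed CoM height
w = np.sqrt(g / z_c)                # ω = √(g/z_c) ≈ 3.6 rad/s, as above

def simulate_lipm(zmp_ref, dt=0.01, x0=0.0, v0=0.0):
    """Forward-integrate x'' = ω²(x − ZMP) with semi-implicit Euler."""
    x, v, traj = x0, v0, []
    for zmp in zmp_ref:
        v += w**2 * (x - zmp) * dt  # pendulum acceleration about the ZMP
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Hold the ZMP 5 cm ahead of the CoM: the inverted pendulum tips away,
# which is why preview control must place the ZMP ahead of time.
com = simulate_lipm(np.full(100, 0.05))
```

The divergence of the open-loop CoM is exactly what Kajita's preview controller compensates for by looking ahead along the planned ZMP trajectory.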
A* search with discrete actions:
- Forward: 8-25cm, Lateral: ±12cm, Rotation: ±17°
- Real-time collision checking with obstacles
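The search above can be sketched as a minimal A* over footstep placements. The action set, obstacle format, and `plan_footsteps` helper here are hypothetical simplifications (notably, the rotation actions are omitted):

```python
import heapq
import numpy as np

# Discrete (dx, dy) footstep actions in metres, loosely matching the
# ranges above (forward 8-25 cm, lateral ±12 cm); rotation is omitted.
ACTIONS = [(0.08, 0.0), (0.15, 0.0), (0.25, 0.0), (0.15, 0.12), (0.15, -0.12)]

def collides(x, y, obstacles, margin=0.12):
    """Circle obstacles (ox, oy, r), inflated by a foot-size margin."""
    return any((x - ox)**2 + (y - oy)**2 < (r + margin)**2
               for ox, oy, r in obstacles)

def plan_footsteps(start, goal, obstacles, tol=0.1):
    """A* over footstep placements; heuristic = straight-line distance."""
    def h(p):
        return np.hypot(goal[0] - p[0], goal[1] - p[1])
    open_set = [(h(start), 0.0, start, [start])]
    visited = set()
    while open_set:
        f, g, (x, y), path = heapq.heappop(open_set)
        if h((x, y)) < tol:
            return path
        key = (round(x, 2), round(y, 2))
        if key in visited:
            continue
        visited.add(key)
        for dx, dy in ACTIONS:
            nx, ny = x + dx, y + dy
            if not collides(nx, ny, obstacles):
                step = np.hypot(dx, dy)
                heapq.heappush(open_set, (g + step + h((nx, ny)), g + step,
                                          (nx, ny), path + [(nx, ny)]))
    return None

# Walk 2 m forward around an obstacle placed on the straight-line path
steps = plan_footsteps((0.0, 0.0), (2.0, 0.0), obstacles=[(1.0, 0.0, 0.1)])
```

The lateral actions let the planner sidestep the obstacle and return to the goal line, mirroring the obstacle-avoiding 16-step plans reported above.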
State: [x, ẋ], Control: acceleration
Horizon: 25 steps (0.5s), Cost: J = Σ(Q·x² + R·u²)
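One standard way to realize a controller of this form is a finite-horizon LQR recursion applied in receding horizon; the sketch below uses the 25-step horizon from above, but the `A`, `B`, `Q`, `R` values are assumptions, not the project's tuning:

```python
import numpy as np

dt, N = 0.02, 25                        # 25 steps at 50 Hz = 0.5 s horizon
A = np.array([[1.0, dt], [0.0, 1.0]])   # state [x, ẋ], double integrator
B = np.array([[0.0], [dt]])             # control = acceleration
Q = np.diag([100.0, 1.0])               # assumed state weights
R = np.array([[0.1]])                   # assumed control weight

def mpc_gain(A, B, Q, R, N):
    """Backward Riccati recursion over the horizon; the first-step
    feedback gain is the receding-horizon (MPC) control law."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = mpc_gain(A, B, Q, R, N)

# Closed loop: push the CoM 5 cm off balance and let the controller recover
x = np.array([[0.05], [0.0]])
for _ in range(200):
    u = -K @ x                          # minimizes J = Σ(Q·x² + R·u²)
    x = A @ x + B @ u
```

Because the gain penalizes control effort through `R`, this kind of controller spends less actuation than a stiff PD loop, which is the effect behind the 49% energy figure reported above.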
Damped least-squares with waist compensation:
Δq = Jᵀ(JJᵀ + λI)⁻¹ · error, λ = 0.005

| Metric | Value |
|---|---|
| Walking Distance | 2.01 m |
| Walking Speed | ~0.4 m/s |
| Manipulation Success | 75-100% |
| Push Recovery | 4/4 directions |
| MPC Energy Savings | 49% vs PD |
| ZMP Trajectory | 69 cm |
| Footstep Planning | 16 steps w/ obstacles |
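As a toy illustration of the damped least-squares update Δq = Jᵀ(JJᵀ + λI)⁻¹ · error, here it drives a 2-link planar arm to a reachable target. The arm model, step scaling, and helper names are hypothetical, not the project's IK code:

```python
import numpy as np

def dls_ik_step(J, error, lam=0.005):
    """Damped least-squares update: Δq = Jᵀ (J Jᵀ + λI)⁻¹ · error."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + lam * np.eye(JJt.shape[0]), error)

# Toy 2-link planar arm with unit link lengths (for illustration only)
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.5])
target = np.array([1.2, 0.8])
for _ in range(200):
    # half-step scaling keeps the Newton-like update from overshooting
    q = q + 0.5 * dls_ik_step(jacobian(q), target - fk(q))
```

The damping term λI keeps the solve well-conditioned near singular arm configurations, which is why DLS is preferred over a plain pseudoinverse for whole-body reaching.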
*Left: A\* footstep planning, Center: ZMP preview trajectories, Right: MPC balance comparison*
python src/full_visualization.py

6-phase demo with user prompts:
- Footstep Planning (A*)
- ZMP Preview Control
- RL Locomotion
- Manipulation (Jacobian IK)
- MPC Balance
- ZMP Stability Test
python src/showcase_demo.py

Continuous demonstration:
- Walking 2m → Reaching → Push Recovery → Wave
python src/walk_and_reach.py # Walk + Reach
python src/zmp_preview_control.py # ZMP Preview
python src/footstep_planner.py # A* Planning
python src/mpc_balance.py         # MPC Balance

- Python 3.8+
- MuJoCo 3.0+
- NumPy, SciPy, Matplotlib
- PyTorch (for RL policy)
- Kajita et al., "Biped Walking Pattern Generation by using Preview Control of Zero-Moment Point"
- Unitree G1 Documentation
- MuJoCo Physics Engine
This project demonstrates:
- ✅ Whole-body motion planning (walk + reach)
- ✅ Classical control (ZMP, LIPM, preview control)
- ✅ Modern optimization (MPC, A* search)
- ✅ Practical robotics (IK, trajectory optimization)
- ✅ Simulation (MuJoCo integration)
- Train custom RL locomotion policy in Isaac Sim
- Vision-based manipulation
- Dynamic walking with ZMP tracking
- Real hardware deployment
MIT License - see LICENSE file for details
Ansh Bhansali
- MS in Autonomy & Robotics, UIUC (2026)
- Email: anshbhansali5@gmail.com
- GitHub: @ansh1113
Developed as part of a robotics portfolio for humanoid robotics positions.





