Merged

docs #261

22 changes: 22 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,22 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version, and other tools you might need
build:
  os: ubuntu-24.04
  tools:
    python: "3.13"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Optionally, but recommended,
# declare the Python requirements required to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
  install:
    - requirements: docs/requirements.txt
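
With this configuration, Read the Docs builds the Sphinx project in `docs/source` using the pinned requirements. A minimal local approximation of that build (a sketch, assuming it is run from the repository root) is:

```bash
# Install the documentation dependencies pinned for the RTD build,
# then build the HTML pages in Sphinx make-mode.
pip install -r docs/requirements.txt
sphinx-build -M html docs/source docs/build
```

The rendered pages end up under `docs/build/html/`.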
5 changes: 2 additions & 3 deletions README.md
@@ -1,13 +1,12 @@
> [!NOTE]
> We will soon release a new version of SevenNet featuring **SevenNet-Omni**, which delivers state-of-the-art accuracy across diverse material domains at the PBE level and also provides high-fidelity channels such as r²SCAN and ωB97M-V. This release also includes a CUDA-accelerated version of SevenNet integrating flashTP and cuEquivariance, with support for the LAMMPS ML-IAP module. We are currently merging these features. See **[the develop/omni branch](https://github.com/MDIL-SNU/SevenNet/tree/develop/omni)** for ongoing updates.

<img src="SevenNet_logo.png" alt="Alt text" height="180">

# SevenNet

SevenNet (Scalable EquiVariance-Enabled Neural Network) is a graph neural network (GNN)-based interatomic potential package that supports parallel molecular dynamics simulations using [`LAMMPS`](https://lammps.org). Its core model is based on [`NequIP`](https://github.com/mir-group/nequip).

> [!NOTE]
> We will soon release a CUDA-accelerated version of SevenNet, which will significantly increase the speed of our pretrained models on [Matbench Discovery](https://matbench-discovery.materialsproject.org/).

> [!TIP]
> SevenNet supports NVIDIA's [cuEquivariance](https://github.com/NVIDIA/cuEquivariance) for acceleration. In our benchmarks, cuEquivariance improved inference speed by a factor of three for the SevenNet-MF-ompa potential. See [Installation](#installation) for details.

20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
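
With this Makefile, any target name is forwarded to `sphinx-build` in make-mode. A typical local workflow (a sketch, assuming the docs requirements are already installed) is:

```bash
# Run from the docs/ directory: "make html" forwards to
# "sphinx-build -M html source build"; "make clean" removes build/.
cd docs
make html
make clean
```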
35 changes: 35 additions & 0 deletions docs/make.bat
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
9 changes: 9 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,9 @@
sphinx==8.1.3
sphinx_rtd_theme==3.0.2
sphinx-notfound-page==1.1.0
furo==2025.9.25
pydata-sphinx-theme
sphinx-autobuild
myst_parser
sphinx_copybutton
tomli; python_version < "3.11"
Binary file added docs/source/_static/SevenNet_logo.png
2 changes: 2 additions & 0 deletions docs/source/changelog.md
@@ -0,0 +1,2 @@
```{include} ../../CHANGELOG.md
```
40 changes: 40 additions & 0 deletions docs/source/cite.md
@@ -0,0 +1,40 @@
# Citation<a name="citation"></a>

If you use this code, please cite our paper:
```bib
@article{park_scalable_2024,
title = {Scalable Parallel Algorithm for Graph Neural Network Interatomic Potentials in Molecular Dynamics Simulations},
volume = {20},
doi = {10.1021/acs.jctc.4c00190},
number = {11},
journal = {J. Chem. Theory Comput.},
author = {Park, Yutack and Kim, Jaesun and Hwang, Seungwoo and Han, Seungwu},
year = {2024},
pages = {4857--4868},
}
```

If you utilize the multi-fidelity feature of this code or the pretrained model SevenNet-MF-ompa, please cite the following paper:
```bib
@article{kim_sevennet_mf_2024,
title = {Data-Efficient Multifidelity Training for High-Fidelity Machine Learning Interatomic Potentials},
volume = {147},
doi = {10.1021/jacs.4c14455},
number = {1},
journal = {J. Am. Chem. Soc.},
author = {Kim, Jaesun and Kim, Jisu and Kim, Jaehoon and Lee, Jiho and Park, Yutack and Kang, Youngho and Han, Seungwu},
year = {2024},
pages = {1042--1054},
}
```

If you utilize the pretrained model SevenNet-Omni or the multi-task training strategies, including task-specific regularization and the domain-bridging dataset, please cite the following paper:
```bib
@article{kim_optimizing_2025,
title = {Optimizing Cross-Domain Transfer for Universal Machine Learning Interatomic Potentials},
doi = {10.48550/arxiv.2510.11241},
journal = {arXiv},
author = {Kim, Jaesun and You, Jinmu and Park, Yutack and Lim, Yunsung and Kang, Yujin and Kim, Jisu and Jeon, Haekwan and Ju, Suyeon and Hong, Deokgi and Lee, Seung Yul and Choi, Saerom and Kim, Yongdeok and Lee, Jae W and Han, Seungwu},
year = {2025},
}
```
132 changes: 132 additions & 0 deletions docs/source/conf.py
@@ -0,0 +1,132 @@
import os

# tomllib is in the standard library for Python >= 3.11; fall back to tomli otherwise.
try:
    import tomllib
except ModuleNotFoundError:
    import tomli as tomllib

# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- Project information -----------------------------------------------------
project = 'SevenNet'
copyright = '2024, MIT'

# read ../../pyproject.toml for version info
pyproject_path = os.path.abspath(
    os.path.join(os.path.dirname(__file__), '../../', 'pyproject.toml')
)
with open(pyproject_path, 'rb') as f:
    pyproject_data = tomllib.load(f)
__version__ = pyproject_data['project']['version']

author_lst = []
for author_dct in pyproject_data['project']['authors']:
    try:
        author_lst.append(author_dct['name'])
    except KeyError:
        continue
author = ', '.join(author_lst)

# The short X.Y version
version = __version__
# The full version, including alpha/beta/rc tags
release = __version__


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.coverage',
    'sphinx.ext.doctest',
    'sphinx.ext.mathjax',
    'sphinx.ext.viewcode',
    # 'sphinx.ext.autosectionlabel',
    'sphinx_copybutton',
    'myst_parser',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'furo'
html_theme = 'pydata_sphinx_theme'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_logo = '_static/SevenNet_logo.png'
html_context = {
    # ...
    'default_mode': 'light',
}

html_theme_options = {
    'use_edit_page_button': False,
    'header_links_before_dropdown': 3,
    'navbar_end': ['navbar-icon-links'],
    'logo': {
        'text': ' Documentation',
        # "image_light": "_static/SevenNet_logo.png",
    },
    'icon_links': [
        {
            'name': 'GitHub',
            'url': 'https://github.com/MDIL-SNU/SevenNet',
            'icon': 'fab fa-github',
            'type': 'fontawesome',
        },
    ],
    'show_nav_level': 4,
    # "primary_sidebar_end": ["indices.html", "sidebar-ethical-ads.html"]
}

# -- Options for intersphinx extension ---------------------------------------

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
}

myst_heading_anchors = 4
myst_enable_extensions = ['dollarmath', 'colon_fence']

autosectionlabel_exclude_patterns = [
    'changelog.md',
    'CHANGELOG.md',
]


# -- Options for autosectionlabel extension ----------------------------------
autosectionlabel_prefix_document = True
47 changes: 47 additions & 0 deletions docs/source/index.rst
@@ -0,0 +1,47 @@
.. SevenNet documentation master file, created by
   sphinx-quickstart on Thu Nov 13 16:59:34 2025.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

:github_url: https://github.com/MDIL-SNU/SevenNet

SevenNet
========

SevenNet (Scalable EquiVariance-Enabled Neural Network) is a graph neural network (GNN)-based interatomic potential package that supports parallel molecular dynamics simulations using `LAMMPS <https://lammps.org>`_. Its core model is based on `NequIP <https://github.com/mir-group/nequip>`_.


* Pretrained GNN interatomic potential and fine-tuning interface
* Support for the Python Atomic Simulation Environment (ASE) calculator
* GPU-parallelized molecular dynamics with LAMMPS
* CUDA-accelerated D3 (van der Waals) dispersion
* Multi-fidelity training for combining multiple databases with different calculation settings


Installation
============
Ensure that your environment uses **Python >= 3.10** and **PyTorch >= 2.0.0** (see https://pytorch.org/get-started/locally/).
Then install SevenNet via:

.. code-block:: bash

   pip install sevenn

For acceleration and LAMMPS integration, refer to the installation guides below.
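
A quick sanity check of the installation (a minimal sketch; it only verifies that the package is installed and importable):

.. code-block:: bash

   pip show sevenn            # report the installed version and location
   python -c "import sevenn"  # confirm the package imports cleanly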


Contents
========

.. toctree::
   :maxdepth: 2

   user_guide/index
   cite


.. toctree::
   :maxdepth: 1
   :caption: Misc

   changelog
41 changes: 41 additions & 0 deletions docs/source/user_guide/accelerator.md
@@ -0,0 +1,41 @@
# Accelerators

Both cuEquivariance and FlashTP accelerate SevenNet training and inference. For benchmark results, see [this paper](https://arxiv.org/abs/2510.11241).

## CuEquivariance

CuEquivariance is an NVIDIA Python library designed to facilitate the construction of high-performance geometric neural networks using segmented polynomials and triangular operations. For more information, refer to [cuEquivariance](https://github.com/NVIDIA/cuEquivariance).

### Requirements
- Python >= 3.10
- cuEquivariance >= 0.6.1

Install via:

```bash
pip install sevenn[cueq12] # cueq11 for CUDA version 11.*
```

:::{note}
Some GeForce GPUs do not support `pynvml`,
which raises `pynvml.NVMLError_NotSupported: Not Supported`.
If you hit this error, try an older cuEquivariance version, such as 0.6.1.
:::
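
After installation, a quick check that the CUDA-enabled packages are importable (a sketch; the import names `cuequivariance` and `cuequivariance_torch` are assumed from the cuEquivariance distribution):

```bash
# Should print the installed cuEquivariance version without errors.
python -c "import cuequivariance, cuequivariance_torch; print(cuequivariance.__version__)"
```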

## FlashTP

FlashTP is a high-performance tensor-product library for machine learning interatomic potentials (MLIPs). For more information and the installation guide, refer to [flashTP](https://github.com/SNU-ARC/flashTP).

### Requirements
- Python >= 3.10
- flashTP >= 0.1.0
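
One common installation pattern (an assumption, not confirmed here — follow the flashTP README for the authoritative steps) is a source install via pip:

```bash
# Hypothetical source install; compilation requires a CUDA toolchain.
pip install git+https://github.com/SNU-ARC/flashTP.git
```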

:::{note}
During installation of flashTP,
`subprocess.CalledProcessError: ninja ... exit status 137`
typically indicates **out-of-memory** during compilation.
Try reducing the build parallelism:
```bash
export MAX_JOBS=1
```
:::