
Commit feb1e9a

Add readme (#577)
1 parent: c21bfe4

2 files changed: +118 -1 lines changed


ch05/README.md

Lines changed: 8 additions & 1 deletion
```diff
@@ -16,4 +16,11 @@
 - [07_gpt_to_llama](07_gpt_to_llama) contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and loads pretrained weights from Meta AI
 - [08_memory_efficient_weight_loading](08_memory_efficient_weight_loading) contains a bonus notebook showing how to load model weights via PyTorch's `load_state_dict` method more efficiently
 - [09_extending-tokenizers](09_extending-tokenizers) contains a from-scratch implementation of the GPT-2 BPE tokenizer
-- [10_llm-training-speed](10_llm-training-speed) shows PyTorch performance tips to improve the LLM training speed
+- [10_llm-training-speed](10_llm-training-speed) shows PyTorch performance tips to improve the LLM training speed
+
+
+
+<br>
+<br>
+
+[![Link to the video](https://img.youtube.com/vi/Zar2TJv-sE0/0.jpg)](https://www.youtube.com/watch?v=Zar2TJv-sE0)
```

pkg/llms_from_scratch/README.md

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@ (new file)

# `llms-from-scratch` PyPI Package

This optional PyPI package lets you conveniently import code from various chapters of the *Build a Large Language Model From Scratch* book.

&nbsp;
## Installation

&nbsp;
### From PyPI

Install the `llms-from-scratch` package from the official [Python Package Index](https://pypi.org/project/llms-from-scratch/) (PyPI):

```bash
pip install llms-from-scratch
```

> **Note:** If you're using [`uv`](https://github.com/astral-sh/uv), replace `pip` with `uv pip` or use `uv add`:

```bash
uv add llms-from-scratch
```

&nbsp;
### Editable Install from GitHub

If you'd like to modify the code and have those changes reflected during development:

```bash
git clone https://github.com/rasbt/LLMs-from-scratch.git
cd LLMs-from-scratch
pip install -e .
```

> **Note:** With `uv`, use:

```bash
uv add --editable . --dev
```
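
Whichever installation route you use, a quick sanity check is to import one of the classes covered in the next section. The snippet below is only a minimal sketch of such a check:

```python
# Minimal installation check: if this import succeeds without an ImportError,
# the package is installed correctly. GPTModel is one of the classes listed
# in the "Using the Package" section below.
from llms_from_scratch.ch04 import GPTModel

print(GPTModel)  # prints the class object, e.g. <class 'llms_from_scratch.ch04.GPTModel'>
```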

&nbsp;
## Using the Package

Once installed, you can import code from any chapter using:

```python
from llms_from_scratch.ch02 import GPTDatasetV1, create_dataloader_v1

from llms_from_scratch.ch03 import (
    MultiHeadAttention,
    SelfAttention_v1,
    SelfAttention_v2,
    CausalAttention,
    MultiHeadAttentionWrapper
)

from llms_from_scratch.ch04 import (
    LayerNorm,
    GELU,
    FeedForward,
    TransformerBlock,
    GPTModel,
    generate_text_simple
)

from llms_from_scratch.ch05 import (
    generate,
    train_model_simple,
    evaluate_model,
    generate_and_print_sample,
    assign,
    load_weights_into_gpt,
    text_to_token_ids,
    token_ids_to_text,
    calc_loss_batch,
    calc_loss_loader,
    plot_losses
)

from llms_from_scratch.ch06 import (
    download_and_unzip_spam_data,
    create_balanced_dataset,
    random_split,
    SpamDataset,
    calc_accuracy_loader,
    evaluate_model,
    train_classifier_simple,
    plot_values,
    classify_review
)

from llms_from_scratch.ch07 import (
    download_and_load_file,
    format_input,
    InstructionDataset,
    custom_collate_fn,
    check_if_running,
    query_model,
    generate_model_scores
)

from llms_from_scratch.appendix_a import NeuralNetwork, ToyDataset

from llms_from_scratch.appendix_d import find_highest_gradient, train_model
```
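
As a quick end-to-end illustration of how these pieces fit together, the sketch below wires the chapter 4 model class and the chapter 5 text helpers into a tiny generation demo. It assumes the configuration keys of the book's 124M-parameter GPT-2 setup, the `generate_text_simple(model, idx, max_new_tokens, context_size)` signature from chapter 4, and the `tiktoken` tokenizer the book uses; because the model is randomly initialized, the output stays gibberish until you train it (`train_model_simple`) or load pretrained weights (`load_weights_into_gpt`).

```python
import torch
import tiktoken

from llms_from_scratch.ch04 import GPTModel, generate_text_simple
from llms_from_scratch.ch05 import text_to_token_ids, token_ids_to_text

# Configuration mirroring the book's 124M-parameter GPT-2 setup (assumed keys)
GPT_CONFIG_124M = {
    "vocab_size": 50257,     # BPE vocabulary size
    "context_length": 1024,  # maximum context length
    "emb_dim": 768,          # embedding dimension
    "n_heads": 12,           # attention heads per transformer block
    "n_layers": 12,          # number of transformer blocks
    "drop_rate": 0.1,        # dropout rate
    "qkv_bias": False        # query/key/value projection bias
}

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
model.eval()  # disable dropout for inference

tokenizer = tiktoken.get_encoding("gpt2")
token_ids = generate_text_simple(
    model=model,
    idx=text_to_token_ids("Every effort moves you", tokenizer),
    max_new_tokens=10,
    context_size=GPT_CONFIG_124M["context_length"]
)
print(token_ids_to_text(token_ids, tokenizer))
```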
