Magentic-UI is a **research prototype** human-centered AI agent that solves complex tasks.

## ✨ What's New
Microsoft's latest agentic model [Fara-7B](https://www.microsoft.com/en-us/research/blog/fara-7b-an-efficient-agentic-model-for-computer-use/) is now integrated into Magentic-UI; read how to launch it in the [Fara-7B guide](#magentic-ui-with-fara-7b) below.
- **"Tell me When"**: Automate monitoring tasks and repeatable workflows that require web or API access and span minutes to days. *Learn more [here](https://www.microsoft.com/en-us/research/blog/tell-me-when-building-agents-that-can-wait-monitor-and-act/).*
- **File Upload Support**: Upload any file through the UI for analysis or modification.
- **MCP Agents**: Extend capabilities with your favorite MCP servers.
If you face issues with Docker, please refer to the [TROUBLESHOOTING.md](TROUBLESHOOTING.md).

Once the server is running, you can access the UI at <http://localhost:8081>.
### Magentic-UI with Fara-7B

1) Install Magentic-UI with the `fara` extra:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install "magentic-ui[fara]"
```

2) In a separate process, serve the Fara-7B model using vLLM:
```bash
vllm serve "microsoft/Fara-7B" --port 5000 --dtype auto
```

3) Create a `fara_config.yaml` file with the following content:
```yaml
model_config_local_surfer: &client_surfer
  provider: OpenAIChatCompletionClient
  config:
    model: "microsoft/Fara-7B"
    base_url: http://localhost:5000/v1
    api_key: not-needed
    model_info:
      vision: true
      function_calling: true
      json_output: false
      family: "unknown"
      structured_output: false
      multiple_system_messages: false

orchestrator_client: *client_surfer
coder_client: *client_surfer
web_surfer_client: *client_surfer
file_surfer_client: *client_surfer
action_guard_client: *client_surfer
model_client: *client_surfer
```

Note: if you are hosting vLLM on a different port or host, change the `base_url` accordingly.
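
The config above relies on a YAML anchor (`&client_surfer`) and aliases (`*client_surfer`) so that every agent client shares the same Fara-7B settings. If you edit the file and want to check the wiring still resolves, here is a minimal sketch using PyYAML (not part of Magentic-UI itself, and it inlines a trimmed copy of the config rather than reading `fara_config.yaml`):

```python
# Sketch: confirm that *client_surfer aliases expand to the same mapping
# as the &client_surfer anchor. Assumes PyYAML is installed (pip install pyyaml).
import yaml

config_text = """
model_config_local_surfer: &client_surfer
  provider: OpenAIChatCompletionClient
  config:
    model: "microsoft/Fara-7B"
    base_url: http://localhost:5000/v1
    api_key: not-needed

orchestrator_client: *client_surfer
web_surfer_client: *client_surfer
"""

cfg = yaml.safe_load(config_text)

# Aliases resolve to the anchored mapping, so all clients are identical.
assert cfg["orchestrator_client"] == cfg["model_config_local_surfer"]
print(cfg["web_surfer_client"]["config"]["base_url"])  # -> http://localhost:5000/v1
```

The same check applies to the full config: if any `*client_surfer` alias stops matching the anchor after an edit, `yaml.safe_load` will raise or the equality assertion will fail.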