
Releases: chigkim/VOLlama

VOLlama v0.7.0

06 Sep 19:07

Changelog

  • You can now specify the name of the embedding model (default: EmbeddingGemma) in the RAG settings. Embedding works only with Ollama.
  • You can now mute the sound effect.
  • On first launch, the API Settings dialog is displayed.
  • When the model list cannot be retrieved, a friendlier message is shown and the API Settings dialog is opened.
  • Resetting the settings quits the app.
  • Both the Windows and macOS builds now use Python 3.13.7.
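
As a rough sketch of what the embedding setting maps to underneath, Ollama's /api/embed endpoint takes a model name and input text. `embed_request` is a hypothetical helper for illustration, not VOLlama's actual code:

```python
import json

def embed_request(texts, model="embeddinggemma"):
    # Build the JSON body for Ollama's /api/embed endpoint.
    # The default model name mirrors the EmbeddingGemma default above.
    return json.dumps({"model": model, "input": texts})

body = embed_request(["hello world"])
```

Swapping the `model` argument corresponds to changing the embedding model name in the RAG settings.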

Download

VOLlama v0.6.0

09 Aug 20:23

Changelog

  • Windows only: Added option to toggle screen reader in the chat menu.
  • Mac: Switched to NSSpeechSynthesizer.
  • Mac: Organized voices into submenus.
  • You can now use the default voice set in System Settings > Accessibility > Spoken Content.
  • Added ability to reset settings from the Advanced menu.
  • Added a warning when resetting settings that are not compatible with the current version.
  • Startup speech now triggers only when Read Response is checked.

VOLlama v0.5.0

08 Aug 16:59

Changelog

  • Upgraded dependencies to support GPT-5.
  • You can specify a direct URL to an image when using the attach URL feature.
  • The prompt field now accepts both Shift+Enter and Ctrl+Enter for a new line.
  • Changed "Speak response" to "Read response" in the chat menu, so it now uses "R" as the accelerator on Windows.
  • Fixed a typo.

Download

VOLlama v0.4.0

14 Jun 17:32

Changelog

  • You can now attach multiple images at once for vision language models that support them, such as Gemma3, Qwen2.5VL, and Llama Vision.

Download

VOLlama v0.3.2

24 May 16:06

Changelog

  • Presets are now displayed in alphabetical order.
  • Removed parameters deprecated by Ollama: penalize_newline, mirostat, mirostat_tau, mirostat_eta, and tfs_z.
  • When temperature is not set, llama-index-llms-ollama sends None as the temperature value so that the default temperature set by the model is used.
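
The temperature behavior above can be sketched as follows; `llm_params` is a hypothetical illustration of forwarding an unset value, not VOLlama's actual code:

```python
def llm_params(temperature=None):
    # An unset temperature is forwarded as None, so Ollama falls back
    # to the default temperature defined in the model itself rather
    # than a value hard-coded by the client.
    return {"model": "llama3", "temperature": temperature}
```

Passing None through, instead of substituting a fixed fallback, is what lets each model keep its own tuned default.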

Download

VOLlama v0.3.1

22 May 11:54

Changelog

  • Fixed a bug where selecting a preset didn't set the system message.
  • Selecting a preset starts a new chat.

Download

VOLlama v0.3.0

22 May 01:43

Changelog

  • Use Chat > Presets to save server type, model, system message, and generation parameters.
  • Press Ctrl/Cmd+P to quickly switch between your favorite presets.
  • Settings aren’t saved to a preset until you click Save in the Preset menu.

Download

VOLlama v0.2.2

20 May 05:31

Changelog

  • Correctly maps num_predict to max_tokens.
  • Important: set num_predict to a positive value, such as 1024.
  • Parameters (except num_ctx) can be left empty to use the engine's default values.
  • The system message can be left empty to use the model's default.
  • You can now specify the num_gpu parameter to set the number of layers loaded onto the GPU for Ollama.
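
The parameter handling described above can be sketched like this; `to_ollama_options` is a hypothetical helper and the exact parameter set is illustrative, not VOLlama's actual code:

```python
def to_ollama_options(settings):
    # num_ctx is always sent; every other parameter left empty (None)
    # is dropped so the engine's default value applies.
    options = {"num_ctx": settings.get("num_ctx", 2048)}
    # max_tokens from the UI maps to Ollama's num_predict option.
    if settings.get("max_tokens") is not None:
        options["num_predict"] = settings["max_tokens"]
    # num_gpu controls how many layers are loaded onto the GPU.
    for key in ("temperature", "top_p", "num_gpu"):
        if settings.get(key) is not None:
            options[key] = settings[key]
    return options
```

Dropping empty keys, rather than sending zeros or empty strings, is what lets Ollama fall back to its own defaults.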

Download

VOLlama v0.2.1

08 Apr 16:12

Changelog

  • No longer throws an error when there are no saved settings (new users).
  • Fixed a bug with follow-up questions that include an image.
  • Images, documents, and URLs uploaded mid-generation are no longer cleared.
  • The prompt is now cleared only when you press Escape while focused on a previous message.
  • Lists all models returned from the server.
  • Upgraded dependencies.
  • Other minor bug fixes and performance improvements.

Download

VOLlama v0.2.1-beta.2

17 Mar 14:19

Pre-release

Changelog

  • Fixed a bug with follow-up questions that include an image.
  • Images, documents, and URLs uploaded mid-generation are no longer cleared.
  • The prompt is now cleared only when you press Escape while focused on a previous message.
  • Lists all models returned from the server.
  • Upgraded dependencies.
  • Other minor bug fixes and performance improvements.

Download