Feature/raster blend boolean #4
base: main
Conversation
…menu; support per-layer globalCompositeOperation in compositor; UI to toggle and select color blend modes (multiply, screen, darken, lighten, color-dodge, color-burn, hard-light, soft-light, difference, hue, saturation, color, luminosity).
Pull Request Overview
This PR adds support for per-layer blend modes (global composite operations) to raster and control layers. Users can now apply different blending operations to individual layers and perform boolean operations between layers.
- Adds a new `globalCompositeOperation` optional field to raster and control layer states
- Implements UI components for selecting and managing blend modes
- Adds boolean operation support (intersection, cutout, cut-away, exclude) using composite operations
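As a rough sketch of what the new compositeOperations.ts module may contain — the exact contents are an assumption; only the file's described role and the blend-mode list come from this PR:

```typescript
// Hypothetical sketch of compositeOperations.ts: the available color blend
// modes as a constant array, with the type derived from it. (In the real
// module these are presumably exported.)
const COMPOSITE_OPERATIONS = [
  'multiply',
  'screen',
  'darken',
  'lighten',
  'color-dodge',
  'color-burn',
  'hard-light',
  'soft-light',
  'difference',
  'hue',
  'saturation',
  'color',
  'luminosity',
] as const;

// Deriving the union type from the array keeps the two from drifting apart.
type CompositeOperation = (typeof COMPOSITE_OPERATIONS)[number];
```

Deriving `CompositeOperation` from the array (rather than maintaining a separate union) is a common pattern for feeding the same list to both a zod enum schema and UI dropdown options.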
Reviewed Changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| compositeOperations.ts | Defines available composite operations as a constant array and exports the CompositeOperation type |
| types.ts | Adds globalCompositeOperation field to CanvasRasterLayerState schema using the new COMPOSITE_OPERATIONS enum |
| canvasSlice.ts | Adds reducer action for updating the globalCompositeOperation field on raster layers |
| CanvasEntityAdapterRasterLayer.ts | Implements syncing of composite operation to canvas element's mix-blend-mode CSS property for live preview |
| CanvasCompositorModule.ts | Updates compositing logic to apply per-layer composite operations with proper priority handling |
| RasterLayerMenuItemsCompositeOperation.tsx | New component for toggling blend mode on/off from layer menu |
| RasterLayerMenuItemsBooleanSubMenu.tsx | New component providing boolean operations submenu using composite operations |
| RasterLayerCompositeOperationSettings.tsx | New component for selecting specific blend mode from a dropdown |
| RasterLayer.tsx | Integrates composite operation settings panel into layer UI |
| RasterLayerMenuItems.tsx | Adds new menu items for composite operations and boolean operations |
| en.json | Adds translation strings for blend mode and boolean operation UI labels |
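Based on the file descriptions above, the compositor's per-layer handling can be sketched as follows — a minimal sketch under the assumption that an unset field falls back to the default `source-over` compositing:

```typescript
// Minimal sketch (assumed state shape and fallback behavior): a layer without
// an explicit blend mode composites normally ('source-over').
interface RasterLayerLike {
  globalCompositeOperation?: string;
}

const resolveCompositeOperation = (layer: RasterLayerLike): string =>
  layer.globalCompositeOperation ?? 'source-over';

// In the compositor, each layer would then be drawn roughly like:
//   ctx.globalCompositeOperation = resolveCompositeOperation(layer);
//   ctx.drawImage(layerCanvas, 0, 0);
```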
Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.
feat: Add Z-Image ControlNet support with spatial conditioning

Add comprehensive ControlNet support for Z-Image models including:

Backend:
- New ControlNet_Checkpoint_ZImage_Config for Z-Image control adapter models
- Z-Image control key detection (_has_z_image_control_keys) to identify control layers
- ZImageControlAdapter loader for standalone control models
- ZImageControlTransformer2DModel combining base transformer with control layers
- Memory-efficient model loading by building combined state dict. VRAM usage is high.
- Auto-detect control_in_dim from adapter weights (16 for V1, 33 for V2.0)
- Auto-detect n_refiner_layers from state dict
- Add zero-padding for V2.0's additional channels
- Use accelerate.init_empty_weights() for efficient model creation
- Add ControlNet_Checkpoint_ZImage_Config to frontend schema
- Add missing ControlNet_Checkpoint_ZImage_Config import
- Remove unused imports (Any, Dict, ADALN_EMBED_DIM, is_torch_version)
- Add strict=True to zip() calls
- Replace mutable list defaults with immutable tuples
- Replace dict() calls with literal syntax
- Sort imports in z_image_denoise.py
Implement Z-Image ControlNet as an Extension pattern (similar to FLUX ControlNet) instead of merging control weights into the base transformer. This provides:
- Lower memory usage (no weight duplication)
- Flexibility to enable/disable control per step
- Cleaner architecture with separate control adapter

Key implementation details:
- ZImageControlNetExtension: computes control hints per denoising step
- z_image_forward_with_control: custom forward pass with hint injection
- patchify_control_context: utility for control image patchification
- ZImageControlAdapter: standalone adapter with control_layers and noise_refiner

Architecture matches original VideoX-Fun implementation:
- Hints computed ONCE using INITIAL unified state (before main layers)
- Hints injected at every other main transformer layer (15 control blocks)
- Control signal added after each designated layer's forward pass

V2.0 ControlNet support (control_in_dim=33):
- Channels 0-15: control image latents
- Channels 16-31: reference image (zeros for pure control)
- Channel 32: inpaint mask (1.0 = don't inpaint, use control signal)
Add Z-Image Turbo and related models to the starter models list:
- Z-Image Turbo (full precision, ~13GB)
- Z-Image Turbo quantized (GGUF Q4_K, ~4GB)
- Z-Image Qwen3 Text Encoder (full precision, ~8GB)
- Z-Image Qwen3 Text Encoder quantized (GGUF Q6_K, ~3.3GB)
- Z-Image ControlNet Union (Canny, HED, Depth, Pose, MLSD, Inpainting)

The quantized Turbo model includes the quantized Qwen3 encoder as a dependency for automatic installation.
…i#8687)

* fix(ui): 🐛 `HotkeysModal` and `SettingsModal` initial focus

  Instead of using the `initialFocusRef` prop, the `Modal` component was focusing on the last available Button. This is a workaround that uses `tabIndex` instead, which seems to be working.

  Closes invoke-ai#8685

* style: 🚨 satisfy linter

---------

Co-authored-by: Lincoln Stein <[email protected]>
GGUF Z-Image models store x_pad_token and cap_pad_token with shape [dim], but diffusers ZImageTransformer2DModel expects [1, dim]. This caused a RuntimeError when loading GGUF-quantized Z-Image models. The fix dequantizes GGMLTensors first (since they don't support unsqueeze), then reshapes to add the batch dimension.
…e-ai#8690)

## Summary
Fix shape mismatch when loading GGUF-quantized Z-Image transformer models.

GGUF Z-Image models store `x_pad_token` and `cap_pad_token` with shape `[3840]`, but diffusers `ZImageTransformer2DModel` expects `[1, 3840]` (with batch dimension). This caused a `RuntimeError` on Linux systems when loading models like `z_image_turbo-Q4_K.gguf`.

The fix:
- Dequantizes GGMLTensors first (since they don't support `unsqueeze`)
- Reshapes the tensors to add the missing batch dimension

## Related Issues / Discussions
Reported by Linux user using:
- https://huggingface.co/leejet/Z-Image-Turbo-GGUF/resolve/main/z_image_turbo-Q4_K.gguf
- https://huggingface.co/worstplayer/Z-Image_Qwen_3_4b_text_encoder_GGUF/resolve/main/Qwen_3_4b-Q6_K.gguf

## QA Instructions
1. Install a GGUF-quantized Z-Image model (e.g., `z_image_turbo-Q4_K.gguf`)
2. Install a Qwen3 GGUF encoder
3. Run a Z-Image generation
4. Verify no `RuntimeError: size mismatch for x_pad_token` error occurs

## Merge Plan
None, straightforward fix.

## Checklist
- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
Add higher quality Q8_0 quantization option for Z-Image Turbo (~6.6GB) to complement existing Q4_K variant, providing better quality for users with more VRAM. Add dedicated Z-Image ControlNet Tile model (~6.7GB) for upscaling and detail enhancement workflows.
## Summary
Add Z-Image Turbo and related models to the starter models list for easy installation via the Model Manager:
- **Z-Image Turbo** - Full precision Diffusers format (~13GB)
- **Z-Image Turbo (quantized)** - GGUF Q4_K format (~4GB)
- **Z-Image Qwen3 Text Encoder** - Full precision (~8GB)
- **Z-Image Qwen3 Text Encoder (quantized)** - GGUF Q6_K format (~3.3GB)
- **Z-Image ControlNet Union** - Unified ControlNet supporting Canny, HED, Depth, Pose, MLSD, and Inpainting modes

The quantized Turbo model includes the quantized Qwen3 encoder as a dependency for automatic installation.

## Related Issues / Discussions
Builds on the Z-Image Turbo support added in main.

## QA Instructions
1. Open Model Manager → Starter Models
2. Search for "Z-Image"
3. Verify all 5 models appear with correct descriptions
4. Install the quantized version and confirm the Qwen3 encoder dependency is also installed

## Merge Plan
Standard merge, no special considerations.

## Checklist
- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
No longer needed because Z-Image works at 1.0
…-ai#8684)

* feat(model manager): 💄 refactor model manager bulk actions UI
* feat(model manager): 💄 tweak model list item ui for checkbox selects
* style(model manager): 🚨 satisfy the linter
* feat(model manager): 💄 tweak search and actions dropdown placement
* refactor(model manager): 🔥 remove unused `ModelListHeader` component
* fix(model manager): 🐛 list items overlapping sticky headers

---------

Co-authored-by: Lincoln Stein <[email protected]>
* feat(hotkeys): ✨ overhaul hotkeys modal UI
* fix(model manager): 🩹 improved check for hotkey search clear button
* fix(model manager): 🩹 remove unused exports
* feat(starter-models): add Z-Image Turbo starter models

  Add Z-Image Turbo and related models to the starter models list:
  - Z-Image Turbo (full precision, ~13GB)
  - Z-Image Turbo quantized (GGUF Q4_K, ~4GB)
  - Z-Image Qwen3 Text Encoder (full precision, ~8GB)
  - Z-Image Qwen3 Text Encoder quantized (GGUF Q6_K, ~3.3GB)
  - Z-Image ControlNet Union (Canny, HED, Depth, Pose, MLSD, Inpainting)

  The quantized Turbo model includes the quantized Qwen3 encoder as a dependency for automatic installation.

* feat(starter-models): add Z-Image Q8 quant and ControlNet Tile

  Add higher quality Q8_0 quantization option for Z-Image Turbo (~6.6GB) to complement existing Q4_K variant, providing better quality for users with more VRAM. Add dedicated Z-Image ControlNet Tile model (~6.7GB) for upscaling and detail enhancement workflows.

* feat(hotkeys): ✨ overhaul hotkeys modal UI
* feat(hotkeys modal): 💄 shrink add hotkey button
* fix(hotkeys): normalization and detection issues
* style: 🚨 satisfy the linter
* fix(hotkeys modal): 🩹 remove unused exports

---------

Co-authored-by: Alexander Eichhorn <[email protected]>
Co-authored-by: Lincoln Stein <[email protected]>
* feat(ui): add model path update for external models

  Add ability to update file paths for externally managed models (models with absolute paths). Invoke-controlled models (with relative paths in the models directory) are excluded from this feature to prevent breaking internal model management.

  - Add ModelUpdatePathButton component with modal dialog
  - Only show button for external models (absolute path check)
  - Add translations for path update UI elements

* Added support for Windows UNC paths in ModelView.tsx:38-41. The isExternalModel function now detects:
  - Unix absolute paths: /home/user/models/...
  - Windows drive paths: C:\Models\... or D:/Models/...
  - Windows UNC paths: \\ServerName\ShareName\... or //ServerName/ShareName/...

* fix(ui): validate path format in Update Path modal to prevent invalid paths

  When updating an external model's path, the new path is now validated to ensure it follows an absolute path format (Unix, Windows drive, or UNC). This prevents users from accidentally entering invalid paths that would cause the Update Path button to disappear, leaving them unable to correct the mistake.

* fix(ui): extract isExternalModel to separate file to fix circular dependency

  Moves the isExternalModel utility function to its own file to break the circular dependency between ModelView.tsx and ModelUpdatePathButton.tsx.

---------

Co-authored-by: Lincoln Stein <[email protected]>
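The path checks described in that commit can be sketched as a single predicate. The regexes below are illustrative reconstructions of the described behavior, not the exact code from ModelView.tsx:

```typescript
// Illustrative reconstruction of the external-model path check described above
// (Unix absolute, Windows drive, and Windows UNC paths). The exact regexes in
// the real isExternalModel are assumptions here.
const isExternalModel = (path: string): boolean => {
  const unixAbsolute = /^\//; // /home/user/models/...
  const windowsDrive = /^[A-Za-z]:[\\/]/; // C:\Models\... or D:/Models/...
  const windowsUnc = /^(\\\\|\/\/)[^\\/]+[\\/]/; // \\Server\Share\... or //Server/Share/...
  return unixAbsolute.test(path) || windowsDrive.test(path) || windowsUnc.test(path);
};
```

A relative path such as `sd-1/main/model.safetensors` matches none of the three patterns, so Invoke-controlled models stay excluded.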
…ke-ai#8692)

* fix(model-install): support multi-subfolder downloads for Z-Image Qwen3 encoder

  The Z-Image Qwen3 text encoder requires both text_encoder and tokenizer subfolders from the HuggingFace repo, but the previous implementation only downloaded the text_encoder subfolder, causing model identification to fail.

  Changes:
  - Add subfolders property to HFModelSource supporting '+' separated paths
  - Extend filter_files() and download_urls() to handle multiple subfolders
  - Update _multifile_download() to preserve subfolder structure
  - Make Qwen3Encoder probe check both nested and direct config.json paths
  - Update Qwen3EncoderLoader to handle both directory structures
  - Change starter model source to text_encoder+tokenizer

* ruff format
* fix schema description
* fix schema description

---------

Co-authored-by: Lincoln Stein <[email protected]>
* feat(nodes): add Prompt Template node
Add a new node that applies Style Preset templates to prompts in workflows.
The node takes a style preset ID and positive/negative prompts as inputs,
then replaces {prompt} placeholders in the template with the provided prompts.
This makes Style Preset templates accessible in Workflow mode, enabling
users to apply consistent styling across their workflow-based generations.
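The placeholder substitution the node performs can be sketched language-agnostically — the actual node is Python, and the function name here is hypothetical:

```typescript
// Illustrative sketch of the {prompt} substitution described above. The real
// Prompt Template node is a Python invocation; this name is hypothetical.
const applyStylePreset = (template: string, prompt: string): string =>
  // Replace every {prompt} placeholder with the provided prompt.
  template.split('{prompt}').join(prompt);
```

Using split/join (rather than a single `replace`) handles templates that mention `{prompt}` more than once.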
* feat(nodes): add StylePresetField for database-driven preset selection
Adds a new StylePresetField type that enables dropdown selection of
style presets from the database in the workflow editor.
Changes:
- Add StylePresetField to backend (fields.py)
- Update Prompt Template node to use StylePresetField instead of string ID
- Add frontend field type definitions (zod schemas, type guards)
- Create StylePresetFieldInputComponent with Combobox
- Register field in InputFieldRenderer and nodesSlice
- Add translations for preset selection
* fix schema.ts on windows.
* chore(api): regenerate schema.ts after merge
---------
Co-authored-by: Claude <[email protected]>
…ke-ai#8694)

* feat(hotkeys modal): ⚡ loading state + performance improvements
* feat(hotkeys modal): add tooltip to edit button and adjust layout spacing
* style(hotkeys modal): 🚨 satisfy the linter

---------

Co-authored-by: Lincoln Stein <[email protected]>
* Feature: Add Tag System for user made Workflows
* feat(ui): display tags on workflow library tiles

  Show workflow tags at the bottom of each tile in the workflow browser, making it easier to identify workflow categories at a glance.

---------

Co-authored-by: Lincoln Stein <[email protected]>
Add support for loading Flux LoRA models in the xlabs format, which uses
keys like `double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight`.
The xlabs format maps:
- lora1 -> img_attn (image attention stream)
- lora2 -> txt_attn (text attention stream)
- qkv -> query/key/value projection
- proj -> output projection
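Detection of this key format can be sketched with a single pattern. This regex is an illustrative assumption built from the key shape quoted above, not the actual code in formats.py:

```typescript
// Illustrative detection of the xlabs Flux LoRA key format described above:
// double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight
const XLABS_KEY_RE =
  /^double_blocks\.\d+\.processor\.(qkv|proj)_lora[12]\.(down|up)\.weight$/;

const looksLikeXLabsKey = (key: string): boolean => XLABS_KEY_RE.test(key);
```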
Changes:
- Add FluxLoRAFormat.XLabs enum value
- Add flux_xlabs_lora_conversion_utils.py with detection and conversion
- Update formats.py to detect xlabs format
- Update lora.py loader to handle xlabs format
- Update model probe to accept recognized Flux LoRA formats
- Add unit tests for xlabs format detection and conversion
Co-authored-by: Lincoln Stein <[email protected]>
* fix(prompts): 🐛 prompt attention adjust elevation edge cases, added tests
* refactor(prompts): ♻️ create attention edit helper for prompt boxes
* feat(prompts): ✨ apply attention keybinds to negative prompt
* feat(prompts): 🚀 reconsider behaviors, simplify code
* fix(prompts): 🐛 keybind attention update not tracked by undo/redo
* feat(prompts): ✨ overhaul prompt attention behavior
* fix(prompts): 🩹 remove unused type
* fix(prompts): 🩹 remove unused `Token` type

---------

Co-authored-by: Lincoln Stein <[email protected]>
…stalling GGUF files (invoke-ai#8699)

* (bugfix)(mm) work around Windows being unable to rmtree tmp directories after GGUF install
* (style) fix ruff error
* (fix) add workaround for Windows Permission Denied on GGUF file move() call
* (fix) perform torch copy() in GGUF reader to avoid deletion failures on Windows
* (style) fix ruff formatting issues
* chore: bump version to v6.10.0rc1
* docs: fix names of code owners in release doc
* feat: Add Regional Guidance support for Z-Image model

  Implements regional prompting for Z-Image (S3-DiT Transformer) allowing different prompts to affect different image regions using attention masks.

  Backend changes:
  - Add ZImageRegionalPromptingExtension for mask preparation
  - Add ZImageTextConditioning and ZImageRegionalTextConditioning data classes
  - Patch transformer forward to inject 4D regional attention masks
  - Use additive float mask (0.0 attend, -inf block) in bfloat16 for compatibility
  - Alternate regional/full attention layers for global coherence

  Frontend changes:
  - Update buildZImageGraph to support regional conditioning collectors
  - Update addRegions to create z_image_text_encoder nodes for regions
  - Update addZImageLoRAs to handle optional negCond when guidance_scale=0
  - Add Z-Image validation (no IP adapters, no autoNegative)

* @Pfannkuchensack Fix windows path again
* ruff check fix
* ruff formating
* fix(ui): Z-Image CFG guidance_scale check uses > 1 instead of > 0

  Changed the guidance_scale check from > 0 to > 1 for Z-Image models. Since Z-Image uses guidance_scale=1.0 as "no CFG" (matching FLUX convention), negative conditioning should only be created when guidance_scale > 1.

---------

Co-authored-by: Lincoln Stein <[email protected]>
Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to recognize Z-Image specific LoRA formats that use different key patterns than SD/SDXL LoRAs.

Z-Image LoRAs (including DoRA format) use keys like:
- diffusion_model.layers.X.attention.to_k.lora_down.weight
- diffusion_model.layers.X.attention.to_k.dora_scale

The base LyCORIS config only checked for lora_A.weight/lora_B.weight suffixes, missing the lora_down.weight/lora_up.weight and dora_scale patterns used by Z-Image LoRAs.
Two fixes for Z-Image LoRA support:

1. Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to recognize Z-Image specific LoRA formats that use different key patterns than SD/SDXL LoRAs. Z-Image LoRAs use lora_down.weight/lora_up.weight and dora_scale suffixes instead of lora_A.weight/lora_B.weight.

2. Fix _group_by_layer in z_image_lora_conversion_utils.py to correctly group LoRA keys by layer name. The previous logic used rsplit with maxsplit=2, which incorrectly grouped keys like:
   - "to_k.alpha" -> layer "diffusion_model.layers.17.attention"
   - "lora_down.weight" -> layer "diffusion_model.layers.17.attention.to_k"

   Now uses suffix matching to ensure all keys for a layer are grouped together (alpha, dora_scale, lora_down.weight, lora_up.weight).
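The suffix-matching approach in fix 2 can be sketched as follows — an illustrative TypeScript rendering of logic that actually lives in Python; the function name is an assumption:

```typescript
// Illustrative sketch of suffix-based layer grouping: strip a known suffix to
// recover the layer name, so "to_k.alpha" and "to_k.lora_down.weight" land in
// the same group. (The real _group_by_layer is Python.)
const KNOWN_SUFFIXES = ['lora_down.weight', 'lora_up.weight', 'dora_scale', 'alpha'];

const layerNameForKey = (key: string): string => {
  for (const suffix of KNOWN_SUFFIXES) {
    if (key.endsWith('.' + suffix)) {
      return key.slice(0, key.length - suffix.length - 1);
    }
  }
  return key; // unknown suffix: leave the key as its own group
};
```

Unlike `rsplit(maxsplit=2)`, this cannot split a multi-part suffix like `lora_down.weight` in the wrong place.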
…Pfannkuchensack/InvokeAI into fix/z-image-lora-dora-detection
…i#8709)

## Summary
Fix Z-Image LoRA/DoRA model detection failing during installation.

Z-Image LoRAs use different key patterns than SD/SDXL LoRAs. The base `LoRA_LyCORIS_Config_Base` class only checked for key suffixes like `lora_A.weight` and `lora_B.weight`, but Z-Image LoRAs (especially those in DoRA format) use:
- `lora_down.weight` / `lora_up.weight` (standard LoRA format)
- `dora_scale` (DoRA weight decomposition)

This PR overrides `_validate_looks_like_lora` in `LoRA_LyCORIS_ZImage_Config` to recognize Z-Image specific patterns:
- Keys starting with `diffusion_model.layers.` (Z-Image S3-DiT architecture)
- Keys ending with `lora_down.weight`, `lora_up.weight`, `lora_A.weight`, `lora_B.weight`, or `dora_scale`

## Related Issues / Discussions
Fixes installation of Z-Image LoRAs trained with DoRA (Weight-Decomposed Low-Rank Adaptation).

## QA Instructions
1. Download a Z-Image LoRA in DoRA format (e.g., from CivitAI with keys like `diffusion_model.layers.X.attention.to_k.lora_down.weight`)
2. Try to install the LoRA via Model Manager
3. Verify the model is recognized as a Z-Image LoRA and installs successfully
4. Verify the LoRA can be applied when generating with Z-Image

## Merge Plan
Standard merge, no special considerations.

## Checklist
- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _❗Changes to a redux slice have a corresponding migration_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
* feat(prompts): 💄 increase prompt font size
* style(prompts): 🚨 satisfy linter
Summary
Adds two new features that expose the utility of `globalCompositeOperation` on canvas raster layers.
Color Blend mode can be enabled on a layer via the right-click menu and will update persistently as the user draws and edits. Merging, creating a raster layer from the current view, or applying booleans will bake the composited view in.
Boolean operations are exposed via a right-click submenu. They are: intersection, cutout, cut-away, and exclude.
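One plausible way to realize these booleans with `globalCompositeOperation` — the specific composite operation chosen for each boolean below is an assumption for illustration, not taken from this PR's code:

```typescript
// Hypothetical mapping of the boolean operations onto canvas composite
// operations (each is applied when drawing one layer onto the other).
const BOOLEAN_OP_TO_COMPOSITE: Record<string, string> = {
  intersection: 'source-in', // keep only where both layers overlap
  cutout: 'destination-out', // erase the lower layer where the upper covers it
  'cut-away': 'source-out', // keep the upper layer only outside the lower
  exclude: 'xor', // keep the non-overlapping areas of both
};
```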
Related Issues / Discussions
QA Instructions
Merge Plan