The Invoke canvas.
Plate-Appropriate Background Generation for VFX Compositing

The Plate-Appropriate Background Generation for VFX Compositing workflow generates a photorealistic or stylized background that integrates seamlessly into a VFX plate, enabling efficient set extensions and scene augmentation.

This enables VFX artists to reduce reliance on traditional matte painting, making it faster and easier to match backgrounds to live-action footage.

How to Use It

  1. Upload Your Plate Image – Begin by uploading the main plate image you are working with. This is the key reference for generating a matching background.
  2. Describe the Scene – Enter a description of the background elements and textures you want in the provided input field. This helps guide the AI in generating a visually consistent image.
  3. Adjust Structural Adherence – Use the slider to define how closely the generated background should adhere to the structural elements of the plate image (0 for more creative flexibility, 100 for strict adherence).
  4. Choose Your Model – Select the appropriate SDXL-based model to generate the background.
  5. Generate and Save – Once your parameters are set, click ‘Invoke’ to generate the background. The output is saved to a designated board for review and final compositing.
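The structural-adherence slider in step 3 can be understood as a normalized control weight. Here is a minimal sketch of that mapping; the function name and the 0.0–1.0 conditioning range are illustrative assumptions, not Invoke's actual internals:

```python
def adherence_to_control_scale(slider_value: int) -> float:
    """Map the 0-100 structural-adherence slider to a 0.0-1.0
    control-conditioning scale (hypothetical mapping)."""
    if not 0 <= slider_value <= 100:
        raise ValueError("slider value must be between 0 and 100")
    return slider_value / 100.0

# A setting of 75 keeps most of the plate's structure while
# leaving some creative flexibility.
print(adherence_to_control_scale(75))  # → 0.75
```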

The generated image will include a new foreground subject as well as a new background. The background can be isolated in your preferred photo-editing application or by using the control canvas.
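If a depth map is available for the generated image, isolating the background amounts to masking out near-camera pixels. The sketch below assumes a normalized depth map where 0 is near and 1 is far; the function name and threshold are illustrative, not part of Invoke's tooling:

```python
import numpy as np

def isolate_background(image: np.ndarray, depth: np.ndarray,
                       threshold: float = 0.5) -> np.ndarray:
    """Zero out near-camera (foreground) pixels using a normalized
    depth map, keeping only the distant background.
    Depth convention (0 = near, 1 = far) is an assumption."""
    mask = depth >= threshold          # True where the scene is far away
    return image * mask[..., None]     # broadcast mask across RGB channels

# Toy 2x2 RGB image: two pixels are "near" (masked out), two are "far".
img = np.ones((2, 2, 3))
depth = np.array([[0.1, 0.9],
                  [0.9, 0.1]])
bg = isolate_background(img, depth)
```

In practice the same idea applies per-pixel at full resolution, with a soft (feathered) mask instead of a hard threshold for cleaner composites.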

How It Works

  1. Plate Image Processing – The uploaded image is analyzed to extract key structural and depth information, ensuring the generated background aligns with the scene.
  2. Prompt Construction – The provided scene description is combined with cinematic still elements and photography metadata (e.g., focal length, aperture settings) to refine the AI’s output.
  3. Model Selection & Control Processing – The SDXL-based model is loaded, and structural adherence is applied using ControlNet depth estimation to maintain compositional integrity.
  4. Noise and Denoising Processing – The system generates a noise map based on the input parameters and refines the image through denoising steps, ensuring realistic textures and details.
  5. Final Image Output – The refined background is rendered and saved to a board for further adjustments or compositing into the VFX pipeline.
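The prompt-construction step (step 2 above) can be sketched as a simple template that appends cinematic-still and camera-metadata tags to the artist's scene description. The wording and defaults below are illustrative assumptions; Invoke's internal prompt template is not shown in this document:

```python
def build_plate_prompt(scene: str,
                       focal_length_mm: int = 35,
                       aperture: str = "f/2.8") -> str:
    """Combine the artist's scene description with cinematic-still
    and photography-metadata tags (hypothetical template)."""
    return (
        f"{scene}, cinematic still, photorealistic, "
        f"{focal_length_mm}mm lens, {aperture}"
    )

prompt = build_plate_prompt("foggy harbor at dawn, distant cranes")
```

Keeping camera metadata (focal length, aperture) in the prompt helps the generated background match the lens characteristics of the live-action plate.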

FAQs

The workflow is loading with errors or telling me that I don't have a model in my project. What do I do?
Can I use a different model with these workflows?
How should I structure my prompts?
The result I generated isn’t what I expected. What should I do?