The Invoke canvas.
Level Grey Scale Layout to Full Rendered Concept

The Level Grey Scale Layout to Full Rendered Concept workflow converts a graybox or rough level sketch into a fully detailed environment concept.

This helps level designers quickly visualize spaces and iterate on layouts before committing to final assets.

How To Use

  1. Upload Your 3D Blockout Image: Begin by uploading a rough 3D blockout or base model of the object you want to refine. This can be a grayscale render, a simple model with minimal details, or a viewport screenshot.
  2. Describe the Object and Desired Textures: Input a short description of the object and the textures you want to apply. For example, if you’re designing a futuristic weapon, you might describe materials like “metallic, reflective surfaces, glowing energy accents, and high-tech engravings.”
  3. Generate the Rendered Concept: Click Invoke to process your blockout through the AI pipeline. The system will apply textures, lighting, and rendering adjustments to convert your rough model into a professional-grade concept image.

How It Works

This workflow transforms a greyscale layout into a high-fidelity rendered concept using a structured AI pipeline. Below is a breakdown of what happens at each stage:

1. Upload & Input Processing

  • The user uploads a 3D blockout image, which serves as the foundation for the rendering process.
  • The workflow extracts the width and height of the input image to ensure consistency in processing.
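
As a minimal illustration (a standalone Python sketch, not Invoke's internal code; the filename is a placeholder), reading the input dimensions looks like this:

```python
from PIL import Image

# Load the uploaded blockout; "blockout.png" is a placeholder path.
blockout = Image.open("blockout.png").convert("RGB")

# Width and height are reused later so the render matches the input.
width, height = blockout.size
```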

2. Structural Analysis & Control Processing

  • The workflow applies edge detection to extract the outlines and forms of the blockout, helping the AI understand key structural details.
  • A depth map is generated to capture the spatial relationships and relative distances of different parts of the object. This helps the AI maintain accurate 3D perception when applying textures and lighting.
  • The extracted edges and depth map are fed into ControlNet models to guide the AI model, ensuring that the final output maintains the original form and depth.
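
Outside of Invoke, the same two control inputs can be approximated with OpenCV and a Hugging Face depth-estimation pipeline; a rough sketch, with the Canny thresholds and the default depth model as assumptions:

```python
import cv2
import numpy as np
from PIL import Image
from transformers import pipeline

blockout = Image.open("blockout.png").convert("RGB")  # placeholder path

# Canny edge detection extracts the blockout's outlines and forms.
edges = cv2.Canny(np.array(blockout), 100, 200)  # thresholds are assumptions
edge_map = Image.fromarray(edges).convert("RGB")

# Monocular depth estimation captures relative distances in the scene.
depth_estimator = pipeline("depth-estimation")  # default model; a stand-in, not Invoke's choice
depth_map = depth_estimator(blockout)["depth"].convert("RGB")
```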

3. Defining the Render Style

  • The user inputs a text description of the object and the materials they want (e.g., “an energy railgun, metallic, glowing edges, futuristic textures”).
  • The system automatically appends additional rendering terms to ensure a high-quality result, such as: “game rendering, high-poly model, PBR materials, next-gen rendering, hero prop, CGI render”
  • A negative prompt is also applied to remove unwanted styles (e.g., “blurry, sketch, 2D, flat”).
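
Conceptually, the prompt assembly is plain string concatenation; a small sketch using the example terms above:

```python
# User-supplied description of the object and materials.
user_prompt = "an energy railgun, metallic, glowing edges, futuristic textures"

# Quality terms the workflow appends automatically.
quality_terms = ("game rendering, high-poly model, PBR materials, "
                 "next-gen rendering, hero prop, CGI render")
positive_prompt = f"{user_prompt}, {quality_terms}"

# Negative prompt filters out unwanted styles.
negative_prompt = "blurry, sketch, 2D, flat"
```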

4. AI Model Processing

  • The workflow loads the selected AI model (an SDXL-based checkpoint, e.g., JuggernautXv10) to generate a high-resolution render.
  • A random seed is applied to introduce variation in results.
  • The AI applies denoising and texturing using a DPM++ 2M SDE (Karras) sampler, which iteratively refines the details.
  • The AI incorporates ControlNet guidance from the depth and edge maps, ensuring that the output remains faithful to the original 3D blockout.
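
A rough diffusers-based equivalent of this stage (not Invoke's internal implementation; all checkpoint names are assumptions) loads an SDXL model with depth and Canny ControlNets and configures the DPM++ 2M SDE Karras scheduler:

```python
import random
import torch
from diffusers import (
    ControlNetModel,
    DPMSolverMultistepScheduler,
    StableDiffusionXLControlNetPipeline,
)

# Depth and Canny ControlNets keep the render faithful to the blockout.
controlnets = [
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-X-v10",  # any SDXL checkpoint; this name is an assumption
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M SDE with Karras sigmas, matching the sampler named above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)

# Random seed so each run introduces variation.
seed = random.randrange(2**32)
generator = torch.Generator("cuda").manual_seed(seed)
```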

5. Final Render & Output

  • The processed image is generated with textures, materials, and lighting enhancements.
  • The rendered concept is saved to a design board, allowing users to iterate further or use it in their workflow.
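
Continuing the same sketch, the final generation call combines the prompts, control maps, dimensions, and seed, then writes the result to disk (Invoke itself saves to a board; the conditioning scales here are assumptions):

```python
image = pipe(
    prompt=positive_prompt,
    negative_prompt=negative_prompt,
    image=[depth_map, edge_map],               # ControlNet conditioning images
    controlnet_conditioning_scale=[0.8, 0.6],  # guidance strengths; assumptions
    width=width,    # SDXL expects dimensions divisible by 8
    height=height,
    generator=generator,
).images[0]

image.save("rendered_concept.png")
```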

FAQs

The workflow is loading with errors or telling me that I don't have a model in my project. What do I do?
Can I use a different model with these workflows?
How should I structure my prompts?
The result I generated isn’t what I expected. What should I do?
"Moving towards creation of assets that will actually be placed in game is more demanding. However, a number of companies like Invoke… are focusing on developing effective specialized tools for game artists for both concept art and production assets."