# diffusion

## Diffuser
Bases: `WrapperModule`

Wrapper module that loads a diffusion pipeline and exposes its components as submodules.

All pipeline components that are `torch.nn.Module` or `PreTrainedTokenizerBase` instances are registered as attributes, so they appear in the Envoy tree and can be traced. The exact component names depend on the pipeline (e.g. `unet` for Stable Diffusion, `transformer` for Flux, plus `vae`, `text_encoder`, etc.).
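A minimal sketch of this registration pattern, using hypothetical stand-in classes (`FakeModule` plays the role of `torch.nn.Module`; the real `Diffuser` iterates actual pipeline components):

```python
class FakeModule:
    """Stand-in for torch.nn.Module / PreTrainedTokenizerBase."""


class DiffuserSketch:
    def __init__(self, components):
        self.pipeline = components
        for name, comp in components.items():
            if isinstance(comp, FakeModule):
                # Module-like components become attributes and are traceable;
                # anything else (e.g. a scheduler object) is skipped.
                setattr(self, name, comp)


d = DiffuserSketch({"unet": FakeModule(), "vae": FakeModule(), "scheduler": object()})
```

Here `d.unet` and `d.vae` exist as attributes, while `scheduler` does not.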
Can be constructed in two ways:

- From pretrained (default): pass a pipeline class and repo ID to download and load full weights via `from_pretrained()`.
- From a pre-built pipeline: pass a `DiffusionPipeline` instance directly (used by `DiffusionModel._load_meta` for meta-tensor initialization).
| PARAMETER | DESCRIPTION |
|---|---|
| `automodel_or_pipeline` | Either a pipeline class to load via `from_pretrained()`, or a pre-built `DiffusionPipeline` instance. |
| `*args` | Forwarded to the pipeline's `from_pretrained()` call. |
| `**kwargs` | Forwarded to the pipeline's `from_pretrained()` call. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `pipeline` | The underlying diffusers pipeline. TYPE: `DiffusionPipeline` |
### `generate`
Run the full diffusion pipeline.

Calls the pipeline's `__call__` method (not `.generate()`, which does not exist on `DiffusionPipeline`).
| RETURNS | DESCRIPTION |
|---|---|
| `Any` | The pipeline output (typically a dataclass with an `images` field). |
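The delegation can be sketched with hypothetical stubs (the real method forwards to the wrapped diffusers pipeline):

```python
class PipelineStub:
    """Stand-in for a DiffusionPipeline: callable, but has no .generate()."""

    def __call__(self, prompt, **kwargs):
        return {"prompt": prompt, "kwargs": kwargs}


class DiffuserSketch:
    def __init__(self, pipeline):
        self.pipeline = pipeline

    def generate(self, *args, **kwargs):
        # Delegate to the pipeline's __call__, since DiffusionPipeline
        # defines no .generate() method of its own.
        return self.pipeline(*args, **kwargs)


out = DiffuserSketch(PipelineStub()).generate("A cat", num_inference_steps=50)
```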
## DiffusionModel
Bases: `HuggingFaceModel`

NNsight wrapper for diffusion pipelines.

Wraps any `diffusers.DiffusionPipeline` so that its components can be traced and intervened on. Works with UNet-based pipelines (Stable Diffusion) and transformer-based pipelines (Flux, DiT) alike; the denoiser is accessible as whatever attribute the pipeline exposes (`model.unet` or `model.transformer`).

By default, `.trace()` runs the full diffusion pipeline with `num_inference_steps=1` for fast single-step tracing. Use `.generate()` to run the full pipeline with the default or user-specified number of inference steps.
When `dispatch=False` (the default), only lightweight config files are downloaded and the model architecture is created with meta tensors (no weight memory is allocated). Real weights are loaded automatically on the first `.trace()` or `.generate()` call, or explicitly via `.dispatch()`.
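The lazy-dispatch behavior can be sketched in plain Python (all names here are hypothetical stand-ins; the real implementation materializes meta tensors rather than a string):

```python
class LazyDiffusionModelSketch:
    def __init__(self, repo_id, dispatch=False):
        self.repo_id = repo_id
        self.weights = None  # meta state: architecture only, no weight memory
        if dispatch:
            self.dispatch()

    def dispatch(self):
        # Idempotent: only loads once.
        if self.weights is None:
            self.weights = f"loaded:{self.repo_id}"  # stand-in for a real download

    def trace(self, prompt):
        self.dispatch()  # first trace triggers the real weight load
        return prompt


m = LazyDiffusionModelSketch("stabilityai/stable-diffusion-2-1")
before = m.weights   # still None: nothing downloaded yet
m.trace("A cat")
after = m.weights    # populated by the first trace
```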
Examples:

```python
# Stable Diffusion (UNet-based)
sd = DiffusionModel("stabilityai/stable-diffusion-2-1")
with sd.generate("A cat", num_inference_steps=50) as tracer:
    for step in tracer.iter[:]:
        denoiser_out = sd.unet.output.save()

# Flux (Transformer-based)
flux = DiffusionModel("black-forest-labs/FLUX.1-schnell")
with flux.trace("A cat"):
    denoiser_out = flux.transformer.output.save()
```
| PARAMETER | DESCRIPTION |
|---|---|
| `*args` | Forwarded to the `HuggingFaceModel` base class. |
| `automodel` | The diffusers pipeline class (or a string name resolvable from `diffusers.pipelines`). TYPE: `Type[DiffusionPipeline]` |
| `**kwargs` | Forwarded to the pipeline's `from_pretrained()` call. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `automodel` | The pipeline class used for loading. TYPE: `Type[DiffusionPipeline]` |
### `automodel` instance-attribute

```python
automodel: Type[DiffusionPipeline] = (
    automodel if not isinstance(automodel, str) else getattr(pipelines, automodel)
)
```
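The string-resolution branch can be exercised in isolation with a stand-in for `diffusers.pipelines` (a namespace holding pipeline classes; the class name used here is illustrative):

```python
import types

# Stand-in for the diffusers.pipelines module.
pipelines = types.SimpleNamespace(FluxPipeline=type("FluxPipeline", (), {}))


def resolve_automodel(automodel):
    # Mirrors the expression above: classes pass through unchanged,
    # strings are looked up as attributes of the pipelines namespace.
    return automodel if not isinstance(automodel, str) else getattr(pipelines, automodel)


cls = resolve_automodel("FluxPipeline")  # resolved to the class itself
```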
### `__call__`
Run the full diffusion pipeline with a 1-step default.

Used by `.trace()`; defaults to `num_inference_steps=1` for fast single-step tracing unless the user overrides it.
| PARAMETER | DESCRIPTION |
|---|---|
| `prepared_inputs` | The prompt list produced by input preparation. |
| `*args` | Additional positional arguments for the pipeline. |
| `**kwargs` | Keyword arguments forwarded to the pipeline. |
| RETURNS | DESCRIPTION |
|---|---|
| | The pipeline output passed through the wrapper module. |
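The 1-step default amounts to a `setdefault` before forwarding, sketched here with a hypothetical pipeline stub that simply reports the step count it received:

```python
def call_sketch(pipeline, prepared_inputs, **kwargs):
    # Fill in the fast-tracing default only when the caller did not supply one.
    kwargs.setdefault("num_inference_steps", 1)
    return pipeline(prepared_inputs, **kwargs)


def pipeline_stub(prompts, num_inference_steps=50):
    return num_inference_steps


steps_default = call_sketch(pipeline_stub, ["A cat"])                        # 1
steps_user = call_sketch(pipeline_stub, ["A cat"], num_inference_steps=20)   # 20
```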
### `__nnsight_generate__`
Run the full diffusion pipeline for `.generate()` contexts.

Unlike `__call__`, this does not set a default for `num_inference_steps`, allowing the pipeline's own default (or the user's explicit value) to take effect.
| PARAMETER | DESCRIPTION |
|---|---|
| `prepared_inputs` | The prompt list produced by input preparation. |
| `*args` | Additional positional arguments for the pipeline. |
| `**kwargs` | Keyword arguments forwarded to the pipeline. |

| RETURNS | DESCRIPTION |
|---|---|
| | The pipeline output passed through the wrapper module. |
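The contrast with `__call__` can be shown with the same hypothetical pipeline stub: here no default is injected, so the pipeline's own default applies unless the user overrides it.

```python
def nnsight_generate_sketch(pipeline, prepared_inputs, **kwargs):
    # No setdefault here: whatever the caller passes (or nothing) goes through.
    return pipeline(prepared_inputs, **kwargs)


def pipeline_stub(prompts, num_inference_steps=50):
    return num_inference_steps


steps = nnsight_generate_sketch(pipeline_stub, ["A cat"])  # pipeline's own default: 50
```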