Envoy#

class nnsight.intervention.envoy.Envoy(module: Module, interleaver: Interleaver | None = None, path: str | None = 'model', rename: Dict[str, str | List[str]] | None = None)[source]#

A proxy class that wraps a PyTorch module to enable intervention during execution.

This class provides access to module inputs and outputs during forward passes, and allows for modification of these values through an interleaving mechanism. It serves as the primary interface for inspecting and modifying the behavior of neural network modules during execution.

path#

The module’s location in the model hierarchy. Example: “model.encoder.layer1” indicates this module is the first layer of the encoder in the model.

Type:

str

_module#

The underlying PyTorch module

Type:

torch.nn.Module

_source#

Source code representation of the module

Type:

Optional[EnvoySource]

_interleaver#

Interleaver for managing execution flow

Type:

Optional[Interleaver]

_default_mediators#

List of default mediators created with .edit

Type:

List[List[str]]

_children#

List of child Envoys

Type:

List[Envoy]

_alias#

Aliaser object for managing aliases

Type:

Aliaser

clear_edits()[source]#

Clear all edits for this Envoy.
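
For instance, a minimal sketch (assuming a GPT-2 LanguageModel, as in the other examples on this page) of setting a default edit in place and then removing it:

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Set a default intervention directly on this model.
>>> with model.edit(inplace=True):
>>>     model.transformer.h[0].attn.output[:] = 0
>>> # Remove all default interventions registered above.
>>> model.clear_edits()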

cpu(*args, **kwargs)[source]#

Move the module to the CPU.

cuda(*args, **kwargs)[source]#

Move the module to the GPU.

property device: device | None#

Get the device the module is on. Finds the first parameter and returns its device.
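
A short usage sketch (assuming a dispatched LanguageModel, as in the other examples on this page):

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Reports the device of the first parameter found under this Envoy.
>>> print(model.transformer.h[0].device)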

edit(*, inplace: bool = False)[source]#

Create an editing tracer for this module, used to set default interventions. The tracer does not execute the module; instead, it records default interventions that are applied on all future executions.

Edits can be cleared with Envoy.clear_edits().

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Now the first layer attention output will always be 0.
>>> with model.edit() as edited_model:
>>>     edited_model.transformer.h[0].attn.output[:] = 0
>>> with model.trace("Hello World"):
>>>     output = model.output.save()
>>> # The original model will have the default output.
>>> print(output)
>>> with edited_model.trace("Hello World"):
>>>     edited_output = edited_model.output.save()
>>> # The edited model will have the output after our intervention.
>>> print(edited_output)
Parameters:

inplace (bool, optional) – Whether to edit in place. Defaults to False.

Returns:

An EditingTracer for this module

Return type:

(EditingTracer)

export_edits(name: str, export_dir: str | None = None, variant: str = '__default__')[source]#

TODO

Parameters:
  • name (str) – _description_

  • export_dir (Optional[str], optional) – _description_. Defaults to None.

  • variant (str, optional) – _description_. Defaults to ‘__default__’.

Raises:

ValueError – _description_

get(path: str) Object[source]#

Gets the Envoy/Proxy via its path.

Example

>>> model = nnsight.LanguageModel("openai-community/gpt2")
>>> module = model.get('transformer.h.0.mlp')
>>> with model.trace("Hello"):
>>>     value = model.get('transformer.h.0.mlp.output').save()

Parameters:

path (str) – ‘.’ separated path.

Returns:

Fetched Envoy/Proxy

Return type:

Union[Envoy, InterventionProxyType]

import_edits(name: str, export_dir: str | None = None, variant: str = '__default__')[source]#

TODO

Parameters:
  • name (str) – _description_

  • export_dir (Optional[str], optional) – _description_. Defaults to None.

  • variant (str, optional) – _description_. Defaults to ‘__default__’.

property input: Object#

Get the first input to the module’s forward pass.

This is a convenience property that returns just the first input value from all inputs passed to the module. That is, the first positional argument, or the first keyword argument if there are no positional arguments.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     hidden_states = model.transformer.h[0].attn.input.save()
>>> print(hidden_states)
Returns:

The first input value

property inputs: Tuple[Tuple[Object], Dict[str, Object]]#

Get the inputs to the module’s forward pass.

This property provides access to all input values passed to the module during the forward pass.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     args, kwargs = model.transformer.h[0].attn.inputs
Returns:

The module’s input values as a tuple of positional and keyword arguments, i.e. (args, kwargs).

property interleaving: bool#

Check if the Envoy is currently interleaving.

Returns:

True if the Envoy is interleaving, False otherwise
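
A short sketch of checking the flag (the expectation in the comment is an assumption, not a documented guarantee):

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Presumably False here, since no trace is being interleaved.
>>> print(model.interleaving)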

modules(include_fn: Callable[[Envoy], bool] = None, names: bool = False) List[Envoy][source]#

Get all modules in the Envoy tree.

This method returns all Envoys in the tree, optionally filtered by an inclusion function.

Parameters:
  • include_fn – Optional function to filter modules

  • names – Whether to include module names in the result

Returns:

A list of Envoys or (name, Envoy) tuples
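
For example, a sketch of filtering with include_fn; the predicate below is illustrative and relies only on the documented path attribute:

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Collect only the attention Envoys, identified by their path.
>>> attn_envoys = model.modules(include_fn=lambda envoy: envoy.path.endswith('.attn'))
>>> print(len(attn_envoys))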

named_modules(*args, **kwargs) List[Tuple[str, Envoy]][source]#

Returns all Envoys in the Envoy tree along with their name/module_path.

This is a convenience method that calls modules() with names=True.

Parameters:
  • include_fn (Callable, optional) – Optional function to be run against all Envoys to check whether they should be included in the final collection of Envoys. Defaults to None.

  • *args – Additional arguments to pass to modules()

  • **kwargs – Additional keyword arguments to pass to modules()

Returns:

Included Envoys and their names/module_paths.

Return type:

List[Tuple[str, Envoy]]
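
A short sketch of iterating over the returned (name, Envoy) pairs:

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> for name, envoy in model.named_modules():
>>>     print(name)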

property output: Object#

Get the output of the module’s forward pass.

This property allows access to the return values produced by the module during the forward pass.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     attn = model.transformer.h[0].attn.output[0].save()
>>> print(attn)
Returns:

The module’s output values

scan(*args, **kwargs)[source]#

Just like .trace() but runs the model in fake tensor mode to validate operations and inspect tensor shapes.

This method returns a tracer that runs the model in fake tensor mode to validate operations and inspect tensor shapes without performing actual computation. This is useful for:

  • Validating that operations will work with given input shapes

  • Inspecting the shapes and types of tensors that would flow through the model

  • Debugging shape mismatches or other tensor-related issues

Note that this will not dispatch the model if it has not already been dispatched.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # Raises a ValueError, as the fake inputs and outputs have not been scanned in yet.
>>> print(model.transformer.h[0].mlp.output.shape)
>>> # Scan the model to validate operations and inspect shapes
>>> with model.scan("Hello World"):
>>>     # Access fake inputs/outputs to inspect shapes
>>>     attn_input = model.transformer.h[0].attn.input.save()
>>>     attn_output = model.transformer.h[0].attn.output[0].save()
>>> print(f"Attention input shape: {attn_input.shape}")
>>> print(f"Attention output shape: {attn_output.shape}")
>>> print(model.transformer.h[0].mlp.output.shape)
Parameters:
  • *args – Arguments to pass to the tracer

  • **kwargs – Keyword arguments to pass to the tracer

Returns:

A ScanningTracer for this module

skip(replacement: Any)[source]#

Skips the execution of this module during execution / interleaving. The module will not be executed and will instead return the replacement value.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     # Skip the first layer and replace it with the input to the layer.
>>>     model.transformer.h[0].skip((model.transformer.h[0].input, None))
>>>     output = model.output.save()
>>> print(output)
Parameters:

replacement (Any) – The value to return in place of the module’s output.

property source: EnvoySource#

Get the source code representation of the module.

This property provides access to the module’s source code with operations highlighted, allowing for inspection and intervention at specific points.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> # We can print to see the forward method of the module and the names associated with the operations within.
>>> print(model.transformer.h[0].attn.source)

                                      60
                                      61     if using_eager and self.reorder_and_upcast_attn:
 self__upcast_and_reordered_attn_0 -> 62         attn_output, attn_weights = self._upcast_and_reordered_attn(
                                      63             query_states, key_states, value_states, attention_mask, head_mask
                                      64         )
                                      65     else:
             attention_interface_0 -> 66         attn_output, attn_weights = attention_interface(
                                      67             self,
                                      68             query_states,
                                      69             key_states,
                                      70             value_states,
                                      71             attention_mask,
                                      72             head_mask=head_mask,
                                      73             dropout=self.attn_dropout.p if self.training else 0.0,
                                      74             is_causal=is_causal,
                                      75             **kwargs,
                                      76         )
                                      77
             attn_output_reshape_0 -> 78     attn_output = attn_output.reshape(*attn_output.shape[:-2], -1).contiguous()
                       contiguous_0 -> + …
                      self_c_proj_0 -> 79     attn_output = self.c_proj(attn_output)
               self_resid_dropout_0 -> 80     attn_output = self.resid_dropout(attn_output)
                                      81
                                      82     return attn_output, attn_weights
                                      83

>>> # We can print out one of these to see only the operation and a few operations before and after.
>>> print(model.transformer.h[0].attn.source.attention_interface_0)

.transformer.h.0.attn.attention_interface_0:

        if using_eager and self.reorder_and_upcast_attn:
            attn_output, attn_weights = self._upcast_and_reordered_attn(
                query_states, key_states, value_states, attention_mask, head_mask
            )
        else:
-->         attn_output, attn_weights = attention_interface( <--
                self,
                query_states,
                key_states,
                value_states,
                attention_mask,
                head_mask=head_mask,

>>> with model.trace("Hello World"):
>>>     # Now we can access it like we would any other Envoy with .input or .output to grab the intermediate value.
>>>     attn = model.transformer.h[0].attn.source.attention_interface_0.output.save()
>>> print(attn)
Returns:

An EnvoySource object containing the module’s source code and operations

to(device: device)[source]#

Move the module to a specific device.

This method moves the underlying PyTorch module to the specified device.

Parameters:

device – The device to move the module to

Returns:

Self, for method chaining
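
A short usage sketch (the target device below is illustrative):

Example

>>> import torch
>>> model = LanguageModel("gpt2", dispatch=True)
>>> # Returns self, so the call can be chained.
>>> model = model.to(torch.device("cuda:0"))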

trace(*args, fn: ~typing.Callable | None = None, trace: bool = None, tracer_cls: ~typing.Type[~nnsight.intervention.tracing.tracer.InterleavingTracer] = <class 'nnsight.intervention.tracing.tracer.InterleavingTracer'>, **kwargs)[source]#

Create a tracer for this module.

This method returns a tracer that can be used to capture and modify the execution of the module.

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     model.transformer.h[0].attn.output[0][:] = 0
>>>     output = model.output.save()
>>> print(output)
Parameters:
  • *args – Arguments to pass to the tracer

  • **kwargs – Keyword arguments to pass to the tracer

Returns:

An InterleavingTracer for this module

wait_for_input()[source]#

Wait for the input to the module to be available.

wait_for_output()[source]#

Wait for the output to the module to be available.

class nnsight.intervention.envoy.EnvoySource(name: str, source: str, line_numbers: dict, interleaver: Interleaver | None = None)[source]#

Represents the source code of a module with operations highlighted.

This class provides access to the individual operations within a module’s source code, allowing for inspection and intervention at specific points in the code. It serves as a bridge between the source code representation and the runtime execution of operations.

class nnsight.intervention.envoy.OperationEnvoy(name: str, source: str, line_number: int, interleaver: Interleaver | None = None)[source]#

Represents a specific operation within a module’s forward pass.

This class provides access to the inputs and outputs of individual operations within a module’s execution, allowing for fine-grained inspection and intervention at the operation level.
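
A minimal sketch of working with an OperationEnvoy inside a trace, mirroring the Envoy.source example above; the operation name attention_interface_0 is taken from that example and depends on the underlying model:

Example

>>> model = LanguageModel("gpt2", device_map='auto', dispatch=True)
>>> with model.trace("Hello World"):
>>>     op = model.transformer.h[0].attn.source.attention_interface_0
>>>     args, kwargs = op.inputs
>>>     attn_out = op.output.save()
>>> print(attn_out)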

property input: Any | Tensor#

Get the first input to the operation.

This is a convenience property that returns just the first input value from all inputs passed to the operation.

Returns:

The first input value

property inputs: Tuple[Tuple[Any, Tensor], Dict[str, Tensor | Any]]#

Get the inputs to this operation.

This property provides access to all input value(s) passed to the operation during execution, structured as a tuple of positional and keyword arguments.

Returns:

The operation’s input value(s)

property output: Any | Tensor#

Get the output of this operation.

This property provides access to the return value(s) produced by the operation during execution.

Returns:

The operation’s output value(s)

property source: EnvoySource#

Get the source code of the operation.

This property provides access to the operation’s source code with nested operations highlighted, allowing for inspection and intervention at specific points.

Returns:

An EnvoySource object containing the operation’s source code and nested operations