nnsight.envoy#

class nnsight.envoy.Envoy(module: Module, module_path: str = '')[source]#

Envoy objects act as proxies for torch modules within a model’s module tree, adding nnsight functionality to them. Proxies of the underlying module’s output and input are accessed via .output and .input respectively.
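The tree-mirroring idea can be sketched without nnsight or torch. The classes below (FakeModule, MiniEnvoy) are hypothetical stand-ins, not the real implementation: each child module gets its own wrapper, reachable by the same attribute path as on the underlying model.

```python
class FakeModule:
    """Stand-in for torch.nn.Module; children are kept as attributes."""
    def __init__(self, **children):
        self._children = children
        for name, child in children.items():
            setattr(self, name, child)

class MiniEnvoy:
    """Hypothetical sketch of an Envoy-style wrapper mirroring a module tree."""
    def __init__(self, module, module_path=""):
        self._module = module
        self.path = module_path
        # Wrap every child so attribute access walks the wrapper tree,
        # not the raw modules.
        for name, child in getattr(module, "_children", {}).items():
            child_path = f"{module_path}.{name}" if module_path else name
            setattr(self, name, MiniEnvoy(child, child_path))

model = FakeModule(h0=FakeModule(mlp=FakeModule()))
envoy = MiniEnvoy(model)
print(envoy.h0.mlp.path)  # → h0.mlp
```

The real Envoy records paths like 'transformer.h.0.mlp' relative to the root model in the same spirit.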

path#

String representing the attribute path of this Envoy’s module relative to the root model, separated by ‘.’ (e.g. ‘transformer.h.0.mlp’). Set by NNsight on initialization of the meta model.

Type:

str

_fake_outputs#

List of ‘meta’ tensors built from the outputs of the most recent _scan. This is a list because a module called more than once can produce multiple shapes.

Type:

List[torch.Tensor]

_fake_inputs#

List of ‘meta’ tensors built from the inputs of the most recent _scan. This is a list because a module called more than once can receive multiple shapes.

Type:

List[torch.Tensor]

output#

Proxy object representing the output of this Envoy’s module. Reset on forward pass.

Type:

nnsight.intervention.InterventionProxy

input#

Proxy object representing the input of this Envoy’s module. Reset on forward pass.

Type:

nnsight.intervention.InterventionProxy

_call_iter#

Integer representing the current iteration of this Envoy’s module’s inputs/outputs.

Type:

int

_tracer#

Object which adds this Envoy’s module’s output and input proxies to an intervention graph. Must be set on Envoy objects manually by the Tracer.

Type:

nnsight.context.Tracer.Tracer

property input: InterventionProxy#

Gets the proxy for the first positional argument of the module’s input.

Returns:

Input proxy.

Return type:

InterventionProxy

property inputs: InterventionProxy#

Accessing this property denotes that the user wishes to get the input of the underlying module, so a Proxy of that request is created. A proxy is generated only the first time the property is referenced; subsequent accesses return the already-set one.

Returns:

Input proxy.

Return type:

InterventionProxy
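The create-once, return-cached behavior described above is a standard lazy-property pattern. A minimal sketch (hypothetical names, with a plain object standing in for an InterventionProxy):

```python
class LazyProxyHolder:
    """Sketch of a property that builds its proxy lazily and caches it."""
    def __init__(self):
        self._input_proxy = None
        self.creations = 0  # counts proxy constructions, for illustration

    @property
    def inputs(self):
        # Only build the proxy on first access; reuse it afterwards.
        if self._input_proxy is None:
            self.creations += 1
            self._input_proxy = object()  # stand-in for an InterventionProxy
        return self._input_proxy

holder = LazyProxyHolder()
assert holder.inputs is holder.inputs  # same cached object both times
```

In the real library the cached proxy is reset on each forward pass, so a fresh one is built per run.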

modules(include_fn: Callable[[Envoy], bool] = None, names: bool = False, envoys: List = None) List[Envoy][source]#

Returns all Envoys in the Envoy tree.

Parameters:
  • include_fn (Callable, optional) – Optional function run against each Envoy to check whether it should be included in the final collection of Envoys. Defaults to None.

  • names (bool, optional) – Whether to include the name/module_path of each returned Envoy along with the Envoy itself. Defaults to False.

Returns:

Included Envoys

Return type:

List[Envoy]
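The collection logic can be sketched as a depth-first walk with an optional filter. The Node class and collect function below are hypothetical illustrations of the pattern, not the library's code:

```python
class Node:
    """Stand-in for an Envoy with a path and child Envoys."""
    def __init__(self, path, children=()):
        self.path = path
        self.children = list(children)

def collect(node, include_fn=None, names=False, out=None):
    """Depth-first walk mirroring modules(): filter with include_fn,
    optionally pair each result with its path."""
    if out is None:
        out = []
    if include_fn is None or include_fn(node):
        out.append((node.path, node) if names else node)
    for child in node.children:
        collect(child, include_fn, names, out)
    return out

root = Node("", [Node("h.0", [Node("h.0.mlp")]), Node("h.1")])
only_mlp = collect(root, include_fn=lambda n: "mlp" in n.path, names=True)
```

With names=True each entry is a (path, node) tuple, matching how named_modules pairs module_paths with Envoys.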

named_modules(*args, **kwargs) List[Tuple[str, Envoy]][source]#

Returns all Envoys in the Envoy tree along with their name/module_path.

Parameters:

include_fn (Callable, optional) – Optional function run against each Envoy to check whether it should be included in the final collection of Envoys. Defaults to None.

Returns:

Included Envoys and their names/module_paths.

Return type:

List[Tuple[str, Envoy]]

property output: InterventionProxy#

Accessing this property denotes that the user wishes to get the output of the underlying module, so a Proxy of that request is created. A proxy is generated only the first time the property is referenced; subsequent accesses return the already-set one.

Returns:

Output proxy.

Return type:

InterventionProxy

to(*args, **kwargs) Envoy[source]#

Overrides torch.nn.Module.to so that this returns the Envoy, not the underlying module, when doing: model = model.to(…)

Returns:

Envoy.

Return type:

Envoy
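The reason for the override can be shown with a self-contained sketch (hypothetical classes, no torch required): torch.nn.Module.to returns the module itself, so naive delegation would hand the raw module back and drop the wrapper in the common `model = model.to(device)` idiom.

```python
class Underlying:
    """Stand-in for torch.nn.Module: to() returns the module itself."""
    def to(self, *args, **kwargs):
        return self

class Wrapper:
    """Sketch of the Envoy-style override: move the real module,
    but hand back the wrapper so chained assignment keeps it."""
    def __init__(self, module):
        self._module = module

    def to(self, *args, **kwargs):
        self._module.to(*args, **kwargs)  # delegate the actual move
        return self                       # return the wrapper, not the module

model = Wrapper(Underlying())
model = model.to("cpu")  # still a Wrapper afterwards
```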