nnsight.modeling

class nnsight.modeling.language.LanguageModel(*args, config: PretrainedConfig | None = None, tokenizer: PreTrainedTokenizer | None = None, automodel: Type[AutoModel] = AutoModelForCausalLM, **kwargs)

LanguageModel is an NNsight wrapper around transformers language models.

Inputs can be in the form of:

- Prompt: (str)
- Prompts: (List[str])
- Batched prompts: (List[List[str]])
- Tokenized prompt: (Union[List[int], torch.Tensor])
- Tokenized prompts: (Union[List[List[int]], torch.Tensor])
- Direct input: (Dict[str, Any])
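
For illustration, a minimal sketch of the single-prompt form (the GPT-2 checkpoint and the transformer.h module path are assumptions specific to that architecture; the wrapper's module tree mirrors whatever model is loaded):

    from nnsight import LanguageModel

    # Load a small causal LM as an example.
    model = LanguageModel("openai-community/gpt2", device_map="auto")

    # Trace a single string prompt and save a hidden state from the last block.
    with model.trace("The Eiffel Tower is in the city of"):
        hidden = model.transformer.h[-1].output[0].save()

    print(hidden.shape)  # (batch, sequence, hidden_size)

A list of strings in place of the single string runs the prompts as one batch.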

If using a custom model, you also need to provide the tokenizer, e.g. LanguageModel(custom_model, tokenizer=tokenizer).
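
Following the docstring's own pattern, a minimal sketch of wrapping a pre-loaded model (the GPT-2 checkpoint is an assumption; any transformers model paired with its matching tokenizer works):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from nnsight import LanguageModel

    # Load the model and tokenizer yourself...
    tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
    custom_model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

    # ...then hand both to the wrapper, since the tokenizer cannot be inferred.
    model = LanguageModel(custom_model, tokenizer=tokenizer)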

Calls to generate() pass their arguments downstream to GenerationMixin.generate().
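
Because arguments are forwarded, standard sampling controls can be passed directly; a hedged sketch (the prompt and argument values are arbitrary, and the model.generator.output access path is an assumption based on the Generator class listed below):

    from nnsight import LanguageModel

    model = LanguageModel("openai-community/gpt2", device_map="auto")

    # max_new_tokens and do_sample are forwarded to GenerationMixin.generate().
    with model.generate("Hello,", max_new_tokens=5, do_sample=False):
        tokens = model.generator.output.save()

    print(model.tokenizer.decode(tokens[0]))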

config
Hugging Face configuration loaded from the repository or checkpoint.
Type: PretrainedConfig

tokenizer
Tokenizer for the language model.
Type: PreTrainedTokenizer

automodel
AutoModel class from transformers used to load the underlying model.
Type: Type[AutoModel]

model
Meta-device version of the underlying auto model.
Type: PreTrainedModel
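
Purely for illustration, a short sketch of reading these attributes after construction (the GPT-2 checkpoint and its n_layer config field are assumptions specific to that architecture):

    from nnsight import LanguageModel

    model = LanguageModel("openai-community/gpt2")

    print(type(model.config))        # a PretrainedConfig subclass
    print(model.config.n_layer)      # 12 for GPT-2
    print(model.tokenizer("hello"))  # a standard PreTrainedTokenizer call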

class Generator
class Streamer(*args, **kwargs)
class nnsight.modeling.diffusion.Diffuser(*args, **kwargs)
class nnsight.modeling.diffusion.DiffusionModel(*args, **kwargs)