blenderbot

mindnlp.transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [BlenderbotModel]. It is used to instantiate a Blenderbot model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Blenderbot facebook/blenderbot-3B architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.

PARAMETER DESCRIPTION
vocab_size

Vocabulary size of the Blenderbot model. Defines the number of different tokens that can be represented by the input_ids passed when calling [BlenderbotModel] or [TFBlenderbotModel].

TYPE: `int`, *optional*, defaults to 50265 DEFAULT: 8008

d_model

Dimensionality of the layers and the pooler layer.

TYPE: `int`, *optional*, defaults to 1024 DEFAULT: 2560

encoder_layers

Number of encoder layers.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 2

decoder_layers

Number of decoder layers.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 24

encoder_attention_heads

Number of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 16 DEFAULT: 32

decoder_attention_heads

Number of attention heads for each attention layer in the Transformer decoder.

TYPE: `int`, *optional*, defaults to 16 DEFAULT: 32

decoder_ffn_dim

Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.

TYPE: `int`, *optional*, defaults to 4096 DEFAULT: 10240

encoder_ffn_dim

Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.

TYPE: `int`, *optional*, defaults to 4096 DEFAULT: 10240

activation_function

The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.

TYPE: `str` or `function`, *optional*, defaults to `"gelu"` DEFAULT: 'gelu'

dropout

The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

attention_dropout

The dropout ratio for the attention probabilities.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

activation_dropout

The dropout ratio for activations inside the fully connected layer.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

max_position_embeddings

The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

TYPE: `int`, *optional*, defaults to 128 DEFAULT: 128

init_std

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

TYPE: `float`, *optional*, defaults to 0.02 DEFAULT: 0.02

encoder_layerdrop

The LayerDrop probability for the encoder. See the LayerDrop paper for more details.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

decoder_layerdrop

The LayerDrop probability for the decoder. See the LayerDrop paper for more details.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

scale_embedding

Scale embeddings by dividing by sqrt(d_model).

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

use_cache

Whether or not the model should return the last key/values attentions (not used by all models)

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

forced_eos_token_id

The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

Example
>>> from transformers import BlenderbotConfig, BlenderbotModel
... 
>>> # Initializing a Blenderbot facebook/blenderbot-3B style configuration
>>> configuration = BlenderbotConfig()
... 
>>> # Initializing a model (with random weights) from the facebook/blenderbot-3B style configuration
>>> model = BlenderbotModel(configuration)
... 
>>> # Accessing the model configuration
>>> configuration = model.config
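
The defaults above correspond to the blenderbot-3B-sized architecture; any of the documented arguments can be overridden at construction time. A minimal sketch (import path taken from this page; the tiny hyperparameter values are purely illustrative):

>>> from mindnlp.transformers.models.blenderbot.configuration_blenderbot import BlenderbotConfig
>>> # deliberately small values, e.g. for unit tests or quick experiments
>>> tiny_config = BlenderbotConfig(
...     vocab_size=1024,
...     d_model=64,
...     encoder_layers=2,
...     decoder_layers=2,
...     encoder_attention_heads=4,
...     decoder_attention_heads=4,
...     encoder_ffn_dim=256,
...     decoder_ffn_dim=256,
...     max_position_embeddings=64,
... )
>>> tiny_config.d_model
64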
Source code in mindnlp/transformers/models/blenderbot/configuration_blenderbot.py
class BlenderbotConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`BlenderbotModel`]. It is used to instantiate a
    Blenderbot model according to the specified arguments, defining the model architecture. Instantiating a
    configuration with the defaults will yield a similar configuration to that of the Blenderbot
    [facebook/blenderbot-3B](https://huggingface.co/facebook/blenderbot-3B) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 50265):
            Vocabulary size of the Blenderbot model. Defines the number of different tokens that can be represented by
            the `input_ids` passed when calling [`BlenderbotModel`] or [`TFBlenderbotModel`].
        d_model (`int`, *optional*, defaults to 1024):
            Dimensionality of the layers and the pooler layer.
        encoder_layers (`int`, *optional*, defaults to 12):
            Number of encoder layers.
        decoder_layers (`int`, *optional*, defaults to 12):
            Number of decoder layers.
        encoder_attention_heads (`int`, *optional*, defaults to 16):
            Number of attention heads for each attention layer in the Transformer encoder.
        decoder_attention_heads (`int`, *optional*, defaults to 16):
            Number of attention heads for each attention layer in the Transformer decoder.
        decoder_ffn_dim (`int`, *optional*, defaults to 4096):
            Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
        encoder_ffn_dim (`int`, *optional*, defaults to 4096):
            Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
        activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"silu"` and `"gelu_new"` are supported.
        dropout (`float`, *optional*, defaults to 0.1):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        activation_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for activations inside the fully connected layer.
        max_position_embeddings (`int`, *optional*, defaults to 128):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        init_std (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        encoder_layerdrop (`float`, *optional*, defaults to 0.0):
            The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
            for more details.
        decoder_layerdrop (`float`, *optional*, defaults to 0.0):
            The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
            for more details.
        scale_embedding (`bool`, *optional*, defaults to `False`):
            Scale embeddings by dividing by sqrt(d_model).
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models)
        forced_eos_token_id (`int`, *optional*, defaults to 2):
            The id of the token to force as the last generated token when `max_length` is reached. Usually set to
            `eos_token_id`.

    Example:
        ```python
        >>> from transformers import BlenderbotConfig, BlenderbotModel
        ... 
        >>> # Initializing a Blenderbot facebook/blenderbot-3B style configuration
        >>> configuration = BlenderbotConfig()
        ... 
        >>> # Initializing a model (with random weights) from the facebook/blenderbot-3B style configuration
        >>> model = BlenderbotModel(configuration)
        ... 
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """
    model_type = "blenderbot"
    keys_to_ignore_at_inference = ["past_key_values"]
    attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}

    def __init__(
        self,
        vocab_size=8008,
        max_position_embeddings=128,
        encoder_layers=2,
        encoder_ffn_dim=10240,
        encoder_attention_heads=32,
        decoder_layers=24,
        decoder_ffn_dim=10240,
        decoder_attention_heads=32,
        encoder_layerdrop=0.0,
        decoder_layerdrop=0.0,
        use_cache=True,
        is_encoder_decoder=True,
        activation_function="gelu",
        d_model=2560,
        dropout=0.1,
        attention_dropout=0.0,
        activation_dropout=0.0,
        init_std=0.02,
        decoder_start_token_id=1,
        scale_embedding=False,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        encoder_no_repeat_ngram_size=3,
        forced_eos_token_id=2,
        **kwargs,
    ):
        """
        Initialize a BlenderbotConfig instance.

        Args:
            vocab_size (int, optional): The size of the vocabulary. Defaults to 8008.
            max_position_embeddings (int, optional): The maximum number of positional embeddings. Defaults to 128.
            encoder_layers (int, optional): The number of encoder layers. Defaults to 2.
            encoder_ffn_dim (int, optional): The dimension of the encoder's feedforward network. Defaults to 10240.
            encoder_attention_heads (int, optional): The number of attention heads in the encoder. Defaults to 32.
            decoder_layers (int, optional): The number of decoder layers. Defaults to 24.
            decoder_ffn_dim (int, optional): The dimension of the decoder's feedforward network. Defaults to 10240.
            decoder_attention_heads (int, optional): The number of attention heads in the decoder. Defaults to 32.
            encoder_layerdrop (float, optional): The probability of dropping a layer in the encoder. Defaults to 0.0.
            decoder_layerdrop (float, optional): The probability of dropping a layer in the decoder. Defaults to 0.0.
            use_cache (bool, optional): Whether to use cache during decoding. Defaults to True.
            is_encoder_decoder (bool, optional): Whether the model is an encoder-decoder architecture. Defaults to True.
            activation_function (str, optional): The activation function to use. Defaults to 'gelu'.
            d_model (int, optional): The dimension of the model. Defaults to 2560.
            dropout (float, optional): The dropout probability. Defaults to 0.1.
            attention_dropout (float, optional): The dropout probability for attention layers. Defaults to 0.0.
            activation_dropout (float, optional): The dropout probability for activation layers. Defaults to 0.0.
            init_std (float, optional): The standard deviation for weight initialization. Defaults to 0.02.
            decoder_start_token_id (int, optional): The token id for the start of the decoder sequence. Defaults to 1.
            scale_embedding (bool, optional): Whether to scale the embeddings. Defaults to False.
            pad_token_id (int, optional): The token id for padding. Defaults to 0.
            bos_token_id (int, optional): The token id for the beginning of sequence. Defaults to 1.
            eos_token_id (int, optional): The token id for the end of sequence. Defaults to 2.
            encoder_no_repeat_ngram_size (int, optional): The size of the no repeat n-gram in the encoder. Defaults to 3.
            forced_eos_token_id (int, optional): The token id for the forced end of sequence. Defaults to 2.

        Returns:
            None.

        Raises:
            None
        """
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.d_model = d_model
        self.encoder_ffn_dim = encoder_ffn_dim
        self.encoder_layers = encoder_layers
        self.encoder_attention_heads = encoder_attention_heads
        self.decoder_ffn_dim = decoder_ffn_dim
        self.decoder_layers = decoder_layers
        self.decoder_attention_heads = decoder_attention_heads
        self.dropout = dropout
        self.attention_dropout = attention_dropout
        self.activation_dropout = activation_dropout
        self.activation_function = activation_function
        self.init_std = init_std
        self.encoder_layerdrop = encoder_layerdrop
        self.decoder_layerdrop = decoder_layerdrop
        self.use_cache = use_cache
        self.num_hidden_layers = encoder_layers
        self.scale_embedding = scale_embedding  # scale factor will be sqrt(d_model) if True

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            is_encoder_decoder=is_encoder_decoder,
            decoder_start_token_id=decoder_start_token_id,
            encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size,
            forced_eos_token_id=forced_eos_token_id,
            **kwargs,
        )

mindnlp.transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig.__init__(vocab_size=8008, max_position_embeddings=128, encoder_layers=2, encoder_ffn_dim=10240, encoder_attention_heads=32, decoder_layers=24, decoder_ffn_dim=10240, decoder_attention_heads=32, encoder_layerdrop=0.0, decoder_layerdrop=0.0, use_cache=True, is_encoder_decoder=True, activation_function='gelu', d_model=2560, dropout=0.1, attention_dropout=0.0, activation_dropout=0.0, init_std=0.02, decoder_start_token_id=1, scale_embedding=False, pad_token_id=0, bos_token_id=1, eos_token_id=2, encoder_no_repeat_ngram_size=3, forced_eos_token_id=2, **kwargs)

Initialize a BlenderbotConfig instance.

PARAMETER DESCRIPTION
vocab_size

The size of the vocabulary. Defaults to 8008.

TYPE: int DEFAULT: 8008

max_position_embeddings

The maximum number of positional embeddings. Defaults to 128.

TYPE: int DEFAULT: 128

encoder_layers

The number of encoder layers. Defaults to 2.

TYPE: int DEFAULT: 2

encoder_ffn_dim

The dimension of the encoder's feedforward network. Defaults to 10240.

TYPE: int DEFAULT: 10240

encoder_attention_heads

The number of attention heads in the encoder. Defaults to 32.

TYPE: int DEFAULT: 32

decoder_layers

The number of decoder layers. Defaults to 24.

TYPE: int DEFAULT: 24

decoder_ffn_dim

The dimension of the decoder's feedforward network. Defaults to 10240.

TYPE: int DEFAULT: 10240

decoder_attention_heads

The number of attention heads in the decoder. Defaults to 32.

TYPE: int DEFAULT: 32

encoder_layerdrop

The probability of dropping a layer in the encoder. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

decoder_layerdrop

The probability of dropping a layer in the decoder. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

use_cache

Whether to use cache during decoding. Defaults to True.

TYPE: bool DEFAULT: True

is_encoder_decoder

Whether the model is an encoder-decoder architecture. Defaults to True.

TYPE: bool DEFAULT: True

activation_function

The activation function to use. Defaults to 'gelu'.

TYPE: str DEFAULT: 'gelu'

d_model

The dimension of the model. Defaults to 2560.

TYPE: int DEFAULT: 2560

dropout

The dropout probability. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

attention_dropout

The dropout probability for attention layers. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

activation_dropout

The dropout probability for activation layers. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

init_std

The standard deviation for weight initialization. Defaults to 0.02.

TYPE: float DEFAULT: 0.02

decoder_start_token_id

The token id for the start of the decoder sequence. Defaults to 1.

TYPE: int DEFAULT: 1

scale_embedding

Whether to scale the embeddings. Defaults to False.

TYPE: bool DEFAULT: False

pad_token_id

The token id for padding. Defaults to 0.

TYPE: int DEFAULT: 0

bos_token_id

The token id for the beginning of sequence. Defaults to 1.

TYPE: int DEFAULT: 1

eos_token_id

The token id for the end of sequence. Defaults to 2.

TYPE: int DEFAULT: 2

encoder_no_repeat_ngram_size

The size of the no repeat n-gram in the encoder. Defaults to 3.

TYPE: int DEFAULT: 3

forced_eos_token_id

The token id for the forced end of sequence. Defaults to 2.

TYPE: int DEFAULT: 2

RETURNS DESCRIPTION

None.
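
As defined on the class (see `attribute_map` in the source above), the generic attribute names `hidden_size` and `num_attention_heads` are aliased to `d_model` and `encoder_attention_heads` respectively. Assuming the aliasing behaves as in the upstream `PretrainedConfig`, the mapping can be checked directly:

>>> from mindnlp.transformers.models.blenderbot.configuration_blenderbot import BlenderbotConfig
>>> config = BlenderbotConfig()
>>> config.hidden_size == config.d_model
True
>>> config.num_attention_heads == config.encoder_attention_heads
True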

Source code in mindnlp/transformers/models/blenderbot/configuration_blenderbot.py
def __init__(
    self,
    vocab_size=8008,
    max_position_embeddings=128,
    encoder_layers=2,
    encoder_ffn_dim=10240,
    encoder_attention_heads=32,
    decoder_layers=24,
    decoder_ffn_dim=10240,
    decoder_attention_heads=32,
    encoder_layerdrop=0.0,
    decoder_layerdrop=0.0,
    use_cache=True,
    is_encoder_decoder=True,
    activation_function="gelu",
    d_model=2560,
    dropout=0.1,
    attention_dropout=0.0,
    activation_dropout=0.0,
    init_std=0.02,
    decoder_start_token_id=1,
    scale_embedding=False,
    pad_token_id=0,
    bos_token_id=1,
    eos_token_id=2,
    encoder_no_repeat_ngram_size=3,
    forced_eos_token_id=2,
    **kwargs,
):
    """
    Initialize a BlenderbotConfig instance.

    Args:
        vocab_size (int, optional): The size of the vocabulary. Defaults to 8008.
        max_position_embeddings (int, optional): The maximum number of positional embeddings. Defaults to 128.
        encoder_layers (int, optional): The number of encoder layers. Defaults to 2.
        encoder_ffn_dim (int, optional): The dimension of the encoder's feedforward network. Defaults to 10240.
        encoder_attention_heads (int, optional): The number of attention heads in the encoder. Defaults to 32.
        decoder_layers (int, optional): The number of decoder layers. Defaults to 24.
        decoder_ffn_dim (int, optional): The dimension of the decoder's feedforward network. Defaults to 10240.
        decoder_attention_heads (int, optional): The number of attention heads in the decoder. Defaults to 32.
        encoder_layerdrop (float, optional): The probability of dropping a layer in the encoder. Defaults to 0.0.
        decoder_layerdrop (float, optional): The probability of dropping a layer in the decoder. Defaults to 0.0.
        use_cache (bool, optional): Whether to use cache during decoding. Defaults to True.
        is_encoder_decoder (bool, optional): Whether the model is an encoder-decoder architecture. Defaults to True.
        activation_function (str, optional): The activation function to use. Defaults to 'gelu'.
        d_model (int, optional): The dimension of the model. Defaults to 2560.
        dropout (float, optional): The dropout probability. Defaults to 0.1.
        attention_dropout (float, optional): The dropout probability for attention layers. Defaults to 0.0.
        activation_dropout (float, optional): The dropout probability for activation layers. Defaults to 0.0.
        init_std (float, optional): The standard deviation for weight initialization. Defaults to 0.02.
        decoder_start_token_id (int, optional): The token id for the start of the decoder sequence. Defaults to 1.
        scale_embedding (bool, optional): Whether to scale the embeddings. Defaults to False.
        pad_token_id (int, optional): The token id for padding. Defaults to 0.
        bos_token_id (int, optional): The token id for the beginning of sequence. Defaults to 1.
        eos_token_id (int, optional): The token id for the end of sequence. Defaults to 2.
        encoder_no_repeat_ngram_size (int, optional): The size of the no repeat n-gram in the encoder. Defaults to 3.
        forced_eos_token_id (int, optional): The token id for the forced end of sequence. Defaults to 2.

    Returns:
        None.

    Raises:
        None
    """
    self.vocab_size = vocab_size
    self.max_position_embeddings = max_position_embeddings
    self.d_model = d_model
    self.encoder_ffn_dim = encoder_ffn_dim
    self.encoder_layers = encoder_layers
    self.encoder_attention_heads = encoder_attention_heads
    self.decoder_ffn_dim = decoder_ffn_dim
    self.decoder_layers = decoder_layers
    self.decoder_attention_heads = decoder_attention_heads
    self.dropout = dropout
    self.attention_dropout = attention_dropout
    self.activation_dropout = activation_dropout
    self.activation_function = activation_function
    self.init_std = init_std
    self.encoder_layerdrop = encoder_layerdrop
    self.decoder_layerdrop = decoder_layerdrop
    self.use_cache = use_cache
    self.num_hidden_layers = encoder_layers
    self.scale_embedding = scale_embedding  # scale factor will be sqrt(d_model) if True

    super().__init__(
        pad_token_id=pad_token_id,
        bos_token_id=bos_token_id,
        eos_token_id=eos_token_id,
        is_encoder_decoder=is_encoder_decoder,
        decoder_start_token_id=decoder_start_token_id,
        encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size,
        forced_eos_token_id=forced_eos_token_id,
        **kwargs,
    )

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM

Bases: BlenderbotPreTrainedModel

Represents the Blenderbot model for causal language modeling.

This class provides the functionality to initialize the model, set the input and output embeddings, set the decoder, and run the forward pass. It also includes methods for preparing inputs for generation and reordering the cache.

The forward method takes various input arguments and returns the model outputs. The prepare_inputs_for_generation method prepares inputs for generation, and the _reorder_cache method reorders the cache.

The class inherits from BlenderbotPreTrainedModel and includes detailed explanations of the input arguments, return values, and examples for usage.

For consistency, the docstring follows the triple double quotes format.
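
A brief usage sketch (module paths are taken from this page; the small hyperparameter values are illustrative and yield random weights, not a pretrained model). Note that `__init__` deep-copies the configuration and forces `is_decoder=True` and `is_encoder_decoder=False`:

>>> from mindnlp.transformers.models.blenderbot.configuration_blenderbot import BlenderbotConfig
>>> from mindnlp.transformers.models.blenderbot.modeling_blenderbot import BlenderbotForCausalLM
>>> config = BlenderbotConfig(vocab_size=1024, d_model=64, decoder_layers=2,
...                           decoder_attention_heads=4, decoder_ffn_dim=256)
>>> model = BlenderbotForCausalLM(config)  # randomly initialized
>>> model.config.is_decoder, model.config.is_encoder_decoder
(True, False)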

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
class BlenderbotForCausalLM(BlenderbotPreTrainedModel):

    """
    Represents the Blenderbot model for causal language modeling.

    This class provides the functionality to initialize the model, set input and output embeddings, set the decoder,
    and forward the model. It also includes methods for preparing inputs for generation and reordering cache.

    The `forward` method takes various input arguments and returns the model outputs.
    The `prepare_inputs_for_generation` method prepares inputs for generation, and the `_reorder_cache` method
    reorders the cache.

    The class inherits from `BlenderbotPreTrainedModel` and includes detailed explanations of the input arguments,
    return values, and examples for usage.

    For consistency, the docstring follows the triple double quotes format.
    """
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config):
        """
        Initializes a new instance of the BlenderbotForCausalLM class.

        Args:
            self: The object instance.
            config (obj): The configuration object containing various settings for the model.
                It must have the following attributes:

                - is_decoder (bool): Specifies whether the model is a decoder. Must be set to True.
                - is_encoder_decoder (bool): Specifies whether the model is an encoder-decoder. Must be set to False.
                - hidden_size (int): The size of the hidden states.
                - vocab_size (int): The size of the vocabulary.

        Returns:
            None

        Raises:
            None
        """
        config = copy.deepcopy(config)
        config.is_decoder = True
        config.is_encoder_decoder = False
        super().__init__(config)
        self.model = BlenderbotDecoderWrapper(config)

        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        """
        Retrieves the input embeddings from the BlenderbotForCausalLM model.

        Args:
            self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.

        Returns:
            None.

        Raises:
            None.

        This method retrieves the input embeddings from the decoder of the BlenderbotForCausalLM model.
        The input embeddings are used to convert the input tokens into continuous vector representations.
        These embeddings capture the semantic meaning of the input tokens and are essential for
        the model's understanding and generation of text.

        Note:
            The input embeddings are accessed using the 'embed_tokens' attribute of the model's decoder.
        """
        return self.model.decoder.embed_tokens

    def set_input_embeddings(self, value):
        """
        Method to set the input embeddings for the BlenderbotForCausalLM model.

        Args:
            self (BlenderbotForCausalLM): The instance of BlenderbotForCausalLM class.
                This parameter is always implicitly passed and refers to the current instance of the class.
            value (torch.Tensor): The input embeddings to be set for the model.
                This parameter should be a torch.Tensor containing the input embeddings.

        Returns:
            None.

        Raises:
            None.
        """
        self.model.decoder.embed_tokens = value

    def get_output_embeddings(self):
        """
        Method to retrieve the output embeddings from the BlenderbotForCausalLM model.

        Args:
            self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
                This parameter refers to the current instance of the model.

        Returns:
            None: This method returns the output embeddings represented by the lm_head attribute.
                The output embeddings are used for generating the model's output.

        Raises:
            None.
        """
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        """
        Sets the output embeddings of the BlenderbotForCausalLM model.

        Args:
            self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
            new_embeddings (torch.nn.Embedding): The new embeddings to be set as the output embeddings.
                It should be an instance of `torch.nn.Embedding` class.

        Returns:
            None.

        Raises:
            None.

        """
        self.lm_head = new_embeddings

    def set_decoder(self, decoder):
        """
        Method to set the decoder for the BlenderbotForCausalLM model.

        Args:
            self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
                This parameter refers to the current instance of the class.
            decoder: The decoder object to be set for the model.
                It should be a valid decoder object compatible with the model.

        Returns:
            None: This method does not return any value. It updates the decoder for the model in-place.

        Raises:
            None.
        """
        self.model.decoder = decoder

    def get_decoder(self):
        """
        Returns the decoder of the BlenderbotForCausalLM model.

        Args:
            self: An instance of the BlenderbotForCausalLM class.

        Returns:
            None: This method returns the decoder of the BlenderbotForCausalLM model.
                The decoder is responsible for decoding the input sequence into a generated response.

        Raises:
            None.
        """
        return self.model.decoder

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        cross_attn_head_mask: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[List[mindspore.Tensor]] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
        r"""
        Args:
            input_ids (`mindspore.Tensor` of shape `(batch_size, sequence_length)`):
                Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
                provide it.

                Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
                [`PreTrainedTokenizer.__call__`] for details.

                [What are input IDs?](../glossary#input-ids)
            attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.

                [What are attention masks?](../glossary#attention-mask)
            encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
                if the model is configured as a decoder.
            encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
                in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
            head_mask (`mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
                Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:

                - 1 indicates the head is **not masked**,
                - 0 indicates the head is **masked**.
            cross_attn_head_mask (`mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
                Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:

                - 1 indicates the head is **not masked**,
                - 0 indicates the head is **masked**.
            past_key_values (`tuple(tuple(mindspore.Tensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
                Tuple of `tuple(mindspore.Tensor)` of length `config.n_layers`, with each tuple having 2 tensors of
                shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of
                shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional
                tensors are only required when the model is used as a decoder in a Sequence to Sequence model.

                Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
                cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those
                that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of
                all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            output_hidden_states (`bool`, *optional*):
                Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
                for more detail.
            return_dict (`bool`, *optional*):
                Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.

        Returns:
            Union[Tuple, CausalLMOutputWithCrossAttentions]

        Example:
            ```python
            >>> from transformers import AutoTokenizer, BlenderbotForCausalLM
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
            >>> model = BlenderbotForCausalLM.from_pretrained("facebook/blenderbot-400M-distill", add_cross_attention=False)
            >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
            >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
            >>> outputs = model(**inputs)
            ...
            >>> logits = outputs.logits
            >>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
            >>> list(logits.shape) == expected_shape
            True
            ```
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model.decoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            head_mask=head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        logits = self.lm_head(outputs[0])

        loss = None
        if labels is not None:
            loss = F.cross_entropy(logits.view(-1, self.config.vocab_size), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithCrossAttentions(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            cross_attentions=outputs.cross_attentions,
        )

    def prepare_inputs_for_generation(
        self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs
    ):
        """
        This method prepares inputs for generation in the BlenderbotForCausalLM class.

        Args:
            self: The instance of the class.
            input_ids (torch.Tensor): The input tensor containing token ids for the input sequence.
            past_key_values (Tuple[torch.Tensor]): Optional past key values for caching attention weights.
            attention_mask (torch.Tensor): Optional tensor specifying which elements of the input sequence should be attended to.
            use_cache (bool): Flag indicating whether to use caching for efficient generation.

        Returns:
            dict: A dictionary containing the updated input_ids, attention_mask, past_key_values, and use_cache.

        Raises:
            ValueError: If input_ids or attention_mask is not provided.
            IndexError: If the input_ids shape does not match the past key values.
        """
        # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
        if attention_mask is None:
            attention_mask = input_ids.new_ones(input_ids.shape)

        if past_key_values:
            past_length = past_key_values[0][0].shape[2]

            # Some generation methods already pass only the last input ID
            if input_ids.shape[1] > past_length:
                remove_prefix_length = past_length
            else:
                # Default to old behavior: keep only final ID
                remove_prefix_length = input_ids.shape[1] - 1

            input_ids = input_ids[:, remove_prefix_length:]
        # first step, decoder_cached_states are empty
        return {
            "input_ids": input_ids,  # encoder_outputs is defined. input_ids not needed
            "attention_mask": attention_mask,
            "past_key_values": past_key_values,
            "use_cache": use_cache,
        }

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        """
        Method to reorder cache for beam search in the BlenderbotForCausalLM class.

        Args:
            past_key_values (tuple): Tuple containing past key-value states for each layer.
            beam_idx (Tensor): Index tensor specifying the order for reordering the past states.

        Returns:
            None: This method modifies the past_key_values in-place to reorder the cache according to the beam_idx.

        Raises:
            IndexError: If the provided beam_idx is out of bounds or not compatible with past_key_values.
            ValueError: If the input parameters are not in the expected format or do not meet the requirements.
        """
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),
            )
        return reordered_past

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.__init__(config)

Initializes a new instance of the BlenderbotForCausalLM class.

PARAMETER DESCRIPTION
self

The object instance.

config

The configuration object containing various settings for the model. It must have the following attributes:

  • is_decoder (bool): Specifies whether the model is a decoder. Must be set to True.
  • is_encoder_decoder (bool): Specifies whether the model is an encoder-decoder. Must be set to False.
  • hidden_size (int): The size of the hidden states.
  • vocab_size (int): The size of the vocabulary.

TYPE: obj

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
1753
1754
1755
1756
1757
1758
1759
1760
1761
1762
1763
1764
1765
1766
1767
1768
1769
1770
1771
1772
1773
1774
1775
1776
1777
1778
1779
1780
1781
1782
def __init__(self, config):
    """
    Initializes a new instance of the BlenderbotForCausalLM class.

    Args:
        self: The object instance.
        config (obj): The configuration object containing various settings for the model.
            It must have the following attributes:

            - is_decoder (bool): Specifies whether the model is a decoder. Must be set to True.
            - is_encoder_decoder (bool): Specifies whether the model is an encoder-decoder. Must be set to False.
            - hidden_size (int): The size of the hidden states.
            - vocab_size (int): The size of the vocabulary.

    Returns:
        None

    Raises:
        None
    """
    config = copy.deepcopy(config)
    config.is_decoder = True
    config.is_encoder_decoder = False
    super().__init__(config)
    self.model = BlenderbotDecoderWrapper(config)

    self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.forward(input_ids=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, head_mask=None, cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
input_ids

Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

Indices can be obtained using [AutoTokenizer]. See [PreTrainedTokenizer.encode] and [PreTrainedTokenizer.__call__] for details.

What are input IDs?

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)` DEFAULT: None

attention_mask

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

What are attention masks?

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

encoder_hidden_states

Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional* DEFAULT: None

encoder_attention_mask

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

head_mask

Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:

  • 1 indicates the head is not masked,
  • 0 indicates the head is masked.

TYPE: `mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional* DEFAULT: None

cross_attn_head_mask

Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:

  • 1 indicates the head is not masked,
  • 0 indicates the head is masked.

TYPE: `mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional* DEFAULT: None

past_key_values

Tuple of tuple(mindspore.Tensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model.

Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

TYPE: `tuple(tuple(mindspore.Tensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True` DEFAULT: None

labels

Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

output_attentions

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

TYPE: `bool`, *optional* DEFAULT: None

output_hidden_states

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

TYPE: `bool`, *optional* DEFAULT: None

return_dict

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

TYPE: `bool`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[Tuple, CausalLMOutputWithCrossAttentions]

Example
>>> from transformers import AutoTokenizer, BlenderbotForCausalLM
...
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> model = BlenderbotForCausalLM.from_pretrained("facebook/blenderbot-400M-distill", add_cross_attention=False)
>>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
...
>>> logits = outputs.logits
>>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
>>> list(logits.shape) == expected_shape
True
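
When labels are supplied, the forward pass additionally returns a cross-entropy language-modeling loss computed over the flattened logits (see the source below). Continuing the example above as an illustrative sketch (here the labels simply reuse the input ids; in practice they should be the target token ids, with -100 marking positions to ignore):

>>> outputs = model(**inputs, labels=inputs.input_ids)
>>> loss = outputs.loss      # scalar language-modeling loss
>>> logits = outputs.logits  # (batch_size, sequence_length, vocab_size)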
Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    cross_attn_head_mask: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[List[mindspore.Tensor]] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
    r"""
    Args:
        input_ids (`mindspore.Tensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
            provide it.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
        attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            [What are attention masks?](../glossary#attention-mask)
        encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
            if the model is configured as a decoder.
        encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
            in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
        head_mask (`mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
            Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        cross_attn_head_mask (`mindspore.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        past_key_values (`tuple(tuple(mindspore.Tensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            Tuple of `tuple(mindspore.Tensor)` of length `config.n_layers`, with each tuple having 2 tensors of
            shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of
            shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional
            tensors are only required when the model is used as a decoder in a Sequence to Sequence model.

            Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
            cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those
            that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of
            all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
            config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
            (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
            (see `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under
            returned tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
            for more detail.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.

    Returns:
        Union[Tuple, CausalLMOutputWithCrossAttentions]

    Example:
        ```python
        >>> from transformers import AutoTokenizer, BlenderbotForCausalLM
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
        >>> model = BlenderbotForCausalLM.from_pretrained("facebook/blenderbot-400M-distill", add_cross_attention=False)
        >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
        >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
        >>> outputs = model(**inputs)
        ...
        >>> logits = outputs.logits
        >>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
        >>> list(logits.shape) == expected_shape
        True
        ```
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
    outputs = self.model.decoder(
        input_ids=input_ids,
        attention_mask=attention_mask,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        head_mask=head_mask,
        cross_attn_head_mask=cross_attn_head_mask,
        past_key_values=past_key_values,
        inputs_embeds=inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    logits = self.lm_head(outputs[0])

    loss = None
    if labels is not None:
        loss = F.cross_entropy(logits.view(-1, self.config.vocab_size), labels.view(-1))

    if not return_dict:
        output = (logits,) + outputs[1:]
        return (loss,) + output if loss is not None else output

    return CausalLMOutputWithCrossAttentions(
        loss=loss,
        logits=logits,
        past_key_values=outputs.past_key_values,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
        cross_attentions=outputs.cross_attentions,
    )
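
The loss above flattens the logits to `(batch_size * sequence_length, vocab_size)` and the labels to `(batch_size * sequence_length,)` before applying cross-entropy, ignoring positions labelled `-100`. A small framework-agnostic sketch of that computation (numpy standing in for mindspore; not part of the library):

```python
import numpy as np

def causal_lm_loss(logits, labels, ignore_index=-100):
    """Cross-entropy over flattened logits/labels, skipping ignored positions."""
    vocab_size = logits.shape[-1]
    logits = logits.reshape(-1, vocab_size)       # (batch * seq, vocab)
    labels = labels.reshape(-1)                   # (batch * seq,)
    keep = labels != ignore_index                 # drop padded / masked positions
    logits, labels = logits[keep], labels[keep]
    # numerically stable log-softmax, then negative log-likelihood of the target token
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(labels.size), labels].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 5, 11))              # (batch, seq, vocab)
labels = rng.integers(0, 11, size=(2, 5))
labels[0, -1] = -100                              # ignored position
print(causal_lm_loss(logits, labels))
```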

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.get_decoder()

Returns the decoder of the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

An instance of the BlenderbotForCausalLM class.

RETURNS DESCRIPTION
BlenderbotDecoder

This method returns the decoder of the BlenderbotForCausalLM model. The decoder is responsible for decoding the input sequence into a generated response.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_decoder(self):
    """
    Returns the decoder of the BlenderbotForCausalLM model.

    Args:
        self: An instance of the BlenderbotForCausalLM class.

    Returns:
        BlenderbotDecoder: The decoder of the BlenderbotForCausalLM model.
            The decoder is responsible for decoding the input sequence into a generated response.

    Raises:
        None.
    """
    return self.model.decoder

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.get_input_embeddings()

Retrieves the input embeddings from the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForCausalLM class.

TYPE: BlenderbotForCausalLM

RETURNS DESCRIPTION

Embedding

This method retrieves the input embeddings from the decoder of the BlenderbotForCausalLM model. The input embeddings convert the input tokens into continuous vector representations, which the model relies on to understand and generate text.

Note

The input embeddings are accessed using the 'embed_tokens' attribute of the model's decoder.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_input_embeddings(self):
    """
    Retrieves the input embeddings from the BlenderbotForCausalLM model.

    Args:
        self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.

    Returns:
        nn.Embedding: The decoder's `embed_tokens` embedding layer.

    Raises:
        None.

    This method retrieves the input embeddings from the decoder of the BlenderbotForCausalLM model.
    The input embeddings are used to convert the input tokens into continuous vector representations.
    These embeddings capture the semantic meaning of the input tokens and are essential for
    the model's understanding and generation of text.

    Note:
        The input embeddings are accessed using the 'embed_tokens' attribute of the model's decoder.
    """
    return self.model.decoder.embed_tokens

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.get_output_embeddings()

Method to retrieve the output embeddings from the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForCausalLM class. This parameter refers to the current instance of the model.

TYPE: BlenderbotForCausalLM

RETURNS DESCRIPTION
Linear

This method returns the output embeddings layer, represented by the lm_head attribute. The output embeddings are used for generating the model's output.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_output_embeddings(self):
    """
    Method to retrieve the output embeddings from the BlenderbotForCausalLM model.

    Args:
        self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
            This parameter refers to the current instance of the model.

    Returns:
        nn.Linear: The output embeddings layer, represented by the lm_head attribute.
            The output embeddings are used for generating the model's output.

    Raises:
        None.
    """
    return self.lm_head

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.prepare_inputs_for_generation(input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs)

This method prepares inputs for generation in the BlenderbotForCausalLM class.

PARAMETER DESCRIPTION
self

The instance of the class.

input_ids

The input tensor containing token ids for the input sequence.

TYPE: Tensor

past_key_values

Optional past key values for caching attention weights.

TYPE: Tuple[Tensor] DEFAULT: None

attention_mask

Optional tensor specifying which elements of the input sequence should be attended to.

TYPE: Tensor DEFAULT: None

use_cache

Flag indicating whether to use caching for efficient generation.

TYPE: bool DEFAULT: None

RETURNS DESCRIPTION
dict

A dictionary containing the updated input_ids, attention_mask, past_key_values, and use_cache.


Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def prepare_inputs_for_generation(
    self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs
):
    """
    This method prepares inputs for generation in the BlenderbotForCausalLM class.

    Args:
        self: The instance of the class.
        input_ids (mindspore.Tensor): The input tensor containing token ids for the input sequence.
        past_key_values (Tuple[mindspore.Tensor], optional): Optional past key values used to cache attention states.
        attention_mask (mindspore.Tensor, optional): Optional tensor specifying which elements of the input sequence
            should be attended to. Created on the fly from `input_ids` when not provided.
        use_cache (bool, optional): Flag indicating whether to use caching for efficient generation.

    Returns:
        dict: A dictionary containing the updated input_ids, attention_mask, past_key_values, and use_cache.

    Raises:
        None.
    """
    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_ids.shape)

    if past_key_values:
        past_length = past_key_values[0][0].shape[2]

        # Some generation methods already pass only the last input ID
        if input_ids.shape[1] > past_length:
            remove_prefix_length = past_length
        else:
            # Default to old behavior: keep only final ID
            remove_prefix_length = input_ids.shape[1] - 1

        input_ids = input_ids[:, remove_prefix_length:]
    # first step, decoder_cached_states are empty
    return {
        "input_ids": input_ids,  # encoder_outputs is defined. input_ids not needed
        "attention_mask": attention_mask,
        "past_key_values": past_key_values,
        "use_cache": use_cache,
    }
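
The trimming rule above keeps only the tokens the cache has not yet seen. A standalone sketch of that rule (plain Python lists for illustration; not part of the library):

```python
def trim_input_ids(input_ids, past_length):
    """Drop the cached prefix; fall back to keeping only the final token."""
    seq_len = len(input_ids)
    if seq_len > past_length:
        remove_prefix_length = past_length      # drop everything already cached
    else:
        remove_prefix_length = seq_len - 1      # old behavior: keep only the last id
    return input_ids[remove_prefix_length:]

print(trim_input_ids([5, 17, 42, 9], past_length=3))  # [9]
print(trim_input_ids([5, 17, 42, 9], past_length=4))  # [9]
```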

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.set_decoder(decoder)

Method to set the decoder for the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForCausalLM class. This parameter refers to the current instance of the class.

TYPE: BlenderbotForCausalLM

decoder

The decoder object to be set for the model. It should be a valid decoder object compatible with the model.

RETURNS DESCRIPTION
None

This method does not return any value. It updates the decoder for the model in-place.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def set_decoder(self, decoder):
    """
    Method to set the decoder for the BlenderbotForCausalLM model.

    Args:
        self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
            This parameter refers to the current instance of the class.
        decoder: The decoder object to be set for the model.
            It should be a valid decoder object compatible with the model.

    Returns:
        None: This method does not return any value. It updates the decoder for the model in-place.

    Raises:
        None.
    """
    self.model.decoder = decoder

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.set_input_embeddings(value)

Method to set the input embeddings for the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of BlenderbotForCausalLM class. This parameter is always implicitly passed and refers to the current instance of the class.

TYPE: BlenderbotForCausalLM

value

The new embedding module to be set as the model's input embeddings. It is assigned to the decoder's embed_tokens attribute.

TYPE: Embedding

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def set_input_embeddings(self, value):
    """
    Method to set the input embeddings for the BlenderbotForCausalLM model.

    Args:
        self (BlenderbotForCausalLM): The instance of BlenderbotForCausalLM class.
            This parameter is always implicitly passed and refers to the current instance of the class.
        value (nn.Embedding): The new embedding module to be set as the model's input embeddings.
            It is assigned to the decoder's `embed_tokens` attribute.

    Returns:
        None.

    Raises:
        None.
    """
    self.model.decoder.embed_tokens = value

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM.set_output_embeddings(new_embeddings)

Sets the output embeddings of the BlenderbotForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForCausalLM class.

TYPE: BlenderbotForCausalLM

new_embeddings

The new module to be set as the output embeddings (the lm_head). Typically a linear layer projecting hidden states to the vocabulary.

TYPE: Linear

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def set_output_embeddings(self, new_embeddings):
    """
    Sets the output embeddings of the BlenderbotForCausalLM model.

    Args:
        self (BlenderbotForCausalLM): The instance of the BlenderbotForCausalLM class.
        new_embeddings (nn.Linear): The new module to be set as the output embeddings (the lm_head).
            Typically an `nn.Linear` layer projecting hidden states to the vocabulary.

    Returns:
        None.

    Raises:
        None.

    """
    self.lm_head = new_embeddings

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration

Bases: BlenderbotPreTrainedModel

A class for generating text using the Blenderbot model with conditional generation. This class inherits from BlenderbotPreTrainedModel and provides methods for preparing inputs for generation and reordering cache.

ATTRIBUTE DESCRIPTION
model

A model instance of the BlenderbotModel class.

TYPE: BlenderbotModel

final_logits_bias

A tensor representing the final logits bias.

TYPE: Tensor

lm_head

A fully connected linear layer for the language modeling head.

TYPE: Linear

METHOD DESCRIPTION
__init__

Initializes the class with a BlenderbotConfig instance.

get_encoder

Returns the encoder from the model.

get_decoder

Returns the decoder from the model.

resize_token_embeddings

Resizes the token embeddings.

_resize_final_logits_bias

Resizes the final logits bias.

get_output_embeddings

Returns the output embeddings.

set_output_embeddings

Sets the output embeddings.

forward

Constructs the model for generation.

prepare_inputs_for_generation

Prepares the inputs for generation.

_reorder_cache

Reorders the cache.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
class BlenderbotForConditionalGeneration(BlenderbotPreTrainedModel):

    """
    A class for generating text using the Blenderbot model with conditional generation. This class inherits from BlenderbotPreTrainedModel and provides methods for preparing inputs for generation and
    reordering cache.

    Attributes:
        model (BlenderbotModel): A model instance of the BlenderbotModel class.
        final_logits_bias (mindspore.Tensor): A tensor representing the final logits bias.
        lm_head (mindspore.nn.Linear): A fully connected linear layer for the language modeling head.

    Methods:
        __init__: Initializes the class with a BlenderbotConfig instance.
        get_encoder: Returns the encoder from the model.
        get_decoder: Returns the decoder from the model.
        resize_token_embeddings: Resizes the token embeddings.
        _resize_final_logits_bias: Resizes the final logits bias.
        get_output_embeddings: Returns the output embeddings.
        set_output_embeddings: Sets the output embeddings.
        forward: Constructs the model for generation.
        prepare_inputs_for_generation: Prepares the inputs for generation.
        _reorder_cache: Reorders the cache.

    """
    base_model_prefix = "model"
    _tied_weights_keys = ["decoder.embed_tokens.weight", "encoder.embed_tokens.weight", "lm_head.weight"]

    def __init__(self, config: BlenderbotConfig):
        """
        Initializes a new instance of the BlenderbotForConditionalGeneration class.

        Args:
            self: The instance of the class.
            config (BlenderbotConfig): An instance of the BlenderbotConfig class containing the configuration settings for the model.

        Returns:
            None.

        Raises:
            None
        """
        super().__init__(config)
        self.model = BlenderbotModel(config)
        self.final_logits_bias = ops.zeros(1, self.model.shared.num_embeddings)
        self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_encoder(self):
        """
        This method returns the encoder of the BlenderbotForConditionalGeneration model.

        Args:
            self: The instance of the BlenderbotForConditionalGeneration class.

        Returns:
            BlenderbotEncoder: The encoder module of the model.

        Raises:
            None
        """
        return self.model.get_encoder()

    def get_decoder(self):
        """
        Returns the decoder of the BlenderbotForConditionalGeneration model.

        Args:
            self: An instance of the BlenderbotForConditionalGeneration class.

        Returns:
            BlenderbotDecoder: The decoder module of the model.

        Raises:
            None.

        Note:
            The decoder is a component of the BlenderbotForConditionalGeneration model
            that is responsible for generating responses based on the input.

        Example:
            ```python
            >>> model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
            >>> decoder = model.get_decoder()
            ```
        """
        return self.model.get_decoder()

    def resize_token_embeddings(self, new_num_tokens: int, pad_to_multiple_of: Optional[int] = None) -> nn.Embedding:
        """
        Resize the token embeddings of the Blenderbot model.

        Args:
            self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
            new_num_tokens (int): The desired number of tokens for the resized embeddings.
            pad_to_multiple_of (Optional[int], optional): If provided, the number of tokens will be padded to a multiple of this value. Defaults to None.

        Returns:
            nn.Embedding: The new resized token embeddings.

        Raises:
            None.
        """
        new_embeddings = super().resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
        self._resize_final_logits_bias(new_embeddings.weight.shape[0])
        return new_embeddings

    def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
        """
        Resizes the final logits bias of the BlenderbotForConditionalGeneration model.

        Args:
            self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
            new_num_tokens (int): The desired number of tokens for the resized final logits bias.

        Returns:
            None: This method modifies the 'final_logits_bias' attribute of the BlenderbotForConditionalGeneration instance.

        Raises:
            None.

        Description:
            This method resizes the 'final_logits_bias' attribute of the BlenderbotForConditionalGeneration model.
            The 'final_logits_bias' is a tensor that represents the bias to be added to the final logits of the model.

            If the desired number of tokens, given by 'new_num_tokens', is less than or equal to the current number
            of tokens in the 'final_logits_bias', no resizing is performed.
            In this case, the 'final_logits_bias' is sliced to retain the desired number of tokens.

            If the desired number of tokens is greater than the current number of tokens,
            the 'final_logits_bias' is extended by appending zero-valued bias columns.
            The number of extra tokens is calculated as 'new_num_tokens - old_num_tokens',
            where 'old_num_tokens' is the current number of tokens in the 'final_logits_bias'.
            The extra bias columns are created using ops.zeros() function and then concatenated with the existing
            'final_logits_bias' tensor using ops.cat() function along the last axis.

            The 'final_logits_bias' attribute is updated with the resized tensor.

        Note:
            This method does not perform any validation on the inputs or check for any specific restrictions.

        Example:
            ```python
            >>> # Create an instance of the BlenderbotForConditionalGeneration model
            >>> model = BlenderbotForConditionalGeneration()
            ...
            >>> # Resize the final_logits_bias to have 100 tokens
            >>> model._resize_final_logits_bias(100)
            ```
        """
        old_num_tokens = self.final_logits_bias.shape[-1]
        if new_num_tokens <= old_num_tokens:
            new_bias = self.final_logits_bias[:, :new_num_tokens]
        else:
            extra_bias = ops.zeros(1, new_num_tokens - old_num_tokens)
            new_bias = ops.cat([self.final_logits_bias, extra_bias], dim=1)
        self.final_logits_bias = new_bias

    def get_output_embeddings(self):
        """
        This method retrieves the output embeddings from the BlenderbotForConditionalGeneration model.

        Args:
            self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
                It is used to access the lm_head attribute, which contains the output embeddings.

        Returns:
            nn.Linear: The lm_head layer containing the output embeddings.

        Raises:
            None
        """
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        """
        Sets the output embeddings for the Blenderbot model.

        Args:
            self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
            new_embeddings: The new embeddings to be set as the output embeddings. This parameter can be of any type.

        Returns:
            None.

        Raises:
            None.
        """
        self.lm_head = new_embeddings

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        decoder_input_ids: Optional[mindspore.Tensor] = None,
        decoder_attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        decoder_head_mask: Optional[mindspore.Tensor] = None,
        cross_attn_head_mask: Optional[mindspore.Tensor] = None,
        encoder_outputs: Optional[Union[Tuple, BaseModelOutput]] = None,
        past_key_values: Optional[List[mindspore.Tensor]] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        decoder_inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], Seq2SeqLMOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:
            Union[Tuple[mindspore.Tensor], Seq2SeqLMOutput]
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if labels is not None:
            if use_cache:
                logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.")
            use_cache = False
            if decoder_input_ids is None and decoder_inputs_embeds is None:
                decoder_input_ids = shift_tokens_right(
                    labels, self.config.pad_token_id, self.config.decoder_start_token_id
                )

        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            encoder_outputs=encoder_outputs,
            decoder_attention_mask=decoder_attention_mask,
            head_mask=head_mask,
            decoder_head_mask=decoder_head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            decoder_inputs_embeds=decoder_inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias

        masked_lm_loss = None
        if labels is not None:
            masked_lm_loss = F.cross_entropy(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))

        if not return_dict:
            output = (lm_logits,) + outputs[1:]
            return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output

        return Seq2SeqLMOutput(
            loss=masked_lm_loss,
            logits=lm_logits,
            past_key_values=outputs.past_key_values,
            decoder_hidden_states=outputs.decoder_hidden_states,
            decoder_attentions=outputs.decoder_attentions,
            cross_attentions=outputs.cross_attentions,
            encoder_last_hidden_state=outputs.encoder_last_hidden_state,
            encoder_hidden_states=outputs.encoder_hidden_states,
            encoder_attentions=outputs.encoder_attentions,
        )

    def prepare_inputs_for_generation(
        self,
        decoder_input_ids,
        past_key_values=None,
        attention_mask=None,
        head_mask=None,
        decoder_head_mask=None,
        cross_attn_head_mask=None,
        use_cache=None,
        encoder_outputs=None,
        **kwargs,
    ):
        """
        This method prepares inputs for generation in the BlenderbotForConditionalGeneration class.

        Args:
            self: The instance of the class.
            decoder_input_ids (Tensor): The input tensor for the decoder.
            past_key_values (Tuple): A tuple of past key values for attention mechanism.
            attention_mask (Tensor, optional): An optional tensor for attention mask.
            head_mask (Tensor, optional): An optional tensor for head mask.
            decoder_head_mask (Tensor, optional): An optional tensor for decoder head mask.
            cross_attn_head_mask (Tensor, optional): An optional tensor for cross-attention head mask.
            use_cache (bool, optional): A flag indicating whether to use cache.
            encoder_outputs (Dict, optional): A dictionary containing encoder outputs.

        Returns:
            Dict: A dictionary containing the prepared inputs for generation
                including 'input_ids', 'encoder_outputs', 'past_key_values', 'decoder_input_ids', 'attention_mask',
                'head_mask', 'decoder_head_mask', 'cross_attn_head_mask', and 'use_cache'.

        Raises:
            None
        """
        # cut decoder_input_ids if past is used
        if past_key_values is not None:
            past_length = past_key_values[0][0].shape[2]

            # Some generation methods already pass only the last input ID
            if decoder_input_ids.shape[1] > past_length:
                remove_prefix_length = past_length
            else:
                # Default to old behavior: keep only final ID
                remove_prefix_length = decoder_input_ids.shape[1] - 1

            decoder_input_ids = decoder_input_ids[:, remove_prefix_length:]

        return {
            "input_ids": None,  # encoder_outputs is defined. input_ids not needed
            "encoder_outputs": encoder_outputs,
            "past_key_values": past_key_values,
            "decoder_input_ids": decoder_input_ids,
            "attention_mask": attention_mask,
            "head_mask": head_mask,
            "decoder_head_mask": decoder_head_mask,
            "cross_attn_head_mask": cross_attn_head_mask,
            "use_cache": use_cache,  # change this to avoid caching (presumably for debugging)
        }

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        """
        Reorders the past key values according to the beam index.

        Args:
            past_key_values (Tuple): A tuple of cached key/value states, one entry per decoder layer.
                Each entry contains four tensors of shape
                (batch_size * num_beams, num_heads, sequence_length, head_dim):
                the self-attention key and value states followed by the cross-attention key and value states.

            beam_idx (Tensor): A 1-D tensor of length batch_size * num_beams containing the indices of the beams
                whose cached states should be carried over to the next generation step.

        Returns:
            Tuple: The past key values with the self-attention states reordered along the beam dimension;
                the cached cross-attention states are left unchanged.

        Raises:
            None.
        """
        reordered_past = ()
        for layer_past in past_key_values:
            # cached cross_attention states don't have to be reordered -> they are always the same
            reordered_past += (
                tuple(past_state.index_select(0, beam_idx) for past_state in layer_past[:2])
                + layer_past[2:],
            )
        return reordered_past
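
During beam search, `_reorder_cache` gathers the self-attention states of each layer along the batch-times-beams axis while leaving the cached cross-attention states untouched. A standalone sketch of the same gathering step (numpy fancy indexing standing in for `index_select`; not part of the library):

```python
import numpy as np

def reorder_cache(past_key_values, beam_idx):
    """Reorder the first two (self-attention) states per layer; keep the rest as-is."""
    reordered = ()
    for layer_past in past_key_values:
        reordered += (
            tuple(state[beam_idx] for state in layer_past[:2]) + layer_past[2:],
        )
    return reordered

# one layer, batch_size * num_beams = 4, tiny dummy shapes (beams, heads, seq, head_dim)
layer = tuple(np.arange(4 * 2 * 3 * 2).reshape(4, 2, 3, 2) for _ in range(4))
beam_idx = np.array([2, 2, 0, 1])                 # beams selected at this step
reordered = reorder_cache((layer,), beam_idx)
print(reordered[0][0].shape)                      # (4, 2, 3, 2): same shape, rows permuted
```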

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.__init__(config)

Initializes a new instance of the BlenderbotForConditionalGeneration class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An instance of the BlenderbotConfig class containing the configuration settings for the model.

TYPE: BlenderbotConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def __init__(self, config: BlenderbotConfig):
    """
    Initializes a new instance of the BlenderbotForConditionalGeneration class.

    Args:
        self: The instance of the class.
        config (BlenderbotConfig): An instance of the BlenderbotConfig class containing the configuration settings for the model.

    Returns:
        None.

    Raises:
        None
    """
    super().__init__(config)
    self.model = BlenderbotModel(config)
    self.final_logits_bias = ops.zeros(1, self.model.shared.num_embeddings)
    self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.forward(input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[Tuple[Tensor], Seq2SeqLMOutput]

Union[Tuple[mindspore.Tensor], Seq2SeqLMOutput]

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    decoder_input_ids: Optional[mindspore.Tensor] = None,
    decoder_attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    decoder_head_mask: Optional[mindspore.Tensor] = None,
    cross_attn_head_mask: Optional[mindspore.Tensor] = None,
    encoder_outputs: Optional[Union[Tuple, BaseModelOutput]] = None,
    past_key_values: Optional[List[mindspore.Tensor]] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    decoder_inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], Seq2SeqLMOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
            config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
            (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

    Returns:
        Union[Tuple[mindspore.Tensor], Seq2SeqLMOutput]
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if labels is not None:
        if use_cache:
            logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.")
        use_cache = False
        if decoder_input_ids is None and decoder_inputs_embeds is None:
            decoder_input_ids = shift_tokens_right(
                labels, self.config.pad_token_id, self.config.decoder_start_token_id
            )

    outputs = self.model(
        input_ids,
        attention_mask=attention_mask,
        decoder_input_ids=decoder_input_ids,
        encoder_outputs=encoder_outputs,
        decoder_attention_mask=decoder_attention_mask,
        head_mask=head_mask,
        decoder_head_mask=decoder_head_mask,
        cross_attn_head_mask=cross_attn_head_mask,
        past_key_values=past_key_values,
        inputs_embeds=inputs_embeds,
        decoder_inputs_embeds=decoder_inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias

    masked_lm_loss = None
    if labels is not None:
        masked_lm_loss = F.cross_entropy(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))

    if not return_dict:
        output = (lm_logits,) + outputs[1:]
        return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output

    return Seq2SeqLMOutput(
        loss=masked_lm_loss,
        logits=lm_logits,
        past_key_values=outputs.past_key_values,
        decoder_hidden_states=outputs.decoder_hidden_states,
        decoder_attentions=outputs.decoder_attentions,
        cross_attentions=outputs.cross_attentions,
        encoder_last_hidden_state=outputs.encoder_last_hidden_state,
        encoder_hidden_states=outputs.encoder_hidden_states,
        encoder_attentions=outputs.encoder_attentions,
    )
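
For reference, a minimal training-style call (a hedged sketch, assuming the facebook/blenderbot-400M-distill checkpoint is available): when `labels` are supplied and no `decoder_input_ids` are given, the decoder inputs are derived via `shift_tokens_right` and a cross-entropy loss is returned.

```python
>>> from transformers import AutoTokenizer, BlenderbotForConditionalGeneration
...
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
...
>>> inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")
>>> labels = tokenizer("That's unfortunate.", return_tensors="pt").input_ids
>>> outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
>>> outputs.loss is not None
True
```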

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.get_decoder()

Returns the decoder of the BlenderbotForConditionalGeneration model.

PARAMETER DESCRIPTION
self

An instance of the BlenderbotForConditionalGeneration class.

RETURNS DESCRIPTION
BlenderbotDecoder

The method returns the decoder module of the model.

Note

The decoder is a component of the BlenderbotForConditionalGeneration model that is responsible for generating responses based on the input.

Example
>>> model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
>>> decoder = model.get_decoder()
Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_decoder(self):
    """
    Returns the decoder of the BlenderbotForConditionalGeneration model.

    Args:
        self: An instance of the BlenderbotForConditionalGeneration class.

    Returns:
        BlenderbotDecoder: The decoder module of the model.

    Raises:
        None.

    Note:
        The decoder is a component of the BlenderbotForConditionalGeneration model
        that is responsible for generating responses based on the input.

    Example:
        ```python
        >>> model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
        >>> decoder = model.get_decoder()
        ```
    """
    return self.model.get_decoder()

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.get_encoder()

This method returns the encoder of the BlenderbotForConditionalGeneration model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForConditionalGeneration class.

RETURNS DESCRIPTION
BlenderbotEncoder

This method returns the encoder module of the model.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_encoder(self):
    """
    This method returns the encoder of the BlenderbotForConditionalGeneration model.

    Args:
        self: The instance of the BlenderbotForConditionalGeneration class.

    Returns:
        BlenderbotEncoder: The encoder module of the model.

    Raises:
        None
    """
    return self.model.get_encoder()

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.get_output_embeddings()

This method retrieves the output embeddings from the BlenderbotForConditionalGeneration model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForConditionalGeneration class. It is used to access the lm_head attribute, which contains the output embeddings.

TYPE: BlenderbotForConditionalGeneration

RETURNS DESCRIPTION

Linear

The lm_head layer containing the output embeddings.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_output_embeddings(self):
    """
    This method retrieves the output embeddings from the BlenderbotForConditionalGeneration model.

    Args:
        self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
            It is used to access the lm_head attribute, which contains the output embeddings.

    Returns:
        nn.Linear: The lm_head layer containing the output embeddings.

    Raises:
        None
    """
    return self.lm_head

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.prepare_inputs_for_generation(decoder_input_ids, past_key_values=None, attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, use_cache=None, encoder_outputs=None, **kwargs)

This method prepares inputs for generation in the BlenderbotForConditionalGeneration class.

PARAMETER DESCRIPTION
self

The instance of the class.

decoder_input_ids

The input tensor for the decoder.

TYPE: Tensor

past_key_values

A tuple of past key values for attention mechanism.

TYPE: Tuple DEFAULT: None

attention_mask

An optional tensor for attention mask.

TYPE: Tensor DEFAULT: None

head_mask

An optional tensor for head mask.

TYPE: Tensor DEFAULT: None

decoder_head_mask

An optional tensor for decoder head mask.

TYPE: Tensor DEFAULT: None

cross_attn_head_mask

An optional tensor for cross-attention head mask.

TYPE: Tensor DEFAULT: None

use_cache

A flag indicating whether to use cache.

TYPE: bool DEFAULT: None

encoder_outputs

A dictionary containing encoder outputs.

TYPE: Dict DEFAULT: None

RETURNS DESCRIPTION
Dict

A dictionary containing the prepared inputs for generation including 'input_ids', 'encoder_outputs', 'past_key_values', 'decoder_input_ids', 'attention_mask', 'head_mask', 'decoder_head_mask', 'cross_attn_head_mask', and 'use_cache'.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def prepare_inputs_for_generation(
    self,
    decoder_input_ids,
    past_key_values=None,
    attention_mask=None,
    head_mask=None,
    decoder_head_mask=None,
    cross_attn_head_mask=None,
    use_cache=None,
    encoder_outputs=None,
    **kwargs,
):
    """
    This method prepares inputs for generation in the BlenderbotForConditionalGeneration class.

    Args:
        self: The instance of the class.
        decoder_input_ids (Tensor): The input tensor for the decoder.
        past_key_values (Tuple): A tuple of past key values for attention mechanism.
        attention_mask (Tensor, optional): An optional tensor for attention mask.
        head_mask (Tensor, optional): An optional tensor for head mask.
        decoder_head_mask (Tensor, optional): An optional tensor for decoder head mask.
        cross_attn_head_mask (Tensor, optional): An optional tensor for cross-attention head mask.
        use_cache (bool, optional): A flag indicating whether to use cache.
        encoder_outputs (Dict, optional): A dictionary containing encoder outputs.

    Returns:
        Dict: A dictionary containing the prepared inputs for generation
            including 'input_ids', 'encoder_outputs', 'past_key_values', 'decoder_input_ids', 'attention_mask',
            'head_mask', 'decoder_head_mask', 'cross_attn_head_mask', and 'use_cache'.

    Raises:
        None
    """
    # cut decoder_input_ids if past is used
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]

        # Some generation methods already pass only the last input ID
        if decoder_input_ids.shape[1] > past_length:
            remove_prefix_length = past_length
        else:
            # Default to old behavior: keep only final ID
            remove_prefix_length = decoder_input_ids.shape[1] - 1

        decoder_input_ids = decoder_input_ids[:, remove_prefix_length:]

    return {
        "input_ids": None,  # encoder_outputs is defined. input_ids not needed
        "encoder_outputs": encoder_outputs,
        "past_key_values": past_key_values,
        "decoder_input_ids": decoder_input_ids,
        "attention_mask": attention_mask,
        "head_mask": head_mask,
        "decoder_head_mask": decoder_head_mask,
        "cross_attn_head_mask": cross_attn_head_mask,
        "use_cache": use_cache,  # change this to avoid caching (presumably for debugging)
    }
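
In practice this method is called internally by `generate()`, which ends up passing only the newest decoder token once `past_key_values` is populated. A hedged usage sketch, reusing `model`, `tokenizer`, and `inputs` from the forward example above:

```python
>>> reply_ids = model.generate(**inputs, max_new_tokens=32)
>>> print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```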

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.resize_token_embeddings(new_num_tokens, pad_to_multiple_of=None)

Resize the token embeddings of the Blenderbot model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForConditionalGeneration class.

TYPE: BlenderbotForConditionalGeneration

new_num_tokens

The desired number of tokens for the resized embeddings.

TYPE: int

pad_to_multiple_of

If provided, the number of tokens will be padded to a multiple of this value. Defaults to None.

TYPE: Optional[int] DEFAULT: None

RETURNS DESCRIPTION
Embedding

nn.Embedding: The new resized token embeddings.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def resize_token_embeddings(self, new_num_tokens: int, pad_to_multiple_of: Optional[int] = None) -> nn.Embedding:
    """
    Resize the token embeddings of the Blenderbot model.

    Args:
        self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
        new_num_tokens (int): The desired number of tokens for the resized embeddings.
        pad_to_multiple_of (Optional[int], optional): If provided, the number of tokens will be padded to a multiple of this value. Defaults to None.

    Returns:
        nn.Embedding: The new resized token embeddings.

    Raises:
        None.
    """
    new_embeddings = super().resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
    self._resize_final_logits_bias(new_embeddings.weight.shape[0])
    return new_embeddings
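
A short usage sketch (illustrative, assuming the same checkpoint as the other examples): resizing the vocabulary also resizes `final_logits_bias`, so the two stay the same size.

```python
>>> model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
>>> new_embeddings = model.resize_token_embeddings(model.config.vocab_size + 8)
>>> model.final_logits_bias.shape[-1] == new_embeddings.weight.shape[0]
True
```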

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.set_output_embeddings(new_embeddings)

Sets the output embeddings for the Blenderbot model.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotForConditionalGeneration class.

TYPE: BlenderbotForConditionalGeneration

new_embeddings

The new embeddings to be set as the output embeddings. This parameter can be of any type.

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def set_output_embeddings(self, new_embeddings):
    """
    Sets the output embeddings for the Blenderbot model.

    Args:
        self (BlenderbotForConditionalGeneration): The instance of the BlenderbotForConditionalGeneration class.
        new_embeddings: The new embeddings to be set as the output embeddings. This parameter can be of any type.

    Returns:
        None.

    Raises:
        None.
    """
    self.lm_head = new_embeddings

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel

Bases: BlenderbotPreTrainedModel

The BlenderbotModel class represents a model for generating responses in conversational AI systems. It is a subclass of BlenderbotPreTrainedModel and inherits its functionality.

PARAMETER DESCRIPTION
config

The configuration class that contains the model's hyperparameters.

TYPE: BlenderbotConfig

ATTRIBUTE DESCRIPTION
shared

The shared embedding layer used for both the encoder and decoder.

TYPE: Embedding

encoder

The encoder module of the model.

TYPE: BlenderbotEncoder

decoder

The decoder module of the model.

TYPE: BlenderbotDecoder

METHOD DESCRIPTION
__init__

Initializes the BlenderbotModel instance.

get_input_embeddings

Retrieves the shared embedding layer.

set_input_embeddings

Sets the shared embedding layer to a new value.

get_encoder

Retrieves the encoder module.

get_decoder

Retrieves the decoder module.

forward

Constructs the model and performs the forward pass.

RETURNS DESCRIPTION

Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]: The output of the forward pass, including the last hidden state, past key values, decoder hidden states, decoder attentions, cross attentions, encoder last hidden state, encoder hidden states, and encoder attentions.

Example
>>> from transformers import AutoTokenizer, BlenderbotModel
...
>>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
...
>>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
>>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
...
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 6, 1280]
Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
class BlenderbotModel(BlenderbotPreTrainedModel):

    """
        The `BlenderbotModel` class represents a model for generating responses in conversational AI systems.
        It is a subclass of `BlenderbotPreTrainedModel` and inherits its functionality.

        Args:
            config (BlenderbotConfig): The configuration class that contains the model's hyperparameters.

        Attributes:
            shared (nn.Embedding): The shared embedding layer used for both the encoder and decoder.
            encoder (BlenderbotEncoder): The encoder module of the model.
            decoder (BlenderbotDecoder): The decoder module of the model.

        Methods:
            __init__: Initializes the `BlenderbotModel` instance.
            get_input_embeddings: Retrieves the shared embedding layer.
            set_input_embeddings: Sets the shared embedding layer to a new value.
            get_encoder: Retrieves the encoder module.
            get_decoder: Retrieves the decoder module.
            forward: Constructs the model and performs the forward pass.

        Returns:
            Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]: The output of the forward pass,
                including the last hidden state, past key values, decoder hidden states, decoder attentions, cross attentions,
                encoder last hidden state, encoder hidden states, and encoder attentions.

        Example:
            ```python
            >>> from transformers import AutoTokenizer, BlenderbotModel
            ...
            >>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
            >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
            ...
            >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
            >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
            >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
            ...
            >>> last_hidden_states = outputs.last_hidden_state
            >>> list(last_hidden_states.shape)
            [1, 6, 1280]
            ```
        """
    _tied_weights_keys = ["decoder.embed_tokens.weight", "encoder.embed_tokens.weight"]

    def __init__(self, config: BlenderbotConfig):
        """
        This method initializes a new instance of the BlenderbotModel class.

        Args:
            self: The instance of the BlenderbotModel class.
            config (BlenderbotConfig):
                An instance of the BlenderbotConfig class containing configuration parameters for the model.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)

        padding_idx, vocab_size = config.pad_token_id, config.vocab_size
        self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)

        self.encoder = BlenderbotEncoder(config, self.shared)
        self.decoder = BlenderbotDecoder(config, self.shared)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        """
        This method retrieves the input embeddings from the BlenderbotModel.

        Args:
            self: BlenderbotModel instance. The instance of the BlenderbotModel class.

        Returns:
            nn.Embedding: The shared embedding layer used by both the encoder and the decoder.

        Raises:
            None.
        """
        return self.shared

    def set_input_embeddings(self, value):
        """
        Sets the input embeddings for the BlenderbotModel.

        Args:
            self (BlenderbotModel): The instance of the BlenderbotModel class.
            value (nn.Embedding): The new shared embedding module; its weight has shape (vocab_size, d_model).

        Returns:
            None.

        Raises:
            None.
        """
        self.shared = value
        self.encoder.embed_tokens = self.shared
        self.decoder.embed_tokens = self.shared

    def get_encoder(self):
        """
        Returns the encoder used in the BlenderbotModel.

        Args:
            self (BlenderbotModel): An instance of the BlenderbotModel class.

        Returns:
            BlenderbotEncoder: The encoder module used in the BlenderbotModel.

        Raises:
            None.
        """
        return self.encoder

    def get_decoder(self):
        """
        This method returns the decoder used in the BlenderbotModel.

        Args:
            self: The instance of the BlenderbotModel class.

        Returns:
            BlenderbotDecoder: The decoder module used in the BlenderbotModel.

        Raises:
            None.
        """
        return self.decoder

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        decoder_input_ids: Optional[mindspore.Tensor] = None,
        decoder_attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        decoder_head_mask: Optional[mindspore.Tensor] = None,
        cross_attn_head_mask: Optional[mindspore.Tensor] = None,
        encoder_outputs: Optional[Union[Tuple, BaseModelOutput]] = None,
        past_key_values: Optional[List[mindspore.Tensor]] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        decoder_inputs_embeds: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]:
        r"""
        Returns:
            Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]

        Example:
            ```python
            >>> from transformers import AutoTokenizer, BlenderbotModel
            ...
            >>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
            >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
            ...
            >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
            >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
            >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
            ...
            >>> last_hidden_states = outputs.last_hidden_state
            >>> list(last_hidden_states.shape)
            [1, 6, 1280]
            ```
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if encoder_outputs is None:
            encoder_outputs = self.encoder(
                input_ids=input_ids,
                attention_mask=attention_mask,
                head_mask=head_mask,
                inputs_embeds=inputs_embeds,
                output_attentions=output_attentions,
                output_hidden_states=output_hidden_states,
                return_dict=return_dict,
            )
        # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
        elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
            encoder_outputs = BaseModelOutput(
                last_hidden_state=encoder_outputs[0],
                hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
                attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
            )

        # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
        decoder_outputs = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            encoder_hidden_states=encoder_outputs[0],
            encoder_attention_mask=attention_mask,
            head_mask=decoder_head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            past_key_values=past_key_values,
            inputs_embeds=decoder_inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        if not return_dict:
            return decoder_outputs + encoder_outputs

        return Seq2SeqModelOutput(
            last_hidden_state=decoder_outputs.last_hidden_state,
            past_key_values=decoder_outputs.past_key_values,
            decoder_hidden_states=decoder_outputs.hidden_states,
            decoder_attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions,
            encoder_last_hidden_state=encoder_outputs.last_hidden_state,
            encoder_hidden_states=encoder_outputs.hidden_states,
            encoder_attentions=encoder_outputs.attentions,
        )

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.__init__(config)

This method initializes a new instance of the BlenderbotModel class.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotModel class.

config

An instance of the BlenderbotConfig class containing configuration parameters for the model.

TYPE: BlenderbotConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def __init__(self, config: BlenderbotConfig):
    """
    This method initializes a new instance of the BlenderbotModel class.

    Args:
        self: The instance of the BlenderbotModel class.
        config (BlenderbotConfig):
            An instance of the BlenderbotConfig class containing configuration parameters for the model.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)

    padding_idx, vocab_size = config.pad_token_id, config.vocab_size
    self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)

    self.encoder = BlenderbotEncoder(config, self.shared)
    self.decoder = BlenderbotDecoder(config, self.shared)

    # Initialize weights and apply final processing
    self.post_init()
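
For illustration, a minimal sketch of constructing an untrained model directly from a configuration. The tiny layer sizes below are arbitrary values chosen only to keep the example fast, and the import path assumes mindnlp mirrors the Transformers-style top-level exports:

```python
>>> from mindnlp.transformers import BlenderbotConfig, BlenderbotModel
...
>>> # hypothetical tiny configuration, for illustration only
>>> config = BlenderbotConfig(
...     vocab_size=8008,
...     d_model=128,
...     encoder_layers=2,
...     decoder_layers=2,
...     encoder_attention_heads=4,
...     decoder_attention_heads=4,
...     encoder_ffn_dim=256,
...     decoder_ffn_dim=256,
... )
>>> model = BlenderbotModel(config)
>>> # the shared embedding created in __init__ has shape (vocab_size, d_model) == (8008, 128)
>>> embeddings = model.get_input_embeddings()
```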

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.forward(input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

RETURNS DESCRIPTION
Union[Tuple[Tensor], Seq2SeqModelOutput]

Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]

Example
>>> from transformers import AutoTokenizer, BlenderbotModel
...
>>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
...
>>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
>>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
...
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 6, 1280]
Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    decoder_input_ids: Optional[mindspore.Tensor] = None,
    decoder_attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    decoder_head_mask: Optional[mindspore.Tensor] = None,
    cross_attn_head_mask: Optional[mindspore.Tensor] = None,
    encoder_outputs: Optional[Union[Tuple, BaseModelOutput]] = None,
    past_key_values: Optional[List[mindspore.Tensor]] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    decoder_inputs_embeds: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]:
    r"""
    Returns:
        Union[Tuple[mindspore.Tensor], Seq2SeqModelOutput]

    Example:
        ```python
        >>> from transformers import AutoTokenizer, BlenderbotModel
        ...
        >>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
        >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
        ...
        >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
        >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
        >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
        ...
        >>> last_hidden_states = outputs.last_hidden_state
        >>> list(last_hidden_states.shape)
        [1, 6, 1280]
        ```
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    use_cache = use_cache if use_cache is not None else self.config.use_cache
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if encoder_outputs is None:
        encoder_outputs = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
    # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
    elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
        encoder_outputs = BaseModelOutput(
            last_hidden_state=encoder_outputs[0],
            hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
            attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
        )

    # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
    decoder_outputs = self.decoder(
        input_ids=decoder_input_ids,
        attention_mask=decoder_attention_mask,
        encoder_hidden_states=encoder_outputs[0],
        encoder_attention_mask=attention_mask,
        head_mask=decoder_head_mask,
        cross_attn_head_mask=cross_attn_head_mask,
        past_key_values=past_key_values,
        inputs_embeds=decoder_inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    if not return_dict:
        return decoder_outputs + encoder_outputs

    return Seq2SeqModelOutput(
        last_hidden_state=decoder_outputs.last_hidden_state,
        past_key_values=decoder_outputs.past_key_values,
        decoder_hidden_states=decoder_outputs.hidden_states,
        decoder_attentions=decoder_outputs.attentions,
        cross_attentions=decoder_outputs.cross_attentions,
        encoder_last_hidden_state=encoder_outputs.last_hidden_state,
        encoder_hidden_states=encoder_outputs.hidden_states,
        encoder_attentions=encoder_outputs.attentions,
    )
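
Because `forward` accepts precomputed `encoder_outputs`, the encoder can be run once and its states reused while trying several decoder prefixes against the same input. A minimal sketch, reusing `model`, `inputs` and `decoder_input_ids` from the example above:

```python
>>> encoder = model.get_encoder()
>>> encoder_outputs = encoder(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     return_dict=True,
... )
>>> # pass the cached encoder states back in; only the decoder runs here
>>> outputs = model(
...     attention_mask=inputs.attention_mask,
...     decoder_input_ids=decoder_input_ids,
...     encoder_outputs=encoder_outputs,
... )
```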

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.get_decoder()

This method returns the decoder used in the BlenderbotModel.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotModel class.

RETURNS DESCRIPTION
BlenderbotDecoder

The decoder module of the model.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_decoder(self):
    """
    This method returns the decoder used in the BlenderbotModel.

    Args:
        self: The instance of the BlenderbotModel class.

    Returns:
        BlenderbotDecoder: The decoder module of the model.

    Raises:
        None.
    """
    return self.decoder

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.get_encoder()

Returns the encoder used in the BlenderbotModel.

PARAMETER DESCRIPTION
self

An instance of the BlenderbotModel class.

TYPE: BlenderbotModel

RETURNS DESCRIPTION

BlenderbotEncoder

The encoder module of the model.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_encoder(self):
    """
    Returns the encoder used in the BlenderbotModel.

    Args:
        self (BlenderbotModel): An instance of the BlenderbotModel class.

    Returns:
        BlenderbotEncoder: The encoder module of the model.

    Raises:
        None.
    """
    return self.encoder

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.get_input_embeddings()

This method retrieves the input embeddings from the BlenderbotModel.

PARAMETER DESCRIPTION
self

BlenderbotModel instance. The instance of the BlenderbotModel class.

RETURNS DESCRIPTION
nn.Embedding

The shared input embedding module.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def get_input_embeddings(self):
    """
    This method retrieves the input embeddings from the BlenderbotModel.

    Args:
        self: BlenderbotModel instance. The instance of the BlenderbotModel class.

    Returns:
        nn.Embedding: The shared input embedding module.

    Raises:
        None.
    """
    return self.shared

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotModel.set_input_embeddings(value)

Sets the input embeddings for the BlenderbotModel.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotModel class.

TYPE: BlenderbotModel

value

The new input embeddings. It should be an embedding module (e.g. nn.Embedding) whose weight has shape (vocab_size, d_model).

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
def set_input_embeddings(self, value):
    """
    Sets the input embeddings for the BlenderbotModel.

    Args:
        self (BlenderbotModel): The instance of the BlenderbotModel class.
        value: The new input embeddings. It should be an embedding module (e.g. nn.Embedding) whose weight has shape (vocab_size, d_model).

    Returns:
        None.

    Raises:
        None.
    """
    self.shared = value
    self.encoder.embed_tokens = self.shared
    self.decoder.embed_tokens = self.shared
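
Because the encoder and decoder share one embedding table, setting new input embeddings keeps both sides tied. A small sketch of that invariant, assuming `model` is a BlenderbotModel instance:

```python
>>> shared = model.get_input_embeddings()
>>> model.set_input_embeddings(shared)
>>> # encoder and decoder now reference the very same embedding module
>>> assert model.get_encoder().embed_tokens is model.get_decoder().embed_tokens
>>> assert model.get_encoder().embed_tokens is shared
```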

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotPreTrainedModel

Bases: PreTrainedModel

BlenderbotPreTrainedModel is a Python class representing a pre-trained model for Blenderbot. This class inherits from PreTrainedModel and includes methods for initializing weights and providing dummy inputs.

The _init_weights method initializes the weights of the model based on the specified standard deviation and cell type, ensuring proper initialization for both Linear and Embedding cells.

The dummy_inputs method generates a set of dummy inputs for the model, including attention mask, input IDs, and decoder input IDs, with consideration for padding tokens.

This class provides essential functionality for initializing model weights and generating dummy inputs, making it a crucial component for working with pre-trained Blenderbot models.

Source code in mindnlp/transformers/models/blenderbot/modeling_blenderbot.py
class BlenderbotPreTrainedModel(PreTrainedModel):

    """
    BlenderbotPreTrainedModel is a Python class representing a pre-trained model for Blenderbot.
    This class inherits from PreTrainedModel and includes methods for initializing weights and providing dummy inputs.

    The _init_weights method initializes the weights of the model based on the specified standard deviation and cell type,
    ensuring proper initialization for both Linear and Embedding cells.

    The dummy_inputs method generates a set of dummy inputs for the model, including attention mask, input IDs,
    and decoder input IDs, with consideration for padding tokens.

    This class provides essential functionality for initializing model weights and generating dummy inputs,
    making it a crucial component for working with pre-trained Blenderbot models.
    """
    config_class = BlenderbotConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True

    def _init_weights(self, cell):
        """Initialize the weights"""
        std = self.config.init_std
        if isinstance(cell, nn.Linear):
            # Slightly different from the TF version which uses truncated_normal for initialization
            # cf https://github.com/pytorch/pytorch/pull/5617
            cell.weight.set_data(initializer(Normal(std),
                                                    cell.weight.shape, cell.weight.dtype))
            if cell.bias is not None:
                cell.bias.set_data(initializer('zeros', cell.bias.shape, cell.bias.dtype))
        elif isinstance(cell, nn.Embedding):
            weight = np.random.normal(0.0, std, cell.weight.shape)
            if cell.padding_idx:
                weight[cell.padding_idx] = 0

            cell.weight.set_data(Tensor(weight, cell.weight.dtype))

    @property
    def dummy_inputs(self):
        """
        This method generates dummy inputs for the BlenderbotPreTrainedModel.

        Args:
            self: The instance of the BlenderbotPreTrainedModel class.

        Returns:
            A dictionary containing dummy inputs in the following format:
                {
                    'attention_mask': A tensor representing the attention mask where pad tokens are masked,
                    'input_ids': A tensor representing the input IDs,
                    'decoder_input_ids': A tensor representing the decoder input IDs
                }

        Raises:
            None
        """
        pad_token = self.config.pad_token_id
        input_ids = mindspore.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]])
        dummy_inputs = {
            "attention_mask": input_ids.ne(pad_token),
            "input_ids": input_ids,
            "decoder_input_ids": input_ids,
        }
        return dummy_inputs

mindnlp.transformers.models.blenderbot.modeling_blenderbot.BlenderbotPreTrainedModel.dummy_inputs property

This method generates dummy inputs for the BlenderbotPreTrainedModel.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotPreTrainedModel class.

RETURNS DESCRIPTION

A dictionary containing dummy inputs in the following format: { 'attention_mask': A tensor representing the attention mask where pad tokens are masked, 'input_ids': A tensor representing the input IDs, 'decoder_input_ids': A tensor representing the decoder input IDs }
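
For illustration, the property can be used as a quick smoke test of a loaded model. A sketch, assuming the checkpoint is available and that mindnlp exposes the Transformers-style import:

```python
>>> from mindnlp.transformers import BlenderbotModel
...
>>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
>>> batch = model.dummy_inputs
>>> sorted(batch.keys())
['attention_mask', 'decoder_input_ids', 'input_ids']
>>> outputs = model(**batch)  # forward pass on the dummy batch
```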

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer

Bases: PreTrainedTokenizer

Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not:

Example
>>> from transformers import BlenderbotTokenizer
...
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
>>> tokenizer.add_prefix_space = False
>>> tokenizer("Hello world")["input_ids"]
[47, 921, 86, 1085, 2]
>>> tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).

This tokenizer inherits from [PreTrainedTokenizer] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str`

merges_file

Path to the merges file.

TYPE: `str`

errors

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

TYPE: `str`, *optional*, defaults to `"replace"` DEFAULT: 'replace'

bos_token

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sequence token.

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

sep_token

The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

cls_token

The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

mask_token

The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

TYPE: `str`, *optional*, defaults to `"<mask>"` DEFAULT: '<mask>'

add_prefix_space

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
class BlenderbotTokenizer(PreTrainedTokenizer):
    """
    Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    Example:
        ```python
        >>> from transformers import BlenderbotTokenizer
        ...
        >>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
        >>> tokenizer.add_prefix_space = False
        >>> tokenizer("Hello world")["input_ids"]
        [47, 921, 86, 1085, 2]
        >>> tokenizer(" Hello world")["input_ids"]
        [6950, 1085, 2]
        ```

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

    <Tip>

    When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).

    </Tip>

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
    this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the beginning of
            sequence. The token used is the `cls_token`.

            </Tip>

        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the end of sequence.
            The token used is the `sep_token`.

            </Tip>

        sep_token (`str`, *optional*, defaults to `"</s>"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        cls_token (`str`, *optional*, defaults to `"<s>"`):
            The classifier token which is used when doing sequence classification (classification of the whole sequence
            instead of per-token classification). It is the first token of the sequence when built with special tokens.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        mask_token (`str`, *optional*, defaults to `"<mask>"`):
            The token used for masking values. This is the token used when training this model with masked language
            modeling. This is the token which the model will try to predict.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows the leading word to be treated just
            like any other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)
    """
    vocab_files_names = VOCAB_FILES_NAMES
    model_input_names = ["input_ids", "attention_mask"]

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.__init__ with Roberta->Blenderbot, RoBERTa->Blenderbot
    def __init__(
        self,
        vocab_file,
        merges_file,
        errors="replace",
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        add_prefix_space=False,
        **kwargs,
    ):
        """
        Initializes a new instance of the BlenderbotTokenizer class.

        Args:
            self: The object instance.
            vocab_file (str): The path to the vocabulary file.
            merges_file (str): The path to the BPE merges file.
            errors (str, optional): Specifies how to handle encoding errors. Defaults to 'replace'.
            bos_token (str, optional): The beginning of sentence token. Defaults to '<s>'.
            eos_token (str, optional): The end of sentence token. Defaults to '</s>'.
            sep_token (str, optional): The separator token. Defaults to '</s>'.
            cls_token (str, optional): The classification token. Defaults to '<s>'.
            unk_token (str, optional): The unknown token. Defaults to '<unk>'.
            pad_token (str, optional): The padding token. Defaults to '<pad>'.
            mask_token (str, optional): The mask token. Defaults to '<mask>'.
            add_prefix_space (bool, optional): Whether to add a prefix space to the input. Defaults to False.

        Returns:
            None

        Raises:
            FileNotFoundError: If the vocab_file or merges_file is not found.
            UnicodeDecodeError: If there is an error decoding the vocabulary or merges file.
            ValueError: If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string.
            TypeError: If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string or AddedToken.

        """
        bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
        pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
        eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
        unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
        sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token
        cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token

        # Mask token behave like a normal word, i.e. include the space before it
        mask_token = (
            AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
            if isinstance(mask_token, str)
            else mask_token
        )

        # these special tokens are not part of the vocab.json, let's add them in the correct order

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.encoder = json.load(vocab_handle)
        self.decoder = {v: k for k, v in self.encoder.items()}
        self.errors = errors  # how to handle errors in decoding
        self.byte_encoder = bytes_to_unicode()
        self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
        with open(merges_file, encoding="utf-8") as merges_handle:
            bpe_merges = merges_handle.read().split("\n")[1:-1]
        bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
        self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
        self.cache = {}
        self.add_prefix_space = add_prefix_space

        # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
        self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

        super().__init__(
            errors=errors,
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            sep_token=sep_token,
            cls_token=cls_token,
            pad_token=pad_token,
            mask_token=mask_token,
            add_prefix_space=add_prefix_space,
            **kwargs,
        )

    @property
    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.vocab_size with Roberta->Blenderbot, RoBERTa->Blenderbot
    def vocab_size(self):
        """
        Method:
            vocab_size

        Description:
            Returns the size of the vocabulary used by the BlenderbotTokenizer instance.

        Args:
            self (BlenderbotTokenizer): The instance of BlenderbotTokenizer.

        Returns:
            int: The size of the vocabulary used by the BlenderbotTokenizer.

        Raises:
            None
        """
        return len(self.encoder)

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_vocab with Roberta->Blenderbot, RoBERTa->Blenderbot
    def get_vocab(self):
        """
        Retrieve the vocabulary from the BlenderbotTokenizer.

        Args:
            self: An instance of the BlenderbotTokenizer class.

        Returns:
            A dictionary mapping tokens to their corresponding ids. The vocabulary includes tokens from the encoder
            and any additional tokens that have been added using the 'add_tokens' method.

        Raises:
            None.
        """
        vocab = dict(self.encoder).copy()
        vocab.update(self.added_tokens_encoder)
        return vocab

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.bpe with Roberta->Blenderbot, RoBERTa->Blenderbot
    def bpe(self, token):
        """
        This method, 'bpe', is defined within the class 'BlenderbotTokenizer' and is used to perform Byte Pair Encoding (BPE) on a given token.

        Args:
            self (BlenderbotTokenizer): The instance of the BlenderbotTokenizer class.
            token (str): The input token to be processed through BPE. It should be a string representing a token.

        Returns:
            str: The BPE processed token as a string. If the input token does not contain any pairs for BPE processing, the original token is returned.

        Raises:
            None
        """
        if token in self.cache:
            return self.cache[token]
        word = tuple(token)
        pairs = get_pairs(word)

        if not pairs:
            return token

        while True:
            bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
            if bigram not in self.bpe_ranks:
                break
            first, second = bigram
            new_word = []
            i = 0
            while i < len(word):
                try:
                    j = word.index(first, i)
                except ValueError:
                    new_word.extend(word[i:])
                    break
                else:
                    new_word.extend(word[i:j])
                    i = j

                if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
                    new_word.append(first + second)
                    i += 2
                else:
                    new_word.append(word[i])
                    i += 1
            new_word = tuple(new_word)
            word = new_word
            if len(word) == 1:
                break
            else:
                pairs = get_pairs(word)
        word = " ".join(word)
        self.cache[token] = word
        return word

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._tokenize with Roberta->Blenderbot, RoBERTa->Blenderbot
    def _tokenize(self, text):
        """Tokenize a string."""
        bpe_tokens = []
        for token in re.findall(self.pat, text):
            token = "".join(
                self.byte_encoder[b] for b in token.encode("utf-8")
            )  # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
            bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
        return bpe_tokens

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._convert_token_to_id with Roberta->Blenderbot, RoBERTa->Blenderbot
    def _convert_token_to_id(self, token):
        """Converts a token (str) in an id using the vocab."""
        return self.encoder.get(token, self.encoder.get(self.unk_token))

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer._convert_id_to_token with Roberta->Blenderbot, RoBERTa->Blenderbot
    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        return self.decoder.get(index)

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.convert_tokens_to_string with Roberta->Blenderbot, RoBERTa->Blenderbot
    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) in a single string."""
        text = "".join(tokens)
        text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
        return text

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.save_vocabulary with Roberta->Blenderbot, RoBERTa->Blenderbot
    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary files for the BlenderbotTokenizer.

        Args:
            self (BlenderbotTokenizer): The instance of the BlenderbotTokenizer class.
            save_directory (str): The directory where the vocabulary files will be saved.
            filename_prefix (Optional[str], optional):
                The prefix to be added to the vocabulary file names. Defaults to None.

        Returns:
            Tuple[str]: A tuple containing the paths of the saved vocabulary files.

        Raises:
            None: this method does not raise; if `save_directory` is not a directory, an error is logged and nothing is saved.

        The `save_vocabulary` method saves the vocabulary files for the tokenizer.
        It takes the `save_directory` as input, which is the directory where the vocabulary files will be saved. The optional
        `filename_prefix` parameter can be used to add a prefix to the vocabulary file names.

        The method saves two files: the vocabulary file and the merges file.
        The vocabulary file contains the encoding dictionary of the tokenizer,
        while the merges file contains the BPE merge indices.

        If `save_directory` does not exist or is not a directory, an error is logged and no files are written.

        Example:
            ```python
            >>> tokenizer = BlenderbotTokenizer.from_pretrained('facebook/blenderbot-400M-distill')
            >>> tokenizer.save_vocabulary('/path/to/save', filename_prefix='my-prefix')
            ```
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )
        merge_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
        )

        with open(vocab_file, "w", encoding="utf-8") as f:
            f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

        index = 0
        with open(merge_file, "w", encoding="utf-8") as writer:
            writer.write("#version: 0.2\n")
            for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
                if index != token_index:
                    logger.warning(
                        f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
                        " Please check that the tokenizer is not corrupted!"
                    )
                    index = token_index
                writer.write(" ".join(bpe_tokens) + "\n")
                index += 1

        return vocab_file, merge_file

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_special_tokens_mask with Roberta->Blenderbot, RoBERTa->Blenderbot
    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.create_token_type_ids_from_sequences with Roberta->Blenderbot, RoBERTa->Blenderbot
    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not
        make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        sep = [self.sep_token_id]
        cls = [self.cls_token_id]

        if token_ids_1 is None:
            return len(cls + token_ids_0 + sep) * [0]
        return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

    # Copied from transformers.models.roberta.tokenization_roberta.RobertaTokenizer.prepare_for_tokenization with Roberta->Blenderbot, RoBERTa->Blenderbot
    def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
        """
        This method prepares the input text for tokenization by adding a leading space when `is_split_into_words` or `add_prefix_space` is set and the text does not already start with whitespace.

        Args:
            self: The instance of the BlenderbotTokenizer class.
            text (str): The input text to be prepared for tokenization.
            is_split_into_words (bool): A flag indicating whether the input text is already split into words. Default is False.

        Returns:
            Tuple[str, dict]: The (possibly prefixed) text and the remaining keyword arguments.

        Raises:
            None.
        """
        add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
        if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()):
            text = " " + text
        return (text, kwargs)

    def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None):
        """
        Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
        adding special tokens. A Blenderbot sequence has the following format:

        - single sequence: ` X </s>`

        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added
            token_ids_1 (`List[int]`, *optional*):
                Will be ignored
        Returns:
            `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        return token_ids_0 + [self.eos_token_id]

    @property
    def default_chat_template(self):
        """
        A very simple chat template that just adds whitespace between messages.
        """
        logger.warning_once(
            "\nNo chat template is defined for this tokenizer - using the default template "
            f"for the {self.__class__.__name__} class. If the default is not appropriate for "
            "your model, please set `tokenizer.chat_template` to an appropriate template. "
            "See https://huggingface.co/docs/transformers/main/chat_templating for more information.\n"
        )
        return (
            "{% for message in messages %}"
            "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
            "{{ message['content'] }}"
            "{% if not loop.last %}{{ '  ' }}{% endif %}"
            "{% endfor %}"
            "{{ eos_token }}"
        )

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.default_chat_template property

A very simple chat template that just adds whitespace between messages.
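
A sketch of what the template produces, assuming the generic `apply_chat_template` helper from the tokenizer base class is available:

```python
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> messages = [
...     {"role": "user", "content": "Hello!"},
...     {"role": "assistant", "content": "Nice to meet you."},
... ]
>>> rendered = tokenizer.apply_chat_template(messages, tokenize=False)
>>> # user turns get a leading space, turns are separated by two spaces, and the
>>> # conversation ends with the eos token, roughly: ' Hello!  Nice to meet you.</s>'
```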

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.vocab_size property

Returns the size of the vocabulary used by the BlenderbotTokenizer instance.

PARAMETER DESCRIPTION
self

The instance of BlenderbotTokenizer.

TYPE: BlenderbotTokenizer

RETURNS DESCRIPTION
int

The size of the vocabulary used by the BlenderbotTokenizer.

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.__init__(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)

Initializes a new instance of the BlenderbotTokenizer class.

PARAMETER DESCRIPTION
self

The object instance.

vocab_file

The path to the vocabulary file.

TYPE: str

merges_file

The path to the BPE merges file.

TYPE: str

errors

Specifies how to handle encoding errors. Defaults to 'replace'.

TYPE: str DEFAULT: 'replace'

bos_token

The beginning of sentence token. Defaults to '<s>'.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sentence token. Defaults to '</s>'.

TYPE: str DEFAULT: '</s>'

sep_token

The separator token. Defaults to '</s>'.

TYPE: str DEFAULT: '</s>'

cls_token

The classification token. Defaults to '<s>'.

TYPE: str DEFAULT: '<s>'

unk_token

The unknown token. Defaults to '<unk>'.

TYPE: str DEFAULT: '<unk>'

pad_token

The padding token. Defaults to '<pad>'.

TYPE: str DEFAULT: '<pad>'

mask_token

The mask token. Defaults to '<mask>'.

TYPE: str DEFAULT: '<mask>'

add_prefix_space

Whether to add a prefix space to the input. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None

RAISES DESCRIPTION
FileNotFoundError

If the vocab_file or merges_file is not found.

UnicodeDecodeError

If there is an error decoding the vocabulary or merges file.

ValueError

If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string.

TypeError

If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string or AddedToken.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def __init__(
    self,
    vocab_file,
    merges_file,
    errors="replace",
    bos_token="<s>",
    eos_token="</s>",
    sep_token="</s>",
    cls_token="<s>",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
    add_prefix_space=False,
    **kwargs,
):
    """
    Initializes a new instance of the BlenderbotTokenizer class.

    Args:
        self: The object instance.
        vocab_file (str): The path to the vocabulary file.
        merges_file (str): The path to the BPE merges file.
        errors (str, optional): Specifies how to handle encoding errors. Defaults to 'replace'.
        bos_token (str, optional): The beginning of sentence token. Defaults to '<s>'.
        eos_token (str, optional): The end of sentence token. Defaults to '</s>'.
        sep_token (str, optional): The separator token. Defaults to '</s>'.
        cls_token (str, optional): The classification token. Defaults to '<s>'.
        unk_token (str, optional): The unknown token. Defaults to '<unk>'.
        pad_token (str, optional): The padding token. Defaults to '<pad>'.
        mask_token (str, optional): The mask token. Defaults to '<mask>'.
        add_prefix_space (bool, optional): Whether to add a prefix space to the input. Defaults to False.

    Returns:
        None

    Raises:
        FileNotFoundError: If the vocab_file or merges_file is not found.
        UnicodeDecodeError: If there is an error decoding the vocabulary or merges file.
        ValueError: If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string.
        TypeError: If the bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, or mask_token is not a string or AddedToken.

    """
    bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
    pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
    eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
    unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
    sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token
    cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token

    # Mask token behave like a normal word, i.e. include the space before it
    mask_token = (
        AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
        if isinstance(mask_token, str)
        else mask_token
    )

    # these special tokens are not part of the vocab.json, let's add them in the correct order

    with open(vocab_file, encoding="utf-8") as vocab_handle:
        self.encoder = json.load(vocab_handle)
    self.decoder = {v: k for k, v in self.encoder.items()}
    self.errors = errors  # how to handle errors in decoding
    self.byte_encoder = bytes_to_unicode()
    self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
    with open(merges_file, encoding="utf-8") as merges_handle:
        bpe_merges = merges_handle.read().split("\n")[1:-1]
    bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
    self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
    self.cache = {}
    self.add_prefix_space = add_prefix_space

    # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
    self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

    super().__init__(
        errors=errors,
        bos_token=bos_token,
        eos_token=eos_token,
        unk_token=unk_token,
        sep_token=sep_token,
        cls_token=cls_token,
        pad_token=pad_token,
        mask_token=mask_token,
        add_prefix_space=add_prefix_space,
        **kwargs,
    )
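
In practice the tokenizer is usually obtained through `from_pretrained`; constructing it directly from local files is also possible. A sketch (the local file names are placeholders):

```python
>>> # preferred: load vocab.json / merges.txt from the hub checkpoint
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
...
>>> # equivalent direct construction from local files (hypothetical paths)
>>> tokenizer = BlenderbotTokenizer(vocab_file="vocab.json", merges_file="merges.txt")
```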

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.bpe(token)

This method, 'bpe', is defined within the class 'BlenderbotTokenizer' and is used to perform Byte Pair Encoding (BPE) on a given token.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotTokenizer class.

TYPE: BlenderbotTokenizer

token

The input token to be processed through BPE. It should be a string representing a token.

TYPE: str

RETURNS DESCRIPTION
str

The BPE processed token as a string. If the input token does not contain any pairs for BPE processing, the original token is returned.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def bpe(self, token):
    """
    This method, 'bpe', is defined within the class 'BlenderbotTokenizer' and is used to perform Byte Pair Encoding (BPE) on a given token.

    Args:
        self (BlenderbotTokenizer): The instance of the BlenderbotTokenizer class.
        token (str): The input token to be processed through BPE. It should be a string representing a token.

    Returns:
        str: The BPE processed token as a string. If the input token does not contain any pairs for BPE processing, the original token is returned.

    Raises:
        None
    """
    if token in self.cache:
        return self.cache[token]
    word = tuple(token)
    pairs = get_pairs(word)

    if not pairs:
        return token

    while True:
        bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
        if bigram not in self.bpe_ranks:
            break
        first, second = bigram
        new_word = []
        i = 0
        while i < len(word):
            try:
                j = word.index(first, i)
            except ValueError:
                new_word.extend(word[i:])
                break
            else:
                new_word.extend(word[i:j])
                i = j

            if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
                new_word.append(first + second)
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        new_word = tuple(new_word)
        word = new_word
        if len(word) == 1:
            break
        else:
            pairs = get_pairs(word)
    word = " ".join(word)
    self.cache[token] = word
    return word

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A Blenderbot sequence has the following format:

  • single sequence: X </s>
PARAMETER DESCRIPTION
token_ids_0

List of IDs to which the special tokens will be added

TYPE: `List[int]`

token_ids_1

Will be ignored

TYPE: `List[int]`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None):
    """
    Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
    adding special tokens. A Blenderbot sequence has the following format:

    - single sequence: ` X </s>`

    Args:
        token_ids_0 (`List[int]`):
            List of IDs to which the special tokens will be added
        token_ids_1 (`List[int]`, *optional*):
            Will be ignored
    Returns:
        `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
    """
    return token_ids_0 + [self.eos_token_id]
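
A small sketch of the single-sequence format: only an eos token is appended, and a second sequence is ignored (with `tokenizer` loaded as in the examples above):

```python
>>> ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
>>> with_special = tokenizer.build_inputs_with_special_tokens(ids)
>>> with_special == ids + [tokenizer.eos_token_id]
True
>>> # a second sequence is silently dropped
>>> tokenizer.build_inputs_with_special_tokens(ids, [1, 2, 3]) == with_special
True
```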

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.convert_tokens_to_string(tokens)

Converts a sequence of tokens (strings) into a single string.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def convert_tokens_to_string(self, tokens):
    """Converts a sequence of tokens (string) in a single string."""
    text = "".join(tokens)
    text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
    return text
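
Together with `_tokenize`, this round-trips text through the byte-level BPE representation without loss. A sketch:

```python
>>> tokens = tokenizer.tokenize(" Hello world")
>>> tokenizer.convert_tokens_to_string(tokens)
' Hello world'
```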

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not make use of token type ids, therefore a list of zeros is returned.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: List of zeros.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not
    make use of token type ids, therefore a list of zeros is returned.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of zeros.
    """
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]

    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
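
Because Blenderbot ignores token type ids, the output is all zeros; its length covers the ids plus the special tokens that would frame them (placeholder ids, reusing the tokenizer from the sketches above):

>>> tokenizer.create_token_type_ids_from_sequences([10, 11, 12])
[0, 0, 0, 0, 0]
>>> tokenizer.create_token_type_ids_from_sequences([10, 11], [20, 21])
[0, 0, 0, 0, 0, 0, 0, 0]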

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

already_has_special_tokens

Whether or not the token list is already formatted with special tokens for the model.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

RETURNS DESCRIPTION
List[int]

List[int]: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def get_special_tokens_mask(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
    """
    Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
    special tokens using the tokenizer `prepare_for_model` method.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.
        already_has_special_tokens (`bool`, *optional*, defaults to `False`):
            Whether or not the token list is already formatted with special tokens for the model.

    Returns:
        `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
    """
    if already_has_special_tokens:
        return super().get_special_tokens_mask(
            token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
        )

    if token_ids_1 is None:
        return [1] + ([0] * len(token_ids_0)) + [1]
    return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
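
For the same placeholder ids, the mask marks the framing special-token positions with 1 and the sequence tokens with 0:

>>> tokenizer.get_special_tokens_mask([10, 11, 12])
[1, 0, 0, 0, 1]
>>> tokenizer.get_special_tokens_mask([10, 11], [20, 21])
[1, 0, 0, 1, 1, 0, 0, 1]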

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.get_vocab()

Retrieve the vocabulary from the BlenderbotTokenizer.

PARAMETER DESCRIPTION
self

An instance of the BlenderbotTokenizer class.

RETURNS DESCRIPTION

A dictionary representing the vocabulary of the tokenizer, mapping encoder tokens to their corresponding ids. The vocabulary includes the encoder tokens plus any additional tokens that have been added using the 'add_tokens' method.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def get_vocab(self):
    """
    Retrieve the vocabulary from the BlenderbotTokenizer.

    Args:
        self: An instance of the BlenderbotTokenizer class.

    Returns:
        A dictionary object representing the vocabulary of the tokenizer. The dictionary contains the encoder tokens
        mapping with their corresponding ids. The vocabulary includes tokens from the encoder and any additional
        tokens that have been added using the 'add_tokens' method.

    Raises:
        None.
    """
    vocab = dict(self.encoder).copy()
    vocab.update(self.added_tokens_encoder)
    return vocab
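
A small sketch showing that the returned dictionary grows as tokens are added (the token <my_new_token> is purely illustrative; reusing the tokenizer from the sketches above):

>>> base_size = len(tokenizer.get_vocab())
>>> tokenizer.add_tokens(["<my_new_token>"])
1
>>> len(tokenizer.get_vocab()) == base_size + 1
True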

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.prepare_for_tokenization(text, is_split_into_words=False, **kwargs)

This method prepares the input text for tokenization by adding a leading space when add_prefix_space is requested or when the input is flagged as already split into words, provided the text does not already start with whitespace.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotTokenizer class.

text

The input text to be prepared for tokenization.

TYPE: str

is_split_into_words

A flag indicating whether the input text is already split into words. Default is False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple

A tuple (text, kwargs) containing the possibly prefixed text and the remaining keyword arguments; the input text itself is not modified in place.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
    """
    This method prepares the input text for tokenization by adding a prefix space if specified or if the text is not already split into words.

    Args:
        self: The instance of the BlenderbotTokenizer class.
        text (str): The input text to be prepared for tokenization.
        is_split_into_words (bool): A flag indicating whether the input text is already split into words. Default is False.

    Returns:
        None: The method modifies the input text in place.

    Raises:
        None.
    """
    add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
    if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()):
        text = " " + text
    return (text, kwargs)
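
A sketch of the prefix-space behaviour; note that a (text, kwargs) tuple is returned rather than the text being modified in place:

>>> tokenizer.prepare_for_tokenization("Hello world", add_prefix_space=True)
(' Hello world', {})
>>> tokenizer.prepare_for_tokenization("Hello world", is_split_into_words=True)
(' Hello world', {})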

mindnlp.transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.save_vocabulary(save_directory, filename_prefix=None)

Save the vocabulary files for the BlenderbotTokenizer.

PARAMETER DESCRIPTION
self

The instance of the BlenderbotTokenizer class.

TYPE: BlenderbotTokenizer

save_directory

The directory where the vocabulary files will be saved.

TYPE: str

filename_prefix

The prefix to be added to the vocabulary file names. Defaults to None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the paths of the saved vocabulary files.

RAISES DESCRIPTION
FileNotFoundError

If the save_directory does not exist or is not a directory.

The save_vocabulary method saves the vocabulary files for the tokenizer. It takes the save_directory as input, which is the directory where the vocabulary files will be saved. The optional filename_prefix parameter can be used to add a prefix to the vocabulary file names.

The method saves two files: the vocabulary file and the merges file. The vocabulary file contains the token-to-id encoding dictionary of the tokenizer, while the merges file lists the BPE merge rules in rank order.

If the save_directory does not exist or is not a directory, a FileNotFoundError is raised.

Example
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
>>> tokenizer.save_vocabulary('/path/to/save', filename_prefix='my-prefix')
Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Save the vocabulary files for the BlenderbotTokenizer.

    Args:
        self (BlenderbotTokenizer): The instance of the BlenderbotTokenizer class.
        save_directory (str): The directory where the vocabulary files will be saved.
        filename_prefix (Optional[str], optional):
            The prefix to be added to the vocabulary file names. Defaults to None.

    Returns:
        Tuple[str]: A tuple containing the paths of the saved vocabulary files.

    Raises:
        FileNotFoundError: If the `save_directory` does not exist or is not a directory.

    The `save_vocabulary` method saves the vocabulary files for the tokenizer.
    It takes the `save_directory` as input, which is the directory where the vocabulary files will be saved. The optional
    `filename_prefix` parameter can be used to add a prefix to the vocabulary file names.

    The method saves two files: the vocabulary file and the merges file.
    The vocabulary file contains the encoding dictionary of the tokenizer,
    while the merges file contains the BPE merge indices.

    If the `save_directory` does not exist or is not a directory, a `FileNotFoundError` is raised.

    Example:
        ```python
        >>> tokenizer = BlenderbotTokenizer()
        >>> tokenizer.save_vocabulary('/path/to/save', filename_prefix='my-prefix')
        ```
    """
    if not os.path.isdir(save_directory):
        logger.error(f"Vocabulary path ({save_directory}) should be a directory")
        return
    vocab_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
    )
    merge_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
    )

    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

    index = 0
    with open(merge_file, "w", encoding="utf-8") as writer:
        writer.write("#version: 0.2\n")
        for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
            if index != token_index:
                logger.warning(
                    f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
                    " Please check that the tokenizer is not corrupted!"
                )
                index = token_index
            writer.write(" ".join(bpe_tokens) + "\n")
            index += 1

    return vocab_file, merge_file
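
A sketch of saving into a temporary directory; the returned paths point at the vocabulary (JSON) and merges (text) files named according to VOCAB_FILES_NAMES, with the optional prefix prepended:

>>> import os, tempfile
>>> save_dir = tempfile.mkdtemp()
>>> vocab_path, merges_path = tokenizer.save_vocabulary(save_dir, filename_prefix="my-prefix")
>>> all(os.path.isfile(p) for p in (vocab_path, merges_path))
True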

mindnlp.transformers.models.blenderbot.tokenization_blenderbot_fast.BlenderbotTokenizerFast

Bases: PreTrainedTokenizerFast

Construct a "fast" Blenderbot tokenizer (backed by HuggingFace's tokenizers library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:

Example
>>> from transformers import BlenderbotTokenizerFast
...
>>> tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B")
>>> tokenizer("Hello world")["input_ids"]
[6950, 1085, 2]
>>> tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
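
A sketch of calling the fast tokenizer on pre-tokenized input (assumes the facebook/blenderbot-3B checkpoint, the mindnlp.transformers import path, and that add_prefix_space can be overridden via from_pretrained; the final id is the eos token appended by the post-processor):

>>> from mindnlp.transformers import BlenderbotTokenizerFast
>>> tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B", add_prefix_space=True)
>>> enc = tokenizer(["Hello", "world"], is_split_into_words=True)
>>> enc["input_ids"][-1] == tokenizer.eos_token_id
True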

This tokenizer inherits from [PreTrainedTokenizerFast] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str` DEFAULT: None

merges_file

Path to the merges file.

TYPE: `str` DEFAULT: None

errors

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

TYPE: `str`, *optional*, defaults to `"replace"` DEFAULT: 'replace'

bos_token

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sequence token.

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

sep_token

The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

cls_token

The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

mask_token

The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

TYPE: `str`, *optional*, defaults to `"<mask>"` DEFAULT: '<mask>'

add_prefix_space

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

trim_offsets

Whether the post processing step should trim offsets to avoid including whitespaces.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot_fast.py
class BlenderbotTokenizerFast(PreTrainedTokenizerFast):
    """
    Construct a "fast" Blenderbot tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2
    tokenizer, using byte-level Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    Example:
        ```python
        >>> from transformers import BlenderbotTokenizerFast
        ...
        >>> tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B")
        >>> tokenizer("Hello world")["input_ids"]
        [6950, 1085, 2]
        >>> tokenizer(" Hello world")["input_ids"]
        [6950, 1085, 2]
        ```

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

    <Tip>

    When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.

    </Tip>

    This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
    refer to this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the beginning of
            sequence. The token used is the `cls_token`.

            </Tip>

        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the end of sequence.
            The token used is the `sep_token`.

            </Tip>

        sep_token (`str`, *optional*, defaults to `"</s>"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        cls_token (`str`, *optional*, defaults to `"<s>"`):
            The classifier token which is used when doing sequence classification (classification of the whole sequence
            instead of per-token classification). It is the first token of the sequence when built with special tokens.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        mask_token (`str`, *optional*, defaults to `"<mask>"`):
            The token used for masking values. This is the token used when training this model with masked language
            modeling. This is the token which the model will try to predict.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows to treat the leading word just as any
            other word. (Blenderbot tokenizer detect beginning of words by the preceding space).
        trim_offsets (`bool`, *optional*, defaults to `True`):
            Whether the post processing step should trim offsets to avoid including whitespaces.
    """

    vocab_files_names = VOCAB_FILES_NAMES
    model_input_names = ["input_ids", "attention_mask"]
    slow_tokenizer_class = BlenderbotTokenizer

    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.__init__ with Roberta->Blenderbot, RoBERTa->Blenderbot
    def __init__(
        self,
        vocab_file=None,
        merges_file=None,
        tokenizer_file=None,
        errors="replace",
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        add_prefix_space=False,
        trim_offsets=True,
        **kwargs,
    ):
        mask_token = (
            AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
            if isinstance(mask_token, str)
            else mask_token
        )
        super().__init__(
            vocab_file,
            merges_file,
            tokenizer_file=tokenizer_file,
            errors=errors,
            bos_token=bos_token,
            eos_token=eos_token,
            sep_token=sep_token,
            cls_token=cls_token,
            unk_token=unk_token,
            pad_token=pad_token,
            mask_token=mask_token,
            add_prefix_space=add_prefix_space,
            trim_offsets=trim_offsets,
            **kwargs,
        )

        pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__())
        if pre_tok_state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
            pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type"))
            pre_tok_state["add_prefix_space"] = add_prefix_space
            self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state)

        self.add_prefix_space = add_prefix_space

        tokenizer_component = "post_processor"
        tokenizer_component_instance = getattr(self.backend_tokenizer, tokenizer_component, None)
        if tokenizer_component_instance:
            state = json.loads(tokenizer_component_instance.__getstate__())

            # The lists 'sep' and 'cls' must be cased in tuples for the object `post_processor_class`
            if "sep" in state:
                state["sep"] = tuple(state["sep"])
            if "cls" in state:
                state["cls"] = tuple(state["cls"])

            changes_to_apply = False

            if state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
                state["add_prefix_space"] = add_prefix_space
                changes_to_apply = True

            if state.get("trim_offsets", trim_offsets) != trim_offsets:
                state["trim_offsets"] = trim_offsets
                changes_to_apply = True

            if changes_to_apply:
                component_class = getattr(processors, state.pop("type"))
                new_value = component_class(**state)
                setattr(self.backend_tokenizer, tokenizer_component, new_value)

    @property
    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.mask_token with Roberta->Blenderbot, RoBERTa->Blenderbot
    def mask_token(self) -> str:
        """
        `str`: Mask token, to use when training a model with masked-language modeling. Log an error if used while not
        having been set.

        Blenderbot tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily
        comprise the space before the *<mask>*.
        """
        if self._mask_token is None:
            if self.verbose:
                logger.error("Using mask_token, but it is not set yet.")
            return None
        return str(self._mask_token)

    @mask_token.setter
    def mask_token(self, value):
        """
        Overriding the default behavior of the mask token to have it eat the space before it.

        This is needed to preserve backward compatibility with all the previously used models based on Roberta.
        """
        # Mask token behave like a normal word, i.e. include the space before it
        # So we set lstrip to True
        value = AddedToken(value, lstrip=True, rstrip=False) if isinstance(value, str) else value
        self._mask_token = value

    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast._batch_encode_plus with Roberta->Blenderbot, RoBERTa->Blenderbot
    def _batch_encode_plus(self, *args, **kwargs) -> BatchEncoding:
        is_split_into_words = kwargs.get("is_split_into_words", False)
        assert self.add_prefix_space or not is_split_into_words, (
            f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
            "to use it with pretokenized inputs."
        )

        return super()._batch_encode_plus(*args, **kwargs)

    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast._encode_plus with Roberta->Blenderbot, RoBERTa->Blenderbot
    def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
        is_split_into_words = kwargs.get("is_split_into_words", False)

        assert self.add_prefix_space or not is_split_into_words, (
            f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
            "to use it with pretokenized inputs."
        )

        return super()._encode_plus(*args, **kwargs)

    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.save_vocabulary with Roberta->Blenderbot, RoBERTa->Blenderbot
    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        files = self._tokenizer.model.save(save_directory, name=filename_prefix)
        return tuple(files)

    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.create_token_type_ids_from_sequences with Roberta->Blenderbot, RoBERTa->Blenderbot
    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not
        make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        sep = [self.sep_token_id]
        cls = [self.cls_token_id]

        if token_ids_1 is None:
            return len(cls + token_ids_0 + sep) * [0]
        return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

    def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None):
        """
        Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
        adding special tokens. A Blenderbot sequence has the following format:
        - single sequence: ` X </s>`

        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added
            token_ids_1 (`List[int]`, *optional*):
                Will be ignored
        Returns:
            `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        return token_ids_0 + [self.eos_token_id]

    @property
    # Copied from transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.default_chat_template
    def default_chat_template(self):
        """
        A very simple chat template that just adds whitespace between messages.
        """
        return (
            "{% for message in messages %}"
            "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
            "{{ message['content'] }}"
            "{% if not loop.last %}{{ '  ' }}{% endif %}"
            "{% endfor %}"
            "{{ eos_token }}"
        )

mindnlp.transformers.models.blenderbot.tokenization_blenderbot_fast.BlenderbotTokenizerFast.default_chat_template property

A very simple chat template that just adds whitespace between messages.
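
A sketch of the string this template renders (an assumption: the tokenizer base class exposes apply_chat_template as in transformers and falls back to default_chat_template when no chat_template is set; reusing the BlenderbotTokenizerFast instance from the earlier sketch). Each user message is prefixed with a space, messages are joined by two spaces, and the eos token is appended:

>>> messages = [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi there"}]
>>> tokenizer.apply_chat_template(messages, tokenize=False)
' Hello  Hi there</s>'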

mindnlp.transformers.models.blenderbot.tokenization_blenderbot_fast.BlenderbotTokenizerFast.mask_token: str property writable

str: Mask token, to use when training a model with masked-language modeling. Log an error if used while not having been set.

Blenderbot tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily comprise the space before the *<mask>*.

mindnlp.transformers.models.blenderbot.tokenization_blenderbot_fast.BlenderbotTokenizerFast.build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A Blenderbot sequence has the following format:

  • single sequence: X </s>

PARAMETER DESCRIPTION
token_ids_0

List of IDs to which the special tokens will be added

TYPE: `List[int]`

token_ids_1

Will be ignored

TYPE: `List[int]`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot_fast.py
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None):
    """
    Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
    adding special tokens. A Blenderbot sequence has the following format:
    - single sequence: ` X </s>`

    Args:
        token_ids_0 (`List[int]`):
            List of IDs to which the special tokens will be added
        token_ids_1 (`List[int]`, *optional*):
            Will be ignored
    Returns:
        `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
    """
    return token_ids_0 + [self.eos_token_id]

mindnlp.transformers.models.blenderbot.tokenization_blenderbot_fast.BlenderbotTokenizerFast.create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not make use of token type ids, therefore a list of zeros is returned.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: List of zeros.

Source code in mindnlp/transformers/models/blenderbot/tokenization_blenderbot_fast.py
def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not
    make use of token type ids, therefore a list of zeros is returned.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of zeros.
    """
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]

    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]