bloom

mindnlp.transformers.models.bloom.configuration_bloom.BloomConfig

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [BloomModel]. It is used to instantiate a Bloom model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the Bloom architecture bigscience/bloom.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.

PARAMETER DESCRIPTION
vocab_size

Vocabulary size of the Bloom model. Defines the maximum number of different tokens that can be represented by the inputs_ids passed when calling [BloomModel]. Check this discussion on how the vocab_size has been defined.

TYPE: `int`, *optional*, defaults to 250880 DEFAULT: 250880

hidden_size

Dimensionality of the embeddings and hidden states.

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

n_layer

Number of hidden layers in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

n_head

Number of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 8 DEFAULT: 8

layer_norm_epsilon

The epsilon to use in the layer normalization layers.

TYPE: `float`, *optional*, defaults to 1e-5 DEFAULT: 1e-05

initializer_range

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

TYPE: `float`, *optional*, defaults to 0.02 DEFAULT: 0.02

apply_residual_connection_post_layernorm

If enabled, use the layer norm of the hidden states as the residual in the transformer blocks

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

hidden_dropout

Dropout rate used in the bias-dropout-add operation (applied to the attention and MLP outputs).

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

attention_dropout

Dropout rate applied to the attention probabilities.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

use_cache

Whether or not the model should return the last key/values attentions (not used by all models).

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

pretraining_tp

Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to this document to understand more about it. This value is necessary to ensure exact reproducibility of the pretraining results. Please refer to this issue. Note also that this is enabled only when slow_but_exact=True.

TYPE: `int`, *optional*, defaults to `1` DEFAULT: 1

slow_but_exact

Experimental feature. Whether to use the slow but exact implementation of the attention mechanism. While merging the TP rank tensors, the results may differ slightly between the model trained on Megatron and our model due to slicing operations. Please refer to this issue. Enabling this feature yields more accurate results but slows down inference. This will likely be resolved in the future once the main model has been fine-tuned with TP_rank=1.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

Example
>>> from transformers import BloomConfig, BloomModel
...
>>> # Initializing a Bloom configuration
>>> configuration = BloomConfig()
...
>>> # Initializing a model (with random weights) from the configuration
>>> model = BloomModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
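
Beyond the defaults, the constructor arguments listed above can be overridden directly. The sketch below is illustrative only; it assumes the top-level `mindnlp.transformers` re-export of `BloomConfig`, and shows the `num_hidden_layers`/`num_attention_heads` aliases from `attribute_map` as well as the legacy `n_embed` keyword handled in `__init__`.

>>> from mindnlp.transformers import BloomConfig
...
>>> # A smaller, non-default configuration (sizes are illustrative)
>>> config = BloomConfig(hidden_size=512, n_layer=12, n_head=16, hidden_dropout=0.1)
...
>>> # `num_hidden_layers` / `num_attention_heads` are aliases for `n_layer` / `n_head`
>>> (config.num_hidden_layers, config.num_attention_heads)
(12, 16)
...
>>> # The legacy `n_embed` keyword is still accepted and overrides `hidden_size`
>>> BloomConfig(n_embed=1024).hidden_size
1024
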
Source code in mindnlp/transformers/models/bloom/configuration_bloom.py
class BloomConfig(PretrainedConfig):
    """
    This is the configuration class to store the configuration of a [`BloomModel`]. It is used to instantiate a Bloom
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to the Bloom architecture
    [bigscience/bloom](https://hf-mirror.com/bigscience/bloom).

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 250880):
            Vocabulary size of the Bloom model. Defines the maximum number of different tokens that can be represented
            by the `inputs_ids` passed when calling [`BloomModel`]. Check [this
            discussion](https://hf-mirror.com/bigscience/bloom/discussions/120#633d28389addb8530b406c2a) on how the
            `vocab_size` has been defined.
        hidden_size (`int`, *optional*, defaults to 64):
            Dimensionality of the embeddings and hidden states.
        n_layer (`int`, *optional*, defaults to 2):
            Number of hidden layers in the Transformer encoder.
        n_head (`int`, *optional*, defaults to 8):
            Number of attention heads for each attention layer in the Transformer encoder.
        layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
            The epsilon to use in the layer normalization layers.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        apply_residual_connection_post_layernorm (`bool`, *optional*, defaults to `False`):
            If enabled, use the layer norm of the hidden states as the residual in the transformer blocks
        hidden_dropout (`float`, *optional*, defaults to 0.0):
            Dropout rate used in the bias-dropout-add operation (applied to the attention and MLP outputs).
        attention_dropout (`float`, *optional*, defaults to 0.0):
            Dropout rate applied to the attention probabilities.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models).
        pretraining_tp (`int`, *optional*, defaults to `1`):
            Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to [this
            document](https://hf-mirror.com/docs/transformers/parallelism) to understand more about it. This value is
            necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
            issue](https://github.com/pytorch/pytorch/issues/76232). Note also that this is enabled only when
            `slow_but_exact=True`.
        slow_but_exact (`bool`, *optional*, defaults to `False`):
            Experimental feature. Whether to use the slow but exact implementation of the attention mechanism. While
            merging the TP rank tensors, the results may differ slightly between the model trained on Megatron and our
            model due to slicing operations. Please refer to [this
            issue](https://github.com/pytorch/pytorch/issues/76232). Enabling this feature yields more accurate
            results but slows down inference. This will likely be resolved in the future once the main model has been
            fine-tuned with TP_rank=1.

    Example:
        ```python
        >>> from transformers import BloomConfig, BloomModel
        ...
        >>> # Initializing a Bloom configuration
        >>> configuration = BloomConfig()
        ...
        >>> # Initializing a model (with random weights) from the configuration
        >>> model = BloomModel(configuration)
        ...
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """
    model_type = "bloom"
    keys_to_ignore_at_inference = ["past_key_values"]
    attribute_map = {
        "num_hidden_layers": "n_layer",
        "num_attention_heads": "n_head",
    }

    def __init__(
        self,
        vocab_size=250880,
        hidden_size=64,
        n_layer=2,
        n_head=8,
        layer_norm_epsilon=1e-5,
        initializer_range=0.02,
        use_cache=True,
        bos_token_id=1,
        eos_token_id=2,
        apply_residual_connection_post_layernorm=False,
        hidden_dropout=0.0,
        attention_dropout=0.0,
        pretraining_tp=1,  # TP rank used when training with megatron
        slow_but_exact=False,
        **kwargs,
    ):
        """
        Initializes a new instance of the BloomConfig class.

        Args:
            self: The object itself.
            vocab_size (int): The size of the vocabulary. Default is 250880.
            hidden_size (int): The size of the hidden layer. Default is 64.
            n_layer (int): The number of layers. Default is 2.
            n_head (int): The number of attention heads. Default is 8.
            layer_norm_epsilon (float): The epsilon value for layer normalization. Default is 1e-05.
            initializer_range (float): The range for the initializer. Default is 0.02.
            use_cache (bool): Determines if caching is used. Default is True.
            bos_token_id (int): The ID of the beginning-of-sentence token. Default is 1.
            eos_token_id (int): The ID of the end-of-sentence token. Default is 2.
            apply_residual_connection_post_layernorm (bool): Determines if residual connection is applied after layer normalization. Default is False.
            hidden_dropout (float): The dropout rate for hidden layers. Default is 0.0.
            attention_dropout (float): The dropout rate for attention layers. Default is 0.0.
            pretraining_tp (int): The pretraining TP value. Default is 1.
            slow_but_exact (bool): Determines if the method should prioritize accuracy over speed. Default is False.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            None.
        """
        self.vocab_size = vocab_size
        # Backward compatibility with n_embed kwarg
        n_embed = kwargs.pop("n_embed", None)
        self.hidden_size = hidden_size if n_embed is None else n_embed
        self.n_layer = n_layer
        self.n_head = n_head
        self.layer_norm_epsilon = layer_norm_epsilon
        self.initializer_range = initializer_range
        self.use_cache = use_cache
        self.pretraining_tp = pretraining_tp
        self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout

        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        self.slow_but_exact = slow_but_exact

        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

mindnlp.transformers.models.bloom.configuration_bloom.BloomConfig.__init__(vocab_size=250880, hidden_size=64, n_layer=2, n_head=8, layer_norm_epsilon=1e-05, initializer_range=0.02, use_cache=True, bos_token_id=1, eos_token_id=2, apply_residual_connection_post_layernorm=False, hidden_dropout=0.0, attention_dropout=0.0, pretraining_tp=1, slow_but_exact=False, **kwargs)

Initializes a new instance of the BloomConfig class.

PARAMETER DESCRIPTION
self

The object itself.

vocab_size

The size of the vocabulary. Default is 250880.

TYPE: int DEFAULT: 250880

hidden_size

The size of the hidden layer. Default is 64.

TYPE: int DEFAULT: 64

n_layer

The number of layers. Default is 2.

TYPE: int DEFAULT: 2

n_head

The number of attention heads. Default is 8.

TYPE: int DEFAULT: 8

layer_norm_epsilon

The epsilon value for layer normalization. Default is 1e-05.

TYPE: float DEFAULT: 1e-05

initializer_range

The range for the initializer. Default is 0.02.

TYPE: float DEFAULT: 0.02

use_cache

Determines if caching is used. Default is True.

TYPE: bool DEFAULT: True

bos_token_id

The ID of the beginning-of-sentence token. Default is 1.

TYPE: int DEFAULT: 1

eos_token_id

The ID of the end-of-sentence token. Default is 2.

TYPE: int DEFAULT: 2

apply_residual_connection_post_layernorm

Determines if residual connection is applied after layer normalization. Default is False.

TYPE: bool DEFAULT: False

hidden_dropout

The dropout rate for hidden layers. Default is 0.0.

TYPE: float DEFAULT: 0.0

attention_dropout

The dropout rate for attention layers. Default is 0.0.

TYPE: float DEFAULT: 0.0

pretraining_tp

The pretraining TP value. Default is 1.

TYPE: int DEFAULT: 1

slow_but_exact

Determines if the method should prioritize accuracy over speed. Default is False.

TYPE: bool DEFAULT: False

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/configuration_bloom.py
def __init__(
    self,
    vocab_size=250880,
    hidden_size=64,
    n_layer=2,
    n_head=8,
    layer_norm_epsilon=1e-5,
    initializer_range=0.02,
    use_cache=True,
    bos_token_id=1,
    eos_token_id=2,
    apply_residual_connection_post_layernorm=False,
    hidden_dropout=0.0,
    attention_dropout=0.0,
    pretraining_tp=1,  # TP rank used when training with megatron
    slow_but_exact=False,
    **kwargs,
):
    """
    Initializes a new instance of the BloomConfig class.

    Args:
        self: The object itself.
        vocab_size (int): The size of the vocabulary. Default is 250880.
        hidden_size (int): The size of the hidden layer. Default is 64.
        n_layer (int): The number of layers. Default is 2.
        n_head (int): The number of attention heads. Default is 8.
        layer_norm_epsilon (float): The epsilon value for layer normalization. Default is 1e-05.
        initializer_range (float): The range for the initializer. Default is 0.02.
        use_cache (bool): Determines if caching is used. Default is True.
        bos_token_id (int): The ID of the beginning-of-sentence token. Default is 1.
        eos_token_id (int): The ID of the end-of-sentence token. Default is 2.
        apply_residual_connection_post_layernorm (bool): Determines if residual connection is applied after layer normalization. Default is False.
        hidden_dropout (float): The dropout rate for hidden layers. Default is 0.0.
        attention_dropout (float): The dropout rate for attention layers. Default is 0.0.
        pretraining_tp (int): The pretraining TP value. Default is 1.
        slow_but_exact (bool): Determines if the method should prioritize accuracy over speed. Default is False.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        None.
    """
    self.vocab_size = vocab_size
    # Backward compatibility with n_embed kwarg
    n_embed = kwargs.pop("n_embed", None)
    self.hidden_size = hidden_size if n_embed is None else n_embed
    self.n_layer = n_layer
    self.n_head = n_head
    self.layer_norm_epsilon = layer_norm_epsilon
    self.initializer_range = initializer_range
    self.use_cache = use_cache
    self.pretraining_tp = pretraining_tp
    self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
    self.hidden_dropout = hidden_dropout
    self.attention_dropout = attention_dropout

    self.bos_token_id = bos_token_id
    self.eos_token_id = eos_token_id
    self.slow_but_exact = slow_but_exact

    super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

mindnlp.transformers.models.bloom.modeling_bloom.BLOOM_PRETRAINED_MODEL_ARCHIVE_LIST = ['bigscience/bigscience-small-testing', 'bigscience/bloom-560m', 'bigscience/bloom-1b1', 'bigscience/bloom-1b7', 'bigscience/bloom-3b', 'bigscience/bloom-7b1', 'bigscience/bloom'] module-attribute
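
Any checkpoint name from the list above can be passed to `from_pretrained`. A minimal sketch, assuming the top-level `mindnlp.transformers` re-export and the standard `from_pretrained` API inherited from the base model and tokenizer classes (weights are downloaded on first use):

>>> from mindnlp.transformers import BloomForCausalLM, BloomTokenizerFast
...
>>> # Every name in the archive list works the same way
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")
>>> model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")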

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM

Bases: BloomPreTrainedModel

The BloomForCausalLM class is a subclass of BloomPreTrainedModel and represents a model for causal language modeling using the BLOOM architecture.

Causal language modeling is the task of predicting the next token in a sequence given the previous tokens. The BLOOM architecture is specifically designed for this task, utilizing a transformer model with an additional language modeling head.

The class has the following methods:

  • __init__: Initializes the BloomForCausalLM instance with a configuration object.
  • get_output_embeddings: Returns the language modeling head.
  • set_output_embeddings: Sets the language modeling head to the provided embeddings.
  • prepare_inputs_for_generation: Prepares the inputs for generation by removing the prefix length from the input sequence and converting the past key values to BLOOM cache format.
  • forward: Constructs the BLOOM model by passing the inputs through the transformer and language modeling head. Optionally computes the loss if labels are provided.
  • _reorder_cache: Reorders the past key values cache to match the beam indices during beam search or beam sampling.

Additionally, the class inherits all the properties and methods from the BloomPreTrainedModel class.

Note

The labels parameter in the forward method is for language modeling labels, and the position_ids parameter is deprecated and will be removed in the future.
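
A minimal usage sketch of the class (illustrative only: it builds a tiny, randomly initialized model rather than loading a checkpoint, and assumes the top-level `mindnlp.transformers` re-export):

>>> import mindspore
>>> from mindnlp.transformers import BloomConfig, BloomForCausalLM
...
>>> # Tiny randomly initialized model; sizes are illustrative, not a real checkpoint
>>> model = BloomForCausalLM(BloomConfig(vocab_size=1000, hidden_size=64, n_layer=2, n_head=8))
>>> input_ids = mindspore.Tensor([[11, 42, 7, 99]], mindspore.int32)
...
>>> # For causal LM training, labels are usually the input ids themselves;
>>> # the shift by one position happens inside `forward`.
>>> outputs = model(input_ids=input_ids, labels=input_ids)
>>> loss, logits = outputs.loss, outputs.logits   # logits: (batch_size, seq_length, vocab_size)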

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomForCausalLM(BloomPreTrainedModel):

    """
    The `BloomForCausalLM` class is a subclass of `BloomPreTrainedModel` and represents a model
    for causal language modeling using the BLOOM architecture.

    Causal language modeling is the task of predicting the next token in a sequence given the previous tokens.
    The BLOOM architecture is specifically designed for this task, utilizing a transformer model with an additional language modeling head.

    The class has the following methods:

    - `__init__`: Initializes the `BloomForCausalLM` instance with a configuration object.
    - `get_output_embeddings`: Returns the language modeling head.
    - `set_output_embeddings`: Sets the language modeling head to the provided embeddings.
    - `prepare_inputs_for_generation`: Prepares the inputs for generation by removing the prefix length from the
    input sequence and converting the past key values to BLOOM cache format.
    - `forward`: Constructs the BLOOM model by passing the inputs through the transformer and language modeling head.
    Optionally computes the loss if labels are provided.
    - `_reorder_cache`: Reorders the past key values cache to match the beam indices during beam search or beam sampling.

    Additionally, the class inherits all the properties and methods from the `BloomPreTrainedModel` class.

    Note:
        The `labels` parameter in the `forward` method is for language modeling labels, and the `position_ids`
        parameter is deprecated and will be removed in the future.

    """
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config: BloomConfig):
        """
        Initializes a new instance of the BloomForCausalLM class.

        Args:
            self: The current object instance.
            config (BloomConfig): The configuration object containing the model's hyperparameters and settings.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)
        self.transformer = BloomModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        Returns the output embeddings of the BloomForCausalLM model.

        Args:
            self (BloomForCausalLM): The instance of the BloomForCausalLM class.

        Returns:
            `nn.Linear`: The language modeling head (`lm_head`).

        Raises:
            None.
        """
        return self.lm_head

    def set_output_embeddings(self, new_embeddings: mindspore.Tensor):
        """
        Sets the output embeddings for the BloomForCausalLM model.

        Args:
            self (BloomForCausalLM): The instance of the BloomForCausalLM class.
            new_embeddings (mindspore.Tensor): The new embeddings to be set for the model's lm_head.

        Returns:
            None.

        Raises:
            None.
        """
        self.lm_head = new_embeddings

    def prepare_inputs_for_generation(
        self,
        input_ids: mindspore.Tensor,
        past_key_values: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        **kwargs,
    ) -> dict:
        """
        Prepare inputs for generation.

        It trims already-processed tokens from `input_ids` when a cache is present and returns a dictionary of model inputs for the next generation step.

        Args:
            self (BloomForCausalLM): The instance of the BloomForCausalLM class.
            input_ids (mindspore.Tensor): The input tensor containing the tokenized input sequence.
            past_key_values (Optional[mindspore.Tensor]): The optional tensor containing the cached key-value pairs from previous generation steps.
            attention_mask (Optional[mindspore.Tensor]): The optional tensor representing the attention mask for the input sequence.
            inputs_embeds (Optional[mindspore.Tensor]): The optional tensor containing the embedded input sequence.

        Returns:
            dict: A dictionary containing the model inputs.
                It may include either 'input_ids' or 'inputs_embeds' depending on the availability of
                'inputs_embeds' and 'past_key_values'.
                It also includes 'past_key_values', 'use_cache', and 'attention_mask' if provided.

        Raises:
            None

        """
        # only last tokens for input_ids if past is not None
        if past_key_values is not None:
            past_length = past_key_values[0][0].shape[2]

            # Some generation methods already pass only the last input ID
            if input_ids.shape[1] > past_length:
                remove_prefix_length = past_length
            else:
                # Default to old behavior: keep only final ID
                remove_prefix_length = input_ids.shape[1] - 1

            input_ids = input_ids[:, remove_prefix_length:]

            # the cache may be in the standard format (e.g. in contrastive search), convert to bloom's format if needed
            if past_key_values[0][0].shape[0] == input_ids.shape[0]:
                past_key_values = self._convert_to_bloom_cache(past_key_values)

        # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
        if inputs_embeds is not None and past_key_values is None:
            model_inputs = {"inputs_embeds": inputs_embeds}
        else:
            model_inputs = {"input_ids": input_ids}

        model_inputs.update(
            {
                "past_key_values": past_key_values,
                "use_cache": kwargs.get("use_cache"),
                "attention_mask": attention_mask,
            }
        )
        return model_inputs

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **deprecated_arguments,
    ) -> Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
                `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to
                `-100` are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
        """
        if deprecated_arguments.pop("position_ids", False) is not False:
            # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
            warnings.warn(
                "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
                " passing `position_ids`.",
                FutureWarning,
            )
        if len(deprecated_arguments) > 0:
            raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.transformer(
            input_ids,
            past_key_values=past_key_values,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]

        lm_logits = self.lm_head(hidden_states)

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = lm_logits[..., :-1, :]
            shift_labels = labels[..., 1:]
            batch_size, seq_length, vocab_size = shift_logits.shape
            # Flatten the tokens
            loss = F.cross_entropy(
                shift_logits.view(batch_size * seq_length, vocab_size), shift_labels.view(batch_size * seq_length)
            )

        if not return_dict:
            output = (lm_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return CausalLMOutputWithCrossAttentions(
            loss=loss,
            logits=lm_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )

    def _reorder_cache(
        self, past: Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...], beam_idx: mindspore.Tensor
    ) -> Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]:
        """
        This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
        [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
        beam_idx at every generation step.

        Output shares the same memory storage as `past`.
        """
        standardized_past = self._convert_to_standard_cache(past, batch_size=len(beam_idx))

        reordered_past = tuple(
            (
                layer_past[0].index_select(0, beam_idx),
                layer_past[1].index_select(0, beam_idx),
            )
            for layer_past in standardized_past
        )
        return self._convert_to_bloom_cache(reordered_past)

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM.__init__(config)

Initializes a new instance of the BloomForCausalLM class.

PARAMETER DESCRIPTION
self

The current object instance.

config

The configuration object containing the model's hyperparameters and settings.

TYPE: BloomConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def __init__(self, config: BloomConfig):
    """
    Initializes a new instance of the BloomForCausalLM class.

    Args:
        self: The current object instance.
        config (BloomConfig): The configuration object containing the model's hyperparameters and settings.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)
    self.transformer = BloomModel(config)
    self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM.forward(input_ids=None, past_key_values=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **deprecated_arguments)

PARAMETER DESCRIPTION
labels

Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None
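
For completeness, a sketch of the usual masking pattern: positions set to `-100` (for example padding) are excluded from the loss. The tiny model, the pad id `3`, and the token ids below are hypothetical and only serve to illustrate the convention; the import path assumes the top-level `mindnlp.transformers` re-export.

>>> import numpy as np
>>> import mindspore
>>> from mindnlp.transformers import BloomConfig, BloomForCausalLM
...
>>> model = BloomForCausalLM(BloomConfig(vocab_size=1000, hidden_size=64, n_layer=2, n_head=8))
>>> ids = np.array([[11, 42, 7, 3, 3]])              # 3 stands in for a pad token id
>>> labels = np.where(ids == 3, -100, ids)           # padded positions contribute nothing to the loss
>>> outputs = model(
...     input_ids=mindspore.Tensor(ids, mindspore.int32),
...     attention_mask=mindspore.Tensor((ids != 3).astype("int32")),
...     labels=mindspore.Tensor(labels, mindspore.int32),
... )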

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    **deprecated_arguments,
) -> Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
            `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to
            `-100` are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
    """
    if deprecated_arguments.pop("position_ids", False) is not False:
        # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
        warnings.warn(
            "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
            " passing `position_ids`.",
            FutureWarning,
        )
    if len(deprecated_arguments) > 0:
        raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    transformer_outputs = self.transformer(
        input_ids,
        past_key_values=past_key_values,
        attention_mask=attention_mask,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    hidden_states = transformer_outputs[0]

    lm_logits = self.lm_head(hidden_states)

    loss = None
    if labels is not None:
        # Shift so that tokens < n predict n
        shift_logits = lm_logits[..., :-1, :]
        shift_labels = labels[..., 1:]
        batch_size, seq_length, vocab_size = shift_logits.shape
        # Flatten the tokens
        loss = F.cross_entropy(
            shift_logits.view(batch_size * seq_length, vocab_size), shift_labels.view(batch_size * seq_length)
        )

    if not return_dict:
        output = (lm_logits,) + transformer_outputs[1:]
        return ((loss,) + output) if loss is not None else output

    return CausalLMOutputWithCrossAttentions(
        loss=loss,
        logits=lm_logits,
        past_key_values=transformer_outputs.past_key_values,
        hidden_states=transformer_outputs.hidden_states,
        attentions=transformer_outputs.attentions,
    )

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM.get_output_embeddings()

Returns the output embeddings of the BloomForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BloomForCausalLM class.

TYPE: BloomForCausalLM

RETURNS DESCRIPTION

The language modeling head (`lm_head`), an `nn.Linear` module.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def get_output_embeddings(self):
    """
    Returns the output embeddings of the BloomForCausalLM model.

    Args:
        self (BloomForCausalLM): The instance of the BloomForCausalLM class.

    Returns:
        `nn.Linear`: The language modeling head (`lm_head`).

    Raises:
        None.
    """
    return self.lm_head

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM.prepare_inputs_for_generation(input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs)

Prepare inputs for generation.

It trims already-processed tokens from input_ids when a cache is present and returns a dictionary containing the model inputs for the next generation step.

PARAMETER DESCRIPTION
self

The instance of the BloomForCausalLM class.

TYPE: BloomForCausalLM

input_ids

The input tensor containing the tokenized input sequence.

TYPE: Tensor

past_key_values

The optional tensor containing the cached key-value pairs from previous generation steps.

TYPE: Optional[Tensor] DEFAULT: None

attention_mask

The optional tensor representing the attention mask for the input sequence.

TYPE: Optional[Tensor] DEFAULT: None

inputs_embeds

The optional tensor containing the embedded input sequence.

TYPE: Optional[Tensor] DEFAULT: None

RETURNS DESCRIPTION
dict

A dictionary containing the model inputs. It may include either 'input_ids' or 'inputs_embeds' depending on the availability of 'inputs_embeds' and 'past_key_values'. It also includes 'past_key_values', 'use_cache', and 'attention_mask' if provided.

TYPE: dict
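
Illustrative sketch only: in normal use `generate()` calls this method internally. The tiny model below is randomly initialized, its sizes are arbitrary, and the import path assumes the top-level `mindnlp.transformers` re-export.

>>> import mindspore
>>> from mindnlp.transformers import BloomConfig, BloomForCausalLM
...
>>> model = BloomForCausalLM(BloomConfig(vocab_size=1000, hidden_size=64, n_layer=2, n_head=8))
>>> input_ids = mindspore.Tensor([[11, 42, 7]], mindspore.int32)
...
>>> # First step: no cache yet, so the full sequence is kept
>>> first = model.prepare_inputs_for_generation(input_ids, use_cache=True)
>>> sorted(first.keys())
['attention_mask', 'input_ids', 'past_key_values', 'use_cache']
>>> # On later steps, passing the returned `past_key_values` back trims the
>>> # already-processed prefix so that only the newest token ids remain.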

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def prepare_inputs_for_generation(
    self,
    input_ids: mindspore.Tensor,
    past_key_values: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    **kwargs,
) -> dict:
    """
    Prepare inputs for generation.

    It trims already-processed tokens from `input_ids` when a cache is present and returns a dictionary of model inputs for the next generation step.

    Args:
        self (BloomForCausalLM): The instance of the BloomForCausalLM class.
        input_ids (mindspore.Tensor): The input tensor containing the tokenized input sequence.
        past_key_values (Optional[mindspore.Tensor]): The optional tensor containing the cached key-value pairs from previous generation steps.
        attention_mask (Optional[mindspore.Tensor]): The optional tensor representing the attention mask for the input sequence.
        inputs_embeds (Optional[mindspore.Tensor]): The optional tensor containing the embedded input sequence.

    Returns:
        dict: A dictionary containing the model inputs.
            It may include either 'input_ids' or 'inputs_embeds' depending on the availability of
            'inputs_embeds' and 'past_key_values'.
            It also includes 'past_key_values', 'use_cache', and 'attention_mask' if provided.

    Raises:
        None

    """
    # only last tokens for input_ids if past is not None
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]

        # Some generation methods already pass only the last input ID
        if input_ids.shape[1] > past_length:
            remove_prefix_length = past_length
        else:
            # Default to old behavior: keep only final ID
            remove_prefix_length = input_ids.shape[1] - 1

        input_ids = input_ids[:, remove_prefix_length:]

        # the cache may be in the standard format (e.g. in contrastive search), convert to bloom's format if needed
        if past_key_values[0][0].shape[0] == input_ids.shape[0]:
            past_key_values = self._convert_to_bloom_cache(past_key_values)

    # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
    if inputs_embeds is not None and past_key_values is None:
        model_inputs = {"inputs_embeds": inputs_embeds}
    else:
        model_inputs = {"input_ids": input_ids}

    model_inputs.update(
        {
            "past_key_values": past_key_values,
            "use_cache": kwargs.get("use_cache"),
            "attention_mask": attention_mask,
        }
    )
    return model_inputs

mindnlp.transformers.models.bloom.modeling_bloom.BloomForCausalLM.set_output_embeddings(new_embeddings)

Sets the output embeddings for the BloomForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BloomForCausalLM class.

TYPE: BloomForCausalLM

new_embeddings

The new embeddings to be set for the model's lm_head.

TYPE: Tensor

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def set_output_embeddings(self, new_embeddings: mindspore.Tensor):
    """
    Sets the output embeddings for the BloomForCausalLM model.

    Args:
        self (BloomForCausalLM): The instance of the BloomForCausalLM class.
        new_embeddings (mindspore.Tensor): The new embeddings to be set for the model's lm_head.

    Returns:
        None.

    Raises:
        None.
    """
    self.lm_head = new_embeddings

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel

Bases: BloomPreTrainedModel

This class represents a custom implementation of a transformer model called BloomModel. It inherits from the BloomPreTrainedModel class and includes functionalities for building the model architecture, setting and getting input embeddings, and running the forward pass for inference or training.

ATTRIBUTE DESCRIPTION
embed_dim

The dimension of the word embeddings.

TYPE: int

num_heads

The number of attention heads in the model.

TYPE: int

word_embeddings

The word embeddings layer.

TYPE: Embedding

word_embeddings_layernorm

Layer normalization for word embeddings.

TYPE: LayerNorm

h

List of BloomBlocks representing the hidden layers of the model.

TYPE: ModuleList

ln_f

Layer normalization for the final hidden states.

TYPE: LayerNorm

gradient_checkpointing

Flag indicating whether gradient checkpointing is enabled.

TYPE: bool

METHOD DESCRIPTION
build_alibi_tensor

Builds an alibi tensor for the model.

get_input_embeddings

Retrieves the current input embeddings.

set_input_embeddings

Updates the input embeddings with new values.

forward

Constructs the model for inference or training, handling various input parameters and configurations.

Note

This class is designed for custom transformer-based models and may require specific configurations and input formats.
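
A minimal sketch of running the bare transformer to obtain hidden states (randomly initialized, illustrative sizes, and assuming the top-level `mindnlp.transformers` re-export):

>>> import mindspore
>>> from mindnlp.transformers import BloomConfig, BloomModel
...
>>> model = BloomModel(BloomConfig(vocab_size=1000, hidden_size=64, n_layer=2, n_head=8))
>>> input_ids = mindspore.Tensor([[11, 42, 7, 99]], mindspore.int32)
>>> outputs = model(input_ids, output_hidden_states=True)
>>> outputs.last_hidden_state.shape      # (batch_size, seq_length, hidden_size)
(1, 4, 64)
>>> len(outputs.hidden_states)           # n_layer + 1 entries
3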

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomModel(BloomPreTrainedModel):

    """
    This class represents a custom implementation of a transformer model called BloomModel.
    It inherits from the BloomPreTrainedModel class and includes functionalities for building the model architecture,
    setting and getting input embeddings, and running the forward pass for inference or training.

    Attributes:
        embed_dim (int): The dimension of the word embeddings.
        num_heads (int): The number of attention heads in the model.
        word_embeddings (nn.Embedding): The word embeddings layer.
        word_embeddings_layernorm (nn.LayerNorm): Layer normalization for word embeddings.
        h (nn.ModuleList): List of BloomBlocks representing the hidden layers of the model.
        ln_f (nn.LayerNorm): Layer normalization for the final hidden states.
        gradient_checkpointing (bool): Flag indicating whether gradient checkpointing is enabled.

    Methods:
        build_alibi_tensor: Builds an alibi tensor for the model.
        get_input_embeddings: Retrieves the current input embeddings.
        set_input_embeddings: Updates the input embeddings with new values.
        forward: Constructs the model for inference or training, handling various input parameters and configurations.

    Note:
        This class is designed for custom transformer-based models and may require specific configurations and input formats.
    """
    def __init__(self, config: BloomConfig):
        """
        Initialize the BloomModel with the provided configuration.

        Args:
            self (BloomModel): The instance of the BloomModel class.
            config (BloomConfig):
                An object containing configuration settings for the BloomModel.

                - BloomConfig should include the following attributes:
                - hidden_size (int): The size of the hidden layer.
                - n_head (int): The number of attention heads.
                - vocab_size (int): The size of the vocabulary.
                - num_hidden_layers (int): The number of hidden layers.
                - layer_norm_epsilon (float): The epsilon value for layer normalization.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)

        self.embed_dim = config.hidden_size
        self.num_heads = config.n_head

        # Embedding + LN Embedding
        self.word_embeddings = nn.Embedding(config.vocab_size, self.embed_dim)
        self.word_embeddings_layernorm = nn.LayerNorm([self.embed_dim], eps=config.layer_norm_epsilon)

        # Transformer blocks
        self.h = nn.ModuleList([BloomBlock(config) for _ in range(config.num_hidden_layers)])

        # Final Layer Norm
        self.ln_f = nn.LayerNorm([self.embed_dim], eps=config.layer_norm_epsilon)

        self.gradient_checkpointing = False

        # Initialize weights and apply final processing
        self.post_init()

    def build_alibi_tensor(self, attention_mask: mindspore.Tensor, num_heads: int, dtype) -> mindspore.Tensor:
        '''
        This method builds an alibi tensor based on the provided attention_mask, number of heads, and data type.

        Args:
            self (BloomModel): The instance of the BloomModel class.
            attention_mask (mindspore.Tensor): A tensor representing the attention mask.
            num_heads (int): The number of attention heads to use in building the alibi tensor.
            dtype: The data type of the tensor.

        Returns:
            mindspore.Tensor: A tensor representing the built alibi tensor.

        Raises:
            ValueError: If the attention_mask is not a valid mindspore.Tensor.
            TypeError: If the num_heads is not an integer or if the dtype is not a valid data type.
        '''
        return build_alibi_tensor(attention_mask, num_heads, dtype)

    def get_input_embeddings(self):
        """
        Returns the input embeddings of the BloomModel.

        Args:
            self: An instance of the BloomModel class.

        Returns:
            nn.Embedding: The word embeddings layer used for the input tokens.

        Raises:
            None.
        """
        return self.word_embeddings

    def set_input_embeddings(self, new_embeddings: mindspore.Tensor):
        """
        Sets the input embeddings for the BloomModel class.

        Args:
            self (BloomModel): The instance of the BloomModel class.
            new_embeddings (mindspore.Tensor): The new embeddings to set as input.
                It should be a tensor representing the word embeddings.

        Returns:
            None.

        Raises:
            None.

        This method sets the word_embeddings attribute of the BloomModel instance to the provided new_embeddings.
        The word_embeddings attribute is used as input for the model during forward propagation.
        """
        self.word_embeddings = new_embeddings

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **deprecated_arguments,
    ) -> Union[Tuple[mindspore.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]:
        """
        Constructs the BLOOM model based on the input parameters.

        Args:
            self (BloomModel): An instance of the BloomModel class.
            input_ids (Optional[mindspore.Tensor]):
                Input tensor of shape (batch_size, seq_length) containing the input tokens.
            past_key_values (Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]]):
                Tuple of length 'n_layer' where each tuple contains two tensors of shape
                (batch_size, num_heads, seq_length, hidden_size//num_heads) representing the past key and value
                respectively. If not provided, initialized with None.
            attention_mask (Optional[mindspore.Tensor]): Input tensor of shape (batch_size, seq_length)
                containing the attention mask values. If None, initialized with ones tensor of shape (batch_size,
                seq_length + past_key_values_length) where past_key_values_length is the length of past_key_values.
                Default: None.
            head_mask (Optional[mindspore.Tensor]): Input tensor of shape (n_layer, num_heads)
                containing the mask values for each head in each layer. If None, initialized with None. Default: None.
            inputs_embeds (Optional[mindspore.Tensor]): Input tensor of shape (batch_size, seq_length, hidden_size)
                containing the embedded input tokens. If None, initialized with the embeddings of input_ids.
                Default: None.
            use_cache (Optional[bool]): Whether to use past_key_values for faster decoding.
                If None, initialized with the value from the model config. Default: None.
            output_attentions (Optional[bool]): Whether to return the attentions tensors of all attention layers.
                If None, initialized with the value from the model config. Default: None.
            output_hidden_states (Optional[bool]): Whether to return the hidden states tensors of all layers.
                If None, initialized with the value from the model config. Default: None.
            return_dict (Optional[bool]): Whether to return a BaseModelOutputWithPastAndCrossAttentions object as
                the output instead of a tuple. If None, initialized with the value from the model config.
                Default: None.

        Returns:
            Union[Tuple[mindspore.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]:
                A tuple of the following tensors depending on the value of 'return_dict':

                - hidden_states (mindspore.Tensor): Output tensor of shape (batch_size, seq_length, hidden_size)
                containing the output features of the last layer.
                - presents (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing tuples of two tensors of
                shape (batch_size, num_heads, seq_length + past_key_values_length,
                hidden_size//num_heads) representing the present key and value respectively.
                - all_hidden_states (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer+1' containing the hidden
                states tensors of all layers including the input embeddings. Each tensor has shape
                (batch_size, seq_length, hidden_size).
                - all_self_attentions (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing the attention
                tensors of all attention layers. Each tensor has shape (batch_size, num_heads,
                seq_length + past_key_values_length, seq_length + past_key_values_length).

        Raises:
            ValueError: If both input_ids and inputs_embeds are provided or neither of them are provided,
                or if there are any unexpected arguments passed in.
            FutureWarning: If position_ids argument is provided (now deprecated), a warning is issued indicating that
                it has no functionality in BLOOM and will be removed in v5.0.0.
        """
        if deprecated_arguments.pop("position_ids", False) is not False:
            # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
            warnings.warn(
                "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
                " passing `position_ids`.",
                FutureWarning,
            )
        if len(deprecated_arguments) > 0:
            raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        if input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        if past_key_values is None:
            past_key_values = tuple([None] * len(self.h))

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape batch_size x num_heads x N x N
        # head_mask has shape n_layer x batch x num_heads x N x N
        head_mask = self.get_head_mask(head_mask, self.config.n_layer)

        if inputs_embeds is None:
            inputs_embeds = self.word_embeddings(input_ids)

        hidden_states = self.word_embeddings_layernorm(inputs_embeds)

        presents = () if use_cache else None
        all_self_attentions = () if output_attentions else None
        all_hidden_states = () if output_hidden_states else None

        # Compute alibi tensor: check build_alibi_tensor documentation
        seq_length_with_past = seq_length
        past_key_values_length = 0
        if past_key_values[0] is not None:
            past_key_values_length = past_key_values[0][0].shape[2]
            seq_length_with_past = seq_length_with_past + past_key_values_length
        if attention_mask is None:
            attention_mask = ops.ones(batch_size, seq_length_with_past)

        alibi = self.build_alibi_tensor(attention_mask, self.num_heads, dtype=hidden_states.dtype)

        causal_mask = _prepare_4d_causal_attention_mask(
            attention_mask,
            input_shape=(batch_size, seq_length),
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )
        causal_mask = causal_mask.bool()

        for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
            if output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)

            outputs = block(
                hidden_states,
                layer_past=layer_past,
                attention_mask=causal_mask,
                head_mask=head_mask[i],
                use_cache=use_cache,
                output_attentions=output_attentions,
                alibi=alibi,
            )

            hidden_states = outputs[0]
            if use_cache is True:
                presents = presents + (outputs[1],)

            if output_attentions:
                all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)

        # Add last hidden state
        hidden_states = self.ln_f(hidden_states)

        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        if not return_dict:
            return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)

        return BaseModelOutputWithPastAndCrossAttentions(
            last_hidden_state=hidden_states,
            past_key_values=presents,
            hidden_states=all_hidden_states,
            attentions=all_self_attentions,
        )

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel.__init__(config)

Initialize the BloomModel with the provided configuration.

PARAMETER DESCRIPTION
self

The instance of the BloomModel class.

TYPE: BloomModel

config

An object containing configuration settings for the BloomModel.

BloomConfig should include the following attributes:

  • hidden_size (int): The size of the hidden layer.
  • n_head (int): The number of attention heads.
  • vocab_size (int): The size of the vocabulary.
  • num_hidden_layers (int): The number of hidden layers.
  • layer_norm_epsilon (float): The epsilon value for layer normalization.

TYPE: BloomConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def __init__(self, config: BloomConfig):
    """
    Initialize the BloomModel with the provided configuration.

    Args:
        self (BloomModel): The instance of the BloomModel class.
        config (BloomConfig):
            An object containing configuration settings for the BloomModel.

            - BloomConfig should include the following attributes:
            - hidden_size (int): The size of the hidden layer.
            - n_head (int): The number of attention heads.
            - vocab_size (int): The size of the vocabulary.
            - num_hidden_layers (int): The number of hidden layers.
            - layer_norm_epsilon (float): The epsilon value for layer normalization.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)

    self.embed_dim = config.hidden_size
    self.num_heads = config.n_head

    # Embedding + LN Embedding
    self.word_embeddings = nn.Embedding(config.vocab_size, self.embed_dim)
    self.word_embeddings_layernorm = nn.LayerNorm([self.embed_dim], eps=config.layer_norm_epsilon)

    # Transformer blocks
    self.h = nn.ModuleList([BloomBlock(config) for _ in range(config.num_hidden_layers)])

    # Final Layer Norm
    self.ln_f = nn.LayerNorm([self.embed_dim], eps=config.layer_norm_epsilon)

    self.gradient_checkpointing = False

    # Initialize weights and apply final processing
    self.post_init()
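
As a quick sanity check, the snippet below (a minimal sketch, assuming BloomConfig and BloomModel are importable from mindnlp.transformers and that the default configuration values apply) instantiates the model and inspects the submodules created in __init__:

>>> from mindnlp.transformers import BloomConfig, BloomModel
...
>>> config = BloomConfig()  # defaults: hidden_size=64, n_layer=2, n_head=8
>>> model = BloomModel(config)
...
>>> # attributes set up in __init__
>>> len(model.h), model.embed_dim, model.num_heads
(2, 64, 8)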

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel.build_alibi_tensor(attention_mask, num_heads, dtype)

This method builds an alibi tensor based on the provided attention_mask, number of heads, and data type.

PARAMETER DESCRIPTION
self

The instance of the BloomModel class.

TYPE: BloomModel

attention_mask

A tensor representing the attention mask.

TYPE: Tensor

num_heads

The number of attention heads to use in building the alibi tensor.

TYPE: int

dtype

The data type of the tensor.

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: A tensor representing the built alibi tensor.

RAISES DESCRIPTION
ValueError

If the attention_mask is not a valid mindspore.Tensor.

TypeError

If the num_heads is not an integer or if the dtype is not a valid data type.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def build_alibi_tensor(self, attention_mask: mindspore.Tensor, num_heads: int, dtype) -> mindspore.Tensor:
    '''
    This method builds an alibi tensor based on the provided attention_mask, number of heads, and data type.

    Args:
        self (BloomModel): The instance of the BloomModel class.
        attention_mask (mindspore.Tensor): A tensor representing the attention mask.
        num_heads (int): The number of attention heads to use in building the alibi tensor.
        dtype: The data type of the tensor.

    Returns:
        mindspore.Tensor: A tensor representing the built alibi tensor.

    Raises:
        ValueError: If the attention_mask is not a valid mindspore.Tensor.
        TypeError: If the num_heads is not an integer or if the dtype is not a valid data type.
    '''
    return build_alibi_tensor(attention_mask, num_heads, dtype)
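
For reference, a minimal sketch of the returned shape (assuming the model instance created above and a float32 attention mask; the ALiBi biases are flattened to (batch_size * num_heads, 1, seq_length) for use inside the attention layers):

>>> import mindspore
>>> attention_mask = mindspore.ops.ones((2, 5), mindspore.float32)  # batch_size=2, seq_length=5
>>> alibi = model.build_alibi_tensor(attention_mask, num_heads=8, dtype=mindspore.float32)
>>> alibi.shape
(16, 1, 5)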

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel.forward(input_ids=None, past_key_values=None, attention_mask=None, head_mask=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **deprecated_arguments)

Constructs the BLOOM model based on the input parameters.

PARAMETER DESCRIPTION
self

An instance of the BloomModel class.

TYPE: BloomModel

input_ids

Input tensor of shape (batch_size, seq_length) containing the input tokens.

TYPE: Optional[Tensor] DEFAULT: None

past_key_values

Tuple of length 'n_layer' where each tuple contains two tensors of shape (batch_size, num_heads, seq_length, hidden_size//num_heads) representing the past key and value respectively. If not provided, initialized with None.

TYPE: Optional[Tuple[Tuple[Tensor, Tensor], ...]] DEFAULT: None

attention_mask

Input tensor of shape (batch_size, seq_length) containing the attention mask values. If None, initialized with ones tensor of shape (batch_size, seq_length + past_key_values_length) where past_key_values_length is the length of past_key_values. Default: None.

TYPE: Optional[Tensor] DEFAULT: None

head_mask

Input tensor of shape (n_layer, num_heads) containing the mask values for each head in each layer. If None, no head mask is applied. Default: None.

TYPE: Optional[Tensor] DEFAULT: None

inputs_embeds

Input tensor of shape (batch_size, seq_length, hidden_size) containing the embedded input tokens. If None, initialized with the embeddings of input_ids. Default: None.

TYPE: Optional[Tensor] DEFAULT: None

use_cache

Whether to use past_key_values for faster decoding. If None, initialized with the value from the model config. Default: None.

TYPE: Optional[bool] DEFAULT: None

output_attentions

Whether to return the attentions tensors of all attention layers. If None, initialized with the value from the model config. Default: None.

TYPE: Optional[bool] DEFAULT: None

output_hidden_states

Whether to return the hidden states tensors of all layers. If None, initialized with the value from the model config. Default: None.

TYPE: Optional[bool] DEFAULT: None

return_dict

Whether to return a BaseModelOutputWithPastAndCrossAttentions object as the output instead of a tuple. If None, initialized with the value from the model config. Default: None.

TYPE: Optional[bool] DEFAULT: None

RETURNS DESCRIPTION
Union[Tuple[Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]

Union[Tuple[mindspore.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]: Depending on the value of 'return_dict', either a BaseModelOutputWithPastAndCrossAttentions or a tuple containing the following tensors:

  • hidden_states (mindspore.Tensor): Output tensor of shape (batch_size, seq_length, hidden_size) containing the output features of the last layer.
  • presents (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing tuples of two tensors of shape (batch_size, num_heads, seq_length + past_key_values_length, hidden_size//num_heads) representing the present key and value respectively.
  • all_hidden_states (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer+1' containing the hidden states tensors of all layers including the input embeddings. Each tensor has shape (batch_size, seq_length, hidden_size).
  • all_self_attentions (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing the attention tensors of all attention layers. Each tensor has shape (batch_size, num_heads, seq_length + past_key_values_length, seq_length + past_key_values_length).
RAISES DESCRIPTION
ValueError

If both input_ids and inputs_embeds are provided, if neither is provided, or if any unexpected arguments are passed in.

FutureWarning

If position_ids argument is provided (now deprecated), a warning is issued indicating that it has no functionality in BLOOM and will be removed in v5.0.0.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    **deprecated_arguments,
) -> Union[Tuple[mindspore.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]:
    """
    Constructs the BLOOM model based on the input parameters.

    Args:
        self (BloomModel): An instance of the BloomModel class.
        input_ids (Optional[mindspore.Tensor]):
            Input tensor of shape (batch_size, seq_length) containing the input tokens.
        past_key_values (Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]]):
            Tuple of length 'n_layer' where each tuple contains two tensors of shape
            (batch_size, num_heads, seq_length, hidden_size//num_heads) representing the past key and value
            respectively. If not provided, initialized with None.
        attention_mask (Optional[mindspore.Tensor]): Input tensor of shape (batch_size, seq_length)
            containing the attention mask values. If None, initialized with ones tensor of shape (batch_size,
            seq_length + past_key_values_length) where past_key_values_length is the length of past_key_values.
            Default: None.
        head_mask (Optional[mindspore.Tensor]): Input tensor of shape (n_layer, num_heads)
            containing the mask values for each head in each layer. If None, no head mask is applied. Default: None.
        inputs_embeds (Optional[mindspore.Tensor]): Input tensor of shape (batch_size, seq_length, hidden_size)
            containing the embedded input tokens. If None, initialized with the embeddings of input_ids.
            Default: None.
        use_cache (Optional[bool]): Whether to use past_key_values for faster decoding.
            If None, initialized with the value from the model config. Default: None.
        output_attentions (Optional[bool]): Whether to return the attentions tensors of all attention layers.
            If None, initialized with the value from the model config. Default: None.
        output_hidden_states (Optional[bool]): Whether to return the hidden states tensors of all layers.
            If None, initialized with the value from the model config. Default: None.
        return_dict (Optional[bool]): Whether to return a BaseModelOutputWithPastAndCrossAttentions object as
            the output instead of a tuple. If None, initialized with the value from the model config.
            Default: None.

    Returns:
        Union[Tuple[mindspore.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]:
            A tuple of the following tensors depending on the value of 'return_dict':

            - hidden_states (mindspore.Tensor): Output tensor of shape (batch_size, seq_length, hidden_size)
            containing the output features of the last layer.
            - presents (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing tuples of two tensors of
            shape (batch_size, num_heads, seq_length + past_key_values_length,
            hidden_size//num_heads) representing the present key and value respectively.
            - all_hidden_states (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer+1' containing the hidden
            states tensors of all layers including the input embeddings. Each tensor has shape
            (batch_size, seq_length, hidden_size).
            - all_self_attentions (Tuple[mindspore.Tensor, ...]): Tuple of length 'n_layer' containing the attention
            tensors of all attention layers. Each tensor has shape (batch_size, num_heads,
            seq_length + past_key_values_length, seq_length + past_key_values_length).

    Raises:
        ValueError: If both input_ids and inputs_embeds are provided or neither of them are provided,
            or if there are any unexpected arguments passed in.
        FutureWarning: If position_ids argument is provided (now deprecated), a warning is issued indicating that
            it has no functionality in BLOOM and will be removed in v5.0.0.
    """
    if deprecated_arguments.pop("position_ids", False) is not False:
        # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
        warnings.warn(
            "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
            " passing `position_ids`.",
            FutureWarning,
        )
    if len(deprecated_arguments) > 0:
        raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    use_cache = use_cache if use_cache is not None else self.config.use_cache
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if input_ids is not None:
        batch_size, seq_length = input_ids.shape
    elif inputs_embeds is not None:
        batch_size, seq_length, _ = inputs_embeds.shape
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")

    if past_key_values is None:
        past_key_values = tuple([None] * len(self.h))

    # Prepare head mask if needed
    # 1.0 in head_mask indicate we keep the head
    # attention_probs has shape batch_size x num_heads x N x N
    # head_mask has shape n_layer x batch x num_heads x N x N
    head_mask = self.get_head_mask(head_mask, self.config.n_layer)

    if inputs_embeds is None:
        inputs_embeds = self.word_embeddings(input_ids)

    hidden_states = self.word_embeddings_layernorm(inputs_embeds)

    presents = () if use_cache else None
    all_self_attentions = () if output_attentions else None
    all_hidden_states = () if output_hidden_states else None

    # Compute alibi tensor: check build_alibi_tensor documentation
    seq_length_with_past = seq_length
    past_key_values_length = 0
    if past_key_values[0] is not None:
        past_key_values_length = past_key_values[0][0].shape[2]
        seq_length_with_past = seq_length_with_past + past_key_values_length
    if attention_mask is None:
        attention_mask = ops.ones(batch_size, seq_length_with_past)

    alibi = self.build_alibi_tensor(attention_mask, self.num_heads, dtype=hidden_states.dtype)

    causal_mask = _prepare_4d_causal_attention_mask(
        attention_mask,
        input_shape=(batch_size, seq_length),
        inputs_embeds=inputs_embeds,
        past_key_values_length=past_key_values_length,
    )
    causal_mask = causal_mask.bool()

    for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        outputs = block(
            hidden_states,
            layer_past=layer_past,
            attention_mask=causal_mask,
            head_mask=head_mask[i],
            use_cache=use_cache,
            output_attentions=output_attentions,
            alibi=alibi,
        )

        hidden_states = outputs[0]
        if use_cache is True:
            presents = presents + (outputs[1],)

        if output_attentions:
            all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)

    # Add last hidden state
    hidden_states = self.ln_f(hidden_states)

    if output_hidden_states:
        all_hidden_states = all_hidden_states + (hidden_states,)

    if not return_dict:
        return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)

    return BaseModelOutputWithPastAndCrossAttentions(
        last_hidden_state=hidden_states,
        past_key_values=presents,
        hidden_states=all_hidden_states,
        attentions=all_self_attentions,
    )
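
A minimal usage sketch for the forward pass (assuming the tiny default configuration from above and that outputs are returned as a BaseModelOutputWithPastAndCrossAttentions when return_dict is left at its default):

>>> import mindspore
>>> input_ids = mindspore.Tensor([[5, 6, 7, 8]], mindspore.int64)
>>> outputs = model(input_ids, use_cache=True, output_hidden_states=True)
>>> outputs.last_hidden_state.shape
(1, 4, 64)
>>> len(outputs.past_key_values)   # one (key, value) pair per layer
2
>>> len(outputs.hidden_states)     # embedding output plus one entry per layer
3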

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel.get_input_embeddings()

Returns the input embeddings of the BloomModel.

PARAMETER DESCRIPTION
self

An instance of the BloomModel class.

RETURNS DESCRIPTION

The word embedding module used to embed the input tokens.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def get_input_embeddings(self):
    """
    Returns the input embeddings of the BloomModel.

    Args:
        self: An instance of the BloomModel class.

    Returns:
        Returns the word embeddings of the input tokens.

    Raises:
        None.
    """
    return self.word_embeddings

mindnlp.transformers.models.bloom.modeling_bloom.BloomModel.set_input_embeddings(new_embeddings)

Sets the input embeddings for the BloomModel class.

PARAMETER DESCRIPTION
self

The instance of the BloomModel class.

TYPE: BloomModel

new_embeddings

The new embeddings to set as input. It should be a tensor representing the word embeddings.

TYPE: Tensor

RETURNS DESCRIPTION

None.

This method sets the word_embeddings attribute of the BloomModel instance to the provided new_embeddings. The word_embeddings attribute is used as input for the model during forward propagation.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def set_input_embeddings(self, new_embeddings: mindspore.Tensor):
    """
    Sets the input embeddings for the BloomModel class.

    Args:
        self (BloomModel): The instance of the BloomModel class.
        new_embeddings (mindspore.Tensor): The new embeddings to set as input.
            It should be a tensor representing the word embeddings.

    Returns:
        None.

    Raises:
        None.

    This method sets the word_embeddings attribute of the BloomModel instance to the provided new_embeddings.
    The word_embeddings attribute is used as input for the model during forward propagation.
    """
    self.word_embeddings = new_embeddings
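
For example (a sketch assuming the model instance above; any replacement only needs the same interface as the original nn.Embedding):

>>> emb = model.get_input_embeddings()
>>> emb.weight.shape                 # (vocab_size, hidden_size)
(250880, 64)
>>> model.set_input_embeddings(emb)  # re-attach the same (or a compatible) embedding module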

mindnlp.transformers.models.bloom.modeling_bloom.BloomPreTrainedModel

Bases: PreTrainedModel

BloomPreTrainedModel is a Python class that extends PreTrainedModel. It provides weight initialization based on the type of neural network cell, together with helpers for converting the key/value cache between the standard layout and the layout expected by the Bloom model. Use this class as the base for pre-trained Bloom models built on the MindSpore framework.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomPreTrainedModel(PreTrainedModel):

    """
    BloomPreTrainedModel is a Python class that extends PreTrainedModel.
    It provides weight initialization based on the type of neural network cell, together with helpers for
    converting the key/value cache between the standard layout and the layout expected by the Bloom model.
    Use this class as the base for pre-trained Bloom models built on the MindSpore framework.
    """
    config_class = BloomConfig
    base_model_prefix = "transformer"
    supports_gradient_checkpointing = False
    _no_split_modules = ["BloomBlock"]

    def _init_weights(self, cell):
        """Initialize the weights"""
        if isinstance(cell, nn.Linear):
            # Slightly different from the TF version which uses truncated_normal for initialization
            # cf https://github.com/pytorch/pytorch/pull/5617
            cell.weight.set_data(initializer(Normal(self.config.initializer_range),
                                                    cell.weight.shape, cell.weight.dtype))
            if cell.bias is not None:
                cell.bias.set_data(initializer('zeros', cell.bias.shape, cell.bias.dtype))
        elif isinstance(cell, nn.Embedding):
            weight = initializer(Normal(self.config.initializer_range),
                                                 cell.weight.shape,
                                                 cell.weight.dtype)
            if cell.padding_idx is not None:
                weight[cell.padding_idx] = 0
            cell.weight.set_data(weight)
        elif isinstance(cell, nn.LayerNorm):
            cell.weight.set_data(initializer('ones', cell.weight.shape, cell.weight.dtype))
            cell.bias.set_data(initializer('zeros', cell.bias.shape, cell.bias.dtype))

    @staticmethod
    def _convert_to_standard_cache(
        past_key_value: Tuple[Tuple[mindspore.Tensor, mindspore.Tensor]], batch_size: int
    ) -> Tuple[Tuple[mindspore.Tensor, mindspore.Tensor]]:
        """
        Standardizes the format of the cache so as to match most implementations, i.e. to tuple(tuple([batch_size,
        num_heads, ...]))
        """
        batch_size_times_num_heads, head_dim, seq_length = past_key_value[0][0].shape
        num_heads = batch_size_times_num_heads // batch_size
        # key: [batch_size * num_heads, head_dim, seq_length] -> [batch_size, num_heads, head_dim, seq_length]
        # value: [batch_size * num_heads, seq_length, head_dim] -> [batch_size, num_heads, seq_length, head_dim]
        return tuple(
            (
                layer_past[0].view(batch_size, num_heads, head_dim, seq_length),
                layer_past[1].view(batch_size, num_heads, seq_length, head_dim),
            )
            for layer_past in past_key_value
        )

    @staticmethod
    def _convert_to_bloom_cache(
        past_key_value: Tuple[Tuple[mindspore.Tensor, mindspore.Tensor]],
    ) -> Tuple[Tuple[mindspore.Tensor, mindspore.Tensor]]:
        """
        Converts the cache to the format expected by Bloom, i.e. to tuple(tuple([batch_size * num_heads, ...]))
        """
        batch_size, num_heads, head_dim, seq_length = past_key_value[0][0].shape
        batch_size_times_num_heads = batch_size * num_heads
        # key:  [batch_size, num_heads, head_dim, seq_length] -> [batch_size * num_heads, head_dim, seq_length]
        # value: [batch_size, num_heads, seq_length, head_dim] -> [batch_size * num_heads, seq_length, head_dim]
        return tuple(
            (
                layer_past[0].view(batch_size_times_num_heads, head_dim, seq_length),
                layer_past[1].view(batch_size_times_num_heads, seq_length, head_dim),
            )
            for layer_past in past_key_value
        )
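
To illustrate the two cache layouts, here is a small sketch with dummy tensors (assuming mindspore is available; batch_size=2, num_heads=8, head_dim=8, seq_length=5):

>>> import mindspore
>>> key = mindspore.ops.zeros((2 * 8, 8, 5))    # Bloom layout: [batch_size * num_heads, head_dim, seq_length]
>>> value = mindspore.ops.zeros((2 * 8, 5, 8))  # Bloom layout: [batch_size * num_heads, seq_length, head_dim]
>>> standard = BloomPreTrainedModel._convert_to_standard_cache(((key, value),), batch_size=2)
>>> standard[0][0].shape, standard[0][1].shape
((2, 8, 8, 5), (2, 8, 5, 8))
>>> bloom = BloomPreTrainedModel._convert_to_bloom_cache(standard)
>>> bloom[0][0].shape, bloom[0][1].shape
((16, 8, 5), (16, 5, 8))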

mindnlp.transformers.models.bloom.modeling_bloom.BloomForSequenceClassification

Bases: BloomPreTrainedModel

The 'BloomForSequenceClassification' class represents a fine-tuned sequence classification model based on the Bloom architecture. This class inherits from the 'BloomPreTrainedModel' and includes methods for model initialization and inference. It provides functionality for computing sequence classification/regression loss and handling batch processing. The class also supports different problem types such as regression, single-label classification, and multi-label classification.

The class includes the 'forward' method for generating model outputs and computing loss based on the input data. It also handles deprecated arguments and provides warnings for functionality that will be removed in future versions. Additionally, the method supports the use of padding tokens and provides appropriate error handling for different scenarios.

The 'BloomForSequenceClassification' class is designed to be used for sequence classification tasks and provides flexibility in handling various types of input data and problem types.

For detailed information on the methods and parameters of this class, please refer to the method docstrings and the class code.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomForSequenceClassification(BloomPreTrainedModel):

    """
    The 'BloomForSequenceClassification' class represents a fine-tuned sequence classification model based on the Bloom
    architecture. This class inherits from the 'BloomPreTrainedModel' and includes methods
    for model initialization and inference. It provides functionality for computing sequence classification/regression
    loss and handling batch processing. The class also supports different problem types such as
    regression, single-label classification, and multi-label classification.

    The class includes the 'forward' method for generating model outputs and computing loss based on the input data.
    It also handles deprecated arguments and provides warnings for functionality that will be
    removed in future versions. Additionally, the method supports the use of padding tokens and provides appropriate
    error handling for different scenarios.

    The 'BloomForSequenceClassification' class is designed to be used for sequence classification tasks and provides
    flexibility in handling various types of input data and problem types.

    For detailed information on the methods and parameters of this class, please refer to the method docstrings and the class code.
    """
    def __init__(self, config: BloomConfig):
        """
        Initializes an instance of the BloomForSequenceClassification class with the provided configuration.

        Args:
            self: The current instance of the class.
            config (BloomConfig): The configuration object for the BloomForSequenceClassification model.
                It contains various settings and hyperparameters.

                - num_labels (int): The number of labels for the classification task.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)
        self.num_labels = config.num_labels
        self.transformer = BloomModel(config)
        self.score = nn.Linear(config.hidden_size, config.num_labels, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **deprecated_arguments,
    ) -> Union[Tuple[mindspore.Tensor], SequenceClassifierOutputWithPast]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
                config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
                `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        if deprecated_arguments.pop("position_ids", False) is not False:
            # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
            warnings.warn(
                "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
                " passing `position_ids`.",
                FutureWarning,
            )
        if len(deprecated_arguments) > 0:
            raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.transformer(
            input_ids,
            past_key_values=past_key_values,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = transformer_outputs[0]
        logits = self.score(hidden_states)

        if input_ids is not None:
            batch_size = input_ids.shape[0]
        else:
            batch_size = inputs_embeds.shape[0]

        if self.config.pad_token_id is None and batch_size != 1:
            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
                sequence_lengths = ops.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
                sequence_lengths = sequence_lengths % input_ids.shape[-1]
            else:
                sequence_lengths = -1
                logger.warning(
                    f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
                    "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
                )

        pooled_logits = logits[ops.arange(batch_size), sequence_lengths]

        loss = None
        if labels is not None:
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and labels.dtype in (mindspore.int64, mindspore.int32):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                if self.num_labels == 1:
                    loss = ops.mse_loss(pooled_logits.squeeze(), labels.squeeze())
                else:
                    loss = ops.mse_loss(pooled_logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss = F.cross_entropy(pooled_logits, labels)
            elif self.config.problem_type == "multi_label_classification":
                loss = ops.binary_cross_entropy_with_logits(pooled_logits, labels)
        if not return_dict:
            output = (pooled_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutputWithPast(
            loss=loss,
            logits=pooled_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )

mindnlp.transformers.models.bloom.modeling_bloom.BloomForSequenceClassification.__init__(config)

Initializes an instance of the BloomForSequenceClassification class with the provided configuration.

PARAMETER DESCRIPTION
self

The current instance of the class.

config

The configuration object for the BloomForSequenceClassification model. It contains various settings and hyperparameters.

  • num_labels (int): The number of labels for the classification task.

TYPE: BloomConfig

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def __init__(self, config: BloomConfig):
    """
    Initializes an instance of the BloomForSequenceClassification class with the provided configuration.

    Args:
        self: The current instance of the class.
        config (BloomConfig): The configuration object for the BloomForSequenceClassification model.
            It contains various settings and hyperparameters.

            - num_labels (int): The number of labels for the classification task.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)
    self.num_labels = config.num_labels
    self.transformer = BloomModel(config)
    self.score = nn.Linear(config.hidden_size, config.num_labels, bias=False)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.bloom.modeling_bloom.BloomForSequenceClassification.forward(input_ids=None, past_key_values=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **deprecated_arguments)

PARAMETER DESCRIPTION
labels

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    **deprecated_arguments,
) -> Union[Tuple[mindspore.Tensor], SequenceClassifierOutputWithPast]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
    """
    if deprecated_arguments.pop("position_ids", False) is not False:
        # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
        warnings.warn(
            "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
            " passing `position_ids`.",
            FutureWarning,
        )
    if len(deprecated_arguments) > 0:
        raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    transformer_outputs = self.transformer(
        input_ids,
        past_key_values=past_key_values,
        attention_mask=attention_mask,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    hidden_states = transformer_outputs[0]
    logits = self.score(hidden_states)

    if input_ids is not None:
        batch_size = input_ids.shape[0]
    else:
        batch_size = inputs_embeds.shape[0]

    if self.config.pad_token_id is None and batch_size != 1:
        raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
    if self.config.pad_token_id is None:
        sequence_lengths = -1
    else:
        if input_ids is not None:
            # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
            sequence_lengths = ops.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
            sequence_lengths = sequence_lengths % input_ids.shape[-1]
        else:
            sequence_lengths = -1
            logger.warning(
                f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
                "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
            )

    pooled_logits = logits[ops.arange(batch_size), sequence_lengths]

    loss = None
    if labels is not None:
        if self.config.problem_type is None:
            if self.num_labels == 1:
                self.config.problem_type = "regression"
            elif self.num_labels > 1 and labels.dtype in (mindspore.int64, mindspore.int32):
                self.config.problem_type = "single_label_classification"
            else:
                self.config.problem_type = "multi_label_classification"

        if self.config.problem_type == "regression":
            if self.num_labels == 1:
                loss = ops.mse_loss(pooled_logits.squeeze(), labels.squeeze())
            else:
                loss = ops.mse_loss(pooled_logits, labels)
        elif self.config.problem_type == "single_label_classification":
            loss = F.cross_entropy(pooled_logits, labels)
        elif self.config.problem_type == "multi_label_classification":
            loss = ops.binary_cross_entropy_with_logits(pooled_logits, labels)
    if not return_dict:
        output = (pooled_logits,) + transformer_outputs[1:]
        return ((loss,) + output) if loss is not None else output

    return SequenceClassifierOutputWithPast(
        loss=loss,
        logits=pooled_logits,
        past_key_values=transformer_outputs.past_key_values,
        hidden_states=transformer_outputs.hidden_states,
        attentions=transformer_outputs.attentions,
    )
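
A minimal usage sketch (assuming a tiny configuration; pad_token_id must be set so the last non-padding token of each sequence can be pooled, and num_labels sets the size of the classification head):

>>> import mindspore
>>> from mindnlp.transformers.models.bloom.configuration_bloom import BloomConfig
>>> from mindnlp.transformers.models.bloom.modeling_bloom import BloomForSequenceClassification
>>> config = BloomConfig(num_labels=3, pad_token_id=3)
>>> model = BloomForSequenceClassification(config)
>>> input_ids = mindspore.Tensor([[5, 6, 7, 3], [8, 9, 3, 3]], mindspore.int64)
>>> labels = mindspore.Tensor([0, 2], mindspore.int64)
>>> out = model(input_ids, labels=labels)
>>> out.logits.shape   # (batch_size, num_labels), pooled at the last non-padding token
(2, 3)
>>> out.loss is not None
True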

mindnlp.transformers.models.bloom.modeling_bloom.BloomForTokenClassification

Bases: BloomPreTrainedModel

The BloomForTokenClassification class is a Python class that represents a model for token classification using the BLOOM architecture. This class inherits from the BloomPreTrainedModel class.

Class Attributes:

  • num_labels: The number of labels for the token classification task.
  • transformer: An instance of the BloomModel class that represents the BLOOM transformer model.
  • dropout: An instance of the Dropout class from the nn module for applying dropout regularization.
  • classifier: An instance of the Linear class from the nn module for the final classification layer.
METHOD DESCRIPTION
`__init__`

Initializes a new instance of the BloomForTokenClassification class. It takes a BloomConfig object as input and sets the necessary attributes.

`forward`

Constructs the BLOOM model for token classification. It takes various input tensors and arguments and returns the model output.

Parameters:

  • input_ids (Optional): Tensor containing the input token IDs.
  • past_key_values (Optional): Tuple of past key-value tensors.
  • attention_mask (Optional): Tensor containing the attention mask.
  • head_mask (Optional): Tensor containing the head mask.
  • inputs_embeds (Optional): Tensor containing the input embeddings.
  • labels (Optional): Tensor containing the labels for computing the loss.
  • use_cache (Optional): Boolean indicating whether to use cache.
  • output_attentions (Optional): Boolean indicating whether to output attentions.
  • output_hidden_states (Optional): Boolean indicating whether to output hidden states.
  • return_dict (Optional): Boolean indicating whether to return the output as a TokenClassifierOutput object.
  • **deprecated_arguments: Deprecated arguments that will be ignored.

Returns:

  • If return_dict is False, returns a tuple containing the logits and other model outputs.
  • If return_dict is True, returns a TokenClassifierOutput object containing the loss, logits, hidden states, and attentions.
Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomForTokenClassification(BloomPreTrainedModel):

    """
    The `BloomForTokenClassification` class is a Python class that represents a model for token classification using
    the BLOOM architecture. This class inherits from the `BloomPreTrainedModel` class.

    Class Attributes:

    - `num_labels`: The number of labels for the token classification task.
    - `transformer`: An instance of the `BloomModel` class that represents the BLOOM transformer model.
    - `dropout`: An instance of the `Dropout` class from the `nn` module for applying dropout regularization.
    - `classifier`: An instance of the `Linear` class from the `nn` module for the final classification layer.

    Methods:
       `__init__`: Initializes a new instance of the `BloomForTokenClassification` class.
            It takes a `BloomConfig` object as input and sets the necessary attributes.
       `forward`: Constructs the BLOOM model for token classification.
            It takes various input tensors and arguments and returns the model output.

            Parameters:

            - `input_ids` (Optional): Tensor containing the input token IDs.
            - `past_key_values` (Optional): Tuple of past key-value tensors.
            - `attention_mask` (Optional): Tensor containing the attention mask.
            - `head_mask` (Optional): Tensor containing the head mask.
            - `inputs_embeds` (Optional): Tensor containing the input embeddings.
            - `labels` (Optional): Tensor containing the labels for computing the loss.
            - `use_cache` (Optional): Boolean indicating whether to use cache.
            - `output_attentions` (Optional): Boolean indicating whether to output attentions.
            - `output_hidden_states` (Optional): Boolean indicating whether to output hidden states.
            - `return_dict` (Optional): Boolean indicating whether to return the output as a `TokenClassifierOutput` object.
            - `**deprecated_arguments`: Deprecated arguments that will be ignored.

            Returns:

            - If `return_dict` is False, returns a tuple containing the logits and other model outputs.
            - If `return_dict` is True, returns a `TokenClassifierOutput` object containing the loss, logits, hidden states, and attentions.
    """
    def __init__(self, config: BloomConfig):
        """
        Initializes an instance of BloomForTokenClassification.

        Args:
            self: The instance of the class.
            config (BloomConfig): The configuration object containing settings for the model.
                It must be an instance of BloomConfig class.
                This parameter is required.

        Returns:
            None.

        Raises:
            TypeError: If the config parameter is not an instance of BloomConfig.
            AttributeError: If the config object does not contain the required attributes.
        """
        super().__init__(config)
        self.num_labels = config.num_labels

        self.transformer = BloomModel(config)
        if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
            classifier_dropout = config.classifier_dropout
        elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
            classifier_dropout = config.hidden_dropout
        else:
            classifier_dropout = 0.1
        self.dropout = nn.Dropout(p=classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **deprecated_arguments,
    ) -> Union[Tuple[mindspore.Tensor], TokenClassifierOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the token classification loss. Indices should be in
                `[0, ..., config.num_labels - 1]`.
        """
        if deprecated_arguments.pop("position_ids", False) is not False:
            # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
            warnings.warn(
                "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
                " passing `position_ids`.",
                FutureWarning,
            )
        if len(deprecated_arguments) > 0:
            raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.transformer(
            input_ids,
            past_key_values=past_key_values,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = transformer_outputs[0]
        hidden_states = self.dropout(hidden_states)
        logits = self.classifier(hidden_states)

        loss = None
        if labels is not None:
            batch_size, seq_length = labels.shape
            loss = F.cross_entropy(
                logits.view(batch_size * seq_length, self.num_labels), labels.view(batch_size * seq_length)
            )

        if not return_dict:
            output = (logits,) + transformer_outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )

mindnlp.transformers.models.bloom.modeling_bloom.BloomForTokenClassification.__init__(config)

Initializes an instance of BloomForTokenClassification.

PARAMETER DESCRIPTION
self

The instance of the class.

config

The configuration object containing settings for the model. It must be an instance of BloomConfig class. This parameter is required.

TYPE: BloomConfig

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If the config parameter is not an instance of BloomConfig.

AttributeError

If the config object does not contain the required attributes.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def __init__(self, config: BloomConfig):
    """
    Initializes an instance of BloomForTokenClassification.

    Args:
        self: The instance of the class.
        config (BloomConfig): The configuration object containing settings for the model.
            It must be an instance of BloomConfig class.
            This parameter is required.

    Returns:
        None.

    Raises:
        TypeError: If the config parameter is not an instance of BloomConfig.
        AttributeError: If the config object does not contain the required attributes.
    """
    super().__init__(config)
    self.num_labels = config.num_labels

    self.transformer = BloomModel(config)
    if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
        classifier_dropout = config.classifier_dropout
    elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
        classifier_dropout = config.hidden_dropout
    else:
        classifier_dropout = 0.1
    self.dropout = nn.Dropout(p=classifier_dropout)
    self.classifier = nn.Linear(config.hidden_size, config.num_labels)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.bloom.modeling_bloom.BloomForTokenClassification.forward(input_ids=None, past_key_values=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **deprecated_arguments)

PARAMETER DESCRIPTION
labels

Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor, mindspore.Tensor], ...]] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    **deprecated_arguments,
) -> Union[Tuple[mindspore.Tensor], TokenClassifierOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the token classification loss. Indices should be in
            `[0, ..., config.num_labels - 1]`.
    """
    if deprecated_arguments.pop("position_ids", False) is not False:
        # `position_ids` could have been `mindspore.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None`
        warnings.warn(
            "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore"
            " passing `position_ids`.",
            FutureWarning,
        )
    if len(deprecated_arguments) > 0:
        raise ValueError(f"Got unexpected arguments: {deprecated_arguments}")

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    transformer_outputs = self.transformer(
        input_ids,
        past_key_values=past_key_values,
        attention_mask=attention_mask,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    hidden_states = transformer_outputs[0]
    hidden_states = self.dropout(hidden_states)
    logits = self.classifier(hidden_states)

    loss = None
    if labels is not None:
        batch_size, seq_length = labels.shape
        loss = F.cross_entropy(
            logits.view(batch_size * seq_length, self.num_labels), labels.view(batch_size * seq_length)
        )

    if not return_dict:
        output = (logits,) + transformer_outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return TokenClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=transformer_outputs.hidden_states,
        attentions=transformer_outputs.attentions,
    )
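
A minimal usage sketch (assuming a tiny configuration; labels are provided per token, so they share the (batch_size, seq_length) shape of input_ids):

>>> import mindspore
>>> from mindnlp.transformers.models.bloom.configuration_bloom import BloomConfig
>>> from mindnlp.transformers.models.bloom.modeling_bloom import BloomForTokenClassification
>>> config = BloomConfig(num_labels=5)
>>> model = BloomForTokenClassification(config)
>>> input_ids = mindspore.Tensor([[5, 6, 7, 8]], mindspore.int64)
>>> labels = mindspore.Tensor([[1, 0, 4, 2]], mindspore.int64)
>>> out = model(input_ids, labels=labels)
>>> out.logits.shape   # (batch_size, seq_length, num_labels)
(1, 4, 5)
>>> out.loss is not None
True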

mindnlp.transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering

Bases: BloomPreTrainedModel

This class represents a Bloom model for question answering tasks. It is a subclass of BloomPreTrainedModel, which provides the basic structure and functionality for pre-trained models. The BloomForQuestionAnswering class includes methods for the model's forward pass and inference.

ATTRIBUTE DESCRIPTION
transformer

An instance of the BloomModel class, which is responsible for the main transformer architecture of the model.

qa_outputs

A neural network layer that takes the output of the transformer and produces logits for start and end positions of the answer span.

METHOD DESCRIPTION
__init__

Initializes the BloomForQuestionAnswering instance with a given configuration.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
class BloomForQuestionAnswering(BloomPreTrainedModel):

    """
    This class represents a Bloom model for question answering tasks. It is a subclass of BloomPreTrainedModel, which provides the basic structure and functionality for pre-trained models. The
    BloomForQuestionAnswering class includes methods for the model's forward computation and inference.

    Attributes:
        transformer: An instance of the BloomModel class, which is responsible for the main transformer
            architecture of the model.
        qa_outputs: A neural network layer that takes the output of the transformer and produces logits
            for start and end positions of the answer span.

    Methods:
        __init__(self, config): Initializes the BloomForQuestionAnswering instance with a given configuration.
        forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, start_positions,
            end_positions, output_attentions, output_hidden_states, return_dict):
            Constructs the model for question answering based on the given inputs and returns the predicted start
            and end logits of the answer span, as well as other optional outputs.
    """
    def __init__(self, config):
        """
        Initializes the BloomForQuestionAnswering class.

        Args:
            self: The object instance.
            config: A dictionary containing the configuration parameters for the model.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)
        self.transformer = BloomModel(config)
        self.qa_outputs = nn.Linear(config.hidden_size, 2)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        start_positions: Optional[mindspore.Tensor] = None,
        end_positions: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, QuestionAnsweringModelOutput]:
        r"""
        Args:
            start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the start of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.
            end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the end of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.transformer(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]

        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = logits.split(1, axis=-1)
        start_logits = start_logits.squeeze(-1)
        end_logits = end_logits.squeeze(-1)

        total_loss = None
        if start_positions is not None and end_positions is not None:
            # If we are on multi-GPU, split add a dimension
            if start_positions.ndim > 1:
                start_positions = start_positions.squeeze(-1)
            if end_positions.ndim > 1:
                end_positions = end_positions.squeeze(-1)
            # sometimes the start/end positions are outside our model inputs, we ignore these terms
            ignored_index = start_logits.shape[1]
            start_positions = start_positions.clamp(0, ignored_index)
            end_positions = end_positions.clamp(0, ignored_index)

            start_loss = F.cross_entropy(start_logits, start_positions, ignore_index=ignored_index)
            end_loss = F.cross_entropy(end_logits, end_positions, ignore_index=ignored_index)
            total_loss = (start_loss + end_loss) / 2

        if not return_dict:
            output = (start_logits, end_logits) + outputs[2:]
            return ((total_loss,) + output) if total_loss is not None else output

        return QuestionAnsweringModelOutput(
            loss=total_loss,
            start_logits=start_logits,
            end_logits=end_logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.__init__(config)

Initializes the BloomForQuestionAnswering class.

PARAMETER DESCRIPTION
self

The object instance.

config

A BloomConfig instance containing the configuration parameters for the model.

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def __init__(self, config):
    """
    Initializes the BloomForQuestionAnswering class.

    Args:
        self: The object instance.
        config: A dictionary containing the configuration parameters for the model.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)
    self.transformer = BloomModel(config)
    self.qa_outputs = nn.Linear(config.hidden_size, 2)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.forward(input_ids=None, attention_mask=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
start_positions

Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length); positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

end_positions

Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length); positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/bloom/modeling_bloom.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    start_positions: Optional[mindspore.Tensor] = None,
    end_positions: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, QuestionAnsweringModelOutput]:
    r"""
    Args:
        start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the start of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.
        end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the end of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.transformer(
        input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]

    logits = self.qa_outputs(sequence_output)
    start_logits, end_logits = logits.split(1, axis=-1)
    start_logits = start_logits.squeeze(-1)
    end_logits = end_logits.squeeze(-1)

    total_loss = None
    if start_positions is not None and end_positions is not None:
        # If we are on multi-GPU, split add a dimension
        if start_positions.ndim > 1:
            start_positions = start_positions.squeeze(-1)
        if end_positions.ndim > 1:
            end_positions = end_positions.squeeze(-1)
        # sometimes the start/end positions are outside our model inputs, we ignore these terms
        ignored_index = start_logits.shape[1]
        start_positions = start_positions.clamp(0, ignored_index)
        end_positions = end_positions.clamp(0, ignored_index)

        start_loss = F.cross_entropy(start_logits, start_positions, ignore_index=ignored_index)
        end_loss = F.cross_entropy(end_logits, end_positions, ignore_index=ignored_index)
        total_loss = (start_loss + end_loss) / 2

    if not return_dict:
        output = (start_logits, end_logits) + outputs[2:]
        return ((total_loss,) + output) if total_loss is not None else output

    return QuestionAnsweringModelOutput(
        loss=total_loss,
        start_logits=start_logits,
        end_logits=end_logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
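
Continuing the hedged usage sketch from the class description above (with inputs and outputs as produced there), one simple way to turn the returned logits into an answer span is to take the argmax of each and decode the tokens in between. This is a simplification: real pipelines usually also mask out tokens that are not part of the context.

>>> start_index = int(outputs.start_logits[0].argmax())
>>> end_index = int(outputs.end_logits[0].argmax())
>>> tokenizer.decode(inputs["input_ids"][0][start_index : end_index + 1])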

mindnlp.transformers.models.bloom.tokenization_bloom_fast.BloomTokenizerFast

Bases: PreTrainedTokenizerFast

Construct a "fast" Bloom tokenizer (backed by HuggingFace's tokenizers library). Based on byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces as parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:

Example
>>> from transformers import BloomTokenizerFast
...
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")
>>> tokenizer("Hello world")["input_ids"]
[59414, 8876]
...
>>> tokenizer(" Hello world")["input_ids"]
[86153, 8876]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
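
For example, a minimal sketch of the pretokenized case (the checkpoint name is illustrative):

>>> from transformers import BloomTokenizerFast
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom", add_prefix_space=True)
>>> tokenizer(["Hello", "world"], is_split_into_words=True)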

This tokenizer inherits from [PreTrainedTokenizerFast] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str` DEFAULT: None

merges_file

Path to the merges file.

TYPE: `str` DEFAULT: None

errors

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

TYPE: `str`, *optional*, defaults to `"replace"`

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `'<unk>'` DEFAULT: '<unk>'

bos_token

The beginning of sequence token.

TYPE: `str`, *optional*, defaults to `'<s>'` DEFAULT: '<s>'

eos_token

The end of sequence token.

TYPE: `str`, *optional*, defaults to `'</s>'` DEFAULT: '</s>'

add_prefix_space

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the Bloom tokenizer detects the beginning of words by the preceding space).

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

trim_offsets

Whether or not the post-processing step should trim offsets to avoid including whitespaces.

TYPE: `bool`, *optional*, defaults to `True`

Source code in mindnlp/transformers/models/bloom/tokenization_bloom_fast.py
class BloomTokenizerFast(PreTrainedTokenizerFast):
    """
    Construct a "fast" Bloom tokenizer (backed by HuggingFace's *tokenizers* library). Based on byte-level
    Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    Example:
        ```python
        >>> from transformers import BloomTokenizerFast
        ...
        >>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")
        >>> tokenizer("Hello world")["input_ids"]
        [59414, 8876]
        ...
        >>> tokenizer(" Hello world")["input_ids"]
        [86153, 8876]
        ```

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer, but since
    the model was not pretrained this way, it might yield a decrease in performance.

    <Tip>

    When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.

    </Tip>

    This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
    refer to this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The beginning of sequence token.
        eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
            The end of sequence token.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows to treat the leading word just as any
            other word. (Bloom tokenizer detect beginning of words by the preceding space).
        trim_offsets (`bool`, *optional*, defaults to `True`):
            Whether or not the post-processing step should trim offsets to avoid including whitespaces.
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    model_input_names = ["input_ids", "attention_mask"]
    slow_tokenizer_class = None
    # No `max_model_input_sizes` as BLOOM uses ALiBi positional embeddings

    def __init__(
        self,
        vocab_file=None,
        merges_file=None,
        tokenizer_file=None,
        unk_token="<unk>",
        bos_token="<s>",
        eos_token="</s>",
        pad_token="<pad>",
        add_prefix_space=False,
        clean_up_tokenization_spaces=False,
        **kwargs,
    ):
        """
        Initialize a BloomTokenizerFast object.

        Args:
            self: The instance of the class.
            vocab_file (str): Path to a vocabulary file.
            merges_file (str): Path to a merges file.
            tokenizer_file (str): Path to a tokenizer file.
            unk_token (str): The unknown token.
            bos_token (str): The beginning of sequence token.
            eos_token (str): The end of sequence token.
            pad_token (str): The padding token.
            add_prefix_space (bool): Flag indicating whether to add prefix space.
            clean_up_tokenization_spaces (bool): Flag indicating whether to clean up tokenization spaces.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(
            vocab_file,
            merges_file,
            tokenizer_file=tokenizer_file,
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
            pad_token=pad_token,
            add_prefix_space=add_prefix_space,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )
        # TODO @ArthurZucker this can only work one way for now, to update later-on. Tests should also properly
        # check this as they were green before.
        pre_tok_state = pickle.dumps(self.backend_tokenizer.pre_tokenizer)
        decoder_state = pickle.dumps(self.backend_tokenizer.decoder)

        if add_prefix_space:
            pre_tok_state = pre_tok_state.replace(b'"add_prefix_space":false', b'"add_prefix_space": true')
            decoder_state = decoder_state.replace(b'"add_prefix_space":false', b'"add_prefix_space": true')
        self.backend_tokenizer.pre_tokenizer = pickle.loads(pre_tok_state)
        self.backend_tokenizer.decoder = pickle.loads(decoder_state)

        self.add_prefix_space = add_prefix_space

    def _batch_encode_plus(self, *args, **kwargs) -> BatchEncoding:
        """
        The `_batch_encode_plus` method is used in the `BloomTokenizerFast` class to encode a batch of inputs into a `BatchEncoding` object.

        Args:
            self: The instance of the `BloomTokenizerFast` class.

        Returns:
            A `BatchEncoding` object that contains the encoded representations of the inputs.

        Raises:
            Exception: If the `add_prefix_space` parameter is False and `is_split_into_words` is True.
                In this case, the `BloomTokenizerFast` class needs to be instantiated with `add_prefix_space=True`
                to work with pretokenized inputs.
        """
        is_split_into_words = kwargs.get("is_split_into_words", False)
        if not (self.add_prefix_space or not is_split_into_words):
            raise Exception(
                f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True to use it with"
                " pretokenized inputs."
            )

        return super()._batch_encode_plus(*args, **kwargs)

    def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
        """
        Encodes the input sequence into a batch of encoded sequences using the BloomTokenizerFast.

        Args:
            self (BloomTokenizerFast): An instance of the BloomTokenizerFast class.

        Returns:
            BatchEncoding: A batch of encoded sequences.

        Raises:
            Exception: If the BloomTokenizerFast instance is not instantiated with add_prefix_space=True
                and the input is pretokenized.

        Note:
            This method is used to encode the input sequence into a batch of encoded sequences.
            It checks if the BloomTokenizerFast instance is instantiated with add_prefix_space=True and the input is not
            pretokenized. If not, it raises an exception.

        Example:
            ```python
            >>> tokenizer = BloomTokenizerFast(add_prefix_space=True)
            >>> encoding = tokenizer._encode_plus(input_sequence)
            ```
        """
        is_split_into_words = kwargs.get("is_split_into_words", False)

        if not (self.add_prefix_space or not is_split_into_words):
            raise Exception(
                f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True to use it with"
                " pretokenized inputs."
            )

        return super()._encode_plus(*args, **kwargs)

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the tokenizer's vocabulary to a specified directory.

        Args:
            self (BloomTokenizerFast): An instance of the BloomTokenizerFast class.
            save_directory (str): The directory where the vocabulary files will be saved.
            filename_prefix (Optional[str], optional): A prefix to prepend to the vocabulary file names. Defaults to None.

        Returns:
            Tuple[str]: A tuple of file names that were saved in the specified directory.

        Raises:
            None

        The 'save_vocabulary' method saves the tokenizer's vocabulary to the specified 'save_directory'.
        The vocabulary files are saved using the 'filename_prefix' if provided, or a default name if not specified.

        Example:
            ```python
            >>> tokenizer = BloomTokenizerFast()
            >>> tokenizer.save_vocabulary('/path/to/save', 'vocab_')
            ```
            This will save the tokenizer's vocabulary files in the '/path/to/save' directory with file names
            prefixed by 'vocab_'. The method returns a tuple of file names that were saved.
        """
        files = self._tokenizer.model.save(save_directory, name=filename_prefix)
        return tuple(files)

    @property
    # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.default_chat_template
    def default_chat_template(self):
        """
        A simple chat template that ignores role information and just concatenates messages with EOS tokens.
        """
        logger.warning_once(
            "\nNo chat template is defined for this tokenizer - using the default template "
            f"for the {self.__class__.__name__} class. If the default is not appropriate for "
            "your model, please set `tokenizer.chat_template` to an appropriate template. "
            "See https://hf-mirror.com/docs/transformers/main/chat_templating for more information.\n"
        )
        return "{% for message in messages %}" "{{ message.content }}{{ eos_token }}" "{% endfor %}"

mindnlp.transformers.models.bloom.tokenization_bloom_fast.BloomTokenizerFast.default_chat_template property

A simple chat template that ignores role information and just concatenates messages with EOS tokens.
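
A hedged sketch of what this default template produces, assuming apply_chat_template is inherited from the tokenizer base class and the default eos_token of </s> (the messages are illustrative):

>>> messages = [
...     {"role": "user", "content": "Hello"},
...     {"role": "assistant", "content": "Hi there"},
... ]
>>> tokenizer.apply_chat_template(messages, tokenize=False)
'Hello</s>Hi there</s>'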

mindnlp.transformers.models.bloom.tokenization_bloom_fast.BloomTokenizerFast.__init__(vocab_file=None, merges_file=None, tokenizer_file=None, unk_token='<unk>', bos_token='<s>', eos_token='</s>', pad_token='<pad>', add_prefix_space=False, clean_up_tokenization_spaces=False, **kwargs)

Initialize a BloomTokenizerFast object.

PARAMETER DESCRIPTION
self

The instance of the class.

vocab_file

Path to a vocabulary file.

TYPE: str DEFAULT: None

merges_file

Path to a merges file.

TYPE: str DEFAULT: None

tokenizer_file

Path to a tokenizer file.

TYPE: str DEFAULT: None

unk_token

The unknown token.

TYPE: str DEFAULT: '<unk>'

bos_token

The beginning of sequence token.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sequence token.

TYPE: str DEFAULT: '</s>'

pad_token

The padding token.

TYPE: str DEFAULT: '<pad>'

add_prefix_space

Flag indicating whether to add prefix space.

TYPE: bool DEFAULT: False

clean_up_tokenization_spaces

Flag indicating whether to clean up tokenization spaces.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/bloom/tokenization_bloom_fast.py
def __init__(
    self,
    vocab_file=None,
    merges_file=None,
    tokenizer_file=None,
    unk_token="<unk>",
    bos_token="<s>",
    eos_token="</s>",
    pad_token="<pad>",
    add_prefix_space=False,
    clean_up_tokenization_spaces=False,
    **kwargs,
):
    """
    Initialize a BloomTokenizerFast object.

    Args:
        self: The instance of the class.
        vocab_file (str): Path to a vocabulary file.
        merges_file (str): Path to a merges file.
        tokenizer_file (str): Path to a tokenizer file.
        unk_token (str): The unknown token.
        bos_token (str): The beginning of sequence token.
        eos_token (str): The end of sequence token.
        pad_token (str): The padding token.
        add_prefix_space (bool): Flag indicating whether to add prefix space.
        clean_up_tokenization_spaces (bool): Flag indicating whether to clean up tokenization spaces.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(
        vocab_file,
        merges_file,
        tokenizer_file=tokenizer_file,
        unk_token=unk_token,
        bos_token=bos_token,
        eos_token=eos_token,
        pad_token=pad_token,
        add_prefix_space=add_prefix_space,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
        **kwargs,
    )
    # TODO @ArthurZucker this can only work one way for now, to update later-on. Tests should also properly
    # check this as they were green before.
    pre_tok_state = pickle.dumps(self.backend_tokenizer.pre_tokenizer)
    decoder_state = pickle.dumps(self.backend_tokenizer.decoder)

    if add_prefix_space:
        pre_tok_state = pre_tok_state.replace(b'"add_prefix_space":false', b'"add_prefix_space": true')
        decoder_state = decoder_state.replace(b'"add_prefix_space":false', b'"add_prefix_space": true')
    self.backend_tokenizer.pre_tokenizer = pickle.loads(pre_tok_state)
    self.backend_tokenizer.decoder = pickle.loads(decoder_state)

    self.add_prefix_space = add_prefix_space

mindnlp.transformers.models.bloom.tokenization_bloom_fast.BloomTokenizerFast.save_vocabulary(save_directory, filename_prefix=None)

Save the tokenizer's vocabulary to a specified directory.

PARAMETER DESCRIPTION
self

An instance of the BloomTokenizerFast class.

TYPE: BloomTokenizerFast

save_directory

The directory where the vocabulary files will be saved.

TYPE: str

filename_prefix

A prefix to prepend to the vocabulary file names. Defaults to None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple of file names that were saved in the specified directory.

The 'save_vocabulary' method saves the tokenizer's vocabulary to the specified 'save_directory'. The vocabulary files are saved using the 'filename_prefix' if provided, or a default name if not specified.

Example

>>> tokenizer = BloomTokenizerFast()
>>> tokenizer.save_vocabulary('/path/to/save', 'vocab_')
This will save the tokenizer's vocabulary files in the '/path/to/save' directory with file names prefixed by 'vocab_'. The method returns a tuple of file names that were saved.

Source code in mindnlp/transformers/models/bloom/tokenization_bloom_fast.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Save the tokenizer's vocabulary to a specified directory.

    Args:
        self (BloomTokenizerFast): An instance of the BloomTokenizerFast class.
        save_directory (str): The directory where the vocabulary files will be saved.
        filename_prefix (Optional[str], optional): A prefix to prepend to the vocabulary file names. Defaults to None.

    Returns:
        Tuple[str]: A tuple of file names that were saved in the specified directory.

    Raises:
        None

    The 'save_vocabulary' method saves the tokenizer's vocabulary to the specified 'save_directory'.
    The vocabulary files are saved using the 'filename_prefix' if provided, or a default name if not specified.

    Example:
        ```python
        >>> tokenizer = BloomTokenizerFast()
        >>> tokenizer.save_vocabulary('/path/to/save', 'vocab_')
        ```
        This will save the tokenizer's vocabulary files in the '/path/to/save' directory with file names
        prefixed by 'vocab_'. The method returns a tuple of file names that were saved.
    """
    files = self._tokenizer.model.save(save_directory, name=filename_prefix)
    return tuple(files)